Using PCI Fibre Channel Boards
in SGI Workstations

By Chris Kalisiak

Last Change: 14/Mar/2004

Introduction

This is a collection of notes related to setting up and using 1Gbit Fibre Channel in Silicon Graphics workstations that have PCI slots. This includes O2, Octane/Octane2 (with a PCI shoebox), and Origin 200 systems. Origin 2000 / Onyx2 systems with PCI slot options should be just as applicable, as they are the same vintage.

I don't know anything about the Fuel and other newer systems, but if you can afford to acquire a Fuel, you should probably be using newer 2Gbit hardware with those systems anyway.


Basics

Fibre Channel is somewhat like SCSI on steroids -- 1Gbit FC offers data rates that are a little faster than Ultra2 SCSI, and even uses the same SCSI commands to move data around, but allows for tremendous flexibility and scalability. SCSI is a parallel bus topology, and is typically direct-attach, whereas Fibre Channel is a loop topology. FC devices can be direct-attach, or the devices can also interconnect through a hub or a switch.

One major point to be aware of: because FC is a loop topology, there is no such thing as a "Fibre Channel terminator". SCSI is a parallel bus topology, and as such, the ends of the parallel lines must be terminated. But in FC, the "transmit" of one device connects to the "receive" of the next device in the loop, even if the next device in the loop is itself.

(Figure: example network using Fibre Channel)

Fibre Channel disk drives are typically mounted in a FC array, and bare FC drives are typically easy to find inexpensively. These drives have a 40-pin SCA-II connector, similar to the 80-pin SCA connector found on the SCSI drives used in SGI boxes. In order to attach these FC drives to a PCI card, you need what is referred to as a "T-Card". I'm not sure where that name came from originally, but I've read references to it being a shortened version of "test card". And, similar again to SCSI, a Fibre Channel T-Card performs the same function as a SCSI 80-pin to 68-pin adapter, breaking out the pins to expose power, configuration jumpers, and data signals. Some T-Card designs (and also NetApp filer arrays, from what I have heard) require a "terminator", which is really just a loopback connector. The transmit pins in the loopback are shorted to the receive pins.

Fibre Channel devices expose one of two interface types -- copper and optical. There are a few different types of copper, and a few different types of optical. In the case of some PCI cards and most hubs/switches, the interfaces are pluggable modules called GBICs (GigaBit Interface Converters), which allow the interface type to be swapped.

The only real difference between copper and optical is the media itself. They both operate at the same 1Gbit/sec. The effective bandwidth is about 100MB/sec in one direction. If your application can make use of full-duplex (typically not the case in video editing, but more often in a datacenter), then you can get on the order of 125-130MB/sec bidirectional.
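(If I have the numbers right, the arithmetic behind that goes roughly like this: 1Gbit FC actually signals at about 1.0625Gbit/sec on the wire, and its 8b/10b encoding means only 8 of every 10 bits carry data, so 1.0625 x 8/10 / 8 bits-per-byte works out to about 106MB/sec of raw payload per direction, before protocol overhead takes its cut.)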

The choice of copper or optical is more a function of what you want to connect to. Optical is good for housing drives far from the machine, for, say, audible noise reasons -- although with most SGI boxes other than an O2, the computer itself would be louder than the drives, so audible noise probably doesn't mean much. The bright-orange optical cables certainly do add to the "cool" factor, though, as compared to black copper cables.

Copper is a little less expensive to work with, because the PCI cards are a little cheaper. Copper is really only good for up to 30m, and at that length, you're looking at a very thick and heavy cable. Copper cables up to 10m are fairly light, and not all that expensive. Cables in the range of 2-3m are very easy to acquire. Fibre Channel arrays are usually copper-based.

Optical cables aren't that much more fragile than copper, unless you plan on stepping on or running over the cables a lot. Copper Fibre Channel cables have a bending radius limitation, just as optical cables do, so treat all FC cabling with some amount of care.

"Short-wave", or "multi-mode" optical, which is what 99.99% of all optical hardware is, is good for relatively long distances. I have a 100m spool of cable that I use on occasion to run a Fibre Channel link to my next door neighbor's house. Cables on the order of 10m-30m are fairly inexpensive and easy to find. The optical connector that is used in 1Gbit FC is called "SC", consisting of a pair of 1cm-square connectors ganged together, one for transmit and one for receive.

Avoid yellow optical cables; they're "single-mode". What you want are the orange "multi-mode" cables. Single-mode is for travelling extremely long distances, and is very difficult to work with. You need attenuators and other gadgets to be able to use single-mode, so you don't burn out the receivers at both ends of the cable. Single-mode and multi-mode GBICs all look the same, so be careful. You want 62.5/125 or 50/125 optical cables and GBICs, and avoid 9/125 gear.

There are two different copper connectors on the market: one is HSSDC, and the other is DB9.

The HSSDC connector looks like a flattened, ruggedized RJ45 with a molded body. QLogic, LSI Logic, and a couple of other vendors use this connector on their PCI cards. Sometimes hubs and switches will have fixed HSSDC connectors as well.

Then there are DB9 connectors, which look like a serial port, typically with a white plastic plug in one of the pins, so you can't go plugging a DB9 male serial cable into the port. SGI's XIO card is DB9, and you can also find this connector on Emulex PCI cards, as well as most Fibre Channel arrays.

The advantage of the DB9 connector found on arrays (and also some interface cards and some T-Cards), and the reason why arrays use it, is that it typically carries power for a "Media Interface Adapter", or MIA, allowing the copper interface to be converted to optical. It's easy to convert from a DB9 copper interface to optical, since the DB9 connector supplies power, but it's not possible to convert an SC optical interface to copper unless you have an external power supply.


Silicon Graphics, and supported options

Under IRIX, as long as you have the right card (QLogic QLA2200), Fibre Channel is supported just like SCSI, and the devices usually just show up in the disk manager (unless you're in fabric mode, but that's beyond the scope of this discussion). While there have been discussions about what specific revisions of QLA2200 work and what revisions don't, I have not yet found a QLA2200 that doesn't work in any of my systems. It has been reported that the "-13" revision, resold by Dell for use with their PowerVaults, doesn't work, but I haven't seen that problem.

I have tested, in an O2 and an Octane with a shoebox, the revisions "-03" (optical), "-05" (copper), "-07" (optical), "-13" (copper) and "-16" (optical). In all cases, IRIX 6.5.20f (which is what I had running on my systems at the time of the test) is able to mount the filesystem on the drive attached to the QLA2200's.

All QLA2200 PCI cards are 64-bit. If you're using a QLA2200 in an SGI box, then 64-bit is a good thing. The QLA2200F is an optical card, and the QLA2200 proper is a copper card. There may be a /33 or /66 suffix, which indicates the supported PCI bus speed of 33MHz or 66MHz, respectively. If neither is mentioned, then it's probably an earlier card that only supports 33MHz. If the card will be used in an SGI box (O2, Octane/2, Origin 200[0]), then don't worry about whether it's a 33 or a 66: the slots in this generation of SGI boxes are all 33MHz, and 66MHz cards are backward compatible with 33MHz slots.

The QLA2300 family of PCI cards is beyond the scope of this discussion, as they are 2Gbit cards. I have no experience with working with 2Gbit hardware in an SGI machine, so I have no idea if any of the information in this document is applicable to a QLA2300.

There are also XIO card options that can be used in Octanes or Origins, but I don't know much about them. I do know that there are different flavors of cards for Octanes and for Origins, so be careful about what you order. With the Octane, the XIO cards are screwed down to a sled, whereas with Origins, the XIO cards each slide into slots, and latch into place by pushing a tab. Other than that difference, the cards are the same.


Q) Now that I have everything connected, what do I do?

A) Start up your system, and watch for a variation on "/hw/module/1/slot/MotherBoard/node/xtalk/8/pci/5/scsi_ctlr/0: Firmware version: 2.1.38: TP. /hw/module/1/slot/MotherBoard/node/xtalk/8/pci/5/scsi_ctlr/0: 8 FCP targets.". This indicates that the IRIX-specific firmware has been downloaded to the QLA2200.

Once the system has finished booting, and you have logged in, execute a "hinv -v".

Under normal circumstances, that's all you should have to do, assuming you have a recent version of IRIX. I believe support for Fibre Channel was mature as of IRIX 6.5.9.

Look at the hinv output. In an Octane with QLA2200's in a shoebox and drives attached, this is what it should look like:

Integral SCSI controller 2: Version Fibre Channel QL2200A, 33 MHz PCI 
Disk drive: unit 0 on SCSI controller 2 (unit 0) 
Disk drive: unit 1 on SCSI controller 2 (unit 1) 
Disk drive: unit 2 on SCSI controller 2 (unit 2) 
Disk drive: unit 3 on SCSI controller 2 (unit 3) 
Disk drive: unit 4 on SCSI controller 2 (unit 4) 
Disk drive: unit 5 on SCSI controller 2 (unit 5) 
Disk drive: unit 6 on SCSI controller 2 (unit 6) 
Disk drive: unit 7 on SCSI controller 2 (unit 7) 
Integral SCSI controller 4: Version Fibre Channel QL2200A, 33 MHz PCI 
Disk drive: unit 22 on SCSI controller 4 (unit 22) 
Disk drive: unit 25 on SCSI controller 4 (unit 25) 
Disk drive: unit 103 on SCSI controller 4 (unit 103) 
Disk drive: unit 105 on SCSI controller 4 (unit 105) 
Disk drive: unit 110 on SCSI controller 4 (unit 110) 
Disk drive: unit 116 on SCSI controller 4 (unit 116)

Note that the drive numbers attached to controller 2 are sequential starting with zero, while the drives attached to controller 4 are somewhat random. This is because the eight are in a commercial array with predetermined ID's, and the six are in a configuration known as a JBOD, or "Just a Bunch of Disks", using my T-Cards with somewhat random drive ID jumpers.

The devices can also be found by executing an "ls /dev/scsi". Some variation on /dev/scsi/sc3d1l0 is what it would look like when direct attached to a PCI card. If you're working with a Fibre Channel switch, then some variation on /dev/scsi/200d00a0b80c13ec/lun0/c3p1 is what it would look like when fabric attached. The large string in the middle is the WWN, or "World Wide Name", of the device you're talking to, which is basically a MAC address -- a globally unique ID.

Note that direct-attached drives can be manipulated with the disk manager GUI under IRIX, but fabric-attached drives cannot. You have to use CLI tools to make and mount filesystems with fabric-attached drives.
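As a rough sketch of that CLI sequence (the device name and mount point below are made up for illustration -- on a fabric, the real paths will contain the WWN, in the same style as the /dev/scsi path above):

fx -x "dksc(3,1,0)"        # label the new drive as an option disk (interactive)
mkfs /dev/rdsk/dks3d1s7    # make an XFS filesystem on partition 7
mkdir /fcdisk
mount /dev/dsk/dks3d1s7 /fcdisk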


Q) The QLA2200 isn't seen by my system. What's wrong?

A) Have you tried an "autoconfigure -f" with the card installed? My Origin 200 didn't recognize the QLA2200 that I installed until I executed an autoconfigure, and now it's recognized at boot time.
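For reference, a minimal sketch of that sequence, run as root (the grep at the end is just a convenience for picking the controller line out of the hinv output):

autoconfigure -f           # flag the kernel to be reconfigured with the new hardware
reboot                     # the reconfiguration happens during the reboot
hinv -v | grep -i fibre    # the QL2200 controller should now show up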


Q) What is the main difference (apart from the bus) between the PCI and XIO fibre channel cards and the advantage of one over the other?

A) As far as I can tell, the main difference really is in cost, availability, and slot consumption.

If you can find an XIO FC card, be prepared to pay about 7-10x the cost of a QLA2200.

The PCI "shoebox", available in different forms for the Octane and Origin 2000, has three PCI slots, which can be filled with single-channel QLA2200's.

There are also PCI "shoehorns", which are single-slot XIO-to-PCI adapters, used in Origin systems with XIO slots. The QLA2200 may also be used in shoehorns.

The XIO subsystem has four slots, some of which are occupied by graphics cards. The XIO FC card has two channels and occupies one XIO slot.

Also, the QLA2200 is available in both optical and HSSDC copper, whereas the XIO card is only available in DB9 copper. The latter is not all bad, though, since you can attach a "Media Interface Adapter" to the XIO card's ports to expose optical connectors.


Q) Can you use RJ45/Cat5 wire and DB9 serial connectors to make Fibre Channel cables?

A) Well, basically, that works about as well as using Cat3 or Cat4 cable to run 100Base-TX. For really short distances, you can get by. But anything longer than a couple meters? It might work by accident, best case, or it might sort of work at a lower speed because of all of the errors and re-transmissions, or, worst case, it won't work at all, and you won't know why.

Cat5e UTP is technically not the correct impedance for Fibre Channel -- Cat5e UTP is 100 Ohm, and FC is 150 Ohm, because it's shielded. If you use Cat5e STP, then it's the correct impedance, but STP is much more expensive than UTP, and then you have to buy the DB9 connectors. The costs get to the point where you may as well just go with premade FC cable and do it right.

I would guess that you can use Cat5e UTP for runs no longer than 7-8m, assuming you haven't untwisted too much of the transmit and receive lines and you solder the Cat5e wire directly to the pins of a DB9 male connector. Cat6 might be able to go to 10-15m, but I don't know for sure, since I haven't worked with Cat6. Don't even bother trying the commercially available RJ45-DB9 converters -- I tried using a tiny serial gender changer once, a DB9-DB9 female-to-female, but that didn't work at all. Apparently the impedances inside the gender changer are way off. So we designed our own Fibre Channel-compatible gender changer, built into a PCI slot cover, to make for a cleaner install. See my webstore for a picture of what it looks like.

The T-Cards I sell do use standard Cat5e patch cables, but there are a couple of reasons why that works. One reason is that the T-Cards have RJ45 sockets, and the traces going to and from those sockets are laid out for the correct impedance, based on the PCB thickness, the trace width, and the distance from the ground plane on the other side of the PCB. Another reason is that I suggest you use short patch cables between drives, 1m or less in length.


Q) Is FDDI compatible with Fibre Channel?

A) No. You can't connect a Fibre Channel array to a FDDI network. FDDI might have been (and probably was, but my knowledge of FC doesn't go that far back) the ancestor to Fibre Channel, but they are not compatible.

The only connector that FDDI shares with FC is the SC optical connector found in later FDDI implementations. (As an aside, 62.5/125 SC-SC FDDI cables can be used with 1Gbit optical Fibre Channel, as long as the cable lengths are relatively short. I forget what the restrictions are, but I think it's good for 100m. The appropriate cable type for 1Gbit optical FC is 50/125 SC-SC, which is good for longer lengths.)

There is no true "twisted pair" cable in Fibre Channel, even though there was one for copper FDDI. The copper FC cables are constructed of a shielded cable with four wires, designed with a nominal impedance of 150 Ohms. Standard twisted pair for ethernet is 100 Ohms. The connectors involved with copper FC cabling are DB9 male or HSSDC, not RJ45.

At the physical layer, the speeds the two networks operate at are completely different. Very early implementations of Fibre Channel were 266MBit/sec (but from what I've seen, only Sun used that speed), whereas all modern Fibre Channel implementations are 1Gbit/sec or 2Gbit/sec. FDDI was exclusively 100MBit/sec.


Q) Can you connect both an O2 and an Octane to the same enclosure?

A) Yes, this will work, but there's one thing you need to be careful of. Modern operating systems expect to have complete control over disk volumes. They get a bit cranky (to say the least) if someone else twiddles some bits in their filesystems without their knowledge. So while you can connect multiple systems to a storage array (through a Fibre Channel hub, for example), it's best to have the filesystem mounted only on one computer at a time.

You should just be able to "umount" from the O2 after capturing and editing and then "mount" on the Octane to do the compression, and vice versa. Another option would be to use 'mount -o ro' to mount the filesystem read-only on one system, with the other mounting the filesystem normally with read-write capabilities.
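As a sketch of that hand-off (the device name and mount point here are illustrative; use whatever your filesystem actually lives on):

# On the O2, when capturing and editing are done:
umount /mnt/fc

# Then on the Octane, to do the compression:
mount /dev/dsk/dks2d1s7 /mnt/fc

# Or mount it read-only on the second machine while the first keeps read-write access:
mount -o ro /dev/dsk/dks2d1s7 /mnt/fc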

There are software packages out there that allow multiple computers to read and write to storage simultaneously, but they're commercial apps and there's not really a lot of support for IRIX that I know of. I don't recall seeing any such package being available from SGI, although "CXFS Infinite Storage" might be an option.

One such application is the IBM/Tivoli product SANergy. The basic idea of this software is that it has a server, the Meta-Data Controller, which owns the storage. When a client wants to read or write data, it asks the MDC for permission over an ethernet link, the MDC responds over ethernet with vectors for where to find the storage the client is looking for, and then the client does its transfers over Fibre Channel. A neat idea in theory, but it really only works well with large files; every file access has its own request/response/access exchange, so working with small files is slower than ethernet.

(If someone knows specific details to the contrary -- i.e. how to share volumes across multiple FC-based systems running IRIX with a non-commercial app -- please speak up!)

As far as other devices are concerned, such as CD-ROM drives/jukeboxes and tape drives/libraries, there's not a problem at all.

Tape drives/libraries have the same problem as disk drives, of course -- you need to be aware of what computer "owns" the drive at any given time, so there's no clobbering each other.

I have a Fibre Channel SAN with two Fibre Channel to SCSI bridges. One bridge has a Plextor 12x4x32x CD-RW, a Pioneer DVD-ROM drive, and a 47GB UltraWide drive for temporary storage. The other bridge has a Quantum DLT4700 mini-library that I use to back up the servers. Directly connected to the Fibre Channel switch is a 36GB Barracuda that I use with Norton Ghost to do backups of the PC's.

I can burn CD's or watch movies from any computer in the house. Or even next door, if I run the 100m cable out the window to the neighbor's computer.

The trick is always to keep track of who is writing to what device, and make sure nobody else tries to do writes at the same time.


Q) Can a Fibre Channel drive be connected while the computer is running?

A) Yes. The steps are listed below, and a sample session is sketched after the list.

1) Attach the disk to the machine.
2) Execute a ' hinv -v ' to see what controller number was assigned to the FC card.
3) Run as root: ' scsiha -p X ' where "X" is the controller ID.
4) Execute another ' hinv -v ' to see if the disk is recognized.
5) Then run ' ioconfig -f /hw ' as root to complete the SCSI initialization.
6) Use the filesystems manager to mount the filesystems.
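
Putting those steps together, a hypothetical session might look like this (here the new card happens to be controller 3; take the real number from your own hinv output):

hinv -v              # note the controller number assigned to the FC card (say, 3)
scsiha -p 3          # as root: probe the loop on controller 3
hinv -v              # the newly attached disk should now be listed on controller 3
ioconfig -f /hw      # as root: finish the SCSI initialization / create device nodes
# then mount the filesystem with the filesystems manager, or with mount(1M)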


Q) What is "dual-loop", and what does "split-plane" mean?

A) Fibre Channel hard drives are, inherently, dual-loop. What this means is that each FC hard drive has two ports on it, a "Port A" and a "Port B". The ports operate independently, and can each sustain 100MB/sec in each direction, for a maximum bandwidth across both ports of 200MB/sec. Dual-porting is typically used for redundancy, but what you could do is dual port the drives across two PCI cards to increase the maximum available throughput. One of the T-Card products that I offer is a dual-loop T-Card and interface.

As an aside, the 100MB/sec and 200MB/sec speeds are interface ratings. Just because a drive supports a 200MB/sec interface doesn't mean that each drive in the loop can move 200MB/sec from the platters to the interface. There are no commercially available drives that can move data off the media that fast.

So, to answer the second half of the question, a "split-plane" is when you logically divide a JBOD (Just a Bunch Of Disks, meaning no hardware RAID support) array into two groups of drives, to maximize the bandwidth. The intent of "splitting the plane" is to allow drives to run at their full speed, with fewer drives per port, to spread the wealth, as it were. Typically, in an array of 10000RPM drives, the 100MB/sec maximum bandwidth will be saturated with four to six drives. Adding more drives doesn't really help improve performance, it just adds more storage. By splitting the plane, and associating some of the drives with one PCI card and the rest of the drives with another PCI card, then both groups can use the full 100MB/sec bandwidth per port.

The only catch is the operating system will see all of the drives twice, once per controller. I am not sure how to deal with this situation in Windows, but this is an IRIX FAQ anyway, so it doesn't matter. What you should be able to do in IRIX is to mount the first four drives attached to controller A, for example, and then, continuing the example, mount the second four drives attached to controller B. In the disk manager, you should be able to set up a striped array with the 4+4 and create a single partition spread across both channels.
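From the command line, the same thing can be done with xlv_make, much like the sample session at the end of this page. A rough sketch with made-up drive names -- here drives 1-4 are reached through controller 3 and drives 5-8 through controller 4, and each physical drive appears in the stripe exactly once:

xlv_make                 # the following lines are typed at the xlv_make> prompt
vol split
data
ve -stripe dks3d1s7 dks4d5s7 dks3d2s7 dks4d6s7 dks3d3s7 dks4d7s7 dks3d4s7 dks4d8s7
end
exit
mkfs /dev/rxlv/split     # back at the shell: make an XFS filesystem on the volume
mount /dev/xlv/split /test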

Be careful to associate each of the drives with only one controller, otherwise your filesystem will be corrupted.


Q) What's the best way to do disk performance testing?

A) I use the following commands to do performance testing at the filesystem level. This isn't necessarily the best way to test performance if, for example, you plan on comparing an SGI box to a PC: the diskperf benchmark operates on filesystems, so you're exercising the operating system as well as the Fibre Channel drivers. However, if the results will be compared with those of other SGI users, then this is as good a test as any.

First, so everyone knows what kinds of devices you're working with, so they can compare properly:

scsicontrol -i /dev/scsi/[disk name, such as sc0d0l0]

Then, run the benchmark itself:

diskperf -W -D -t10 -n "[comment]" -c500m /[filesystem path]/testfile 
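
(If I remember the switches correctly, -W permits the destructive write tests, -D requests direct I/O, -t10 runs each test pass for ten seconds, -n attaches a comment to the report, and -c500m sizes the test file at roughly 500MB; the "Parameters" and "XFS file size" lines in the sample output at the end of this page reflect those settings.)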


Q) According to the documentation for SGI Clariion FC disk enclosures, only specific disk sizes are supported. Is this correct? Is there any limitation on the size of FC-AL disks you can put in these? I would have assumed that if you are not using a RAID or software-striped configuration, then you would be able to just add disks of any size and mount them as XFS option disks (as per a simple SCSI chain). Is SGI just trying to sell more hardware?

A) The drive size restrictions you are seeing are with regard to the Origin Vault itself, where the storage processors in the vault are responsible for the RAID. They only understand certain drives, depending on the version of software in the RAID box. The software I had for mine was limited to 9GB Barracudas (paperweights, as far as performance is concerned), so I unloaded the vault to a friend of mine just to get it out of the house. If you can get newer software, allowing 36GB drives, then you'll be OK. But keep in mind that Clariion RAID boxes need drives that are formatted for a 520-byte sector size, which is beyond the scope of this discussion.

If you just attach drives directly to your SGI box through, say, a PC enclosure full of disk drives attached to a QLA2200 in an Octane, without having any Clariion hardware in the middle, then you can connect just about any FC drive to your system.

Note that there are Clariion boxes that don't have SP's in them, so the drives are in a JBOD configuration. As such, the boxes just provide power, cooling, and connectivity to the drives, and don't have any intelligence in them. These are the boxes into which you can put just about anything you want.


My web store is: http://ckcomputersystems.com/ckcs/catalog/default.php


Sample output from an Origin 200

                           Starting up the system...

IRIX Release 6.5 IP27 Version 07141607 System V - 64 Bit
Copyright 1987-2003 Silicon Graphics, Inc.
All Rights Reserved.

QLFC: running as interrupt thread.
QLFC: using spinlocks.
Setting rbaud to 19200
/hw/module/1/slot/MotherBoard/node/xtalk/8/pci/5/scsi_ctlr/0: Firmware version:
2.1.38: TP.
/hw/module/1/slot/MotherBoard/node/xtalk/8/pci/5/scsi_ctlr/0: 8 FCP targets.
/hw/module/1/slot/MotherBoard/node/xtalk/8/pci/6/scsi_ctlr/0: Firmware version:
2.1.38: TP.
/hw/module/1/slot/MotherBoard/node/xtalk/8/pci/6/scsi_ctlr/0: 6 FCP targets.
/hw/module/1/slot/MotherBoard/node/xtalk/8/pci/7/scsi_ctlr/0: Firmware version:
2.1.38: TP.
The system is coming up.

IRIS console login: root
IRIX Release 6.5 IP27 IRIS
Copyright 1987-2003 Silicon Graphics, Inc. All Rights Reserved.
Last login: Mon Feb  9 13:43:37 PST 2004 by UNKNOWN@192.168.0.67
TERM = (vt100)
IRIS 1# xlv_make
xlv_make> vol array
array
xlv_make> data
array.data
xlv_make> ve -stripe dks2d0s7  dks2d1s7  dks2d2s7 dks2d3s7 dks2d4s7 dks2d5s7 dks
2d6s7 dks2d7s7
array.data.0.0
xlv_make> end
Object specification completed
xlv_make> exit
Newly created objects will be written to disk.
Is this what you want?(yes)  yes
Invoking xlv_assemble

IRIS 3# xlv_make
xlv_make> vol cards
cards
xlv_make> data
cards.data
xlv_make> ve -stripe dks3d103s7 dks3d105s7 dks3d110s7 dks3d22s7 dks3d116s7 dks3d
25s7
cards.data.0.0
xlv_make> end
Object specification completed
xlv_make> exit
Newly created objects will be written to disk.
Is this what you want?(yes)  yes
Invoking xlv_assemble

IRIS 4# mkfs /dev/rxlv/array
meta-data=/dev/rxlv/array        isize=256    agcount=34, agsize=1048560 blks
data     =                       bsize=4096   blocks=34906112, imaxpct=25
         =                       sunit=16     swidth=128 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=4272
realtime =none                   extsz=65536  blocks=0, rtextents=0
IRIS 5# mkfs /dev/rxlv/cards
meta-data=/dev/rxlv/cards        isize=256    agcount=25, agsize=1048576 blks
data     =                       bsize=4096   blocks=26179584, imaxpct=25
         =                       sunit=16     swidth=96 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=3200
realtime =none                   extsz=65536  blocks=0, rtextents=0
IRIS 6# mount /dev/xlv/array /test
IRIS 7# mount /dev/xlv/cards /test2
IRIS 8# diskperf -W -D -t10 -n"Origin200 8-drive array" -c500 /test/testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : Origin200 8-drive array
# Test date     : Mon Feb  9 14:02:47 2004
# Test machine  : IRIX64 IRIS 6.5 07141607 IP27
# Test type     : XFS striped data subvolume
# Test path     : /test/testfile
# Disk striping : group=8  unit=128
# Request sizes : min=524288 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4194304 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
     524288   81.26   86.41   60.88   47.95   73.27   60.74
    1048576   88.80   89.49   76.69   63.74   85.26   75.16
    2097152   90.51   91.22   88.76   85.50   88.15   88.65
    4194304   91.43   91.92    0.00    0.00   89.38   83.29
IRIS 9# diskperf -W -D -t10 -n"Origin200 6-drive T-Cards" -c500 /test2/testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : Origin200 6-drive T-Cards
# Test date     : Mon Feb  9 14:06:25 2004
# Test machine  : IRIX64 IRIS 6.5 07141607 IP27
# Test type     : XFS striped data subvolume
# Test path     : /test2/testfile
# Disk striping : group=6  unit=128
# Request sizes : min=393216 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 3932160 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
     393216   76.02   83.59   52.45   37.47   52.32   46.67
     786432   88.31   86.37   77.04   54.12   71.83   65.37
    1572864   91.86   87.59   89.10   79.67   83.53   84.48
    3145728   93.48   87.54    0.00    0.00   87.77   78.31
IRIS 10# umount /test
IRIS 11# umount /test2
IRIS 12# xlv_mgr
xlv_mgr> delete object array
Object array deleted.

xlv_mgr> delete object cards
Object cards deleted.

xlv_mgr> exit
IRIS 13# xlv_make
xlv_make> vol stripe
stripe
xlv_make> data
stripe.data
xlv_make> ve -stripe dks2d0s7 dks2d5s7 dks3d110s7 dks2d1s7 dks2d6s7 dks3d116s7 dks2d2s7 dks2d7s7 dks3d22s7 dks2d3s7 dks3d103s7 dks3d25s7 dks2d4s7 dks3d105s7
stripe.data.0.0
xlv_make> end
Object specification completed
xlv_make> exit
Newly created objects will be written to disk.
Is this what you want?(yes)  yes
Invoking xlv_assemble

IRIS 14# mkfs /dev/rxlv/stripe
meta-data=/dev/rxlv/stripe       isize=256    agcount=59, agsize=1048576 blks
data     =                       bsize=4096   blocks=61085696, imaxpct=25
         =                       sunit=16     swidth=224 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=7456
realtime =none                   extsz=65536  blocks=0, rtextents=0
IRIS 15# mount /dev/xlv/stripe /test
IRIS 16# diskperf -W -D -t10 -n"Origin200 14-drive stripe" -c500m /test/testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : Origin200 14-drive stripe
# Test date     : Mon Feb  9 14:14:52 2004
# Test machine  : IRIX64 IRIS 6.5 07141607 IP27
# Test type     : XFS striped data subvolume
# Test path     : /test/testfile
# Disk striping : group=14  unit=128
# Request sizes : min=917504 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 524812288 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
     917504  129.97  138.15  110.93   76.14  110.38   75.36
    1835008  139.38  144.11  133.78  103.49  133.07  103.35
    3670016  141.66  145.71  138.82  121.03  138.86  121.33
IRIS 17#

