Topics: AIX, Storage, System Administration

Allocating shared storage to VIOS clients

The following is a procedure to add shared storage to a clustered, virtualized environment. It assumes the following: you have a PowerHA cluster on two nodes, nodeA and nodeB; each node is on a separate physical system, and each node is a client of a VIOS; the storage from the VIOS is mapped as vSCSI to the client; client nodeA is served by viosA, and client nodeB by viosB. Furthermore, this procedure assumes you're using SDDPCM for multi-pathing on the VIOS.

First of all, have your storage admin allocate and zone shared LUNs to the two VIOS: one or more LUNs that are zoned to both VIOS. This procedure assumes you will be zoning 4 LUNs of 128 GB each.

Once that is completed, then move to work on the VIOS:

SERVER: viosA

First, gather some system information as user root on the VIOS, and save this information to a file for safe-keeping.

# lspv
# lsdev -Cc disk
# /usr/ios/cli/ioscli lsdev -virtual
# lsvpcfg
# datapath query adapter
# datapath query device
# lsmap -all
Discover new SAN LUNs (4 * 128 GB) as user padmin on the VIOS. This can be accomplished by running cfgdev, the alternative to cfgmgr on the VIOS. Once that has run, identify the 4 new hdisk devices on the system, and run the "bootinfo -s" command to determine the size of each of the 4 new disks:
# cfgdev
# lspv
# datapath query device
# bootinfo -s hdiskX
Assign a PVID to the disks (repeat for all 4 LUNs):
# chdev -l hdiskX -a pv=yes
Next, map the new LUNs from viosA to the nodeA LPAR. You'll need to know 2 things here: [a] what vhost adapter (or "vadapter") to use, and [b] what name to give each new device (or "virtual target device"). Have a look at the output of the "lsmap -all" command that you ran previously. That will provide you with information on the current naming scheme for the virtual target devices. It will also show you what vhost adapters already exist and are in use for the client. In this case, we'll assume the vhost adapter is vhost0, and that there are already some virtual target devices, called nodeA_vtd0001 through nodeA_vtd0019. The four new LUNs will therefore be named nodeA_vtd0020 through nodeA_vtd0023. We'll also assume the new disks are numbered hdisk44 through hdisk47.
# mkvdev -vdev hdisk44 -vadapter vhost0 -dev nodeA_vtd0020
# mkvdev -vdev hdisk45 -vadapter vhost0 -dev nodeA_vtd0021
# mkvdev -vdev hdisk46 -vadapter vhost0 -dev nodeA_vtd0022
# mkvdev -vdev hdisk47 -vadapter vhost0 -dev nodeA_vtd0023
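To confirm that the mappings were created as intended, you can list the mappings for the vhost adapter again and verify that the four new virtual target devices point at the correct backing devices (hdisk44 through hdisk47):
# lsmap -vadapter vhost0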
Now the mapping of the LUNs is complete on viosA. You'll have to repeat the same process on viosB:

SERVER: viosB

First, gather some system information as user root on the VIOS, and save this information to a file for safe-keeping.
# lspv
# lsdev -Cc disk
# /usr/ios/cli/ioscli lsdev -virtual
# lsvpcfg
# datapath query adapter
# datapath query device
# lsmap -all
Discover new SAN LUNs (4 * 128 GB) as user padmin on the VIOS. This can be accomplished by running cfgdev, the alternative to cfgmgr on the VIOS. Once that has run, identify the 4 new hdisk devices on the system, and run the "bootinfo -s" command to determine the size of each of the 4 new disks:
# cfgdev
# lspv
# datapath query device
# bootinfo -s hdiskX
No need to set the PVID this time. It was already configured on viosA, and after running the cfgdev command, the PVID should be visible on viosB as well, and it should match the PVIDs on viosA. Make sure this is correct:
# lspv
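For example, assuming the new LUNs were discovered with the same hdisk names (hdisk44 through hdisk47) on viosB as on viosA, a small loop (just a sketch) makes it easy to compare the PVIDs on both VIOS:
for d in hdisk44 hdisk45 hdisk46 hdisk47 ; do
   lspv | grep -w $d
done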
Map the new LUN from viosB to the nodeB lpar. Again, you'll need to know the vadapter and the virtual target device names to use, and you can derive that information by looking at the output of the "lsmap -all" command. If you've done your work correctly in the past, the naming of the vadapter and the virtual target devices will probably be the same on viosB as on viosA:
# mkvdev -vdev hdisk44 -vadapter vhost0 -dev nodeB_vtd0020
# mkvdev -vdev hdisk45 -vadapter vhost0 -dev nodeB_vtd0021
# mkvdev -vdev hdisk46 -vadapter vhost0 -dev nodeB_vtd0022
# mkvdev -vdev hdisk47 -vadapter vhost0 -dev nodeB_vtd0023
Now that the mapping on both the VIOS has been completed, it is time to move to the client side. First, gather some information about the PowerHA cluster on the clients, by running as root on the nodeA client:
# clstat -o
# clRGinfo
# lsvg | lsvg -pi
Run cfgmgr on nodeA to discover the mapped LUNs, and then on nodeB:
# cfgmgr
# lspv
Ensure that the disk attributes are correctly set on both servers. Repeat the following command for all 4 new disks:
# chdev -l hdiskX -a algorithm=fail_over -a hcheck_interval=60 -a queue_depth=20 -a reserve_policy=no_reserve
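For example, assuming the new disks were discovered as hdisk55 through hdisk58 on the clients (as used below), a small loop on each node saves some typing:
for d in hdisk55 hdisk56 hdisk57 hdisk58 ; do
   chdev -l $d -a algorithm=fail_over -a hcheck_interval=60 \
         -a queue_depth=20 -a reserve_policy=no_reserve
done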
Now you can add the 4 newly added physical volumes to a shared volume group. In our example, the shared volume group is called sharedvg, and the newly discovered disks are called hdisk55 through hdisk58. Finally, the concurrent resource group is called concurrent_rg.
# /usr/es/sbin/cluster/sbin/cl_extendvg -cspoc -g'concurrent_rg' -R'nodeA' sharedvg hdisk55 hdisk56 hdisk57 hdisk58
Next, you can move forward to creating logical volumes (and file systems if necessary), for example, when creating raw logical volumes for an Oracle database:
# /usr/es/sbin/cluster/sbin/cl_mklv -TO -t raw -R'nodeA' -U oracle -G dba -P 600 -y asm_raw5 sharedvg 1023 hdisk55
# /usr/es/sbin/cluster/sbin/cl_mklv -TO -t raw -R'nodeA' -U oracle -G dba -P 600 -y asm_raw6 sharedvg 1023 hdisk56
# /usr/es/sbin/cluster/sbin/cl_mklv -TO -t raw -R'nodeA' -U oracle -G dba -P 600 -y asm_raw7 sharedvg 1023 hdisk57
# /usr/es/sbin/cluster/sbin/cl_mklv -TO -t raw -R'nodeA' -U oracle -G dba -P 600 -y asm_raw8 sharedvg 1023 hdisk58
Finally, verify the volume group:
# lsvg -p sharedvg
# lsvg sharedvg
# ls -l /dev/asm_raw*
If the addition of the LUNs has to be backed out, these are the steps to complete (a sketch of the commands follows the list):
  1. Remove the raw logical volumes (using the cl_rmlv command)
  2. Remove the added LUNs from the volume group (using the cl_reducevg command)
  3. Remove the disk devices on both client nodes: rmdev -dl hdiskX
  4. Remove LUN mappings from each VIOS (using the rmvdev command)
  5. Remove the LUNs from each VIOS (using the rmdev command)
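As a sketch, using the example names from this procedure and leaving out the CSPOC flags for cl_rmlv and cl_reducevg (these depend on your PowerHA version), steps 3 through 5 could look like this:

On both client nodes (repeat for hdisk55 through hdisk58):
# rmdev -dl hdisk55
As padmin on viosA (use the corresponding nodeB_vtd names on viosB):
# rmvdev -vtd nodeA_vtd0020
As root on each VIOS (repeat for hdisk44 through hdisk47):
# rmdev -dl hdisk44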

Topics: AIX, Storage, System Administration

Identifying a Disk Bottleneck Using filemon

This blog describes the steps required to identify an I/O problem in the storage area network and/or disk arrays on AIX.

Note: Do not execute filemon with AIX 6.1 Technology Level 6 Service Pack 1 if WebSphere MQ is running. WebSphere MQ will abnormally terminate with this AIX release.

Running filemon: As a rule of thumb, a write to a cached fiber attached disk array should average less than 2.5 ms and a read from a cached fiber attached disk array should average less than 15 ms. To confirm the responsiveness of the storage area network and disk array, filemon can be utilized. The following example will collect statistics for a 90 second interval.

# filemon -PT 268435184 -O pv,detailed -o /tmp/filemon.rpt;sleep 90;trcstop

Run trcstop command to signal end of trace.
Tue Sep 15 13:42:12 2015
System: AIX 6.1 Node: hostname Machine: 0000868CF300
[filemon: Reporting started]
# [filemon: Reporting completed]

[filemon: 90.027 secs in measured interval]
Then, review the generated report (/tmp/filemon.rpt).
# more /tmp/filemon.rpt
.
.
.
------------------------------------------------------------------------
Detailed Physical Volume Stats   (512 byte blocks)
------------------------------------------------------------------------

VOLUME: /dev/hdisk11  description: XP MPIO Disk P9500   (Fibre)
reads:                  437296  (0 errs)
  read sizes (blks):    avg     8.0 min       8 max       8 sdev     0.0
  read times (msec):    avg   11.111 min   0.122 max  75.429 sdev   0.347
  read sequences:       1
  read seq. lengths:    avg 3498368.0 min 3498368 max 3498368 sdev     0.0
seeks:                  1       (0.0%)
  seek dist (blks):     init 3067240
  seek dist (%tot blks):init 4.87525
time to next req(msec): avg   0.206 min   0.018 max 461.074 sdev   1.736
throughput:             19429.5 KB/sec
utilization:            0.77

VOLUME: /dev/hdisk12  description: XP MPIO Disk P9500   (Fibre)
writes:                 434036  (0 errs)
  write sizes (blks):   avg     8.1 min       8 max      56 sdev     1.4
  write times (msec):   avg   2.222 min   0.159 max  79.639 sdev   0.915
  write sequences:      1
  write seq. lengths:   avg 3498344.0 min 3498344 max 3498344 sdev     0.0
seeks:                  1       (0.0%)
  seek dist (blks):     init 3067216
  seek dist (%tot blks):init 4.87521
time to next req(msec): avg   0.206 min   0.005 max 536.330 sdev   1.875
throughput:             19429.3 KB/sec
utilization:            0.72
.
.
.
In the above report, hdisk11 was the busiest disk on the system during the 90 second sample. The reads from hdisk11 averaged 11.111 ms. Since this is less than 15 ms, the storage area network and disk array were performing within scope for reads.

Also, hdisk12 was the second busiest disk on the system during the 90 second sample. The writes to hdisk12 averaged 2.222 ms. Since this is less than 2.5 ms, the storage area network and disk array were performing within scope for writes.

Other methods to measure similar information:

You can use the topas command with the -D option to get an overview of the busiest disks on the system:
# topas -D
In the output, the columns ART and AWT provide similar information. ART stands for the average time to receive a response from the hosting server for the read requests sent, and AWT stands for the average time to receive a response from the hosting server for the write requests sent.

You can also use the iostat command, using the -D (for drive utilization) and -l (for long listing mode) options:
# iostat -Dl 60
This will provide an overview of your disks over a 60 second period. The "avg serv" columns under the read and write sections provide the average service times for reads and writes for each disk.

An occasional peak value recorded on a system doesn't immediately mean there is a disk bottleneck. Longer periods of monitoring are required to determine whether a certain disk is indeed a bottleneck for your system.
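For example, one simple way to collect data over a longer period is to append timestamped iostat samples to a log file and review them afterwards (the file name /tmp/iostat.log is just an example; stop the loop with Ctrl-C when you have gathered enough data):
while true ; do
   date >> /tmp/iostat.log
   iostat -Dl 60 1 >> /tmp/iostat.log
done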

Topics: AIX, Backup & restore, Storage, System Administration

Using mkvgdata and restvg in DR situations

It is useful to run the following commands before you create your (at least) weekly mksysb image:

# lsvg -o | xargs -i mkvgdata {}
# tar -cvf /sysadm/vgdata.tar /tmp/vgdata
Add these commands to your mksysb script, just before running the mksysb command. What this does is run the mkvgdata command for each online volume group, which generates output for each volume group in /tmp/vgdata. The resulting output is then tar'd and stored in the /sysadm folder or file system. This allows information regarding your volume groups, logical volumes, and file systems to be included in your mksysb image.
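For example, a simplified fragment of such a mksysb script could look like this (the image file name /sysadm/mksysb.img is just an example; adjust paths to your environment):

# Save the volume group, logical volume and file system layout
lsvg -o | xargs -i mkvgdata {}
tar -cvf /sysadm/vgdata.tar /tmp/vgdata

# Create the mksysb image; /sysadm/vgdata.tar will be included in it
mksysb -i /sysadm/mksysb.img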

To recreate the volume groups, logical volumes and file systems:
  • Run:
    # tar -xvf /sysadm/vgdata.tar
  • Now edit the /tmp/vgdata/{volume group name}/{volume group name}.data file and look for the line with "VG_SOURCE_DISK_LIST=". Change this line to list the hdisks, vpaths or hdiskpowers as needed.
  • Run:
    # restvg -r -d /tmp/vgdata/{volume group name}/{volume group name}.data
Make sure to remove existing file systems with the rmfs command before running restvg, or it will not run correctly. Alternatively, you can just run it once, run the exportvg command for the same volume group, and then run the restvg command again (see the example below). There is also a "-s" flag for restvg that lets you shrink the file systems to their minimum required size, but depending on when the vgdata was created, you could run out of space when restoring the contents of the file systems. Just something to keep in mind.
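For example, for a volume group called datavg (an example name), that run-exportvg-rerun sequence would look like this:
# restvg -r -d /tmp/vgdata/datavg/datavg.data
# exportvg datavg
# restvg -r -d /tmp/vgdata/datavg/datavg.data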

Topics: AIX, Storage, System Administration, Virtualization

Change default value of hcheck_interval

The default value of hcheck_interval for vSCSI hdisks is set to 0, meaning that health checking is disabled. The hcheck_interval attribute of an hdisk can only be changed online if the volume group to which the hdisk belongs is not active. If the volume group is active, the ODM value of hcheck_interval can still be altered in the CuAt class, as shown in the following example for hdisk0:

# chdev -l hdisk0 -a hcheck_interval=60 -P
The change will then be applied once the system is rebooted. However, it is possible to change the default value of the hcheck_interval attribute in the PdAt ODM class. As a result, you won't have to worry about its value anymore and newly discovered hdisks will automatically get the new default value, as illustrated in the example below:
# odmget -q 'attribute = hcheck_interval AND uniquetype = \
PCM/friend/vscsi' PdAt | sed 's/deflt = \"0\"/deflt = \"60\"/' \
| odmchange -o PdAt -q 'attribute = hcheck_interval AND \
uniquetype = PCM/friend/vscsi'
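To verify the new default afterwards, you can query the PdAt class again and check the deflt field, and check the attribute of any hdisk discovered after the change (hdisk1 is just an example here):
# odmget -q 'attribute = hcheck_interval AND uniquetype = \
PCM/friend/vscsi' PdAt
# lsattr -El hdisk1 -a hcheck_interval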

Topics: AIX, Storage, System Administration

Mounting USB drive on AIX

To familiarize yourself with using USB drives on AIX, take a look at the following article at IBM developerWorks:

http://www.ibm.com/developerworks/aix/library/au-flashdrive/

Before you start using it, make sure you DLPAR the USB controller to your LPAR, if you haven't done so already. You should then see the USB devices on your system:

# lsconf | grep usb
+ usbhc0 U78C0.001.DBJX589-P2          USB Host Controller
+ usbhc1 U78C0.001.DBJX589-P2          USB Host Controller
+ usbhc2 U78C0.001.DBJX589-P2          USB Enhanced Host Controller
+ usbms0 U78C0.001.DBJX589-P2-C8-T5-L1 USB Mass Storage
After you plug in the USB drive, run cfgmgr to discover the drive, or, if you don't want to run the whole cfgmgr, run:
# /etc/methods/cfgusb -l usb0
Some devices may not be recognized by AIX, and may require you to run the lquerypv command:
# lquerypv -h /dev/usbms0
To create a 2 TB file system on the drive, run:
# mkfs -olog=INLINE,ea=v2 -s2000G -Vjfs2 /dev/usbms0
To mount the file system, run:
# mount -o log=INLINE /dev/usbms0 /usbmnt
Then enjoy using a 2 TB file system:
# df -g /usbmnt
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/usbms0     2000.00   1986.27    1%     3182     1% /usbmnt

Topics: AIX, Hardware, Storage, System Administration

Creating a dummy disk device

At times it may be necessary to create a dummy disk device, for example when you need a disk to be discovered by cfgmgr with a certain name on multiple hosts.

For example, if you need the disk to be called hdisk2, and only hdisk0 exists on the system, then running cfgmgr will discover the disk as hdisk1, not as hdisk2. In order to make sure cfgmgr indeed discovers the new disk as hdisk2, you can fool the system by temporarily creating a dummy disk device.

Here are the steps involved:

First, remove the newly discovered disk (in the example below known as hdisk1; we will configure this disk as hdisk2):

# rmdev -dl hdisk1
Next, we create a dummy disk device with the name hdisk1:
# mkdev -l hdisk1 -p dummy -c disk -t hdisk -w 0000
Note that running the command above may result in an error. However, if you run the following command afterwards, you will notice that the dummy disk device indeed has been created:
# lsdev -Cc disk | grep hdisk1
hdisk1 Defined    SSA Logical Disk Drive
Also note that the dummy disk device will not show up if you run the lspv command. That is of no concern.

Now run the cfgmgr command to discover the new disk. You'll notice that the new disk will now be discovered as hdisk2, because hdisk0 and hdisk1 are already in use.
# cfgmgr
# lsdev -Cc disk | grep hdisk2
Finally, remove the dummy disk device:
# rmdev -dl hdisk1

Topics: AIX, Storage, System Administration

Erasing disks

During a system decommission process, it is advisable to format or at least erase all drives. There are 2 ways of accomplishing that:

If you have time:

AIX allows disks to be erased via the Format media service aid in the AIX diagnostic package. To erase a hard disk, run the following command:

# diag -T format
This will start the Format media service aid in a menu driven interface. If prompted, choose your terminal. You will then be presented with a resource selection list. Choose the hdisk devices you want to erase from this list and commit your changes according to the instructions on the screen.

Once you have committed your selection, choose Erase Disk from the menu. You will then be asked to confirm your selection. Choose Yes. You will be asked if you want to Read data from drive or Write patterns to drive. Choose Write patterns to drive. You will then have the opportunity to modify the disk erasure options. After you specify the options you prefer, choose Commit Your Changes. The disk is now erased. Please note, that it can take a long time for this process to complete.

If you want to do it quick-and-dirty:

For each disk, use the dd command to overwrite the data on the disk. For example:
for disk in $(lspv | awk '{print $1}') ; do
   dd if=/dev/zero of=/dev/r${disk} bs=1024 count=10
   echo $disk wiped
done
This does the trick, as it reads zeroes from /dev/zero and writes 10 blocks of 1024 bytes of zeroes to each disk. That overwrites anything at the start of the disk, rendering the disk useless.
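If you have a bit more time and want to overwrite each disk completely rather than just the first blocks, you could drop the count argument and let dd run to the end of every disk; a sketch (note that this can take many hours on large disks):
for disk in $(lspv | awk '{print $1}') ; do
   dd if=/dev/zero of=/dev/r${disk} bs=1024k
   echo $disk wiped completely
done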

Topics: AIX, LVM, Storage, System Administration

VGs (normal, big, and scalable)

The original VG type, commonly known as standard or normal, allows a maximum of 32 physical volumes (PVs). A standard or normal VG allows no more than 1016 physical partitions (PPs) per PV and has an upper limit of 256 logical volumes (LVs) per VG. Subsequently, a new VG type was introduced, referred to as big VG. A big VG allows up to 128 PVs and a maximum of 512 LVs.

AIX 5L Version 5.3 has introduced a new VG type called scalable volume group (scalable VG). A scalable VG allows a maximum of 1024 PVs and 4096 LVs. The maximum number of PPs applies to the entire VG and is no longer defined on a per disk basis. This opens up the prospect of configuring VGs with a relatively small number of disks and fine-grained storage allocation options through a large number of PPs, which are small in size. The scalable VG can hold up to 2,097,152 (2048 K) PPs. As with the older VG types, the size is specified in units of megabytes and the size variable must be equal to a power of 2. The range of PP sizes starts at 1 (1 MB) and goes up to 131,072 (128 GB). This is more than two orders of magnitude above the 1024 (1 GB), which is the maximum for both normal and big VG types in AIX 5L Version 5.2. The new maximum PP size provides an architectural support for 256 petabyte disks.

The table below shows the variation of configuration limits with different VG types. Note that the maximum number of user definable LVs is given by the maximum number of LVs per VG minus 1 because one LV is reserved for system use. Consequently, system administrators can configure 255 LVs in normal VGs, 511 in big VGs, and 4095 in scalable VGs.

VG type       Max PVs   Max LVs   Max PPs per VG          Max PP size
Normal VG     32        256       32,512 (1016 * 32)      1 GB
Big VG        128       512       130,048 (1016 * 128)    1 GB
Scalable VG   1024      4096      2,097,152               128 GB

The scalable VG implementation in AIX 5L Version 5.3 provides configuration flexibility with respect to the number of PVs and LVs that can be accommodated by a given instance of the new VG type. The configuration options allow any scalable VG to contain 32, 64, 128, 256, 512, 768, or 1024 disks and 256, 512, 1024, 2048, or 4096 LVs. You do not need to configure the maximum values of 1024 PVs and 4096 LVs at the time of VG creation to account for potential future growth. You can always increase the initial settings at a later date as required.

The System Management Interface Tool (SMIT) and the Web-based System Manager graphical user interface fully support the scalable VG. Existing SMIT panels, which are related to VG management tasks, have been changed and many new panels added to account for the scalable VG type. For example, you can use the new SMIT fast path _mksvg to directly access the Add a Scalable VG SMIT menu.

The user commands mkvg, chvg, and lsvg have been enhanced in support of the scalable VG type.
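For example, a minimal sketch of creating a scalable VG and raising its limits later (the disk names, VG name, and sizes are placeholders):
# mkvg -S -y datavg -s 64 hdisk2 hdisk3
# chvg -P 2048 -v 4096 datavg
Here -s 64 sets a 64 MB PP size, and the chvg command raises the limits to the maximum of 2048 K (2,097,152) PPs and 4096 LVs for the volume group.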

For more information:
http://www.ibm.com/developerworks/aix/library/au-aix5l-lvm.html.

Topics: AIX, Oracle, SDD, Storage, System Administration

RAC OCR and VOTE LUNs

Consistent naming is not required for Oracle ASM devices, but LUNs used for the OCR and VOTE functions of Oracle RAC environments must have the same device names on all RAC systems. If the names for the OCR and VOTE devices are different, create a new device for each of these functions, on each of the RAC nodes, as follows:

First, check the PVIDs of each disk that is to be used as an OCR or VOTE device on all the RAC nodes. For example, if you're setting up a RAC cluster consisting of 2 nodes, called node1 and node2, check the disks as follows:

root@node1 # lspv | grep vpath | grep -i none
vpath6          00f69a11a2f620c5                    None
vpath7          00f69a11a2f622c8                    None
vpath8          00f69a11a2f624a7                    None
vpath13         00f69a11a2f62f1f                    None
vpath14         00f69a11a2f63212                    None

root@node2 /root # lspv | grep vpath | grep -i none
vpath4          00f69a11a2f620c5                    None
vpath5          00f69a11a2f622c8                    None
vpath6          00f69a11a2f624a7                    None
vpath9          00f69a11a2f62f1f                    None
vpath10         00f69a11a2f63212                    None
As you can see, vpath6 on node 1 is the same disk as vpath4 on node 2. You can determine this by looking at the PVID.

Check the major and minor numbers of each device:
root@node1 # cd /dev
root@node1 # lspv|grep vpath|grep None|awk '{print $1}'|xargs ls -als
0 brw-------    1 root     system       47,  6 Apr 28 18:56 vpath6
0 brw-------    1 root     system       47,  7 Apr 28 18:56 vpath7
0 brw-------    1 root     system       47,  8 Apr 28 18:56 vpath8
0 brw-------    1 root     system       47, 13 Apr 28 18:56 vpath13
0 brw-------    1 root     system       47, 14 Apr 28 18:56 vpath14

root@node2 # cd /dev
root@node2 # lspv|grep vpath|grep None|awk '{print $1}'|xargs ls -als
0 brw-------    1 root     system       47,  4 Apr 29 13:33 vpath4
0 brw-------    1 root     system       47,  5 Apr 29 13:33 vpath5
0 brw-------    1 root     system       47,  6 Apr 29 13:33 vpath6
0 brw-------    1 root     system       47,  9 Apr 29 13:33 vpath9
0 brw-------    1 root     system       47, 10 Apr 29 13:33 vpath10
Now, on each node, set up a consistent naming convention for the OCR and VOTE devices. For example, if you wish to set up 2 OCR and 3 VOTE devices:

On server node1:
# mknod /dev/ocr_disk01 c 47 6
# mknod /dev/ocr_disk02 c 47 7
# mknod /dev/voting_disk01 c 47 8
# mknod /dev/voting_disk02 c 47 13
# mknod /dev/voting_disk03 c 47 14
On server node2:
# mknod /dev/ocr_disk01 c 47 4
# mknod /dev/ocr_disk02 c 47 5
# mknod /dev/voting_disk01 c 47 6
# mknod /dev/voting_disk02 c 47 9
# mknod /dev/voting_disk03 c 47 10
This will result in a consistent naming convention for the OCR and VOTE devices on both nodes:
root@node1 # ls -als /dev/*_disk*
0 crw-r--r-- 1 root system  47,  6 May 13 07:18 /dev/ocr_disk01
0 crw-r--r-- 1 root system  47,  7 May 13 07:19 /dev/ocr_disk02
0 crw-r--r-- 1 root system  47,  8 May 13 07:19 /dev/voting_disk01
0 crw-r--r-- 1 root system  47, 13 May 13 07:19 /dev/voting_disk02
0 crw-r--r-- 1 root system  47, 14 May 13 07:20 /dev/voting_disk03

root@node2 # ls -als /dev/*_disk*
0 crw-r--r-- 1 root system  47,  4 May 13 07:20 /dev/ocr_disk01
0 crw-r--r-- 1 root system  47,  5 May 13 07:20 /dev/ocr_disk02
0 crw-r--r-- 1 root system  47,  6 May 13 07:21 /dev/voting_disk01
0 crw-r--r-- 1 root system  47,  9 May 13 07:21 /dev/voting_disk02
0 crw-r--r-- 1 root system  47, 10 May 13 07:21 /dev/voting_disk03

Topics: AIX, Backup & restore, LVM, Performance, Storage, System Administration

Using lvmstat

One of the best tools to look at LVM usage is lvmstat. It can report the number of bytes read from and written to logical volumes. Using that information, you can determine which logical volumes are used the most.

Gathering LVM statistics is not enabled by default:

# lvmstat -v data2vg
0516-1309 lvmstat: Statistics collection is not enabled for
        this logical device. Use -e option to enable.
As you can see from the output here, it is not enabled, so you need to enable it for each volume group prior to running the tool:
# lvmstat -v data2vg -e
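If you want to enable statistics collection for all online volume groups at once, a small loop (just a sketch) will do:
for vg in $(lsvg -o) ; do
   lvmstat -v $vg -e
done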
The following command takes a snapshot of LVM information every second for 10 intervals:
# lvmstat -v data2vg 1 10
The following view shows the most utilized logical volumes on your system since you started the data collection. This is very helpful when drilling down to the logical volume layer when tuning your systems.
# lvmstat -v data2vg

Logical Volume    iocnt   Kb_read  Kb_wrtn   Kbps
  appdatalv      306653  47493022   383822  103.2
  loglv00            34         0     3340    2.8
  data2lv           453    234543   234343   89.3         
What are you looking at here?
  • iocnt: Reports back the number of read and write requests.
  • Kb_read: Reports back the total data (kilobytes) from your measured interval that is read.
  • Kb_wrtn: Reports back the amount of data (kilobytes) from your measured interval that is written.
  • Kbps: Reports back the amount of data transferred in kilobytes per second.
You can use the -d option for lvmstat to disable the collection of LVM statistics.
