Saturday, February 13, 2016

Using dd with direct I/O for a meaningful I/O test (especially for LGWR) -- oflag=direct

Recently, I did a health check in a critical Production environment and saw the following warning in the LGWR trace files.

"Warning: log write elapsed time 12000m"

There were high log file sync and log file parallel write wait times in the AWR reports, so I decided to do an I/O test on the underlying device.
The redo log files were residing on a Veritas filesystem, which in turn was sitting on top of disks coming from 3PAR storage.

As the LGWR process does direct I/O and bypasses the filesystem buffer cache, doing a dd test like the following was not meaningful:
dd if=/dev/zero of=/erm/testdatafile1 bs=1k count=2500k conv=fsync

In addition, the comparisons based on such a test were not meaningful either. Veritas seemed to be doing direct I/O even when the OS filesystem cache was enabled and no direct I/O flag was given to dd, while a Linux filesystem like ext3 was using the cache for its write operations; that is why there was such a big difference between the dd outputs.
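If you want to verify this kind of caching behaviour yourself, a rough check is to watch the Dirty counter in /proc/meminfo while running dd with and without oflag=direct. The mount point and file names below are just examples:

grep Dirty /proc/meminfo
dd if=/dev/zero of=/erm/cachetest1 bs=1k count=50000              # buffered write
grep Dirty /proc/meminfo                                          # jumps noticeably on a cached filesystem
dd if=/dev/zero of=/erm/cachetest2 bs=1k count=50000 oflag=direct # direct write
grep Dirty /proc/meminfo                                          # stays roughly flat
rm -f /erm/cachetest1 /erm/cachetest2

On a filesystem that silently converts such writes to direct I/O, even the buffered run will not inflate the Dirty counter much.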

The correct method in these situations is to do the write tests using the oflag=direct flag of the dd command.
Something like the following would do the job:

dd if=/dev/zero of=/erm/testfilenew1 bs=1k count=50000 oflag=direct

Just before running the command above, we can also disable the drive's write cache, just to be sure that no cache gets populated at all. (Actually it does not matter once oflag=direct is supplied to dd, but it is still worth mentioning.)

hdparm -W0 /dev/sda1
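If you want to see the current setting before changing it, hdparm can also report it. On SAN-backed or virtual devices the underlying ATA ioctl may fail (as in the output further below); in that case sdparm is the usual alternative for SCSI devices. The device names here are just examples:

hdparm -W /dev/sda              # reports write-caching = 0 (off) or 1 (on) for ATA disks
sdparm --get=WCE /dev/sda       # reads the Write Cache Enable bit on SCSI/SAN devices, if sdparm is installed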


Lastly, here is an example output from a test that I did on a standard ext3 filesystem.

[root@ermantest ~]# dd if=/dev/zero of=/erm/testfilenew1 bs=1k count=50000 oflag=direct
50000+0 records in
50000+0 records out
51200000 bytes (51 MB) copied, 16.8227 seconds, 3.0 MB/s

[root@ermantest ~]# hdparm -W0 /dev/sda1 (disabling the drive write cache)

/dev/sda1:
 setting drive write-caching to 0 (off)
 HDIO_DRIVE_CMD(setcache) failed: Inappropriate ioctl for device
[root@exatest ~]# dd if=/dev/zero of=/erm/testfilenew2 bs=1k count=50000 oflag=direct
50000+0 records in
50000+0 records out
51200000 bytes (51 MB) copied, 16.6633 seconds, 3.1 MB/s

So, as seen, we do fixed 1k-sized I/Os and we see roughly 3 MB/s of I/O throughput on a standard Linux ext3 filesystem residing on a local disk.
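To put that in latency terms: 50000 synchronous 1 KB writes in roughly 16.7 seconds is about 3000 writes per second, which works out to around 0.3 ms per direct write. That per-write latency is the figure that maps most directly to log file sync and log file parallel write waits.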

If we use /dev/urandom rather than /dev/zero, we see about 1.4 MB/s throughput, but that is just because of the CPU overhead of generating the random data; from an I/O perspective, using /dev/urandom or /dev/zero makes no difference in such a test.

[root@ermantest erm]# dd if=/dev/urandom of=testfilenew4 bs=1k count=50000 oflag=direct
50000+0 records in
50000+0 records out
51200000 bytes (51 MB) copied, 35.7821 seconds, 1.4 MB/s
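To confirm that the drop really comes from the CPU cost of generating random data rather than from the disk, you can take the disk out of the picture entirely and write to /dev/null; if that run is also capped at a similar rate, the bottleneck is /dev/urandom itself:

dd if=/dev/urandom of=/dev/null bs=1k count=50000    # no disk involved; measures pure random-data generation speed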

Why am I sharing this?

Because I want to give you a reasonable dd test for making a decision about the I/O performance in Linux.
I suggest using a command like dd if=/dev/zero of=/erm/testfilenew2 bs=1k count=50000 oflag=direct for testing the performance of LGWR-type I/O, and expecting at least 3 MB/s throughput from it.
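If you want a slightly fuller picture than a single 1k run, a quick sweep over a few block sizes on the same mount shows how the device behaves as the direct I/O size grows. The file names below are just examples:

# write ~50 MB per run with direct I/O at different block sizes; dd prints the throughput line on stderr
for spec in "1k 50000" "4k 12500" "64k 800" "1M 50"; do
  set -- $spec
  dd if=/dev/zero of=/erm/ddtest_$1 bs=$1 count=$2 oflag=direct 2>&1 | tail -1
done
rm -f /erm/ddtest_*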

If you see lower throughput, something like 500 KB/s, then I suggest you speak with the OS and filesystem admins (e.g. the Veritas admin), as well as the underlying storage admins, and have them check their tiers as well. The HBA especially should be checked, as direct I/Os need space in the queue and the HBA queue is a good place to look for that.
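Before escalating, a couple of quick checks from the Linux side can support the case. The tools are standard, but the device names are placeholders for the devices backing your redo logs:

iostat -x 1 5                            # watch await and avgqu-sz for the relevant devices while the dd test runs
cat /sys/block/sdX/device/queue_depth    # per-LUN queue depth reported by the SCSI layer (replace sdX)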
 
I find this topic interesting, so I am waiting for your comments about this post.
