1.22 Swat Vdbench Trace Replay

File metadata and controls

73 lines (22 loc) · 4.35 KB

Swat is Oracle-internal only; creating two separate Vdbench manuals was just too much work. ☺

Vdbench, in cooperation with the Sun StorageTek™ Workload Analysis Tool (Swat) Trace Facility (STF), allows you to replay the I/O workload of a trace created using Swat.

Processing a trace file with Swat's 'Create Replay File' option creates the file flatfile.bin.gz, which contains one record for each I/O operation identified by Swat.

See Example 6: Swat Vdbench Trace Replay.

There are two ways to do a replay:

1. If you want precise control over which device is replayed on which SD, specify the device number in the SD: sd=sd1,lun=xx,replay=(123,456,789). You cannot replay a larger device on a smaller target SD.

2. If you want Vdbench to decide what goes where, create a Replay Group:

rg=group1,devices=(123,456,789)

sd=sd1,lun=xxx,replay=group1
sd=sd2,lun=yyy,replay=group1

Using this method, Vdbench will act as its own volume manager.

Swat will even help you with the creation of this replay parameter file: select 'File' → 'Create replay parameter file'. Just add enough SDs and some flour.

You can mix both methods if you want partial control over what is replayed where, as sketched below.
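A minimal sketch of a combined parameter file; the device numbers (123, 456, 789) and the lun paths are made-up placeholders, not values from a real trace:

```
* Method 1: precise control: trace device 123 is replayed on sd1.
sd=sd1,lun=/dev/rdsk/c0t0d0s0,replay=123

* Method 2: Vdbench decides where trace devices 456 and 789 go.
rg=group1,devices=(456,789)
sd=sd2,lun=/dev/rdsk/c1t0d0s0,replay=group1
sd=sd3,lun=/dev/rdsk/c1t1d0s0,replay=group1
```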

Add the Swat replay file name to the Run Definition parameters, and set an elapsed time at least as long as the duration of the original trace: 'rd=rd1,….,elapsed=60m,replay=flatfile.bin.gz'.

By default the I/O rate is set to the rate observed in the trace input. If a different I/O rate is requested, the inter-arrival times of all the I/Os found in the trace are adjusted to allow for the requested change. The run terminates as soon as the last I/O has completed.
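As a hedged illustration of both cases, the Run Definitions might look like this; the wd=wd1 Workload Definition, the /tmp path, and the 5000 IOPS figure are assumptions for illustration, not values from the manual:

```
* Replay at the I/O rate observed in the trace.
rd=rd1,wd=wd1,interval=10,elapsed=60m,replay=/tmp/flatfile.bin.gz

* Replay with inter-arrival times scaled to a requested 5000 IOPS.
rd=rd2,wd=wd1,interval=10,elapsed=60m,iorate=5000,replay=/tmp/flatfile.bin.gz
```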

Vdbench/Swat trace replay has been completely rewritten for Vdbench503. Two of the original requirements of Replay became a hindrance: all of the to-be-replayed I/O information had to be loaded into memory, and Replay could only run inside a single JVM (replaying a high-IOPS SSD workload just wouldn't work this way).

At the start of a replay run the replay file (flatfile.bin.gz) is split into one separate file per replayed device. This splitting is done only once, unless subsequent runs change the number of devices being replayed. These 'split' files are then read during the replay, instead of all the data having to be held in memory. (BTW: if you want to remove unused devices from your replay file even before you give it to Vdbench, use the 'File' → 'Filter Replay file' menu option.)

You can now request the Replay to be done ‘repeat=nn’ times instead of stopping after the last I/O is complete.

As I mentioned above, Replay can now be run using multiple JVMs. The RD parameter changes to:

replay=(/xx/flatfile.bin.gz,split=split_directory,repeat=nnn), where 'nnn' is the number of times that Vdbench needs to do the Replay, and split_directory is the target directory where the 'split' files are written (instead of in the same directory as flatfile.bin.gz).
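A sketch of this syntax filled in with made-up values (the path, split directory, repeat count, and wd=wd1 are illustrative assumptions):

```
* Split files go to /tmp/split instead of next to flatfile.bin.gz;
* the replay is repeated three times.
rd=rd1,wd=wd1,interval=10,elapsed=24h,replay=(/tmp/flatfile.bin.gz,split=/tmp/split,repeat=3)
```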

Note about 'nnn', how often to do the Replay: realize that even though you can repeat the replay 'nnn' times, previously accessed data is very likely to still be cached somewhere if you do not have a very long trace or if your cache is very large. The same is true for any consecutive replay, whether by repeating the replay 'nnn' times using the above parameter or by a new start of Vdbench.

With Swat replay, there is no longer a need to have an application installed to do performance testing. All you need is a one-time trace of the application's I/O workload; from that moment on, you can replay that workload as often as you want, without the effort and expense of installing and/or purchasing the application and/or database package and copying the customer's production data onto your system.

You can replay the workload on a different type of storage device to see what the performance will be, you can increase the workload to see what happens with performance, and with Veritas VxVm raw volumes (Veritas VxVm I/O is also traced) you can even modify the underlying Veritas VxVm structure to see what the performance will be when, for instance, you change from RAID 1 to RAID 5 or change stripe size!