Feb 23 2011

MSA 2012i SAN Performance Windows vs Linux

Published under Storage




I’ve noticed backups getting slower and slower over time, with throughput dropping from 2500 to 500 MB/min.
I run backups on 2 dedicated RAID0 disks hosted on an iSCSI MSA array first, then duplicate them to tape. Since I want to use the same array for a new database project, I want to measure and tune access to the SAN disks from Windows Server 2003 and Red Hat Enterprise Linux 5.
 

Environment

4 disks have been set up in a RAID 10 set.
All interfaces are forced to 1G speed on both the SAN and the server side.
Servers are connected to the SAN via 2 interfaces in a load-balanced mode, which gives a maximum theoretical throughput of 250 MB/s (a bit less considering frame headers).
Jumbo frames are disabled since they’re not supported on the switches I’m using in this setup; they could give slightly better performance.
I also tuned the read-ahead cache but got no significant improvement.
 

Results

The tests were conducted with a simple dd on Linux and Diskbench on Windows.
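For reference, here is a sketch of the kind of dd invocation that can be used on the Linux side (the exact commands aren't recorded in this post; the target path and sizes are illustrative, and the direct flags bypass the page cache so the array itself is measured):

$ dd if=/dev/zero of=/mnt/san/ddtest bs=1M count=4096 oflag=direct   # sequential write test
$ dd if=/mnt/san/ddtest of=/dev/null bs=1M iflag=direct              # sequential read test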
 

Reads		MB/s	IO/s
Windows		70	1100
Linux		110	900

Writes		MB/s	IO/s
Windows		140	2100
Linux		140	400

 

Optimizations

There was no optimization whatsoever on the Linux platform. The partition was formatted with the default ext3 filesystem on an LVM volume.
On the Windows side, there was no fragmentation since we started off with a brand-new drive (fragmentation does degrade performance). The disk is formatted in NTFS with default 4k clusters; increasing the cluster size does not seem to have much impact.
The partition has been aligned with the physical disk in diskpart to work around a Windows Server 2003 caveat. This gave a 5 MB/s increase in maximum disk throughput (see the sketch below).
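As a reminder, alignment has to be done when the partition is created; a minimal diskpart sketch, where the disk number and the 64 KB offset are illustrative and should match your array's stripe size:

C:\>diskpart
DISKPART> select disk 1
DISKPART> create partition primary align=64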
 

Then?

2 questions:
Why does the HP MSA 2012i write faster than it reads?! Write cache?
 
Why does the Windows read bandwidth get stuck at 70 MB/s?
Throughput improved by roughly 5 MB/s with the Windows optimizations above, but we now seem to hit a hard cap. I get the same results with RAID0, RAID5 or RAID10!!

 


Jan 18 2011

Open Any Windows Document from AS400

Published under AS400

You can open any kind of document on Windows from an IBM i 5250 session or a CL program.
The file can be an image (JPEG, TIFF, BMP, PNG), a video (AVI, MPEG), a PDF, an Office document (Word, Excel, PowerPoint), or even an Internet link (URL).
 
AS400 provides the strpccmd command for this purpose:

strpccmd pccmd('explorer C:\directory\file')

 
The explorer command tells Windows to open the document with the default application configured in the Control Panel. It acts as if you had double-clicked the file in Explorer, e.g. Acrobat Reader for PDF files.
 
You don’t absolutely need the ‘explorer’ command to open the document, but it fixes some issues (see the examples after this list):
– No DOS window is displayed
– URLs open in the default browser, Internet Explorer or Firefox (this wouldn’t work otherwise)
– Documents with spaces in the filename, which tend to be troublesome, open correctly
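A couple of illustrative calls (the path and URL are made up for the example); PAUSE(*NO) returns to the 5250 session without waiting for a key press:

strpccmd pccmd('explorer "C:\shared docs\manual.pdf"') pause(*no)
strpccmd pccmd('explorer http://www.example.com') pause(*no)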

 


Jan 16 2011

6 Reasons NOT to Use Microsoft DFS Replication

Published under Windows

Based on advice from a Microsoft techie, I set up DFS (Distributed File System) on a few remote sites about a year ago. I was sceptical at first, and the product is so disappointing that I decided to warn people who haven’t made up their minds yet.
 
Here are 6 good reasons NOT to use DFS Replication
 


No Defragmentation

After defragmenting, a new replication occurs, generating huge traffic that can max out your lines’ bandwidth for a few days if the folders are big enough.
I haven’t tried it, but I wonder if the same thing would happen when creating or removing the disk index.
 

File Locking

There’s no such thing as inter-site file locking, since there are 2 (or more) local copies of each file.
 

Concurrent Access

As a consequence of the previous point, data may be overwritten. If someone leaves a file open all day and saves it before leaving the office, all changes made from another replication site are lost.
DFS is therefore only advisable for write-protected files or files that are modified from a single site.

Failover

If one of the DFS servers on a site becomes unavailable, some users are still redirected to that server and may lose access to their files. Wasn’t failover one of DFS’s goals?
Of course, users can still select the DFS target manually, but most of them probably don’t know the trick. The problem does not affect every folder, since a DFS server is picked for each of them; there is also a referral cache that defaults to 1800 seconds, i.e. 30 minutes.
Solution: upgrade to Windows Server Enterprise, which provides clustering and raises the price of your license.
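As a stopgap, I believe the client’s referral cache can be flushed with dfsutil from the support tools, which forces a new target to be picked; treat this as a hint rather than a tested fix:

C:\>dfsutil /pktflush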
 

Diagnostic Tools

No graphical tool (usually a Microsoft speciality) is available to monitor what’s being replicated.
There are the connstat.cmd and iologsum.cmd scripts, written in Perl (really!?) and shipped with the Microsoft support tools. You first need to move them to a path with no spaces or you’ll get the following error:

Can't open perl script "C:\Program": No such file or directory


Microsoft posted a bug report on their website back in 2007 rather than fixing it!
Alternatively, replace “@perl %~dpn0.cmd %*” with “@perl %0 %*” near the end of the script…
Script usage is not very intuitive and the information is not always relevant in my opinion. Usage with a practical example is available on Microsoft’s website.
 

Bandwidth Throttling

Bandwidth throttling isn’t possible on Windows Server 2003, as far as I know…
On top of that, stopping the DFS service does not stop an ongoing replication: nothing happens, replication goes on. I had to run ‘net stop ntfrs’ to stop the flow. Unfortunately, this also prevents Active Directory from replicating if you’re on a domain controller.
 
In a nutshell, DFS has so many gaps and restrictions that it is difficult to use for anything but content publishing across an enterprise.

 


Jan 14 2011

Multipath on iSCSI Disks and LVM Partitions on Linux

Published under Linux, Storage

Here are a few steps to configure iSCSI disks on Linux. Although I set this up on Red Hat Enterprise Linux connected to an HP MSA 2012i, the configuration remains generic and can be applied to any SAN.
I will add another post comparing how Linux and Windows perform on the same iSCSI device, since a lot of issues have been reported on the net.
 

iSCSI Setup

First off, the iSCSI tools package is required:

redhat $ yum install iscsi-initiator-utils
debian $ apt-get install open-iscsi

 
Configure authentication if any is applied on the SAN. On Ubuntu/Debian, also set startup to automatic:

$ vi /etc/iscsi/iscsid.conf
node.session.auth.authmethod = <CHAP most of the time>
node.session.auth.username = <ISCSI_USERNAME>
node.session.auth.password = <Password>
discovery.sendtargets.auth.authmethod = <CHAP most of the time>
discovery.sendtargets.auth.username = <ISCSI_USERNAME>
discovery.sendtargets.auth.password = <Password>

debian $ vi /etc/iscsi/iscsid.conf
# node.startup = manual
node.startup = automatic

 
You don’t necessarily have to set a password if the network is secured with VLANs or dedicated switches and you are the only one connecting to the SAN. Authentication adds another layer of complexity when troubleshooting.
 
The initiator name will appear on the SAN as configured on the server. Originally something like InitiatorName=iqn.1994-05.com.redhat:2ea02d8870eb, it can be changed to a friendly hostname for a simpler setup.
It is defined in /etc/iscsi/initiatorname.iscsi; if the file is missing, you can set the InitiatorName manually.
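For example, the file might simply contain the following (the hostname part is illustrative):

$ cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:myserver01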
 
Now enable and start the iSCSI services permanently:

$ systemctl enable iscsi
$ systemctl start iscsi
$ systemctl enable iscsid
$ systemctl start iscsid

iSCSI SAN connections

 
Targets can be discovered with the iscsiadm command. Running it against a single IP is usually sufficient:

$ iscsiadm -m discovery -t sendtargets -p 10.0.0.1
$ iscsiadm -m discovery -t sendtargets -p 10.0.0.2

 
You can display them all

$ iscsiadm -m node
10.1.0.1:3260,2 iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.a
10.0.0.1:3260,1 iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.a
10.1.0.2:3260,2 iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.b
10.0.0.2:3260,1 iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.b

 
And connect (the service should do that for you too):

$ iscsiadm -m node -T iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.a --login
Logging in to [iface: default, target:
iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.a, portal: 10.0.0.1,3260] (multiple)
Logging in to [iface: default, target:
iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.a, portal: 10.1.0.1,3260] (multiple)
Login to [iface: default, target:
iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.a, portal: 10.0.0.1,3260] successful.
Login to [iface: default, target:
iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.a, portal: 10.1.0.1,3260] successful.

$ iscsiadm -m node -T iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.b --login
Logging in to [iface: default, target:
iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.b, portal: 10.0.0.2,3260] (multiple)
Logging in to [iface: default, target:
iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.b, portal: 10.1.0.2,3260] (multiple)
Login to [iface: default, target:
iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.b, portal: 10.0.0.2,3260] successful.
Login to [iface: default, target:
iqn.1986-03.com.hp:storage.msa2012i.0919d81b4b.b, portal: 10.1.0.2,3260] successful.

 
Each new iSCSI disk should now be listed under /dev/sd[a-z] or /dev/mapper; run “fdisk -l” or “lsblk” to check. In a 2-controller SAN setup, each device shows up as 2 separate disks: read the Multipath section below to configure your device. If the SAN has a single controller, you can work with your /dev/sd[a-z] devices straight away (not that I’d recommend it!).
 

Multipath

Install the multipath tools:

redhat $ yum install device-mapper-multipath
debian $ apt-get install multipath-tools

 
As advised on HP’s website, I set up /etc/multipath.conf as follows. Check your vendor’s website for your own hardware-specific configuration:

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                 "HP"
                product                "MSA2[02]12fc|MSA2012i"
                getuid_callout         "/sbin/scsi_id -g -u -s /block/%n"
                hardware_handler       "0"
                path_selector          "round-robin 0"
                path_grouping_policy   multibus
                failback               immediate
                rr_weight              uniform
                no_path_retry          18
                rr_min_io              100
                path_checker           tur
        }
}

Leaving the device section out does not seem to make a difference in practice, so this should work on any SAN as long as you make sure the /dev/sd[a-z] devices are not blacklisted.
 
Turn multipath service on:

redhat $ modprobe dm-multipath
all $ systemctl enable multipathd
all $ systemctl start multipathd

 
The multipath device mapper automatically groups disks with matching WWIDs (world wide IDs). Display the multipath topology:

$ multipath -ll
mpath1 (3600c0ff000d8239a6b082b4d01000000) dm-17 HP,MSA2012i
[size=9.3G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
 \_ 8:0:0:30 sde 8:64  [active][ready]
 \_ 9:0:0:30 sdf 8:80  [active][ready]
mpath0 (3600c0ff000d8239a1846274d01000000) dm-15 HP,MSA2012i
[size=1.9G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
 \_ 9:0:0:29 sdb 8:16  [active][ready]
 \_ 8:0:0:29 sdd 8:48  [active][ready]


If nothing shows up, run multipath -v3 to debug; blacklisting is the most common issue here.

LVM Partitioning

The resulting devices to work with are listed as /dev/mapper/mpath[0-9] in my case.
I initialize the disk with LVM for ease of use: LVM volumes are hot-resizable, can be extended onto extra disks, provide snapshots, etc. LVM is a must-have; if you do not use it yet, start right now!

$ pvcreate /dev/mapper/mpath0
$ vgcreate myVolumeGroup /dev/mapper/mpath0
$ lvcreate -n myVolume -L 10G myVolumeGroup
$ mkfs.ext4 /dev/myVolumeGroup/myVolume


Operations on LUNs


Add a new LUN

Once a new LUN has been created on the SAN, the server does not detect the disk until you do a rescan:

$ iscsiadm -m node --rescan

The iSCSI disks are now visible and multipath automatically creates the new device.
 

LUN removal

After unmounting the related filesystems, remove the LUNs on the SAN and run “multipath -f mpath?” for the desired device (see the sketch below).
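A sketch of the full sequence, reusing the mpath1 device and its sde/sdf paths from the multipath output above (names are illustrative for your setup); the echo lines delete the underlying SCSI devices so they do not linger:

$ umount /mnt/mydata
$ multipath -f mpath1
$ echo 1 > /sys/block/sde/device/delete
$ echo 1 > /sys/block/sdf/device/delete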
 

Expand volume

LVM is great here since you can resize the physical volume instead of creating a new volume and adding it to the volume group; we therefore keep a clean configuration on both the server and the SAN.
Refresh the disk size:

$ iscsiadm -m node --rescan


Check with fdisk -l that the disk size matches the size on the SAN, then reload multipath:

$ /etc/init.d/multipathd reload


Check with multipath -ll that the device size has increased, then resize the physical volume:

$ pvresize /dev/mapper/mpath0

The new disk space should now be available in the volume group. You can then extend the logical volume with lvresize, adding -r to resize the filesystem as well.
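A minimal sketch, reusing the volume group and logical volume names from the LVM section above with an illustrative 5 GB extension:

$ lvresize -r -L +5G /dev/myVolumeGroup/myVolume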
 

Load-balancing and Failover

In this setup, traffic is load-balanced across the 2 NICs. If an interface goes down, all the traffic flows through the second link.
I launched a big file copy onto the iSCSI disk and turned off one of the interfaces: the CPU load climbs quickly, then drops as soon as the failover timeout expires and the copy fails over onto the second link. Knowing this, set the timeout as low as possible, e.g. 5 seconds (see below).
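My assumption is that the timeout in question is the iSCSI replacement timeout in /etc/iscsi/iscsid.conf; check your own stack’s failover settings before relying on it:

$ vi /etc/iscsi/iscsid.conf
node.session.timeo.replacement_timeout = 5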

 


Jan 12 2011

Store Windows Credentials for Auto Login

Published under Windows

Windows prompts for a login and password when the currently logged-in user accesses a share on a domain or workgroup other than the one they logged in to. It is possible to store this identification information, or “credentials”, so you’re not prompted for the login and password every time you access the Windows share.
 

From the Command Line

 
Replace Server_address with the host name or IP address of the server.

C:\>net use * \\Server_address\My_Share /savecred
The password or user name is invalid for \\Server_address\My_Share.

Enter the user name for 'Server_address': MyDomain\Administrator
Enter the password for Server_address:
Drive Z: is now connected to \\Server_address\My_Share.

The command completed successfully.

 
All shares on the remote machine are now reachable, as long as the saved username is authorized to access them.

C:\>DIR \\Server_address\My_Other_Share
 Volume in drive \\Server_address\My_Other_Share is System
 Volume Serial Number is 7CBD-E099

 Directory of \\Server_address\My_Other_Share

03/08/2009  11:11              .
03/08/2009  11:11              ..
               0 File(s)                0 bytes
               3 Dir(s) 28 764 954 624 bytes free

 
A drive Z: is automatically mapped but can be removed safely (see below).
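To drop the mapping while keeping the stored credentials, something like the following should do (drive letter taken from the output above):

C:\>net use Z: /delete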
 

Storing Credentials in the Control Panel

 
Storing credentials is also possible through the Control Panel on some Windows versions, but not all:
Control Panel -> Stored User Names and Passwords on Windows Server
or
Control Panel -> User Accounts -> Advanced -> Manage on Windows XP
 

From a Unix/Linux station

mount -t cifs //Server_address/Share /mount_point -o credentials=credential.txt


The credential.txt file contains the shared folder authentication details:

username=my_username
password=my_password

 
Access should be granted to the owner exclusively since the values are stored in clear text. Restrict the file permissions accordingly, e.g. as shown below.
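For instance, assuming the credential.txt file used above, read/write for the owner only:

$ chmod 600 credential.txt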

 

