Tuesday, July 22, 2008

When is 64 bit not really 64 bit?

Hello Everyone,

My journey, and I can call it that, of deploying a test environment (running multiple OSes, both 32 & 64 bit) based on VMware ESX 3.5 has taken a LOT longer than I expected and has taught me a thing or two about 64 bit. It also revealed a bigger problem holding back the uptake of "64 bit" products, along with an idea that, if adopted, could dramatically increase admins' buy-in to "64 bit".

There is a lot of talk about 64 bit applications, 64 bit operating systems, 64 bit hardware and all the wonderful things they bring. That might be true, but in the current state of software & hardware, 64 bit is not really 64 bit, and that can cause a lot of headaches, delays, and stress. Most vendors do not seem ready for it. Many "64 bit" applications are actually 32 bit code running under an emulation layer (WoW64), a software package might include some 64 bit apps but not all, and vendors rarely spell out what level of 64 bit hardware is required. Why? My guess is a lack of clear standards and documentation.

There is an answer, though! 64 bit needs an easy to understand marketing association that can assure end users that when something says 64 bit, it really is. Something similar to the hugely successful compatibility standard called Wi-Fi (aka 802.11b). When someone bought a Wi-Fi card from one manufacturer and an access point from another, there was no doubt they would work together. Why? The Wi-Fi Alliance tested for compatibility and worked with vendors to ensure everything "just worked". That is what we need for "64 bit" if we expect it to take off. Otherwise, everyone will stick with "32 bit" until it simply does not exist anymore, and at this rate, that will be a long time. So Microsoft, Intel, AMD, IBM, etc. need to create a 64-bit alliance, or call the Wi-Fi Alliance for some marketing assistance. Let's call it "64 bit-Tested for Performance" with a check-mark logo. Plaster that logo on everything that is fully 64 bit and supports Intel's VT or AMD-V.
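If you want to verify from the inside whether a program really got the 64 bit treatment, the pointer size of the running process is the tell. Here's a minimal Python sketch of the idea (my own illustration, not from any vendor's documentation):

```python
import struct

def pointer_width_bits() -> int:
    """Width of a native pointer in this process, in bits."""
    return 8 * struct.calcsize("P")

def is_true_64bit() -> bool:
    """True only if this process itself runs as 64 bit code.

    An installer labeled "64 bit" can still drop a 32 bit binary that
    runs under WoW64, so checking from inside the process is the only
    way to be sure.
    """
    return pointer_width_bits() == 64
```

On a 64 bit OS running a 32 bit binary under WoW64, pointer_width_bits() comes back as 32, which is exactly the kind of mislabeling a "64 bit-Tested for Performance" program would catch.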

Now, back to the story of my ESX journey, which began around January 2008 with no end in sight yet. The first step for VMware ESX 3.5 or Hyper-V 1.0 is a foundation based on a quality server. I considered a generic box, but realized that ESX has a compatibility list (aka Systems Guide, http://www.vmware.com/pdf/vi35_systems_guide.pdf) for servers because of drivers, since it installs on bare metal (no Windows here, folks). So I strongly recommend you read it to ensure the server, CPU, and RAID card are all compatible. Hyper-V's requirements are a lot easier: if Windows 2008 installs and the CPU supports Intel VT or AMD-V, you *should* be good to go. I've included below all the CPUs that support virtualization.

On to my VMware saga. The first server I attempted to use was an IBM x346. I purchased all the components (e.g. dual core CPUs, 16GB of RAM, 6 x Fujitsu U320 300GB 10k HDs) and was ready to start. To make a long story short: the IBM server does not like end users installing non-IBM hard drives, and a serious bug causing HDs to go offline, affecting an entire line of RAID controller cards, prevented the use of the server. I did quite a bit of research, and in the end the only answer I found was to replace the HDs with the same Fujitsu model # HDs but IBM branded, at $1300 each (about $1k more than I paid per drive), or use other IBM branded HDs. Another major design flaw in the IBM branded RAID 7k card is the need to boot an IBM RAID CD to access almost any functionality (putting a HD back online, replacing it, changing the config, etc). This is a horribly slow process compared to Adaptec, Dell, or HP cards, where you simply enter the card BIOS with a key combination and make all your changes there. I was not impressed with my first foray into IBM servers, so I will not be upset if it's my last. Goodbye, IBM servers.

Further reading on my battle with the IBM x346 Server.
To read the issue in more detail, check this out. I'm Benx346 if you're in doubt.
http://www.ibm.com/developerworks/forums/thread.jspa?threadID=207574&tstart=0
A "friend" suggested that I try replacing the IBM RAID card with an Adaptec card (on the VMware compatibility list) and then connecting the new RAID card to the backplane connector with a replacement SCSI cable. Well, a month or so later, I've learned that getting SCSI cabling companies to custom manufacture a SCSI cable simply isn't possible unless I plan to buy 1000 units. Feel free to look at the rare SCSI connector (micro centronics 80 female SCA) I was dealing with.
http://photos.serebin.com/gallery2/v/ben/temp/ibmx346-cable
Contest Alert: anyone who can find me an 18" micro centronics 80 female SCA to LVD high density 68 pin male cable with Ultra 320 data speeds will get a $100 reward.

The 2nd server was a Dell PowerEdge 2850, which supports 2 x dual core 64 bit 2.8GHz CPUs, 6 x Ultra320 SCSI HDs, and 16GB of RAM. The unit was dead on arrival. The vendor only had the one unit, since it was off lease, and ended up crediting me back for it. Still no server after 6 months. That's one of the major downsides of eBay: things frequently take longer.

The 3rd server was a Dell PowerEdge 2850 from another vendor I found via eBay, and it arrived operational (there are many companies that work with businesses giving up servers coming off lease). I took the 6 HDs from the IBM x346, installed them, and configured them for RAID 10. I moved over 12GB of the memory (only 6 slots versus the 8 on the IBM) and powered it up. Still good, and very fast. So, I installed Windows 2003 32 bit, and it went smoothly. I then reinstalled with Windows 2003 R2 64 bit and started my 48 hour burn-in, which stresses the hard drives, CPU, & RAM. The application I use is PassMark's BurnInTest (http://www.passmark.com/products/bit.htm); make sure you buy the "Pro" version for 64 bit support. Then I started looking at upgrading the RAM to 4GB chips to hit the magic 16GB I had calculated I would need. I'm not sure what caused me to look, but something caught my eye: it turns out ESX 3.0 & 3.5 do not support 64 bit guest OSes without hardware-assisted virtualization (e.g. Intel's VT or AMD-V). I was shocked. Why do VMware Workstation, VMware Server, and Microsoft Virtual Server support 64 bit OSes without hardware assistance, then? The reason is performance: VMware felt performance without hardware assist was too big an issue on ESX, so the option was removed. But at least 64 bit was an option under those applications. To be honest, CPU performance is rarely the bottleneck for VMware Workstation/Server or Microsoft Virtual Server; it's I/O, which is hard drive based. So, now I'm back in search of another server. I'll probably bite the bullet and purchase a new Dell PowerEdge 2950 Series III or 2970, since I'm familiar with them. Once I get my environment started, I'll definitely share it on the blog.
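Had I known to check, finding out whether a CPU offers Intel VT or AMD-V only takes a look at its feature flags. On Linux the kernel reports them in /proc/cpuinfo ("vmx" for Intel VT, "svm" for AMD-V); here's a small Python sketch of the idea (the parsing helper is my own illustration). Note the flag only tells you the hardware is capable; the BIOS can still have the feature disabled.

```python
from typing import Optional

def virtualization_support(cpuinfo_text: str) -> Optional[str]:
    """Map Linux /proc/cpuinfo feature flags to a virtualization feature.

    The kernel reports the flag 'vmx' for Intel VT and 'svm' for AMD-V.
    """
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu vme vmx ..." -> collect everything after the colon
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT"
    if "svm" in flags:
        return "AMD-V"
    return None
```

On a Linux box you would feed it open("/proc/cpuinfo").read(); a None result means no hardware assist, and no 64 bit guests under ESX 3.x.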

As for the lesson: make sure you know exactly what is required for "64 bit" support. Is it a fully 64 bit application (as opposed to a 64 bit application with 32 bit components)? A 64 bit operating system without 32 bit legacy support? Or what I refer to as v2 of the 64 bit CPU, the virtualization feature called Intel VT or AMD-V? Or just check that it carries the "64 bit-Tested for Performance" logo; one can only hope.

Best of luck with your "64 bit" projects,
-Ben

CPUs Supporting Virtualization with the Intel VT or AMD-V Feature.
Intel's website covers this as well:
http://compare.intel.com/PCC/default.aspx?familyid=5&culture=en-US

Intel Quad-Core CPUs Series supporting VT
- Xeon 7300, 5400, 5300, 3000, X3200, LV

Intel Dual-Core CPUs Series supporting VT
- Xeon 7000, 5200, 3070, 3065, 3060, 3050, 3040, E3100 (fyi: not all 3000 series are supported)

AMD CPUs Series Supporting AMD-V (Virtualization)
- Opteron 1000, 2000, 8000
I have not found an AMD website that is as easy to use.
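The series lists above boil down to a simple lookup. A quick Python sketch (the data is just transcribed from my lists above, so treat it as a starting point, not an authoritative table; per the fyi, not every part in a listed series qualifies, so verify the exact model with the vendor):

```python
from typing import Optional

# Transcribed from the 2008-era series lists above. Caveat: some parts
# within a listed series (e.g. Xeon 3000) still lack the feature.
INTEL_VT_SERIES = {
    "Xeon 7300", "Xeon 5400", "Xeon 5300", "Xeon 3000", "Xeon X3200",
    "Xeon LV", "Xeon 7000", "Xeon 5200", "Xeon 3070", "Xeon 3065",
    "Xeon 3060", "Xeon 3050", "Xeon 3040", "Xeon E3100",
}
AMD_V_SERIES = {"Opteron 1000", "Opteron 2000", "Opteron 8000"}

def hw_virt_feature(series: str) -> Optional[str]:
    """Return the virtualization feature for a CPU series, if listed."""
    if series in INTEL_VT_SERIES:
        return "Intel VT"
    if series in AMD_V_SERIES:
        return "AMD-V"
    return None
```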
------------------------------------------

Friday, July 4, 2008

A Solid Lie on Solid State (Hard) Drives

Hello Everyone,

Happy July 4th. Quick multiple choice test: which features does a Solid State Drive include?

A) Flash-based storage
B) Rugged and reliable
C) Low power consumption
D) High performance
E) Silent operation
F) Lightweight
G) All of the Above

If you answered "G", you're wrong! It turns out low power consumption is the one incorrect item above. SSDs actually use MORE power, and so they reduce battery life on laptops compared to traditional hard drives. Many SSD vendors claim battery life is improved, but this is not the case; Crucial, as of 7/4/08, listed all of the above features as benefits of SSDs. Sadly, marketing cannot be trusted without independent tests. The folks at Tom's Hardware uncovered this little "lie" in an article called "The SSD Power Consumption Hoax: Flash SSDs Don't Improve Your Notebook Battery Runtime - they Reduce It". That title alone illuminates the issue, but dig into the article and the testing and you see performance gains of about 10% for SSDs but battery life differences of an hour (a 15-20% difference)! How is that possible? It turns out the tested SSDs don't drop into a low-power idle mode the way traditional hard drives do. When your traditional hard drive is not working, it uses very little power; not so the SSDs. They are always on. Whoops.
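The runtime hit follows directly from average power draw, since battery runtime is just capacity divided by watts. A quick back-of-envelope in Python, with illustrative numbers of my own choosing (not Tom's Hardware's measurements):

```python
def runtime_hours(battery_wh: float, avg_power_w: float) -> float:
    """Battery runtime is simply capacity divided by average draw."""
    return battery_wh / avg_power_w

# Assumed, not measured: a 56 Wh battery, a notebook averaging ~14 W
# with a spun-down hard drive, and the same machine drawing ~2.5 W
# more because the SSD never idles.
hdd_hours = runtime_hours(56, 14.0)   # 4.0 h
ssd_hours = runtime_hours(56, 16.5)   # ~3.4 h
loss_pct = 100 * (hdd_hours - ssd_hours) / hdd_hours
print(f"SSD cuts runtime by {loss_pct:.0f}%")   # roughly 15%
```

A couple of watts of extra constant draw is all it takes to land in the 15-20% range the article measured.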

Also, the tests would have looked a lot worse for SSDs if Tom's Hardware had picked a 4200 or 5400 rpm drive; they used a power hungry 7200 rpm drive, and the difference was still as clear as night and day. Don't select SSDs for power savings, but for durability and speed. I was considering an Mtron SSD back in Feb 08 for performance and power reasons for my ultra-portable semi-rugged Panasonic Toughbook W5. Luckily, I decided against it, since I wanted a 64GB SSD and they were still too costly. Mtron does make excellent SSDs, so I would recommend folks look into them if battery life isn't an issue.

Summary of SSDs
- overall, for improved laptop performance while keeping battery life acceptable, I would stick with 7200 rpm HDs (only about a 10% performance difference)
- battery life is a serious issue for SSDs (a 15-20% loss) right now; I suspect the Tom's Hardware article will draw attention to the issue, and the next generations (6-12 months) will improve dramatically
- be aware there are 2 types of SSD flash, SLC and MLC; SLC is faster and more durable, so stick with SLC for the moment if you're considering SSDs

Comments and feedback are welcome...
-Ben