Sunday, December 28, 2008

Favorite URLs for my Exchange & Admin Work

Ehlo All,

I figured I would share the websites I use for my Exchange and other admin work so other admins can benefit. So, enjoy. If you have comments or suggestions for other sites, let me know in the comments.


Wednesday, December 10, 2008

Separate Exchange Server 2007 Rollups Exist for the SP1 & non-SP Versions

So Everyone,

Question: which is the latest Update Rollup for Exchange Server 2007, 5 or 7? That came up last night at the Exchange User Group meeting, where we had a discussion of the latest update for Exchange Server 2007. Some folks said Rollup 7 was the latest, and I said Rollup 5 was the latest. Turns out everyone was right. How can that be, you may ask? Exchange Server 2007 is a DIFFERENT product than Exchange Server 2007 SP1, so they have different updates. Which explains why the tech who installed a rollup for non-SP1 broke his SP1 Exchange. He smartly took a snapshot before upgrading and disconnected the server from the network, so he quickly reverted to the pre-patch state. So, pay attention to which updates are for your version of Exchange.

As of December 10, 2008, here are the latest updates for each version of Exchange Server 2007. I don't know why folks would stay on the base version of 2007 given all the gains in SP1, but obviously many folks are, and MS is paying attention to them.

Exchange Server 2007 - Update Rollup 7

Exchange Server 2007 SP1 - Update Rollup 5 (not on Windows Update yet; it will be posted in 1-2 weeks, so for now you have to know about it to download it)


Tuesday, December 9, 2008

NYExUG Member Review of PowerShell Book

At our last NY Exchange User Group meeting, which was focused on PowerShell and Exchange and presented by PowerShell MVP Brandon Shell, Manning Publications provided us a copy of their Windows PowerShell in Action book to review. One of the winners of the book provided me this review.


Here is my review of the PowerShell book:

There are usually two types of sysadmin who script: one who will find a script online, modify it for their needs (hopefully test it), and deploy it, and one who will figure out all the methods and procedures and write a script from scratch. This book, while it says it is geared toward beginners, is really written for the latter type of sysadmin. The book does a wonderful job of explaining in detail how the pipeline works, but most sysadmins would want a more finished example of doing something versus how the pipeline itself works.
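For what it's worth, the pipeline idea the book spends so much time on is really about streaming objects one at a time, rather than collecting everything first. Here's a rough sketch of that idea, purely illustrative (written in Python, with made-up stand-ins for Get-Service and Where-Object):

```python
# A rough sketch of the pipeline idea: each stage yields objects
# one at a time, so the next stage starts before the first finishes.
def get_services():
    # Stand-in for Get-Service (the service list is made up)
    for name, status in [("Spooler", "Running"),
                         ("W32Time", "Stopped"),
                         ("MSExchangeIS", "Running")]:
        yield {"Name": name, "Status": status}

def where_status(items, status):
    # Stand-in for Where-Object { $_.Status -eq $status }
    for item in items:
        if item["Status"] == status:
            yield item

# Equivalent of: Get-Service | Where-Object { $_.Status -eq "Running" }
running = [svc["Name"] for svc in where_status(get_services(), "Running")]
print(running)  # ['Spooler', 'MSExchangeIS']
```

The point the book is making is that nothing gets buffered: each object flows through every stage before the next object is even produced.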


Sunday, December 7, 2008

In pursuit of my Exchange 2007 deployment - the ugly of VMware ESXi 3.5 and hard drive resizing for Windows Server 2008

Hello All,

How do you resize partition sizes in VMware ESXi 3.5 (version 110271, VI 2.5 build 103682, VCenter 2.5.0 build 104215) and Windows Server 2008 (these were MBR and Simple volumes)?

1) VI client built-in functionality (VI 2.5 build 103682)
2) DISKPART (as included with Windows 2003)
3) GParted (gparted-livecd-0.3.4-11.iso)
4) VMware Converter (3.0.3 build 89816)
5) vmkfstools (VMware ESXi 3.5, version 110271)
6) none of the above. Re-install the OS.

Spoiler alert... this does NOT have a happy ending. The answer is 6. If you think you can prove me wrong with the versions above, please let me know how.

If you're a regular reader of my blog, skip to the next paragraph, since this is an executive summary for the non-regulars. I'm in the process of finally upgrading to Exchange 2007 from 2003. I've decided to make it a challenge and do it on a platform I have no experience with, Windows Server 2008. All my clients and all my servers are on Windows 2003 or a variation (e.g. R2, x64, etc.). To make it even more difficult, even though I have extensive experience with VMware Server and Workstation, I'm running it under VMware ESXi 3.5 [embedded & w/Virtual Center]. ESXi is a different beast than other VMware products. It contains a WHOLE lot more functionality, especially when you add Virtual Center.

After a few months of planning around desired functionality & redundancy, hardware sizing, migration plans, and a number of other factors, I took the plunge into testing. So, I created my base OS images of Windows 2003 (x32) and Windows 2008 (x64) to test for 1 month, which turned into 4 months. After I was done testing installs and configurations, I deleted all those images and started again. Then the party began.

So, I set up base OS images with 30GB partitions, since I was planning on adding a minimum of 14 images & 12 virtual machines (aka VMs). Space is a concern since storage is 6 x 300GB SAS 10k hard drives in direct attached storage (DAS) in a RAID 10 config. Problem was I mistakenly assumed 30GB would be enough for the 2008 system partition. I'm blogging about this, so obviously it was not, and the attempted fix wasn't a quick 1-2-3 or even a 2-3 hour fix.

Attempt #1 - How VMware hard drive resizing is supposed to work
So, after making my 2008 partition size 30GB, I figured the beauty of VMware would show off its skills. Similar to VMware Server & Workstation, you right-click, enter the new larger hard drive size, click OK, and a few seconds later it would be increased to 50GB. No such luck. Attempting this resulted in ESXi simply ignoring my request to enlarge the disk. No error, nothing. It stayed at 30GB. A bit of research indicated that Windows Server 2008 is NOT supported for hard drive increases via the VMware Infrastructure Client (aka VI). So, I looked into the other options.

Attempt #2 - Diskpart
This article talks about DISKPART, but all the screenshots are 2003 related, so I didn't even bother; since that version of diskpart shipped with Windows 2003 SP1, I figured it was probably a dead-end solution. (Diskpar was Windows 2003 related; diskpart was 2003 SP1 related.) You can find a good article about Exchange 2003 and aligning your partitions for database storage here.

Attempt #3 - GParted
As per attempt #2, that same TechTarget article talks about using an open-source partition tool. I uploaded the ISO to the datastore, did the booting, and attempted to grow the partition to 50GB. It reported no additional space except for one more 1MB. Obviously, that isn't correct. Next....

Attempt #4 - VMware Converter
I tried this 2 different ways, since some sites, including the above TechTarget article, talked about running VMware Converter directly on the server whose hard drive partition you are attempting to resize, and also via a remote server. In my case I attempted it via Virtual Center. Both ways failed with the same error: "Unable to determine Guest Operating system". Problem: Windows Server 2008 is not supported. Oops. Next attempt.

Attempt #5 - vmkfstools
Run a single command to enlarge your virtual disk ("vmkfstools -X 50g ExchangeSrv.vmdk" ["50g" is the size you want, 50GB, and "ExchangeSrv.vmdk" is the server's disk file]) and all your troubles go away. Or that's how all the documentation sounded. The problem is, you can't run this command via VI or a command line on ESXi out-of-the-box. A number of advanced ESX functions require running command-line commands, and the only way to do this with ESXi is to enable console access. This is NOT supported on ESXi by VMware, so use your judgement (similar warnings apply as when using regedit). To gain console access, I found a handy blog posting that gave a summary of how to enable SSH access for ESXi (I found it better than the VMware KB tech article). This is when I had to remember my days back on AIX with the vi editor, since you need to modify a configuration file and it's a bit tricky. You'll need to google "vi" if you need help. I then had ssh root access to my ESXi server.

So, I ran the command, then issued an "ls -l" and noticed the server's vmdk had increased to 50GB. Awesome, I figured I was done. Nope. I then had to run the process from attempt #3, GParted. I ran that, and it saw the extra 20GB and enlarged the partition. Almost there. Until I rebooted the server and it failed to boot, with an error of "winload.exe is missing or corrupt." I started the repair process and then realized I didn't want a base image built off a corrupt partition or possibly damaged files. So, at this point I scrapped the process and did a re-install of 2008.

The best part of Windows Server 2008 is I can install it in under 40 minutes on the hardware I'm running, which makes re-installs very fast. Also, I recommend you make OS partitions at least 40-50GB. I plan on using a secondary partition for Exchange databases, but I still wanted extra space on the 1st partition. Here are the Microsoft Windows Server 2008 System Requirements. So, I have my 50GB OS partition and a 2nd for the Exchange databases. Now back to installing Exchange.

Good site to explain VMware's resizing of Virtual Disks (primarily for Windows 2003)

Side Notes
- Windows Server 2008 annoyance: why is hibernation enabled on a server OS? I quickly noticed storage was being used up faster than it should be, and found out a hibernate file is created in the root of c:\. To delete it and disable this "very useful" [note sarcasm] server feature, open a command prompt and enter "powercfg -h OFF". It'll delete the file and disable the feature.

- thin provisioning capability (meaning: create a 50GB partition and have it dynamically grow, so you only use what you need). The only downside is a reduction in performance, and it's only available with SAN-based storage. I'll need to explore that in the future.

- I tend to recommend folks avoid Dynamic Disks and GPT-based partitions unless there is a requirement for them, since compatibility can easily become a problem with these types of disks. GPT is used when you want partitions over 2TB; Dynamic Disks are for online resizing (I think) and use of OS-based RAID types.

Until next time,

Tuesday, December 2, 2008

Emptying Those Pesky OLK Folders (Outlook design flaw)

Hello All,

This came up on the New York Exchange Server User Group (aka NYExUG) Google Group mailing list. See below to download the script to empty those pesky Outlook 2003 OLK folders.

Down Here: Magical Script to Purge the Outlook OLK Folder

Background on the issue: when an Outlook 2003 user (not sure if Outlook 2007 has the same problem) opens an attachment, Outlook creates a cached copy (with a similar name plus a few digits) in a hidden OLK folder (random folder name) in the user's profile. Now, the issue is, if the user keeps opening files with the same name (e.g. a scanner creates PDF files named "Scanned.pdf"), eventually the digits run out and the user will not be able to open the new file due to a same-file-name error. Of course, the error message isn't so clear cut. The reason I call this a design flaw is that there isn't a way to clear the Outlook "cache" without doing it manually. I've looked for Group Policy adm add-ons and a number of other solutions, but in the end a co-worker of mine at REEF Solutions found and configured a script to empty all OLK folders on startup. Now, we simply place that script in the Startup folder, and "BANG!" every login solves the problem. I'm sure there's a better way to deploy it, but this was a quick and easy solution. If you know how to handle this via GP, post it in the comments.
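For the curious, the deployed script is just a simple batch file, but the core "empty every OLK cache folder" logic fits in a few lines. Here's a rough Python sketch of the same idea (the OLK* folder-name pattern and the Temporary Internet Files location are assumptions for illustration; the real random folder name varies per user):

```python
import os
import tempfile

def purge_olk_folders(profile_root):
    """Delete cached attachment copies from any OLK* folders under
    the given profile root. The real folder name is random per user
    (e.g. ...\\Temporary Internet Files\\OLK1A) -- an assumption here."""
    removed = 0
    for dirpath, _dirnames, filenames in os.walk(profile_root):
        if os.path.basename(dirpath).upper().startswith("OLK"):
            for name in filenames:
                try:
                    os.remove(os.path.join(dirpath, name))
                    removed += 1
                except OSError:
                    pass  # file still open in Outlook; skip it
    return removed

# Quick self-test against a fake profile layout
root = tempfile.mkdtemp()
cache = os.path.join(root, "Temporary Internet Files", "OLK1A")
os.makedirs(cache)
open(os.path.join(cache, "Scanned (1).pdf"), "w").close()
print(purge_olk_folders(root))  # 1
```

Skipping files that can't be deleted (the try/except above) keeps the script from erroring out if Outlook happens to be holding an attachment open.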

Until next time.

Friday, November 7, 2008

Feedback on my 1st Exchange 2007 Install running on Windows 2008

Hello Everyone,

So, I decided to really challenge myself. Not only had I never used or installed Exchange 2007, but I decided to do it on an OS I had no experience with, Windows Server 2008 (I have a test Vista laptop but rarely use it). So, this was a completely new beast to me. Overall, even though Exchange 2007 has a brand new interface, requires more work for user creation, has more complicated install requirements, & Microsoft pushes you to learn PowerShell scripting, Exchange 2007 has a well laid out configuration interface, and the use of wizards makes learning this product easier than Exchange 2003. Yes, you heard that right. This is easier to set up and manage than Exchange 2003 out of the box. Anyone who is familiar with Exchange 2003 should not have an issue learning Exchange 2007. The learning curve is a lot easier than expected. Remember, I had never seen Exchange 2007 nor used Windows 2008, and I was able to get the Exchange Server functionality up and running fairly quickly. Good job Microsoft!

My complaints after seeing it for the first time are minor: copying text from dialog boxes is limited or not possible, the fancy GUI wastes CPU processing [View -> Visual Effects -> Never resolves that], and wizards take longer to execute than the old ESM or setting the configuration yourself. But Microsoft has made your life a lot easier with a one-stop article (below) to install it. So, would I recommend everyone go out and upgrade? No. I'll address who should upgrade in a future post.

Well, it's taken me a while to finally get to the point of testing Exchange 2007, since performance has not been an issue so far with any of the Exchange 2003 environments I've worked on (except maybe right before a hardware upgrade). Which is a compliment to Exchange 2003: it's a great product, rock solid and scalable when you properly size your Exchange Server.

So, I'm testing Exchange 2007 SP1 (remember, SP1 is the full software code, so you don't need to install the base version and then SP1; just install the full version via SP1) on the following:
- Windows 2008 Server x64
- OS running under VMware ESx 3i 3.5
- VM configured for 2 CPUs (up to 3GHz/CPU), 8192MB RAM, a 30GB OS partition, and a 240GB Exchange install partition [not needed, but I plan to deploy my "production" environment in this configuration]
- non-internet based network (in VMware speak, it's called a "virtual switch")

I recommend you read the following Microsoft article for the pre-requisites for Windows 2008 or Vista. This article is excellent and includes the command-line code needed to load the necessary software (e.g. Roles, Features, etc.).

This URL is a gold mine of information. Save this!!!

When I ran the typical install of Exchange Server 2007 SP1 (all roles minus Unified Messaging on a single server) on Windows 2008 Server, after running the pre-requisites as per the Microsoft article above, I received the following error:

The Active Directory Schema is not up-to-date and Ldifde.exe is not installed on this computer. You must install Ldifde.exe by running 'ServerManagerCmd -i RSAT-ADDS' or restart setup on a domain controller.

Turns out I had missed the following command (on the above URL), which I promptly ran on the planned Exchange server before rebooting. My AD is a Windows 2003 Native Mode environment.

C:\Users\administrator.domaintest>ServerManagerCmd -i RSAT-ADDS

Start Installation...

[Installation] Succeeded: .
[Installation] Succeeded: [Remote Server Administration Tools] Active Directory Domain Services Tools.

Warning: [Installation] Succeeded: [Remote Server Administration Tools] Active Directory Domain Controller Tools. You must restart this server to finish the installation process.

Warning: [Installation] Succeeded: [Remote Server Administration Tools] Server for NIS Tools. You must restart this server to finish the installation process.

Success: A restart is required to complete the installation.



Next issue was the following SMTP detection warning. The answer is to create a "Send Connector" as per .

Hub Transport Role Prerequisites

Setup cannot detect an SMTP or Send connector with an address space of '*'. Mail flow to the Internet may not work properly.

Elapsed Time: 00:00:14


On to the install. When the Exchange setup runs its pre-requisite checks, it attempts to connect to Microsoft for the latest requirements. Since there is no internet access, it fails, but the failure isn't reported; the setup dialog just continues. Technically you could disable this auto-internet check using ExBPA and configuring some xml files, but I don't think it's worth the time.

The process took 50 minutes to complete. No errors reported. The next steps are presented by the Exchange Management Console. Here are the more important configuration steps to get up and running, in order of importance:

- configure domains for which you will accept e-mail
- configure internet mail flow
- configure the E-mail Address Policies (formerly known as Recipient Update Policy) to automatically change all your users' "from" address
- [optional/recommended] configure OAB public folder distribution for Outlook 2003 and earlier
- [optional/recommended] configure SSL for CAS (Client Access Server)
- [optional] configure ActiveSync
- [optional] configure offline address book (OAB) for Outlook 2007
- [optional] configure an external postmaster recipient to receive mails from our systems (e.g. NDRs, etc.)

I performed the following:

"Configure Domains for which You Will Accept E-mail"

Clicking on the link inside the wizard pointed me to the correct location, and then I selected the "New Accepted Domain" Action. You type the "Accepted Domain", which is your email domain, and then you probably want to leave it as "Authoritative Domain" (if you don't know what this means, this is most likely your correct setting). The other 2 options are Internal Relay Domain and External Relay Domain. Then you're done; the wizard runs the command, which took 15 seconds on my server.

After completing the above, you probably want to make that domain your default. So, highlight the domain you added, and on the Actions, click "Set as Default".

Now you can receive email, assuming you've configured your firewall and DNS for this domain, but you still need to be able to send email.

Next step - configuring sending email. In Exchange 2007 speak, "Configure internet mail flow"

Exchange Management Console -> Organization Configuration -> Hub Transport -> Send Connectors tab -> click "New Send Connector..." Actions.

Now you have 4 options
Custom - for sending via non-Exchange servers (e.g. relay servers, your SMTP gateway server, etc.)
Internal - for sending email to other Exchange servers
Internet - use DNS to route email out, connecting to the destination domains' servers directly
Partner - for sending to domains with TLS encryption that are listed in the "domain-secure domains" list

I selected Custom, and for the Address space listed the accepted domain as entered above, and left all other settings as is. Under Network settings you choose either "Use domain name system (DNS)", which means your Exchange Server will communicate with a variety of other servers on the internet, or "Route mail through the following smart hosts". I selected a smart host and entered its LAN IP (I never allow my Exchange Server to communicate on the internet; all mail, inbound and outbound, is routed via another SMTP server). Under "Configure smart host authentication settings", I left this at "None" since I whitelist the Exchange Server on the SMTP relay server. "Source Server" lists this Exchange Server.

Configuring the E-mail Address Policies to change the "from" address
Organization Configuration -> Hub Transport -> E-mail Address Policies -> right-click Default Policy and select Edit.
- add an additional domain entry under "E-Mail Addresses". This is typically your new accepted domain.
- I left the default "E-mail address local part" to "Use alias"
- check "Select accepted domain for e-mail address:" and Browse and select domain used above for "new accepted domain"
- highlight newly added SMTP e-mail address, and select "Set as Reply". It should become bold now.

Adding your First Email User Account
- Now this is back to the good ol' days of Exchange 5.5, somewhat, but not as bad as when Exchange 2007 was first released. You can add the user account in AD, then head over to the EMC (Exchange Management Console in 2007, formerly ESM, Exchange System Manager, in 2003) and, under Recipient Configuration -> Mailbox, select New Mailbox... under Actions. You want the basic User Mailbox (there are numerous other options). For User Type, select Existing Users and pick the user(s). Select the Mailbox database and an "Exchange ActiveSync mailbox policy" if you plan to use that, and then click Next. Or you can have EMC create the AD account, then go to the container called "Users" and move it to the correct OU. Hopefully SP2 or an update will allow you to select the OU in which to place the user(s) being created.

And that's it. I logged into Outlook Web Access, and thanks to Microsoft for loading an SSL certificate: out of the box, OWA can be secure & support forms-based authentication (a major difference from 2003). Some screen shots of OWA and OWA Light. Enjoy.

Initial OWA Login Screen

Your 1st Login Screen and Prompt to Set Time Zone and Language - this is an improvement over Exchange 2003/2000, where the end user had to know to click Settings and set this information.

Logged in OWA on Exchange 2007 running IE 7

Logged in OWA Light on Exchange 2007 running IE 7. This would be similar to Firefox, Safari, and other non-IE browsers.

Comments, feedback, etc.

Monday, October 20, 2008

How To Article on Virtualization for Exchange 2003 & Office Communications Server Support for the BlackBerry

This is a how-to article on the process of virtualizing Exchange 2003 with Hyper-V, written by an Exchange MVP, Brien Posey. Good walk-through of the process. And, since this is Hyper-V vm'ed, this is supported by Microsoft Support.

And BlackBerry now supports Office Communication Server 2007.


Sunday, September 21, 2008

Continuing on our NYExUG Meeting - PowerShell Education

Hello Everyone,

For those who attended the September 2008 NY Exchange User Group meeting, we had a great intro to PowerShell by Brandon Shell (a PowerShell MVP). If you missed it or are interested in learning more Exchange-specific PowerShell, see this SearchExchange article. It's a good start and provides many examples. The article was written by Brien M. Posey, MCSE (& former Exchange MVP).

SearchExchange's Primer on PowerShell for Exchange.


Thursday, August 28, 2008

New Formula: Exchange + Virtualization = Microsoft Support

Yes, the new formula is correct. And I did not expect to see this so soon. Microsoft has announced that you can run Exchange 2003/2007 in a Virtual Server 2005 R2/Hyper-V virtualized environment and get Microsoft Professional Support Services. Here is the official Microsoft release about this. There are details for each configuration (e.g. all 2007 roles are supported except Unified Messaging), but this is a step in the right direction.

Also, Windows ITPro's article on this mentions that Microsoft has created a Server Virtualization Validation Program (SVVP) and that VMware is in the process of attempting to achieve certification. This is not a show-stopper in my eyes, since VMware already supports clients running Windows & Exchange on its platform.


Saturday, August 23, 2008

First Experiences with VMware 3i

Overall, I love it. I might start drinking the VMware 3i koolaid. And if you're familiar with VMware Server, this will be an easy transition. If you are new, you'll need to hover over icons until you remember what they are; it's pretty easy though. If you don't know, 3i is now free. 3i is a slimmed-down version of 3.5 which cuts the multi-host type features (e.g. VMotion, Update Manager, etc.).

So, I finally had some time today, and after reading all the documentation that came with my new server (Dell PowerEdge 2900 III) and inspecting the inside & removing the USB flash drive (Kingston 1GB) I fired it up. And yes, I normally read all documentation before I start to use a product. And boy, do I like the purr of a dual quad core, six 15k hard drive, & dual power supply server. After about 30 minutes, I actually turned off the music since it was bothering me and listened to the humming of the server.

Some notes on the 3i setup on my Dell. I quickly ran through the BIOS configuration and realized that even though I had paid Dell to pre-load 3i on this server (read: I don't have enough time in my life with my new daughter), they did not enable the internal USB port to allow VMware 3i to boot, and in the CPU settings, VT was disabled. If you recall from my previous blog posting, Intel's VT or AMD-V is a requirement. Nice touch, Dell. After that, I booted it up and 3i just loaded. I changed the root password, set the DHCP IP to static, and then used another PC's web browser to download the VMware Infrastructure Client (aka VI Client), which is used to manage your ESx host. The only aspect of 3i I had to configure was storage. So, I gave all 6 hard drives in a RAID 10 configuration to 3i (the file system is called VMFS, and I set the block size to 1MB since I don't expect to have a single file over 256GB). It handled the formatting and everything. I now had 836GB of space.
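Side note on that 836GB figure: it lines up with the usual marketing-GB vs. binary-GB gap. RAID 10 across six drives leaves three drives' worth of usable space, and "300GB" drives are decimal gigabytes while VMware reports binary ones. A quick sanity check (attributing the small remainder to VMFS overhead is my assumption):

```python
# Sanity-check the usable space: RAID 10 mirrors pairs, so only
# half the spindles count toward capacity. Drive makers use
# decimal GB; VMware reports binary GiB.
drives = 6
drive_bytes = 300e9                       # "300GB" in vendor (decimal) bytes
usable_bytes = (drives // 2) * drive_bytes  # RAID 10: half the spindles
usable_gib = usable_bytes / 2**30
print(round(usable_gib))  # 838 -- close to the 836GB reported,
                          # the rest presumably VMFS overhead
```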

After that, the first OS to install on my new VMware server was Windows 2008 Standard 64-bit. You run through adding a new virtual machine; I selected one CPU, 1GB of RAM, and 20GB of hard drive space, placed the install CD in my local PC, and selected "Connect DVD" so my local DVD/CD drive automatically appeared on the "server vm". I was running this via a 10/100 network, and the install proceeded very quickly compared to Windows 2003. Keep in mind, I had never installed 2008 before. The install went very smoothly; no issues on install or setup. But, little did I know, the new server OS takes over 10.5GB of space. Holy smokes. I guess I'll be re-installing Windows 2008 again. Oh well.

The management console shows quite a number of performance related statistics (e.g. overall memory usage, network, hard drive, etc) for all virtual machines. Like I initially said, anyone with any VMware Workstation/Server experience will feel right at home, otherwise it's still fairly easy to get around. I'll post again once I dig deeper in the product.


P.S. Comments or feedback is always welcome.

Monday, August 18, 2008

Virtualization Performance is better than you think for Exchange Server

Hello Everyone,

The common thought is Exchange Server does not get virtualized. But, I'll tell you what, Exchange should be virtualized. Applications that are mission critical should be protected using a number of backup, high availability, and fail-over type solutions. I consider virtualization a method of fail-over.

The biggest concern I frequently hear about Exchange is performance (after complexity - I would disagree on that one). Well, the performance difference between virtual and physical environments (at least on VMware ESx; I haven't seen performance benchmarks of Hyper-V) is a lot closer than one would expect when properly configured (I don't want to hear about the single-SATA-hard-drive configuration you are running with 1GB of ram). I'll summarize the technical details of a performance test of Exchange 2003 on VMware ESx 3.0 over fibre channel on Dell/EMC hardware (all 32-bit, 2GB of memory only). The URL for the report is here in PDF.

Close Ball-Game
1) a single virtual CPU could obtain 76% of the performance of a physical CPU (clocking 1300 heavy user profiles with acceptable performance using LoadSim, Microsoft's Outlook/Exchange testing tool)
2) 2 virtual CPUs could obtain 71% of the performance of a 2-physical-CPU solution (support for up to 2200 heavy user profiles)
3) CPU utilization - a 30% difference in utilization, but not an issue. Exchange is not a heavy CPU user. It's more important to focus on I/O and memory.
4) VMware's memory sharing technology did not show any performance degradation.

One of the biggest surprises for me was that the VMware memory sharing technology had no effect. I'll be taking a closer look at this in other benchmarks and personal testing, since it's hard to believe there was no difference. Just to re-cap the memory sharing technology: if you run 4 virtual machines (aka VMs) with Windows 2003 Server, you're running many of the same services (e.g. netlogon.exe, explorer.exe, etc.) which consume the same memory, so VMware does "single instance" type memory sharing between all 4 VMs.
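To make that "single instance" idea concrete, here's a toy sketch (Python, purely illustrative) of content-based page sharing: hash each guest page, and identical pages across VMs only need one physical copy. Real ESX does this at 4KB-page granularity with copy-on-write; the VM count and page contents below are made up:

```python
import hashlib

def shared_footprint(vms):
    """vms: list of VMs, each a list of page contents (bytes).
    Returns (total_pages, unique_pages) -- unique is what the host
    actually has to back with physical memory."""
    seen = set()
    total = 0
    for pages in vms:
        for page in pages:
            total += 1
            seen.add(hashlib.sha1(page).hexdigest())  # content hash
    return total, len(seen)

# Four Windows guests running mostly the same binaries...
common = [b"netlogon" * 512, b"explorer" * 512]  # identical across VMs
vms = [common + [("vm%d-data" % i).encode() * 100] for i in range(4)]
total, unique = shared_footprint(vms)
print(total, unique)  # 12 guest pages, but only 6 unique copies needed
```

So in this toy case the host backs 12 guest pages with 6 physical ones; the benchmark result suggests the real-world savings (or cost) was smaller than I expected.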

To summarize, if you have the I/O capabilities and want to improve your business continuity solutions, I would consider looking into this further. I wouldn't just count this out. And I plan to run my Exchange Server in ESx very soon. Hope to see you there...


Monday, August 11, 2008

Serious DNS Vulnerability (Kaminsky) Can Affect Email Services

Hello All,

This is the beginning of shorter posts, but more often.

This recently released serious DNS vulnerability (found by Kaminsky) can affect email services: while hackers are spoofing DNS for web site attacks, the same could be done for email attacks. See the US-CERT posting for an overview of the issue. This affects dozens of DNS implementations, including Windows DNS.

Official US-Cert Posting on the DNS Vulnerability

An Illustrated Guide to the Problem

There are some discussions about the best approach to fix it (e.g. DNSSEC, increasing Query ID randomness, randomizing the source port, IPv6, SSL, etc.). At the moment, the easiest fix is to randomize the query ID and the source port. For your servers (at least for Windows Server DNS), use the root hints included in the operating system. "Man in the middle" attacks are a lot more common and dangerous than people realize, hence why I prefer using my EVDO card over some random WiFi hotspot. Stay safe.
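A bit of back-of-the-envelope math shows why source-port randomization matters so much: the attacker has to guess every random field in the spoofed reply, and the 16-bit query ID alone is a tiny keyspace. A quick sketch (the 64,000-port ephemeral range and the 100-packet race are my assumptions for illustration):

```python
# Spoofing odds: attacker must match the query ID (16 bits), and,
# once ports are randomized, the source port as well.
query_ids = 2**16
source_ports = 64000   # rough usable ephemeral-port range (assumption)

guesses = 100          # spoofed replies the attacker races in per query
odds_id_only = guesses / query_ids
odds_with_ports = guesses / (query_ids * source_ports)

print("%.4f" % odds_id_only)     # 0.0015 per race with query ID alone
print("%.0e" % odds_with_ports)  # 2e-08 once ports are randomized too
```

That factor-of-64,000 drop per race is why the interim patches pushed port randomization even before DNSSEC.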


Tuesday, July 22, 2008

When is 64 bit not really 64 bit?

Hello Everyone,

My journey, and I can call it that, toward deploying a test environment (running multiple OSes, 32 & 64 bit) based on VMware ESx 3.5 has taken a LOT longer than I expected and has taught me a thing or two about 64-bit. It also illustrates the bigger problem affecting the uptake of "64 bit" products, and an idea that, if adopted, could hopefully help dramatically increase admins' buy-in to "64 bit".

There is a lot of talk about 64-bit applications, 64-bit operating systems, 64-bit hardware, and all the wonderful things they bring. Well, that might be true, but the current software & hardware state of affairs is that 64 bit is not really 64 bit, and that can cause a lot of headaches, delays, and stress. Most vendors do not seem to be ready for it. Many "64 bit" applications actually run as 32-bit code under a compatibility layer (WoW64), a software package might have some 64-bit apps but not all, there's often no clear statement of what level of 64-bit hardware is required, and so on. Why? My guess is a lack of clear standards and documentation. There is an answer though! 64 bit needs an easy-to-understand marketing association that can assure end users that when something says 64 bit, it really is. Something similar to the hugely successful compatibility standard called Wi-Fi (aka 802.11b). When someone bought a Wi-Fi card from one manufacturer and an access point from another, there was no doubt it would work. Why? The Wi-Fi Alliance tested for compatibility and worked with vendors to ensure everything "just worked". This is what we need for "64 bit" if we think it's going to take off. Otherwise, everyone will stick with "32 bit" until it simply does not exist anymore, and at this rate that will be a long time. So, Microsoft, Intel, AMD, IBM, etc. need to create a 64-bit alliance, or call the Wi-Fi Alliance for some marketing assistance. Let's call it "64 bit - Tested for Performance" with a check-mark. Plaster that logo on everything that is fully 64-bit developed & supports Intel's VT or AMD-V.

Now, back to the story of my ESx journey, which started around January 2008 with no end in sight yet. The first step for VMware ESx 3.5 or Hyper-V 1.0 is a foundation based on a quality server. I considered a generic box, but realized that ESx has a compatibility list (aka Systems Guide) for servers, due to drivers, since it installs on bare metal (no Windows here, folks). So I strongly, strongly recommend you read that to ensure the server, CPU, and RAID card are all compatible. Hyper-V's requirements are a lot easier: if Windows 2008 installs and the CPU supports Intel VT or AMD-V, you *should* be good to go. I've included below all the CPUs that support virtualization.

On to my VMware saga. The first server I attempted to use was an IBM x346. I purchased all the components (e.g. dual core CPUs, 16GB of RAM, 6 x Fujitsu U320 300GB 10k HDs) and was ready to start the process. To make a long story short: the IBM server does not like end users installing non-IBM hard drives, and a serious bug that causes HDs to go offline, affecting an entire line of RAID controller cards, prevented the use of the server. I did quite a bit of research, and in the end the only answer I found was to replace the HDs with the same Fujitsu model # HDs but IBM branded at $1300/each (about $1k more than I paid per drive), or use other IBM branded HDs. Another major flaw in the design of the IBM 7k RAID card is the need to boot an IBM RAID CD to gain access to almost any functionality (putting a HD back online, replacing it, config changes, etc). This is a horribly slow process compared to Adaptec, Dell, or HP cards, where you simply enter the card BIOS with a key combination and make all your changes there. I was not impressed with my first foray into IBM servers, so I will not be upset if it's my last. Goodbye, IBM servers.

Further reading on my battle with the IBM x346 Server.
To read the issue in more detail, check this out. I'm Benx346 if you're in doubt.
A "friend" suggested that I try replacing the IBM RAID card with an Adaptec (on the VMware compatibility list) and then connecting the new RAID card to the backplane connector with a replacement SCSI cable. Well, a month or so later, I can report that getting SCSI cabling companies to custom-manufacture a SCSI cable simply isn't possible unless I plan to buy 1000 units. Feel free to look at the rare SCSI connector (micro centronics 80 female SCA) I was dealing with.
Contest Alert: anyone who can find me an 18" micro centronics 80 female SCA to LVD high density 68 pin male cable with Ultra 320 data speeds will get a $100 reward.

The 2nd server was a Dell PowerEdge 2850, which supports 2 x dual core 64 bit 2.8GHz CPUs, 6 x Ultra320 SCSI HDs, and 16GB of RAM. The unit arrived dead on arrival. The vendor only had one unit, since it was off lease, and ended up crediting me back for it. Still no server after 6 months. That's one of the major downsides of eBay: things frequently take longer.

The 3rd server was a Dell PowerEdge 2850 from another vendor I found via eBay, and this one arrived operational (there are many companies that resell servers coming off corporate leases). I took the 6 HDs from the IBM x346, installed them, and configured them for RAID 10. I moved over 12GB of the memory (only 6 slots versus the 8 on the IBM) and powered it up. Still good and very fast. So I installed Windows 2003 32 bit, and it went smoothly. I then reinstalled with Windows 2003 R2 64 bit and started my 48 hour burn-in, which stresses the hard drives, CPU, & RAM. The application I use is PassMark's BurnInTest; make sure you buy the "Pro" version for 64 bit support. Then I started looking at upgrading the RAM with 4GB modules to hit the magic 16GB I had calculated I would need. I'm not sure what caused me to look at this, but something caught my eye: it turns out ESX 3.0 & 3.5 do not support 64 bit guest OSes without hardware-assisted virtualization (e.g. Intel VT or AMD-V). I was shocked. Why do VMware Workstation, VMware Server, and Microsoft Virtual Server support 64 bit guests without hardware assistance, but ESX doesn't? The reason was performance: VMware felt software-only 64 bit was too big a performance issue, so it was removed from ESX. But at least 64 bit was an option under those other applications. To be honest, CPU performance is rarely the bottleneck for VMware Workstation/Server or Microsoft Virtual Server; it's I/O, which is hard drive based. So now I'm back in search of another server. I'll probably bite the bullet and purchase a new Dell PowerEdge 2950 Series III or 2970, since I'm familiar with them. Once I start my environment, I'll definitely share it with the blog.
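For what it's worth, the "magic 16GB" figure came from simple addition. Here is a hypothetical sizing sketch; every guest size and the overhead allowance below are illustrative assumptions, not the numbers from my actual plan:

```python
import math

# All figures below are illustrative assumptions.
guest_ram_mb = {
    "dc": 1024,        # domain controller VM
    "exchange": 4096,  # Exchange test VM
    "sql": 4096,       # SQL test VM
    "file": 2048,      # file server VM
    "lab": 2048,       # scratch/lab VM
}
hypervisor_overhead_mb = 800   # rough allowance for the ESX service console
dimm_mb = 4096                 # the 4GB modules mentioned above

total_mb = sum(guest_ram_mb.values()) + hypervisor_overhead_mb
dimms_needed = math.ceil(total_mb / dimm_mb)
print(total_mb, dimms_needed)  # 14112 4 -> four 4GB DIMMs = 16GB
```

Round the total up to whole DIMMs (and ideally matched pairs for the memory controller) and you land on the RAM target.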

The lesson: make sure you know what "64 bit" support actually requires. Is it a fully 64 bit application (as opposed to a 64 bit application with 32 bit components)? A 64 bit operating system without 32 bit legacy support? Or what I refer to as v2 of the 64 bit CPU, the hardware virtualization feature called Intel VT or AMD-V? Or, one can only hope, check for a "64 bit - Tested for Performance" logo.

Best of luck with your "64 bit" projects,

CPU Series Supporting Virtualization (Intel VT or AMD-V)
Intel's Website for it as well

Intel Quad-Core CPUs Series supporting VT
- Xeon 7300, 5400, 5300, 3000, X3200, LV

Intel Dual-Core CPUs Series supporting VT
- Xeon 7000, 5200, 3070, 3065, 3060, 3050, 3040, E3100 (fyi: not all 3000 series are supported)

AMD CPUs Series Supporting AMD-V (Virtualization)
- Opteron 1000, 2000, 8000
I have not found an AMD page that is as easy to use.

Friday, July 4, 2008

A Solid Lie on Solid State (Hard) Drives

Hello Everyone,

Happy July 4th. Quick multiple question test. A Solid State Drive includes which features?

A) Flash-based storage
B) Rugged and reliable
C) Low power consumption
D) High performance
E) Silent operation
F) Lightweight
G) All of the Above

If you answered "G", you're wrong! It turns out that low power consumption is not correct (the only mistake above). SSDs actually use MORE power, and so they reduce battery life on laptops compared to traditional hard drives. Many SSD vendors claim battery life is improved, but this is not the case. Crucial, as of 7/4/08, listed the above features as benefits of SSDs. Sadly, marketing cannot be trusted without independent tests. The folks at Tom's Hardware uncovered this little "lie" in an article called "The SSD Power Consumption Hoax: Flash SSDs Don't Improve Your Notebook Battery Runtime - they Reduce It". That title alone illuminates the issue, but dig into the article and testing and you see performance gains of about 10% for SSDs, but battery life differences of an hour (a 15-20% difference)! How can that be possible? It turns out SSDs don't have an idle power mode like traditional hard drives do. So when your traditional hard drive is not working, it's using very little power; not so with SSDs. They are always on. Whoops.

Also, the tests would have been a lot worse for SSDs if Tom's Hardware had picked a 4200 or 5400 rpm drive. They used a power hungry 7200 rpm drive, and the difference was still as clear as night and day. Don't select SSDs for power savings, but for durability and speed. I was considering a Mtron SSD back in Feb 08 for performance and power reasons on my ultra-portable semi-rugged Panasonic Toughbook W5; luckily I decided against it, since I wanted a 64GB SSD and they were still too costly. Mtron does make excellent SSDs, so I would recommend folks look into them if battery life isn't an issue.
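To see why a drive with no idle state hurts runtime so much, here is some back-of-the-envelope math. Every number below is an illustrative assumption of mine, not a measurement from the Tom's Hardware article:

```python
# Back-of-the-envelope notebook runtime math; all figures are assumed.
battery_wh = 57.0        # battery capacity (assumed)
base_system_w = 14.0     # platform draw excluding the drive (assumed)
hd_active_w = 2.5        # traditional HD while seeking/reading (assumed)
hd_idle_w = 0.8          # traditional HD idling (assumed)
ssd_w = 2.0              # early SSD with no idle state: constant draw (assumed)
duty_cycle = 0.10        # fraction of time the drive is actually busy (assumed)

# The HD's average draw falls because it idles most of the time; the SSD's never does.
hd_avg_w = duty_cycle * hd_active_w + (1 - duty_cycle) * hd_idle_w
hd_hours = battery_wh / (base_system_w + hd_avg_w)
ssd_hours = battery_wh / (base_system_w + ssd_w)
print(round(hd_hours, 2), round(ssd_hours, 2))  # 3.81 3.56
```

Even with the SSD drawing less than the HD's peak, the missing idle mode costs runtime; plug in a lower-power 5400 rpm drive and the gap widens further.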

Summary of SSDs
- Overall, for improved laptop performance while keeping battery life acceptable, I would stick with 7200 rpm HDs (only about a 10% performance difference).
- Battery life is a serious issue for SSDs (a 15-20% loss) right now. I suspect the Tom's Hardware article will draw attention to the issue, and the next generations (6-12 months out) will improve dramatically.
- Be aware there are 2 types of SSDs, SLC and MLC. SLC is faster and more durable, so stick with SLC for the moment if you are considering SSDs.

Comments and feedback are welcome...

Monday, June 30, 2008

"Cheating" on an Exchange 2003 Hardware Upgrade

Hello Everyone,

I "cheated" on an Exchange 2003 hardware upgrade 2 weekends ago (Fri-Sat), or at least that's how it feels, since this was hands down the fastest and easiest upgrade I've ever done (and it was about 80GB of databases on an older server with direct attached storage). At the end of the weekend, I started to think maybe I should carry around one of these "things" for my clients' upgrades. I'll share what this "thing" was later in the posting; I don't want folks to think I'm pushing products. My role in the project was to ensure the replacement of the Exchange Server hardware went smoothly. The client ran production 24/7 and the office was literally staffed 6 days a week, so I was originally concerned about how to ensure minimal downtime.

Background on existing hardware & performance
We were upgrading from an Exchange 2003 Server installed on 3 hard drives in a RAID 5 configuration (direct attached storage) holding the OS, transaction logs, and Exchange databases. The company had about 60 users and 30 BlackBerrys or so, and 1 BES user adds a load similar to 2 Outlook users, so total company usage was about 120 users' worth. Performance was an issue, so some users were configured for cached mode to "improve" performance. Cached mode should not be required on a LAN unless Outlook end users are seeing "retrieving data from server" messages, which is always a bad sign unless you have a poor network connection. I recommended another DAS server using RAID 1 for the OS, RAID 1 for the transaction logs, and RAID 10 for the Exchange databases.
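The load math above works out like this (the 2x weighting is the rule of thumb quoted in the paragraph, not a benchmark):

```python
# Sizing shorthand: a BES user generates roughly the load of 2 Outlook users.
outlook_users = 60
bes_users = 30
bes_weight = 2  # 1 BES user ~= 2 Outlook users (rule of thumb)

effective_load = outlook_users + bes_users * bes_weight
print(effective_load)  # 120
```

That "effective user" number is what you size disks and RAM against, not the raw headcount.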

The Migration for a hardware refresh
So, I checked the OS install (another admin handled that), the Exchange install (using the /disasterrecovery setup switch plus Service Pack 2), the Exchange config, and ensured the email & public folder migration completed successfully. The only catch: during this server replacement there was to be no downtime and no use of Exchange clustered services. Hmmm, that's a challenge. Or so one would think.

The Cheat
The client I was working for happened to have a 3rd party product (keep reading to find out which) for Exchange that in essence allowed the "cheating". And I mean this in a very good way; it saved us a LOT of time. We told the 3rd party product to take over all the existing Exchange services (MAPI, SMTP, OWA, IMAP, etc), and all data for Outlook, OWA, ActiveSync, & BlackBerry users remained available. The switch-over took a few minutes (3 or 4), which is the time the appliance needs to take control. Once that was done, everyone was operating off the appliance, and end users didn't notice beyond restarting Outlook and re-authenticating to OWA (ActiveSync & BES users had a slight delay; BES users could be out of service for up to 15 minutes, but that's a limitation of BES). Once the appliance took over, we copied the Exchange databases (.edb/.stm files) to the new Exchange 2003 Server using robocopy. We considered upgrading to 2007, but the appliance and all the associated Exchange applications would have had to be upgraded too, and it wasn't cost effective (TCO reminder). So, after we started the robocopy, we went home.

Day 2 of the Migration & Failback
I'm not going to go into all the details, but migrating took a few hours, including getting the SSL certs for OWA sorted out. Once the new hardware was set up with Exchange, it was time to bring back all the new email. As I said, the reason to pre-copy the databases to the new server was so the appliance wouldn't need to copy all the data back, just the new email/data; this is a huge time saver. Once we had copied over the databases and transaction logs, we were able to get the Exchange Server fully operational and enable failback from the appliance. The failback itself took a lot longer, because the appliance verifies that all data was copied to the new Exchange Server: 10 hours or so. Then everyone had to relaunch Outlook and re-authenticate against the new server.
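The time saver here is the delta logic: after the bulk copy, only files that are missing or newer at the destination need to move (the behavior robocopy's /XO "exclude older" switch gives you). Here is a minimal Python sketch of that idea; this is an illustration of the concept, not the commands we actually ran:

```python
import os
import shutil
import tempfile

def copy_newer(src_dir, dst_dir):
    """Copy files that are missing or newer at the destination (robocopy /XO style)."""
    os.makedirs(dst_dir, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(src_dir)):
        src = os.path.join(src_dir, name)
        dst = os.path.join(dst_dir, name)
        if not os.path.isfile(src):
            continue
        if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
            shutil.copy2(src, dst)   # copy2 preserves timestamps
            copied.append(name)
    return copied

# Tiny demo: first pass copies everything, second pass finds nothing newer.
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(src, "priv1.edb"), "w") as f:
    f.write("mail data")
first = copy_newer(src, dst)
second = copy_newer(src, dst)
print(first, second)  # ['priv1.edb'] []
```

The second pass is the failback analogy: only what changed since the bulk copy has to travel, which is why pre-seeding the 80GB saved us hours.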

Appliance Details
The "cheat" was an Exchange high availability appliance from Teneros. Even though this appliance runs 2 operating systems, Linux and Windows, the entire configuration fits on 2 web pages. Meaning, the Teneros support team is really what runs this product, not the Exchange admin. To say the amount of information and configuration in the web interface is sparse is putting it lightly. Overall the product worked well; we ran into 2 glitches, one with permissions and one with the process of resetting the AD name, both due to poor documentation. The migration also took longer than expected, since the synchronization status display is not very accurate. Not a big deal, since end users keep working during the failover and failback. Overall the solution is very impressive, but I have some doubts, since I'm not a big fan of trusting the secret functionality of a black box; I like to know how applications work, and I have concerns about Exchange updates or patches breaking the Teneros functionality. If you are curious, pricing is around $10k, give or take a few thousand. If you want to see a demo, Teneros presented at the NY Exchange User Group meeting back in November or December of 2007, or check out their website.

There are many other software solutions on the market that do this, and I'll blog about them when I work with them. My user group has had demos and presentations on a number of them, but this was my first real world usage of one.

Let me know your thoughts on this.


Friday, June 20, 2008

2 Outlook Add-ons / Improving TCO for Exchange 2007

Hello All,

I feel I owe an apology to my blog readers for the long period between posts. I've thought about the blog each time I read about a mail related topic. It's time to share my insight into Exchange and life as an ehlo tech. I've committed myself to posting shorter pieces if necessary, just to ensure folks are kept up to date.

So, 3 things that I've wanted to share for a few months (actually a few years for one tool).

1) I personally use, and recommend to all clients, the best Outlook searching tool I've found for performance, ease of use, stability, and cost for Outlook 2003 (it's been around for years and is still rock solid). The tool is called Lookout. A bit of background: the creators of this amazing tool were hired by Microsoft (their software company was bought), and the functionality has since been incorporated into MSN Desktop Search. When I reviewed MDS, I still thought Lookout was better based on my 4 criteria above. That might have changed, but Lookout is pretty close to perfect. For clients, the easiest way to explain Lookout is as Google for Outlook: very fast and easy to use. You can download the last version, 1.30, released before they closed shop to work for Microsoft, via this URL. If you have any issues, drop me an email or add a comment, and I'll post it, since I still have copies of the last two versions (1.30 & 1.28).

2) There has been a lot of talk about Xobni for its built-in searching, stats on Outlook usage, and handy access to recent attachments. I love the idea of stats (reporting how much email is sent/received, etc); unfortunately, I don't need the searching, and I can't afford to give up so much real estate in Outlook (it adds a side panel, similar to the 3 paneled look) for very cool usage "stats" and an attachments UI. But let me know in the comments area any feedback on it; I'm curious. I'll probably need to fire up a virtual machine (VM) to take a look at this.

Joys of Lookout and Xobni by Windows IT Pro. (The site was down when I posted this, so I wasn't able to confirm whether a login is required.)

Xobni's Outlook 2003/2007 Tool for Improving UI, Searching, and Stats. See the company's website for their FAQ.

3) My background in economics and accounting has always played a major role in ensuring technology upgrades and improvements are cost effective. Having said that, I was a bit surprised when I heard the UI had been changed dramatically in Exchange 2007. One change that increased TCO was the requirement of using 2 UIs to create new users; the single ADUC (AD Users and Computers) console is not adequate anymore. So a vendor, realizing this is a concern for companies, created a single UI for AD and Exchange user creation and modification. They also plan to add a number of very useful features. I think this is a great idea, even though it's 3rd party software, especially considering they'll probably add a lot more functionality than MS would.

Product is called Exchange Tasks 2007 and is from U-BSmart.

Demo of Exchange Tasks 2007 by 3rd party (improving TCO for Exchange 2007)

Quote from MS about U-BSmart's Exchange 2007 Tasks

Besides what we covered here, the guys behind the Exchange 2007 Tasks utility have plans on adding features such as Export to PST, Export to Mailbox, a fully integrated Active Directory property page for valid recipient objects, the ability to handle and manage Dynamic Distribution Groups, a Hide Group Members task, the ability to handle and manage Resource Mailboxes, improved management of Unified Messaging and more to future versions of the product.

If you use any of the products above, post some comments and let me know your results.


Friday, April 11, 2008

Microsoft's Unified Communication Solution

Hello Everyone,

Our April 8th NY Exchange User Group meeting featured Stephen Chirico, Sr. Technical Solutions Professional, presenting the details behind the technical deployment needs of Microsoft Unified Communications (aka UC: Microsoft's solution for VoIP, IM, video conferencing, and more).

One word for the meeting: Wow. That's how I would sum up our last meeting, along with "Star Trek visits NYExUG". Also, the concept of a "Communicator Call" is definitely forward thinking; see the highlights below for explanations. This meeting is one for the record books. Great topic, great hardware and software demoed, and fun.

It really helped in determining what was required for deployment of Microsoft's UC technology. I've listed some highlights from the slides. Stephen did 2 presentations in one (UC Vision & OCS Architecture). So, you'll see 2 PDFs posted online in addition to the sponsor's (Azaleos) presentation. I would recommend you review both in detail if you're interested in UC.

This Presentation and Past Meeting Presentations

Access Edge Proxy - DMZ based server that proxies all traffic. No AD or authentication done on this box unlike an Exchange Edge Server that uses ADAM or a Windows RODC.
PBX Integration Options - 1) PBX supports mediation server w/o gateway (new PBX), 2) use of an Advanced Media Gateway w/existing PBX, or 3) use of an OCS mediation server w/Basic Media Gateway w/existing PBX. Slide 16 explains this. In essence, the Advanced Media Gateway eliminates the needs for a Windows OCS Mediation Server while the other 2 options require that.
Identity and Presence - available, on call, in meeting, etc. Your status available in Office, SharePoint, Live Communicator, etc.
Communicator Call - calls an identity (not a method/location such as mobile, work, home, IM, email, etc).
MOS = Mean Opinion Scores (what a user thinks of the voice quality).
Star Trek and Microsoft's UC - We saw the Star Trek-ish Microsoft RoundTable in action. It's a 360 degree audio/video conferencing system that mere mortals can afford, as opposed to other 360 degree audio/video solutions out there. See Slide 23 for what attendees saw demoed. The speaker is shown on video based on triangulation of voice. Very cool.

URLs to assist users with deploying Unified Communications.
Supported gateways:
IP PBX and PBX Support
Office Communication Server 2007 Partners:
Microsoft Unified Communications: Phones and Devices Optimized for Microsoft Office Communicator

Don't miss our next meeting or the following ones... here are the summaries of the upcoming meetings.

May - Microsoft's Behind the Scenes Look at Exchange Hosting Services by Keith Keeler. (fyi: they don't host Exchange). Don't miss this, I expect this to be like our Dogfood lab backup meeting.
June - Technical details between 2003 & 2007 & Why Companies Upgrade. Speaker Keith McCall.

July - Presentation/Sponsor of AppAssure's Replay for Exchange. (fyi: this is an interesting solution that offers the ability to back up directly to a virtualized file format [vmdk].)


Tuesday, April 8, 2008

They are back from the dead... Exchange's next version will "re-emphasize" Public Folders

Hello Exchange Folks,

Microsoft reversed course, and Exchange's Public Folders will now stay a major component of Exchange Server. This is good news for all. I was a bit worried about their loss, and about the removal of the GUI for Public Folder management in Exchange 2007 (Microsoft added that functionality back in SP1). Microsoft outlined the following at the URL below... (fyi: this is the Exchange team blog, which has a wealth of great information).

Use Public Folders Currently?
Document Sharing - SharePoint may be better option.
Calendar Sharing - No need to move
Contact Sharing - No need to move
Discussion Forums - No need to move
Distribution Group Archive - No need to move
Custom Applications - SharePoint may be better option
Organizational Forms - No need to move (or look into use of InfoPath)

From my experience, Public Folders are most frequently used at companies for Calendar Sharing, Contact Sharing, and Distribution Group Archives. So, not needing to add SharePoint, with its entire line of supporting applications (e.g. backup agents [plural: one for SQL and one for SharePoint], anti-virus, server(s)), is a great thing for everyone. Exchange is a great product, so removing functionality and then trying to convince existing users to add more products (e.g. SharePoint) and increase TCO (total cost of ownership) just to keep the same functionality was a bad idea. Thank you, Microsoft, for seeing this and making sure Public Folders stayed in Exchange Server.


Saturday, March 15, 2008

My "Enterprise Class" Home Theater Media Center

At the last NY Exchange User Group meeting (March 2008 - StealthBITS Exchange auditing software), the topic of DVRs came up before the meeting. Not sure how, but it did. So, I figured I would share how I built the ultimate DVR for me. And this ain't your run of the mill DVR (e.g. cable company, TiVo, Replay, etc). This IS art to me. See the in construction HTPC photos.

The home theater media center (aka home theater PC, HTPC) journey started about one and a half years ago, back in August 2006. A buddy of mine dropped me an email and literally told me he was building a "media center PC" due to a Maximum PC article and wanted to know if I was interested. Little did I know what was coming when I responded, "Cool. Until they can record HD off cable, I'm not building. I'd love to see yours." Well, I've learned quite a lot since then - namely that the holy grail is not HD, but commercial skip and portability (think iPod/streaming/burning).

The journey (and it was one, since it took over a year from start to finish) involved determining what media center software to run, what hardware, and managing the whole process. We quickly determined that the media center software was going to be SnapStream's BeyondTV, which is probably the most feature complete (e.g. multiple tuners, commercial skip, multi-format recording support [MPEG2, DivX, WMV, H.264], no monthly fees [sorry TiVo], burning shows/films to DVDs, downloading them) and stable solution (runs on Windows and works - sorry MythTV) on the market.

The bigger and more complicated process was selecting what hardware to run this on. We quickly determined this could not be a standard PC (e.g. Dell, HP, laptop, etc) due to noise levels, looks, and functionality; we wanted more than those could provide. So, we realized we must build a custom PC. This was something I gave up in 2000 due to time constraints, having decided that once multi-processor computers were widely available, I'd be willing to accept an OEM one (fyi: mainstream desktop OEMs started providing SMP support back in 2000). But not for this. This HTPC was going to be in my living room, very visible, so it needed to be unique and eye-catching. This was going to be our artwork centerpiece.

So, before we could select the exterior case, we had to figure out which "heart" was going to run this. This was the most critical piece of the entire project, since a motherboard that is not stable, or has not been sufficiently tested for hardware compatibility, can easily derail a stable environment (e.g. lockups, hangs, reboots, etc). So we spent months reading, discussing, and evaluating which motherboard to go with. In the end we decided on an Asus P5B Deluxe w/o WiFi motherboard. This was going to be the heart of our future HTPC.

Realizing that "just" selecting the motherboard was very difficult and time consuming, I realized we were going to need some help maintaining and managing this entire project. So, we quickly embraced and started using Google Docs (Spreadsheets). It's a great free web based collaboration tool. This made keeping track of parts #'s, comments, URLs, notes, etc a whole lot easier than the original back and forth email and phone calls.

Once we had selected the motherboard, we spent a few months repeating this process for the case (roomy for all the components and works with the motherboard), hard drives (they had to be quiet, fast, and reliable), the video card (had to be passive, no fans on this one), and the CPU heat sink (quiet and efficient). The rest of the components were figured out in a matter of mere weeks. I know, sometimes one of my flaws is I'm too thorough and detail oriented. My buddy didn't help me on that since there were times he was performing calculations on sizing of the CPU heat sink and case.

In the end,
I purchased the following hardware components:
1 x Asus P5B Deluxe w/o WiFi, Intel LGA775
1 x Zalman HD160XT HTPC Enclosure with 7" LCD Touchscreen LCD
1 x XFX GeForce 7600GS 256MB DDR2 PCI-E GPU (PV-T73P-UMH4) RoHS, HDTV ready, HDCP Ready, SLI ready, Vista ready
2 x Samsung HD080HJ, 80GB, SATA, 8MB - 8.9ms, 2.5/2.9bel
2 x SAMSUNG SpinPoint T Series HD501LJ 500GB 7200 RPM 16MB SATA 3.0Gb/s Hard Drive
1 x Pioneer DVR-112D IDE, 18x DVD+R, 10x DVD+DL
1 x Processor, Core 2 Duo E6550 2.33GHz, 4MB L2 Cache, 1333MHz FSB
1 x Arctic Cooling Freezer 7 Pro for All Pentium D
1 x OCZ OCZ2G8002GK 2GB Kit DDR2-800 PC2-6400 Gold Gamer eXtreme XTC Edition Dual Channel Memory
1 x Antec NeoHE 500W Power Supply
1 x SnapStream Beyond TV PCI Bundle (Digital)
1 x 3ware 8006-2LP SATA RAID Card
1 x Hauppauge PVR-500 for 2 tuner support (Ben only). Card supports 2 inputs. My buddy purchased the PVR-150.
1 x Adesso WKB-4000US, wireless SlimTouch Mini 2.4GHz USB Touchpad Keyboard (Ben only)
1 x APC 1500VA UPS (Ben only)

So, after waiting 2-3 weeks and receiving all the equipment, we started unpacking and building one HTPC at a time. The reasoning behind building one at a time was to see what issues we might run into, and to limit the confusion of having duplicate items out. Our first issue was getting the CPU (with brand new 1333 FSB support) working on the motherboard (Asus P5B Deluxe). The motherboard had a sticker claiming support for 1333 FSB, but it took a few hours of upgrading the BIOS firmware by 2 or 3 versions, and doing it via a USB memory stick (we didn't have a floppy drive, of course - make sure all servers you buy have a floppy drive). The next issue was getting the BIOS configured with the correct RAID setup. We wanted two RAID 1 arrays: one RAID 1 for the operating system and one RAID 1 for the media files. Who wants to lose their TV shows/movies or all that configuration? This is a mission critical application like Exchange, hence the need for redundancy. ;-) More on that later. The problem was that the motherboard only supports 1 array. This took hours to figure out, since you need to plug the SATA HDs into different ports on the motherboard (versus the standard SATA ports) to get a RAID array working, the documentation's technical writers' first language is probably not English, and none of the folks we found using this motherboard on the internet were writing about difficulties with 2 arrays. So you could not configure 2 separate RAID 1 arrays, only a single one. This would have meant combining our OS and media drives, which was unacceptable. So we ended up purchasing the 3ware SATA RAID card listed above, which delayed the HTPC build by another week. Once we had it, we configured all 4 HDs off the RAID card into two separate RAID 1 arrays.

After this, we mounted the motherboard and connected all the cables for the Zalman case. This turned out to be an issue: when we installed Windows XP Pro w/SP2 (pre-applied), the SD/CF/Memory Stick reader ports took drive letter C and forced XP to install the boot volume onto drive letter D. So, back to powering off, disconnecting those ports, and doing another re-install. Once this was done, we shut down all the extra services and tweaked it for speed. This HTPC will not get Windows Updates, anti-virus, or any firewall beyond the XP one; speed and stability are too important. Hence the importance of running a locked down XP for the HTPC.

One would think we were done, but we still had a full day of work ahead of us. I had bought special heat resistant tubing (F6, self-wrapping braided sleeving) for the best cabling job one could do. Cabling is very important when designing an ultra quiet PC, so we spent about 7 hours dedicated to just cabling the inside of one of the HTPCs. A bit excessive, but it's a masterpiece in increasing airflow, which reduces fan noise. Keep in mind, this is for the living room, so you don't want to hear it at all.

Current Configuration
HTPC is connected via HDMI (audio + video on this single cable) to a Sony Bravia 40" LCD.
Switched from RF Firefly remote for BeyondTV control to an IR Firefly remote (Firefly is the BeyondTV remote). Using an IR remote for the Sony TV.
7" touchscreen LCD on Zalman is used for performance monitoring and photo showing.

Planned Upgrade
Convert both IR remotes (TV & BeyondTV) to a single unified Logitech Harmony One Universal Remote
Connect the second cable box to the second TV tuner card already installed.

Redundancy and Data Protection on the HTPC
One would think two RAID 1 arrays for the operating system and media files would be sufficient, but not in Ben's book. So, I run Symantec Ghost 12 (imaging the entire OS volume) before any major configuration changes (e.g. software changes, version updates, hardware additions, etc). I run a Ghost Server on my home network, which makes backups easy, and I have already prepared (slipstreamed) a Ghost boot CD with the RAID and motherboard drivers for the HTPC in case a bare metal restore is ever needed.

So, in the end, we spent about 1 1/2 years from initial brainstorming to completed product. During that time, we spent about 3 or 4 full days (7-12 hrs each) over a period of about a month building the base hardware configuration and getting Windows installed, and weeks longer getting all the software (e.g. BeyondTV, video card, remote software, etc) and hardware configured correctly. As of Fall 2007, the HTPC has been fully operational, and it's a pleasure to look at and use (even my wife uses it, it's that easy). Will I earn my money back in saved subscription fees? No, but sometimes in life folks work on something because they love it. This is what I love... computers, technology, and increased productivity.

Any questions, drop me an email.


Wednesday, March 5, 2008

My Research into dimmable compact fluorescent lights

Hello All,

I figured I would share this with everyone. I spent a few hours over a few weeks researching compact fluorescent lights (CFLs) that are dimmable (versus standard on/off CFLs). And in the end, I've decided not to purchase or recommend dimmable CFLs yet. The best dimmable solution I found was from TCP though.

There are 2 major issues with current dimmable CFL technology. First, only mechanical dimmers (e.g. sliders with a switch, or a slider that clicks when power is off) are supported. Electronic dimmers (e.g. touchpads, LED displays, etc) will cause premature* failure, since most electronic dimmers always have a small amount of electricity flowing to the bulb. This causes the ballast (the engine of the bulb, to use a car analogy) to cycle on and off rapidly and constantly, which is very bad for its longevity. Mechanical dimmers are fine if you can configure them to cut off all electricity to the CFL at 20% power; power of 0-20% is bad for a ballast in the same way an electronic dimmer is, causing that same on-off problem. So, if you ever see any flickering on a CFL, that's a bad sign. The second issue is that some dimmable CFLs must be used in non-enclosed fixtures due to heat generation - in other words, open lamps, sconces, etc. And the dimmable CFLs that can be used in closed fixtures tend to be significantly longer (up to 2" longer for a 23 watt dimmable CF) than standard CFLs & even traditional incandescent bulbs. Due to these issues, I've decided to revisit this towards the end of the year. Planned fixes within about a year, according to the tech at TCP, are addressing the 0-20% issue for mechanical dimmers, tolerating the low voltage electronic dimmers pass even in the off position, and reduced size.

* I spoke with a technician (the customer service rep was not helpful since I knew more about the products than she did) at one of the major lighting manufacturers (TCP), and he amused me with the term "premature" failure. He explained that dimmers w/o a 20% cut-off would cause the ballast to go into that on/off mode when within 0-20% and fail sooner than normal. The ballast itself would fail (versus a lamp failure). That would be a problem, especially since the TCP 161 Series is probably going to be discontinued since it has not sold well (it's 25% more costly than the 101 series). So, at some point in the future you will not be able to buy either the ballast or the lamp.

Good website for dimmable CF bulbs (they also talk about 3 way, and more)

I considered the following before deciding not to proceed with the replacements.
TCP, 161 Series - enclosed fixtures
2 piece dimmable SpringLamp
Item # 16120L - 20 watt, 6.1"
Replacement Lamp - 36020

TCP 101, SpringLamp CF - open fixtures
# 10120 - 20 watt, 5.28"
# 10123 - 23 watt, 5.4"


Attended the NYC Launch Event for Windows Server 2008

Hello All,

I attended the NYC Launch Event for Windows Server 2008, and while it was interesting (RemoteApp & a beta look at Hyper-V), the venue really brought down the entire experience due to its organization and layout. For example, it was hard to find things (e.g. user group areas, the hands-on demos, etc) and to make my way through the vendor areas. Microsoft should realize this is not the venue to use in the future. Also, the NFR software provided was a let-down. The only non-crippled software provided was Vista Ultimate w/SP1 & Visual Studio 2008 (I'm not a dev, so the latter isn't very useful to me). Nice, but we all received Vista a year ago or so at the Vista Launch. Oops. :-) So, a copy of Windows Server 2008 was expected. I'll be donating the NFR software to the user group at our upcoming meeting. On that note....

Our upcoming meeting for Tue, March 11 is a vendor-presented Exchange auditing solution by StealthBITS. Keep track of Exchange Server changes via this solution. They are also sponsoring the meeting. Another major announcement for the meeting is that my company donated an Xbox 360 Arcade System (includes 5 games) to be raffled off. So, you don't want to miss this meeting. Visit to RSVP to the meeting.


Thursday, February 7, 2008

Performance Testing your Disk Subsystem for Exchange

Hello All,

So, I've spent some time (a few months) researching a major upgrade to provide a client higher performance for Exchange, near-real-time disaster recovery, and the ability to quickly restore large Exchange databases (50-100GB). I just want to touch on the higher-performance aspect of my research.

One of the major issues with Exchange 2003 is that when users have large mailboxes (think 5-15GB, yes GB) and they attempt to sort a column in Outlook that has thousands of items, Exchange has to work very hard to accommodate the request. This really stresses the Exchange disk subsystem. So, in my quest to recommend a higher-performing disk subsystem (they were using direct attached storage [aka SCSI]), I proposed a scaled storage area network (SAN) solution. One of my first tasks in determining how to get the best performance out of the SAN will be to use the following 2 Microsoft tools for performance testing of Exchange when I configure the SAN disk subsystem for RAID 1, 5, and 10. Also, these tools can highlight network-related issues, so make sure to examine your networking, especially if you are running on a SAN (e.g. MPIO, jumbo frames, teaming, etc). These can make a very big difference in performance as well.
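Before running the tools below, it helps to have a ballpark of what each RAID level can deliver. Here's a rough sketch of the classic RAID sizing arithmetic I use when comparing candidate layouts. The write penalties (RAID 1/10 = 2, RAID 5 = 4) are the commonly cited values; the per-disk IOPS figure and workload mix in the example are illustrative assumptions, not measurements from any real array.

```python
# Back-of-the-envelope RAID sizing. Write penalties are the standard
# textbook values: RAID 1/10 mirror every write (2 backend I/Os),
# RAID 5 does read-data/read-parity/write-data/write-parity (4 I/Os).
WRITE_PENALTY = {"RAID1": 2, "RAID10": 2, "RAID5": 4}

def usable_frontend_iops(disks, raw_iops_per_disk, read_pct, raid_level):
    """Host-visible (frontend) IOPS an array can sustain for a given mix."""
    backend = disks * raw_iops_per_disk          # total raw backend IOPS
    write_pct = 1.0 - read_pct
    penalty = WRITE_PENALTY[raid_level]
    # Each frontend read costs 1 backend I/O; each write costs `penalty`.
    return backend / (read_pct + write_pct * penalty)

if __name__ == "__main__":
    # Hypothetical example: 8 x 15k SCSI disks (~180 IOPS each assumed),
    # roughly 2:1 read/write mix.
    for level in ("RAID10", "RAID5"):
        iops = usable_frontend_iops(8, 180, 0.67, level)
        print(f"{level}: ~{iops:.0f} frontend IOPS")
```

With those assumed numbers, RAID 10 sustains noticeably more frontend IOPS than RAID 5 on the same spindles once writes enter the mix, which is why the Jetstress runs against each layout matter: the arithmetic only narrows the candidates, the tool confirms them.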

1) Microsoft Exchange Server Jetstress
*** Summary ***
Microsoft Exchange Server is a disk-intensive application that requires a fast, reliable disk subsystem to function correctly. Jetstress is a tool that helps administrators verify the performance and stability of the disk subsystem before putting their Exchange server in a production environment.

Jetstress works with the Exchange Server database storage engine to simulate the Exchange database and log disk I/O load. If you run Jetstress with missing libraries, you will receive a message that states that you must copy the missing DLL files from the Exchange 2000/2003/2007 installation CD/DVDs to the Jetstress installation directory and rerun Jetstress.
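To make concrete what "database and log disk I/O load" means, here is a toy stand-in for the kind of measurement Jetstress performs: timing random page-sized reads against a scratch file. This is NOT Jetstress and proves nothing about production readiness; the 8KB page size and the scratch-file approach are assumptions for illustration only (Jetstress runs against full-size databases for hours).

```python
# Toy random-read latency probe, loosely in the spirit of Jetstress.
# Assumptions: 8KB I/O size and a tiny scratch file (a real run would
# use database-sized files and include writes and log I/O).
import os
import random
import tempfile
import time

PAGE = 8 * 1024              # assumed I/O size in bytes
FILE_SIZE = 4 * 1024 * 1024  # small scratch file for the demo

def random_read_latency(path, iterations=200):
    """Average seconds per random PAGE-sized read from `path`."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(iterations):
            f.seek(random.randrange(0, size - PAGE))
            f.read(PAGE)
        return (time.perf_counter() - start) / iterations

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(FILE_SIZE))
        path = tmp.name
    try:
        avg = random_read_latency(path)
        print(f"avg random 8KB read: {avg * 1000:.3f} ms")
    finally:
        os.remove(path)
```

The point of the sketch is the methodology (random access at the database's I/O size, averaged latency), which is the number Jetstress reports against Microsoft's pass/fail thresholds; use the real tool for any actual validation.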

2) Microsoft Exchange LoadSim

*** Summary ***
Use Microsoft Exchange Server 2003 Load Simulator (LoadSim) as a benchmarking tool to simulate the performance load of MAPI clients. LoadSim allows you to test how a server running Exchange 2003 responds to e-mail loads. To simulate the delivery of these messaging requests, you run LoadSim tests on client computers. These tests send multiple messaging requests to the Exchange server, thereby causing a mail load. LoadSim is a useful tool for administrators who are sizing servers and validating a deployment plan. Specifically, LoadSim helps you determine whether each of your servers can handle the load it is intended to carry. Another use for LoadSim is to help validate the overall solution.


Monday, January 28, 2008

Upcoming Meeting for NY Exchange User Group

I am looking forward to this meeting, since I had the opportunity to test out Good Technology's wireless sync solution and found it very impressive and better than my BlackBerry in a number of ways, except for the battery life of the hardware it ran on (Palm Centro). This caused me to abandon it, since my BlackBerry can go 3-4 days without a recharge, while my Centro w/Good could barely go a day. So, I'm back to using ActiveSync and a BlackBerry. Here is information regarding the upcoming meeting.

I'll also post a TechEd presentation that compared all 3 solutions (Good, BlackBerry, ActiveSync) in the next day or so.

Tuesday, February 12, 2008 Meeting Topics
Doors Open 6pm
Meeting Begins 6:30pm

Visit for more information.

Partner Sponsor is The Never Fail Group
Meeting Sponsored by Motorola's Good Technology Group

Presentation Topics
1) Good Technology will be presenting Good Technology's PDA wireless synchronization solution compared to Research In Motion's BlackBerry Enterprise Server & Microsoft's ActiveSync. See how one of the upstarts has one of the most feature-complete solutions behind BlackBerry and ActiveSync. Speaker is Scott Davenport.

Folks attending will be able to win various raffle items, including Microsoft software. Free food, and open to all simply by RSVPing. We'll also run a LiveMeeting session for folks who cannot attend the meeting at the NY Microsoft office.

Visit for details.

Tuesday, January 8, 2008

Thoughts on Upcoming Presentations at NY Exchange User Group Meeting (Tue 1/8)

Vendor Thoughts
The NY Exchange (Server) User Group is having a vendor presentation meeting by The NeverFail Group on Tue Jan 8, 2008 at a 6pm start. I'm curious to see how NeverFail's solution handles BlackBerry Enterprise Server (aka BES) replication, since there is a "license key" (aka SRP) that is not allowed to be live on the internet from more than 1 BES server (if it is, RIM disables both until you contact RIM & ask for forgiveness). So, we'll see how that works & I'll post back. This meeting follows a few other replication solutions we have recently seen at the public monthly meetings for NYExUG (e.g. DoubleTake [software], Teneros [hardware], & Asempra [hardware]). This unintentional focus on replication has definitely allowed members to be more informed and know what to look for in an Exchange replication solution. I know some folks might not classify the Asempra BCS solution as replication, but it has the capability to replace such a solution, so I figured I would classify it under that.

My Presentation
I'll be presenting on "Tips to fix Exchange 2003 database problems". This will be a case study on the steps used to solve a serious Exchange database corruption problem that a law firm experienced. It affected the entire company until it was resolved, so there was a lot of pressure to resolve it as quickly as possible with minimal downtime.

Sunday, January 6, 2008

First Hand Feedback of ActiveSync, Blackberry, and Good Wireless Syncing

I'm a heavy email user. Maybe heavy isn't accurate; excessive/addicted email user/admin is more like it. So, I'm always looking for the best client-side PDA email solution for my needs (since Outlook is on my desktop and laptop).

I digress for 1 paragraph.... on the PDA side of things, my 1st PDA (Kyocera 6035) was Palm OS based, and the concept of replacing/upgrading a PDA/phone and simply syncing it and watching all the contacts re-appear was such a great idea that I swore never to go back to a "simple" phone (e.g. Razr, etc). My current (as of Jan '08) phone is still based on Palm OS (I don't need to get into the religious wars on why now), but I recently upgraded from the Palm 700p to the Palm Centro. If you're wondering why, the hardware & software are the same; it's the form factor. Oh, back to the point of the post.

Intro to syncing
When I first started using the Palm OS, Palm (technically a 3rd party) had licensed the ActiveSync functionality to allow syncing of one's calendar, contacts, and email to an Exchange Server. So, I tested that out, and it was fine until I started running into problems when adding SSL and making other security enhancements to IIS. This broke ActiveSync, and after spending many hours troubleshooting it, I resolved it. But, in the past few years, every so often an IIS update or other weirdness just happens and I need to troubleshoot what's broken (delete my IIS config and reload it [what a pain]). The other thing I didn't like was that typing was too slow on the Palm hardware. On Research In Motion's side, the BlackBerry (aka BB), I can touch type and type faster on my BB than many folks on a standard computer keyboard. In other words, the keyboard is excellent on the BB. It's simply designed for typing emails. My first jump onto the BB ship was the BlackBerry 6750. Excellent BB; even though it's a bit tall, it allows for a lot of email to show. I know it's B&W, but who cares when it's email (I have a color BB 7250 now, I don't need any donations ;-). Then about 3 months ago (Oct or so) I had the opportunity to test out Motorola's Good Technology. I've known about Good and have heard it's the most feature complete out of the BIG 3. I refer to those as Microsoft's ActiveSync, RIM's BlackBerry Enterprise Server (aka BES), and Motorola's Good Technology's Mobile Messaging (aka GMM).

Testing out GMM (Good's Mobile wireless sync solution)
Most folks would probably watch a flash demo and read the product datasheet. I decided it was worth the time to see if I could combine all my needs on 1 device, my Palm Centro running the Palm OS, and replace my BlackBerry (FYI: Palm hardware can run Palm OS or Windows Mobile OS. Palm is now a hardware vendor. It's confusing, I know). Many folks are amazed I carry 2 devices; then again, they probably don't think about productivity & efficiency like I do. So, I fired up VMware Workstation on my test computer, powered on an available Windows 2003 Server OS, and started the install of GMM (there were a # of steps in between, such as a site-to-site VPN link so I could connect to my Exchange Server at the colo facility it's housed in and test to ensure the latency was low enough; it all passed). Then I started testing GMM on my Centro. The GMM functionality is impressive, but you need to use the Good applications which are loaded onto the handheld (pro & con), wirelessly of course. They run a bit slower (e.g. switching between views, opening, closing, etc) than the built-in ones (e.g. calendar, contacts, etc), but have features that BES & ActiveSync 5 with an Exchange 2003 server don't (e.g. flagging, searching, etc). I was very impressed with GMM except for the fact that the hardware's battery on EVDO (on Sprint) can barely handle a day of usage & password-protected units take about 2 seconds to unlock (not sure why it's so slow, but that's AFTER you enter the password and click OK). So, after almost a whole day of syncing (I receive about 150 messages and send about 100 messages a day), the battery was almost dead on a new, fully charged Centro. Based on that usage, the battery life was a serious issue. So much so that I had to give up the Good functionality and return to basic ActiveSync on my Palm for quick reviewing of email on weekends when I might step out without my BlackBerry.

- GMM is an excellent synchronization solution for devices & users whose usage can handle the always-on network needs. I would consider it the most feature-complete solution among the big 3.
- Good has better documentation than Microsoft & RIM on implementation (it's so detailed they even explain how to uninstall and remove GMM; impressive).
- RIM's BlackBerry hardware is a generation ahead of Palm & Windows hardware for battery usage on always-on email.
- RIM's BlackBerry hardware allows for the fastest typing, and I would consider it the gold standard for email synchronization.
- ActiveSync is a good solution for low usage and for companies not willing to pay for functionality beyond the basic email, calendar, contacts, etc sync (e.g. more support, functionality, logging, etc).
- Good's sync is partnered with Palm & Windows Mobile hardware, which is a strength & weakness as I explained above. Good previously offered hardware similar to RIM's; I'm not sure when they stopped offering it.

My Final Thoughts
GMM is excellent; the problem is the hardware. So, if and when a hardware solution is smaller than a BlackBerry with a full keyboard and offers similar or better battery life, I'll consider returning. Until then, I'll be waiting, since I'm not a big fan of the current mobile devices. I consider them too large, or with keyboards that simply don't match the efficiency of RIM's BlackBerrys.

Here is a photo comparison of my wife's Blackberry Pearl and my Palm Centro. I had originally hoped this Centro would be the sole device I carry. No thanks, I'll happily carry both (Centro & BB).