The Zen of Hosting: Part 10 - Windows 2008 Core

Since we had Windows 2008 we just had to try out the Core edition, which is the version of Windows where Microsoft promised everything would be command line based. I like to think of it this way: if Vista stole the UI from the Apple Mac, then Win2k8 tried to steal it from Linux...

So before I get into Core, let me first state that Win2k8 is the best server OS Microsoft has ever released. It is amazing how well polished everything is, and the tools that are there are great. How does it compare to Linux servers? Well, in some places it kicks ass and in others it doesn't, but since Linux servers are the de facto standard for command line based systems, if we compare just the command line features then Microsoft has done a HORRIBLE job.

All that is actually happening is that you get the normal command prompt in a window, with Explorer.exe dropped as the shell. In fact explorer.exe does not even get installed, but a lot of our old favourites are still there: Ctrl+Alt+Del still brings up the usual menu, and Task Manager still works.

Actually Microsoft dropped so much that the gain in RAM is impressive (our average RAM usage is normally 750MB, but on Core it is a mere 300MB), and the shrinkage in attack surface and patch requirements is great.

Getting back to cmd.exe as the shell: this is likely the single biggest mistake of Core. It's not like Microsoft doesn't have a great command line system, called PowerShell, which they could have used. In fact so little has been added to the command line that after this experience I went to a Win2k3 machine and was able to do most of this anyway, and it's not hard to drop explorer.exe as the shell in Win2k3. One advantage of doing this Core mockup on 2k3 is that at least Internet Explorer is there for you to get online for help; Win2k8 Core has no decent help (just the same old crappy command prompt stuff).
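
If you want to try that Core mockup yourself, swapping the shell on Win2k3 is a single registry change. A minimal sketch (the Winlogon Shell value is the standard mechanism, but it affects every user on the machine, so back up the key first):

    REM Replace Explorer with the command prompt as the shell for all users.
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell /t REG_SZ /d cmd.exe /f

    REM To put things back to normal later:
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell /t REG_SZ /d explorer.exe /f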

Linux has man pages, PowerShell has Get-Help, the console has... well, not much. Thank the heavens that I was able to use my laptop to get onto the Internet. For example, I had problems with the first two Core boxes when trying to run Hyper-V on them; they just gave all kinds of RPC issues. It turned out I had not set the DNS correctly using netsh: I had set DNS registration to primary only and not both. What the difference is, is beyond me, because the Windows GUI has obviously been setting this correctly for the last 20 years, so why make it so much tougher?
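
For anyone hitting the same wall, the fix was a one-liner. A sketch, assuming the default interface name and a placeholder DNS server address:

    REM Set a static DNS server and register the box in DNS fully (register=both).
    REM "Local Area Connection" and 192.168.0.10 are placeholders for your own values.
    netsh interface ip set dns name="Local Area Connection" source=static addr=192.168.0.10 register=both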

Another interesting limitation of Core, which I never had to bang my head against myself but learnt about when I attended Win2k8 IIS training that Microsoft ran, is that on Core you can't run ASP.NET web sites, because Core doesn't have the .NET Framework. According to the trainer, this is because the .NET Framework installer needs a GUI. I suspect this is the same reason why PowerShell can't be used, it being .NET based and all. But the part I don't understand is that THERE IS A FRIGGING GUI! It's all around the command prompt window!
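
You can see this for yourself by listing the roles and features Core knows about; the .NET Framework simply isn't an option in the list. A sketch using the oclist tool that ships on Core:

    REM List every role/feature Core supports, and whether each is installed.
    oclist | more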

My recommendation is to avoid Core, as the extra work doesn't make up for the cost of a little bit of extra RAM; rather spend less time setting up the server and more time billing customers, and buy the RAM. Hopefully Windows Server vNext gets it right.

The Zen of Hosting: Part 9 - Hyper-V

As I approach the end of this series I want to highlight some of the technology that the hosting machine is built on and some of the lessons I learnt along the way. These last few posts are much shorter than the earlier ones but hopefully provide some quick, bite-sized info.

So if you have looked at standard HMC and then add all the technology we have layered on top of it, you would assume there is a building full of servers. The reality is that the server room isn't that big and has lots of space to spare. How did we achieve this? Slow applications, because we are running everything on fewer servers? Not at all.

We bought some seriously powerful HP machines, loaded them with a ton of RAM and installed Windows 2008. But how does that help with running lots of systems, and doesn't HMC break if it runs on Win2k8 (see way back in part 2)? Well, Win2k8 has the best virtualisation technology Microsoft has ever developed: Hyper-V. This is seriously cool stuff in that the hypervisor actually starts before Windows does and virtualises Windows completely (rather than virtual machines running on top of an OS, they run next to it). The performance compared to Virtual Server is not even worth talking about; it basically pushes Virtual Server into the stone age.
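
As an aside, getting the Hyper-V role onto a box is trivial. On a full install it is a tick box in Server Manager; on Core it is a single command, sketched below:

    REM Install the Hyper-V role on Server 2008 Core.
    REM Note: ocsetup package names are case-sensitive.
    start /w ocsetup Microsoft-Hyper-V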

It is very fast and it handles the randomness of server usage (those little spikes you get when you run multiple machines on one piece of hardware) very well. But not everything is virtualised: there is a monster of an active-active SQL Server cluster (since so much needs SQL), and we have a number of oddities such as the box which does media streaming, because some specialised hardware can't be used in a virtual machine. A worry when we started with Hyper-V was its beta/RC status... Well, with thousands of hours of uptime logged so far by the servers on it, it has been ROCK solid.

Speaking at Tech-Ed Africa

I can now officially let out one of my many secrets: I am speaking at Tech-Ed Africa this year! Oddly enough I am speaking about something I have never blogged about, WPF and building business applications with it. I will be co-speaking with a good friend, Simon (from Blacklight), who is an amazing designer. It will be a very fun talk. For more details see the Tech-Ed Africa site at http://www.tech-ed.co.za

The Zen of Hosting: Part 8 - Microsoft Dynamics GP and Terminal Services

For this instalment the product I am going to cover is Microsoft Dynamics GP, which is very interesting compared to MSCRM (part 6) and MOSS (part 7) in that it is not web based, and is thus a completely new challenge to expose in a web based hosting environment. For those who don't know the architecture, it is a Windows forms application (I'm not sure if it is .NET or WinAPI), but the GUI is a thin veil over a monster SQL database with so many tables and stored procs it is scary. The normal deployment is that the user gets the client installed and the client connects directly to the SQL server. So if you are thinking that for hosting you would end up having to allow direct connections to SQL over the web, think again: the security risk makes that a huge no. After spending some time investigating the other people offering hosted GP, the solution everyone else seems to offer is to give you a server and let you remote in via Citrix. As this is a Microsoft end to end solution, Citrix is not an option, but Microsoft does have Terminal Services (TS) to compete, and in Windows Server 2008 it can compete better than before. TS has always been a way to connect to a full session, which is nice, but we don't want nice, we want amazing. So TS in Windows 2008 has a feature called Remote Applications.

Remote Apps lets an admin publish an application to a user so that it runs from a special icon, from an MSI file (which you could deploy using AD or System Centre) or from a web site, and it looks just like it is running on the user's machine. In the background it spawns a normal TS session on the server, starts the application and pushes only the application's UI to the user. It's great: the user thinks it's on their machine, and it's super fast thanks to all the server power the application has, since it is not fighting for resources on the client machine.
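
Under the covers that special icon is just an .rdp file with a few extra settings in it. A minimal sketch (the server name, application name and program path are placeholders, not a recipe):

    full address:s:ts01.example.com
    remoteapplicationmode:i:1
    remoteapplicationname:s:Microsoft Dynamics GP
    remoteapplicationprogram:s:C:\Program Files\Microsoft Dynamics\GP\Dynamics.exe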

As this is the first version there are still some rough edges which need addressing. Firstly, the application still runs on the server, so if you go File -> Open you browse the server's file system :( I know TS can share drives from the client to the host, but they look like extra drives, not the normal C:, D: etc. users are expecting. What should happen is that the admin should be able to stop the server's drives from being exposed, so that the client's drives are the only ones shown. The same should apply to printers. One advantage for GP is that working with the file system isn't a big requirement, but printing is, and that is less of a pain. The next area is security: it is still launching a TS session, which means that if you want to allow a user to run a remote application, they end up needing login rights. I understand the technical requirements around this, but there should be a way, at the Terminal Services level, to separate people who may log in to the machine via TS from those who just need Remote Apps. Despite all this, Microsoft Dynamics GP looked like it was going to be difficult to host, but in the end was very easy to deploy.

When to use what

The official MSCRM blog (http://blogs.msdn.com/crm – which I knew off the top of my head. I am aware I need to get a life) seldom excites me, because they are still so far behind what companies like the one where I currently work are doing, but Amy Langlois really did shine today. So forgive this +1 post: if you are a Microsoft Dynamics CRM developer working with the SDK you must read it, because it contains vital information on changes they are making to the SDK assemblies and when/what you should be using in your code.

Read all about it at: Web Services & DLLs or What’s up with all the duplicate classes?

The official way to change MSCRM ports

Finally Microsoft has released a support article which details how to change the MSCRM web site ports correctly. This also fixes the "workflow doesn't work" and related issues (see here). This is great to have, not because it details a third way to do it (besides the SQL edit and the IFD tool), but because it actually fills in the gaps for everything else you need to do, such as reconfiguring Outlook clients, data migration tools and so on.

You can read it at http://support.microsoft.com/kb/947423/

The Zen of Hosting: Part 7 - MOSS

Next up on the technology list I want to profile in this series is Microsoft Office SharePoint Server (MOSS), which you may think is easy since HMC supports its little brother, Windows SharePoint Services (WSS). Unfortunately the added complexity MOSS brings adds significant challenges to the mix, and the first question is: how do you deal with users in MOSS while keeping the customers separate? Dealing with users means that you must provide all the functionality of the authentication system and of every web part in MOSS (like the People Picker) while making sure one customer doesn't see anything or anyone from another customer.

Well the answer is easy: use a custom authentication provider. The out of the box AD provider is not up to the task, as it means all users can see all others. The next thought would be to use the forms based one, but that means additional replication of users from AD to a database, which is also a pain. MOSS also includes a generic LDAP provider which seems like the perfect fit, as you can specify the root OU to start from (so limiting what each site can see), but this provider is very error prone with the HMC structure/properties, so in the end you are better off building your own LDAP one. Then all you need to do, once the custom authentication provider is done, is build a custom provider for HMC which sets up and manages site creation and feeds the information on what a customer has directly to HMC.
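
To give an idea of what wiring in a provider like that looks like, here is a minimal web.config sketch. The type name, server and OU are hypothetical placeholders; the attributes mirror the kind the built-in LDAP provider takes, but a custom provider defines its own:

    <!-- Hypothetical provider: type, server and userContainer are placeholders. -->
    <membership defaultProvider="HostedLdapMembers">
      <providers>
        <add name="HostedLdapMembers"
             type="Example.Hosting.LdapMembershipProvider, Example.Hosting"
             server="dc01.example.local"
             port="389"
             userContainer="OU=Customer1,OU=Hosting,DC=example,DC=local"
             userNameAttribute="sAMAccountName"
             userObjectClass="user" />
      </providers>
    </membership>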

A great example of how this is done is one of our first customers, who got their own mini-environment within our hosted environment. They don't sit on our big MOSS farm; they have their own one with some special customizations, but the principles of their solution and the HMC one are very similar. The customer is South African Tourism (SAT) and their web site is www.freesatsite.co.za. One of the special changes is the use of a specialised custom authentication provider, which was changed from an LDAP one to one that uses Windows Live! So your MSN login becomes your web site login (how cool is that?), and it includes a full self-provisioning system which allows SAT members to log in and provision a new site in seconds. Rather than being completely separate domains, each new web site is a sub-site of www.freesatsite.co.za. Please don't think I suddenly became a great MOSS expert; I really just provided the servers and the environment for this brilliant solution. The team that built it are the guys from Blacklight, who designed the UI and all the themes support, and Mark Lenferna de la Motte and his team, who did the bulk of the heavy lifting configuration to make MOSS do its magic.

The Zen of Hosting: Part 6 - Microsoft Dynamics CRM

So in the first five parts we looked at the standard stuff; now let's dive into a real product, and we'll tackle the one I am most familiar with: Microsoft Dynamics CRM 4.0. Thankfully MSCRM 4.0 is the first version of the product to really support a hosted model. Supposedly MSCRM 3.0 could do hosted, but based on the architecture you would have ended up hacking it a lot to get it to work. I never did hosted MSCRM myself on version 3.0, so that thinking is just based on my understanding of MSCRM 3.0's architecture.

So how is MSCRM 4.0 different from 3.0, and how does that allow it to be easily hosted? Well, firstly you can now have a single deployment with multiple databases, one for each organisation. This means that each organisation's data, settings and customizations are completely separate! This is great if every machine is on the domain, but in hosting you need a way to log in over the web or via a special client, because in a hosted model, despite the fact that you have a domain, your end users may be on a separate one. Thankfully MSCRM 4.0 provides BOTH Windows and forms based authentication! This is configured using the IFD tool, which enables MSCRM to look at the source IP address of each request: if it is a local network IP it uses standard NTLM authentication, but if it is an external IP address it presents forms based authentication for the user to log in with. This means that not only does the web interface work over the Internet, the Outlook client works too.

If you are a regular MSCRM user you likely love the dynamic export to Excel; for those who don't know what it is, let me explain briefly. In MSCRM you can export almost any data to Excel, and the spreadsheet can then be refreshed live from MSCRM. This works by creating a data set in the Excel spreadsheet and putting the SQL for your query in the data set. The problem with this scenario is that Excel uses direct connections to SQL to do this, so does this mean you need to expose your SQL server? Not at all: if you are running the Outlook client, a button is added to Excel which actually reverses the SQL and uses the normal MSCRM web services to get the data! So you can still expose just MSCRM to the net, keeping security high and lowering administrative overhead. Note: this is only available if you are using IFD deployments.

If you are planning to do hosted MSCRM you may find the hosted deployment guide interesting, as it explains how to set up MSCRM 4.0; however, it is not the most logical guide, as it is broken into three sections. The first section covers how to configure your environment for hosted MSCRM, which is actually the exact same information as in the HMC guide, and that brings us to the second section: how to use HMC with MSCRM. So not only do they repeat what is in the HMC guide, they then tell you to go through that guide anyway. It's pointless and a massive waste of space; the only advantage is that if you had never heard of HMC it might point you in the right direction. The last section, in contrast, has some interesting and useful information on the additional steps to get MSCRM running in IFD mode, like how to edit the install configuration file to set up IFD from the install (though the easier and less error prone route is to use the IFD tool), and the extra configuration needed for hosting (such as changing the security of the web site in IIS to anonymous).
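
For reference, the IFD part of the install configuration file is only a few lines. A sketch from memory, so check the deployment guide before using it (the address range and domain are placeholders):

    <!-- Hypothetical values: the internal range and domain are placeholders. -->
    <ifdsettings enabled="true">
      <internalnetworkaddress>192.168.0.0-255.255.255.0</internalnetworkaddress>
      <rootdomainscheme>https</rootdomainscheme>
      <sdkrootdomain>crm.example.com</sdkrootdomain>
      <webapplicationrootdomain>crm.example.com</webapplicationrootdomain>
    </ifdsettings>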

Something that is vital for a happy hosted MSCRM environment is making sure the async service is running all the time. This matters not only because it manages workflow (and what good is MSCRM without workflow?), imports and background processes, but because in a hosted scenario it also handles the logins done via forms based authentication.
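
Given how important it is, it is worth making the service restart itself if it ever dies. A sketch using plain sc.exe, assuming the CRM 4.0 service name is CrmAsyncService (check yours with sc query):

    REM Make the async service start automatically at boot...
    sc config CrmAsyncService start= auto

    REM ...and restart itself 60 seconds after any crash.
    sc failure CrmAsyncService reset= 86400 actions= restart/60000/restart/60000/restart/60000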

Really, MSCRM is pretty easy to get hosted and set up, and while the demo HMC web console doesn't provide automatic provisioning tools, a lot of the third party ones do have options for MSCRM. Something I have learnt is that when you deploy MSCRM 4.0, even if it is not a hosted deployment, it is worthwhile to make it an IFD. My reasoning for this is twofold:

  1. Authentication is handled in a superior way, as you have both normal NTLM and forms based authentication. This can give you a way to solve those complex Kerberos issues caused by problems in AD, without needing to mess around with AD.
  2. If not now, then at some point in the near future, someone in your business will want to work from home or while on a business trip. You can save them having to mess around with VPNs and just point them to the same URL they normally use (provided you have set up your DNS and firewall right), saving headaches for you and your users.

If this has interested you, make sure you go to Tech-Ed Africa, as there is an IFD Tips and Tricks session for Microsoft CRM!