28 Jul 2008

Team Foundation Server could not resolve the user or group

One of my recent tasks was to set up a TFS 2008 server, migrate our VSS system across to it, and once that was done, set up the projects and users. Since I have a good knowledge of the systems and had done a TFS 2005 deployment previously (although it was not adopted), I felt confident that the install wouldn't be an issue. I did the usual prep of reading blogs and learning from others, and that did help me avoid some pitfalls.
Next up was the migration of VSS to TFS, which was actually not a major requirement as it is just there for legacy projects. All active projects would check their code into new TFS projects we planned to create, the key benefit being that it would let us align with EPM better than the migration tool would allow. I created a project and imported the 1.7 GB of source code into it, which took some time. Then I needed to add the users, and this is where I met a problem.
Regardless of whether I used the command line, the TFS admin tool or the GUI, I kept getting an error: Team Foundation Server could not resolve the user or group. <AD Distinguished Name of the User>. The user or group might be a member of a different domain, or the server might not have access to that domain. Verify the domain membership of the server and any domain trusts.
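
For the record, the command-line attempt looked something like the sketch below; the server, project and account names are illustrative, and I'm quoting the tfssecurity syntax from memory, so verify it with tfssecurity /? first:

rem Add a domain user to a project group; this is one of the calls that failed with the resolution error
tfssecurity /server:tfs01 /g+ "[MyProject]\Contributors" n:LITWARE\jsmith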


Both the AD issue and the TFS issue revolved around the fact that in our AD the Authenticated Users (AuthUsers) group does not have read permissions on our users and the containers they are in. This is odd to an outside person, because when AD is set up the AuthUsers group does have those permissions, so why would our AD be different and what are the implications of changing it? The difference is that our AD is set up according to Hosted Messaging and Collaboration (you can read more about it here), which specifically removes the AuthUsers group permissions for security reasons (i.e. to prevent users from seeing other customers). Because of this change, the GPO could not access the users' accounts and TFS could not read what it needed from AD.
Solving this for TFS meant giving AuthUsers read permissions on the users who needed to access TFS and their immediate container, while for AD/GPO it required AuthUsers to have permissions on the container holding the users (it doesn't need permissions on the actual user objects) and all of its parent containers. Once those were done, the group policies and TFS started to work 100%.
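
dsacls is a quick way to grant those rights from the command line; this is a minimal sketch, assuming the litware.local sample domain I use later in this series (the OU path and user are illustrative):

rem Read permissions on the container holding the users (repeat for its parent containers) for the AD/GPO case
dsacls "OU=MyCustomer,OU=Hosting,DC=litware,DC=local" /G "NT AUTHORITY\Authenticated Users":GR

rem Read permissions on the individual user object for the TFS case
dsacls "CN=jsmith,OU=MyCustomer,OU=Hosting,DC=litware,DC=local" /G "NT AUTHORITY\Authenticated Users":GR
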
That's great, but what is the impact on the hosted environment, and is this the best way to solve the issue? Well, this does open up a security risk in that customers could see other customers simply by logging into the domain. For us this is mitigated as we are not offering hosted TFS; this is just for our own internal staff, who are aware of who our customers are, and we aren't worried if our customers know about our staff. It is also very difficult for a customer to see other customers, as most applications don't allow it, and those that do allow it in their standard configurations, such as MSCRM, ignore it in an HMC environment.
As to whether this is the best way to solve the issue, my view is that it is not. You should run a separate AD for each customer: a normal AD system which runs at the client premises and, using the Customer Integration component of HMC (which is based on MIIS), syncs the customer AD to the hosted AD. This means you could run GPOs and TFS on the customer site without needing to change anything on the hosted side.
21 Jul 2008

HMC tips for Exchange: Part 3 - Fixing GAL issues

It's an unfortunate problem that the GAL integration isn't rock solid with HMC and Exchange, being controlled merely by AD schema attributes (see The Zen of Hosting: Part 5 – HMC and Exchange for more info), and it's very easy for this to be screwed up by a number of things. The most common for me is the Exchange PowerShell, which seems to reset that attribute with a lot of its commands. The easiest way to resolve it is with another XML request passed to provtest, in this case the nicely named RepairExchangeObject. Basically it just needs the domain controller and the LDAP path to the user whose attributes got screwed, and off it goes and fixes them.
NOTE: This is for HMC 4.0; 4.5 has a completely different structure. Check the SDK for the message, which will give you a sample you can use.
The request looks like this:
<request> 
    <data> 
        <preferredDomainController>Domain Controller</preferredDomainController>
        <path>LDAP Path</path>
    </data>
    <procedure> 
        <execute namespace="Exchange 2007 Provider" procedure="RepairExchangeObject" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

Sample:

<request> 
    <data> 
        <preferredDomainController>srv01</preferredDomainController>
        <path>LDAP://CN=user@litware.local,OU=MyCustomer,OU=MyReseller,OU=Hosting,DC=litware,DC=local</path>
    </data>
    <procedure> 
        <execute namespace="Exchange 2007 Provider" procedure="RepairExchangeObject" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>
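
If you don't have the LDAP path of the broken user handy, dsquery on a domain controller prints the distinguished name for you; the name pattern below is illustrative, and you just prefix the result with LDAP://:

rem Prints the DN of matching users, e.g. "CN=user@litware.local,OU=MyCustomer,..."
dsquery user -name "user*"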

17 Jul 2008

HMC tips for Exchange: Part 2 - Adding a distribution list

The second tip is distribution lists, which are also kind of an important thing to get set up. To do this you need to craft a CreateDistributionList XML request; this is just an XML file which looks like:

<request> 
    <data> 
        <container>LDAP Path</container>
        <preferredDomainController>Domain Controller</preferredDomainController>
        <managedBy>List Owner</managedBy>
        <name>List Name</name>
    </data>
    <procedure> 
        <execute namespace="Hosted Email 2007" procedure="CreateDistributionList" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

Sample:
<request> 
    <data> 
        <container>LDAP://OU=MyCustomer,OU=MyReseller,OU=Hosting,DC=litware,DC=local</container>
        <preferredDomainController>srv01</preferredDomainController>
        <managedBy>user@litware.local</managedBy>
        <name>Triage</name>
    </data>
    <procedure> 
        <execute namespace="Hosted Email 2007" procedure="CreateDistributionList" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>
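
To submit the request, save it to a file and feed it to provtest on the HMC server. The exact invocation below is an assumption from memory (the MPS SDK documents the tool properly), and the file name is simply whatever you saved it as:

rem Submits the XML request to the provisioning engine
provtest CreateDistributionList.xml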

With the list created, how do you manage who is actually in it? Well, this is actually very easy, thanks to Outlook. First, open a new email and type the list name into the To line, then right-click it and select Properties.

You can then use the Modify Members… button to add or remove members of this list.

Note: This can ONLY be done by the list owner, which you specified in the managedBy node when you created the list.

14 Jul 2008

HMC tips for Exchange: Part 1 - Adding a Room

First of a new series, though this is more of a mini-series (just three parts). It is a follow-up to the last series, Zen of Hosting, focusing on a few tips for working with HMC. All of this series is from HMC 4.0, so on 4.5 your mileage may vary. The first tip is how to add a room, because meeting scheduling is kind of important. To do that, first add a user via the normal UI (i.e. the web portal); from this point it's actually the normal procedure for adding a room. First go into Active Directory Users and Computers and disable the user, then go into the Exchange management console and add a room to your existing (disabled) user, and voilà, done. The GAL and other Exchange/AD attributes are maintained because the user was added the HMC way.
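
If you would rather script the disable step, dsmod does it in one line; the DN below is illustrative:

rem Disable the account that will become the room
dsmod user "CN=Boardroom,OU=MyCustomer,OU=MyReseller,OU=Hosting,DC=litware,DC=local" -disabled yes
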
11 Jul 2008

The Zen of Hosting: Part 12 - Server Naming

For a consistent environment you need naming standards, but the idea of a standard implies universal adherence, and in IT there is no such thing. The first thing I looked at was a naming standard for the servers themselves. Thankfully Microsoft has published a recommendation on this (available here), which is what we decided to follow, since it is simple and easy enough to use and remember.

Microsoft's recommended naming convention is aa-bbb-ccccc-dd, where aa is the country code, bbb is the city designation, ccccc is the server role and dd is the number of the server. If the server is part of a cluster, array or similar, the last two characters of the server role indicate which cluster it is part of.

Samples:

The first domain controller in Redmond, USA would be: us-rmd-ad-01

  • us = USA
  • rmd = Redmond
  • ad = Active Directory
  • 01 = First Server

 

The first BizTalk server in the second BizTalk cluster in Cape Town, South Africa would be: za-cpt-bts02-01

  • za = South Africa
  • cpt = Cape Town
  • bts02 = BizTalk Cluster 2
  • 01 = First Server

 

The first MSCRM server in Auckland, New Zealand would be: nz-ack-crm-01

  • nz = New Zealand
  • ack = Auckland
  • crm = MSCRM
  • 01 = First Server

However, this is the only published naming standard I could find, so the naming for databases, ISA rules etc. has all been developed internally, and I can't disclose those.

This also brings to an end this series on HMC hosting, but fear not: I have a quick 3 part mini-series on the top 3 tips I have for managing an HMC environment to keep you busy.

08 Jul 2008

The Zen of Hosting: Part 11 - DNS

The last of the hurdles to overcome for the deployment was the running of the DNS server. This is because we run on a private IP range internally and use ISA to map external IPs and ports to the services we want to publish (i.e. NAT). This lowers the attack surface, because we only expose what is needed, and we can also mix and match servers on the same IP (lowering our IP address usage).

This also means that we not only have internal DNS servers to let the servers and staff find the other servers and services, but we also have to have external DNS servers to let users on the big bad Internet find them. There is a lot of duplication of work in this scenario, as you have to create records on two servers in the best case and four in the worst, and configure them differently, which considerably increases the room for mistakes. The upside is that internal staff do not need to go out of the LAN and back in via the net, or even go through the external firewalls, and we can have different domain names internally and externally, which is great for testing and development, publishing only when needed.
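
To make that duplication concrete, here is the same name being published with different answers on the internal and external DNS servers using dnscmd; the server names and addresses are illustrative:

rem Internal DNS server answers with the private address
dnscmd dns-int /RecordAdd test.com www A 192.168.0.10

rem External DNS server answers with the public address that ISA publishes
dnscmd dns-ext /RecordAdd test.com www A 196.25.0.10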

What I do not understand is why the DNS server team at Microsoft can't take a leaf out of MSCRM 4.0's IFD deployment and let you specify what the internal IP range is, with A/CNAME records for both internal and external ranges. When an internal IP requests a resolution it would get the internal A/CNAME records, and everyone else would get the external ones. This is such a logical thing to do that BIND has had this feature for ages, so come on Microsoft, steal another idea from Linux ;)

One of the design choices for the DNS structure is a concept of mine called IP address abstraction. The idea of DNS is to get us away from IPs, but the problem is that in normal DNS configurations you end up with loads of A records, and the moment you need to change IP addresses you spend days changing them through all the records. With IP address abstraction you take a core domain name and create a single A record for each IP you have.

Examples:

  • internal1.test.com A 192.168.0.1
  • internal2.test.com A 192.168.0.2

Then, everywhere else, you use CNAMEs pointing to those names, regardless of the domain name.

Example:

  • www.customer1.com CNAME internal1.test.com
  • mail.customer1.com CNAME internal2.test.com

The advantage is that if the IPs ever change, you change them in one place and it reflects everywhere, yet the experience for the end user is exactly the same as DNS has always been.
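
The whole pattern scripts nicely with dnscmd; a sketch assuming a DNS server named srv01 hosting the zones above (the names are illustrative):

rem One A record per IP address on the core domain
dnscmd srv01 /RecordAdd test.com internal1 A 192.168.0.1
dnscmd srv01 /RecordAdd test.com internal2 A 192.168.0.2

rem Everything else is a CNAME to those names, whatever the domain
dnscmd srv01 /RecordAdd customer1.com www CNAME internal1.test.com.
dnscmd srv01 /RecordAdd customer1.com mail CNAME internal2.test.com.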

04 Jul 2008

The Zen of Hosting: Part 10 - Windows 2008 Core

Since we had Windows 2008 we just had to try out the Core edition, the version of Windows where Microsoft promised everything would be command-line based. I like to think of it this way: if Vista stole its UI from the Apple Mac, then Win2k8 tried to steal its from Linux...

So before I get into Core, let me first state that Win2k8 is the best server OS Microsoft has ever released. It is amazing how well polished everything is, and the tools that are there are great. Does it compare to Linux servers? Well, in some places it kicks ass and in others it doesn't, but since Linux servers are the de facto standard for command-line based systems, if we compare the command-line features then Microsoft has done a HORRIBLE job.

All that is actually happening is that you are getting the normal command prompt in a window, and they dropped explorer.exe as the shell. In fact explorer.exe does not even get installed, but a lot of our old favourites are there: Ctrl+Alt+Del still brings up the usual menu and Task Manager still works.

Actually, Microsoft dropped so much that the gain in RAM is impressive (our average RAM usage is normally 750 MB, but on Core it is a mere 300 MB), and the shrinkage in attack surface and patch requirements is great.

Getting back to cmd.exe as the shell: this is likely the single biggest mistake of Core. It's not like Microsoft doesn't have a great command-line system, called PowerShell, which they could have used. In fact, so little has been added to the command line that after this experience I went to a Win2k3 machine and was able to do most of this anyway, and it's not hard to replace explorer.exe as the shell in Win2k3. One advantage of doing this Core mock-up on 2k3 is that at least Internet Explorer is there for you to get online for help; Win2k8 Core has no decent help (just the same old crappy command prompt stuff).

Linux has man pages, PowerShell has Get-Help, the console has... Thank the heavens that I was able to use my laptop to get onto the Internet. For example, I had problems with the first two Core boxes trying to run Hyper-V on them; they just gave all kinds of RPC errors. It turned out that I had not set the DNS registration correctly using netsh: I had set it to Primary only and not Both. What the difference is, is beyond me, because using the Windows GUI to set network settings for the last 20 years obviously sets this correctly, so why make it so much tougher?
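
For reference, here is the netsh line in question; a sketch assuming the connection is named "Local Area Connection" and the DNS server is 192.168.0.1 (both illustrative):

rem register=both registers the address under both the primary DNS suffix and the connection-specific suffix
netsh interface ip set dns name="Local Area Connection" source=static addr=192.168.0.1 register=both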

Another interesting feature of Core, which I never hit my head against but learnt about when I attended Win2k8 IIS training that Microsoft ran: the trainer said that on Core you can't run ASP.NET web sites, because Core doesn't have the .NET Framework. This is because the .NET Framework installer needs a GUI. I suspect this is the same reason why PowerShell can't be used, being .NET based and all. But the part I don't understand is that THERE IS A FRIGGING GUI! It's all around the command prompt window!

My recommendation is to avoid Core, as the extra work doesn't make up for the cost of a little bit of extra RAM; rather spend less time setting up the server, more time billing customers, and buy the RAM. Hopefully Windows Server vNext gets it right.

01 Jul 2008

The Zen of Hosting: Part 9 - Hyper-V

As I approach the end of this series I want to highlight some of the technology the hosting platform is built on and some of the experience I gained with it. These last few posts are much shorter than the earlier ones but hopefully provide some quick bite-size info.

So if you look at standard HMC and then add all the technology we have layered on top of it, you would assume there is a building full of servers. The reality is the server room has lots of space and isn't that big. How did we achieve this? Slow applications because we run everything on fewer servers? Not at all.

We bought some seriously powerful HP machines, loaded a ton of RAM and installed Windows 2008; but how does that help with running lots of systems, and doesn't HMC break if it runs on Win2k8 (see way back to part 2)? Well, Win2k8 has the best virtualisation technology Microsoft has ever developed, named Hyper-V. This is seriously cool stuff in that it actually runs prior to Windows starting and virtualises Windows completely (rather than running virtual machines on top of an OS, they run next to it). The performance compared to Virtual Server is not even worth talking about; it basically pushes Virtual Server into the stone age.

It is very fast, and it handles the randomness of server usage (those little spikes when you run multiple machines on one piece of hardware) very well. But not everything is virtualised: there is a monster of an active-active SQL Server cluster (since so much needs SQL), and we have a number of oddities, such as the box which does media streaming, because some specialised hardware can't be used in a virtual machine. A worry when we started with Hyper-V was its beta/RC status... Well, with thousands of hours of uptime logged so far by the servers on it, it has been ROCK solid.
