02 Aug 2008

Ch-Ch-Ch-Changes!

Last week was a special week: it was my last week at The i5 Group, and the week I started at BB&D in the ATC group. I will definitely go into more detail on what that means in the future, but for now know that it means I work with an AMAZING team in a non-client role :)
Before I finish: I will be at TechEd Africa next week and will likely not be posting until Thursday or Friday. If you are attending, come and see my talk on WPF!!
02 Aug 2008

Complex Reporting: Part 3 - Structure

In part 2 I showed what was needed, so now: how do we actually build this report? Well, if your gut said to use a table or matrix, you would quickly hit a problem, because the data that needs to be shown in each field is not constant. A picture may help explain what I mean. If we look at our previous example:


Our fields would have to look like this:


Note how F1Q1Value appears in a different place in each row; try building that in a table or matrix. You just can't! Also, now think about the requirement for subtotals: going horizontal is OK in concept, but going vertical is tough, because you are adding the values of a lot of different fields together.
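To make the placement problem concrete, here is a small sketch (plain Python with invented names, not SRS expressions) of where a commodity's first-year quarter values would land among the twelve quarter columns:

```python
def quarter_column(report_start_fiscal, deal_fiscal1, quarter):
    """Return the 0-based column (out of 12) where the deal's F1Qx value
    belongs: each fiscal year the deal starts after the report's first
    year shifts its values four columns to the right."""
    offset_years = deal_fiscal1 - report_start_fiscal
    if not 0 <= offset_years <= 2:
        raise ValueError("deal does not start inside the rendered window")
    return offset_years * 4 + (quarter - 1)

# A deal starting in the report's first year puts F1Q1Value in column 0;
# a deal starting one year later puts it in column 4.
print(quarter_column(2008, 2008, 1))  # 0
print(quarter_column(2008, 2009, 1))  # 4
```

Which is exactly why a plain table or matrix struggles: the same field maps to a different column in each row.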

So how do you go about building this? Well, the financial year indicators at the top could just be calculated from the value of the drop down and put into three text boxes. The next row, which is the header row for quarters, could also be text boxes or the header row of a table, since it doesn't change. Then we would need a table below that for the commodity names and values, but how do we get the values of the fields to change dynamically based on the commodity? If you go back to part 1, you'll remember I mentioned sub-reports; I will get to those next time.

28 Jul 2008

Complex Reporting: Part 2 - Report Requirements

In part 1 we looked at the requirements at a high level and also at the data structure; in part 2 we will look at the finer details of the requirements. First, the user needs to be able to select, from a drop down containing a list of all possible fiscal years, the first fiscal year they want to see. Next, the report must show blank spaces for commodities which do not feature in that fiscal year, but may feature in future fiscal years. Lastly, we need subtotals for years and commodities.
The drop down is a simple SRS parameter connected to a data set which runs a simple SQL command:
SELECT DISTINCT Fiscal1 AS Fiscal FROM Deals 
UNION
SELECT DISTINCT Fiscal2 AS Fiscal FROM Deals 
UNION
SELECT DISTINCT Fiscal3 AS Fiscal FROM Deals 
and then set the value and name of the drop down to the Fiscal field.
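As a quick sanity check of what that UNION produces, here is a sketch using SQLite from Python (the table and values are made up to match the structure described in part 1):

```python
import sqlite3

# In-memory database with a simplified Deals table (fiscal columns only).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Deals (Deal TEXT, Fiscal1 INT, Fiscal2 INT, Fiscal3 INT)")
con.executemany(
    "INSERT INTO Deals VALUES (?, ?, ?, ?)",
    [("Beer", 2008, 2009, 2010), ("Chickens", 2009, 2010, 2011)],
)

# The same shape of query the report parameter uses: UNION removes
# duplicates, so fiscals shared between deals appear only once.
rows = con.execute(
    "SELECT DISTINCT Fiscal1 AS Fiscal FROM Deals "
    "UNION SELECT DISTINCT Fiscal2 FROM Deals "
    "UNION SELECT DISTINCT Fiscal3 FROM Deals "
    "ORDER BY Fiscal"
).fetchall()
print([r[0] for r in rows])  # [2008, 2009, 2010, 2011]
```

Note how the overlapping years (2009 and 2010) come back once each, which is exactly what you want for a parameter list.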
The next requirement is actually telling us something very subtle, which is at the core of the complexity: if I choose FY06, the report will render FY06, FY07 and FY08, and if a commodity starts in any of those years it must be shown. So we are not ONLY showing what starts in FY06; we are showing things that start in FY07 and FY08 too. This means our final result should look like:


If you are a regular SRS developer, your gut is telling you that it is just a table, or maybe a matrix; unfortunately it's a little more than that. But we will look at that next time.
The last requirement, subtotals, is very easy and logical (especially looking at the image above), so I am not going to go into detail about it.
28 Jul 2008

Team Foundation Server could not resolve the user or group

One of my recent tasks was to set up a TFS 2008 server, migrate our VSS system across to it and, once done, set up the projects and users. Since I have a good knowledge of the systems and had done a TFS 2005 deployment previously (although it was not adopted), I felt confident that the install wouldn't be an issue. I did the usual prep of reading blogs and learning from others, and that did help me avoid some pitfalls.
Next up was the migration of VSS to TFS, which was actually not a major requirement, as it is just there for legacy projects. All active projects would have to check their code into the new TFS projects we planned to create. The key benefit of this is that it would allow us to align with EPM better than the migration tool would allow. I created a project and imported the 1.7 GB of source code into it! It took some time. Then I needed to add the users, and this is where I met a problem.
Regardless of whether I used the command line, the TFS admin tool or the GUI, I kept getting an error: Team Foundation Server could not resolve the user or group. <AD Distinguished Name of the User>. The user or group might be a member of a different domain, or the server might not have access to that domain. Verify the domain membership of the server and any domain trusts.


The AD issue and the TFS issue both revolved around the fact that in our AD the Authenticated Users (AuthUsers) group does not have read permissions on our users and the containers they are in. This is odd to an outside person, because when AD is set up the AuthUsers group does have these permissions, so why would our AD be different, and what are the implications of changing it? The reason for the difference is that our AD is set up according to Hosted Messaging and Collaboration (you can read more about it here), which specifically removes the AuthUsers group's permissions for security reasons (i.e. to prevent users from seeing other customers). Because of this change, the GPO could not access the user accounts, and neither could TFS read what it needed from AD.
Solving this for TFS meant giving AuthUsers read permissions on the users who needed to access TFS and their immediate container, while for AD/GPO it required AuthUsers to have permissions on the container for the users (it doesn't need permissions on the actual users) and all its parent containers. Once those were done, the group policies and TFS started to work 100%.
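For reference, granting that read permission can be done with the dsacls tool. This is a sketch only, with a placeholder DN borrowed from the HMC samples elsewhere on this blog, so adjust it to your own structure (and remember HMC removed these permissions deliberately, so weigh the trade-off first):

```shell
rem Grant Authenticated Users generic read (GR) on the container holding
rem the TFS users, inheriting to child objects (placeholder DN - replace it).
dsacls "OU=MyCustomer,OU=MyReseller,OU=Hosting,DC=litware,DC=local" /I:T /G "NT AUTHORITY\Authenticated Users:GR"
```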
That's great, but what is the impact on the hosted environment, and is this the best way to solve the issue? Well, this does open up a security risk in that customers could see other customers simply by logging into the domain. For us this is mitigated: we are not offering hosted TFS, this is just for our own internal staff, who are aware of who our customers are, and we aren't worried if our customers know about our staff. It is also very difficult for a customer to see other customers, as most applications don't allow it, and those that do allow it in their standard configurations, such as MSCRM, ignore it in an HMC environment.
As to whether this is the best way to solve the issue, my view is that it is not. You should run a separate AD for each customer: a normal AD system which runs at the client premises, and then, using the Customer Integration component of HMC (which is based on MIIS), sync the customer AD to the hosted AD. This means that you could run GPOs and TFS on the customer site without the need to change anything on the hosted side.
24 Jul 2008

Complex Reporting: Part 1 - Introduction

Recently I got the chance to build a report using Microsoft SQL Server Reporting Services 2005 for a customer; that in itself is nothing fantastic. However, this was one of the most complex reports I have ever had to build, for two main reasons:
  1. What needs to be displayed is so simple in concept, that you get misled into thinking it’s easy.
  2. To do this you need to use a component of SRS which is seldom needed, sub-reports.
But before I give away too much, let me explain what the report should show. The customer I am using in this series tracks deals for commodities over a three-year period, broken down by quarter. So they want to know that in year 1, quarter 1 they sold x worth of beer, and so on for all 12 quarters. As they are constantly adding new commodities, year 1 is not the same for each commodity. They may start selling beer in 2008 and chickens in 2009, so beer would only be sold until 2011 but chickens would be sold until 2012. And all we need is a report to show this.
Disclaimer: Most of this has been changed to protect the innocent, so the report images are drawn in Visio and are not real images, the customer doesn’t sell commodities (especially not coffee and beer) and the data is completely faked.

So let's look at the data structure. It's a simple single table (no, I didn't have anything to do with the design of this), which has a Deal field for the name of the commodity; Fiscal1, Fiscal2 and Fiscal3 store the years that relate to each year we are tracking, and the FxQxValue fields hold the value for that year and quarter.

Let's look at how this would look with sample data; this is the same data we will use for the rest of the series:

Next time we will look at the report requirements itself and start to look at how to build it.
21 Jul 2008

IP Address Abstraction, should you use it?

Previously I blogged about a concept called IP address abstraction, IAA for simplicity (see Zen of Hosting pt 11), where I wrote about using CNAMEs in DNS to abstract yourself away from having lots of IP addresses and needing to update lots and lots of DNS records should your IPs change. It seems like a good idea; however, no good idea seems to be a perfect fit in IT anymore :(
In this case the biggest issue is that Common DNS Operational and Configuration Errors (RFC 1912, for those who care) states a few issues, and many an admin may point out that this is the cause of all kinds of things like email breaking; but as we will see, that may not be the case. Let's cover the highlights from RFC 1912 which will be pointed out to you:
A CNAME record is not allowed to coexist with any other data. However, DNS    servers like BIND will see the CNAME and refuse to add any other resources for that name.  Since no other records are allowed to coexist with a CNAME, the NS entries are ignored.  Therefore all the hosts in the podunk.xx domain are ignored as well!
That's a big one, since if you use IAA it will coexist with MX, NS, etc. It also goes on to say:
Don't go overboard with CNAMEs.  Use them when renaming hosts, but plan to get rid of them (and inform your users).  However CNAMEs are useful (and encouraged) for generalized names for servers -- `ftp' for your ftp server, `www' for your Web server, `gopher' for your Gopher server, `news' for your Usenet news server, etc.
[RFC 1034] in section 3.6.2 says this should not be done, and [RFC 974] explicitly states that MX records shall not point to an alias defined by a CNAME.  This results in unnecessary indirection in accessing the data, and DNS resolvers and servers need to work more to get the answer.
This basically goes against everything IAA identifies as a reason for using it :( Lastly, it goes on to state:
Also, having chained records such as CNAMEs pointing to CNAMEs may make administration issues easier, but is known to tickle bugs in some resolvers that fail to check loops correctly.  As a result some hosts may not be able to resolve such names.
Having NS records pointing to a CNAME is bad and may conflict badly with current BIND servers.  In fact, current BIND implementations will ignore such records, possibly leading to a lame delegation.  There is a certain amount of security checking done in BIND to prevent spoofing DNS NS records.  Also, older BIND servers reportedly will get caught in an infinite query loop trying to figure out the address for the aliased nameserver, causing a continuous stream of DNS requests to be sent.
The bit about making administration issues easier is kind of the point of all this. However, there are a few things that the wily admins may not point you to. First off, this was published in Feb 1996! That's 12 years ago; since then the superior DNS software like BIND no longer has the issues that are stated, and to that point, even the inferior DNS software, like that which ships with Windows, doesn't show these issues. That basically nullifies the first and last points, but what about that bit in the middle pointing to RFC 1034 and RFC 974?
Well RFC 974 deals with MX records and routing, so it is similar to the first point but does state:
If the response contains an answer which is a CNAME RR, it indicates that REMOTE is actually an alias for some other domain name. The query should be repeated with the canonical domain name.
So basically, even if you chain CNAMEs, it should not break any email system. RFC 1034 is more about DNS (it's actually called DOMAIN NAMES - CONCEPTS AND FACILITIES) and covers an overview of how it should work without covering the technical details. It was written in Nov 1987 (so even older than RFC 1912) but is not obsoleted by any other RFC. It states:
Of course, by the robustness principle, domain software should not fail when presented with CNAME chains or loops; CNAME chains should be followed and CNAME loops signalled as an error.
Basically, DNS should be robust, and the idea of IAA should work regardless. The one issue I cannot disprove is that it takes additional time and bandwidth to resolve lots of CNAMEs. Then again, in 1996 56k was the blinding speed of the internet; now that is not the case. Bandwidth has increased and latency decreased so much since then that it makes sense to use that additional capacity to make a more stable internet by making administration easier. Hopefully we can soon get some tools to test for loops, which are the biggest issue caused by this structure.
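As a tiny proof of concept for such a loop-checking tool, here is a hedged sketch (plain Python over an in-memory zone dictionary, not a real DNS client) that follows a CNAME chain and flags loops instead of spinning forever:

```python
def resolve_cname_chain(name, zone, max_depth=16):
    """Follow CNAMEs in `zone` (a dict of name -> target, where the final
    target is not itself a key) and report loops instead of looping."""
    seen = []
    while name in zone:
        if name in seen:
            raise ValueError("CNAME loop detected: " + " -> ".join(seen + [name]))
        if len(seen) >= max_depth:
            raise ValueError("CNAME chain too long")
        seen.append(name)
        name = zone[name]
    return name  # the final value (an A record in a real resolver)

zone = {"www.example.com": "web.example.com", "web.example.com": "203.0.113.10"}
print(resolve_cname_chain("www.example.com", zone))  # 203.0.113.10
```

A real tool would issue actual DNS queries, but the loop check is the same idea: remember every alias you have already followed.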
Looking at all of this, I would state that IAA is worth implementing and there is no longer a significant reason not to use it. Hopefully this post will help answer any questions, or be of use when dealing with those admins who haven't seen the light.
21 Jul 2008

HMC tips for Exchange: Part 3 - Fixing GAL issues

It's an unfortunate problem that the GAL integration isn't rock solid with HMC and Exchange, since it is merely controlled by AD schema attributes (see The Zen of Hosting: Part 5 – HMC and Exchange for more info), and it's very easy for this to be screwed up by a number of things. Most common for me is the use of the Exchange PowerShell, which seems to reset the attribute with a lot of its commands. The easiest way to resolve it is with another XML request passed to provtest, in this case the nicely named RepairExchangeObject. Basically, it just needs the domain controller and the LDAP path to the user whose attributes have been screwed up, and off it goes and fixes them.
NOTE: This is for HMC 4.0, 4.5 has a different structure completely. Check the SDK for the message which will give you a sample you can use.
The request looks like this:
<request> 
    <data> 
        <preferredDomainController>Domain Controller</preferredDomainController>
        <path>LDAP Path</path>
    </data>
    <procedure> 
        <execute namespace="Exchange 2007 Provider" procedure="RepairExchangeObject" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

Sample:

<request> 
    <data> 
        <preferredDomainController>srv01</preferredDomainController>
        <path>LDAP://CN=[email protected],OU=MyCustomer,OU=MyReseller,OU=Hosting,DC=litware,DC=local</path>
    </data>
    <procedure> 
        <execute namespace="Exchange 2007 Provider" procedure="RepairExchangeObject" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

17 Jul 2008

HMC tips for Exchange: Part 2 - Adding a distribution list

The second tip is distribution lists, which are also kind of an important thing to get set up. To do this you need to craft a CreateDistributionList XML request; this is just an XML file which looks like:

<request> 
    <data> 
        <container>LDAP Path</container>
        <preferredDomainController>Domain Controller</preferredDomainController>
        <managedBy>List Owner</managedBy>
        <name>List Name</name>
    </data>
    <procedure> 
        <execute namespace="Hosted Email 2007" procedure="CreateDistributionList" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

Sample:
<request> 
    <data> 
        <container>LDAP://OU=MyCustomer,OU=MyReseller,OU=Hosting,DC=litware,DC=local</container>
        <preferredDomainController>srv01</preferredDomainController>
        <managedBy>[email protected]</managedBy>
        <name>Triage</name>
    </data>
    <procedure> 
        <execute namespace="Hosted Email 2007" procedure="CreateDistributionList" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

You can then run that on your HMC server using the provtest command. So how do you manage who actually is in the list? Well, this is actually very easy, thanks to Outlook. First, just open an email and type the list name into the To line, then right click it and select Properties:


You can then use the Modify Members… button to add/remove members of this list.


Note: This can ONLY be done by the list owner, whom you specified in the managedBy node when you created the list.

