
Rhodes students, you lucky bunch!

Willy posted about the upcoming trip to Rhodes (see here), which is really interesting. I am not going along for the road trip, but I have had a chance to see what swag will be given away, and let me state that those at Rhodes who attend are going to be the envy of those who do not. Not only are there the standard flyers and free pens, but also a great t-shirt (I got my hands on a development one! There are also three others… but those can be a surprise) and a DVD with over 3.5GB of content. I have been looking at the DVD content and there are not only full copies of the 3 books Willy has written on there, but copies of all the content from S.A. Architect! Personally I rate this swag as good as what you would find at a major event like Tech-Ed!

WSS and audience targeting - Part II

Today is NOT my day for things just working. Besides the demo gods eating every demo I did today and presenting me with excrement to work with, I found out that the super cool WSS audience targeting is broken on every browser except IE :( There is no use in complaining, so the silver lining is that I get a chance to solve the problem and learn something new about browsers.

First off, while XmlHttpRequest is supported by all browsers, the XML DOM implementation is not compatible, so even though you can get the data you can’t work with it easily. The cause is that Firefox and Chrome do not support the selectNodes or selectSingleNode methods, or IE has unique methods which no other browser supports (depending on which camp you are in). selectSingleNode is what I use to parse my results! To solve this I found some code at http://km0.la/js/mozXPath/ which adds the methods to the JavaScript classes. This is great in theory, but it didn’t work for me because my XML comes back with a default namespace. With the code I found, selectSingleNode always returned null because the XPath needed a namespace prefix (i.e. /default:user), which IE can’t work with. So that meant not only implementing a custom namespace resolver, which I found out about on developer.mozilla.org, but also doing a check and running two XPath queries.

Something I didn’t mention in the first post is security, and how secure or insecure this method is. The biggest attack vector is that it runs client side, so the LoadXmlDoc method, or even just the length check on the username, can easily be changed to show the logged-in content to users who are not logged in! Basically it is not secure, but that doesn’t mean it opens a security hole, because you need to think about what is shown. In my case I made sure what is shown is the bare minimum: just links to other pages. If an attacker gets a link they shouldn’t, it does not worry me, because the change password page, which signed-in people can click through to, is protected by WSS’s security. So even if someone were to try and access it they would be denied! The page itself also has security on it to help prevent issues. The point I want to emphasise is that this is NOT a security model, but a model for a better user interface.

 

<script type="text/javascript">
    // Created by RMacLean - for comments email [email protected]
    // Partially from http://www.w3schools.com/XML/xml_http.asp
    // Partially from http://en.wikipedia.org/wiki/Xmlhttprequest
    // Partially from http://km0.la/js/mozXPath/
    // Partially from http://developer.mozilla.org/en/Introduction_to_using_XPath_in_JavaScript#Implementing_a_User_Defined_Namespace_Resolver
 
    // check for XPath implementation
    if (document.implementation.hasFeature("XPath", "3.0")) {
 
        // prototyping XMLDocument.selectNodes
        XMLDocument.prototype.selectNodes = function(cXPathString, xNode) {
            if (!xNode) { xNode = this; }
 
            // resolve any prefix to the default namespace the page returns
            function resolver() {
                return 'http://schemas.saarchitect.net/ajax/2008/09/user';
            }
 
            var aItems = this.evaluate(cXPathString, xNode, resolver, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
            var aResult = [];
            for (var i = 0; i < aItems.snapshotLength; i++) {
                aResult[i] = aItems.snapshotItem(i);
            }
            return aResult;
        }
        // prototyping Element.selectNodes
        Element.prototype.selectNodes = function(cXPathString) {
            if (this.ownerDocument.selectNodes) {
                return this.ownerDocument.selectNodes(cXPathString, this);
            }
            else { throw "For XML Elements Only"; }
        }
 
        // prototyping XMLDocument.selectSingleNode
        XMLDocument.prototype.selectSingleNode = function(cXPathString, xNode) {
            if (!xNode) { xNode = this; }
            var xItems = this.selectNodes(cXPathString, xNode);
            if (xItems.length > 0) {
                return xItems[0];
            }
            else {
                return null;
            }
        }
 
        // prototyping Element.selectSingleNode
        Element.prototype.selectSingleNode = function(cXPathString) {
            if (this.ownerDocument.selectSingleNode) {
                return this.ownerDocument.selectSingleNode(cXPathString, this);
            }
            else { throw "For XML Elements Only"; }
        }
    }
 
    // Provide the XMLHttpRequest class for IE 5.x-6.x:
    if (typeof XMLHttpRequest == "undefined") XMLHttpRequest = function() {
        try { return new ActiveXObject("Msxml2.XMLHTTP.6.0") } catch (e) { }
        try { return new ActiveXObject("Msxml2.XMLHTTP.3.0") } catch (e) { }
        try { return new ActiveXObject("Msxml2.XMLHTTP") } catch (e) { }
        try { return new ActiveXObject("Microsoft.XMLHTTP") } catch (e) { }
        throw new Error("This browser does not support XMLHttpRequest.")
    };
 
    var xmlhttp;
 
    function loadXMLDoc(url) {
        xmlhttp = new XMLHttpRequest();
 
        if (xmlhttp != null) {
            xmlhttp.onreadystatechange = state_Change;
            xmlhttp.open("GET", url, true);
            xmlhttp.send(null);
        }
        else {
            alert("Your browser does not support XMLHTTP.");
        }
    }
 
    function state_Change() {
        if (xmlhttp.readyState == 4) {// 4 = "loaded"
            if (xmlhttp.status == 200) {// 200 = OK
                var username = "";
 
                if (xmlhttp.responseXML.selectSingleNode('//user') == null) {
                    username = xmlhttp.responseXML.selectSingleNode('//myns:user').getAttribute('username');
                }
                else {
                    username = xmlhttp.responseXML.selectSingleNode('//user').getAttribute('username');
                }
                if (username.length > 0) {
                    // user logged in
                    document.getElementById('resultText').innerHTML = '<P><A href="/memberPages/changepassword.aspx">Change Password</A></P>';
                }
                else {
                    // anonymous
                    document.getElementById('resultText').innerHTML = '<P><A href="/Pages/signup.aspx">Signup</A><BR><A href="/Pages/forgotpassword.aspx">Lost Password</A></P>';
                }
            }
            else {
                alert("Problem retrieving XML data");
            }
        }
    }
 
    loadXMLDoc("/Pages/loggedinuser.aspx");
</script>
 
<span id="resultText">Loading...</span>

Podcasting Kit for SharePoint + Sub Site = It can be done

Recently I aged about a hundred years due to my intention to deploy the August 2008 release of the Podcasting Kit for SharePoint on a sub site; there is so much hard coded into it that expects it to be at the root (you can vote on that link for the work item to fix this). Being stubborn and trying to show my elite SharePoint skills to all around me, I did not let little things like hard-coded values stop me. No, I gave up a few years of my boyish good looks to get it working, and in the end I did :) It’s not elegant, but it works and should tide you over until a fixed release comes along.

First off, let me say how great the August release is compared to the July release. I actually don’t think you could have got this to work on the July release; there is that much cleanup and streamlining in it. One of the things they did was split the documentation from a single monolithic beast into a lot of smaller, edible chunks. This is also a double-edged sword for first timers, because it means you go through the install doc and think you are done… when you aren’t, but I shall cover that later on.

For the tale assume the following facts:

  • SharePoint is deployed to a site available at http://intranet
  • We want the Podcasting kit to be available at http://intranet/sites/multimedia
  • I have created a life jacket file which contains versions of all the files I changed, which may be of use to you. Note I have not tested it, so it may not work. Best case, a simple find and replace and upload and you are in business; worst case, you need to follow the steps to fix your own files. Download the life jacket file HERE
  • I am insane and this could all be wrong and my mind has made me believe it works.

Now I want to tell my tale of heroism at defeating bugs so that future crazy people don’t need to do this themselves. If you have the August 08 install guide and are following along (good idea, since that’s what I am doing to write this), I assume you have done your prerequisites, have a site collection as a sub site (i.e. http://intranet/sites/multimedia) and are at Installation Method #1 (Easy Method). “Easy method” should in itself be a sign of the devil which awaits, because anything labeled easy never is. What you need to do is run the install against the root site (i.e. ssm.exe install PKSFull.xml http://intranet) and then uninstall it (i.e. ssm.exe uninstall PKSFull.xml http://intranet). I know it seems pointless, but the installer doesn’t clean up well and leaves a few files behind (the key stuff it leaves is CSS, XSL and some JS files), and having these files at the root helps us later on. Now run the installer again, but this time against your site collection (i.e. ssm.exe install PKSFull.xml http://intranet/sites/multimedia).

The next steps in the guide (the ratings DB, media encoder and feature activation) work as documented. The user interface steps also work as expected, but make sure you “fix” their links to include your site collection. For example:

  • For SmartPhone access, make the URL /sites/multimedia/mobilepages/pksmobilehome.aspx
  • For Upload Podcasts, make the URL /sites/multimedia/PKS Podcasts/NewForm.aspx?RootFolder=%2FPKS%20Podcasts&Source=/sites/multimedia/pages/pkshomepage.aspx

Moving swiftly along, the master and welcome pages instructions should work fine, and that leads us to the first big hurdle: the SmartPhone page, because this is where the hard coding comes in. See, the PKS uses a special web part called the Content Query Override web part, which is like the out of the box Content Query web part but gives you access to all the properties you couldn’t normally get to. However, the PKS team set these up on their side, and when you import it you are assured of doom and destruction if you are on a sub site. One of the properties that is hard coded is the path to the web URL for the site. If you simply open the pksmobilehome.aspx page, find the weburl tag and change it to your site path (in my case /sites/multimedia) prior to uploading the file, as per the document, you will be fine. You can then continue through the document, happily following the sections on Profiles, Silverlight 2.0, Rating and Commenting, External File Store, and Media Encoder Service, to bring yourself merrily to the Pages section.

Before heading into the pages section we need to fix up the XSL files so they point correctly. You can find these in All Site Content –> Style Library –> XSL Style Sheets. Best is to switch to explorer view, get all the files out of there, and open them up in a text editor which allows Find & Replace across multiple files (I used Visual Studio, how developer of me) to fix the URLs up. There are hundreds of the URLs, so I suggest doing a find and replace to fix them. As I said earlier, all the changed files are in the life jacket file, including these. It may be easier to use mine, since you just need to search for /sites/multimedia and replace it with whatever your URL is. Use it, don’t use it, whatever. Once done, re-upload the files and then make sure you check each file in as a major version.

The reporting page works great, but you will find you are stuck at the podcasterdetail.aspx page because you can’t actually get to those settings. Once again this is because the web part is the Content Query Override web part. This one is a little worse than the mobile page because it has been pre-configured, and pre-configured incorrectly. To solve it, prior to editing the settings, export the web part and save it to disk, then open it up in your favorite text editor and make sure all the paths are correct, those being:

  • ItemXslLink
  • WebUrl
  • MainXslLink
  • ItemXslLinkOverride
  • MainXslLinkOverride
  • Xsl

Once you have done that, you can upload the web part (either to the web part gallery or directly on the page) and you should now be able to work with it successfully.
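To give a feel for the edit, here is a hedged Node.js sketch that prefixes root-relative values of those properties with the sub site path. The property-element shape is my assumption about the exported .webpart XML, so verify it against your own export before trusting it:

```javascript
// Path-carrying properties of the exported Content Query Override web part.
const pathProps = ['ItemXslLink', 'WebUrl', 'MainXslLink',
                   'ItemXslLinkOverride', 'MainXslLinkOverride', 'Xsl'];

// Prefix root-relative values (those starting with "/") with the sub site path.
function patchWebPart(xml, sitePath) {
  for (const prop of pathProps) {
    const re = new RegExp('(<property name="' + prop + '"[^>]*>)/', 'g');
    xml = xml.replace(re, '$1' + sitePath + '/');
  }
  return xml;
}

// Example property element as I assume it appears in the exported file.
const sample = '<property name="WebUrl" type="string">/Style Library/XSL Style Sheets</property>';
const patched = patchWebPart(sample, '/sites/multimedia');
```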

Moving to podcastdetail.aspx, make sure you set the URL correctly for the Xslt Override Location on the Content Rating-Review Results web part. You can/should skip steps 9 to 15 (about the content editor web part which shows the edit link), since it references a JS file I have yet to be able to find and fix, so it will point to the wrong URL and will not work. If you do fix this, please let me know. Setting up the web connections on the page should be fine, and so should the search. If you do not have the metadata properties, like in my case, please read the troubleshooting guide. It’s likely you assumed too much and thought life was easy… easy install and all ;)

That should take you to the end of the guide, but not the end of our tale. If you recall, I mentioned the double-edged sword of splitting the documentation; well, the next step is not obvious. You should now go through the How To Apply Security Settings file and follow that, because if you don’t you will have lots of crap. You should also be aware of a bug in the PKS which can prevent non-admin users from viewing/downloading content; see here for details on that.


Right, now that the security is sorted we can get back to getting the site working as a sub site. The next step is to fix the configuration, so go to All Site Content and then to Content Rating Configuration Settings. You were here earlier; find the item you copied from before (the one that is a GUID@GUID) and make sure the value points correctly. Now head back to All Site Content and go to PKS Configuration Settings and set the various values in there to point to the correct URLs. This includes:

  • DownloadTracking.Location
  • ErrorHandling.InvalidMediaFile.Image
  • ErrorHandling.NoVideoStream.Image
  • Thumbnail.Custom.Images (note you have multiple URLs in the value here, make sure you get them all).

 

Now you have the main page pkshomepage.aspx which suffers from the same issue as the podcasterdetail.aspx page in that the content query override webpart is incorrectly configured. You can follow the same procedure as before (export, edit, upload) to solve it.

At this point everything should look like it works fine, but there is a bug lurking under the covers. Any time anyone watches a video, the download count stored on the file entry in the PKS Podcasts list is incremented. This update of the list item causes the File URL to change: even though it originally pointed correctly to /sites/multimedia/_layouts/MSIT.customfiles/Download.aspx?… it gets updated to point to the root site again, in other words /_layouts/MSIT.customfiles/Download.aspx?…



I am not sure of all the circumstances around the bug (it doesn’t seem to affect admins). To solve this I wrote a quick ASP.NET page which will redirect requests back to the correct URL; the file is in the life jacket file and is called downloads.aspx. Now, if you have a different sub site path (i.e. not /sites/multimedia like me), then you need to open that file in a text editor and change the URL as needed.


Important Notes:

  • YOU DO NOT NEED VISUAL STUDIO FOR THIS. The file is a single ASPX page with the code in it, the server will compile automatically at runtime.
  • The fact it points to a file called realdownload.aspx is correct.

Now get onto your SharePoint server (if you aren’t there already) and navigate to C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\template\layouts\MSIT.CustomPages. This folder should have a single file called download.aspx, which you will rename to realdownload.aspx; then copy the download.aspx file from the life jacket file into this folder. Now all requests will hit the custom file, which will redirect them back to the correct file. This should fix up the issue caused by the bug.

Now finally at this point you are actually done! You can now go through the documents on workflow and series and get all your content up and (hopefully) enjoy your Podcasting kit!

I want to thank a few people who helped me along in getting this right namely Michael Gannotti, Zlatan Dzinic, and of course the PKS team who responded to my posts on the site!

Slide.Show and SharePoint: Part II - Getting the data

This is a multi-part series. The other parts can be found at

In part I we focused on getting Slide.Show to work with the content editor web part (CEWP); now the next step is getting the image information out of the picture library so Slide.Show can display it. The first point you’ll need to understand is how Slide.Show knows where to find the images. It does it with a DataProvider, which is really an XML file reader. In the default configuration it looks for the information in a file named Data.XML in the same directory as the configuration file, but this can be overridden with a custom DataProvider, like the Flickr one which was used in the last step of part I. Unfortunately there is no SharePoint data provider yet, so we need to either roll our own version or provide the SharePoint data in a format Slide.Show can work with natively. Both require development, but I prefer the second option because it means that I am not investing massively in Slide.Show; rather, I am investing my time in a tool to get the data out and then a small bit of time into formatting it for Slide.Show. The advantage of this is that should a better solution come along in the future, I can easily change the formatting part and have it work with the newer system. The other advantage of the second option is a personal one: I prefer C# development (which I will build the provider in), with all of Visual Studio’s helping and documentation, to JavaScript development (which is what would have to be used for the DataProvider).

The XML format which Slide.Show requires is something like this:

<data>
    <album ...>
        <slide .../>
        <slide .../>
        <slide .../>
        ...
    </album>
</data>

SharePoint won’t give us the data in that format, so to solve this I wrote an ASP.NET page that does the following:

Get –> Convert –> Provide

The Get stage is fairly logical: we connect to the Lists web service in SharePoint and, using the GetListItems method, retrieve the items from the list, which come back as an XmlNode. That is not the easiest thing to work with, for two reasons. Firstly, SharePoint has a lot of namespaces, so using XPath to extract the items is tricky and annoying; thankfully John Wood posted a nice way to work with the XmlNamespaceManager, which I use. The second reason comes back to why I am doing this in C# as opposed to a dedicated Slide.Show provider. Once I convert the data to an easier-to-work-with structure, I can output it to various systems a lot quicker. So in this case I created a simple class which contains a few properties to define the slide image, and pop the instances into a List<T>. I am a bit of a List<T> fan boy, using it where I can because it is fast and allows me to use Linq easily (yes, I know I can use Linq to XML, but I prefer the rich integration in VS of Linq to Objects, and the thought of SharePoint’s XML scares me), and using Linq will allow for some nice features later on.

(Class diagram of the slide class)

The last stage, Provide, is where I take my list of slides and use it to build up an XmlDocument which I return. Building the XmlDocument is not rocket science (just a couple of foreaches really), but returning it may be of interest to some people, so I will cover that briefly. What I do is drop the Page_Load method of the ASP.NET page and work in the Render method, which lets me get into the page life cycle earlier and control a few more parts of the output using the Response class. There are only two things I really care about. First is Response.ContentType, which specifies to the caller (normally your browser, but in this case Slide.Show’s DataProvider) what the content is; we set that to text/xml, and it is one of the first things I do. Then one of the last things I do is send the XmlDocument.OuterXml to the browser using Response.Write. Provided I have done all my stages correctly, I should be able to browse to the ASP.NET page and see XML formatted as I wanted it.
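As a rough illustration of those foreaches (in JavaScript rather than the C# I actually used, and with made-up property names like title and imageUrl), the Provide stage boils down to something like:

```javascript
// Escape the handful of characters that matter inside attribute values.
function escapeXml(value) {
  return String(value).replace(/&/g, '&amp;').replace(/</g, '&lt;')
                      .replace(/"/g, '&quot;');
}

// Build the <data>/<album>/<slide> shape Slide.Show expects.
function buildSlideShowXml(albums) {
  let xml = '<data>';
  for (const album of albums) {
    xml += '<album title="' + escapeXml(album.title) + '">';
    for (const slide of album.slides) {
      xml += '<slide image="' + escapeXml(slide.imageUrl) + '"/>';
    }
    xml += '</album>';
  }
  return xml + '</data>';
}
```

In the ASP.NET page the equivalent string is written out with Response.Write after Response.ContentType has been set to text/xml.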


Part of building this also means making it reusable, so there are a couple of things I did. First off, the security is handled using built-in Windows Authentication, which may be a problem on web-facing sites, but it does mean that on intranet sites the security of the list items is maintained, which is vitally important. This is easily done by getting the configuration on IIS right (which we will cover in part III) and setting Lists.UseDefaultCredentials to true. The next two vital configuration items are the URL of the Lists web service and the name (or GUID) of the list we want. These are passed as mandatory query string parameters (failure to provide them throws an ArgumentException), so your most minimalistic query string would look like this:

http://sharepoint/addon/slideshow.aspx?url=http://sharepoint/site/_vti_bin/lists.asmx&list=Photo%20Gallery

Breaking that down, it has the following parts:

  • url : the full URL of the Lists web service (lists.asmx) on the site which holds the list.
  • list : the name (or GUID) of the list to pull the items from.

I have added support for a few extra optional items to the query string as well which allows you to tailor the results:

  • view : Used to specify the view name (or GUID) you want returned. Leaving it out returns the default view. Type is a string.
  • limit : Used to specify the number of items to return. Default is all items. Type is an integer.
  • recurse : Used to specify if you want to recurse into folders or just return the items from the root. Default is to recurse. Type is a boolean.
  • group : Used to create Slide.Show albums based on the folders. Default is false. Type is a boolean.
  • random : Used to return the items (not albums, if group is true) in a random order. Default is true and type is boolean.

An example of using some of those options would look like this: http://sharepoint/addons/slideshow.aspx?url=http://sharepoint/site/_vti_bin/lists.asmx&list=Photo%20Gallery&group=true&random=false&limit=20
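Since the options are all plain query string parameters, assembling such a URL can be sketched like this (a Node.js illustration; the helper is mine, not part of the provider, and note that URLSearchParams percent-encodes the nested lists.asmx URL, which ASP.NET decodes automatically when reading Request.QueryString):

```javascript
// Build the provider URL from the mandatory and optional parameters.
function providerUrl(page, listsUrl, list, options) {
  const params = new URLSearchParams({ url: listsUrl, list: list, ...options });
  return page + '?' + params.toString();
}

const full = providerUrl(
  'http://sharepoint/addons/slideshow.aspx',
  'http://sharepoint/site/_vti_bin/lists.asmx',
  'Photo Gallery',
  { group: true, random: false, limit: 20 });
```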


So the default provides all images (regardless of folders) in random order, which looks great with the standard Slide.Show settings (lots of images, randomly displaying, etc.).

That is the hard part done; the last part is getting this web application deployed on the server (some things to be aware of) and configuring Slide.Show to use it, which is surprisingly easy.

WSS and audience targeting

If you follow Willy’s blog you would have seen a post mentioning I have been a busy little person recently (if you haven’t seen it, you can go here to read it). One of the things about SAArchitect.net is that it does not run on the big boys’ edition of SharePoint; it actually runs on WSS. However, the thing people commonly forget about WSS is that it can actually do everything its brother can do; it just does not have all the templates and the fiddly bits out of the box. This means the time/cost/pain to get to the same point as its brother is a bit/minor/average/excessively more. One of the fiddly little bits which is missing is audience targeting, which I think is super sexy for public web sites, especially when you want to show different things to users who have logged in and those who haven’t. An example is on the new SAArchitect site: when you are logged in it should let you change your password, and if you aren’t logged in it should let you apply or request a forgotten password.

So putting my developer cap on (or at least adjusting it since it doesn’t ever come off) I fired up my trusty copy of Visual Studio. I needed VS as I had decided to solve this using ASP.NET and a web part which does come with WSS, the Content Editor Web Part (CEWP). Utilizing all of the power of Visual Studio 2008, I toiled away to produce my ASP.NET page with a staggering 22 lines of code which looks something like this:

Note: All the code is available in the file at the end of this post.

<%@ Page Language="C#" ContentType="text/xml" %>

<%@ Import Namespace="System" %>
<%@ Import Namespace="System.Globalization" %>
<%@ Import Namespace="System.Web.UI" %>
<script language="C#" runat="server">
    protected override void OnPreRender(EventArgs e)
    {
        string result = string.Empty;
        if (Page.User.Identity.IsAuthenticated)
        {
            result = String.Format(CultureInfo.CurrentCulture, "<?xml version=\"1.0\" encoding=\"utf-8\" ?>" +
                "<user xmlns=\"http://schemas.saarchitect.net/ajax/2008/09/user\" username=\"{0}\" />", Page.User.Identity.Name);
        }
        else
        {
            result = "<?xml version=\"1.0\" encoding=\"utf-8\" ?>" +
                "<user xmlns=\"http://schemas.saarchitect.net/ajax/2008/09/user\" username=\"\" />";
        }
        Response.Write(result);
    }
</script>

For those out there looking at that code and thinking it looks a little odd: it could be that the code all goes in the .aspx page itself. No code-behind magic or web.configs or app_data folders to weigh this down. At 22 lines of code it’s all very simple: it just checks if you are authenticated; if you are, it returns an XML fragment with your name in it, and if you aren’t, an XML fragment with no name, and even a pretty namespace. In reality it could have been smaller, like <u a="0"/> if you aren’t logged in and <u a="1"/> if you are, but that isn’t very human readable ;) That single file sits on the WSS site in a document library, which is interesting (at least to me) because SharePoint will not let the code run by default, as a way of protecting you from security issues. However, if you are smart/brave/stupid enough you can tell SharePoint to let you run it, using the SharePoint web.config. Microsoft has a great article on how to do that over there.

Now that I have the code to tell me if I am logged in or not, I need to hide/show content based on it, which is easily done using the source editing options of the CEWP to plop in a bit of JavaScript. This bit of JavaScript uses an XMLHttpRequest to call the ASP.NET page and get the result, which I check; based on the result I show the relevant content, as described below:

<script type="text/javascript">
// Partially from http://www.w3schools.com/XML/xml_http.asp
// Partially from http://en.wikipedia.org/wiki/Xmlhttprequest
// Provide the XMLHttpRequest class for IE 5.x-6.x:
if (typeof XMLHttpRequest == "undefined") XMLHttpRequest = function() {
  try { return new ActiveXObject("Msxml2.XMLHTTP.6.0") } catch(e) {}
  try { return new ActiveXObject("Msxml2.XMLHTTP.3.0") } catch(e) {}
  try { return new ActiveXObject("Msxml2.XMLHTTP") } catch(e) {}
  try { return new ActiveXObject("Microsoft.XMLHTTP") } catch(e) {}
  throw new Error("This browser does not support XMLHttpRequest.")
};

var xmlhttp;

function loadXMLDoc(url)
{
  xmlhttp = new XMLHttpRequest();

  if (xmlhttp != null)
  {
    xmlhttp.onreadystatechange = state_Change;
    xmlhttp.open("GET", url, true);
    xmlhttp.send(null);
  }
  else
  {
    alert("Your browser does not support XMLHTTP.");
  }
}

function state_Change()
{
  if (xmlhttp.readyState == 4)
  { // 4 = "loaded"
    if (xmlhttp.status == 200)
    { // 200 = OK
      var username = xmlhttp.responseXML.selectSingleNode('//user').getAttribute('username');
      if (username.length > 0)
      {
        // user logged in
        document.getElementById('resultText').outerHTML = '<P><A href="/memberPages/changepassword.aspx">Change Password</A></P>';
      }
      else
      {
        // anonymous
        document.getElementById('resultText').outerHTML = '<P><A href="/Pages/signup.aspx">Signup</A><BR><A href="/Pages/forgotpassword.aspx">Lost Password</A></P>';
      }
    }
    else
    {
      alert("Problem retrieving XML data");
    }
  }
}

loadXMLDoc("/Pages/loggedinuser.aspx");
</script>
<span id="resultText"/>

Admittedly this is not a solution that the little old lady with the white lunch box will be able to implement while she updates the bridge league site, but this might work just fine for a site about architecture which has a few smart people, a couple of MVP’s and me running around behind it.

You can download the page and the JavaScript in this file: DOWNLOAD

IMPORTANT NOTE: I have posted a follow up article which fixes some issues with this version. For more information see WSS and audience targeting - Part II.

Microsoft CRM Email Troubles

An email asking for help on a problem floated by me and I thought it was an interesting enough one to blog about. First off, all I had was the following error message from the MSCRM Exchange router:

#61042 - An error occurred while processing the outgoing e-mail message with subject "CRM ADMIN TEST MAIL CRM:0004015" for SMTP: http://crmserver/tenant/ for delivery through exchangeserver. System.Net.Mail.SmtpException: Mailbox unavailable. The server response was: 5.7.1 Client does not have permissions to send as this sender

The guy asking the question was trying to figure out what was wrong with CRM, because that’s where the error was coming from. The problem is that he was looking in the wrong place. If you read the error carefully, the last bit is actually the most important: The server response was: 5.7.1 Client does not have permissions to send as this sender

It actually states that the server (not CRM, but Exchange) responded that the client does not have permission to send. This is a fairly common issue, and the resolution falls into one of two options:

  1. The email address in the From field can’t be used because you do not have permission to use it (duh), and you merely need to change the address to one you have permission to use.
  2. Enable SMTP authentication, which should give you the permissions to send the mail.

The first tool for anyone using virtualisation

I am a fan of virtualisation; my comments on an earlier post about it being the future should be a giveaway. As such I have a “few” VHDs around, and sometimes I have problems which the out-of-the-box technology can’t cope with. A few of these are:

  1. Sometimes there is some code, a sample, or a file I need to fix something, and it is sitting inside a VHD file. The pain here is that to get it you need to boot up the VM, log in to it, find the file and copy it to the host machine.
  2. As pointed out before, I work in a team which does not prescribe which tool is best to use. I personally use Hyper-V, but that’s me. VirtualPC is used by some people who are running Vista or XP, VMWare is popular with lots of people (duh) and so is Xen, so how do I get my files to them?
  3. Sometimes I need to test something that runs on a live system, so I need a VM of that system.
  4. Other times I want to take what I have done on a VM and deploy it to a physical machine.

Well, all of the above are just plain annoying, but thankfully there is a program called WinImage! (It comes with a 30-day fully functional evaluation edition, so don’t worry if you need to test it out before you buy it.)
So how does it solve my problems? (The numbering of each point below matches the issues I list above)

  1. It allows you to open a VHD directly, as well as a number of other formats like ISO or VMWare’s formats! It's very simple to use as well (similar interface to most compression tools like WinRAR or WinZIP): you just go to File -> Open, select the VHD and it displays the contents of the VHD almost instantly (the 47Gb one from the previous example in about 2 seconds on my standard spec laptop). Now you browse it like you would any other file system, find the file, right click and hit Extract! There are even options to extract multiple files and keep the folder structure when extracting!
  2. Moving between Hyper-V and VirtualPC is fine (VHD is a compatible format between them); all I have to do is uninstall the additions before I move it. I keep VirtualPC installed for this purpose, but that’s all it gets used for. WinImage allows me to convert between VHD (Virtual Server, VirtualPC and Hyper-V) and VMDK (VMWare). I can also convert from an IMA (image) file to VHD or VMDK, but not back to it. Unfortunately there is nothing for Xen yet, but hopefully as it gets more widely used we will see something.
  3. I can rip an image of a real hard drive to a VHD or VMDK format! And because WinImage doesn’t need to be installed (it’s a single .exe which can be run from the zip archive) it makes it easy to get it onto a server! 
  4. The same is true in reverse: if I have an image I can write it to a physical hard drive, so no more extracting and copying and rebuilding. Once your environment works, just deploy it!


Automated Testing Testimonial

At the end of last week a very interesting document on doing automated testing was added to SA Architect. What makes the document interesting is not the content (there is a lot out there on automated testing) but the fact that it is a document type I have never seen before: a testimonial.

What is a testimonial? It’s written by someone who has worked with the system extensively and then provides their feedback, good and bad, on it. In some respects it’s like a white paper, but where a white paper may be more technology-, brand-, or system-focused, this is people-focused and, to a smaller degree, project-focused.

Congrats to Sacheen for providing this document and doing so well with it (it was the first time he had written anything like this)!!
You can get the document here.
Thanks to Willy for the heads up!

Outlook 2007: POP3 and delayed email or how to avoid downloading RSS feeds too often.

I have been spoilt for a long time by living in an Exchange environment, so when I recently had to use a POP3 environment (even if it was just temporary), I felt like I had gone back 15 years. One of the reasons it feels like I have gone from Vista to Windows 3.1 is that Exchange pushes the mail down (or at least that is how it appears to work – I’m no Outlook expert), so mail arrives instantly when someone sends it.
Unfortunately POP3 is pull based, and mail doesn't come down until Outlook checks for it. The horrible part is that by default it is configured to check only every 30 minutes :( That means if someone's mail just misses your check window (like they would know), it could sit waiting for nearly the full half hour. Thankfully you can change that: first go to Tools -> Options
image
Next go to Mail Setup and click the Send/Receive button.
image
By default you should have one group (called All Accounts) and below that there is an option to Schedule an automatic send/receive every x minutes. In the picture below you'll see it is set to 1 minute, which really helps (close enough to instant that it doesn't matter).
However, if you are like me then you also use Outlook for RSS feeds, and that change means you will now be downloading feeds every minute! You can fix that easily by splitting the RSS and email check times.
To do that, click the Edit button, remove RSS from being included, then click OK. If you are a perfectionist (which you may gather I am from my picture below), you could also click Rename to give the group a clearer name. Next click the New button and include only RSS, not email. Click OK and now you should have two Send/Receive groups. You can now click on RSS in the list and set a separate interval for how often it should check (once an hour is good). Click Close, then OK, and you are done :)
image

Take your toolbox with you

Previously I posted about two ways to get the SysInternals tools onto your machine; well, that great bunch have released yet another method! This time it is a download package which contains all the tools and none of the fluff (so no BSOD screen saver)! A great addition to your toolbox! You can get it from here.
Thanks to Sarah for the heads up on this!