ASP.NET MVC Cheat Sheets

My latest batch of cheat sheets, focused on ASP.NET MVC, is out on DRP. So what is in this set?

ASP.NET MVC View Cheat Sheet

This focuses on the HTML Helpers, URL Helpers and so on that you would use within your views.

ASP.NET MVC Controller Cheat Sheet

This focuses on what you return from your controller and how to use those results, and it also includes a lot of information on the MVC-specific attributes.

ASP.NET MVC Framework Cheat Sheet

This focuses on the rest of MVC - routing, folder structure, the execution pipeline and so on - plus some info on where you can get more info (is that meta info?).

ASP.NET MVC Proven Practices Cheat Sheet

This contains ten key learnings that every ASP.NET MVC developer should know - it also includes links to the experts in this field where you can get a ton more information on those key learnings.

What are the links in the poster?

Think before you data bind
    TinyURL: http://TinyURL.com/aspnetmvcpp1
    Full URL: http://www.codethinked.com/post/2009/01/08/ASPNET-MVC-Think-Before-You-Bind.aspx

Keep the controller thin
    TinyURL: http://tinyurl.com/aspnetmvcpp2
    Full URL: http://codebetter.com/blogs/ian_cooper/archive/2008/12/03/the-fat-controller.aspx

Create UrlHelper extensions
    TinyURL: http://tinyurl.com/aspnetmvcpp3
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-practices-part-1.aspx#urlHelperRoute

Keep the controller HTTP free
    TinyURL: http://tinyurl.com/aspnetmvcpp4
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-practices-part-1.aspx#httpContext

Use the OutputCache attribute
    TinyURL: http://tinyurl.com/aspnetmvcpp5
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-practices-part-1.aspx#outputCache

Plan your routes
    TinyURL: http://tinyurl.com/aspnetmvcpp6
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/03/asp-net-mvc-best-practices-part-2.aspx#routing

Split your view into multiple view controls
    TinyURL: http://tinyurl.com/aspnetmvcpp7
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/03/asp-net-mvc-best-practices-part-2.aspx#userControl

Separation of Concerns (1)
    TinyURL: http://tinyurl.com/aspnetmvcpp8
    Full URL: http://blog.wekeroad.com/blog/asp-net-mvc-avoiding-tag-soup

Separation of Concerns (2)
    TinyURL: http://tinyurl.com/aspnetmvcpp9
    Full URL: http://en.wikipedia.org/wiki/Separation_of_concerns

The basics of security still apply
    TinyURL: http://tinyurl.com/aspnetmvcpp10
    Full URL: http://www.hanselman.com/blog/BackToBasicsTrustNothingAsUserInputComesFromAllOver.aspx

Decorate your actions with AcceptVerb
    TinyURL: http://tinyurl.com/aspnetmvcpp11
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-pract…

VSTS Rangers - Using PowerShell for Automation - Part II: Using the right tool for the job

PowerShell is magically powerful - besides the beautiful syntax and the cmdlets, there is the ability to invoke .NET code, which can lead you down a horrible path: trying to build the super solution with those, writing a few (hundred) lines of code, and ignoring some old school (read: DOS) ways of solving the problem. It is similar to the case of when you only have a hammer, everything looks like a nail - except this is the alpha developer (as in alpha male) version, where you have 50 tools but one is newer and shinier, so that is the tool you just have to use.

So with the VSTS Rangers virtualisation project we are creating a VM which is not meant for production (in fact I think I need to create a special bright pink or green wallpaper for it with that written all over it), and so we want to make connections and general use of this VM super easy. One example of where the DOS version of a command is so much easier than the PowerShell version is allowing all connections in through the firewall.

Windows includes a command-line tool called netshell (netsh is the command). If you just type netsh you get a special command prompt where you can change basically every network-related setting. The genius of its design is that you can type one command at a time, or chain commands up, in the netsh interface (which makes it easy to test), and then once you have a working solution you can provide it as a parameter to the netsh command. So to allow all connections in, the command looks like:

netsh advfirewall firewall add rule name="Allow All In" dir=in action=allow

Once I had that, I slapped it into a PowerShell script - because PowerShell can run DOS commands - and voilà, another script added to the collection, done in one line :)

Another example of this is that I need the machine hostname for a number of things I do in PowerShell, and in DOS there is a great command called hostname. You can easily combine that with PowerShell by assigning it to a variable:

$hostname = hostname

Now I can just use $hostname anywhere in PowerShell and all works well.

Two days in and what's been happening?

Two days into the new S.A. Architect site and what's been happening?

  • Added Willie Roberts to the community leads page - please blame me for forgetting to have him there in the first place.
  • Got our first hosted blogger - Bramzer! Very keen to see what content he provides.
  • Profile images are disabled - we are having a technical issue with them which I hope to fix in the next week.
  • Seeing more traffic than usual for a weekend on the site, which is in part due to the mails and so on, but more than that people are making use of things like the contact forms and so on! This is really great to see.

If you have suggestions or ideas on how to improve the site going forward - I'm looking at you, the two people who voted the site as average :) - please let me know!

VSTS Rangers - Using PowerShell for Automation - Part I: Structure & Build

As Willy-Peter pointed out, a lot of my evenings have been filled with a type of visual cloud computing - PowerShell’s white text on a blue (not quite azure) background that makes me think of clouds - for the purpose of automating the virtual machines that the VSTS Rangers are building.

So how are we doing this? Well, there is a great team involved - it’s not just me - and there are a few key scripts we are trying to build, which should come as no surprise: configuration before and after you install VSTS/VS/required software, tailoring of the environment, and installing software.

The Structure

So how have we structured this? Since each script is developed by a team of people, I didn’t want one super script that everyone works on, fighting conflicts and merges all day; at the same time I don’t want to ship 100 scripts that do small functions and call each other. I want to ship one script but have development done on many. So how are we doing that? Step one was to break the tasks of a script down into functions, assign them to people and assign a number to each - like this:

*Note this is close to reality, but the names have been changed to protect the innocent team members and tasks at this point.

Task          Assigned To      Reference Number
Install SQL   Team Member A    2015
Install WSS   Team Member B    2020
Install VSTS  Team Member C    2025

And out of that, people produce their scripts in the smallest way possible and name them beginning with the reference number - so I get a bunch of files like:

  • 2015-InstallSQL.ps1
  • 2020-InstallWSS.ps1
  • 2025-InstallVSTS.ps1

The Build Script

These all go into a folder, and then I wrote another PowerShell script which combines them into the super script that is actually used. The build script is very simply a PowerShell command that gets the content of each file and outputs it to a single file, which looks like:

Get-Content .\SoftwareInstall\*.ps1 | Out-File softwareInstall.ps1

That handles the combining of the scripts and the reference number keeps the order of the scripts correct.

As an aside, I didn’t use PowerShell for the build script originally - I used the old school DOS copy command - however it had a few bugs.

The Numbering

What’s up with that numbering, you may ask? Well, for the younger generation who never coded with line numbers and GOTO statements, it may seem weird to leave gaps in the numbering rather than numbering sequentially - but what happens when someone realises we have missed something? Can you imagine going to every file after that point and having to change the numbers - EEK! Leaving gaps leaves the ability to deal with mistakes in a non-costly way.

Next, why am I starting with 1015? Well, each script is given a block of 1000 numbers (so pre-install would be 1000, software install 2000, etc…) so that I can look at a script and know what it’s for and whether it’s in the wrong place. I start at 15 because 00, 05 and 10 are already taken:

  • 00 - Header. The header for the file explaining what it is.
  • 05 - Functions. Common functions for all scripts.
  • 10 - Introduction. This is a bit of text that will be written to the screen explaining the purpose of the script and ending with a pause. The pause is important because if you ran the wrong script you can hit Ctrl+C at that point and nothing will have been run.

So that is part I. In future parts I will be looking at some of the scripts and the learnings I have been getting.

What is a South African ID number made up of?

Update 11 August 2011: Want this as an app for your smartphone? Click here
Update 30 March 2012: Details of the racial identifier can be found at: http://www.sadev.co.za/content/south-african-id-numbers-racial-identifier-flag

Cecil Tshikedi asked a great question - what is actually in an ID number:

  • The first six numbers are the birth date of the person in YYMMDD format - so no surprise that my ID number starts with 820716.
  • The next four digits indicate gender: 5000 and above is male, below 5000 is female. So my ID number would have a value of 5000 or greater here.
  • The next number is the country ID: 0 is South Africa and 1 is not. My ID number will have 0 here.
  • The second last number used to be a racial identifier but now means nothing.
  • The last number is a check digit, which verifies the rest of the number.

So for my ID number it would look something like: 820716[5000-9999]0??

There you go, it’s that easy.
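
If you want to play with this in code, here is a minimal C# sketch (not from the original post) that splits an ID number into the pieces described above - note it only reads the check digit, it does not validate it:

using System;

class SouthAfricanIdExample
{
    static void Main()
    {
        // Hypothetical 13-digit ID number, used purely for illustration.
        string idNumber = "8207165000089";

        string birthDate = idNumber.Substring(0, 6);            // YYMMDD
        int genderValue = int.Parse(idNumber.Substring(6, 4));  // 5000 and above = male, below 5000 = female
        bool southAfrican = idNumber[10] == '0';                // 0 = South African
        char checkDigit = idNumber[12];                         // verifies the rest of the number

        Console.WriteLine("Born (YYMMDD): " + birthDate);
        Console.WriteLine("Gender: " + (genderValue >= 5000 ? "male" : "female"));
        Console.WriteLine("South African: " + southAfrican);
        Console.WriteLine("Check digit: " + checkDigit);
    }
}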

Reading and writing to Excel 2007 or Excel 2010 from C# - Part IV: Putting it together

[Note: See the series index for a list of all parts in this series.]

In part III we looked at the interesting part of Excel, shared strings, which is just a central store for unique values that the actual spreadsheet cells can map to. Now how do we take that data and combine it with the sheet to get the values?

What makes up a sheet?

First let's look at what a sheet looks like in the package:

<?xml version="1.0" encoding="UTF-8" standalone="yes" ?> 
<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="x14ac" xmlns:x14ac="http://schemas.microsoft.com/office/spreadsheetml/2008/2/ac">
<dimension ref="A1:A4" /> 
<sheetViews>
<sheetView tabSelected="1" workbookViewId="0">
<selection activeCell="A5" sqref="A5" /> 
</sheetView>
</sheetViews>
<sheetFormatPr defaultRowHeight="15" x14ac:dyDescent="0.25" /> 
<sheetData>
<row r="1" spans="1:1" x14ac:dyDescent="0.25">
<c r="A1" t="s">
<v>0</v> 
</c>
</row>
<row r="2" spans="1:1" x14ac:dyDescent="0.25">
<c r="A2" t="s">
<v>1</v> 
</c>
</row>
<row r="3" spans="1:1" x14ac:dyDescent="0.25">
<c r="A3" t="s">
<v>2</v> 
</c>
</row>
<row r="4" spans="1:1" x14ac:dyDescent="0.25">
<c r="A4" t="s">
<v>3</v> 
</c>
</row>
</sheetData>
<pageMargins left="0.7" right="0.7" top="0.75" bottom="0.75" header="0.3" footer="0.3" /> 
</worksheet>

Well, there is a lot to understand in that XML, but for now we care about the <row> elements (the rows in our spreadsheet) and, within those, the cells - the first of which looks like:

<c r="A1" t="s">
<v>0</v> 
</c>

First, the t="s" attribute is very important: it tells us the value is stored in the shared strings. The index into the shared strings is in the v node - in this example it is index 0. It is also important to note that the r attribute, on both rows and cells, contains the position in the sheet.

As an aside what would this look like if we didn’t use shared strings?

<c r="A1">
<v>Some</v> 
</c>

The v node contains the actual value now and we no longer have the t attribute on the c node.

The foundation code for parsing the data

Now that we understand the structure and we have the Dictionary<int, string> containing the shared strings, we can combine them - but first we need a class to store the data in, then we need to get to the right worksheet part, and a way to parse the column and row info. Once we have those, we can parse the data.

Before we read the data, we need a simple class to put the info into:

public class Cell
{
    public Cell(string column, int row, string data)
    {
        this.Column = column;
        this.Row = row;
        this.Data = data;
    }

    public override string ToString()
    {
       return string.Format("{0}:{1} - {2}", Row, Column, Data);
    }

    public string Column { get; set; }
    public int Row { get; set; }
    public string Data { get; set; }
}
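
As a quick illustration of that ToString override (a hypothetical usage, not from the original post):

Cell example = new Cell("A", 1, "Some");
Console.WriteLine(example);   // prints: 1:A - Some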

How do we find the right worksheet? In the same way as we got the shared strings in part II:

private static XElement GetWorksheet(int worksheetID, PackagePartCollection allParts)
{
   PackagePart worksheetPart = (from part in allParts
                                 where part.Uri.OriginalString.Equals(String.Format("/xl/worksheets/sheet{0}.xml", worksheetID))
                                 select part).Single();

    return XElement.Load(XmlReader.Create(worksheetPart.GetStream()));
}

How do we know the column and row? Well, the c node has that in the r attribute. We’ll pull that data out as part of getting the values; we just need a small helper function which tells us where the column part ends and the row part begins. Thankfully that is easy, since rows are always numbers and columns are always letters. The function looks like this:

private static int IndexOfNumber(string value)
{
    for (int counter = 0; counter < value.Length; counter++)
    {
        if (char.IsNumber(value[counter]))
        {
            return counter;
        }
    }
    return 0;
}

Finally - we get the data!

We get the worksheet, then get the cells using LINQ to XML, and loop over them in a foreach loop. For each cell we get the location from the r attribute, split it into column and row using our helper function, and grab the index from the v node, which we then look up in the shared strings dictionary to retrieve the value. The following code puts all those bits together and should go in your main method:

    List<Cell> parsedCells = new List<Cell>();

    XElement worksheetElement = GetWorksheet(1, allParts);

    IEnumerable<XElement> cells = from c in worksheetElement.Descendants(ExcelNamespaces.excelNamespace + "c")
                                  select c;

    foreach (XElement cell in cells)
    {
        string cellPosition = cell.Attribute("r").Value;
        int index = IndexOfNumber(cellPosition);
        string column = cellPosition.Substring(0, index);
        int row = Convert.ToInt32(cellPosition.Substring(index, cellPosition.Length - index));
        int valueIndex = Convert.ToInt32(cell.Descendants(ExcelNamespaces.excelNamespace +  "v").Single().Value);

        parsedCells.Add(new Cell(column, row, sharedStrings[valueIndex]));
    }

And finally we get a list back with all the data in a sheet!
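
For context, here is a rough sketch of the setup code that would sit above that loop, pulling together the pieces from parts II and III (fileName is a placeholder and ParseSharedStrings is the helper from part III):

// Open the package and get all of its parts (part II).
Package xlsxPackage = Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite);
PackagePartCollection allParts = xlsxPackage.GetParts();

// Build the shared strings dictionary (part III).
Dictionary<int, string> sharedStrings = new Dictionary<int, string>();
PackagePart sharedStringsPart = (from part in allParts
                                 where part.ContentType.Equals("application/vnd.openxmlformats-officedocument.spreadsheetml.sharedStrings+xml")
                                 select part).Single();
ParseSharedStrings(XElement.Load(XmlReader.Create(sharedStringsPart.GetStream())), sharedStrings);

// ...the worksheet parsing code shown above goes here...

xlsxPackage.Close();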

S.A. Architect's new website

Coming this weekend is the new S.A. Architect website! So I thought I would drop a quick teaser to explain the good new features, the bad of the change, and the pretty.

The Good

  • The new site allows members to register without admin intervention.
  • Ability to use Google Analytics for the stats.
  • We can offer blogs to members very easily now.
  • Users can use the traditional username & password or OpenId for authentication.
  • Proper event management system – we can now create an event and have invites sent out, lists of who is coming, how many people they are bringing, etc...
  • Printing support on every page.
  • Automatic Google group registration – so when someone signs up to the site they join the Google group too (new members can untick that option).
  • Looks great in all browsers, no more browser specific options.

The Bad

  • Every user needs to be migrated, so this weekend all users will be moved and their passwords reset. So you will get an email from the system with the new password. 

The Pretty

Below is a sneak peek at what to expect:

[Screenshot: a sneak peek of the new site]

Some flair!

In the last week I decided to add some flair to my site. I spend a lot of time in various communities, be they online, like StackOverflow, or offline, like InformationWorker - so I wanted to add flair for those communities to my website.

However, the problem is that I’m in a lot of them and don’t want a page that just scrolls and scrolls, so I decided to make a rotating flair block, i.e. it shows a piece of flair for a few seconds and then rotates to another one. Thankfully I already have jQuery set up on my site, so this was fairly easy. One thing that caused some headaches was getting away from the idea of having a loop, where I’d show one flair, wait, hide it and show the next one. That is a very bad idea: it means the script runs forever - which is what I want, but not as an endless loop, because browsers will detect that and stop the script. Also, from a performance point of view, JavaScript running in a loop tends to make a browser slow.

The solution is to use events and kick them off in a staggered fashion - thankfully JavaScript natively has a function for that: setTimeout, which takes a string that it will execute and an integer which is the delay in milliseconds to wait. Each badge is scheduled at a staggered offset; when its turn comes it is shown, waits (using setTimeout again), is hidden, and then waits again before being shown once more. Because that cycle is the same for each item, the staggering ensures that they do not overlap and you get a nice, smooth flowing, non-loop loop :)

The technical bits

My HTML is made up of a lot of divs - each one for a flair:

<div class="flair-badge">
    <div class="flair-title">
        <a class="flair-title" href="http://www.stackoverflow.com">StackOverflow.com</a></div>
    <iframe src="http://stackoverflow.com/users/flair/53236.html" marginwidth="0" marginheight="0" frameborder="0" scrolling="no" width="210px" height="60px"></iframe>
</div>

A dash of CSS for the styling, most importantly hiding all of them initially.

And the JavaScript:

var interval = 5000;

$(document).ready(function() {
    var badges = $(".flair-badge").length;
    for (var counter = 0; counter < badges; counter++) {
        setTimeout('BadgeRotate(' + counter + ',' + badges + ')', counter * interval);
    }
});

function BadgeRotate(badge, badgeCount) {
    $(".flair-badge:nth(" + badge + ")").fadeIn("slow");
    setTimeout('BadgeRotateEnd(' + badge + ',' + badgeCount + ')', interval);
}

function BadgeRotateEnd(badge, badgeCount) {
    $(".flair-badge:nth(" + badge + ")").hide();
    setTimeout('BadgeRotate(' + badge + ',' + badgeCount + ')', (badgeCount * interval) - interval);
}

Reading and writing to Excel 2007 or Excel 2010 from C# - Part III: Shared Strings

[Note: See the series index for a list of all parts in this series.]

Excel’s file format is an interesting one compared to the rest of the Office Suite, in that it can store data in two places where most of the others store the data in a single place. The reason Excel supports this is for good performance while keeping the size of the file small. To illustrate the scenario, let's pretend we had a single sheet with some info in it:

[Image: the example sheet]

Now for each cell we need to process the value and the total size would be 32 characters of data. However with a shared strings model we get something that looks like this:

[Image: the same sheet using shared strings]

The result is the same, however we process each value only once and the size is smaller - in this example 24 characters.

The Excel format is pliable, in that it will let you do it either way. Note that the Excel client will always use the shared strings method, so for reading you should support it. This brings up an interesting scenario: say you fill a spreadsheet using direct input and then open it in Excel - what happens? Well, Excel identifies the structure, remaps it automatically, and then when the user wishes to close the file (regardless of whether they have made a change or not) it will prompt them to save it.

The element we loaded at the end of part II is that shared strings file, which in the archive is \xl\sharedStrings.xml. If we look at it, it looks something like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<sst xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
  <si>
    <t>Some</t>
  </si>
  <si>
    <t>Data</t>
  </si>
  <si>
    <t>Belongs</t>
  </si>
  <si>
    <t>Here</t>
  </si>
</sst>
Each <t> node is a value, and it corresponds to a value in the sheet which we will parse later. The sheet cell will contain a value which is the key to the item in the shared strings; the key is a zero-based index. So in the above example the first <t> node (Some) will be stored as 0, the second (Data) as 1, and so on. The code I wrote to parse it looks like this:

private static void ParseSharedStrings(XElement SharedStringsElement, Dictionary<int, string>sharedStrings)
{
    IEnumerable<XElement> sharedStringsElements = from s in SharedStringsElement.Descendants(ExcelNamespaces.excelNamespace + "t")
                                                  select s;

    int Counter = 0;
    foreach (XElement sharedString in sharedStringsElements)
    {
        sharedStrings.Add(Counter, sharedString.Value);
        Counter++;
    }
}

Using this I am parsing the node and putting the results into a Dictionary<int,string>.
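
Calling it is straightforward - a small sketch, assuming the sharedStringsElement we loaded at the end of part II:

Dictionary<int, string> sharedStrings = new Dictionary<int, string>();
ParseSharedStrings(sharedStringsElement, sharedStrings);
// sharedStrings[0] == "Some", sharedStrings[1] == "Data", and so on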

Reading and Writing to Excel 2007 or Excel 2010 from C# - Part II: Basics

[Note: See the series index for a list of all parts in this series.]

To get support for the technologies we will use in this series, we need to add a few assembly references to our solution:

  • WindowsBase.dll
  • System.Xml
  • System.Xml.Linq
  • System.Core

Next make sure you have the following namespaces added to your using/imports:

  • System.IO.Packaging: This provides the functionality to open the files.
  • System.Xml
  • System.Xml.Linq
  • System.Linq
  • System.IO

Right, next there is an XML namespace (not to be confused with .NET code namespaces) that we need to use for most of our queries, http://schemas.openxmlformats.org/spreadsheetml/2006/main, and a second one that we will use seldom, http://schemas.openxmlformats.org/officeDocument/2006/relationships. So I dumped these into a nice static class as follows:

namespace XlsxWriter
{
    using System.Xml.Linq;

    internal static class ExcelNamespaces
    {
        internal static XNamespace excelNamespace = XNamespace.Get("http://schemas.openxmlformats.org/spreadsheetml/2006/main");
        internal static XNamespace excelRelationshipsNamespace = XNamespace.Get("http://schemas.openxmlformats.org/officeDocument/2006/relationships");
    }
}

Next we need to create an instance of the System.IO.Packaging.Package class (from WindowsBase.dll) by calling the static Open method.

 Package xlsxPackage = Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite);

Note: It is at this point that the file is opened. This is important since Excel will LOCK an open file, and when you open a file that is locked a lovely exception is thrown. Equally, make sure you call the Close method on the package when you are done, so you are not the one holding the lock, for example:

xlsxPackage.Close();
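
Since Package implements IDisposable, an alternative (a sketch, not from the original post) is to wrap the whole thing in a using block so the file is released even if an exception is thrown:

using (Package xlsxPackage = Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite))
{
    // Work with the package here - it is closed automatically when the block ends.
}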

When you open the XLSX file manually, the first file you’ll see is [Content_Types].xml, which is a manifest of all the files in the ZIP archive. What is nice about using Packaging is that you can call the GetParts method to get a collection of parts, which are actually just the files within the XLSX file.

[Image: The contents of the XLSX if renamed to a ZIP file and opened.]
[Image: The various files listed in the [Content_Types].xml file.]

What we will use throughout this series is the ContentType property to filter the parts down to the specific item we want to work with; the second image above is where you can identify the value for the ContentType. For example the ContentType for a worksheet is: application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml.
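
If you would rather not rename and unzip the file to find these values, a quick sketch (assuming the xlsxPackage opened earlier) that lists every part and its ContentType:

foreach (PackagePart part in xlsxPackage.GetParts())
{
    // e.g. /xl/worksheets/sheet1.xml - application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml
    Console.WriteLine("{0} - {1}", part.Uri, part.ContentType);
}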

Once we have all the parts of the XLSX file we can navigate through it to get the bits we need to read the content, which involves two steps:

  1. Finding the shared strings part. This is another XML file, which allows string values to be shared between worksheets. It is optional to use when writing, but it does save space and speed up loading. For reading values it is required, as Excel will use it.
  2. Finding the worksheet that we want to read from; this is a separate part from the shared strings.

Let's start with reading the shared strings part; this will be the basis for reading any part later in the series. What we need to do is get the first PackagePart with the type: application/vnd.openxmlformats-officedocument.spreadsheetml.sharedStrings+xml

// Get all the parts (files) in the package, as mentioned above.
PackagePartCollection allParts = xlsxPackage.GetParts();

PackagePart sharedStringsPart = (from part in allParts
    where part.ContentType.Equals("application/vnd.openxmlformats-officedocument.spreadsheetml.sharedStrings+xml")
    select part).Single();

Now we need to get the XML content out of the PackagePart, which is easy with the GetStream method; we feed that stream into an XmlReader so it can be loaded into an XElement. This is a bit convoluted, but it is just one line to get from one type to the other, and the benefits of using LINQ to XML are worth it:

XElement sharedStringsElement = XElement.Load(XmlReader.Create(sharedStringsPart.GetStream()));

Now we have the ability to work with the XElement and do some real work. In the next parts, we’ll look at what we can do with it and how to get from a single part to an actual sheet.