
Noteworthy

I have been very focused on a project during the day and my evenings have been taken up with VSTS Rangers work, so the blog has lagged a bit. Here are some things you should be aware of (if you follow me on Twitter, you have probably heard these already in 140 characters or less):

I was awarded the title of VSTS Rangers Champion - this is a great honour, since it is a peer vote by the external VSTS Rangers (no Microsoft staff) and MVPs for involvement in the VSTS Rangers projects.

The VSTS Rangers shipped the alpha of the integration platform for TFS 2010 - this is important for me because it means some of the bits I have worked on are now public, and I am expecting feedback that will make them better for the beta and the release next year. It is also important because my big contribution to the integration platform, an adapter I will cover in future blog posts, now has a fairly stable base.

Dev4Devs is coming up in just over a week. This is one of my favourite events because it really is an event for passionate developers: they have to give up a Saturday morning for it (no using an event to sneak off work). I will be presenting on Visual Studio 2010, which should be great based on my first dry run in front of an internal audience at BB&D last week. Two more of my BB&D team mates will also be presenting: Zayd Kara on TFS Basic and (if memory serves me) Rudi Grobler on SketchFlow!

The Information Worker user group is really blowing my mind with its growth: on Tuesday we had 74 people attend our meeting. For a community that only had 100 or so people signed up on the website at the beginning of the year, that is brilliant. Thanks must go to my fellow leads: Veronique, Michael, Marc, Zlatan, Hilton and Daniel. We will be having a final Jo’burg event for the year on the 2nd and it will be a fun ask-the-experts session.

NDepend - The field report

I received a free copy of NDepend a few months back, which was timed almost perfectly with the start of a project I was moving on to. However, before I get to that, what is NDepend?

NDepend is a static analysis tool; in other words, it looks at your compiled .NET code and runs analysis on it. If you know Visual Studio code analysis or FxCop then you are thinking of the right kind of thing - except this is not about design or security rules, it is more focused on the architecture of the code.

Right, back to the field. The new project has gone through a few phases:

  • Fire fighting - There were immediate burning issues that needed to be resolved.
  • Analysis - Now that the fires are out, what caused them and how do we prevent them going forward?
  • Hand over - Getting the team who will live with the project up to speed.

Right, so how did NDepend help me? Well, let’s look at each phase, since it helped differently in each.

Note: The screenshots here are not from that project, since it is under NDA - these are from the application I am using in my upcoming Dev4Devs talk.

Fire Fighting

The code base has over 30,000 lines of code and the key bugs were very subtle and almost impossible to duplicate. How was I supposed to understand it quickly enough? Well, first I ran the entire solution through NDepend and started looking at it in the Visual Explorer:

[Image: the NDepend Visual Explorer]

The first thing that helps is the dependency graph in the middle, which visually shows me what depends on what - not just one level but multiple levels - and so on a large project it could look like this:

[Image: component dependencies diagram for a large project]

Now that may be scary to see, but you can interact with it and zoom, click and manipulate it to find out what is going on.

[Image: the dependency graph with the metrics view at the top]

For fire fighting I could sit with the customer’s people and easily see where the possible impact could be coming from. So that gets it down to libraries, but what about getting it down further? Well, I can use the metrics view (those black squares at the top of the image above), where I can change what the squares represent - so maybe the bigger the square, the bigger the method, class, library and so on. Using the logic that beyond some magical point (about 200 lines, according to Code Complete by Steve McConnell) the bigger the method the more likely it is to contain bugs, I could figure out where to spend time looking for the problems first, which meant that the problems were found and resolved quicker.

Analysis

Right, now that the fires were out, I moved on to analysis to make sure that it never happened again. When a project is analysed by NDepend it produces an HTML report with the information above, but also a lot of other information, like this cool chart which shows how much your assemblies are used (horizontal axis) vs. how much a change may affect other parts of the code (vertical axis):

[Image: Abstractness vs. Instability chart]

And that is great for seeing what you should focus on in refactoring (or maybe what to avoid), but there is another part which is more powerful and that is the CQL language, which is like SQL but for code, so you can have queries like “show me the top 10 methods which have more than 200 lines of code”:

WARN IF Count > 0 IN SELECT TOP 10 METHODS WHERE NbLinesOfCode > 200 ORDER BY NbLinesOfCode DESC

Some of these are in the report, but there are loads more in the visual tool and you can even write your own. I found that I ended up writing a few to understand where some deep inheritance was getting used, specifically when it came to exception handling. In the visual tool this is all interactive too, so when you run a query it lights up the dependency tree and the black squares, so you can easily see the problem spots and identify hot spots in the code.
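
As an example of the sort of thing I mean (this is just a sketch in the same CQL style as the query above, not the exact queries from the project), you could flag suspiciously deep inheritance with something like:

WARN IF Count > 0 IN SELECT TYPES WHERE DepthOfInheritance >= 4 ORDER BY DepthOfInheritance DESC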

Hand Over

Moving to the final stage, I had to get the long-term guys up to speed - how do I do that in a way they can understand, without going through the code line by line? Easy: just pop this on a projector and use it as your presentation tool, with a custom set of CQL queries as slides or key points to show. What makes this shine is that it is live and interactive, so when taking questions or having a discussion you can easily move to other parts and highlight those.

All Perfect Then?

No, there are some minor UI issues that are more annoyance than anything else (labels not showing correctly in the ribbon mode, or the fact that you must specify a project extension), but those are easily overlooked. The big problem is that this is not something you can pick up and run with - in fact I had tried NDepend a few years back and decided very quickly that it wasn’t for me. If it wasn’t for a lot more experience, and an immediate need which forced me over that steep initial learning curve, I would never have realised how powerful it is. That brings up another point: the curve is steep, and if you aren’t used to metrics and thinking on an architectural level then this tool will really cause your head to melt. This is not a tool for every team member; it is a tool for the architects and senior devs on your team.

VS2010/TFS2010 Information Landslide Begins

Yesterday (19th Oct) the information landslide for VS2010 & TFS2010 began with a number of items appearing all over:

Two new Visual Studio snippets

I’ve been working on an interesting project recently and found that I needed two pieces of code a lot, so what better than wrapping them as snippets.

What are snippets?

Well, if you start typing in VS you may see some options with a torn-paper icon; if you select one of those and hit Tab (or hit Tab twice, once to select and once to invoke) it will write code for you! These are contained in .snippet files, which are just XML files in a specific location.

[Image: snippet options shown in IntelliSense with the torn-paper icon]

To deploy these snippets, copy them to your C# custom snippets folder, which should be something like C:\Users\<Username>\Documents\Visual Studio 2008\Code Snippets\Visual C#\My Code Snippets

You can look at the end of this post for a sample of what the snippets create, but let’s have a quick overview of them.

Snippet 1: StructC

Visual Studio already includes a snippet for creating a struct (the shortcut is also struct), however it is very bland:

[Image: the default struct snippet output]

StructC is a more complete implementation of a struct, mainly so that it complies with FxCop requirements for a struct. It includes:

  • GetHashCode method
  • Both Equals methods
  • The positive and negative equality operators (== and !=)
  • Lots of comments

which all in all comes in at 74 lines of code, rather than the three you got previously.

Warning - GetHashCode uses reflection to figure out a hash code, which may not be best for all scenarios. Please review it prior to use.
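
If the reflection cost (or behaviour) bothers you, a hand-rolled version is easy enough - this is just a sketch assuming a hypothetical struct with an Id and a Name member, so adjust it to your own fields:

public override int GetHashCode()
{
    unchecked
    {
        // Combine the members that make up this value's identity (Id and Name are placeholders)
        int hash = 17;
        hash = (hash * 23) + Id.GetHashCode();
        hash = (hash * 23) + (Name != null ? Name.GetHashCode() : 0);
        return hash;
    }
}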

Snippet 2: Dispose

If you are implementing a class that needs to implement IDisposable, you can use the option in VS to implement the methods.

[Image: Visual Studio’s option to implement the IDisposable interface]

Once again, from an FxCop point of view it is lacking, since you just get the Dispose method. Instead of doing that, you can use the dispose snippet, which produces 41 lines of code containing:

  • Region for the code - same as if you used the VS option
  • Properly implemented Dispose method which calls Dispose(bool) and GC.SuppressFinalize
  • A Dispose(bool) method for cleanup of managed and unmanaged objects
  • A private bool variable to make sure we do not call dispose multiple times.
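
One thing to note: the snippet covers the common case where the class only holds managed resources. If your class directly owns unmanaged handles, you would typically also add a finalizer that routes to the same cleanup path - something like this, using a hypothetical MyResource class:

~MyResource()
{
    // The finalizer only runs if Dispose() was never called; pass false because
    // managed objects may already have been collected at this point.
    Dispose(false);
}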

StructC Sample

/// <summary></summary>
struct MyStruct
{
    //TODO: Add properties, fields, constructors etc...

    /// <summary>
    /// Returns a hash code for this instance.
    /// </summary>
    /// <returns>
    /// A hash code for this instance, suitable for use in hashing algorithms and data structures like a hash table. 
    /// </returns>
    public override int GetHashCode()
    {
        int valueStorage = 0;
        object objectValue = null;
        // Note: PropertyInfo requires a using System.Reflection; directive at the top of the file
        foreach (PropertyInfo property in typeof(MyStruct).GetProperties())
        {
            objectValue = property.GetValue(this, null);
            if (objectValue != null)
            {
                valueStorage += objectValue.GetHashCode();
            }
        }

        return valueStorage;
    }

    /// <summary>
    /// Determines whether the specified <see cref="System.Object"/> is equal to this instance.
    /// </summary>
    /// <param name="obj">The <see cref="System.Object"/> to compare with this instance.</param>
    /// <returns>
    ///     <c>true</c> if the specified <see cref="System.Object"/> is equal to this instance; otherwise, <c>false</c>.
    /// </returns>
    public override bool Equals(object obj)
    {
        if (!(obj is MyStruct))
            return false;

        return Equals((MyStruct)obj);
    }

    /// <summary>
    /// Equalses the specified other.
    /// </summary>
    /// <param name="other">The other.</param>
    /// <returns></returns>
    public bool Equals(MyStruct other)
    {
        //TODO: Implement check to compare two instances of MyStruct
        
        return true;
    }

    /// <summary>
    /// Implements the operator ==.
    /// </summary>
    /// <param name="first">The first.</param>
    /// <param name="second">The second.</param>
    /// <returns>The result of the operator.</returns>
    public static bool operator ==(MyStruct first, MyStruct second)
    {
        return first.Equals(second);
    }

    /// <summary>
    /// Implements the operator !=.
    /// </summary>
    /// <param name="first">The first.</param>
    /// <param name="second">The second.</param>
    /// <returns>The result of the operator.</returns>
    public static bool operator !=(MyStruct first, MyStruct second)
    {
        return !first.Equals(second);
    }
}                
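
Just to show which of the generated members get called, here is a tiny usage sketch (once the TODOs have been filled in):

MyStruct first = new MyStruct();
MyStruct second = new MyStruct();

Console.WriteLine(first == second);      // operator ==, which calls Equals(MyStruct)
Console.WriteLine(first.Equals(second)); // the overridden Equals(object)
Console.WriteLine(first.GetHashCode());  // the reflection-based hash of the public properties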

Dispose Sample

#region IDisposable Members

/// <summary>
/// Internal variable which checks if Dispose has already been called
/// </summary>
private Boolean disposed;

/// <summary>
/// Releases unmanaged and - optionally - managed resources
/// </summary>
/// <param name="disposing"><c>true</c> to release both managed and unmanaged resources; <c>false</c> to release only unmanaged resources.</param>
private void Dispose(Boolean disposing)
{
    if (disposed)
    {
        return;
    }

    if (disposing)
    {
        //TODO: Managed cleanup code here, while managed refs still valid
    }
    //TODO: Unmanaged cleanup code here

    disposed = true;
}

/// <summary>
/// Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
/// </summary>
public void Dispose()
{
    // Call the private Dispose(bool) helper and indicate 
    // that we are explicitly disposing
    this.Dispose(true);

    // Tell the garbage collector that the object doesn't require any
    // cleanup when collected since Dispose was called explicitly.
    GC.SuppressFinalize(this);
}

#endregion
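
And a small usage sketch, assuming a hypothetical MyResource class that contains the snippet’s members:

using (MyResource resource = new MyResource())
{
    // Work with the resource here; Dispose() runs automatically when the
    // using block ends, even if an exception is thrown.
}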

ASP.NET MVC Cheat Sheets

My latest batch of cheat sheets is out on DRP, and this batch is focused on ASP.NET MVC. So what is in this set:

ASP.NET MVC View Cheat Sheet

This focuses on the HTML Helpers, URL Helpers and so on that you would use within your views.

[Image: ASP.NET MVC View Cheat Sheet]

ASP.NET MVC Controller Cheat Sheet

This focuses on what you return from your controller and how to use them and it also includes a lot of information on the MVC specific attributes.

[Image: ASP.NET MVC Controller Cheat Sheet]

ASP.NET MVC Framework Cheat Sheet

This focuses on the rest of MVC like routing, folder structure, execution pipeline etc… and some info on where you can get more info (is that meta info?).

[Image: ASP.NET MVC Framework Cheat Sheet]

ASP.NET MVC Proven Practises Cheat Sheet

This contains ten key learnings that every ASP.NET MVC developer should know - it also includes links to the experts in this field where you can get a ton more information on those key learnings.

[Image: ASP.NET MVC Proven Practises Cheat Sheet]

What are the links in the poster?

Think before you data bind
    TinyURL: http://TinyURL.com/aspnetmvcpp1
    Full URL: http://www.codethinked.com/post/2009/01/08/ASPNET-MVC-Think-Before-You-Bind.aspx

Keep the controller thin
    TinyURL: http://tinyurl.com/aspnetmvcpp2
    Full URL: http://codebetter.com/blogs/ian_cooper/archive/2008/12/03/the-fat-controller.aspx

Create UrlHelper extensions
    TinyURL: http://tinyurl.com/aspnetmvcpp3
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-practices-part-1.aspx#urlHelperRoute

Keep the controller HTTP free
    TinyURL: http://tinyurl.com/aspnetmvcpp4
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-practices-part-1.aspx#httpContext

Use the OutputCache attribute
    TinyURL: http://tinyurl.com/aspnetmvcpp5
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-practices-part-1.aspx#outputCache

Plan your routes
    TinyURL: http://tinyurl.com/aspnetmvcpp6
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/03/asp-net-mvc-best-practices-part-2.aspx#routing

Split your view into multiple view controls
    TinyURL: http://tinyurl.com/aspnetmvcpp7
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/03/asp-net-mvc-best-practices-part-2.aspx#userControl

Separation of Concerns (1)
    TinyURL: http://tinyurl.com/aspnetmvcpp8
    Full URL: http://blog.wekeroad.com/blog/asp-net-mvc-avoiding-tag-soup

Separation of Concerns (2)
    TinyURL: http://tinyurl.com/aspnetmvcpp9
    Full URL: http://en.wikipedia.org/wiki/Separation_of_concerns

The basics of security still apply
    TinyURL: http://tinyurl.com/aspnetmvcpp10
    Full URL: http://www.hanselman.com/blog/BackToBasicsTrustNothingAsUserInputComesFromAllOver.aspx

Decorate your actions with AcceptVerb
    TinyURL: http://tinyurl.com/aspnetmvcpp11
    Full URL: http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-pract…

Reading and writing to Excel 2007 or Excel 2010 from C# - Part IV: Putting it together

[Note: See the series index for a list of all parts in this series.]


In part III we looked at the interesting part of Excel, shared strings, which is just a central store for unique values that the actual spreadsheet cells can map to. Now how do we take that data and combine it with the sheet to get the values?

What makes up a sheet?

First let’s look at what a sheet looks like in the package:

<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="x14ac" xmlns:x14ac="http://schemas.microsoft.com/office/spreadsheetml/2008/2/ac">
  <dimension ref="A1:A4" />
  <sheetViews>
    <sheetView tabSelected="1" workbookViewId="0">
      <selection activeCell="A5" sqref="A5" />
    </sheetView>
  </sheetViews>
  <sheetFormatPr defaultRowHeight="15" x14ac:dyDescent="0.25" />
  <sheetData>
    <row r="1" spans="1:1" x14ac:dyDescent="0.25">
      <c r="A1" t="s">
        <v>0</v>
      </c>
    </row>
    <row r="2" spans="1:1" x14ac:dyDescent="0.25">
      <c r="A2" t="s">
        <v>1</v>
      </c>
    </row>
    <row r="3" spans="1:1" x14ac:dyDescent="0.25">
      <c r="A3" t="s">
        <v>2</v>
      </c>
    </row>
    <row r="4" spans="1:1" x14ac:dyDescent="0.25">
      <c r="A4" t="s">
        <v>3</v>
      </c>
    </row>
  </sheetData>
  <pageMargins left="0.7" right="0.7" top="0.75" bottom="0.75" header="0.3" footer="0.3" />
</worksheet>

Well, there is a lot to understand in the XML, but for now we care about the <row> elements (which are the rows in our spreadsheet) and, within those, the cells - the first of which looks like this:

<c r="A1" t="s">
  <v>0</v>
</c>

First, that t="s" attribute is very important: it tells us the value is stored in the shared strings. The index into the shared strings is in the v node - in this example it is index 0. It is also important to note that the r attribute on both rows and cells contains the position in the sheet.

As an aside, what would this look like if we didn’t use shared strings?

<c r="A1">
  <v>Some</v>
</c>

The v node now contains the actual value, and we no longer have the t attribute on the c node.
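
To make that concrete, here is a rough sketch of how the reading code could branch on the t attribute - assuming cell is the <c> XElement and sharedStrings is the dictionary we build in part III (the variable names here are just placeholders):

XAttribute typeAttribute = cell.Attribute("t");
string rawValue = cell.Descendants(ExcelNamespaces.excelNamespace + "v").Single().Value;

string data;
if (typeAttribute != null && typeAttribute.Value == "s")
{
    // t="s": the value is an index into the shared strings table
    data = sharedStrings[Convert.ToInt32(rawValue)];
}
else
{
    // No shared string marker: the value is stored in the cell itself
    data = rawValue;
}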

The foundation code for parsing the data

Now that we understand the structure, and we have the Dictionary<int,string> which contains the shared strings, we can combine them. But first we need a class to store the data in, then we need to get to the right worksheet part and a way to parse the column and row info; once we have those, we can parse the data.

Before we read the data, we need a simple class to put the info into:

public class Cell
{
    public Cell(string column, int row, string data)
    {
        this.Column = column;
        this.Row = row;
        this.Data = data;
    }

    public override string ToString()
    {
       return string.Format("{0}:{1} - {2}", Row, Column, Data);
    }

    public string Column { get; set; }
    public int Row { get; set; }
    public string Data { get; set; }
}

How do we find the right worksheet? In the same way as we got the shared strings in part II.

private static XElement GetWorksheet(int worksheetID, PackagePartCollection allParts)
{
   PackagePart worksheetPart = (from part in allParts
                                 where part.Uri.OriginalString.Equals(String.Format("/xl/worksheets/sheet{0}.xml", worksheetID))
                                 select part).Single();

    return XElement.Load(XmlReader.Create(worksheetPart.GetStream()));
}

How do we know the column and row? Well, the c node has that in the r attribute. We’ll pull that data out as part of getting the data; we just need a small helper function which tells us where the column part ends and the row part begins. Thankfully that is easy, since rows are always numbers and columns are always letters. For example, for the cell reference "AB12" the function returns 2, so the column is "AB" and the row is 12. The function looks like this:

private static int IndexOfNumber(string value)
{
    for (int counter = 0; counter < value.Length; counter++)
    {
        if (char.IsNumber(value[counter]))
        {
            return counter;
        }
    }
    return 0;
}

Finally - we get the data!

We get the worksheet, then we get the cells using LINQ to XML and loop over them in a foreach loop. For each cell we get the location from the r attribute, split it into column and row using our helper function, and grab the index from the v node, which we then use to look up the value in the shared strings dictionary. The following code puts all those bits together and should go in your main method:

    List<Cell> parsedCells = new List<Cell>();

    XElement worksheetElement = GetWorksheet(1, allParts);

    IEnumerable<XElement> cells = from c in worksheetElement.Descendants(ExcelNamespaces.excelNamespace + "c")
                                  select c;

    foreach (XElement cell in cells)
    {
        string cellPosition = cell.Attribute("r").Value;
        int index = IndexOfNumber(cellPosition);
        string column = cellPosition.Substring(0, index);
        int row = Convert.ToInt32(cellPosition.Substring(index, cellPosition.Length - index));
        int valueIndex = Convert.ToInt32(cell.Descendants(ExcelNamespaces.excelNamespace +  "v").Single().Value);

        parsedCells.Add(new Cell(column, row, sharedStrings[valueIndex]));
    }

And finally we get a list back with all the data in a sheet!
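
As a quick sanity check you could dump the parsed cells to the console - this only relies on the Cell class defined above:

foreach (Cell parsedCell in parsedCells)
{
    // Cell.ToString() formats the output as "Row:Column - Data"
    Console.WriteLine(parsedCell);
}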

Reading and writing to Excel 2007 or Excel 2010 from C# - Part III: Shared Strings

[Note: See the series index for a list of all parts in this series.]


Excel’s file format is an interesting one compared to the rest of the Office suite, in that it can store data in two places, where most others store the data in a single place. The reason Excel supports this is good performance while keeping the size of the file small. To illustrate the scenario, let’s pretend we had a single sheet with some info in it:

[Image: the sample sheet with each value stored in its own cell]

Now for each cell we need to process the value, and the total size would be 32 characters of data. However, with a shared strings model we get something that looks like this:

[Image: the same sheet using the shared strings model]

The result is the same, however we only process each value once and the size is smaller - in this example, 24 characters.

The Excel format is pliable, in that it will let you do it either way. Note that the Excel client will always use the shared strings method, so for reading you should support it. This brings up an interesting scenario: say you fill a spreadsheet using direct input and then open it in Excel, what happens? Well, Excel identifies the structure, remaps it automatically, and then when the user wishes to close (regardless of whether they have made a change or not) it will prompt them to save the file.

The element we loaded at the end of part II is that shared strings file, which in the archive is \xl\sharedstrings.xml. If we look at it, it looks something like this:



  
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<sst xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" count="4" uniqueCount="4">
  <si>
    <t>Some</t>
  </si>
  <si>
    <t>Data</t>
  </si>
  <si>
    <t>Belongs</t>
  </si>
  <si>
    <t>Here</t>
  </si>
</sst>
  
Each <t> node is a value and it corresponds to a value in the sheet, which we will parse later. The sheet will have a value in it which is the key to the item in the shared strings. The key is a zero-based index, so in the above example the first <t> node (Some) will be stored as 0, the second (Data) as 1, and so on. The code I wrote to parse it looks like this:

private static void ParseSharedStrings(XElement SharedStringsElement, Dictionary<int, string> sharedStrings)
{
    IEnumerable<XElement> sharedStringsElements = from s in SharedStringsElement.Descendants(ExcelNamespaces.excelNamespace + "t")
                                                  select s;

    int Counter = 0;
    foreach (XElement sharedString in sharedStringsElements)
    {
        sharedStrings.Add(Counter, sharedString.Value);
        Counter++;
    }
}

Using this I parse the <t> nodes and put the results into a Dictionary<int, string>.
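
To tie it back to part II, calling it is just a couple of lines (assuming sharedStringsElement is the XElement we loaded there):

Dictionary<int, string> sharedStrings = new Dictionary<int, string>();
ParseSharedStrings(sharedStringsElement, sharedStrings);

// For the sample XML above: sharedStrings[0] == "Some", sharedStrings[1] == "Data", and so on.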

Reading and Writing to Excel 2007 or Excel 2010 from C# - Part II: Basics

[Note: See the series index for a list of all parts in this series.]


To get support for the technologies we will use in this series, we need to add a few assembly references to our solution:

  • WindowsBase.dll
  • System.Xml
  • System.Xml.Linq
  • System.Core

Next make sure you have the following namespaces added to your using/imports:

  • System.IO.Packaging: This provides the functionality to open the files.
  • System.Xml
  • System.Xml.Linq
  • System.Linq
  • System.IO

Right, next there is an XML namespace (not to be confused with .NET code namespaces) we need to use for most of our queries, http://schemas.openxmlformats.org/spreadsheetml/2006/main, and a second one we will use seldom, http://schemas.openxmlformats.org/officeDocument/2006/relationships. So I dumped these into a nice static class as follows:

namespace XlsxWriter
{
    using System.Xml.Linq;

    internal static class ExcelNamespaces
    {
        internal static XNamespace excelNamespace = XNamespace.Get("http://schemas.openxmlformats.org/spreadsheetml/2006/main");
        internal static XNamespace excelRelationshipsNamepace = XNamespace.Get("http://schemas.openxmlformats.org/officeDocument/2006/relationships");
    }
}

Next we need to create an instance of the System.IO.Packaging.Package class (from WindowsBase.dll) by calling its static Open method:

 Package xlsxPackage = Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite);

Note: It is at this point that the file is opened, which is important since Excel will LOCK an open file. This is an important issue to be aware of, because when you try to open a file that is locked a lovely exception is thrown. To avoid that, you must make sure to call the Close method on the package, for example:

xlsxPackage.Close();
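
Since it is easy to forget the Close call, one option (just a sketch - you could equally use a using block, since Package implements IDisposable) is to wrap the work in a try/finally:

Package xlsxPackage = Package.Open(fileName, FileMode.Open, FileAccess.ReadWrite);
try
{
    //TODO: Read or write package parts here
}
finally
{
    // Always release the file, otherwise it stays locked
    xlsxPackage.Close();
}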

When you open the XLSX file manually, the first file you’ll see is the [Content_Types].xml file, which is a manifest of all the files in the ZIP archive. What is nice about using the Packaging API is that you can call the GetParts method to get a collection of parts, which are actually just the files within the XLSX file.
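
Getting hold of that collection is a single call - a quick sketch, assuming the xlsxPackage variable from above; the allParts collection is what the later queries filter:

// Every file inside the XLSX package is exposed as a PackagePart
PackagePartCollection allParts = xlsxPackage.GetParts();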

[Image: the contents of the XLSX file when renamed to a ZIP file and opened]
[Image: the various files listed in the [Content_Types].xml file]

What we will use throughout this series is the ContentType property to filter the parts down to the specific item we want to work with. Use the second image above to identify the value for the ContentType; for example, the ContentType for a worksheet is application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml.
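
As an illustration of filtering on ContentType (a sketch, assuming the allParts collection from above), the same pattern finds all the worksheet parts:

IEnumerable<PackagePart> worksheetParts = from part in allParts
                                          where part.ContentType.Equals("application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml")
                                          select part;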

Once we have all the parts of the XLSX file we can navigate through it to get the bits we need to read the content, which involves two steps:

  1. Finding the shared strings part. This is another XML file, which allows string values to be shared between worksheets. It is optional to use when writing, but it does save space and speed up loading. For reading it is effectively required, as Excel will use it.
  2. Finding the worksheet that we want to read from; this is a separate part from the shared strings.

Let’s start with reading the shared strings part; this will be the basis for reading any part later in the series. What we need to do is get the first PackagePart with the content type application/vnd.openxmlformats-officedocument.spreadsheetml.sharedStrings+xml:

PackagePart sharedStringsPart = (from part in allParts
    where part.ContentType.Equals("application/vnd.openxmlformats-officedocument.spreadsheetml.sharedStrings+xml")
    select part).Single();

Now we need to get the XML content out of the PackagePart, which is easy with the GetStream method; we feed that stream into an XmlReader so that it can be loaded into an XElement. This is a bit convoluted, but it is just one line to get from one type to another and the benefits of using LINQ to XML are worth it:

XElement sharedStringsElement = XElement.Load(XmlReader.Create(sharedStringsPart.GetStream()));

Now we have the ability to work with the XElement and do some real work. In the next parts, we’ll look at what we can do with it and how to get from a single part to an actual sheet.

Gallery2 + C# - Beta 2 Available

A few weeks back I posted beta 2 of the Gallery2 .NET toolkit, where I have done considerably more work than I ever expected I would. Lots of neat bits of code and features are available. What’s in it now:

There are four items currently available:

  • (Tool) For people just wanting to export all their images out of Gallery2, there is g2Export which is a command line tool to export images.
  • (Tool) For people wanting to get information out of Gallery2 into a sane format, there is g2 Album Management which is an Excel 2007 add-in to export information about albums and images to Excel.
  • (API) For developers wanting to write their own tools or integrations, there is the SADev.Gallery2.Protocol which wraps the Gallery2 remote API. Please see the What you should know? page for information on using the API.
  • (Source) Lastly, for developers needing some help, there is the source code for the g2 Export Tool and the g2 Album Management Excel Add-in.
Here is a screen shot of g2 Album Management in action:

Here is a screen shot of g2Export in action:

If you are interested in how much of the Gallery2 API is catered for: it’s most of it (the file upload parts are the only major outstanding ones). The key thing to note in the table is the Tested column. While the code is written, it may not be tested and may not work at all. I have found the documentation is not 100% in line with the actual Gallery2 code, so sometimes it needs considerable rework to actually work.

API Call | Basic Request | Basic Response | Tested | Advanced Request | Advanced Response
login | done | done | done | done | done
fetch-albums | done | done | done | done | done
fetch-albums-prune | done | done | done | done | done
add-item (upload) | | done | | | done
add-item (url) | done | done | | done | done
album-properties | done | done | done | done | done
new-album | done | done | | done | done
fetch-album-images | done | done | done | done | done
move-album | done | done | | done | done
increment-view-count | done | done | | done | done
image-properties | done | done | done | done | done
no-op | done | done | done | done | done

Proven Source Control Practises Poster

Proven Practises Poster

Maybe one of the toughest things in software development to get right all the time: source control. Well, now with this nice bright A3 poster printed on your wall (or maybe above the monitor of the guy who breaks the build daily) you’ll never go wrong again.

It covers 17 proven practises broken into 5 key areas:

Things YOU should do

  • Keep up to date
  • Be light and quick with checkouts
  • Don’t check in unneeded binaries
  • Working folders should be disposable
  • Use undo/revert sparingly

Branching

  • Plan your branching
  • Own the merge
  • Look after branches

Management

  • Useful & meaningful check in messages
  • Don’t use the audit trail for blame

Repository

  • Don’t break the build
  • Separate your repo
  • Don’t forget to shelve
  • Use labels

Technology

  • Try concurrent access
  • Don’t be afraid of branching concepts
  • Automerge for checkout only