Organizing a Large Photo Collection

May 10, 2010
I recently decided to organize the roughly 17,000 digital photos I have taken over the years. Using a combination of pre-existing tools and some of my own code, I finally got everything well organized. My goals for the project were the following:
 
  • High-quality original photos separated from processed/cropped versions and random graphics
  • Photos grouped by date-taken ranges
  • Person and location data embedded in the photos, searchable and browsable
  • Multiple redundant backups
  • Duplicates removed

Knowing that my digital cameras had always produced files matching the pattern IMG_*.jpg, I searched for all photos with that pattern and moved them to a single folder. Getting them all into one folder was made easier by Windows 7's "merge" feature, which lets you keep multiple files with the same name by renaming the conflicting files automatically. Once they were all in the folder, I switched to Details view, added the "Dimensions" column, sorted by it, and removed all the files that were obviously not originals due to their extremely low resolution.

The next step involved using a very handy freeware tool: CloneSpy.

I ran CloneSpy on the folder, telling it to only delete exact duplicate files. It does this by computing a hash of every file and removing files with identical hashes. On my quad-core 2.8 GHz machine running Windows 7, it took about 15 minutes to process all 17,000 files (around 42 GB) on the external drive, connected via USB 3.0.
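CloneSpy's exact-duplicate pass is easy to replicate yourself. Here's a minimal C# sketch of the idea (my own code, not CloneSpy's, and the folder path is hypothetical): hash every file, and any files sharing a hash are byte-for-byte duplicates. MD5 is fine here because we only need to match identical files, not resist attackers.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

public class DupeFinder
{
    // Returns groups of file paths whose contents are byte-for-byte identical.
    public static List<List<string>> FindExactDupes(IEnumerable<string> paths)
    {
        var byHash = new Dictionary<string, List<string>>();
        using (var md5 = MD5.Create())
        {
            foreach (string path in paths)
            {
                using (var stream = File.OpenRead(path))
                {
                    string hash = BitConverter.ToString(md5.ComputeHash(stream));
                    if (!byHash.ContainsKey(hash))
                    {
                        byHash[hash] = new List<string>();
                    }
                    byHash[hash].Add(path);
                }
            }
        }
        var dupes = new List<List<string>>();
        foreach (var group in byHash.Values)
        {
            // Only groups with more than one member are actual duplicates
            if (group.Count > 1)
            {
                dupes.Add(group);
            }
        }
        return dupes;
    }

    public static void Main()
    {
        // Hypothetical folder; a tool like CloneSpy would then delete
        // all but one file from each group.
        foreach (var group in DupeFinder.FindExactDupes(Directory.GetFiles(@"e:\photos")))
        {
            Console.WriteLine(string.Join(" == ", group.ToArray()));
        }
    }
}
```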

At this point, I could (and did) still have thousands of duplicate photos that were merely rotated or slightly different from each other. How to get rid of the dupes and keep the originals?

When digital cameras create photo files, the names they assign are not very meaningful. For example, on my Canons, the files look like this: IMG_2384.jpg. It's just an incrementing integer with no info about the actual photo. It turns out cameras DO embed useful info into digital photos, but it's not in the filename. It's in metadata called "EXIF", similar to the ID3 tag data in MP3 files. I like to use a free tool called NameExif to rename my files according to the "date picture taken" embedded in the EXIF data. Running this tool on my folder of over 40 GB of photos took about half an hour. What you end up with is files named like this: 2009.07.04 21.23.07.jpg, which means the photo was taken at 9:23pm on the 4th of July, 2009.
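The renaming step is easy to picture in code. This is my own sketch, not NameExif's source: the raw value comes out of EXIF tag 0x9003 ("DateTimeOriginal"), and the only real work is reshaping that timestamp into a filename.

```csharp
using System;
using System.Globalization;

public class ExifRenamer
{
    // EXIF stores the date taken (tag 0x9003, "DateTimeOriginal") as a
    // null-terminated ASCII string in the form "yyyy:MM:dd HH:mm:ss".
    public static DateTime ParseExifDate(string exifValue)
    {
        return DateTime.ParseExact(exifValue.Trim('\0', ' '),
            "yyyy:MM:dd HH:mm:ss", CultureInfo.InvariantCulture);
    }

    // Builds the "2009.07.04 21.23.07.jpg" style of name NameExif produces.
    public static string DateTakenFileName(DateTime taken)
    {
        return taken.ToString("yyyy.MM.dd HH.mm.ss", CultureInfo.InvariantCulture) + ".jpg";
    }
}
```

An EXIF reader (or `Image.GetPropertyItem` in System.Drawing) hands you the raw tag value; these two helpers do the rest.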

I noticed that some of the photos were not renamed; these were ones without EXIF data, so it was safe to assume they were not originals. I moved them to a miscellaneous folder for possible later attention.

If NameExif finds two photos taken at the exact same moment (same year, month, day, hour, minute and second), it simply appends a '-' character to the end of the filename, before the extension. If more than one conflict is found, it appends additional '-' characters.

At this point I wrote some code to find the files with completely unique names and move them to another folder, knowing that they were true one-of-a-kind originals. Here is the code for my "NotDupeMover" tool:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

namespace NotDupeMover
{
    class Program
    {
        static string srcPath = @"e:\dupes\";
        static string destPath = @"e:\notdupe\";

        static void Main(string[] args)
        {
            DirectoryInfo dir = new DirectoryInfo(srcPath);
            List<FileInfo> files = dir.GetFiles().ToList();

            foreach (FileInfo file in files)
            {
                // Guard against names too short to hold "yyyy.MM.dd HH.mm.ss"
                if (file.Name.Length < 19)
                {
                    continue;
                }

                string timestamp = file.Name.Substring(0, 19);
                List<FileInfo> dupes = files.FindAll(f => f.Name.Contains(timestamp));

                // Exactly one match means no other photo was taken at that
                // moment, so this file is a true original: move it out.
                if (dupes.Count == 1)
                {
                    file.MoveTo(destPath + file.Name);
                }
            }
        }
    }
}

The above code builds a list of all the filenames in a folder, looks at the first 19 characters of each (in my case, something like "2009.07.04 21.23.07"), and, if no other file in the folder was taken at that same moment, moves the file to another folder. Running this tool on my 40 GB of photos took about 10 minutes. I was impressed with how gracefully the .NET Framework handled that much data.

So now I had a folder called "dupes" containing files with the same root names; a couple thousand photos in all. I used thumbnail view to identify the truly unique files (ones that actually were taken in the same moment but were different) and moved these manually into the "notdupe" folder. I then sorted the rest by filename and deleted the smaller files, switching back and forth between Details and thumbnail view to make sure I only got rid of unnecessary files. This was a step I would have loved to automate, but the set of ways two photos might be "different but the same" was too complex for me to attack right now. The ultimate tool here would be able to determine exactly which were the originals, what changes had created duplicates, whether they were true optical duplicates (image recognition), etc. Maybe some code to write another day! The manual process took me about half an hour. Not too bad.
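For the curious, that "code to write another day" would probably start with a perceptual hash. Here's a minimal sketch of a difference hash (dHash), my own illustration rather than any existing tool's code. It assumes you've already decoded each photo and shrunk it to a 9×8 grid of grayscale brightness values (the part a real tool would use an imaging library for); images whose hashes differ in only a few bits are likely near-duplicates.

```csharp
using System;

public class DHash
{
    // brightness: 8 rows of 9 values each (0-255) from a shrunken grayscale image.
    // Each bit records whether a pixel is brighter than its right-hand neighbor.
    public static ulong Compute(byte[,] brightness)
    {
        ulong hash = 0;
        int bit = 0;
        for (int row = 0; row < 8; row++)
        {
            for (int col = 0; col < 8; col++)
            {
                if (brightness[row, col] > brightness[row, col + 1])
                {
                    hash |= 1UL << bit;
                }
                bit++;
            }
        }
        return hash;
    }

    // Number of differing bits; a small distance suggests near-duplicate images.
    public static int HammingDistance(ulong a, ulong b)
    {
        ulong x = a ^ b;
        int count = 0;
        while (x != 0) { count++; x &= x - 1; }
        return count;
    }
}
```

The nice property is that rotating, recompressing, or slightly cropping a photo changes only a few bits, so "different but the same" becomes a small Hamming distance instead of an unanswerable question.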

My next step was to group the photos into folders by date and time. For this I wrote a tool called PhotoDateGrouper:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

namespace PhotoDateGrouper
{
    class Program
    {
        static string inputDirectory = @"G:\tosort";
        static string outputDirectory = @"G:\GroupedPhotos";
        static List<FileInfo> Photos = new List<FileInfo>();

        static void Main(string[] args)
        {
            try
            {
                CreatePhotoList(inputDirectory);
                Photos.Sort(CompareByName);
                GroupPhotos();
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error: " + ex.Message);
            }
        }

        private static void CreatePhotoList(string inDir)
        {
            foreach (string dir in Directory.GetDirectories(inDir))
            {
                CreatePhotoList(dir);
            }
            foreach (FileInfo photo in (new DirectoryInfo(inDir)).GetFiles("*.jpg"))
            {
                try
                {
                    // The first 10 characters of a NameExif filename are "yyyy.MM.dd"
                    DateTime photoTime = DateTime.ParseExact(
                        photo.Name.Substring(0, 10), "yyyy.MM.dd", CultureInfo.InvariantCulture);
                    photo.CreationTime = photoTime;
                    Photos.Add(photo);
                }
                catch (FormatException)
                {
                    // Skip files whose names don't start with a date-taken stamp
                }
            }
        }

        private static void GroupPhotos()
        {
            DateTime startTime = new DateTime(1900, 1, 1);
            DateTime photoTime = new DateTime(1900, 1, 1);
            List<FileInfo> photoGroup = new List<FileInfo>();

            foreach (FileInfo photo in Photos)
            {
                if (startTime.Year == 1900)
                {
                    startTime = photo.CreationTime;
                    photoTime = startTime;
                }

                if ((photo.CreationTime.Subtract(photoTime)).TotalHours < 25)
                {
                    photoGroup.Add(photo);
                    photoTime = photo.CreationTime;
                }
                else
                {
                    // Save the finished group, remove its photos from the master
                    // list, then recurse to group whatever photos remain.
                    SaveGroup(photoGroup, startTime, photoTime);
                    foreach (FileInfo fi in photoGroup)
                    {
                        Photos.Remove(fi);
                    }
                    GroupPhotos();
                    return; // the recursive call handled the rest of the list
                }
            }

            // Save the final group once the loop runs out of photos
            SaveGroup(photoGroup, startTime, photoTime);
        }

        private static void SaveGroup(List<FileInfo> photoGroup, DateTime startTime, DateTime endTime)
        {
            if (photoGroup.Count == 0)
            {
                return;
            }
            try
            {
                string groupPath = outputDirectory + "\\" + startTime.Year + "." + startTime.Month + "." + startTime.Day + " – " + endTime.Year + "." + endTime.Month + "." + endTime.Day + "(" + photoGroup.Count.ToString() + ")";
                if (photoGroup.Count > 1)
                {
                    Directory.CreateDirectory(groupPath);
                    foreach (FileInfo file in photoGroup)
                    {
                        file.MoveTo(groupPath + "\\" + file.Name);
                    }
                }
                else
                {
                    // A lone photo stays in the root of the output directory
                    photoGroup[0].MoveTo(outputDirectory + "\\" + photoGroup[0].Name);
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error saving group: " + ex.Message);
            }
        }

        static int CompareByName(FileInfo x, FileInfo y)
        {
            return x.Name.CompareTo(y.Name);
        }
    }
}

The key lines of code in this tool are these:

if ((photo.CreationTime.Subtract(photoTime)).TotalHours < 25)
{
    photoGroup.Add(photo);
    photoTime = photo.CreationTime;
}

Since it is iterating through a sorted list of photos whose filenames are the date/time each photo was taken (thanks to NameExif), this code simply checks whether a photo was taken within about a day (25 hours, to allow a little slack) of the previous one and, if so, adds it to the current group. The tool also sets each file's CreationTime to the time parsed from the filename, which proved useful because some other tools group photos by creation time rather than EXIF time.

The "PhotoGroup"s are then saved by moving them into a new folder whose name includes the start and end dates for the group, as well as the number of photos in the group, for example: "\2008.9.15 – 2008.9.17(87)\", thanks to this line:

string groupPath = outputDirectory + "\\" + startTime.Year + "." + startTime.Month + "." + startTime.Day + " – " + endTime.Year + "." + endTime.Month + "." + endTime.Day + "(" + photoGroup.Count.ToString() + ")";

So at this point I ended up with a folder containing around 350 subfolders, plus a dozen or so ungrouped single photos floating at the root level. I was quite happy with this result, as it matched how I perceive my own camera usage: I had taken my camera on roughly 350 "outings" or "sessions", and only occasionally shot a single photo isolated in time and space.

At this point, I used a simply amazing tool called Picasa which allowed me to rename the folders to something meaningful, identify all of the faces and tag the photos with geographical data. This was the most labor intensive part of the process and took me a couple hours per day over the course of a few days. In the end, I had identified over 100 of my friends and family and now had easy access to thousands of their photos, and could easily browse through my photos by the adventures on which they had been taken, their location and who had been present.

Throughout the entire process I found myself re-discovering old friends and memories, and found some really amazing gems of photos which I now have printed and framed to decorate my home and give to friends and family as gifts. Having a well organized photo collection is truly rewarding. This process also inspired me to take my camera out with me more often. I may even end up buying a DSLR. I love photos!!

Remember to back up your work!

Hello world!

January 20, 2010

Welcome to WordPress.com. This is your first post. Edit or delete it and start blogging!

Organizing a Large Music Collection

January 19, 2010

If you are like me, you have thousands of old CDs, a hard drive full of MP3s and a desire to get the most out of your music collection. In this post I will discuss the tools and processes, including batch files and source code that I use to keep my music collection in tip top shape.

First, let’s define what a good digital music collection means:

  • complete albums
  • high sound quality
  • properly tagged with genre, release date, artist, album, title, etc…
  • album artwork
  • all songs play with the same loudness
  • organized into a folder structure that makes it easy to browse and maintain

When your music collection is organized as described above, you will find yourself listening to more of the music you like and discovering gems in your collection that will keep you entertained and inspired.

First let me give major props to a few awesome web sites and services:

  • Pandora Internet Radio – type in the name of a song or artist that fits your mood and it will generate a streaming playlist of awesome music based on intelligence gathered from the open source Music Genome Project.
  • Last.fm – another internet radio station but also a massive database of info about music AND has a downloadable application that keeps track of your listening habits and updates your profile, automatically suggesting similar artists and other users with similar tastes to yourself.
  • All Music – like IMDB for music. Great info about related artists, who influenced them and who they influenced.

Before you can start, you need to rip the music from your old CDs. iTunes, Winamp and Windows Media Player can do this for you, but if you want a little more control over the options I recommend using either the open source CDex or Exact Audio Copy.

When ripping with either program, make sure the ripped files are named and tagged appropriately. I suggest a high bit rate: at least 192 kbps; 320 kbps is best.

After ripping all your CDs, you want to make sure you have the complete albums and up to date tags. The best software I have found for this is called MusicBrainz Picard. It’s free and it has many ways to tag your music files properly and make sure you have complete albums whose artwork will be automatically downloaded by your player. I suggest adding the Last.fm Plus Picard Plugin to add additional functionality to Picard. Picard and the plugin have so many settings that you really have to dig into and personalize for your collection so I suggest reading the documentation and forums for these tools before using them.

After you've properly tagged your music collection, you should run the open source MP3Gain application (use AACGain to add M4A and MP4 support) to set all your tracks to the same perceived loudness. MP3Gain uses the ReplayGain algorithm to measure the perceived loudness of an audio track, then losslessly adjusts the gain field in the file (recording the change so it can be undone) so that all your songs play at the same volume. This removes the annoying problem of having to adjust your player volume every time a new song comes on. iTunes has a similar feature called "Sound Check", but in my experience it doesn't work anywhere near as well as MP3Gain does. Since MP3Gain is non-destructive, meaning it never re-encodes the actual audio data in the file, I think it's the best solution out there right now. Use the default settings, drag and drop your music folder onto it, click the "Track Gain" button, and let it go to work. It might take a few hours (or days) to complete depending on the size of your collection.
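The arithmetic behind this is simpler than it sounds: ReplayGain measures each track against a reference loudness of 89 dB, and MP3Gain can only adjust in whole "global gain" steps of about 1.5 dB each. A rough sketch of the calculation (my own illustration, not MP3Gain's source):

```csharp
using System;

public class GainMath
{
    const double TargetDb = 89.0;   // ReplayGain's standard reference loudness
    const double StepDb = 1.5;      // one MP3 global-gain step

    // Whole number of gain steps bringing a track closest to the target.
    public static int StepsFor(double trackLoudnessDb)
    {
        return (int)Math.Round((TargetDb - trackLoudnessDb) / StepDb,
            MidpointRounding.AwayFromZero);
    }

    // The actual dB change applied after rounding to whole steps.
    public static double AppliedDb(double trackLoudnessDb)
    {
        return StepsFor(trackLoudnessDb) * StepDb;
    }
}
```

So a track measured at 92 dB gets two steps down (3 dB quieter), which is why the result is "the same volume" only to within about three-quarters of a decibel.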

In addition to the steps above, I wrote some code to do a few things to my music library.

Capitalize Folder Names

To capitalize all the folder names to "title case" (example: Modest Mouse – This Is a Long Drive for Someone with Nothing to Think About), here is the C# code I used (note that TextInfo.ToTitleCase capitalizes every word, so small words like "a" and "for" come out capitalized too):

using System;
using System.Globalization;
using System.IO;

namespace TitleCaseFolders
{
    class Program
    {
        static void Main(string[] args)
        {
            Rename(@"E:\music");
        }

        private static void Rename(string p)
        {
            DirectoryInfo di = new DirectoryInfo(p);
            if (di.Exists)
            {
                // Windows treats "music" and "Music" as the same folder, so
                // rename to a temporary name first, then to the title-cased name.
                di.MoveTo(p + "_");
                di.MoveTo(CultureInfo.CurrentCulture.TextInfo.ToTitleCase(p));
            }
            Console.WriteLine("renamed " + p);
            foreach (string d in Directory.GetDirectories(p))
            {
                Rename(d);
            }
        }
    }
}

Sort Files Into Folders By Extension

I realized I had music in many different formats, and for some albums I had duplicate versions in the same folder: MP3, M4A, M4P, WMA, OGG, FLAC, WAV, etc. So I wrote code to automatically sort these out by file type (basically, extension):

using System;
using System.IO;

namespace FileTypeSorter
{
    class Program
    {
        static void Main(string[] args)
        {
            SortDirectory(@"E:\Music\Albums");
        }

        private static void SortDirectory(string p)
        {
            DirectoryInfo di = new DirectoryInfo(p);
            foreach (FileInfo fi in di.GetFiles())
            {
                SortFile(fi);
            }
            foreach (string d in Directory.GetDirectories(p))
            {
                SortDirectory(d);
            }
        }

        private static void SortFile(FileInfo fi)
        {
            // Skip files with no extension rather than crash on Substring
            if (fi.Extension.Length < 2)
            {
                return;
            }
            string dir = @"E:\NewMusic\" + fi.Extension.Substring(1) + @"\" + fi.Directory.Name;
            if (!Directory.Exists(dir))
            {
                Directory.CreateDirectory(dir);
            }
            fi.MoveTo(dir + @"\" + fi.Name);
            Console.WriteLine("Moved " + fi.Name + " to " + dir + ".");
        }
    }
}

This code separates out all your albums by file type. After running it on your music files, you will end up with a folder structure like this:

[screenshot: the new folder tree, with one subfolder per format]

Much more organized!

Deleting Duplicate Albums

After sorting by file type, I may have the same album folder in two formats (example: C:\Music\mp3\Nirvana – Nevermind and C:\Music\m4a\Nirvana – Nevermind). Here is the process I use to identify duplicate albums and delete them.

Tools Required:

  • TextPad (or any text editor that supports regular-expression search and replace)

Let’s say you want to keep all your FLAC and MP3 album copies and delete any duplicate copies in other formats. Here’s what you do:

  1. Open a command window by going to Start/Run and type in cmd.
  2. Use the cd command to change into your MP3 folder. For example: cd C:\Music\mp3.
  3. Issue the following command: dir /ad /s /b | sort /r > folders.txt. This will create a file called folders.txt listing out all the folders and subfolders in the MP3 directory sorted in reverse order. We are going to perform some search and replace magic on this list of folders to turn it into a DOS BATCH FILE.
  4. Open this file in TextPad and perform the following search-and-replace operations (^ and $ mean the start and end of a line of text, respectively, in regular-expression (regex) syntax):
    1. Find: ^
    2. Replace: rd /s /q "
    3. Find: $
    4. Replace: "
  5. Now check the beginning and end of this file for bogus lines, delete them, and save the file as "delete.bat" into one of the other folders that could contain duplicate folders. Double-click the delete.bat file in that folder and it will scour out all the folders that have the same names as the ones in the original folder, ridding you of the inferior-quality duplicate albums.
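If you'd rather not do the editor gymnastics, the same transformation is a few lines of C#. This is a hypothetical sketch (the paths are examples; point it at your own folders.txt):

```csharp
using System;
using System.IO;

public class DeleteBatWriter
{
    // Turns one folder path into the "rd /s /q" line delete.bat needs.
    public static string ToDeleteCommand(string folder)
    {
        return "rd /s /q \"" + folder + "\"";
    }

    public static void Main(string[] args)
    {
        // Hypothetical locations; adjust to taste.
        string listing = @"C:\Music\mp3\folders.txt";
        string batch = @"C:\Music\m4a\delete.bat";
        if (!File.Exists(listing))
        {
            Console.WriteLine("folders.txt not found");
            return;
        }
        using (var output = new StreamWriter(batch))
        {
            foreach (string folder in File.ReadAllLines(listing))
            {
                if (folder.Trim().Length > 0)
                {
                    output.WriteLine(ToDeleteCommand(folder));
                }
            }
        }
    }
}
```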

Other Tools

One tool worth mentioning is CloneSpy. It's great for deleting duplicate files. Be careful with it, though, because in a music collection some files may have the same name, or even identical data, but belong to separate releases, compilations, live albums, etc., and you don't want to delete those.

Another tool worth mentioning is Tag&Rename. It is the only non-freeware application I use (I actually paid for a license). I use it less now that I discovered Picard, but I still use it here and there to double check the tags Picard has chosen for my files and to tweak my settings.

Once your music collection is in shape, I highly recommend using iTunes in conjunction with a new iPod (Nano, Classic, Touch or iPhone) to listen to your music. Why? One word: Genius. The Genius feature available through iTunes and the iPod models mentioned (shuffle does not support Genius) is AMAZING at creating playlists for you on the fly. Way better than Pandora, it uses objective statistical aggregate listening pattern data from the millions of iTunes and iPod listeners to build your playlists. Here’s how it works: Put your iPod on Shuffle. Skip until you find a song that suits your mood. Hold down the center button and choose Start Genius. Voila! A magical playlist is generated that is better than 99% of human DJs could ever come up with. This feature is Apple’s true killer app for music and truly kicks ass. I wish I could have been on the team that developed it, I bet it was a lot of fun.

Also, you definitely want to backup your files when you’ve made lots of changes. I use SyncBack Freeware to do my backups between 2 WD Passport 1TB USB drives.

Happy Listening!


Unable to start debugging on the web server.

January 11, 2010

If you use IIS to run and debug your web applications using the hosts file method I detailed in a previous post, you may occasionally encounter this error. It can be caused by the "Microsoft Malware Protection Engine". Apparently, this protection engine will wipe out your old hosts entries and create a fresh hosts file from time to time. Fortunately, it creates a backup. Simply copy your old entries from the backup into the new hosts file and you should be good to go. The hosts file is located in C:\Windows\System32\drivers\etc.

Other causes and solutions for this error include:

  • Installing IIS after the .Net Framework

Solutions:

  • run aspnet_regiis -i from the appropriate folder (latest framework version and 64 bit if applicable)
  • In IIS Manager, select the machine and choose the "ISAPI and CGI Restrictions" item. Make sure your target framework is allowed and not disabled here.

Happy coding!


Basic Visual Studio 2008 & Asp.Net Setup on Windows 7 With IIS7

December 22, 2009

Recently I have set up Windows 7 on my home computer. I wanted to do some development, so I also installed Visual Studio 2008 & IIS7. I like to develop using IIS rather than the built in web server in VS because I’ve noticed that some things that work in one don’t always work in the other, and my ultimate goal is to deploy to a hosted environment, so IIS compatibility is a must.

The order in which you install components tends to matter quite a bit on Windows… one of the slight inconveniences that I hope someday they will address. Here’s the order I’ve found to work for me:

Important: In between EACH STEP, reboot and install windows updates until no more updates are found. Manually scan for updates each time.

  1. Windows 7 Ultimate
  2. Microsoft Security Essentials (new MS anti-virus program, seems to work pretty well)
  3. IIS7 from Programs & Features. You need the Win 7 disk to add this. Install the options shown below: [screenshot: Windows Features dialog with the IIS options checked]
  4. Office 2007 Ultimate
  5. Office 2007 SP2
  6. Visio 2007
  7. Visio 2007 SP1
  8. Visual Studio 2008 Team System Developer Edition
  9. Team Explorer (setup.exe in the TFC folder of the VS 2008 disk)
  10. Visual Studio 2008 SP1

Once you’ve got all this set up, it’s time to get your first Asp.Net web site up and running. Run Visual Studio 2008. I usually choose the C# Development configuration option on the initial startup screen. Then go to File and choose New Project.

[screenshot: the New Project dialog]

Select ASP.NET Web Application and name it "Hello". Check the options as shown above. The project opens:

[screenshot: the new project open in Visual Studio]

Add the text “Hello!” inside the <div></div> tags so there’s something to show on the page. Press F5 or the green arrow “play button” to start your new app:

[screenshot: the page running in the browser]

This is good, but it’s only running in VS’s Personal Web Server. That’s not our goal. Our goal is to get it running (and debugging) in IIS7. Close the browser to stop running and debugging. Now, open Internet Information Services (IIS) Manager:

[screenshot: IIS Manager]

Expand the tree view, right-click on Sites and choose Add Web Site:

[screenshot: the Add Web Site dialog]

For the sake of consistency, name your site “Hello”. By default, a new Application Pool is automatically created with the same name. Browse to the physical path of your Visual Studio project, which, by default would be stored in “C:\Users\<User Name>\Documents\Visual Studio 2008\Projects\Hello\Hello\”. Configure the dialog as shown above and press OK. Seems like it should work, right? Only problem is, it doesn’t. Try browsing to http://hello and all you’ll get is a blank screen. We need to HACK THE HOSTS FILE.

[screenshot: the hosts file open in Notepad]

The hosts file exists to let you (or a hijacking virus) redirect a hostname to a different address than DNS would give you. So many viruses use the hosts file to do evil, and this single file is so powerful, that I strongly recommend setting it to read-only and adding blacklisted domains to it to prevent yourself from accidentally being hijacked. It's located in the C:\Windows\System32\drivers\etc\ directory and called "hosts", with no extension. Here we are using it to add the line:

127.0.0.1    hello

which tells the operating system to route all requests for the hostname "hello" to the IP address 127.0.0.1, which is your own machine. It's also referred to as the "loopback" address or "localhost". You can also use the hosts file to block access to certain websites if you have kids, and I advise checking it frequently to make sure there are no suspicious entries in there.

Now if you browse to http://hello you should get the following screen (or something similar):

[screenshot: IIS returning an access-denied error page]

We’re getting closer. The problem is that the User account that IIS is running under doesn’t have Read access to your site files. To fix this, I assigned the Users group to have access as follows:

[screenshot: the folder's Security properties dialog]

You can access the above dialog by right clicking on the project folder and choosing Properties, going to the Security tab and clicking around. Like the tomato sauce, “It’s in there”.

Now it should be working without error. Try it and see: http://hello

OK, now we need to get Visual Studio to use IIS instead of its built in web server. To do this, right click on the Hello project in the VS Solution Explorer panel and choose Properties:

[screenshot: the project properties Web tab]

Configure the project properties as shown above. Specific Page – checked, blank. Use Local IIS Web Server – checked. Project URL: http://hello. Debuggers: ASP.NET & SQL Server – checked.

In Solution Explorer, browse to the Default.aspx.cs code-behind file and set a debugging breakpoint by clicking on the left inside the Page_Load event handler. When the red dot appears, the breakpoint is set.

Now, press F5 or the play/debug button. One last little hitch:

[screenshot: the "debugging not enabled" dialog]

Press OK to set debug=”true” in the web.config file automatically and enable debugging.
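All that dialog does is flip one attribute in your web.config; the relevant fragment (other settings omitted) looks like this:

```xml
<configuration>
  <system.web>
    <compilation debug="true" />
  </system.web>
</configuration>
```

Remember to set it back to false before deploying, since debug builds are slower and cache less aggressively.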

[screenshot: the breakpoint hit in Visual Studio]

The yellow arrow inside the red circle means debugging is working! We’re almost done! Press F5 to continue:

[screenshot: the page served by IIS at http://hello]

Congratulations! You are now a Professional Asp.Net C# Web Developer. Go update your resume!

Setting up a SQL Server 2008 database is a whole other topic in itself and will be the subject of a post in the near future. Happy coding!


Hotmail / Live Mail SMTP / POP Android / iPhone SOLUTION!!!

December 21, 2009
I just purchased a Droid Eris phone from Verizon. I set up my @live.com email address using the built-in mail application. All I had to do was enter my email address and password, and it seemed to be working great. I soon had an inbox full of my emails. Problem was, I couldn't send mail. I checked the SMTP settings and they were all correct (login required, username: <my email>@live.com, correct password, smtp.live.com, TLS security, port 587). When I pressed "next" it verified the settings fine; when I changed options it would not verify, so I knew it was talking to the server. But whenever I tried to send mail I got an "unable to send mail" notification: "The account setting is incorrect". I was at my wits' end. It had worked fine with my BlackBerry.
 
I posted online in the support forum, called Microsoft and Verizon. They all told me to try a bunch of stuff that just wasted hours of my life.
 
Then, I remembered that a similar thing had happened to me WAYYYYY back when I had a blackberry years ago. I was unable to get my live mail on my blackberry and someone said "you have to upgrade to Hotmail Plus". So, I did. It worked! Well… I had a brainstorm: maybe it would work in reverse?
 
So, I called Microsoft Hotmail Plus billing support and CANCELLED MY HOTMAIL PLUS. They said I was paid through April but I demanded they perform an IMMEDIATE cancellation instead. I had to really haggle the guy to get him to do this for me. Not 30 seconds later I hear new email notification sounds popping out of my desktop. All the messages that were in the droid’s outbox had been sent instantly AS SOON AS I CANCELLED HOTMAIL PLUS!!!
 
I then signed back up for Hotmail Plus, because I like the storage space and the account expiration exemption and the fact that it doesn’t put ads in my email footers. Still works fine!
 
So, it seems like the network engineering over at Windows Live Hotmail services has something tied into billing where whenever you change your subscription status, some config file gets re-provisioned and it fixes POP3/SMTP issues. Try it yourself if you are having any troubles sending or receiving email on your mobile device.
 
Good luck!


Converting Enum Values to Human Readable Strings in C#

December 16, 2009
Here’s a cool tip I figured out today:
 
Say you have an Enum like this:
  
public enum Role
{
    AcctAdministrator = 1,
    ContentMgr = 2,
    NormalUser = 3,
    Accountant = 4
}
 
Now let's say you want to make a "Role Management Page" where you display the list of roles. Well, "ContentMgr" is not readable; you want it to be "Content Manager". The problem is that enum member names in C# cannot contain spaces. Other solutions to this problem include storing the readable descriptions in a database or decorating the enum values with Description attributes, like this:
 
[Description("Content Manager")]
ContentMgr = 2,
 
The problem with that is that to get the description back out, you have to write a method that uses reflection, which is relatively expensive:
 
labelRole.Text = GetEnumDescription(enumValue);

// Requires: using System.Reflection; and using System.ComponentModel;
private string GetEnumDescription(Enum value)
{
    // Get the Description attribute value for the enum value
    FieldInfo fi = value.GetType().GetField(value.ToString());
    DescriptionAttribute[] attributes = (DescriptionAttribute[])fi.GetCustomAttributes(
        typeof(DescriptionAttribute), false);
    if (attributes.Length > 0)
    {
        return attributes[0].Description;
    }
    else
    {
        return value.ToString();
    }
}
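If you do go the attribute route, the reflection cost is easy to amortize by caching each description after the first lookup. Here's a sketch of that idea (the Role enum is just a stand-in like the example above, and the cache is deliberately not thread-safe to keep it short):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Reflection;

public enum Role
{
    [Description("Account Administrator")] AcctAdministrator = 1,
    [Description("Content Manager")] ContentMgr = 2,
    NormalUser = 3
}

public static class EnumDescriptionCache
{
    // Reflection runs once per enum value; later calls are dictionary hits.
    static readonly Dictionary<Enum, string> cache = new Dictionary<Enum, string>();

    public static string Describe(Enum value)
    {
        string description;
        if (cache.TryGetValue(value, out description))
        {
            return description;
        }
        FieldInfo fi = value.GetType().GetField(value.ToString());
        DescriptionAttribute[] attributes = (DescriptionAttribute[])fi.GetCustomAttributes(
            typeof(DescriptionAttribute), false);
        description = attributes.Length > 0 ? attributes[0].Description : value.ToString();
        cache[value] = description;
        return description;
    }
}
```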

 
A cool, fast workaround is to just name your enum values with underscores:

public enum Role
{
    Account_Administrator = 1,
    Content_Manager = 2,
    Normal_User = 3,
    Accountant = 4
}
 
Then when it comes time to display them, replace the underscores with spaces!
 

labelRole.Text = enumValue.ToString().Replace("_", " ");

Happy coding!


TFS & Java

June 15, 2009
Hey, it’s been a while! I’ve been working on a cool new project for Microsoft and have been getting into mobile and games development. I don’t have a huge amount of information for you in this post other than the following tidbits:
 
TFS
 
In working with Team Foundation Server, you have the option of working either through VS Team Explorer or using the command-line interface. I would highly recommend using the command line for just about everything.
 
Overall, I have had a better experience using Subversion & Tortoise, but there are certain cool things TFS can do:
 
tf shelve
 
This command greatly facilitates code reviews by placing the developer's code "on the shelf" in what is called a "shelveset" that requires another developer's approval before being checked in. Shelvesets and checkins can be configured to allow the developer to assign a designated code reviewer. The developer still has to email the name of the shelveset to the reviewer to request approval to check in; a nice feature ask here would be the ability to have a work item automatically assigned to the reviewer when the shelveset is created. You can associate work items with a shelveset or checkin so that they are automatically closed or resolved when the shelveset is eventually checked in. When creating a shelveset, you have the option of preserving your local changes or removing them from your local copies. I always choose to preserve, but you do have to keep track of changes and update your shelveset as necessary to make sure the code reviewer has the latest version for their review.
 
To get someone else’s shelved changes, issue the command:
 
tf unshelve <name of shelveset>
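A typical shelve-for-review round trip might look like the following sketch (the shelveset name is made up; /replace updates an existing shelveset in place):

```shell
# Shelve pending changes for review, keeping local edits in place
tf shelve CodeReview-Login /comment:"Please review login changes"

# After more edits, update the same shelveset so the reviewer sees the latest
tf shelve CodeReview-Login /replace

# The reviewer pulls the shelved changes into their own workspace
tf unshelve CodeReview-Login
```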
 
Merging is quite sophisticated in TFS, rarely requiring developer input to resolve conflicts.
 
Many of the beneficial effects of using TFS can be enhanced by using it in conjunction with Microsoft PowerShell, MSBuild, and automated unit testing.
 
Here are some of the drawbacks of TFS:
 
There is no easy way to move files in TFS. The workaround is to copy the file in your local tree, tf add the file in its new location, tf delete it in its old location, and then shelve or check in. This is not ideal, as it obviously destroys the history of the file. There is a tf move command, but it apparently has bugs, and I was advised not to use it as it might screw things up.
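The workaround, spelled out as commands (the file paths are made up for illustration):

```shell
# Copy the file to its new location in the local tree
copy src\OldFolder\Util.cs src\NewFolder\Util.cs

# Tell TFS about the new file and remove the old one
tf add src\NewFolder\Util.cs
tf delete src\OldFolder\Util.cs

# Shelve for review or check in directly
tf checkin /comment:"Move Util.cs (history not preserved)"
```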
 
Changing the location of your local files in TFS is very counterintuitive. When you first do a get, it doesn’t always ask you where you want the files. Once set, changing their location is difficult. In TortoiseSVN, this works easily with the relocate command but in TFS, you have to dig into the GUI and find an obscure setting within the workspaces submenu / edit / path. Kinda lame. Occasionally you will get lucky and find a right-click menu option within the Team Explorer Source Control folder structure treeview called "Remove Mapping" that allows you to do this more easily, but it’s not always there. I’ve heard this was due to a bug.
 
I still don’t like having to "checkout" a file to make changes to it. I want my files to all be read/write and for my source control tool to notify me of any change I have made in an intuitive manner. This is how TortoiseSVN works. It may be personal preference.
 
NullPointerException in Java
 
Since the commonly held belief is that there are "no pointers" in Java, why does the language contain a java.lang.NullPointerException? Since pointers are obscured in the Java language, the terminology could be slightly improved by calling it a NullReferenceException, which is what C# does. But why even have such an exception? Why no NullValueException? Because sometimes things are null! C# supports this well by allowing the use of nullable types. If you want to create a variable that might be null, you simply add a question mark, like this:
 
        public static bool RemovePhoto(int? PhotoID)
        {
            bool statusResult = true;
            if (!PhotoID.HasValue || PhotoID < 1)
            {
                throw new ArgumentOutOfRangeException("PhotoID");
            }
        …
 
In most cases, when the end user gets one of these errors it means simply this: The code is broken. If it is a web site, they probably have a server down or moved a resource file or graphic to the wrong place. The site is just broken and there’s nothing you, the user, can do about it! Well, that’s it for now. More to come soon!


Asp.Net / AJAX Tips & Tricks

October 1, 2008

Some random tips I’ve learned recently while working with the aspnet membership database and deploying ASP.Net apps onto new machines:

 

Tip #1:

 

When restoring a backup of the aspnet membership databases, sometimes you end up with an “orphaned” user. To check for orphans, issue the following command:

 

EXEC sp_change_users_login 'Report'

 

To fix the orphans, issue one of the following commands:

 

EXEC sp_change_users_login 'Auto_Fix', 'user'

EXEC sp_change_users_login 'Auto_Fix', 'user', 'login', 'password'

 

Tip #2:

 

When deploying an ASP.Net web app to a brand new server, IIS is in a locked down state. You may have to check one or more of the following:

 

  • The Everyone account has Read & Execute permissions on c:\inetpub\wwwroot and all subfolders and files
  • The Default Web Site and virtual directories both have "Scripts Only" execute permissions
  • Anonymous access is enabled
  • aspnet_regiis -i has been run
  • ASPX is enabled as a “Web Service Extension”
  • The virtual directory name does not end in “.com”
  • Vdirs created by VS2008 are, by default, not marked as an “application”. To mark them as such, click "Create" on the Directory tab of the virtual directory properties window.

 

Tip #3:

To enable anonymous browsing of SSRS reports, the anonymous browser account must not be a member of the “Guests” group (the default). Create a new user in the Users group with limited rights and assign the anonymous browser to this identity.

 

Tip #4:

Encrypting your connection strings in web.config:

Issue the following command from a dos window or batch file:

C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis -pef "connectionStrings" "<path to the application like c:\Portal>" -prov "DataProtectionConfigurationProvider"
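To reverse the encryption later (for example, to edit the connection strings by hand), the matching decrypt switch is -pdf. A sketch using the same example path as above:

```shell
REM Decrypt the connectionStrings section of the web.config in place
C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis -pdf "connectionStrings" "c:\Portal"
```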

Tip #5:

Allowing the ASP.Net web application to dynamically modify the Web.sitemap file or save and delete files in a sub directory at runtime:

 

Issue the following commands from a dos window or batch file:

cacls <path to the sitemap file like c:\Portal\Web.sitemap> /e /g IIS_WPG:f ASPNET:f

The same approach also works to allow your web app to save and delete files in a subfolder:

cacls <path to the folder like c:\Portal\flash> /e /g IIS_WPG:f ASPNET:f

 

Tip #6:

To prevent postback and page refresh: wrap the control in an UpdatePanel! Note: this doesn’t work with FileUpload and many of the other controls listed here.

 

Tip #7:

Hacking the AJAX.Net Control Toolkit controls.

 

Here’s an example: By default, the Rating control allows the user to click and change their rating numerous times. To submit the rating they must click a button outside the control. Here is the code for various enhancements:

·         Code to assign meaningful labels to the hover-tooltips on the ratings. Insert into the <head> of the page:

 

    <script type="text/javascript" language="javascript">
    function pageLoad()
    {
        // change the ratings' hover titles
        var RatingID = "<% =PageRating.ClientID %>";
        for (var i = 0; i < $find("RatingBehavior1").get_MaxRating(); i++)
        {
            switch (i)
            {
                case 0: $get(RatingID + "_Star_" + (i + 1).toString()).title = "OK"; break;
                case 1: $get(RatingID + "_Star_" + (i + 1).toString()).title = "Good"; break;
                case 2: $get(RatingID + "_Star_" + (i + 1).toString()).title = "Great!"; break;
                case 3: $get(RatingID + "_Star_" + (i + 1).toString()).title = "Excellent!"; break;
                case 4: $get(RatingID + "_Star_" + (i + 1).toString()).title = "The Best!!"; break;
                default: break;
            }
        }
    }
    </script>

 

·         Code to assign the value of the clicked rating to a label on the page without postback. Insert at the end of the page (after script manager):

 

    <script type="text/javascript" language="javascript">
    Sys.Application.add_load(function() {
        $find("RatingBehavior").add_EndClientCallback(function(sender, e) {
            var responseTag = $get('lblRatingResponse');
            responseTag.innerHTML = e.get_CallbackResult();
        });
    });
    </script>

 

·         Code to set the Rating control to “read only” after it is clicked. Insert at the end of the page (after script manager):

 

    <script type="text/javascript" language="javascript">
    Sys.Application.add_load(function() {
        $find("RatingBehavior").add_Rated(function(sender, e) {
            $find("RatingBehavior").set_ReadOnly(true);
        });
    });
    </script>

 

·         Code to override the default javascript methods which prevent the user from submitting a rating equal to the current rating. Insert at the end of the page (after script manager):

    <script type="text/javascript" language="javascript">
    AjaxControlToolkit.RatingBehavior.prototype._onStarClick = function(value)
    {
        if (this._readOnly)
        {
            return;
        }
        this.set_Rating(this._currentRating);
    };

    AjaxControlToolkit.RatingBehavior.prototype.set_Rating = function(value)
    {
        this._ratingValue = value;
        this._currentRating = value;
        if (this.get_isInitialized())
        {
            if ((value < 0) || (value > this._maxRatingValue))
            {
                return;
            }

            this._update();

            AjaxControlToolkit.RatingBehavior.callBaseMethod(this, 'set_ClientState', [this._ratingValue]);
            this.raisePropertyChanged('Rating');
            this.raiseRated(this._currentRating);
            this._waitingMode(true);

            var args = this._currentRating + ";" + this._tag;
            var id = this._callbackID;

            if (this._autoPostBack)
            {
                __doPostBack(id, args);
            }
            else
            {
                WebForm_DoCallback(id, args, this._receiveServerData, this, this._onError, true);
            }
        }
    };
    </script>

 

Using all of these together (with an event handler in code-behind for the OnChanged event to store the rating) converts the behavior of the rating control to the more familiar “click to rate” from the default “click a bunch of times and nothing happens until you press the button” scheme.

 

Hope some of you find these tips helpful.

 

Thanks,

 

Josh

 

 


Why Software Architecture Matters

September 27, 2008
Since this is my first blog post, I am going to keep it short. There are many concepts, processes and points of view that may be referred to as "software architecture," but before I discuss any of those, I want to discuss why anyone should care about it. What good is it? What benefits does it offer to individuals, communities, businesses and society? The answer is simple: better software. Applying principles of domain-driven design and iterative development to software projects makes them cheaper to develop, more powerful, easier to use and more likely to succeed. Success for a given application or framework is measured by the extent to which it is adopted and the benefits it confers on its users and creators. For users, software offers the benefits of automating tasks, discovering useful information, communication, inspiring creativity and many others. For organizations, good software translates directly to higher profits, happier employees and sustainability of growth.
 
So, now that we’ve discussed why architecture matters, let’s discuss what it is. Here is a basic list of ideas that encompass current thinking about best practices in software architecture:
 
  • requirements gathering
  • use cases
  • domain driven design (ddd)
  • domain modeling
  • responsibility driven design (rdd)
  • general responsibility assignment software patterns or principles (grasp)
  • agile / iterative development
  • object oriented programming / analysis / design (oop / ooa / ood)
  • component based design (cbd)
  • test driven development (tdd)
  • pair programming
  • unified modeling language (uml)
  • design patterns
  • unified process (up / rup)
  • separation of concerns
  • low coupling
  • high cohesion
  • protected variation

I will discuss each of these further in future blog postings. For now, I hope this post has served to pique your interest in software architecture. Stay tuned!


