One of my side projects at work right now is documenting the architecture of a product that has already been built. It will be going through a re-architecting focused on a more robust schema and on applying some of the lessons we’ve learned in discovering exactly how our product is being used and the ways in which our users want to extend the platform. SaaS and SOA are two good buzzwords we’ll be throwing around a lot, although to be honest we’ve been in the SaaS model for years now, just not following all of the best practices. (For examples, check out LitwareHR.)

So despite documentation being at the heart of the architect’s role, I find it extremely difficult to find good documentation on how to approach a task like this. I have Craig Larman’s book Applying UML and Patterns, which I’ve enjoyed, but I still sometimes find myself grappling with where to even begin.

These articles on IBM developerWorks have been good reads on the subject, and I’d recommend them if you are facing similar challenges.

Part 1
http://www.ibm.com/developerworks/library/ar-archdoc1/index.html?S_TACT=105AGX20&S_CMP=EDU

Part 2
http://www.ibm.com/developerworks/library/ar-archdoc2/index.html?S_TACT=105AGX20&S_CMP=EDU

Part 3
http://www.ibm.com/developerworks/library/ar-archdoc3/index.html?S_TACT=105AGX20&S_CMP=EDU

I’m assuming there will be more of these, which I’m looking forward to.

I just finished my first assignment in a beginning networking course I’m taking, and so far I’m pretty impressed with how interesting this stuff is. I have a working knowledge of networking that includes a decent understanding of the application layer, high-level knowledge of the transport layer, and basically just an awareness of the link layer. It’s pretty rare in my position as a developer that I need to answer questions about the link layer. (Thank you, my friends in IT.)

Some of the questions are actually kind of fun in that they had me visualizing data flowing through networks in ways I had not before. For example given a link between two hosts X km apart, with a transmission rate of R and a propagation delay of N….

      2.4.d What is the width (in meters) of a bit in the link? Is it longer than a football field?

Kind of useless, but super fascinating at the same time, imagining the physical manifestation of all this work I do day in and day out. Pulling these bits from all over the world is so effortless, so fast and so transparent that it’s easy to forget the actual resources behind it.

The football field question actually relates to a pretty interesting concept called the bandwidth-delay product, which refers to the amount of data that exists “on the wire” or “on the air” at any given moment: data that has been sent but not yet acknowledged. It’s helpful in determining minimum buffer sizes for receivers and transmitters over a given link.

http://en.wikipedia.org/wiki/Bandwidth-delay_product
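To make the arithmetic concrete, here’s a quick sketch (the link numbers are made up for illustration, not from the assignment): a bit’s “width” is just how far the signal propagates during the time it takes to transmit one bit, and the bandwidth-delay product is the link rate times the one-way delay.

```python
# Hypothetical link parameters, for illustration only.
PROP_SPEED = 2.0e8  # typical propagation speed in copper/fiber, ~2/3 c (m/s)

def bit_width_m(rate_bps, prop_speed=PROP_SPEED):
    # Pushing one bit onto the wire takes 1/rate seconds; in that time
    # the leading edge of the bit travels prop_speed / rate meters.
    return prop_speed / rate_bps

def bandwidth_delay_bits(rate_bps, one_way_delay_s):
    # Bits "in flight" on the link at any instant.
    return rate_bps * one_way_delay_s

print(bit_width_m(1e6))                 # 1 Mbps -> 200.0 m, about two football fields
print(bandwidth_delay_bits(1e7, 0.05))  # 10 Mbps over a 50 ms path -> ~500,000 bits in flight
```

At 1 Mbps a single bit stretches a couple of hundred meters down the link, which is what makes the football field comparison so fun.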

Another element of this first assignment was to set up Apache as a proxy server on your local machine, which was a bit surprising. I assumed at first that the assignment meant Squid, but no, apparently Apache itself can be configured as a proxy server for a number of protocols, including both FTP and HTTP traffic. There are numerous articles out there on using it as a personal ad blocker or caching server.

For reference, this is what I had to do to httpd.conf to make it work:

LoadModule disk_cache_module modules/mod_disk_cache.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so

<IfModule mod_proxy.c>
ProxyRequests On
<Proxy *>
Order deny,allow
Deny from all
Allow from 10.0.1.2/255.255.255.0
</Proxy>
</IfModule>

<IfModule mod_disk_cache.c>
CacheRoot "c:/apachecache/"
CacheDirLevels 5
CacheDirLength 3
</IfModule>
Interestingly, if you get that configuration wrong, you actually get a big “It works!” page shown in your browser for any page you try to visit. Go figure. My mistake at that point was simply not having uncommented the right modules, so Apache was serving its default page rather than attempting to proxy my request.

I’m supposed to be studying for a challenge exam I’m writing this week in “Advanced” Operating Systems. Instead I spent a good chunk of today working on blogquotes in between watching/playing with my daughter.

I can justify the time spent because I did all of this work exclusively from a bash shell using vi, to refresh myself on some of the content for the course. In this session I was able to use wget, grep, awk, vi, a shell script and some file permissions. I’m not so sure that will get me through the exam, but it was fun, and I was finally able to put some time towards my random quote include for the blog.

You can see the quotes being pulled in now in the top right-hand corner of this page. So far only my wife and I are using it, as this is very much a proof of concept. It works basically end to end, but without a lot of features that are going to be necessary as this grows: paging, searching, caching, tags, some UI polish, and some testing. It’s very gratifying, though, to reach this first phase and actually get something working. Adding features will probably happen a lot quicker now that I actually have a user. ;-)

I also took the opportunity to try out some of Yahoo’s client-side APIs in the YUI. Currently I’m using XHR, Layout and DataTable. I was amazed at how quickly I could basically “assemble” my application. Google’s App Engine makes the CRUD a total cakewalk, and Yahoo’s user interface library has no dependencies on server-side code but works seamlessly with a JSON-backed RPC scheme in Python. It’s a whole new world! Now, as long as my application doesn’t get popular enough that I have to start paying for resources. ;-)
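As a rough sketch of what that JSON-backed scheme can look like (the function and field names here are my own invention, not the actual blogquotes code): the Python side just shapes rows into the {"results": [...]} envelope a YUI DataTable data source can consume and serializes it.

```python
import json

def quotes_to_datatable_json(quotes):
    # Shape rows into the {"results": [...]} structure a YUI DataTable
    # data source expects; 'quotes' is a list of (text, author) pairs.
    rows = [{"quote": text, "author": author} for text, author in quotes]
    return json.dumps({"results": rows})

payload = quotes_to_datatable_json([("Brevity is the soul of wit.", "Shakespeare")])
```

On the client, the DataTable’s data source is simply pointed at the URL that returns this payload; no server-side markup is involved at all.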

Some interesting snags while working on this latest revision:

  • Randomly selecting an entity in GQL
  • Django Utils’ simplejson can’t serialize Google’s db.Model classes, so I had to proxy my model class to a simpler structure that I serialize to the client via JSON and XHR
  • No unique IDs for Google user accounts; all you have is the email address, which isn’t exactly something people are going to appreciate me passing around on URLs for the random inclusion widget. The solution was simple for what I needed: a user preference entity keyed on Google’s User db type, storing a UUID as the publishingKey. That UUID now becomes my unique ID, and it won’t change even if you change your Google account name
  • Google has a very cool AJAX Libraries API for shared hosting/serving of all the most popular frameworks like Dojo and jQuery
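The second and third snags above can be sketched roughly like this (a plain class stands in for the App Engine db.Model entity, and the stdlib json module for simplejson, since the real APIs aren’t available outside App Engine):

```python
import json
import uuid

class Quote:
    # Stand-in for a db.Model entity; simplejson couldn't serialize the
    # real thing, hence the proxy-to-plain-dict step below.
    def __init__(self, text, author):
        self.text = text
        self.author = author

def quote_to_dict(q):
    # Proxy the model to a simple structure the JSON serializer can handle.
    return {"text": q.text, "author": q.author}

def new_publishing_key():
    # Opaque per-user key (stored on a user preference entity), so email
    # addresses never have to appear in the widget's URLs.
    return uuid.uuid4().hex

payload = json.dumps([quote_to_dict(Quote("Hello world", "Me"))])
key = new_publishing_key()
```

The publishing key survives an account rename because it lives on my own entity rather than being derived from the Google account itself.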
I think I’ll use Google Code to track bugs and features for this tool rather than simply the blog, so look for those details there.

I can’t say I enjoy writing in the Blogger post interface; in fact it’s pretty frustrating. For a while there I was using Google Docs to write posts (which I loved), then I would just publish to my blog. That actually worked great until the actual publishing step, which doesn’t let you control the title very effectively and totally messed up my RSS feed even when I did fix the title. Then I tried ScribeFire, which again was really promising, but it has a cramped UI and again the publishing process was really clunky for my workflow. (Things remain drafts for me for weeks at a time.)

Anyway, I’m looking at my last post and those code samples are embarrassingly poorly formatted. Not only that, but if you check the source, the Blogger editor is introducing tons of HTML space entities, which drives me nuts considering I’m using white-space: pre on my blockquotes anyway.

I’m really inclined to just use the tools I have when it comes to this site, primarily so that I focus on writing and not tinkering. Since moving my website from a hosted environment to Blogger I have actually started to focus again on my writing and my projects rather than tinkering with a wheel that’s been built a thousand times (photo gallery scripts, PHP and Perl CGI trickery for mundane templating, etc.). So while I will probably end up spending time on this at some point, I really just want to find something that “just works” for showing code in blog posts. More to come, I’m sure.

We’ve recently put Microsoft’s Managed Add-In Framework (part of .NET 3.5) to very effective use building a plug-in system for a large ASP.NET application at work. Essentially the framework allows other developers (and our own team, for out-of-stream releases) to develop new functionality for our platform that runs the entire life-cycle of a given widget. In our case we’re talking about plug-ins being responsible for up to four ASP.NET controls in different contexts (for example, data collection and reporting as two separate controls) as well as a script injection point where plug-ins can extend the scriptability of our platform.
Going with the framework gave us a few things we didn’t have with our original design for the add-ins:
  1. Tools to help enforce the pattern
  2. An extra layer of versioning over the somewhat naive approach we started with
  3. Built-in discovery, provisioning, and a communication pipeline for serializing types and calls across the contracts that make up the interface between host and add-in
  4. And last but not least, support from Microsoft. This is more minor than the points above, but it helps legitimize our design when we are following the best practices laid out by Microsoft and used by others in similar situations. The documentation and training available also make getting other developers up to speed on the framework that much easier.
There have been numerous challenges in using the framework, but perhaps the most surprising of all for me was the human element and how simple it became over the life of the project to break the pattern by coupling components across or outside the pipeline.

Examples:
  1. Referencing an assembly from both the add-in and the host that shared code that should have been passed across the pipeline.
  2. Bypassing the pipeline completely by calling web services from the add-in code (client-side or server-side calling code).
  3. Conditional code in the host making decisions based on the type of the add-in.
  4. Loose coupling based on common knowledge (that shouldn’t be common).
These all basically come down to a breach of contract, or an absence of contract, for various operations that we needed add-ins to handle. On some level all of these things can be excused and safely done without compromising the framework if they are done right. It’s a slippery slope, though, and avoiding the temptation to sidestep the pipeline requires a commitment not to be lazy.

In the case of #1 above, the shared assembly started off very benign: essentially some shared utility code for handling URLs and some common resource tasks. Why rewrite when that code already existed in the main project? Break it off from the project so that it has no dependencies, then drop it in. Except that slowly the terrible pain of building contracts, views and adapters for every little interface or interface change drives you towards shortcuts. “Oh, I’ll just put this code here to test and then fix it later.” Even worse are those cases where you’ve chosen the path of least resistance in dealing with a bug resulting from unexpected serialization behavior across the pipeline. It only took a few weeks of not being completely on top of this before I discovered our project was littered with types being shared directly between host and add-in. Any change meant a recompilation of both projects, completely defeating the purpose.

#2 is a legitimate need in our scenario, and we’ve found ourselves needing to create proxy services that wrap our own services just to protect against the inevitable change that will follow. Given that third-party developers may be writing code for the platform, we have to make an effort to protect against change in all of our interfaces, web service or otherwise. In retrospect I think it would have made more sense to strictly enforce a team division so that no one writing add-in code was also writing host code. This probably would have gone a long way towards preventing these types of problems.

#3 and #4 are a little more insidious and harder to spot without strict code review. #3 for us isn’t technically breaking anything in terms of the interface or future versioning, but it adds cruft and generally points to a missing method or property on the interface. The last thing you need as the host is case statements littered throughout your code looking for add-ins. #4 took many forms, and in some cases it’s fine. An OK example might be sharing enums, provided they are defined in the contracts, or (slightly worse) something like a utility class. A not-OK example for me was code like this: extension.GetSetting(“Menu_Text”); which has two errors. First, “GetSetting” shouldn’t really exist, because how an add-in chooses to configure itself should be transparent to the host. Second, this code depends on the add-in having a value defined in its config file for the key “Menu_Text”. That is next to impossible to enforce and can of course easily break.

Replacing this with extension.MenuText should be trivial, and a no-brainer. When we started using the framework back in December we were rolling the supporting code by hand. To give you a sense of what this entails, this is how you would define an extension whose only job is to return MenuText as in the code above:

IExtensionContract.cs
using System.AddIn.Pipeline;
using System.AddIn.Contract;

namespace SimpleExtensionContracts
{
    [AddInContract]
    public interface ExtensionContract : IContract
    {
        string MenuText { get; set; }
    }
}

IExtension.cs
namespace SimpleExtensionContracts.AddInViews
{
    [System.AddIn.Pipeline.AddInBaseAttribute()]
    public interface IExtension
    {
        string MenuText
        {
            get;
            set;
        }
    }
}

IExtension.cs
namespace SimpleExtensionContracts.HostViews
{
    public interface IExtension
    {
        string MenuText
        {
            get;
            set;
        }
    }
}

IExtensionContractToViewHostAdapter.cs
namespace SimpleExtensionContracts.HostSideAdapters
{
    [System.AddIn.Pipeline.HostAdapterAttribute()]
    public class IExtensionContractToViewHostAdapter : SimpleExtensionContracts.HostViews.IExtension
    {
        private SimpleExtensionContracts.ExtensionContract _contract;
        private System.AddIn.Pipeline.ContractHandle _handle;

        static IExtensionContractToViewHostAdapter()
        {
        }

        public IExtensionContractToViewHostAdapter(SimpleExtensionContracts.ExtensionContract contract)
        {
            _contract = contract;
            _handle = new System.AddIn.Pipeline.ContractHandle(contract);
        }

        public string MenuText
        {
            get
            {
                return _contract.MenuText;
            }
            set
            {
                _contract.MenuText = value;
            }
        }

        internal SimpleExtensionContracts.ExtensionContract GetSourceContract()
        {
            return _contract;
        }
    }
}

IExtensionHostAdapter.cs
namespace SimpleExtensionContracts.HostSideAdapters
{
    public class IExtensionHostAdapter
    {
        internal static SimpleExtensionContracts.HostViews.IExtension ContractToViewAdapter(SimpleExtensionContracts.ExtensionContract contract)
        {
            if ((System.Runtime.Remoting.RemotingServices.IsObjectOutOfAppDomain(contract) != true)
                && contract.GetType().Equals(typeof(IExtensionViewToContractHostAdapter)))
            {
                return ((IExtensionViewToContractHostAdapter)(contract)).GetSourceView();
            }
            else
            {
                return new IExtensionContractToViewHostAdapter(contract);
            }
        }

        internal static SimpleExtensionContracts.ExtensionContract ViewToContractAdapter(SimpleExtensionContracts.HostViews.IExtension view)
        {
            if (view.GetType().Equals(typeof(IExtensionContractToViewHostAdapter)))
            {
                return ((IExtensionContractToViewHostAdapter)(view)).GetSourceContract();
            }
            else
            {
                return new IExtensionViewToContractHostAdapter(view);
            }
        }
    }
}

IExtensionViewToContractHostAdapter.cs
namespace SimpleExtensionContracts.HostSideAdapters
{
    public class IExtensionViewToContractHostAdapter : System.AddIn.Pipeline.ContractBase, SimpleExtensionContracts.ExtensionContract
    {
        private SimpleExtensionContracts.HostViews.IExtension _view;

        public IExtensionViewToContractHostAdapter(SimpleExtensionContracts.HostViews.IExtension view)
        {
            _view = view;
        }

        public string MenuText
        {
            get
            {
                return _view.MenuText;
            }
            set
            {
                _view.MenuText = value;
            }
        }

        internal SimpleExtensionContracts.HostViews.IExtension GetSourceView()
        {
            return _view;
        }
    }
}

IExtensionAddInAdapter.cs
namespace SimpleExtensionContracts.AddInSideAdapters
{
    public class IExtensionAddInAdapter
    {
        internal static SimpleExtensionContracts.AddInViews.IExtension ContractToViewAdapter(SimpleExtensionContracts.ExtensionContract contract)
        {
            if ((System.Runtime.Remoting.RemotingServices.IsObjectOutOfAppDomain(contract) != true)
                && contract.GetType().Equals(typeof(IExtensionViewToContractAddInAdapter)))
            {
                return ((IExtensionViewToContractAddInAdapter)(contract)).GetSourceView();
            }
            else
            {
                return new IExtensionContractToViewAddInAdapter(contract);
            }
        }

        internal static SimpleExtensionContracts.ExtensionContract ViewToContractAdapter(SimpleExtensionContracts.AddInViews.IExtension view)
        {
            if (view.GetType().Equals(typeof(IExtensionContractToViewAddInAdapter)))
            {
                return ((IExtensionContractToViewAddInAdapter)(view)).GetSourceContract();
            }
            else
            {
                return new IExtensionViewToContractAddInAdapter(view);
            }
        }
    }
}

IExtensionContractToViewAddInAdapter.cs
namespace SimpleExtensionContracts.AddInSideAdapters
{
    public class IExtensionContractToViewAddInAdapter : SimpleExtensionContracts.AddInViews.IExtension
    {
        private SimpleExtensionContracts.ExtensionContract _contract;
        private System.AddIn.Pipeline.ContractHandle _handle;

        static IExtensionContractToViewAddInAdapter()
        {
        }

        public IExtensionContractToViewAddInAdapter(SimpleExtensionContracts.ExtensionContract contract)
        {
            _contract = contract;
            _handle = new System.AddIn.Pipeline.ContractHandle(contract);
        }

        public string MenuText
        {
            get
            {
                return _contract.MenuText;
            }
            set
            {
                _contract.MenuText = value;
            }
        }

        internal SimpleExtensionContracts.ExtensionContract GetSourceContract()
        {
            return _contract;
        }
    }
}

IExtensionViewToContractAddInAdapter.cs
namespace SimpleExtensionContracts.AddInSideAdapters
{
    [System.AddIn.Pipeline.AddInAdapterAttribute()]
    public class IExtensionViewToContractAddInAdapter : System.AddIn.Pipeline.ContractBase, SimpleExtensionContracts.ExtensionContract
    {
        private SimpleExtensionContracts.AddInViews.IExtension _view;

        public IExtensionViewToContractAddInAdapter(SimpleExtensionContracts.AddInViews.IExtension view)
        {
            _view = view;
        }

        public string MenuText
        {
            get
            {
                return _view.MenuText;
            }
            set
            {
                _view.MenuText = value;
            }
        }

        internal SimpleExtensionContracts.AddInViews.IExtension GetSourceView()
        {
            return _view;
        }
    }
}

Yeah, seriously. One interface and one string accessor require nine classes/interfaces and over 200 lines of code (which obviously could be reduced with formatting, etc.). It’s also possible to share the views between add-in and host, but then you lose part of the more compelling robustness of the framework. If you are interested in where these classes come into play and how the add-in framework actually works, check out this link for a good description.
Anyway, I can sympathize with the developers wanting to speed up the process a bit, but the answer is not to bypass the pipeline. The answer is code generation! Thankfully, by the time we realized our mistake Microsoft had released a CTP of their pipeline generator, a nifty little Visual Studio add-in which picks up the output of the contracts project and uses reflection to find all of the contracts and generate the necessary projects and files for the pipeline. It literally saved us tons of hours and made the add-in framework actually usable. Of course the code generation is only going to work until we version one side or the other, but by that point we should have solidified those interfaces considerably, so it will matter a lot less.

Anyway, long story short: the add-in framework is great, but it’s really important for the entire team to understand the goal and be diligent in ensuring that all that extra framework code isn’t just wasted by introducing dependencies around the pipeline.

So I have a great personal distrust of, and disgust in, the way copyright law has continually degraded and been abused by large corporations over the past 30 or 40 years. (Thanks, Mickey!) I cringe at the idea of the RIAA suing people to protect their broken business model, and I laugh my ass off when bands like Metallica and Kiss make total asses of themselves while artists that are still relevant embrace new ways of engaging their fans. Anyway, that’s the context for this post. I am legitimately interested in seeing how Canada will follow other countries in protecting artists, content producers and consumers.

As much as I abhor the record industry I do think the law should reflect the reality of the new digital landscape. Content creators need to be protected, and consumers should get their money’s worth when buying or consuming copyrighted material.

Anyway, I watched these videos about the new Bill C-61, and as painful as it is to listen to Jim Prentice repeat the same meaningless quip over and over in response to questions, I did find it interesting. I’ve been really concerned about Canada following in the steps of the DMCA, and I imagine that under the covers this is largely similar, particularly where “digital locks” are concerned, but so far I’ve not heard anything really terrible.


http://www.digital-copyright.ca/node/4761  (videos)

Some highlights from the videos

  • The need for international cooperation. If it lets me watch more movies online for cheaper than going to the video store, then yeah, let’s go. How long has that been possible in the US?
  • “and of course new technologies such as mp3 players and memory sticks” — I just included that because it struck me as funny.
  • Time shifting and format shifting are preserved.
  • BUT “digital locks” chosen by businesses (i.e. stopping format shifting) will be legally enforceable and will allow those time-shifting and format-shifting rights to be circumvented.
  • Having personally rented videos on iTunes, I can see some benefit to these locks and some of the new models they enable (death to Blockbuster).
  • On the other hand, owning a bunch of iTunes tracks I can’t use on other devices makes me hate the locks. But ultimately it’s my decision what to buy, so I can’t complain too much. It’s important to let the market decide some of these issues, I think. Although how much of a market is it really when there are only a handful of really big labels and studios producing all the content?
  • No liability for ISPs is great, I think.
  • New limits on liability for “personal use” of copyrighted material: $500 PER INFRINGED WORK. They kept playing up the $500 limit as a good thing for consumers, but it sounds like downloading four movies is still a $2,000 hit, and we all know how quickly that would sink most households. In Prentice’s example this goes from five videos at $20,000 each = $100,000 down to $500 total… he didn’t seem to really have that part down, and I’m still not sure if it’s $500 per work or per incident.
  • Not that that matters, as this is totally unenforceable. The law will let companies more confidently invest in delivery mechanisms that rely on locks, with a clearer understanding of rights, but none of it will help anyone actually enforce it. (Unless the companies do it themselves, à la the RIAA lawsuits.)
  • This whole bill was rammed through right at the end of summer with little consultation; seems ugly.
  • When time shifting, you can’t store those recordings as a library. Again, very vague, and it would seemingly limit PVR software quite a bit.
  • You can’t import devices into Canada that enable bypassing locks (vague; could encompass a lot of devices).
  • Teenagers seem to be the “they” in all these videos, which makes the questioners and the lawmakers seem out of touch. I understand teenagers are big offenders, but they are far from the only ones.
  • What are the “whitewood” treaties Mr. Prentice brings up? I’m assuming I misheard him, because I didn’t see anything in my initial googling.

The business network video is the best of the bunch; if you want a summary, just scroll down and watch that one.

One of the things I was most surprised to hear was Mr. Sookman saying that time shifting and format shifting are currently NOT legal. Really? I had always been under the impression that this was actually legal in Canada.

Check out this website for more information on Bill C61
http://www.digital-copyright.ca/

So I ran into an interesting “gotchya” with C# extension methods tonight. And of course it happens at the 11th hour on a project that is being demoed at 9:00 am tomorrow. Of course.

Extension Methods

Extension methods are a really cool feature introduced in version 3.0 of the C# language. Essentially they are static methods that act like instance methods, allowing you to extend objects you don’t own. OK, seeing those words on screen makes me think this is a terrible idea, but it really does have its place. (LINQ relies heavily on it.)

When I first saw these I got quite excited because we had a number of scenarios where they would make our code much much cleaner and easier to maintain. I can name a couple examples in our own project where this has the potential to make the API cleaner.

Example 1: Helper/Utility/Static Methods

Try as they might, the .NET framework guys will never anticipate every piece of code that ends up repeated hundreds of times across your project to get around a common case. Before the framework added String.IsNullOrEmpty() in 2.0, we had StringHelper.IsNullOrEmpty(string arg) in our code base along with about half a dozen other methods. Now, string may not be the best example, because in my mind it’s a bad idea to write extensions to framework types, but it does illustrate the problem well.

In our project we have maybe a dozen of these Helper classes full of static methods (the utility pattern). They are useful, but the biggest problem is the team’s ability to consistently discover those methods. Emails and other forms of communication help, code review helps, but ultimately you end up with repeated code fragments where helpers could have been used, or even worse, competing helpers in different namespaces that need to be consolidated once discovered.

Example 2: Enhancing Functionality Based on Context

Another useful place for extension methods in our project is to have our domain model be extended differently based on the area of the application and the rights of the users executing that code. Essentially we have two sets of functionality that are implemented very differently based on context. For example, in one context our object model is mapped using an ORM to the database directly, whereas in another context that same object model is used with pre-populated data sets that are cached and completely scriptable by our end users. We still have code that needs to understand these objects across both contexts, however, leading us to create interfaces for every single object in the domain that needs to work in both. I’m a fan of interfaces, but I think in this scenario we’ve clearly lost something in terms of the DRY principle and code readability.

I’ve yet to really map out what this will look like for our project, but I see what’s been done with LINQ and am excited about how simple it could be, for example, to include a “.Scripting” namespace to attach methods to our domain model that are exposed to end users. Similarly, an “.API” namespace for our internal privileged code base, with everyone sharing the same core objects.

The Gotchya that kept me at work an extra hour

Symptoms

  • Your extension method is no longer ever being executed
  • Intellisense shows the correct method signature, reports no problems
  • Right click “go to definition” takes you to the method you think will be hit
  • Stepping through the code shows you never reach your extension method


I’ve created some fake code to illustrate the problem. Imagine we have a simple factory class providing some useful functions like so :

namespace ExtensionMethodsTest
{
    public class FactoryA
    {
        public ObjectA GetInstance()
        {
            return new ObjectA("empty object", -1);
        }

        public ObjectA GetInstance(string name)
        {
            return new ObjectA(name, 0);
        }
    }
}

But then we decide that we actually need to grab instances of ObjectA using an int id, so we add that using an extension method like so :

using ExtensionMethodsTest;

namespace MyExtendingNamespace
{
    // a container for my extensions
    public static class ExtendFactoryA
    {
        // Extend our factory to look objects up by id
        public static ObjectA GetInstance(this FactoryA factory, int id)
        {
            return new ObjectA("got by id", id);
        }
    }
}

Blamo! Now any time we’re using MyExtendingNamespace, our FactoryA includes a third way to grab an instance of ObjectA.

Here is how I am calling both of these :

private void ByIdButton_Click(object sender, EventArgs e)
{
    FactoryA fa = new FactoryA();
    ObjectA a = fa.GetInstance(42); // call extension method with int
    this.label1.Text = a.ToString();
}

private void ByNameButton_Click(object sender, EventArgs e)
{
    FactoryA fa = new FactoryA();
    ObjectA a = fa.GetInstance("gotten by name"); // call instance method with string
    this.label1.Text = a.ToString();
}

This seems OK; our calls to GetInstance are consistent and everything works. Now transplant yourself a couple of months into the future, when the original owner of the FactoryA class needs to alter the behavior to allow more scenarios for retrieving instances of ObjectA.

public ObjectA GetInstance(object proxyObj)
{
    return new ObjectA(proxyObj.GetType().Name, 0);
}

Now from the calling code everything still appears to work fine; IntelliSense continues to show the “int” signature you think you are calling, and in fact if you right-click the GetInstance call and choose “Go to Definition”, it takes you to the MyExtendingNamespace version of GetInstance.

When the code is compiled, however, overload resolution considers FactoryA’s own instance methods before any extension methods. Because our int can be implicitly boxed to object, the call fa.GetInstance(42); matches GetInstance(object proxyObj) and will never reach the extension method. Depending on the specific implementation in your code this can be a particularly insidious error, or it may fail outright. Either way, on a large project it can be a real annoyance to track down.

To me there are a few mistakes here: 1) why use an extension method here at all? 2) In general, extension methods as overloads make little sense given how the compiler matches them, and 3) GetInstance(object proxyObj) should probably take something more specific, like a ProxyBase, rather than object. (Akin to catching System.Exception: bad… be specific.)

Fixing the issue is as simple as renaming the extension method to GetInstanceById, or moving the method onto the factory itself, or fixing the factory method to not take object. Personally, I say drop the extension method.
My example is oversimplified, and in our scenario at work there was a little more justification for not changing the class that defined the original functionality. To me, though, this seems like the tip of an iceberg of problems:

  • Why negotiate with a module maintainer when I can just add functionality right here right now from my own code?
  • Why stop to understand that class and how it’s been constructed when I can just impose my view of how it should work on top?
  • Intellisense makes it easy to find the method, so I don’t have to worry about how I organize this code.

Terrifying.

When I first read Jeff Atwood’s latest post on his love of C#’s new “var” keyword, I was deeply bothered that my co-workers would find the article and latch on to the argument as justification for laziness. While I do understand his point of view, I was bothered by the idea of var statements littered throughout the code base, making things more difficult to read for the next developer.

Saving key strokes is never justification for obscuring the code base. If you want to save keystrokes improve your environment, don’t sacrifice your code.
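To make the readability cost concrete, here is a small made-up example (Repository and GetPending are hypothetical names, not from the post):

```csharp
using System.Collections.Generic;
using System.IO;

class Repository
{
    public Dictionary<string, List<int>> GetPending()
    {
        return new Dictionary<string, List<int>>();
    }
}

class Demo
{
    static void Main()
    {
        var repository = new Repository();

        // Obvious from the right-hand side: 'var' costs the reader nothing.
        var stream = new MemoryStream();

        // Not obvious: the next developer has to go find GetPending() to
        // learn what 'pending' actually is.
        var pending = repository.GetPending();

        // The explicit form keeps that information at the call site.
        Dictionary<string, List<int>> pendingExplicit = repository.GetPending();
    }
}
```

The keystrokes saved on the last line are exactly the information the next reader needed.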

I came across this post on reddit tonight that very nicely counters the post on Coding Horror. Thanks, Richard, for a voice of reason.

http://richarddingwall.name/2008/06/21/csharps-var-keyword-jeff-atwood-gets-it-all-wrong

I need to blog this basically to toss it in my archive. There have been some interesting posts on the religious debate of static vs dynamic languages. I don’t know why I always get drawn into these lines of thought, but I do. (in fact I just added a “versus” label)

I say drawn in because my underlying philosophy in all of these things is to choose the right tool for the job and leave it at that. I know, hardly original thinking, but despite the mantra and the collective nod that this is true we still get very heated on issues that are not actually at odds with each other.

I’m NOT arguing that with you. I’m not arguing that with YOU. I’m not ARGUING that with you. I’m not ARGUING that with you Harry! Harry… Harry… Yeah Harry… but can he DO the job. I know he can GET the job but can he do the job?
Mr. Waturi, Joe vs the Volcano
Still, there is fun to be had in the whole exercise. For the record, I tend towards the static languages. Despite my recent fun with Python, I have spent the last four years neck deep in C# and really am loving it. Our application uses over 75,000 lines of JavaScript, and every opportunity I have to decrease that number I will take. (Part of my excitement about Silverlight is just not having to write as much JavaScript anymore) I see the power in dynamic, I’ve done some really cool things with JavaScript and I’ve really enjoyed working in Python again…. but I believe there is less of a ceiling for static languages than there is for dynamic in terms of tool set, performance and the ability to handle large projects with large teams.

Anyway, Matthew Podwysocki presents a great summary of the debate, with links to a few bloggers, here:

http://codebetter.com/blogs/matthew.podwysocki/archive/2008/05/28/static-versus-dynamic-languages-attack-of-the-clones.aspx


The original Steve Yegge presentation is here:

http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html


And the response that I enjoyed the most is here:

http://codebetter.com/blogs/gregyoung/archive/2008/05/18/revenge-of-the-statically-typed-languages.aspx


I enjoyed Greg’s response because it just really got me thinking. I honestly know nothing about the design by contract research that’s going on, but it just feels like something that makes sense. I got excited just imagining seeing errors like the ones described showing up in our project at compile time. I believe that if we were using such a system we would see a real improvement in quality. I can see how some would see this as an unnecessary burden on the developer, but these are probably the same crowd not writing their unit tests.

Design by contract is really a great example of what I mean when I’m thinking about the potential for statically typed languages in tool sets.
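For illustration, this is roughly what contract-style checks look like in C# using the Contract class from .NET’s System.Diagnostics.Contracts (the library that grew out of Microsoft’s Spec# research); the Account class and its conditions are my own made-up example:

```csharp
using System.Diagnostics.Contracts;

public class Account
{
    private decimal balance;

    public void Withdraw(decimal amount)
    {
        // Preconditions: with the static checker enabled, some callers
        // that violate these get flagged at compile time rather than
        // blowing up at runtime.
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= balance);

        // Postcondition: the balance really did drop by 'amount'.
        Contract.Ensures(balance == Contract.OldValue(balance) - amount);

        balance -= amount;
    }
}
```

The compiler-visible contract is what separates this from a unit test: the tool can reason about every call site, not just the ones someone remembered to test.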

So I’ve actually started my blogquotes project, which was intended as a widget-style provider for random quotes (from a personal library) to appear somewhere on my site. I’m using the new Google App Engine for this project, and so far I have to say it’s pretty damn easy. From barely knowing Python to having a Django-templated, GQL-driven tiny quote engine. Granted, it’s the “hello world” of web apps, but still, considering I have an application running on Google, with built-in authentication and a datastore…. I’m pretty impressed.

I’ve played with Ruby on Rails before, but so far I’m far more comfortable with what I’m building here than I ever was with Ruby. (I won’t start detailing that here and now, as this post is more about my gadget plans)

So I have my application, and I have the beginnings of the backend for my quote providers. I just went to draft.blogger.com, where Google tests out new features, and clicked “add gadget” from the layout view of the blog. I actually thought that what I was looking at was a mistake. 43,000 page elements and gadgets? How is that useful to anyone? How am I not just making the problem worse by adding another one that no one will find and no one but me will use? This is just insane.

I did a search for “quote” on that page and it returned 13; thankfully none of them are what I have in mind. They are all basically syndication-style “[fill in topic] quote of the day” widgets. There actually seem to be a surprising number of gadgets like that: things that just don’t really allow the user to add anything meaningful or contextual to the web. I suppose a lot of people use their blogs as their home page, but for me, having the weather and a stock ticker on my blog just doesn’t make sense. On my Google home page, sure, but on millions of public blogs, not so much.

There also seem to be too many thinly disguised marketing devices and cheap-looking ad-style nothing widgets. Personally, I would like to see more of the types of gadgets that Google themselves are creating, where user-generated content is the focus (my YouTube videos, my Picasa albums, my Google Docs, etc.) as opposed to content that is one-size-fits-all for millions of blogs. At least it gives me some hope that what I’m doing is not a complete write-off.

The data won’t stick, but if you have a Google account you can log in and add a quote to the pre-alpha-hello-world version of blog quotes at http://blogquotes.appspot.com