I’ve used this approach a few times when I need a really simple plugin/provider model within an application, so I thought I’d jot down the relevant details here for posterity, using an old project that adds post-commit hooks to Subversion.
Consider this a somewhat simplistic approach, not suitable for production code without a bit more plumbing. If you are going all out and need true add-ins for your .NET based product, I recommend checking out the Managed Add-in Framework; very robust stuff and not that hard to implement. In a lot of cases, though, the isolation, discoverability, communication pipelines etc. are a bit overkill. The example I’ll show is a Subversion hook that allows for very simple addition of new .NET “actions” to execute on post-commit. In this case the “add-ins” are only written in house, and editing a config file to hook them up is completely acceptable.
The solution
Subversion.Contracts : This project is the bridge between our dispatcher and the plugins that will do the work.

Subversion.Plugins : Any of the actions we wish to take post commit are added here, but they could just as easily be distributed across as many assemblies and projects as necessary, as long as they reference the contracts.

Subversion.Dispatcher : This is the console application that actually receives the arguments from Subversion, translates them into our contracts, then executes the appropriate actions (note: no references to the plugins project).
The Contract
The contracts are relatively simple, but whatever you put in them, this is the interface that each “plugin” will need to implement. In our case this is IPostCommitHandler :
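The interface itself didn’t survive the formatting here, so a minimal sketch, reconstructed from how the plugin and dispatcher code below use it (the PostCommitArgs members are inferred from that usage; SubversionRevision lives in the attached zip):

```csharp
using System;

namespace Subversion.Contracts
{
    // Implemented by every post-commit plugin.
    public interface IPostCommitHandler
    {
        void ExecuteCommand(PostCommitArgs a);
    }

    // Simple wrapper for the arguments Subversion hands the hook.
    public class PostCommitArgs
    {
        public PostCommitArgs(string argument, SubversionRevision revision)
        {
            Argument = argument;
            Revision = revision;
        }

        public string Argument { get; private set; }
        public SubversionRevision Revision { get; private set; }
    }
}
```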
Pretty simple, essentially just a “do whatever you want” method that passes the arguments from subversion wrapped up in a simple class. See the attached zip if you want the guts of the subversion specific stuff.
The Plugin
using System;
using Subversion.Contracts;

namespace Subversion.Plugins
{
    public class ExecuteForAllCommits : IPostCommitHandler
    {
        #region IPostCommitHandler Members

        public void ExecuteCommand(PostCommitArgs a)
        {
            SendEmailNotification.SendEmail(a.Argument, a.Revision);
        }

        #endregion
    }
}
Again, very simple, and in this case we’re passing off the execution to a static class that is also not shown, but what gets executed isn’t all that important in this case; simply fill in what you need.
The Dispatcher (Plugin Host)
using System;
using System.Collections;
using System.Text.RegularExpressions;
using System.Configuration;
using Subversion.Contracts;

namespace Subversion.Dispatcher
{
    /// <summary>
    /// Summary description for PostCommit.
    /// </summary>
    class PostCommit
    {
        private static ArrayList DispatchGlobalCommands(SubversionRevision rev)
        {
            // Handle global commands
            ArrayList commands = new ArrayList();
            for (int i = 0; i < ConfigurationSettings.AppSettings.Count; i++)
            {
                string key = ConfigurationSettings.AppSettings.GetKey(i);
                string val = ConfigurationSettings.AppSettings.Get(i);
                string[] cmdParts = key.Split(':');
                if (cmdParts.Length == 2 && cmdParts[0] == "command")
                {
                    if (cmdParts[1].StartsWith("*"))
                    {
                        DispatchCommand(val, cmdParts[1].Substring(cmdParts[1].IndexOf(",") + 1), rev);
                    }
                    else
                    {
                        commands.Add(cmdParts[1]);
                    }
                }
            }
            return commands;
        }

        /// <summary>
        /// Call the appropriate method for the command name given with the argument given;
        /// no processing of the argument happens here.
        /// </summary>
        private static void DispatchCommand(string handlerString, string argument, SubversionRevision rev)
        {
            // We don't want properly configured commands to stop working because of errors, so trap
            // everything here...
            try
            {
                if (handlerString != null && handlerString.Length > 0)
                {
                    string[] typeAndAssembly = handlerString.Split(',');
                    if (typeAndAssembly.Length == 2)
                    {
                        System.Reflection.Assembly a = System.Reflection.Assembly.Load(typeAndAssembly[1]);
                        System.Type t = a.GetType(typeAndAssembly[0], true);
                        object handler = System.Activator.CreateInstance(t);
                        if (handler is IPostCommitHandler)
                        {
                            ((IPostCommitHandler)handler).ExecuteCommand(new PostCommitArgs(argument, rev));
                        }
                    }
                }
            }
            catch (Exception)
            {
                //TODO: log errors
            }
        }
    }
}
There is some plumbing in this class that isn’t directly related to this post, but I’ve left it all in anyway. Subversion will run this command every time a checkin is made, and the process ends and starts over again each time. This allows for some pretty simple handling of loaded assemblies and whatnot; if you have a longer-running process or are dealing with some scale, be cautious. ;-)
The Main function has two jobs: parse and create the revision, then read the application configuration file and start issuing commands for the received revision. Commands come in two kinds: those defined in config to be executed always (global commands), and those that are interpreted from the Subversion commit log itself, parsed out, and executed with arguments from the revision log.
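Main itself isn’t shown in this excerpt. A rough sketch of the two jobs described above, assuming helper names like SubversionRevision.Parse and DispatchLogCommands that may differ from the real code in the zip:

```csharp
static void Main(string[] args)
{
    // args[0] = repository path, args[1] = revision number (passed by the Subversion hook)
    SubversionRevision rev = SubversionRevision.Parse(args[0], args[1]);

    // Global ("*") commands fire immediately; named commands are collected
    // and then matched against the text of the commit log.
    ArrayList namedCommands = DispatchGlobalCommands(rev);
    DispatchLogCommands(namedCommands, rev);
}
```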
Here are some example commands defined in the config
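The original config snippet was lost in formatting; here is a reconstruction of what the appSettings entries would look like, based on the parsing code above and the description below (the class/assembly pair is the plugin example from earlier; the chris/check-ins arguments are the email targets mentioned):

```xml
<appSettings>
  <!-- global commands: run for every revision, with the argument after the comma -->
  <add key="command:*,chris" value="Subversion.Plugins.ExecuteForAllCommits,Subversion.Plugins" />
  <add key="command:*,check-ins" value="Subversion.Plugins.ExecuteForAllCommits,Subversion.Plugins" />
  <!-- named command: runs only when "email:" appears in the commit log -->
  <add key="command:email" value="Subversion.Plugins.ExecuteForAllCommits,Subversion.Plugins" />
</appSettings>
```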
In the key, “command:[name]” signifies a command arriving in a revision: somewhere in the revision log we’ll see the command name followed by a colon, and anything following the colon is then passed to the plugin as an argument. If the name is an asterisk then we simply execute for all revisions, with an optional argument being passed to the plugin (so the first example emails chris for all revisions, and the second emails an account named check-ins).
The value portion here is what directs the program where to look for the appropriate plugin and class to execute. I copied the format I found in a web.config file, which is to put the class name followed by the assembly name, separated by a comma.
In retrospect if I were doing something similar again I’d probably create a better structured format rather than relying on all this string parsing… but old code is what it is in this case.
Finally we call DispatchCommand for each parsed-out command, which is the last piece of this old code that I’m attempting to document here for reuse. DispatchCommand will read the class name and assembly name, load the assembly, and attempt to instantiate the named class/type in order to call it through our IPostCommitHandler interface.
There are a few ways to do this, and for this project I’m simply calling “System.Reflection.Assembly.Load”, which relies on the fact that my plugins are located in my bin directory. I’ve also done this using a “plugin store”, which is a fancy way to say I had a dynamic path configured that I could read my assemblies from. In that case you can use LoadFile or LoadFrom; LoadFrom will load dependencies automatically, while LoadFile loads just the assembly and will potentially load duplicate copies (see the documentation). In order to get the DLLs in place for this project we simply add a post-build event like so…
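The build event itself didn’t make it into this excerpt; something along these lines (the paths are illustrative, not the project’s actual layout) copies the plugin DLLs into the dispatcher’s bin directory on every build:

```
xcopy /Y "$(TargetDir)*.dll" "$(SolutionDir)Subversion.Dispatcher\bin\$(ConfigurationName)\"
```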
If after instantiating the named type from the loaded assembly we actually have an IPostCommitHandler then make the call! Done.
System.Reflection.Assembly a = System.Reflection.Assembly.Load(typeAndAssembly[1]);
System.Type t = a.GetType(typeAndAssembly[0], true);
object handler = System.Activator.CreateInstance(t);
if (handler is IPostCommitHandler)
{
    ((IPostCommitHandler)handler).ExecuteCommand(new PostCommitArgs(argument, rev));
}
So that’s that. You can download the code here - it should basically work as is if you are looking for a shortcut to extending subversion with .NET. I was relatively lazy with getting this posted - so if you got this far, can use the code, and have problems with it leave a comment and I’ll try to help if I can.
I read The Singularity Is Near last year and really enjoyed it, despite a few misgivings about Kurzweil’s ego and some dubious use of statistics. One of the things I found myself really intrigued by was Kurzweil himself, and this movie looks like a fun look at the man and his ideas.
Do I believe him? Part of me wants to, definitely. The ultimate end-game of the singularity is fascinating and wondrous, but I actually found some of the more intermediate steps in his projections to be more fascinating. Maybe that’s just a factor of what I can relate to. One example of this was the idea that nanotechnology will lead us to self-assembling products built from base materials and an instruction set transmitted as information. So much of my life is already so information-focused that the idea of being able to go 100% information-based, and the implications for how society is structured… it’s mind-numbingly cool. Try to imagine how much energy, time and effort we put into moving goods around this planet and how incredible it would be for all of that to end.
Anyway, looking forward to renting this one when it becomes available.
Moq is now my favorite unit testing framework for .NET, and a great poster child for the power of the lambda expression support added to C#. If you are not doing unit tests or Test Driven Development you should, and if you already are and have not checked out Moq, you should.
Before Moq, my tests used NMock, a very handy tool that looks like a lot of other mock frameworks. To set up a mock call you would write something similar to this:
[Simple NMock example]
Mockery mocks = new Mockery();
IWidgetAdapter mockAdapter = mocks.NewMock<IWidgetAdapter>();

IList<Widget> mockWidgets = new List<Widget>();
Widget mockWidget = new Widget();
mockWidget.Name = "Mock Widget";
mockWidgets.Add(mockWidget);

Stub.On(mockAdapter).Method("LoadWidgets").WithNoArguments().Will(Return.Value(mockWidgets));
WidgetManager widgetManager = new WidgetManager(mockAdapter);
The ugliest thing in the expression above for me was the literal string that describes the method name that will be called. All of a sudden my fancy refactoring tools don’t quite reach all of my code and things become brittle. Sure you say, but I run these tests all the time! So it is caught right away anyway right? Yeah, but who wants to be searching and replacing these values after every refactor? Just does not feel right.
Here’s the Moq equivalent:
[simple Moq Example]
IList<Widget> mockWidgets = new List<Widget>();
Widget mockWidget = new Widget();
mockWidget.Name = "Mock Widget";
mockWidgets.Add(mockWidget);

Mock<IWidgetAdapter> mockAdapter = new Mock<IWidgetAdapter>();
mockAdapter.Setup(cmd => cmd.LoadWidgets()).Returns(mockWidgets);
WidgetManager widgetManager = new WidgetManager(mockAdapter.Object);
See that the “LoadWidgets” string disappears, and refactoring code now properly refactors tests right along with it, very very handy. Some find the need to add .Object when referencing the underlying mocked type annoying (on the call to WidgetManager) but personally I find this a small price to pay.
When I first started using Moq a few weeks ago I didn’t go much beyond that example. Which speaks well of Moq: it is VERY easy to get started without much effort, and the more advanced features really don’t get in the way of the simple ones.
For a while I was able to do a lot of the testing I had in place by Asserting on values I either had access to or were being returned to me. In those cases where the values I needed were being returned to someone else (say a Service for example) I was in the habit of building stub classes (Test Spy in this case) to handle the outgoing data.
So, using the generic service as an example, and wanting to observe and assert that I am sending the correct requests to that service, my previous code would have looked something like this:
[Test spy example]
public class AuthenticationSpy : IAuthenticationService
{
    #region Test Helpers
    public IList<RequestContext> ReceivedRequestContexts = new List<RequestContext>();
    public AuthenticationResponse ExpectedResponse { get; set; }
    #endregion

    public AuthenticationResponse AuthenticateUser(AuthenticationRequest request)
    {
        return ExpectedResponse;
    }

    public AuthenticationResponse RenewAuthenticationTicket(RequestContext context)
    {
        this.ReceivedRequestContexts.Add(context);
        return ExpectedResponse;
    }
}

[TestMethod]
public void RenewExpiredTicketTest()
{
    AuthenticationSpy _authenticationMock = new AuthenticationSpy();
    Mock<IRespondingService> _respondingMock = new Mock<IRespondingService>(); // generic argument lost in formatting; interface name illustrative

    // initialize will call authenticate() in the service wrapper
    ServiceWrapper.Current.Initialize("testing", "Password1", "http://auth", "http://resp");

    // now setup and call any method to trigger a renew of our now expired authentication ticket
    SetupCreateResponse(Guid.NewGuid());
    SurveyController.StartSurvey(new StartSurveyArgs());

    // confirm renew was actually called
    Assert.IsTrue(_authenticationMock.ReceivedRequestContexts.Count == 1);
}
This works, and in some cases the control given to you with your test spy can be really helpful, but if I can avoid it I will every time. More classes and more code means more maintenance, even if it is in the test code. So I finally read the docs on the Verify() method on Moq objects and it is awesome. ;-) Here’s the same code handled with Moq properly and without the need for a whole new class imitating the authentication service.
[using Verify example]
[TestMethod]
public void RenewExpiredTicketTest()
{
    Mock<IAuthenticationService> _authenticationMock = new Mock<IAuthenticationService>();
    Mock<IRespondingService> _respondingMock = new Mock<IRespondingService>(); // generic argument lost in formatting; interface name illustrative

    // initialize will call authenticate() in the service wrapper
    ServiceWrapper.Current.Initialize("testing", "Password1", "http://auth", "http://resp");

    // now setup and call any method to trigger a renew of our now expired authentication ticket
    SetupCreateResponse(Guid.NewGuid());
    SurveyController.StartSurvey(new StartSurveyArgs());

    // confirm renew was actually called
    _authenticationMock.Verify(cmd => cmd.RenewAuthenticationTicket(It.IsAny<RequestContext>()), Times.AtLeastOnce());
}
Not bad, eh? Again the power of the lambda expression jumps out at you: full IntelliSense and compiler support for describing exactly what you expect that method to receive. The “It” class allows for no constraint at all (“It.IsAny<T>()”) or a very precise description, as above. The “Times” check also allows you to narrow down exactly how many calls you expect. Significant savings in code and maintenance, and actually using the testing framework as intended (imagine that)! My only slight annoyance so far is having to keep count of the number of times a method has been called in order to check that the last piece of code actually resulted in a call, and not some code way earlier.
I failed to convince my manager at work that sending me and a few members of my team to MIX was a worthwhile expense in this economy. So instead I spent a couple of days this sprint with http://live.visitmix.com/ on one screen and Visual Studio on the other. I have to say, Microsoft did an amazing job with MIX in terms of getting me excited and having me “tuned in”. If you are at all interested in web development on the Microsoft stack and haven’t checked out the keynote, I’d recommend it. I really enjoyed Buxton’s presentation, and Guthrie was amusing.
So now that it’s been a week, and “the Gu” and all those dancing flashy lights are no longer influencing my opinion… I’m STILL excited about Silverlight 3. Sadly the development tools can’t be run in parallel with Silverlight 2, and we’re near the end of our sprint so we can’t afford the risk. Which is really too bad, because one of the things our current application leverages is the WCF duplex polling module, a lovely little COMET-like implementation for server push. The version of the duplex polling that made it into the Silverlight 2 toolkit was a little more bare than your typical Microsoft module. And while it works pretty well, it leaves a lot of plumbing code in the hands of the programmer, specifically a lot of asynchronous channel handling code that is a bit of a pain to deal with (though a bit educational too). Anyways, this is one of those areas that Microsoft is improving on in Silverlight 3, and one of the things I’m excited about. Right next to the simpler duplex polling usage for me is the introduction of binary serialization for web services (including duplex!). Compared to Flex and its myriad of tools and options for using AMF, Silverlight was really behind the ball on this one. When we eventually decided to build our tool in Silverlight as opposed to Flex we basically committed ourselves to rolling our own binary serialization. I’m very happy we’re not going to have to follow through on that. Read more from the web services team:
Another great addition in the realm of things-that-were-annoying-but-possible-and-already-in-Flex is the new navigation URI support within Silverlight 3. Check out Tim Heuer’s typically great post on all the Silverlight changes here. (Link specifically to the nav.)
Lastly, to round out my list of really exciting enhancements to SL3, there’s the network monitoring API, which gives developers events to subscribe to for detecting when the network is and isn’t present, as well as assembly caching, which is huge: it allows Silverlight to cache assemblies like the toolkit so that once a user has downloaded one they don’t necessarily have to download it again until a new version is required. This in turn makes XAPs smaller, which is always a good thing.
So to summarize, I think the top five features from the slew of enhancements that I’m looking forward to are :
Binary Serialization
Duplex polling enhancements
Network detection API
Assembly Caching
Navigation and Deep Linking support
My perspective on Silverlight is very biased to the needs of our application of course. And our application will live and die on the network, with performance being a top concern in everything we do. Controls are nice but we can buy those from vendors like Telerik, animation and media are cool for demos but likely won’t do much for us in the short term. The out of browser story is huge, but again with a SaaS app that relies on the network we don’t envision a whole lot of offline work happening in the early versions of our app.
Honorable mentions for features go to GPU acceleration (performance) and the SaveFileDialog (control) and Expression Blend 3. I don’t use Blend much myself, but the current version is a huge pain for our team. Maybe more on that in a separate post.
I once heard an interesting anecdote about how to make a difficult decision between two paths. When you find yourself spinning, alternating between one choice and then the other, it can be helpful to simply assign each choice “heads” or “tails” and flip a coin. When you reveal what side the coin landed on pay attention to your emotional reaction… are you relieved or are you disappointed? Try it sometime, it really can work.
I recently spent about three weeks or so doing an in-depth analysis of Adobe Flex vs Microsoft Silverlight for an enterprise application and I really feel like I ultimately decided via the coin flip method (without actually flipping the coin). Our company is about to embark on a new product aimed at the enterprise that will require levels of functionality and control that Ajax alone can not provide. We are essentially looking to take a workflow that has been heavily dominated by Word and Outlook and drag it into the future with real-time collaborative tools in the spirit of Google Docs.
I ended up choosing Silverlight, despite the potential risk adoption may pose. At the end of the day we believe our target market will be willing to accept the Silverlight install process, and that the underlying engine (.net) provides far more robustness for building the kind of application we’re looking to build. Honestly this is a whole other post, but the nail in the coffin for Flex ended up being the lack of threading support for developers. On nearly every other level the two were neck and neck, with very subjective “wins” for either and Flash being the clear winner when it comes to adoption etc.
What’s interesting though is that my first choice was Flex. After weeks of agonizing I decided we needed to build this thing in Flex, working around the lack of threading where necessary and going with the safe route of next to zero adoption barriers. It only took a weekend after making that decision to flip-flop. I was supposed to be making the call as if this were my company on the line, and with a clear vision of the unknowable future… at the end of the day though taking the safe and compromised route just didn’t feel right. I could see the complexity of our application snowballing in the future, I could see the legacy of the flash runtime catching up with us, I could see a competitor choosing to build their offering in Silverlight and spanking us in the next year. Making the decision from a technical standpoint the only winner was Silverlight, if the business deemed the adoption risk too great then fine we could do Flex, I was prepared for either.
With my proverbial flip of the coin, those three weeks of opinions and research and testimonials and flame wars all gelled together once I had made an actual commitment to choosing Flex. It was only then that my gut told me what I needed to know, and I have not looked back from Silverlight since.
The post saved me a ton of time. It’s a bit embarrassing for Adobe in my mind to ship something this buggy. I was seriously running into these issues within an hour of trying to connect Flex to our .NET Soap based services.
“MyMethod can’t return an object of with the type name MyMethodResult.”
You’re fracking kidding me right? Wow. (and there are more along these lines)
After fighting with the above and other bugs I was rewriting a lot of the generated code from FlexBuilder and it was just pointless. And sure, generated code isn’t the greatest to rely on anyway, but give me a break. In the end I used the WebORB presentation server to handle the communication to our .NET code, as well as for generation of the initial proxy classes for the client and I have to say it was an excellent experience compared to the crap built into FlexBuilder.
This is a great article about the myth of how the best technology doesn’t necessarily win. Granted, sometimes the best technology does not win, but there is a persistent and pervasive sense that the populous often chooses the “VHS” over the far superior alternative. The article addresses the VHS vs Beta debate directly as well as the victory over Dvorak by QWERTY. To encourage you to read the original I won’t reveal the clever arguments made.
I’m posting this because there seems to be a real sense of fait accompli when it comes to the Flash vs Silverlight debate. Critical mass has already been achieved; why would content producers or development shops choose to target any platform other than the Flash runtime when users have clearly already made their choice? How could Beta possibly make a resurgence against an already entrenched VHS? It would take an entire round of evolution before DVD would come along and supplant the status quo. There are a couple of reasons why this article has relevance for Silverlight, and why the VHS / Beta argument doesn’t hold water.
Flash vs Silverlight is about a producer investment in technology, NOT a consumer investment. Machines are powerful enough, and installations simple enough, that the relative cost of owning both technologies is nothing like owning two pieces of hardware.
If there is a competitive advantage for a producer to be gained via a specific technology they will use it. Any differentiators in a competitive field like software has a high potential of making a return. This is a very different decision process than it is for consumers.
Consumers don’t really care or even know which technology is driving their rich content. They care that it “just works” (like flash based video in comparison to WMP or Quicktime) and that the functionality they desire is there. Without a right-click most users won’t even realize which is which behind the curtain once they have both installed.
“Owning” everyone (high adoption) is really not that big a deal when your competition can also have 100% adoption at the same time. This is not like choosing a computer or an operating system. Only Microsoft can prevent themselves from achieving their penetration goals.
Better technology does win. I’m not saying that Silverlight is necessarily the better technology right now, Flash maintains an edge on some specific rendering speeds it appears, and their designer tools are clearly better… but Silverlight has the benefit of coming at this with second mover advantage. They didn’t start from scratch, they built out a proven technology (.NET) into new ground by largely copying and improving on the entrenched technology. (sure looks copied from my perspective but that’s a different post) The .NET runtime, threading, compiled/managed code and the lack of legacy in Silverlight will all combine to produce demonstrations of browser based technology that will be extremely difficult and expensive to reproduce on the Flash runtime.
Silverlight does not have to “kill” Flash to win, it only needs to join Flash in the 90% adoption numbers to be a great success.
I like both technologies by the way; I’m just entertained by some of the almost religious-like statements from those on the Flash side that sound a lot like any attempt to improve or even add to the status quo is a total waste of time (or somehow an affront to their own efforts).
Silverlight 2 may not have the control set that Flex developers are used to seeing out of the box but there are a significant number of control vendors who are stepping up to the plate to fill the void. It seems as though Microsoft’s strategy has been to get the Silverlight 2 runtime out as quickly as possible (and as lean as possible) always knowing that this type of extension to the framework would exist.
I do still hope to see Microsoft push a little further in controls that are downloaded once and only once with the framework itself thereby making our applications leaner - but it’s a pretty serious tradeoff until the runtime has the kind of penetration that Flash enjoys.
Anyway, here’s a nice post from Tim Heuer that does a good round up of where to find those missing controls.
I just bought a 1TB external HD, the “Maxtor OneTouch 4 Plus”, this weekend on sale at London Drugs. A bit of an impulse purchase, but I’ve been digitizing all of our DVDs lately into iTunes and had completely run out of space…
The drive has a bunch of automated backup features I’ll never use, so I skipped all the software and went to use the drive directly from Mac OS X. The first step here is to convert the drive from its default format of NTFS to something the Mac can natively use. My impulse was to simply “format” and continue, but unfortunately every time I tried, Disk Utility would abort the format with nothing useful showing in the console logs. I went through this a few times with different file system settings and nothing worked. (See this ArsTechnica link for how to choose your filesystem.)
Odd.
I then tried to partition the drive, and with two or more partitions it would all of a sudden work. Again odd; I didn’t want more than one partition in this case, but reverting back to one partition would cause the same problem all over again. I resorted to Google at this point and came across this very useful, although somewhat poorly formatted, post on the Seagate forums. (It seems to be a generic problem with Disk Utility.)
The gist of the problem is that the default partition table format needs to be changed to GUID. You can apparently only achieve this by partitioning (in this case to two partitions) so that the partition table format setting becomes changeable, and then partitioning again back to one partition with the new partition table format intact. Annoying, but easily worked around once you stumble on the right answer.
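For what it’s worth, the same workaround can be done in one shot from the command line with diskutil, which lets you specify the GUID partition table directly (the disk identifier and volume name here are illustrative; check the output of the first command before running the second, which erases the drive):

```
# find the external drive's identifier (e.g. /dev/disk2)
diskutil list

# repartition with a GUID partition table and a single Journaled HFS+ volume
diskutil partitionDisk /dev/disk2 1 GPT JHFS+ "Media" 100%
```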
Note this issue is actually new to 10.5.* and you can also solve the problem by formatting from an older version of Mac OS X (from a boot CD for example).
So at the beginning of the year I was tasked with evaluating a number of technologies for RIA development for the next evolution of my company’s product. Up to this point we had been relying extensively on ASP.NET forms with a traditional post-back model that was responsible for a lot of wasted time and bandwidth. We’ve leveraged a lot of Ajax in the past few years, starting with simple fixes like trees and list based controls that use load on demand and going all the way up to full fledged single page applications that consumed purely services.
This has worked, but the cost is overwhelming for a development team of our size and makeup. We hire smart generalists for the most part, favoring developers with C++/Java/C# backgrounds. Some of our developers have acquired some deeper skills on the client side, but where possible we attempt to leverage control vendors like Telerik and ComponentArt as much as possible. They do an excellent job of hiding some of the complexity involved in cross-browser web interfaces, but you will inevitably have to “hit the metal” and get your hands dirty. Relying on third parties also removes a lot of the control needed to do things the way you need them done. Regardless, despite being a huge fan of the http://docs.google.com suite of tools, I have witnessed far too much ugliness in our organization with supporting multiple browsers (including having to support IE 6) and pushing the limits of complicated UI in the browser. As the size of the DOM increases and the size of our data sets increases, we see wild variance in client performance with respect to things like drag and drop. I know it can be done, I know we are not at the limit yet, but seriously, this is not pragmatic for our software and our market and our developers. I am a big fan of the view that JavaScript is becoming the assembly of the web; those who do this shit well do it well by lifting themselves out of the muck with good abstractions like GWT.
One thing I think I should add here in defense of Ajax, though: UI design plays a really important role in the effectiveness of the DHTML approach, and honestly I believe part of our problem has been designing a far richer interface than we could afford in the technology we were leveraging at the time. Take a close look at Google’s lack of decoration, images, etc. These things certainly matter.
Next steps… evaluation
Anyway, I’m getting off topic as usual. In the beginning of 2008 my feature matrix analysis really narrowed our options from about a dozen technologies (including XUL, ActiveX, Applets, JavaFX, Silverlight, ClickOnce, Ajax, Flex) down to three: Silverlight, Flex, or Ajax. At the time of my evaluation Flex was at version 2, JavaFX was vapourware, and Silverlight 2 was in beta. Given that we are a .NET shop and already have the C# programmers, the Silverlight option was looking like it would cleanly win out over Flex. Ajax was honestly only at the table still because we needed to justify our position and show we had clearly evaluated all our options. Flex was seen as less desirable due to being based on ECMAScript and the need to retool and retrain.
For the most part we’ve seen these as two relatively equivalent technologies with different stories for the developers. While there are important differences between how code is delivered and executed in Flex vs Silverlight, at a high level we believe we can technically deliver our application in either technology very effectively. We prefer to keep working in C#, but the limited penetration of Silverlight is a serious risk for an application delivered in a SaaS model. That single fact has transformed the whole exercise into largely a business decision. I don’t doubt Microsoft will be able to push their offering significantly, but I would not bet money on where they will be in one year. (Windows Media Player STILL doesn’t equal Flash in penetration.)
Tool support, however, remains something that is extremely important to developers, and is one of those things Microsoft often trots out in arguing the superiority of their platform. We swallowed that line pretty easily at first; knowing that under the hood all the code written for Flex is just a variation of ECMAScript (JavaScript) was enough to scare us off. How can you achieve the refactorability and tool support provided by current and future versions of Visual Studio with a loosely typed language like ActionScript?
Trying it out
This week I downloaded FlexBuilder 3 after one of our senior executives set up a call for us with Adobe evangelists to get more details on why to go with Flex. Again the motivation for this comes back to penetration, and wanting to ensure we are making the right decision for what will become a million-plus dollar initiative to re-engineer. I wanted to get some hands-on time with the latest version of FlexBuilder (3) that had come out since our initial research.
I was immediately surprised by the leaps Flex had taken since I last really dived in. I’ll admit there was some bias here though as I am also a huge fan of Eclipse, so the fact that FlexBuilder is built on Eclipse is in my mind a huge win. (not new btw)
The effort in actually building an application that connected to our existing .NET web services was embarrassingly trivial. FlexBuilder has a simple tool for generating and managing proxy classes to represent your web services. So after literally pasting a URL into a wizard, I had code for talking to our .NET SOAP based web services (it seemed to only support SOAP 1.1, not 1.2). I then got started with the form designer and had a simple application talking to our backend in under an hour, even counting the little things that tripped me up, like where to add my event handlers, which wasn’t immediately apparent. (Too reliant on double-clicking controls apparently ;-) Hint: <mx:Script> tags and DOM-style event callouts.)
The concept of states in Flex, and the ease with which I was able to create a number of them in the designer and bind them to a dropdown for switching between them, was pretty eye-opening. A state in Flex is defined by the differences between your main UI (or just another state) and the state you wish to be in. The IDE allows you to visually manage these states and then visually modify each one to represent application states. I don’t have an early sense of whether this actually scales for complex applications, but at first glance it’s very cool (think hierarchical state machine). Couple this with the data binding model and you have some very effective UI management tools at your disposal. Maybe this only looks cool coming from our antiquated ASP.NET approaches, but this stuff is exciting. (Silverlight/WPF have the same capability, maybe even a little more advanced, but with more overhead in my opinion.) Having your model drive all changes is so much more manageable, scalable… and just correct than having explicit assignments in page PreRender methods that set visibility based on the state of that model. Barf.
The control toolkit out of the box with Flex is also extremely impressive. Check out this post for a list of all the FlexBuilder 3 controls included out of the box. For now at least, this control set will mean being far more productive in the early stages of development than if we were either having to roll our own or rely on third-party vendors. And of course you can roll your own in both Silverlight and Flex, and each can be just about anything imaginable.
So I’m sold, at least sold on the fact that Flex deserves considerably more attention than what we had previously given it. I’ve bought the “Flex 3 Cookbook” and “Adobe Flex 3: Training from the Source”, and I intend to spend at least some of this Christmas holiday catching up on just what’s possible with that silly little Flash technology.