Sunday, July 11, 2010

Modern Technology and Transparency

So the first question you may have is, “what does he mean by modern technology?” For those of us who grew up in the ‘80s or before, pretty much anything that has to do with computers is “modern technology.” But I am referring to the new toolset of the on-demand generation: Twitter, Facebook, MySpace and so on – social networking.

You see, to my kids, computers are as basic to their lives as color televisions were when I grew up – every household had at least one and eyebrows were raised if your family didn’t. My youngest daughter was better with a mouse at three years old than anyone else in my family – including me! They’ve grown up with Windows and Office and, for the most part, the Internet. And while those of us who had a hand in creating the modern computing culture are proud that we’ve brought the world within a few keystrokes, there is an unfortunate side-effect that is becoming more and more apparent as a result of these advances: e-narcissism.

Having two teenage daughters at home, it was a common dinner-time joke to ask one of them if she truly thought anyone cared that after school she was “doing homework then movie time with Jake,” as she had posted to Twitter. But when my teenage son took over the texting championship, holding his lead over my daughters every month for over a year now, I had to stop and ask myself what our kids were really learning and how the always-connected technology we were handing them was shaping their values, beliefs and personalities.

If you haven’t read “The Narcissism Epidemic: Living in the Age of Entitlement”, I strongly suggest doing so; especially if you are a parent of younger kids. I’ve lived what the authors describe and have battled for the past few years to keep my kids’ heads in the right place despite a generation that is falling victim to the narcissism of social networking and cloud computing.

But, as much as I could go on and on, it is another side-effect of this technology that has drawn my attention recently. With the proliferation of electronic communication mediums such as blogs, tweets, social networking sites and hardware that supports these tools such as cell phones, access to individuals has been greatly enhanced. And while this is awesome when we want answers or to learn something new or stay on top of what’s happening in the world, there is a flip-side. Access also leads to transparency. And by this I mean that we get a chance to learn more about someone than ever before.

Psychiatrists don’t analyze a person by reading a medical history – they listen as the person talks. This is how they get to know that person, learn about them and get inside their head. The more they talk, the more the doctor learns. This holds true in general as well. You walk up to an attractive person at a party and it is through dialogue that you determine if a real connection exists. But there is a difference between speaking to someone in a chair across the room or standing in front of you at a party and sharing your thoughts with a faceless, nameless entity on the Internet.

Sociologists, Human Resources personnel and business managers recognized this problem as e-mail became more and more commonplace in our culture. So often someone will stick their chest out a little farther in an e-mail than they ever would if in front of their target audience. And so we also see the same trend in today’s social networking. Where it crosses a line is when the target audience is known and the authors seem to forget that the Internet is open for anyone to see.

No longer is workplace gossip spread at the water cooler. Rumors fly across the Internet via tweets, comments and wall posts at the speed of light. And unless we are diligent, they are out there for everyone, including the target of the rumor or comment, to see.

I just left a position that, on paper, was one of the best jobs I’ve ever had. The group was already using Visual Studio 2010, Team Foundation Server 2010, Silverlight 4 and so on. I was programming some pretty cool applications and the people I was working with were, for the most part, excellent. I can truly say that in nearly 30 years of developing software I’ve never worked with a more intelligent group of developers on one team. And while the position was a step backwards on my career path and only used about 25% of what I brought to the table, the latter could have easily changed over time and the former was an acceptable trade-off to be part of a cutting-edge team. It was getting to know several of the people on the team that pushed me out the door.

I didn’t get to know them over lunch or while working on projects. It was during those times that I found them to be very intelligent and competent developers. No, it was what I learned about their true personalities online that shaped my decision to move on.

I don’t know if they didn’t realize, forgot or simply don’t care that their conversations on Twitter were visible to anyone. But it certainly gave me the opportunity to see their true colors.

I actually stumbled upon the conversations accidentally when I started to follow one of their blogs. Alongside the list of recent posts was a list of recent tweets. One of them caught my eye and motivated me to look further. It was a harsh criticism, full of expletives, of another of my co-workers and a task he had chosen to take on in the hopes of improving the user experience of several of our applications. Now it isn’t easy to follow a Twitter conversation if you aren’t a follower of everyone involved, but you can put the dialogue together if you review each person’s tweets.

As it turns out, four of the developers on the team had been having extensive conversations “behind the backs” of the other team members, including their opinions of those members and the work they do. Unfortunately, it really isn’t a private conversation when you hold it via Twitter. Comments suggesting that the other developers, myself included, are “kids” that “don’t know WTF they are doing” were probably not intended for my eyes – or anyone else’s on the team. But social networking provides this level of transparency and, unless you are careful, your inner thoughts may become public with the click of a mouse.

Through the arrogance and disrespect, I learned that these individuals were truly ignorant and could not be trusted. Unfortunately, they are also some of the most tenured developers in the group and the most influential with management. And while they refer to their teammates as “kids, er, architects” tongue-in-cheek, I’ve seen the fruits of their labor and they have nothing to warrant the feather in their caps. I worked with a team one-third the size of theirs that produced a production-ready, mission-critical application for the U.S. Army in a quarter of the time it took these so-called “code ninjas” to generate some of the most hacked-up spaghetti code I’ve ever seen, code that doesn’t even come close to meeting the requirements for the project. They blame their poor coding on the time crunch and pressure they were under “over the holidays,” but any developer worth his salt knows that good developers will write good code under any circumstance. Bad developers will make excuses. Arrogant developers will make fun of and blame others.

The bottom line is that I would have known none of this, and would have remained oblivious to the true feelings of these former co-workers, had it not been for the transparency that social networking provides. Depending on how you view my story, you may feel this is good or bad. I would hope the developers that made those comments on Twitter will be ashamed and see this as a negative, but I suspect that I’m an a** for writing this. I guess I’ll know when they tweet about it later, eh?

Sunday, July 4, 2010

WCF Is An Implementation

Recently I’ve found myself in a number of conversations about best practices for consuming WCF services in client applications. It is my position that WCF is an implementation detail, and that the vast majority of uses either ignore or abandon one of the core principles of Object-Oriented Programming for the sake of ease, or because developers simply don’t give it a second thought. By reminding ourselves that we should be following solid OO principles first, our best practices for using WCF become clear.

Always Code Against Interfaces
I’m sure you’ve heard it before: we should always code against interfaces rather than concrete implementations. While this sounds great in theory, there are practical benefits to this principle as well.

Interfaces allow us to decouple implementations so calling code doesn’t have to be concerned with how an operation will be performed – only that it will satisfy the contract established by the interface. This decoupling is key to Dependency Injection, where the actual implementation of the interface is determined externally at runtime. Breaking dependencies allows us to create mocks and stubs for testing purposes or change the concrete class at runtime.

In fact, interfaces provide the most robust way for us to adhere to the OO principle of polymorphism in the .NET world. Regardless of the actual type we are working with, we are able to treat the object as if it were the interface we need.
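To make this concrete, here is a minimal sketch of coding against an interface with constructor injection; the IGreetingService interface and its implementations are invented purely for illustration:

```csharp
// Calling code depends only on the contract, never on a concrete type.
public interface IGreetingService
{
    string Greet(string name);
}

// One concrete implementation...
public class FriendlyGreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name + "!"; }
}

// ...and a stub we might swap in for testing.
public class StubGreetingService : IGreetingService
{
    public string Greet(string name) { return "stub"; }
}

// The consumer receives its dependency via constructor injection and
// never knows (or cares) which concrete type it is using.
public class Greeter
{
    private readonly IGreetingService _service;

    public Greeter(IGreetingService service)
    {
        _service = service;
    }

    public string Welcome(string name)
    {
        return _service.Greet(name);
    }
}
```

Swapping FriendlyGreetingService for StubGreetingService requires no change to Greeter – exactly the flexibility we will want when we get to testing WCF clients.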

Making WCF Testable
One of the most common problems I’ve seen when consuming WCF services in client applications is testing client code without actually calling service methods. One reason for this is that most developers simply reference the proxy service class and call the desired service method(s). This creates tight coupling between client code and the service proxy. A better solution would be to have client code use an interface that represents the service; then we could mock the service for testing purposes.
Unfortunately, the lack of a readily available interface to work with serves as a deterrent for most developers, who simply relegate this to a future refactoring task. This reduces the code coverage of your unit tests and degrades overall confidence in the quality of the application.

I won’t entertain the debate between using the auto-generated proxy types created using the Visual Studio “Add Service Reference” wizard or sharing code between the server and client in this article. Suffice it to say, in either case, we still have to create an interface to serve our needs.

Back To The Point
We can now come full-circle to the point of this article: WCF is an implementation. An interface hides the actual implementation from calling code, and we need to decouple the WCF service from the client by working with an interface instead of using the service proxy directly. It follows that WCF really is an implementation detail: our client code doesn’t, and shouldn’t, care that it is calling a WCF service. From the client’s perspective, it simply needs an object that implements the interface. The implementation could be a WCF service, a mock object for testing, a traditional XML web service, a class from a shared library or some new technology or approach that has yet to be announced.

Give this a little more thought and you’ll see how this applies to many other OO principles and best practices. Decoupling the services means our client code can better adhere to the single-responsibility principle. If we want to change the service implementation from WCF to some other technology, we only have to make that change in one place, the service implementation, which keeps code DRY (Don’t Repeat Yourself!). Service implementation is also encapsulated in one place, making the code reusable and easier to maintain.

Putting It All Together
I find having actual code to demonstrate a concept helps drive the information home, so let’s use the example of a client application that is consuming a Customers service which provides simple methods to retrieve and save information about customers in our fictitious application. We will create a Silverlight application using the Model-View-ViewModel (MVVM) design pattern for the user interface.

Instead of designing the WCF service, let’s start with the use-case for our client application. When our application starts, we will display a list of existing customers in the user interface. Following the MVVM pattern, our list will be data bound to the CustomerListViewModel which uses our service to retrieve the list of customers from the server.

public class CustomerListViewModel : ViewModelBase
{
  private readonly ICustomerService _service;

  public CustomerListViewModel(ICustomerService service) : base()
  {
    _service = service;
  }
}


We use constructor injection to obtain a reference to an implementation of the ICustomerService interface used by the view model to perform service operations.

The view model uses lazy-loading to retrieve the list of customers. Because Silverlight forces us to make all calls asynchronously, we will return an empty collection initially but could just as easily return null since Silverlight data-binding will handle null values gracefully.

private ObservableCollection<CustomerViewModel> _items;

public ObservableCollection<CustomerViewModel> Items
{
  get
  {
    if (_items == null)
    {
      _items = new ObservableCollection<CustomerViewModel>();

      BeginLoad();
    }

    return _items;
  }
  private set
  {
    _items = value;

    RaisePropertyChanged("Items");
  }
}


The BeginLoad method is responsible for starting the asynchronous call to the service. Because Silverlight only supports asynchronous communication with the server, our service pattern must provide an easy way to handle all method calls. This is accomplished by passing a callback delegate to the service method that will be called when the operation completes.

private void BeginLoad()
{
  _service.ListCustomers(OnLoadComplete);
}


When the operation completes, the OnLoadComplete method is called. After checking if the operation was cancelled on the server, we translate the results and update the view model property.

private void OnLoadComplete(IEnumerable<ICustomer> customers, bool cancelled)
{
  if (!cancelled)
  {
    var list = from customer in customers
               select new CustomerViewModel(customer);

    Items = new ObservableCollection<CustomerViewModel>(list);
  }
}


One thing to note is that we are not persisting references to the data objects returned from the service. This is consistent with a pure-MVVM approach where we only ever expose view models to our UI.

As you can see, nothing in the code above requires WCF or cares if that is the technology used to perform the service operation. However, we can easily mock the ICustomerService interface for unit testing purposes. And if something changes in the way the service is implemented, we won’t have to make a single change to our view model class – as long as the contract isn’t broken.

Let’s take a look at the interface for the service our view model uses:

public interface ICustomerService
{
  void DeleteCustomer(Guid id, DeleteCustomerCompleteCallback callback);

  void GetCustomer(Guid id, GetCustomerCompleteCallback callback);

  void ListCustomers(ListCustomersCompleteCallback callback);

  void SaveCustomer(ICustomer customer, SaveCustomerCompleteCallback callback);
}


Each method accepts whatever parameters are needed to perform the operation as well as a callback method that is executed when the operation completes. The delegates are strongly-typed to make coding easier.

public delegate void DeleteCustomerCompleteCallback(bool cancelled);

public delegate void GetCustomerCompleteCallback(ICustomer customer, bool cancelled);

public delegate void ListCustomersCompleteCallback(IEnumerable<ICustomer> customers, bool cancelled);

public delegate void SaveCustomerCompleteCallback(bool cancelled);


I’ve left all error handling out of the interface which means it is left to the service implementation to handle any exceptions (or faults) that occur during the service call.

Let’s take a look at how we will implement the ICustomerService interface to make use of a WCF service for the actual operation.

public class DefaultCustomerService : ICustomerService
{
  public void ListCustomers(ListCustomersCompleteCallback callback)
  {
    var wcfService = new WcfCustomerServiceClient();

    wcfService.ListCustomersCompleted += ListCustomersCompleted;
    // The callback is passed as the userState so it is available in the completed handler.
    wcfService.ListCustomersAsync(callback);
  }
}


Here is where we finally see our WCF proxy come into play. Our service implementation also abstracts away the async pattern used by the auto-generated proxy class by wiring a local handler to the completed event and passing the callback method to the service method. The callback delegate will be passed automatically to the completed handler in the event args.

private void ListCustomersCompleted(object sender, ListCustomersCompletedEventArgs e)
{
  if (e.Error != null)
    throw e.Error;

  var callback = e.UserState as ListCustomersCompleteCallback;

  if (callback != null)
  {
    var list = Mapper.Map<List<CustomerContract>, List<ICustomer>>(e.Result);

    callback.Invoke(list, e.Cancelled);
  }
}


In the handler for the completed event, we check for any error that may have occurred on the server then obtain the callback method from the event arguments. The callback is invoked with the results of our service method call.  (I'm using AutoMapper to simplify mapping the DTO returned by the service to the interface expected by our code.)
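To show how this pattern pays off in a test, here is a sketch of a hand-rolled mock. The interface here is trimmed to a single method, and MockCustomer is an invented stand-in, so the sketch stands alone; in the real project these are the ICustomerService, ICustomer and delegate definitions shown earlier.

```csharp
using System.Collections.Generic;

// Minimal stand-ins so the sketch compiles on its own.
public interface ICustomer { }

public delegate void ListCustomersCompleteCallback(IEnumerable<ICustomer> customers, bool cancelled);

public interface ICustomerService
{
    void ListCustomers(ListCustomersCompleteCallback callback);
}

public class MockCustomer : ICustomer { }

// The mock invokes the callback synchronously with canned data: no WCF,
// no network and no async plumbing to wrestle with in a unit test.
public class MockCustomerService : ICustomerService
{
    public void ListCustomers(ListCustomersCompleteCallback callback)
    {
        callback(new List<ICustomer> { new MockCustomer(), new MockCustomer() }, false);
    }
}
```

A unit test can hand MockCustomerService to CustomerListViewModel and assert on the Items collection without ever touching WCF or the network.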

Conclusion
The point of this article was to illustrate that, from the perspective of the client application, WCF is an implementation detail, and that a simple pattern can be used that makes our code more adherent to solid object-oriented principles. Hopefully the explanation and sample walk-through have demonstrated how this approach will also make our code more testable and flexible in the process. While it may seem like additional work and code, the benefits gained far outweigh the effort required, and after putting it into place, I’m sure you’ll see for yourself how much easier it is to test and maintain your code as your application changes.

Tuesday, April 6, 2010

Model / ViewModel Relationship

I've become an avid believer in the MVVM design pattern for XAML-based applications but find certain aspects remain ambiguous as the pattern moves through its "toddler" years. One of these areas has to do with the relationship between the Model and the ViewModel.

Decoration
In many articles and examples, I see that the Model is directly exposed as a property on the ViewModel. This seems like the easy way out to me, but does fit one approach to the pattern. In this case, the ViewModel is designed to "extend" the Model by exposing additional properties and methods used by the UI - an implementation of the Decorator pattern, if you will.  But, this approach has a couple of issues:
  1. First, there might not be a one-to-one relationship between the data contained in the Model and the way the UI wants to display it.  Perhaps we want to display a person's name as their full name but the Person object only contains FirstName and LastName properties.  Sure, we could modify the Person class to expose a FullName property, but what if we don't have access to that code - perhaps Person came from an external service?  Isn't it the role of the ViewModel to make these adaptations for us?
  2. What if we have an object graph with parent-child (or even grandchild) relationships?  Wouldn't we want the ability to manage the child objects in their own view?  At which point, we'd need all that extra stuff the ViewModel offers for the child views, too.
On the other hand, let's not discount the simplicity of exposing the Model object as a property on the ViewModel.  One property and a convenient way to swap the object under-the-hood of the ViewModel without having to worry about multiple property changed events, etc.

Delegation
Another common approach I've seen is to store a reference to the Model object in a private variable and have the ViewModel delegate to the object in property setters/getters.  One advantage of this is that we can preserve any business logic that may be contained in the Model object but it does increase the code in our ViewModel significantly as we have to mimic each property we want exposed to the View.  Plus, if the Model object contains validation logic, etc. we have to include plumbing to hook that up.  And, let's not forget the need to handle property changed events from the Model so we can bubble them to the UI.

Child objects or collections are then wrapped in their own ViewModel class and exposed to the UI, thereby, supporting deep object graphs in a consistent manner.

Truthfully, I only see this as a reasonable approach if you are using rich business objects for your Model.  Simple DTOs don't contain any additional logic that provides value when delegating.  Of course, it makes it easy to swap them under-the-hood and possibly saves a step when interacting with a service to persist changes, but I don't see this as good enough justification.
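Here is a rough sketch of the delegation approach; the Person model class (with its own change notification) is assumed for illustration:

```csharp
using System.ComponentModel;

// Assumed rich model class that raises its own PropertyChanged events.
public class Person : INotifyPropertyChanged
{
    private string _firstName;

    public string FirstName
    {
        get { return _firstName; }
        set
        {
            _firstName = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("FirstName"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

// The ViewModel stores the Model privately, delegates in its getters and
// setters, and bubbles the Model's PropertyChanged events up to the View.
public class PersonViewModel : INotifyPropertyChanged
{
    private readonly Person _model;

    public PersonViewModel(Person model)
    {
        _model = model;
        _model.PropertyChanged += (s, e) =>
        {
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs(e.PropertyName));
        };
    }

    public string FirstName
    {
        get { return _model.FirstName; }
        set { _model.FirstName = value; } // the Model raises the change event
    }

    public event PropertyChangedEventHandler PropertyChanged;
}
```

Note how every exposed property costs us a delegating getter/setter plus the bubbling plumbing, which is exactly the overhead described above.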

Duplication
The other option is to use the Model as a DTO and "load" the ViewModel state from the DTO when retrieved.  The ViewModel would then manage the state internally via data-binding with the View.  For persistence, the ViewModel would create a new DTO, populate it and execute whatever method(s), presumably a service call, to save the state.

This choice suffers, like the previous one, from the additional coding required to "re"implement the various properties for the View to consume.  However, it does provide clean separation of layers, as the View knows absolutely nothing about the Model.  And, if I'm not mistaken, one of the diagrams for the MVVM pattern I've seen shows exactly that.
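A minimal sketch of the duplication approach, with an assumed PersonDto shape (and reusing the FirstName/LastName-to-FullName adaptation from the earlier example):

```csharp
// Assumed DTO shape returned by the service.
public class PersonDto
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// The ViewModel copies state in from the DTO on load and builds a fresh DTO
// on save; the View binds only to the ViewModel and never sees the Model.
public class PersonEditViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // An adaptation for the View that the DTO never needs to know about.
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }

    public void Load(PersonDto dto)
    {
        FirstName = dto.FirstName;
        LastName = dto.LastName;
    }

    public PersonDto ToDto()
    {
        return new PersonDto { FirstName = FirstName, LastName = LastName };
    }
}
```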


So, the question in my mind is what purpose a ViewModel should serve and which approach best suits that role?  While I hate recoding properties that have already been implemented in my Model object, there are some very good reasons why I think this is the right approach:
  1. Separation of layers.  The View knows nothing about the Model so changes to the Model don't affect the View.
  2. The ViewModel is there to serve a particular use-case which, following solid OO principles, should be behavior-driven, while our Model object (given today's trend toward entity-based data access technologies) is most likely data-centric.
  3. The ViewModel assumes the responsibility of reformatting, repackaging, re... the data obtained from the Model into the form required by the UI.
  4. The ViewModel adds INotifyPropertyChanged support and alleviates the Model of this responsibility.  This may not be the case if dealing with rich model objects, where we may want to delegate and have the ViewModel bubble the PropertyChanged event.
As the last point demonstrates, heading down one path often turns on itself and takes us right back to the original question/debate.  I'm still forming my opinion and welcome your thoughts, comments and observations to help me along this journey.

Thursday, February 18, 2010

Recommended Reading: Clean Code by Robert C. Martin

I am only a few chapters in but I'm already adding "Clean Code: A Handbook of Agile Software Craftsmanship" by Robert C. Martin (available at Amazon.com) to my list of required reading for my development teams.  Despite the fact that Robert bases his examples on Java, I believe the concepts ring true to the .NET family of languages and every lesson, guideline and recommendation holds value for any developer.