Build your own CAB II (reposted from Jeremy D. Miller)

Build your own CAB Part #7 - What's the Model?
First, go catch up on what's come before:
Preamble | The Humble Dialog Box | Supervising Controller | Passive View | Presentation Model | View to Presenter Communication | Answering some questions
What's the Model?
I've spent most of the series talking about the View or the Presenter, but the Model piece of the triumvirate has a role to play as well.
A couple years ago I participated in a session on design patterns for fat clients that Martin Fowler was running to collect data for his forthcoming sequel to the PEAA book.  One of the topics that came up in conversations afterward was whether or not it was desirable to put the real Domain Model on the client, or to use a mirror version of the Domain Model that you allow to have some user interface specific functionality.  In other words, is the Domain Model pattern applied to the user interface a completely different animal with different rules than the traditional POCO Domain Model on the server?  We didn't come up with any kind of a consensus then, and I've never made up my mind since.
As I see it, you have four choices for your Model:
Domain Model Class - Just consume the real application classes.  Many times it's the simplest path to take, and it leaves you with the fewest moving parts.  My current and previous projects both used this approach with a fair amount of success.  In my current project our Domain Model classes implement quite a few business rules that come into play during screen interactions.  The downsides to consuming the real Domain Model in the View are that you're binding the View more tightly to the rest of the application than may be desirable, and that you risk polluting the Domain Model with cruft to support INotifyPropertyChanged and other user interface needs.  Part of the reason I'm perfectly happy to consume Domain Model objects directly on my project is that our screen design doesn't require any special UI support for our custom data binding solution.
Presentation Model - Even if you're largely following a Model View Presenter architecture, the Presentation Model is still useful.  Think of this realistic screen scenario:  the real domain objects are an aggregate structure and your View definitely needs some INotifyPropertyChanged-type goo.  In this particular case a Presentation Model that wraps and hides the real domain objects from the View is desirable (see the sketch after this list).  The Presentation Model probably provides a flattened view of the domain aggregate to make data binding smoother while implementing much of the UI infrastructure code, allowing the domain classes to take the shape that is most appropriate for the business logic and behavior and keeping them from being polluted with UI code.  I should note that using a Presentation Model will potentially add extra work to synchronize the data with the underlying domain objects.
Data Transfer Object - Forget about behavior and just use a lump of data.  This is often a result of hooking the user interface directly to the return values of a web service.  In my previous application we wrote an absurd amount of mapping code to map data transfer objects to domain objects and vice versa, and I think we all felt like it ended up being a huge waste (we *really* needed to isolate our client from the backend, so it was mostly justified).  This time around I'm honestly quite content just to bind some of the user interface directly to the Data Transfer Objects returned from our Java web services.  I feel a little bit dirty about this, but this approach is dirt simple.  It does couple us a bit more to the server than I would normally like, but I'm hoping to compensate with a fairly elaborate Continuous Integration scheme that does a fully integrated build with end to end StoryTeller/Fit tests anytime either codebase changes to detect breaking changes.
DataSet/DataTable/DataView - See below.
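Just to make the Presentation Model option a little more concrete, here's a minimal sketch.  The Invoice and Customer classes and the property names are made up purely for illustration; the point is that the wrapper owns all of the INotifyPropertyChanged plumbing and presents a flattened, bindable surface while the domain classes stay clean.
using System.ComponentModel;
public class Customer
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }
}
public class Invoice
{
    private Customer _customer = new Customer();
    private decimal _amount;
    public Customer Customer
    {
        get { return _customer; }
    }
    public decimal Amount
    {
        get { return _amount; }
        set { _amount = value; }
    }
}
// Hypothetical sketch: the Presentation Model wraps the Invoice aggregate,
// flattens it for data binding, and keeps the change notification goo
// out of the domain classes
public class InvoicePresentationModel : INotifyPropertyChanged
{
    private readonly Invoice _invoice;
    public InvoicePresentationModel(Invoice invoice)
    {
        _invoice = invoice;
    }
    public event PropertyChangedEventHandler PropertyChanged;
    public string CustomerName
    {
        get { return _invoice.Customer.Name; }
        set
        {
            _invoice.Customer.Name = value;
            raisePropertyChanged("CustomerName");
        }
    }
    public decimal Amount
    {
        get { return _invoice.Amount; }
        set
        {
            _invoice.Amount = value;
            raisePropertyChanged("Amount");
        }
    }
    private void raisePropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
The View binds to the Presentation Model, and the Presentation Model is the only class that has to care about property change events.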
Somebody has to ask, what about DataSets?  I despise DataSets in general, but there's no denying that sometimes the easiest way to solve a problem is to revert back to Stone Age techniques and use them.  There's a memorable scene from the first Lord of the Rings movie when Gandalf is speaking to Frodo very dramatically of the One Ring - "It wants to be found."  It's the same way in WinForms.  The user interface widgets often want to consume a DataSet.  The entire WinForms UI toolkit was originally wrapped around a very data-centric view of the world and sometimes you just go along with it.
Alright, it's not that bad.  I will happily use a DataSet/DataTable/DataView in my user interface as the ostensible Model any time it's the easiest way to use a UI widget or when I want to take advantage of the sorting and filtering capabilities of a DataTable (I think that equation changes when Linq to Objects hits).  The Data* classes aren't my real domain though; I generally convert my real classes to a DataTable just in time for display.  And no, a DataSet-centric approach doesn't buy me much, because as I'll show in the next section the Model has real responsibilities beyond just being a dumb bag of data.  Plus there's the little issue that DataSets are not interoperable, and we're using web services written in Java for our backend services.
While a DataSet makes the data binding generally very simple, you're left with the usual drawbacks of a DataSet.  You can't embed any real logic into the DataSet, so you have to be careful about duplication of logic.  If you insist on a DataSet approach, I'd recommend giving the Table Module approach some thought as a way to centralize the related business logic for a particular set of data to avoid duplication.  Personally, I think DataSets are clumsy to use inside of automated tests in terms of test setup.  A strongly-typed DataSet helps to at least get some Intellisense, but they annoy me as well.  Plus you'll often find yourself building data that's meaningless to the test just to satisfy the referential integrity rules of a DataSet.  That's wasted effort.
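To illustrate the "convert just in time for display" approach, here's a small, hypothetical helper.  It assumes a Trade class exposing Counterparty (string) and Amount (decimal) properties, which are made up for this sketch; the real domain classes stay clean while the grid widget gets the DataTable it wants.
using System.Collections.Generic;
using System.Data;
public static class TradeTableBuilder
{
    // Hypothetical sketch: flatten domain Trade objects into a DataTable
    // just in time for display, sorting, or filtering in a grid widget
    public static DataTable ToDataTable(IList<Trade> trades)
    {
        DataTable table = new DataTable("Trades");
        table.Columns.Add("Counterparty", typeof(string));
        table.Columns.Add("Amount", typeof(decimal));
        foreach (Trade trade in trades)
        {
            table.Rows.Add(trade.Counterparty, trade.Amount);
        }
        return table;
    }
}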
Build your own CAB Part #8 - Assigning Responsibilities in a Model View Presenter Architecture
First, go catch up on what's come before:
Preamble | The Humble Dialog Box | Supervising Controller | Passive View | Presentation Model | View to Presenter Communication | Answering some questions | What's the Model?
Where should this code go?
In a post last winter I said that a coder only becomes a true software craftsman when they start asking themselves "where should this code go?"  It's a never-ending question that you use to guide your designs and assign responsibilities.  In a typical Model View Presenter architecture the screens are composed of 4 basic types of classes ("M", "V", "P", and the Service Layer).  When you're designing a screen along MVP lines, you need to assign each responsibility of the screen to one of these 4 pieces, while also determining the design constraints upon each piece.  The question of "who does what?" isn't perfectly black and white, and there are many variations on the basic structure.  That being said, here's my best advice on the duties and constraints of each of the four pieces.  As a rule of thumb, try to put any given responsibility as close to the top of this list as possible without violating the design constraints of each piece.
Service Layer
The Service Layer classes are generally a Facade over the rest of the application, typically encompassing business logic and data persistence.  In general it's important and valuable to decouple the view layer from the rest of the application as much as possible to create orthogonality between the backend and the user interface.  If you find the Presenter collaborating with more than a single Service Layer class you might think about putting a single Facade over the two services for that screen.
In terms of constraints, the Service Layer is not aware of anything specific to any one screen, but it is perfectly permissible for the Service Layer classes to be aware of the Model and perform work on the Model.  If you've chosen to use actual Domain Model classes for the Model in the screen, the Service Layer is probably just some sort of Repository class.  You might think of it like this:  say you're building a monolithic application today, but tomorrow you want to expose web services to interact with the system as an alternative to the user interface.  The Service Layer classes should be able to work within the web service code without any change.
The Service Layer should encompass any type of business logic that is not specific or tightly coupled to a specific screen.  In my current system there is going to be a validation on certain date values against a calendar service that compares the selected date against the business days for a particular country.  If I were to embed that logic into the Presenter for my first Trade screen the logic wouldn't be very accessible to the following Trade screens.  That calendar logic definitely belongs in the Service Layer.
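As a hypothetical sketch of a screen-scoped Facade over the Service Layer, the interface below is the sort of thing a Trade screen's Presenter might depend on.  The name, the methods, and the calendar rule signature are all assumptions based on the description above, not code from the real project.
using System;
// Hypothetical sketch of a screen-scoped Service Layer facade.
// The Presenter talks only to this interface; the implementation can
// delegate to repositories, web services, or the calendar service.
public interface ITradeScreenService
{
    Trade LoadTrade(long tradeId);
    void SaveTrade(Trade trade);
    // Business-day validation belongs here rather than in any one Presenter,
    // so every Trade screen can reuse it
    bool IsBusinessDay(DateTime date, string countryCode);
}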
Model
The Model can be more than just a lump of data.  For a multitude of reasons, I want my Model class to implement most of the business rules for the given business concept.  The first reason is simply cohesion.  As much as possible, I want all of my business rules for an Invoice or a Trade or a Shipment in the domain class for that particular domain concept.  If I want to look up the business rules for an Invoice, I want a single place to go look.  One of my primary design goals is to put code where you would expect to find it, and to me that means that invoicing business rules go in the Invoice class.
The second reason is reuse or elimination of duplication.  If you make a Presenter or the View itself responsible for business rules you're much more likely to find yourself duplicating the same business rules across multiple screens.  Those declarative validation controls in ASP.Net are sure easy to use, but there's a definite downside because the responsibility for validation is in the wrong place.
For example, the Model classes in my current system are responsible for:
Validation rules.  The validation rules for a domain concept should definitely be kept in the Domain Model.  Validation logic is most definitely a business rule.  Besides, it's just so common to have multiple screens acting upon the same classes.  You don't really want to have to duplicate validation rules from a "Create Trade" screen to the parallel "Edit Trade" screen, do you?  If nothing else, putting validation rules in the Domain Layer makes these rules very simple to test compared to the same rules living in either the View or even the Presenter.  I've got a lot more to say on this subject in the next post on the Notification pattern.  In the meantime, Jean-Paul has a good post on Validation In The Domain Layer - Take One and again Validation In The Domain Layer - Take Two.
Default values.  My last two projects have included quite a bit of logic around assigning default values.  The normal scenario is that the user selects one value, and from that one value you can determine logical defaults for one or more other fields.  I have a little business rule to code tomorrow morning that will automatically set the values for two date fields to 2 days after the Trade Date as soon as the Trade Date is selected for the first time (see the sketch after this list).  That rule is going right into the appropriate Trade subclass, both because it's where I'd expect to find that code and because it makes that business rule very easy to drive with Test Driven Development.  Because the rule is wholly implemented in the Model class and the Model class is completely decoupled from the View and Service Layer, I can test these rules with purely state based testing.  For this approach to work I do need the screen to automatically synchronize itself with the Model when the Model changes, but the WinForms tooling makes that kind of Observer Synchronization relatively simple.
Calculated values.  Again, this is dependent upon using Observer Synchronization, but I place the responsibility for calculating derived values into the Model class.  It's a business rule, and the Model class has all of the data, so it's the natural place for this logic.  As with defaulted values, the calculation can be relatively simple to test in isolation.
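Here's a hypothetical sketch of that Trade Date defaulting rule.  The Trade class and the property names are made up for illustration; the point is that the rule lives entirely in the Model class.
using System;
// Hypothetical sketch of a defaulting rule living in the Model class itself
public class Trade
{
    private DateTime? _tradeDate;
    private DateTime? _startDate;
    private DateTime? _endDate;
    public DateTime? TradeDate
    {
        get { return _tradeDate; }
        set
        {
            bool isFirstAssignment = (_tradeDate == null);
            _tradeDate = value;
            // Business rule: default the other dates to two days after the
            // Trade Date the first time a Trade Date is selected
            if (isFirstAssignment && value != null)
            {
                _startDate = value.Value.AddDays(2);
                _endDate = value.Value.AddDays(2);
            }
        }
    }
    public DateTime? StartDate { get { return _startDate; } set { _startDate = value; } }
    public DateTime? EndDate { get { return _endDate; } set { _endDate = value; } }
}
A purely state-based test just constructs a Trade, sets TradeDate once, and asserts on StartDate and EndDate; no View or Service Layer is involved.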
The constraint on the Model is whether or not we allow the Model classes to call out to any kind of service, and if so, how much coupling we allow with other services.  In my current project I don't allow any direct coupling from my Model classes to any external system.  If a business rule for defaulting or calculating a value requires information or a service from outside the Model, I would partially move that responsibility out into the Presenter to keep the Model decoupled from infrastructure.  The Model classes don't know how they're displayed nor how they're persisted and updated.  In other systems you might opt for more of an Active Record approach that puts some form of service communication responsibility into the Model classes.
The Presentation Model approach isn't really an exception to these constraints because the Presentation Model itself usually wraps the actual Model.
View & Presenter
The screen itself is split between two main actors, the View and the Presenter.  As we've seen in the previous posts on the Supervising Controller, Passive View, and Presentation Model, there are a couple of different ways to split responsibilities between the View and Presenter.  Rather than rehash those posts, I just want to add a few more thoughts:
Don't allow the View to spill out into the Presenter.  There might be exception cases, but don't allow any reference to any Type in the System.Windows.Forms namespace from the Presenter class.  Any WinForms mechanics in the Presenter is like letting water splash out of the tub and rot the floor (see the sketch after this list).
Keep the View as thin as possible.  Always ask yourself if a given piece of code could or should live outside of the View.  My advice is to move anything out of the View that doesn't have to be there.
Don't allow the Presenter to become too big.  It's far too tempting to use the Presenter as a receptacle for any random responsibility.  If you see parts of the Presenter that don't seem to be related to the rest of the Presenter, do an Extract Class refactoring to move that code to a smaller, cohesive class.
There is no ironclad rule that says 1 screen == 1 view + 1 presenter.  It's often going to be valuable to reduce the complexity of either the Presenter or View by breaking off pieces of the screen into smaller pieces.  There may still be one uber-Presenter and one uber-View for the entire screen, but don't hesitate to make specific Presenter or View classes for a part of a complicated screen.  There's also no ironclad law that says n views == n presenters either.
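As a minimal, hypothetical sketch of the first point, the Presenter below depends only on a view interface; the names are made up, and the concrete Form that implements ITradeView would be the only place a System.Windows.Forms type appears.
// Hypothetical sketch: no System.Windows.Forms type appears in the Presenter
public interface ITradeView
{
    string Title { set; }
    void ShowErrorMessages(string[] messages);
}
public class TradePresenter
{
    private readonly ITradeView _view;
    public TradePresenter(ITradeView view)
    {
        _view = view;
    }
    public void Start(string tradeNumber)
    {
        // All WinForms mechanics live behind ITradeView in the concrete Form
        _view.Title = "Trade " + tradeNumber;
    }
}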
Getting back to first causes, it all comes down to two things:
Is each piece of the code easy to understand?
Is the code easy to test?
Answer these two questions in the affirmative and I think your design is effective.
Where Next?
That's the end of the 100/200 level material on "Build your own CAB."  Now we get to move on to the cool parts - which will undoubtedly prove more difficult to write.  The next post will be on Domain-centric validation with the Notification pattern.  After that I think the consensus is to go into building the Application Shell (2-3 posts), then my MicroController stuff (2-3 posts), and automated testing (???).  Bringing up the rear will be event aggregation and synchronization with the backend.  I am leaving Pluggability until last, but that's because I'm going to slip in a StructureMap 2.1 release at the end of June and demonstrate a couple of new features.
I'm hoping to wrap this up no later than the first week in July.
Build your own CAB Part #9 - Domain Centric Validation with the Notification Pattern
Buckle up, because this is going to be a long post with more pedantry than you can shake a stick at.  As I was writing this I found some good examples of some of the design principles related to Orthogonal Code, so I made some digressions just to bring up these concepts in the context of real code.  The majority of the sample code is the very validation code we're using on my current project for generic validation, so it better damn well work.  I'm pretty happy with this solution so far, and it's made the validation easy to implement and test.  I promise to make up for the absurd length of this post by taking about a week off from posting starting right now.
First, here are the preceding posts:
Preamble | The Humble Dialog Box | Supervising Controller | Passive View | Presentation Model | View to Presenter Communication | Answering some questions | What's the Model? | Assigning Responsibilities in a Model View Presenter Architecture
And for that matter, Bil Simser took a different tack on this exact subject last week.
Domain Centric Validation
As I said in the last post, it's highly advantageous to put the validation rules into the Model classes.  Just to recap, I'll rattle off four reasons why I want my validation rules in my Model classes:
Validation is mostly a business logic function anyway, and purely for the sake of cohesion it makes sense to put business rules into business classes.  As much as possible, I want related business rules in a single place rather than scattered about my code.
Since it's fairly common to have a single Model class edited in multiple screens, I want to reuse the validation logic consistently across these different screens.  More importantly, I want to eliminate duplicated logic between these screens.  It's going to be much easier to avoid duplication by placing this functionality in the Model than it would be to put it in each Presenter or View.  I know somebody is going to bring up the idea of drag n' drop validators, but even though those are easy to use, the potential duplication can easily become a tax -- especially as your validation rules change.
By decoupling the validation logic from the View and the backend, we've made these validation rules easy to test.  The easiest possible type of unit testing is simple state based testing.  Push a couple values into a single object and check the return value with no interaction with anything else.  Clear sailing ahead.
I'm going to contend that it's easier to read the validation logic.  The downsides of Rapid Application Development (RAD) approaches are widely known, but for me the worst part is that I think RAD designers and wizards obscure the meaning of code and camouflage the functionality in design time goo.
Ok, that's a lot of hot air about why we want to do domain centric validation, but how do I do it?  My current project is doing domain centric validation, and so far I'm thrilled with how it's turning out.  Here's our secret sauce (ok, it's not that secret because it's a documented design pattern):
The Notification Pattern
In doing validation on user input there are two main tasks:
Perform the validation and create meaningful validation messages.
Display these validation messages to the user on the screen in a meaningful way.  In WinForms development we've got the handy ErrorProvider class that probably meets the needs of most scenarios.
Let me emphasize this a little bit more: we have two separate concerns, validation rules and screen presentation of validation failures.  As always, we can make our development simpler by tackling one problem at a time.  The advantages in testability, reuse, and understandability I mentioned above are largely predicated on separating the validation logic from the screen machinery.
Assuming you've accepted my advice to separate the two tasks and make the validation logic completely independent of the presentation machinery, we can move on to the next question.  How can we marshal the validation messages to the View in a way that simplifies the presentation of validation messages?  In other words, how does a validation message get put into the screen at the right place?  There is a ready-made answer for us: just use the Notification pattern as a convenient way to package up validation messages.
From Martin Fowler, the Notification pattern is:
An object that collects together information about errors and other information in the domain layer and communicates it to the presentation.
Let's take a look at my project's Notification structure.  The first piece is a class called NotificationMessage (the code below is somewhat elided) that's simply a single validation error consisting of the message itself and the name of the Property on the Model that the message applies to:
public class NotificationMessage : IComparable
{
    private string _fieldName;
    private string _message;

    public NotificationMessage(string fieldName, string message)
    {
        _fieldName = fieldName;
        _message = message;
    }

    public string FieldName
    {
        get { return _fieldName; }
        set { _fieldName = value; }
    }

    public string Message
    {
        get { return _message; }
        set { _message = value; }
    }

    // The IComparable implementation was elided; sorting by field name,
    // then message, is assumed here so that Notification.AllMessages can Sort()
    public int CompareTo(object obj)
    {
        NotificationMessage other = (NotificationMessage) obj;
        int byField = string.Compare(_fieldName, other._fieldName);
        return byField != 0 ? byField : string.Compare(_message, other._message);
    }

    // Override the Equals method to make declarative testing easy
    public override bool Equals(object obj)
    {
        if (this == obj) return true;
        NotificationMessage notificationMessage = obj as NotificationMessage;
        if (notificationMessage == null) return false;
        return Equals(_fieldName, notificationMessage._fieldName) &&
               Equals(_message, notificationMessage._message);
    }

    // Keep GetHashCode consistent with the Equals override
    public override int GetHashCode()
    {
        return (_fieldName ?? string.Empty).GetHashCode() ^ (_message ?? string.Empty).GetHashCode();
    }

    // Override the ToString() method to create more descriptive messages within
    // xUnit testing assertions
    public override string ToString()
    {
        return string.Format("Field {0}:  {1}", _fieldName, _message);
    }
}
For reasons that will be clear in the section on consuming a Notification, it's very useful to tie the messages directly to the property names.  The second part is the Notification class itself.  It's not much more than a collection of NotificationMessage objects:
public class Notification
{
    public static readonly string REQUIRED_FIELD = "Required Field";
    public static readonly string INVALID_FORMAT = "Invalid Format";

    private List<NotificationMessage> _list = new List<NotificationMessage>();

    public bool IsValid()
    {
        return _list.Count == 0;
    }

    public void RegisterMessage(string fieldName, string message)
    {
        _list.Add(new NotificationMessage(fieldName, message));
    }

    public string[] GetMessages(string fieldName)
    {
        List<NotificationMessage> messages =
            _list.FindAll(delegate(NotificationMessage m) { return m.FieldName == fieldName; });

        string[] returnValue = new string[messages.Count];
        for (int i = 0; i < messages.Count; i++)
        {
            returnValue[i] = messages[i].Message;
        }

        return returnValue;
    }

    public NotificationMessage[] AllMessages
    {
        get
        {
            _list.Sort();
            return _list.ToArray();
        }
    }

    public bool HasMessage(string fieldName, string messageText)
    {
        NotificationMessage message = new NotificationMessage(fieldName, messageText);
        return _list.Contains(message);
    }
}
Easy enough, right?  You've got everything you need to do domain centric validation -- except for all the stuff around the Notification class that does all the real work.  Details.  Or as my Dad's construction crew will say after day one on a new project, "all we lack is finishing up."
Filling up the Notification
The first design goal is simply to come up with a way for a Model object to create Notification objects.  Most validation rules are very simple and take one of a handful of forms, so it's worth our while to make the implementation of these simple rules as quick and declarative as possible.  We also need to relate any and all validation messages to the proper field.  Since this is .Net, the easiest usage scenario is simple attributes that we can use to decorate property declarations.  Using attributes comes with the great advantage of quickly tying a validation rule to a particular property.
Using attributes always requires some sort of bootstrapping code that finds and acts on the attributes decorating the code, but I'm going to follow my own "Test Small Before Testing Big" rule.  Let's put off the overall validation calculation to tackle the very small goal of creating validation attributes.  Once we have the shape of the attributes and maybe the Model class down, the validation coordination code largely falls out.
Don't use the attributes as just markers though.  The actual validation logic should reside in the attribute itself to allow for easier expansion of our validation rules.  This is a good example of the Open/Closed Principle.  We can create all new validation logic by implementing a whole new ValidationAttribute without having to modify any existing code.  That's good stuff.  I think this is also an example of Craig Larman's Protected Variations concept.
All of our validation attributes inherit from the ValidationAttribute shown below:
[AttributeUsage(AttributeTargets.Property)]
public abstract class ValidationAttribute : Attribute
{
    private PropertyInfo _property;

    public void Validate(object target, Notification notification)
    {
        object rawValue = _property.GetValue(target, null);
        validate(target, rawValue, notification);
    }

    // Helper method to write validation messages to a Notification
    // object with the correct Property name
    protected void logMessage(Notification notification, string message)
    {
        notification.RegisterMessage(Property.Name, message);
    }

    // A Template Method for the actual validation logic
    protected abstract void validate(object target, object rawValue, Notification notification);

    // The Property of the targeted class that's being validated
    public PropertyInfo Property
    {
        get { return _property; }
        set { _property = value; }
    }

    public string PropertyName
    {
        get { return _property.Name; }
    }
}
Attributes don't find themselves, so we'll need to "push" the correct System.Reflection.PropertyInfo object to each ValidationAttribute object; that's why there's an open getter/setter for the PropertyInfo.  The validation will work by first finding all of the validation attributes, attaching the proper PropertyInfo to each, and then looping through each ValidationAttribute and calling the Validate(target, notification) method.  All the ValidationAttribute classes do is register messages on a Notification object, so there's absolutely no coupling to either the server or the user interface.
To actually build a specific ValidationAttribute you simply inherit from ValidationAttribute and override the validate(target, rawValue, notification) Template Method.  The simplest case, a simple required field validation, is shown below:
[AttributeUsage(AttributeTargets.Property)]
public class RequiredAttribute : ValidationAttribute
{
    protected override void validate(object target, object rawValue, Notification notification)
    {
        if (rawValue == null)
        {
            logMessage(notification, Notification.REQUIRED_FIELD);
        }
    }
}
In real life you'd have to enhance this a bit to handle primitive types that aren't nullable, but right now this is doing everything my project needs.  Using this little monster is as simple as marking a Property like this:
[Required]
public string Name
{
    get { return _name; }
    set { _name = value; }
}
or combine attributes:
[Required, GreaterThanZero]
public double? BaseAmount
{
    get { return _baseAmount; }
    set { _baseAmount = value; }
}
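The GreaterThanZero attribute used above isn't shown in this post.  As an illustration of the Open/Closed point from earlier, a plausible sketch that follows the same Template Method might look like this (the exact message text is an assumption):
using System;
[AttributeUsage(AttributeTargets.Property)]
public class GreaterThanZeroAttribute : ValidationAttribute
{
    protected override void validate(object target, object rawValue, Notification notification)
    {
        // Skip nulls; the Required attribute handles missing values
        if (rawValue == null) return;

        double value = Convert.ToDouble(rawValue);
        if (value <= 0)
        {
            logMessage(notification, "Must be greater than zero");
        }
    }
}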
Of course, some rules are going to be too complex or simply too rare to justify building an attribute (think about rules that only apply in a certain state or involve many fields).  To allow for these more complex cases, I chose to use a simple hook interface called IValidated.
public interface IValidated
{
    void Validate(Notification notification);
}
A Model class can implement this interface as a kind of hook to perform any sort of complex validation that just doesn't fit into the attribute-based validation.  There's a bit of a design point I'd like to make here.  This little IValidated interface is an example of the Tell, Don't Ask principle in action.  Rather than querying the attributes and the validated object to decide what rules to enforce, we can create far more variations and allow for far more extension by just telling the object being validated and each attribute class to add its messages to the Notification object.
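As a hypothetical example of that hook, a class might use IValidated for a cross-field rule that no single-property attribute covers, while still reporting through the same Notification.  The DateRange class, its properties, and the message text below are made up for illustration:
using System;
// Hypothetical sketch of a cross-field rule implemented through IValidated
public class DateRange : IValidated
{
    private DateTime? _startDate;
    private DateTime? _endDate;

    [Required]
    public DateTime? StartDate
    {
        get { return _startDate; }
        set { _startDate = value; }
    }

    [Required]
    public DateTime? EndDate
    {
        get { return _endDate; }
        set { _endDate = value; }
    }

    // A rule involving two fields that doesn't fit a single-property attribute
    public void Validate(Notification notification)
    {
        if (_startDate != null && _endDate != null && _endDate < _startDate)
        {
            notification.RegisterMessage("EndDate", "End Date must be on or after Start Date");
        }
    }
}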
A quick side note.  If you're going to build something like this for yourself, I'd recommend building the first couple of specific cases, then doing an Extract Superclass refactoring to "lift" the common template into a superclass.  Starting by designing the abstract class can sometimes lead into a black hole.  When I built this code I created the RequiredAttribute first, then used ReSharper to create the Supertype.
And yes, I do know about the Validation Application Block, but it's kind of fun and fairly easy to write our own to do exactly what we want.  Besides, I'm not sure how well the Validation Block plays in the Notification scenario.  My real inspiration here is the Notification-style validation baked into ActiveRecord in Ruby on Rails.  The same kind of thing that we did with attributes in C# is potentially more declarative in a Rails model.  Just coincidentally, Steve Eichert has a cool writeup on this subject at Create DDD style specifications in Ruby with ActiveSpec.
Putting the Notification Together
We've got the Notification class, some working ValidationAttribute classes, and the IValidated hook interface.  Now we need a Controller class of some kind to glue all of these pieces together.  With my nearly infinite creativity, I've named this class in our system "Validator."  The next task is simply to scan a Type and create a list of ValidationAttribute objects for each declared attribute.  That's done with this static method:
// Find all the ValidationAttributes for a given Type, and connect
// each ValidationAttribute to the correct Property
public static List<ValidationAttribute> FindAttributes(Type type)
{
    List<ValidationAttribute> atts = new List<ValidationAttribute>();

    foreach (PropertyInfo property in type.GetProperties())
    {
        Attribute[] attributes = Attribute.GetCustomAttributes(property, typeof (ValidationAttribute));
        foreach (ValidationAttribute attribute in attributes)
        {
            attribute.Property = property;
            atts.Add(attribute);
        }
    }

    return atts;
}
Once that method is unit tested, we can move on to the core task of validating an object.  That's accomplished with the method below.
public static Notification ValidateObject(object target)
{
    // Find all the ValidationAttributes attached to the properties
    // of the target Type
    List<ValidationAttribute> atts = scanType(target.GetType());

    Notification notification = new Notification();

    // Iterate through each ValidationAttribute and give each
    // a chance to add validation messages
    foreach (ValidationAttribute att in atts)
    {
        att.Validate(target, notification);
    }

    // And if the target implements IValidated, call the
    // Validate(Notification) method to do the special cases
    if (target is IValidated)
    {
        ((IValidated)target).Validate(notification);
    }

    return notification;
}
I've elided the code here for brevity, but the first thing ValidateObject(object) does is call its scanType(Type) method to find the list of ValidationAttributes declared for target.GetType().  Since Reflection isn't the fastest operation in the world, I do cache the list of ValidationAttribute objects for each Type on the first access for that Type.
Once we have the ValidationAttribute list, everything else just falls out.
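The scanType(Type) method itself isn't shown here, but a minimal sketch of what it might look like inside the Validator class follows; the static Dictionary cache and the simple lock are assumptions, not the actual project code.
// Hypothetical sketch of the elided scanType method: reflect over a Type
// only once, then serve the ValidationAttribute list from a cache
private static readonly Dictionary<Type, List<ValidationAttribute>> _cache =
    new Dictionary<Type, List<ValidationAttribute>>();

private static List<ValidationAttribute> scanType(Type type)
{
    lock (_cache)
    {
        if (!_cache.ContainsKey(type))
        {
            _cache[type] = FindAttributes(type);
        }

        return _cache[type];
    }
}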
Testing Validation Logic
One of the best things about this approach is how easy automated testing of the validation logic becomes.  Just create the class under test, pump in some property values, create a Notification from its current state, and finally compare the Notification object to the expected results.  One of Jeremy's Laws of TDD is to "go declarative whenever possible" in testing.  We're using Fit tests to perform declarative testing of our validation rules, and this is exactly the kind of scenario where Fit shines.  I will often embed bits of Fit tests directly into NUnit tests for developer tests, especially for set-based functionality.  This might not be very interesting to you unless you already use the FitnesseDotNet engine, but we create and define the state of one of our Model objects inside a Fit table, then turn around and check the validation messages produced for the input.  The Fit test itself looks something like this:
!|UserFixture|
|Create New|
|Name |Jeremy|
|Birthday |BLANK |
|PhoneNumber|BLANK |
|The Validation Messages Are|
|FieldName|Message |
|Name |Field Required|
|Birthday |Field Required|
The actual UserFixture class would inherit from DoFixture and provide methods to set each public property (we codegen these Fixture classes by reflecting over the targeted classes, i.e. the code generator creates a UserFixture class for User).  Once the desired state of the object is configured, the validation logic is tested by comparing the actual validation messages against the expected messages.  At the bottom of the Fit test above we simply declare that the expected validation messages are the values in the nested table.  The Fit engine has a facility called the RowFixture that makes this type of set comparison very easy.  The RowFixture to check the validation messages is triggered by this method on the UserFixture class:
public Fixture TheValidationMessagesForFieldAre(string fieldName)
{
    Notification notification = Validator.ValidateObject(CurrentTrade);
    string[] messages = notification.GetMessages(fieldName);
    return new StringArrayFixture(messages);
}
This section might not be all that useful to you if you're not familiar with the mechanics of Fit testing, but then again, you might want to go give Fit another try.  It's goofy alright, but it's very useful in the right spot.
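If Fit isn't your thing, the same state-based check is just as easy to express as a plain NUnit test.  This is a hypothetical sketch that assumes a User class with [Required] attributes on its Name and Birthday properties:
using NUnit.Framework;
[TestFixture]
public class UserValidationTester
{
    [Test]
    public void RegistersRequiredFieldMessagesForMissingValues()
    {
        // Assumed hypothetical User class with [Required] on Name and Birthday
        User user = new User();
        user.Name = "Jeremy";

        Notification notification = Validator.ValidateObject(user);

        // Purely state-based assertions against the Notification
        Assert.IsFalse(notification.IsValid());
        Assert.IsTrue(notification.HasMessage("Birthday", Notification.REQUIRED_FIELD));
    }
}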
Consuming Notifications
Alrighty then!  We've got ourselves a Notification object that relates all the user input failure messages to the relevant properties of the Model.  Now we've just got to get these messages to the right place on the screen.  In your best Church Lady voice -- who knows about both the field names and the controls that edit those fields?  Could it be....   ...Satan!  The data binding!  I'm not going to show it here because I suspect the code would be nasty, but you could do a recursive search through all the child Controls and check the DataBindings collection of each child control to see if it's bound to any property of the object being validated.
Alternatively, you could ditch the designer for data binding and put just a small Facade over the BindingDataSource class.  If you build just a tiny class that wraps BindingDataSource with something like "BindControlToProperty(control, propertyName)", you could capture a Dictionary of the controls by field name along the way, which would make attaching the errors much simpler.  One of the themes I was trying to get across with my WinForms patterns talk at DevTeach was that sometimes a code-centric solution can be much more efficient than the equivalent work with designers.
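Here's a minimal, hypothetical sketch of that kind of facade; the ScreenBinder name and the "bind everything to the Text property" shortcut are assumptions.  It records each bound control by property name so the Notification messages can later be routed to an ErrorProvider.
using System.Collections.Generic;
using System.Windows.Forms;
// Hypothetical sketch of a tiny code-centric binding facade.  It remembers
// which control edits which Model property so that Notification messages
// can be attached to the right control through an ErrorProvider.
public class ScreenBinder
{
    private readonly object _model;
    private readonly ErrorProvider _errorProvider;
    private readonly Dictionary<string, Control> _controlsByProperty = new Dictionary<string, Control>();

    public ScreenBinder(object model, ErrorProvider errorProvider)
    {
        _model = model;
        _errorProvider = errorProvider;
    }

    public void BindControlToProperty(Control control, string propertyName)
    {
        // Simple WinForms data binding of the control's Text to the Model property
        control.DataBindings.Add("Text", _model, propertyName);
        _controlsByProperty[propertyName] = control;
    }

    public void ShowErrorMessages(Notification notification)
    {
        foreach (KeyValuePair<string, Control> pair in _controlsByProperty)
        {
            string[] messages = notification.GetMessages(pair.Key);
            string error = messages.Length == 0 ? string.Empty : string.Join("; ", messages);
            _errorProvider.SetError(pair.Value, error);
        }
    }
}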
Or, if that sounds as nasty to you as it does to me, you could just write your own synchronization mechanism that bakes in the functionality to attach validation messages to controls.  If you thought I was nuts to suggest that mere mortals could write their own CAB analogue, you'll really flip out once I admit that I've written a replacement for WinForms data binding to do screen synchronization (so far the Crackpot idea is working out fairly well).  I'll give it a much better treatment in the post on MicroController, but for now, here's a sneak peek at the code we use to attach validation messages to the proper screen element with an ErrorProvider.
public void ShowErrorMessages(Notification notification)
{
    foreach (IScreenElement element in _elements)
    {
        string[] messages = notification.GetMessages(element.FieldName);
        element.SetErrors(messages);
    }
}
Each IScreenElement is what I call a "MicroController" that governs the behavior and binding of a single control element.  Each IScreenElement already "knows" the ErrorProvider for the form or control by this point, so it just needs to turn around and use the ErrorProvider to attach messages to its internal control.