Chapter 9. Writing real-world domain models – NHibernate in Action

Chapter 9. Writing real-world domain models

This chapter covers

  • Domain-model development processes
  • Legacy schema mapping
  • Understanding persistence ignorance
  • Implementing business logic
  • Data binding in the GUI
  • Obtaining DataSets from entities

Having read this book so far, you should be familiar with what a business entity looks like, what a domain model is, and roughly how a domain model is formed. Our examples have aimed to keep things simple, so we haven’t yet introduced you to the processes and techniques that will help you tackle real-world projects.

The first part of this chapter looks at the various starting points of an NHibernate project and then explains how you can leverage automation and code generation to help build the other layers. Until now, you’ve been implementing entities by hand. You can save much time by using the tools described here to automatically generate domain-model entities, database schemas, and even mapping definitions.

One particularly tricky starting point is a legacy database that you can’t change. Fortunately, this chapter explains many mapping techniques that are especially useful in that scenario.

Once we’re finished describing the processes and tools around NHibernate development, we’ll take a closer look at the domain model. Up to this point, this book’s examples have involved entities that contain only data. This allowed us to demonstrate how the mappings work for the purposes of saving and loading those entities. But the domain-model pattern encourages you to create a much more behavior-rich domain model that encapsulates business rules, validation, and workflow. Later in this chapter, you’ll discover how these things can be achieved.

Another aspect that we briefly touched on earlier is persistence ignorance: certain projects may require that the domain model has no awareness of NHibernate, instead focusing purely on business concerns. We look at what persistence ignorance means and how you can structure your projects to realize it with NHibernate.

By selecting and applying the techniques presented here, you’ll develop a fully functional domain model that is well suited to your needs. The next trick is to get the domain model to collaborate with the other layers, including the GUI. This will be the focus of the last two sections, which explain how entities can be consumed in the presentation layer and how to fill DataSets with the content of an entity to allow compatibility with many GUI and reporting components.

We’ll begin by discussing the possible starting points for an NHibernate project and the development processes that may follow.

9.1. Development processes and tools

In the earlier chapters, you always started by defining the domain model before creating the database and setting up your mapping. What if you already have a database in place, or even a mapping file? No rule says that things have to be done in any particular order, so we’ll present the different processes available and explain which projects they suit best.

You’ll find that once you’ve created either a database, a domain model, or mapping files, NHibernate provides tools that can be used to generate the other representations. Figure 9.1 shows the input and output of tools used for NHibernate development.

Figure 9.1. Development processes

Generally, you have to complete and customize the generated code, but the tools can give you a valuable head start. We’ll review these tools and processes in this section, starting with the top-down approach.

9.1.1. Top down: generating the mapping and the database from entities

The approach you’ve been using in this book is commonly called top-down development. This is the most comfortable development style when you’re starting a new project with no existing database to worry about.

Looking at figure 9.1, the starting point is the Plain Old CLR Object (POCO). When using this approach, you first build your .NET domain model, typically as POCOs. If you’ve used the NHibernate.Mapping.Attributes library to decorate your entities, you can use NHibernate to generate the mapping for you. Alternatively, you can manually write it using an XML editor, as demonstrated throughout this book.

With your entities and mapping file in place, you can let NHibernate’s hbm2ddl tool generate the database schema for you, using the mapping metadata. This tool is part of the NHibernate library. It isn’t a graphical tool; you access the features from your own code by calling methods on the NHibernate.Tool.hbm2ddl.SchemaExport class.

When you create your mapping with attributes or XML, you can add elements that help SchemaExport create a database schema to your liking. These are optional; without them, NHibernate will attempt to use sensible defaults when creating your database schema. If you decide to include extra mapping metadata, having the ability to override naming strategies, data types, column sizes, and so on can be useful. Sometimes it’s necessary, especially if you want your generated schema to follow house rules or your DBA’s requirements.


Using naming strategies was explained in section 3.4.7. This feature lets you change the way entities’ names are converted into table names.

We’ll now look at how you can prepare the mapping metadata to control database schema generation.

Preparing the Mapping Metadata

In this example, we’ve marked up the mapping for the Item class with hbm2ddl-specific attributes and elements. These optional definitions integrate seamlessly with the other mapping elements, as you can see in listing 9.1.

Listing 9.1. Additional elements in the Item mapping for SchemaExport
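A sketch of such a mapping, consistent with the description that follows (the column, constraint, and index names are assumptions, not the book’s exact listing):

```xml
<class name="Item" table="ITEM">
  <id name="Id" type="String">
    <!-- uuid.hex ids are always 32 characters, so a fixed CHAR(32) fits -->
    <column name="ITEM_ID" sql-type="CHAR(32)"/>
    <generator class="uuid.hex"/>
  </id>
  <property name="Name" type="String">
    <!-- nested <column> needed to attach an index -->
    <column name="NAME" not-null="true" length="255"
            index="IDX_ITEM_NAME"/>
  </property>
  <!-- attribute-only shortcut: generates VARCHAR(4000) -->
  <property name="Description" type="String"
            column="DESCRIPTION" length="4000"/>
  <property name="InitialPrice" type="MonetaryAmount">
    <!-- custom user type spanning two columns, with a check constraint -->
    <column name="INITIAL_PRICE" check="INITIAL_PRICE > 0"/>
    <column name="INITIAL_PRICE_CURRENCY"/>
  </property>
  <set name="Categories" table="CATEGORY_ITEM">
    <key>
      <!-- foreign key column matches the CHAR(32) identifier -->
      <column name="ITEM_ID" sql-type="CHAR(32)"/>
    </key>
    <many-to-many class="Category" column="CATEGORY_ID"/>
  </set>
</class>
```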

hbm2ddl automatically generates an NVARCHAR-typed column if a property (even the identifier property) is of mapping type String. Because the identifier generator uuid.hex always generates strings that are 32 characters long, you use a CHAR SQL type and fix its size at 32 characters. The nested <column> element is required for this declaration because there is no attribute to specify the SQL data type on the <id> element.

The column, not-null, and length attributes are also available on the <property> element; but because you want to create an additional index in the database, you again use a nested <column> element. This index will speed up your searches for items by name. If you reuse the same index name on other property mappings, you can create an index that includes multiple database columns. The value of this attribute is also used to name the index in the database catalog.

For the description field, we chose the lazy approach, using the attributes on the <property> element instead of a <column> element. The DESCRIPTION column will be generated as VARCHAR(4000).

The custom user-defined type MonetaryAmount requires two database columns to work with. You have to use the <column> element. The check attribute triggers the creation of a check constraint; the value in that column must match the given arbitrary SQL expression. Note that there is also a check attribute for the <class> element, which is useful for multicolumn check constraints.

A <column> element can also be used to declare the foreign key fields in an association mapping. Otherwise, the columns of your association table CATEGORY_ITEM would be NVARCHAR(32) instead of the more appropriate CHAR(32) type.

We’ve grouped all attributes relevant for schema generation in table 9.1; some of them weren’t included in the previous Item mapping example.

Table 9.1. XML mapping attributes for hbm2ddl

column
Usable in most mapping elements; declares the name of the SQL column. hbm2ddl (and NHibernate’s core) defaults to the name of the .NET property if the column attribute is omitted and no nested <column> element is present. You can change this behavior by implementing a custom INamingStrategy; see section 3.4.7.

not-null
Forces the generation of a NOT NULL column constraint. Available as an attribute on most mapping elements and also on the dedicated <column> element.

unique
Forces the generation of a single-column UNIQUE constraint. Available for various mapping elements.

length
Can be used to define a “length” of a data type. For example, length="4000" for a string-mapped property generates an NVARCHAR(4000) column. Also used to define the precision of decimal types.

index
Defines the name of a database index that can be shared by multiple elements. An index on a single column is also possible. Only available with the <column> element.

unique-key
Enables unique constraints involving multiple database columns. All elements using this attribute must share the same constraint name to be part of a single constraint definition. A <column>-element-only attribute.

sql-type
Overrides hbm2ddl’s automatic detection of the SQL data type; useful for database-specific data types. Be aware that this effectively prevents database independence: hbm2ddl will automatically generate a VARCHAR or VARCHAR2 (for Oracle), but it will always use a declared sql-type instead, if present. Can only be used with the dedicated <column> element.

foreign-key
Names a foreign-key constraint; available for <many-to-one>, <one-to-one>, <key>, and <many-to-many> mapping elements. Note that inverse="true" sides of an association mapping aren’t considered for foreign-key naming—only the noninverse side. If no names are provided, NHibernate generates unique random names.

After you’ve reviewed (probably together with a DBA) your mapping files and added schema-related attributes, you can create the schema.

Creating the Schema

The hbm2ddl tool is driven through an instance of the class SchemaExport. Here’s an example:

Configuration cfg = new Configuration();
cfg.Configure(); // load the mappings and connection settings
SchemaExport schemaExport = new SchemaExport(cfg);
schemaExport.Create(false, true); // script: false, export: true

This example creates and initializes an NHibernate configuration. Then it creates an instance of SchemaExport that uses the mapping and database-connection properties of the configuration to generate and execute the SQL commands that create the tables of the database.

Here is the public interface of this class, with a brief description of each method:
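A sketch of that interface (method set based on the NHibernate 1.2-era API; check the API documentation for your version):

```csharp
public class SchemaExport
{
    public SchemaExport(Configuration cfg);

    // Redirects the generated script to a file
    public SchemaExport SetOutputFile(string filename);
    // Sets the delimiter appended to each generated statement
    public SchemaExport SetDelimiter(string delimiter);

    // Generates the CREATE statements, optionally printing and executing them
    public void Create(bool script, bool export);
    // Generates the DROP statements
    public void Drop(bool script, bool export);
    // Full control over generation and execution; see table 9.2
    public void Execute(bool script, bool export, bool justDrop, bool format);
}
```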

Table 9.2 explains the meaning of the parameters of the Execute() methods.

Table 9.2. hbm2ddl.SchemaExport.Execute() parameters

script
Outputs the generated script to the console

export
Executes the generated script against the database

justDrop
Only drops the tables and cleans the database

format
Formats the generated script nicely instead of using one row for each statement

connection
Specifies the opened database connection to use when export is true

exportOutput
Outputs the generated script to this writer

This tool is indispensable when you’re applying TDD (explained in section 8.1.1) because it frees you from manually modifying the database whenever the mapping changes. All you have to do is call it before running your tests, and you’ll get a fresh, up-to-date database to work on. Note that it’s also available as an NAnt task: NHibernate.Tasks.Hbm2DdlTask. For more details, read its API documentation.
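As a sketch, a test fixture might rebuild the schema before each test (NUnit and a hibernate.cfg.xml file are assumed):

```csharp
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;
using NUnit.Framework;

[TestFixture]
public class PersistenceFixture
{
    [SetUp]
    public void RecreateSchema()
    {
        Configuration cfg = new Configuration();
        cfg.Configure(); // reads hibernate.cfg.xml
        // Drops and re-creates every mapped table,
        // so each test starts from a clean, up-to-date database
        new SchemaExport(cfg).Create(false, true);
    }
}
```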

Using this tool throughout a project requires some thought, because the database is re-created each time—which scraps any data. We’ll describe some workarounds in section 9.1.4.

Executing Arbitrary SQL During Database Generation

If you need to execute arbitrary SQL statements when generating your database, you can add them to your mapping document. This is especially useful to create triggers and stored procedures used in the mapping.

You write these statements in <database-object> elements. Statements in the <create> sub-element are executed when creating the database; statements in the <drop> sub-element are executed when dropping it.

Here’s an example:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <database-object>
    <create>CREATE PROCEDURE ...</create>
    <drop>DROP PROCEDURE ...</drop>
    <dialect-scope name="NHibernate.Dialect.MsSql2005Dialect"/>
    <dialect-scope name="NHibernate.Dialect.MsSql2000Dialect"/>
  </database-object>
</hibernate-mapping>

This example provides the code required to create and drop the stored procedure used at the end of section 7.6.2.

Because these SQL statements can be dialect-dependent, it’s also possible to use <dialect-scope> to specify the dialect for which they must be executed. In the previous example, the code is executed only on SQL Server databases.

The top-down approach is a comfortable route for many developers, especially when working on a new project with no existing database in place. We now look at the middle-out approach, where you generate entities from the mapping file.

9.1.2. Middle out: generating entities from the mapping

Referring back to figure 9.1, this approach starts in the middle box: the mapping documents. These provide sufficient information to deduce the DDL schema and to generate working POCOs for your domain model. NHibernate has tools to support this middle-out development, so you can take your handwritten NHibernate mapping documents and generate the DDL using hbm2ddl, and you can generate the .NET domain-model code using code-generation tools. This approach is appealing primarily when you’re migrating from existing NHibernate mappings (used by a .NET application).

When it comes to generating entities from your mapping documents, NHibernate provides a tool called hbm2net. It’s similar to hbm2ddl, but it’s available as a separate library along with a console application (NHibernate.Tool.hbm2net.Console) and an NAnt task (NHibernate.Tasks.Hbm2NetTask).

Before using this tool, you must make sure your mapping provides all the required information, such as the properties’ types. Then you can execute it using its CodeGenerator class:

string[] args = new string[] {
    "--config=hbm2net.config", "--output=DomainModel", "*.hbm.xml" };
NHibernate.Tool.hbm2net.CodeGenerator.Main(args);

Here’s the equivalent using its console application:

NHibernate.Tool.hbm2net.Console.exe --config=hbm2net.config --output=DomainModel *.hbm.xml

This code generates a C# class for each mapping document in the current directory and saves it in the DomainModel directory. The content of the hbm2net.config file looks like this:

<?xml version="1.0" ?>
<codegen>
  <meta attribute="implements">IAuditable</meta>
  <generate renderer="NHibernate.Tool.hbm2net.BasicRenderer"/>
  <generate renderer="NHibernate.Tool.hbm2net.FinderRenderer"
            suffix="Finder" />
</codegen>

The generated C# classes inherit from the specified IAuditable interface. Renderers are used to generate specific parts of the C# class; there is even a VelocityRenderer, based on the NVelocity library, which allows you to use a template. Refer to their API documentation for more details.

Note that this tool isn’t as complete as Hibernate’s hbm2java; refer to the latter’s documentation for more details.

Most .NET developers feel more comfortable using top-down development with an attribute library like NHibernate.Mapping.Attributes, which gives maximum control; or, they prefer to use bottom-up development when there is an existing data model.

9.1.3. Bottom up: generating the mapping and the entities from the database

Bottom-up development begins with an existing database schema and data model. It’s depicted as the Database Schema box in figure 9.1. In this case, you use code-generation tools to generate the mapping files and .NET code from the metadata of the database schema.

Several code-generation tools are able to generate NHibernate mapping documents and skeletal POCO persistent classes (data containers with fields and simple implementations of properties, but no logic) from the metadata of a database schema.

What if I have an existing database and an existing class model?

We call this the meet-in-the-middle approach. It isn’t shown in figure 9.1, but essentially you have an existing set of .NET classes and an existing database schema. As you can imagine, it’s hard to map arbitrary domain models to a given schema, so this should be attempted only if absolutely necessary.

The meet-in-the-middle scenario usually requires at least some refactoring of the .NET classes, database schema, or both. The mapping document must almost certainly be written by hand (although it’s possible to use NHibernate.Mapping.Attributes). This is an incredibly painful scenario that is, fortunately, exceedingly rare.

If you try to use this scenario, don’t hesitate to take full advantage of the numerous extension interfaces of NHibernate. They were introduced in section 2.2.4.

You’ll usually have to enhance and modify the generated NHibernate mapping by hand, because not all class association details and .NET-specific meta-information can be automatically generated from a SQL schema.

Is it a bad thing to write anemic domain models?

As explained in section 8.1.3, a domain model is made of data and behavior. When using a simplistic code generator, you may be tempted to write your domain model as a data container and move all the behavior to other layers. In this case, your domain model is said to be anemic.

This isn’t necessarily a bad thing. This approach may work well for simple applications. But it goes against the basic idea of object-oriented design. The behavior may not be correctly represented in other layers, leading to code duplication and other issues. Worse, it may end up in the wrong layers (such as the presentation layer).

This type of code generation is generally template-based: you write a template describing your mapping and POCO with placeholders (for the names, types, and so on). The code generator executes this template for each table in the database. This process is intuitive. It’s even possible to preserve hand-written regions of code that aren’t overwritten when regenerating the classes. Or even better, use partial classes to separate generated code from your hand-written code.
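The partial-class split can look like this (a sketch; the Customer entity and file names are hypothetical):

```csharp
// Customer.generated.cs: overwritten on every run of the code generator
public partial class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

// Customer.cs: hand-written behavior, never touched by the generator
public partial class Customer
{
    public virtual bool HasName()
    {
        return !string.IsNullOrEmpty(Name);
    }
}
```

Because the compiler merges both halves into one class, regenerating the first file never destroys the hand-written logic in the second.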

9.1.4. Automatic database schema maintenance

Once you’ve deployed an application and its database for the first time, you’re challenged with rolling out future changes as new versions of your software are released. This is often difficult when working with databases because you need to determine how to safely roll out all the schema changes made during development without losing data (or too much sleep). How do you know which database changes were made during development? And how can you safely apply these to live databases?

Updating Live Databases

NHibernate comes bundled with a tool called SchemaUpdate. It’s used to modify an existing SQL database schema, dropping obsolete objects and adding new ones as needed. At the time of writing, SchemaUpdate isn’t ready for use against production databases. It can potentially delete data, and it doesn’t support more advanced features such as data transformations and safe column renaming. But the tool is useful during development. It can be great for keeping development and test databases in sync with your domain model, and it’s faster than using SchemaExport, which creates your database from scratch each time. We want to discuss ways of automatically maintaining live databases, so let’s look at some other options.

One option is to use a commercial product that can compare live and development database schemas and then generate SQL commands to safely migrate between the two. Some tools can also handle data migration. A few recommended commercial tools include Red Gate’s SQL Compare and SQL Data Compare, SQL Delta, and Microsoft’s database professional tools.

If you don’t want to take this route, a simple solution is to manually write a SQL migration script as you develop the application and database. In this script, you keep a log of each command used to tweak the database during development. At deployment time, you can then run this migration script against live databases to apply all the updates. This approach has its drawbacks: it isn’t portable across databases, and despite being simple, it’s tedious; we don’t recommend it.

Perhaps one of the best approaches to handling schema changes during both development and live deployment is to use a dedicated migrations library. Migrator is one open source example, as is LiquiBase; another is the migrations feature built into Ruby on Rails, which some people are also using with .NET. Others may also be available.

Database migrations libraries work on the principle that, each time you want to change your development schema, you do so using the migrations library. It automatically keeps track of the changes so they can be applied to any database to bring it up to date.

Here’s a simple example:

DatabaseSystem.AddUpdate(1.0, 1.1, new string[] {"SQL statements..."});

When a database must be updated, this system will read its current version and only execute the changes done since the last update.
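To make the principle concrete, here’s a hypothetical in-memory sketch of such a runner (not a real library; real tools record the current version in a table in the target database and execute the SQL against it):

```csharp
using System.Collections.Generic;
using System.Linq;

public class MigrationRunner
{
    private readonly SortedDictionary<int, string[]> migrations =
        new SortedDictionary<int, string[]>();

    // In a real tool, this is read from (and written back to) the database
    public int CurrentVersion { get; private set; }

    public void AddUpdate(int version, string[] sqlStatements)
    {
        migrations[version] = sqlStatements;
    }

    // Applies only the changes made since the database was last updated
    public IList<string> UpdateTo(int targetVersion)
    {
        List<string> executed = new List<string>();
        foreach (var step in migrations.Where(
                     m => m.Key > CurrentVersion && m.Key <= targetVersion))
        {
            executed.AddRange(step.Value); // a real tool runs these against the database
            CurrentVersion = step.Key;     // and persists the new version
        }
        return executed;
    }
}
```

A database at version 2 being updated to version 5 would receive only the statements registered for versions 3 through 5.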

These tools usually support rolling back to previous versions. Exploring these tools in full is beyond the scope of this book, but we strongly recommend that you look into them, starting with the ones we’ve mentioned here.

Development Databases

Schema-maintenance problems also occur during development, even before you roll out to any live databases. A common scenario involves lots of test data, which you want to be sure is inserted correctly each time you change the schema.

Migrations libraries are an excellent choice for achieving this, but you may have decided to generate your database schema using hbm2ddl rather than a separate library. With hbm2ddl, you can drop and re-create your test databases regularly during development, and you don’t have to worry about adding, renaming, or removing things—the schema is built from scratch whenever needed. But how do you insert test data each time?

One option is to keep a bunch of SQL scripts that insert the test data into the database and run them each time you re-create the database. The downside is that you’ll need to update these scripts to match each change of schema, which can be time consuming if you have thousands of insert statements.

Another option is to have a .NET program that creates entities for test purposes and then persists them to the database using NHibernate. Effectively, you’re replacing the SQL script with a .NET application. One benefit is that you can lean on refactoring tools to handle changes to class properties, and you won’t have to edit cumbersome SQL scripts manually. The ObjectMother pattern lends itself well to this approach: you have an object dedicated to creating test and reference data that can be used by several tests.
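As a sketch, an ObjectMother is just a class that centralizes the creation of canonical test entities (the class and method names here are hypothetical):

```csharp
// ObjectMother: one place that knows how to build well-known test entities
public static class TestObjectMother
{
    public static User CreateJohn()
    {
        User user = new User();
        user.Username = "john";
        user.Firstname = "John";
        user.Lastname = "Doe";
        return user;
    }
}
```

A test setup can then re-create the schema with hbm2ddl and persist TestObjectMother.CreateJohn() via an NHibernate session, instead of maintaining hand-written INSERT scripts.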

So far, this chapter has focused on top-down, middle-out, and bottom-up approaches to developing your application with NHibernate, letting you start with an existing domain model, some mapping files, or an existing database schema. We’ve also introduced you to the concept of migrations and how they can help you manage your evolving database throughout the development process. We’ll now look more closely at the bottom-up scenario discussed in section 9.1.3, in which you start the project with an existing database schema. In particular, we’ll focus on legacy databases whose schema you’re often unable to change to fit your needs. This scenario comes with its own set of problems, so we’ll explain how you can tackle some of them when working with legacy schemas.

9.2. Legacy schemas

Some data requires special treatment in addition to the general principles we’ve discussed in the rest of the book. In this section, we’ll describe important kinds of data that introduce extra complexity into your NHibernate code.

When your application inherits an existing legacy database schema, you should make as few changes to the existing schema as possible. Every change you make can break other existing applications that access the database and require expensive migration of existing data. In general, it isn’t possible to build a new application and make no changes to the existing data model—a new application usually means additional business requirements that naturally require evolution of the database schema.

We’ll therefore consider two types of problems: problems that relate to changing business requirements (which generally can’t be solved without schema changes) and problems that relate only to how you wish to represent the same business problem in your new application (which can usually—but not always—be solved without database schema changes). You can usually spot the first kind of problem by looking at the logical data model. The second type more often relates to the implementation of the logical data model as a physical database schema.

If you accept this observation, you’ll see that the kinds of problems that require schema changes are those that call for addition of new entities, refactoring of existing entities, addition of new attributes to existing entities, and modification of the associations between entities. The problems that can be solved without schema changes usually involve inconvenient column definitions for a particular entity.

Let’s now concentrate on the second kind of problem. These inconvenient column definitions most commonly fall into two categories:

  • Use of natural (especially composite) keys
  • Inconvenient column types

We’ve mentioned that we think natural primary keys are a bad idea. Natural keys often make it difficult to refactor the data model when business requirements change, and in extreme cases they may even hurt performance. Unfortunately, many legacy schemas use (natural) composite keys heavily, and it may be prohibitively difficult to change a legacy schema to use surrogate keys. Therefore, NHibernate supports the use of natural keys. If the natural key is a composite key, support is via the <composite-id> mapping.

The second category of problems can usually be solved using a custom NHibernate mapping type (implementing the interface IUserType or ICompositeUserType), as described in chapter 7.

Let’s look at some examples that illustrate the solutions for both problems. We’ll start with natural key mappings.

9.2.1. Mapping a table with a natural key

Your USER table has a synthetic primary key, USER_ID, and a unique key constraint on USERNAME. Here’s a portion of the NHibernate mapping:

<class name="User" table="USER">
  <id name="Id" column="USER_ID" unsaved-value="0">
    <generator class="native"/>
  </id>
  <version name="Version" column="VERSION"/>
  <property name="Username" column="USERNAME"
            unique="true" not-null="true"/>
  ...
</class>

Notice that a synthetic identifier mapping may specify an unsaved-value, allowing NHibernate to determine whether an instance is a detached instance or a new transient instance. Hence, the following code snippet may be used to create a new persistent user:

User user = new User();
user.Username = "john";
user.Firstname = "John";
user.Lastname = "Doe";
session.SaveOrUpdate(user); // generates the id value as a side effect
System.Console.WriteLine( session.GetIdentifier(user) ); // prints 1

If you encounter a USER table in a legacy schema, USERNAME is probably the primary key. In this case, you have no synthetic identifier; instead, you use the assigned identifier generator strategy to indicate to NHibernate that the identifier is a natural key assigned by the application before the object is saved:

<class name="User" table="USER">
  <id name="Username" column="USERNAME">
    <generator class="assigned"/>
  </id>
  <version name="Version" column="VERSION"
           unsaved-value="negative"/>
  ...
</class>

You can no longer take advantage of the unsaved-value attribute in the <id> mapping. An assigned identifier can’t be used to determine whether an instance is detached or transient—because it’s assigned by the application. Instead, you specify an unsaved-value mapping for the <version> property. Doing so achieves the same effect by essentially the same mechanism. The code to save a new User isn’t changed:
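```csharp
User user = new User();
user.Username = "john"; // the natural key, assigned by the application
user.Firstname = "John";
user.Lastname = "Doe";
// Inserts, because the Version property still holds its unsaved value
session.SaveOrUpdate(user);
```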

But you have to change the declaration of the version property in the User class to assign the value -1 (private int version = -1).

If a class with a natural key doesn’t declare a version or timestamp property, it’s more difficult to get SaveOrUpdate() and cascades to work correctly. You can use a custom NHibernate IInterceptor, as discussed later in this chapter. (On the other hand, if you’re happy to use explicit Save() and explicit Update() instead of SaveOrUpdate() and cascades, NHibernate doesn’t need to be able to distinguish between transient and detached instances, and you can safely ignore this advice.)

Composite natural keys extend the same ideas.

9.2.2. Mapping a table with a composite key

As far as NHibernate is concerned, a composite key may be handled as an assigned identifier of value type (the NHibernate type is a component). Suppose the primary key of your USER table consisted of USERNAME and ORGANIZATION_ID. You could add a property named OrganizationId to the User class:

public class User {
    [KeyProperty(1, Name="Username", Column="USERNAME")]
    [KeyProperty(2, Name="OrganizationId", Column="ORGANIZATION_ID")]
    public string Username { ... }
    public int OrganizationId { ... }

    [Version(Column="VERSION", UnsavedValue="0")]
    public int Version { ... }
}

Here is the corresponding XML mapping:

<class name="User" table="USER">
  <composite-id>
    <key-property name="Username"
                  column="USERNAME" />
    <key-property name="OrganizationId"
                  column="ORGANIZATION_ID" />
  </composite-id>
  <version name="Version"
           column="VERSION" unsaved-value="0" />
  ...
</class>

The code to save a new User would look like this:

User user = new User();
user.Username = "john";
user.OrganizationId = 37;
user.Firstname = "John";
user.Lastname = "Doe";
session.SaveOrUpdate(user); // will save, since version is 0

But what object could you use as the identifier when you called Load() or Get()? It’s possible to use an instance of the User:

User user = new User();
user.Username = "john";
user.OrganizationId = 37;
session.Load(user, user);

In this code snippet, User acts as its own identifier class. Note that you now have to implement Equals() and GetHashCode() for this class (and make it Serializable). You can avoid this by using a separate class as the identifier.

Using a Composite Identifier Class

It’s much more elegant to define a separate composite identifier class that declares just the key properties. Let’s call this class UserId:

[Serializable]
public class UserId {
    private string username;
    private int organizationId;

    public UserId(string username, int organizationId) {
        this.username = username;
        this.organizationId = organizationId;
    }

    public string Username { get { return username; } }
    public int OrganizationId { get { return organizationId; } }

    public override bool Equals(object o) {
        if (o == null) return false;
        if (object.ReferenceEquals(this, o)) return true;
        UserId userId = o as UserId;
        if (userId == null) return false;
        if (organizationId != userId.OrganizationId) return false;
        if (username != userId.Username) return false;
        return true;
    }

    public override int GetHashCode() {
        return username.GetHashCode() + 27 * organizationId.GetHashCode();
    }
}

It’s critical that you implement Equals() and GetHashCode() correctly, because NHibernate uses these methods to do cache lookups. Furthermore, the hash code must be consistent over time. This means that if the column USERNAME is case insensitive, it must be normalized (to uppercase/lowercase strings). Composite key classes are also expected to be Serializable.

Now you’d remove the Username and OrganizationId properties from User and add a UserId property. You’d use the following mapping:

<class name="User" table="USER">
  <composite-id name="UserId" class="UserId">
    <key-property name="Username"
                  column="USERNAME" />
    <key-property name="OrganizationId"
                  column="ORGANIZATION_ID" />
  </composite-id>
  <version name="Version"
           column="VERSION" unsaved-value="0" />
  ...
</class>

You could save a new instance using this code:

User user = new User();
user.UserId = new UserId("john", 42);
user.Firstname = "John";
user.Lastname = "Doe";
session.SaveOrUpdate(user); // will save, since version is 0

The following code shows how to load an instance:

UserId id = new UserId("john", 42);
User user = (User) session.Load(typeof(User), id);

Now, suppose ORGANIZATION_ID was a foreign key to the ORGANIZATION table, and that you wished to represent this association in your C# model. Our recommended way to do this would be to use a <many-to-one> association mapped with insert="false" update="false", as follows:

<class name="User" table="USER">
  <composite-id name="UserId" class="UserId">
    <key-property name="Username"
                  column="USERNAME" />
    <key-property name="OrganizationId"
                  column="ORGANIZATION_ID" />
  </composite-id>
  <version name="Version"
           column="VERSION" unsaved-value="0" />
  <many-to-one name="Organization"
               column="ORGANIZATION_ID"
               insert="false" update="false"/>
  ...
</class>

This use of insert="false" update="false" tells NHibernate to ignore that property when updating or inserting a User, but you may of course read it with john.Organization.

An alternative approach would be to use a <key-many-to-one>:

<class name="User" table="USER">
    <composite-id name="UserId" class="UserId">
        <key-property name="UserName" column="USERNAME"/>
        <key-many-to-one name="Organization"
            column="ORGANIZATION_ID" class="Organization"/>
    </composite-id>
    <version name="Version" column="VERSION"/>
    ...
</class>

But it’s usually inconvenient to have an association in a composite identifier class, so this approach isn’t recommended except in special circumstances.

Referencing an Entity with a Composite Key

Because USER has a composite primary key, any referencing foreign key is also composite. For example, the association from Item to User (the seller) is now mapped to a composite foreign key. To our relief, NHibernate can hide this detail from the C# code. You can use the following association mapping for Item:

<many-to-one name="Seller" class="User">
    <column name="USERNAME"/>
    <column name="ORGANIZATION_ID"/>
</many-to-one>

Any collection owned by the User class will also have a composite foreign key—for example, the inverse association, Items, sold by this user:

<set name="Items" lazy="true" inverse="true">
    <key>
        <column name="USERNAME"/>
        <column name="ORGANIZATION_ID"/>
    </key>
    <one-to-many class="Item"/>
</set>

Note that the order in which columns are listed is significant and should match the order in which they appear inside the <composite-id> element.

Let’s turn to our second legacy schema problem: inconvenient columns.

9.2.3. Using a custom type to map legacy columns

The phrase inconvenient column type covers a broad range of problems: for example, use of the CHAR (instead of VARCHAR) column type, use of a VARCHAR column to represent numeric data, and use of a special value instead of a SQL NULL. It’s straightforward to use an IUserType implementation to handle legacy CHAR values (by trimming the string returned by the ADO.NET data reader), to perform type conversions between numeric and string data types, or to convert special values to a C# null. We won’t show code for any of these common problems; we’ll leave that to you—they’re all easy if you study section 6.1, “Creating custom mapping types,” carefully.
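As an illustration of the last two cases, the conversion logic at the heart of such a custom type can be as small as the following. The class, method names, and the "N/A" sentinel value are invented for this sketch; in a real IUserType these conversions would live in NullSafeGet() and NullSafeSet():

```csharp
using System;

public static class LegacyNullConversion {
    private const string Sentinel = "N/A";  // hypothetical legacy "no value" marker

    // Reading direction: trim CHAR padding and map the sentinel to a C# null.
    public static string FromDb(string dbValue) {
        if (dbValue == null) return null;
        string trimmed = dbValue.TrimEnd();
        return trimmed == Sentinel ? null : trimmed;
    }

    // Writing direction: map a C# null back to the legacy sentinel value.
    public static string ToDb(string value) {
        return value == null ? Sentinel : value;
    }
}
```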

We’ll look at a slightly more interesting problem. So far, your User class has two properties to represent a user’s names: Firstname and Lastname. As soon as you add an Initial property, your User class will become messy. Thanks to NHibernate’s component support, you can easily improve your model with a single Name property of a new Name C# type (which encapsulates the details).

Also suppose that the database includes a single NAME column. You need to map the concatenation of three different properties of Name to one column. The following implementation of IUserType demonstrates how this can be accomplished (we make the simplifying assumption that the Initial is never null):

public class NameUserType : IUserType {

    private static readonly NHibernate.SqlTypes.SqlType[] SQL_TYPES =
        { NHibernateUtil.AnsiString.SqlType };

    public NHibernate.SqlTypes.SqlType[] SqlTypes {
        get { return SQL_TYPES; }
    }

    public Type ReturnedType {
        get { return typeof(Name); }
    }

    public bool IsMutable {
        get { return true; }
    }

    public object DeepCopy(object value) {
        Name name = (Name) value;
        return new Name(name.Firstname, name.Initial, name.Lastname);
    }

    public new bool Equals(object x, object y) {
        // use the Equals() implementation of the Name class
        return x == null ? y == null : x.Equals(y);
    }

    public object NullSafeGet(IDataReader dr, string[] names, object owner) {
        string dbName =
            (string) NHibernateUtil.AnsiString.NullSafeGet(dr, names);
        if (dbName == null) return null;
        string[] tokens = dbName.Split();
        Name realName = new Name( tokens[0], tokens[1], tokens[2] );
        return realName;
    }

    public void NullSafeSet(IDbCommand cmd, object obj, int index) {
        Name name = (Name) obj;
        string nameString = (name == null) ?
            null :
            name.Firstname
            + ' ' + name.Initial
            + ' ' + name.Lastname;
        NHibernateUtil.AnsiString.NullSafeSet(cmd, nameString, index);
    }
}

Notice that this implementation delegates to one of the NHibernate built-in types for some functionality. This is a common pattern, but it isn’t a requirement.

We hope you can now see how many different kinds of problems having to do with inconvenient column definitions can be solved by clever use of NHibernate custom types. Remember that every time NHibernate reads data from an ADO.NET IDataReader or writes data to an ADO.NET IDbCommand, it goes via an IType. In almost every case, that IType can be a custom type. (This includes associations—an NHibernate ManyToOneType, for example, delegates to the identifier type of the associated class, which may be a custom type.)

One further problem often arises in the context of working with legacy data: integrating database triggers.

9.2.4. Working with triggers

There are some reasonable motivations for using triggers even in a brand-new database; legacy data isn’t the only context in which problems arise. Triggers and ORM are often a problematic combination. It’s difficult to synchronize the effect of a trigger with the in-memory representation of the data.

Suppose the ITEM table has a CREATED column mapped to a Created property of type DateTime, which is initialized by an insert trigger. The following mapping is appropriate:

<property name="Created"
    column="CREATED"
    insert="false"
    update="false"/>

Notice that you map this property with insert="false" and update="false" to indicate that it isn’t to be included in SQL INSERTs or UPDATEs.

After saving a new Item, NHibernate won’t be aware of the value assigned to this column by the trigger, because the value is assigned after the INSERT of the item row. If you need to use the value in your application, you have to tell NHibernate explicitly to reload the object with a new SQL SELECT:

Item item = new Item();
// ... initialize the item's properties
session.Save(item);
session.Flush();        // force the INSERT, so the trigger fires
session.Refresh(item);  // re-SELECT the row; Created now holds the trigger-assigned value

Most problems involving triggers may be solved this way, using an explicit Flush() to force immediate execution of the trigger, perhaps followed by a call to Refresh() to retrieve the result of the trigger.

You should be aware of one special problem when you’re using detached objects with a database with triggers. Because no snapshot is available when a detached object is reassociated with a session using Update() or SaveOrUpdate(), NHibernate may execute unnecessary SQL UPDATE statements to ensure that the database state is completely synchronized with the session state. This may cause an UPDATE trigger to fire inconveniently. You can avoid this behavior by enabling select-before-update in the mapping for the class that is persisted to the table with the trigger. If the ITEM table has an update trigger, you can use the following mapping:

<class name="Item"
    table="ITEM"
    select-before-update="true">
    ...
</class>

This setting forces NHibernate to retrieve a snapshot of the current database state using a SQL SELECT, enabling the subsequent UPDATE to be avoided if the state of the in-memory Item is the same.

Let’s summarize our discussion of legacy data models. NHibernate offers several strategies to deal with (natural) composite keys and inconvenient columns. But our recommendation is that you carefully examine whether a schema change is possible. In our experience, many developers immediately dismiss database schema changes as too complex and time consuming, and they look for an NHibernate solution. Sometimes this opinion isn’t justified, and we urge you to consider schema evolution as a natural part of your data’s lifecycle. If making table changes and exporting/importing data solves the problem, one day of work may save you many days in the long run—when workarounds and special cases become a burden.

Now that you’re finished developing and mapping the data side of the domain model, it’s time to dig into its behavior: specifically, how much it’s supposed to know about persistence.

9.3. Understanding persistence ignorance

In the description of the layers of an NHibernate application (section 8.1.3), we highlighted the fact that the domain model shouldn’t depend on any other layer or service (although this isn’t a strict rule). This is important because it influences its portability; the less coupling an entity has, the easier it is to modify, test, and reuse.

This recommendation leads to the notion of persistence ignorance (PI). A persistence-ignorant entity has no knowledge of the way it’s persisted (it doesn’t even know that it can be persisted). Practically speaking, the entity doesn’t have methods like Save() or static (factory) methods like Load(), and it doesn’t have any reference to the persistence layer. This is already the case for the entities you’ve been writing in this book.

Going one step further, we can also say that entities shouldn’t have Identifier and Version properties. The argument is that primary keys and optimistic control have nothing to do with the business domain, and therefore don’t belong in the domain model. We usually wouldn’t go this far; the convenience of having these properties far outweighs the slight “pollution” of the domain model they introduce.

Note that PI isn’t a requirement for all solutions, and you may find it easier to develop solutions without it. However, we do consider PI a good thing to strive for as it creates a less coupled, more testable and maintainable domain model. It’s particularly useful when the domain model explicitly requires portability and flexibility.

Now let’s see how you can implement an entity that is as free as possible of persistence-related code while still being functional and simple.

9.3.1. Abstracting persistence-related code

A common compromise, at the level of persistence awareness, is to separate persistence-related code from the business code in the implementation of an entity. This can be as trivial as performing a visual separation using a #region in your code, to help improve readability. Another option is to create an abstract base class for each entity so that persistent code is separated.

Let’s look at how you can implement the latter solution. You’ll use NHibernate.Mapping.Attributes because it lets the base class abstract the mapping information along with the code. You’ll see that the end result can be acceptable as long as you don’t mind inheriting from this base class (if you do mind, copy the content of this class in your entities). Note that this implementation presents many independent ideas and patterns; feel free to extract some of them for your applications.

You’ll implement an abstract class from which entities can inherit to gain the persistence-related code they need. This class will provide an identifier and a version property along with proper overloading of System.Object methods. You’ll call this class VersionedEntity; listing 9.2 shows its implementation.

Listing 9.2. VersionedEntity base class abstracting persistence-related code
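The listing’s code isn’t reproduced in this extract; the following is a minimal sketch of what such a base class can look like. The NHibernate.Mapping.Attributes annotations the real listing carries are omitted here so the sketch stays self-contained, and the exact member shapes are our own:

```csharp
using System;

public abstract class VersionedEntity {
    // Assigned Guid identifier: available before the entity is ever saved.
    private Guid id = Guid.NewGuid();
    // Version number used by NHibernate for optimistic concurrency control
    // (mapped with field access, so no public setter is needed).
    private int version;

    public virtual Guid Id { get { return id; } }
    public virtual int Version { get { return version; } }

    // Identity-based equality: two entities are equal when they are the
    // same kind of entity and share the same identifier.
    public override bool Equals(object obj) {
        VersionedEntity other = obj as VersionedEntity;
        return other != null
            && other.GetType() == GetType()
            && other.Id == Id;
    }

    public override int GetHashCode() {
        return id.GetHashCode();
    }
}
```

Because the identifier is assigned in the field initializer, Equals() and GetHashCode() are stable for transient and persistent instances alike.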

Using an assigned Guid as identifier provides many advantages. For example, it simplifies the implementation of Equals() and GetHashCode(). The version is used for optimistic concurrency control, explained in section 5.2.1. The implementations of System.Object methods are simple but effective.

Note that you can replace the initialization of the identifier as follows:

private Guid id
= (Guid) new NHibernate.Id.GuidCombGenerator().Generate(null,null);

This initialization uses the guid.comb identifier generator. You can read about its advantages in table 3.5 of chapter 3, section 3.5.3.

If you don’t want to reference NHibernate here in your business layer, you can create a private static method in VersionedEntity using the same algorithm as the method GuidCombGenerator.GenerateComb(). Remember that NHibernate is licensed under the LGPL; therefore, all of its source code is publicly available for viewing and customization.
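Such a method might look like the following. This is the well-known “comb” formulation (popularized by Jimmy Nilsson), which overwrites the last six bytes of a random Guid with a timestamp so that generated values sort roughly sequentially in the database; treat it as a sketch of the idea rather than a byte-for-byte copy of NHibernate’s implementation:

```csharp
using System;

public static class CombGuid {
    private static readonly DateTime BaseDate = new DateTime(1900, 1, 1);

    public static Guid Generate() {
        byte[] guidArray = Guid.NewGuid().ToByteArray();
        DateTime now = DateTime.UtcNow;

        // Days since the base date, and time of day measured in 1/300ths
        // of a second (the resolution of the SQL Server datetime type).
        TimeSpan days = new TimeSpan(now.Ticks - BaseDate.Ticks);
        TimeSpan msecs = now.TimeOfDay;

        byte[] daysArray = BitConverter.GetBytes(days.Days);
        byte[] msecsArray = BitConverter.GetBytes(
            (long)(msecs.TotalMilliseconds / 3.333333));

        // Reverse to match SQL Server's ordering of uniqueidentifier bytes.
        Array.Reverse(daysArray);
        Array.Reverse(msecsArray);

        // Overwrite the last 6 bytes with the timestamp.
        Array.Copy(daysArray, daysArray.Length - 2,
                   guidArray, guidArray.Length - 6, 2);
        Array.Copy(msecsArray, msecsArray.Length - 4,
                   guidArray, guidArray.Length - 4, 4);

        return new Guid(guidArray);
    }
}
```

The first ten bytes stay random, so uniqueness is preserved; only the tail becomes time-ordered, which reduces index fragmentation on insert.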

Implementing Persistence-Abstracted Entities

When inheriting from this VersionedEntity base class, all the basic persistence-related features are neatly taken care of (identifier, the version, and the overloading of System.Object methods). But we still have to map our business properties. For that, we have a few choices: XML Mappings or attribute-based mappings. Another option called Fluent NHibernate also looks promising, but because it’s a work in progress we won’t discuss it here.

You may ask whether the use of NHibernate.Mapping.Attributes decreases the persistence ignorance of your entities. After all, mapping attributes is about mapping, which is a persistence concern rather than a domain concern. Do we want all those attributes in our domain models? Like all things, it’s a trade-off. The pros and cons of attributes are discussed in section 3.3.2.

Let’s use a simple example to illustrate the documentation aspect of these attributes:

public class User : VersionedEntity {
    private string _name;
    [Property(Length = 64, Access = "field.camelcase-underscore")]
    public string Name { ... }
}

Without the information Length = 64, a careless developer may think that names can be unlimited in length—and the user will find that the application truncates a name for an unknown reason.


You can see that using VersionedEntity makes this implementation free of code unrelated to the business domain, without sacrificing functionality.

The fact that the domain model isn’t aware of other layers (like the presentation layer) means that it can’t directly inform those layers about any events that occur (for example, when a change occurs in the domain model, the GUI may need to be refreshed). Fortunately, a pattern is available to solve this kind of problem.

9.3.2. Applying the Observer pattern to an entity

The Observer pattern lets an object pass information to other objects without knowing about them up front. The object that sends the notifications is called the subject, and the objects that receive the notifications are the observers. This pattern is often used in a WinForms MVC architecture, as explained in section 8.1.1.

In .NET, you can implement this pattern using events. You add an event to your class, and then the observers must register with the event in order to receive notifications. Most of the time, the registration is done just after the entity is created or loaded.

Let’s look at an example that illustrates how to implement this pattern. In the previous example, the class User has a property Name. If you want to inform the presentation layer when this property changes, this is the direct (and bad) way:

public class User {
    private string name;
    public string Name {
        get { return name; }
        set {
            if (name==value) return;
            name = value;
            // Hypothetical direct call into the presentation layer:
            // this coupling is exactly what makes this approach bad.
            MainForm.UserNameChanged(this);
        }
    }
}

Here, you assume that the entity has access to the presentation layer, which provides a method to call when the entity changes. The problem, in this implementation, is that the entity is tied to the presentation layer—and that’s bad because you can’t use the entity in any other context (for example, when testing).

Here’s the solution, using the Observer pattern:

public delegate void NameChangedEventHandler(
    object sender, EventArgs e );

public class User {
    private string name;
    public string Name {
        get { return name; }
        set {
            if (name==value) return;
            name = value;
            OnNameChanged();
        }
    }
    public event NameChangedEventHandler NameChanged;
    protected virtual void OnNameChanged() {
        if (NameChanged != null)
            NameChanged(this, EventArgs.Empty);
    }
}

You first define a delegate for the NameChanged event. Then, in the implementation of the property (Name), you raise the event after changing the property’s value. The code to raise the event is in the OnNameChanged() method. Using a separate method is a recommended guideline from the official .NET documentation, which discusses the implementation and use of events.

The next step is to listen to the event:

User user = BusinessLayer.LoadUser(userId);
user.NameChanged += User_NameChanged;

In this code, you load a user and subscribe to its NameChanged event. The method User_NameChanged() will be called whenever this property changes.

Note that the .NET framework provides an INotifyPropertyChanged interface for this scenario. Here’s an implementation of the User class implementing this interface:

using System.ComponentModel;

public class User : INotifyPropertyChanged {
    private string name;
    public string Name {
        get { return name; }
        set {
            if (name==value) return;
            name = value;
            OnPropertyChanged("Name");
        }
    }
    public event PropertyChangedEventHandler PropertyChanged;
    protected virtual void OnPropertyChanged(string propertyName) {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}

This solution is similar to the previous one. But it has the benefit of working well with other mechanisms in the .NET framework, such as data binding.

You can also use the Observer pattern in many other situations. Here’s a security-related example:

public class SecurityService {
    public void User_IsAdministratorChanging(object sender, EventArgs e) {
        if ( ! loggedUser.IsAdministrator )
            throw new SecurityException("Not allowed");
    }
}

The security service listens to the User.IsAdministratorChanging event. This service can cancel a modification by throwing an exception if the logged-in user isn’t an administrator.

This section has explained how to avoid cluttering the domain model with unrelated concerns. Now, let’s talk about the domain model’s primary concern: the business logic.

9.4. Implementing the business logic

In this book, we use the term business logic for any code that dictates how entities should behave. It defines what can be done with the entities and enforces business rules on the data they contain. Note that we aren’t strictly speaking about the domain model, but about the business layer in general.

The business layer can contain many kinds of business logic. We’ll use a case study to cover them all. Let’s say you want to implement a subsystem of the CaveatEmptor application that lets a user place a bid on an item; when this is done, all other bidders are notified of the new bid via email.

You can’t do that in the Bid or Item entity, because sending emails isn’t their responsibility. The business layer should do it. Here’s an example:

public void PlaceBid(int itemId, Bid bid) {
    using (ItemDAO persister = new ItemDAO()) {
        Item item = persister.LoadWithBids(itemId);
        item.PlaceBid(bid);
        foreach (Bid existingBid in item.Bids)
            Notify(existingBid.Author, bid, ...);
    }
}

Before going further, we need to explain a few details. This method takes the identifier of the item and a bid object. This might be convenient if you were calling the method from an ASP.NET page where these item identifiers would be included in the GET or POST request. In its implementation, PlaceBid loads the item using a DAO called ItemDAO. (For more details about the DAO pattern, see section 10.1.)

After loading the item, you place the bid and notify the authors of all the other bids. Note that because the NHibernate session is still open, the Bids collection could be lazily loaded. But in this scenario you know you’ll definitely be using the Bids collection, so you can save trips to the database by eagerly loading it with the LoadWithBids() method.

Let’s dissect this method to see how various kinds of business logic are executed when it’s called.

9.4.1. Business logic in the business layer

The method BusinessLayer.PlaceBid() should contain logic that belongs in the business layer. For example, it’s common for the business layer to contain rules related to security and validation:

public void PlaceBid(int itemId, Bid bid) {
    if ( loggedUser.IsBanned )
        throw new SecurityException("Not allowed");
    // ...
}

Here, before placing the bid, you make sure the logged user isn’t banned.

It’s also common to use the IInterceptor API when a business rule is hooked to the persistence of entities. Read section 8.4 to see how it’s done. For complex business rules, you may consider using a rules engine.

The remaining business logic belongs in the domain model.

9.4.2. Business logic in the domain model

The business logic in an entity expresses what the entity is supposed to do from a core domain point of view. Here is a simple implementation of the method Item.PlaceBid():

public void PlaceBid(Bid bid) {
    if (bid == null)
        throw new ArgumentNullException("bid");
    if (bid.Amount < CurrentMaxBid.Amount)
        throw new BusinessException("Bid too low.");
    if (this.EndDate < DateTime.Now)
        throw new BusinessException("Auction already ended.");
    CurrentMaxBid = bid;
}

This implementation illustrates the different kinds of business logic. You’ll see this method again in chapter 10, where we discuss more architectural decisions.

In this implementation, you start with some guard clauses, and then you perform the action itself (assigning the bid to the item). The guard clauses are the if statements before the action takes place. They prevent the user from doing something that goes against the rules of the business. In this example, you use guard clauses to prevent someone placing a lower bid than the current one or placing a bid on an item that has already been sold.

Sometimes, business logic must be executed at a specific time. For example, if some validation logic must be performed before saving an entity, you can add a Validate() method to the entity. Suppose you already have code such as this:

public string Name {
    get { return name; }
    set {
        if ( string.IsNullOrEmpty(value) )
            throw new BusinessException("Name required.");
        if (name==value) return;
        name = value;
    }
}

You can have an additional method that re-checks all the logic without duplicating code:

public void Validate() {
    Name = Name;
    Password = Password;
}

By setting the properties in the Validate() method, the business rules are re-checked. Note that this trick only works if the validity checks are done in a specific order. Specifically, things like

if ( string.IsNullOrEmpty(value) )

must come before

if (name==value) return;

Once your Validate() method has validated each individual property, you can add validations that work on multiple properties. For example, in a holiday booking application, you might check that an outbound flight date comes before an inbound flight date.
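A sketch of such a cross-property check, in the style of the Validate() method above. The Booking class and the minimal BusinessException here are our invented stand-ins for the real domain classes:

```csharp
using System;

public class BusinessException : Exception {
    public BusinessException(string message) : base(message) { }
}

public class Booking {
    public DateTime OutboundDate;
    public DateTime InboundDate;

    public void Validate() {
        // Cross-property rule: the outbound flight must come
        // before the inbound flight.
        if (OutboundDate >= InboundDate)
            throw new BusinessException(
                "Outbound flight must be before the inbound flight.");
    }
}
```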

When you’re implementing the domain model’s business logic, you must be careful to avoid unwanted dependencies. You can often move these up to an upper layer, such as the business layer. This is better explained with an example:

public string Password {
    set {
        if ( CryptographicService.IsNotAStrongPassword(value) )
            throw new BusinessException("Not strong enough.");
        password = value;
    }
}

Here you can see the domain model doing too much; it shouldn’t depend on cryptographic and mailing services. Instead, these concerns are better suited to the business layer.

There are also some rules that shouldn’t be implemented in the domain model, or anywhere else in the application for that matter.

9.4.3. Rules that aren’t business rules

Some rules shouldn’t be implemented in any layer of the application. An easy way to find them is to see if they really are business rules.

These rules generally test the code, to make sure that everything went as expected. Here’s an example:

public void PlaceBid(int itemId, Bid bid) {
    // ... place the bid ...
    if ( ! item.Bids.Contains(bid) )
        throw new Exception("PlaceBid() failed.");
}

Some would say that this is fine, because the code is testing whether the post-conditions of the action are as expected. In this case, you’re testing that the item contains the bid after PlaceBid is called. Post-condition checking is common with programmers who employ Design by Contract [Meyer, Bertrand]. We usually adopt a different approach: we put post-conditional checks such as this into a separate test.

Let’s take another example:

public void LoadMinAndMaxBids() {
    min = BidDAO.LoadMinBid(item);
    max = BidDAO.LoadMaxBid(item);
    Assert.LessThan(min, max);
}


This check is really a unit test for the persistence layer: if the Assert fails, it means there is a bug in that layer’s implementation, so the assertion belongs in the test suite rather than in the application code. We cover testing in more detail in section 8.1.

So far, you’ve implemented the internal structure of the domain model, taking care of its data and its behavior. Now it’s time to address some issues related to the environment in which this domain model is used.

9.5. Data-binding entities

The presentation layer allows the end user to display and modify the entities of an NHibernate application. This implies that the data inside your entities is displayed using .NET GUI controls and that the user’s input is sent back to the entities to perform updates to the data.

Although this data transfer can be done manually, .NET provides a way to create a link between an object (called the data source) and a control so that changes to one of them are propagated to the other. This is called data binding. In the context of NHibernate, these objects are called POCOs rather than entities, to emphasize the fact that they don’t have any special infrastructure to assist data binding. Developers who are used to DataSets (and the wizards of Visual Studio .NET) may find POCO data binding challenging. DataSets contain special infrastructure to make data binding easier, and POCOs are generally free of this additional infrastructure. Fortunately, most .NET GUI controls support basic data binding to POCOs, and .NET provides interfaces that allow you to improve this support.

In this section, we’ll discuss a number of alternatives for data binding; these alternatives apply equally to Windows and web applications. We’ll first explain how you can interact with .NET GUI controls without using data binding. Then you’ll data bind POCOs and learn about a number of extensions that improve these capabilities. You’ll also see how NHibernate can help implement data binding. Finally, you’ll discover a library that can help you data bind POCOs.

A POCO includes three kinds of data: simple properties (that is, primitive types), references to other POCOs (as components or many-to-one relationships), and collections of POCOs (or primitives). In this discussion, we’ll ignore references to other POCOs, because such a reference is generally visualized by displaying one of its properties (its name, for example). An additional mechanism (such as a button) is provided to view or edit the related entities.

In order to cover how the simple properties and collections can be data bound, we’ll use the example of writing a form to manage users and their billing details (as defined for the auction application in section 3.1.2). This form, shown in figure 9.2, will retrieve the user’s information and let you update the billing details.

Figure 9.2. Domain model bound to a user interface

The interesting aspect of this example is that BillingDetails is an abstract class, so the entities in the collection can be either BankAccount or CreditCard instances. This complication will let us demonstrate the limitations of some approaches to data binding, discussed next.

Note that we don’t give a thorough explanation of the .NET APIs you’ll use; if you need to learn more about them, refer to the official .NET documentation. You may also want to read Data Binding with Windows Forms 2.0 [Noyes 2006].

Let’s start by ignoring all these APIs and displaying/retrieving data manually.

9.5.1. Implementing manual data binding

The idea behind this approach is simple: you copy the POCO data from/to the GUI. When you need to display something, you take it from the POCO and send it to the GUI:

editName.Text = user.Name;

When you need to process the POCOs, you retrieve any changes in the GUI and apply them back in the POCOs:

user.Name = editName.Text;

This approach is simple to understand and implement. It’s also easy to customize. For example, when displaying an identifier (of the type integer), you may decide to display New for a transient entity (instead of 0).
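That display rule can be captured in a small helper; the IdFormatter class is our invented example, assuming the convention that a transient entity carries the default identifier value 0:

```csharp
public static class IdFormatter {
    // Transient entities conventionally carry the default id value 0;
    // show "New" instead of a meaningless number.
    public static string Format(int id) {
        return id == 0 ? "New" : id.ToString();
    }
}
```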

It’s also straightforward to support polymorphism:

if (billingDetails is BankAccount) {
    editBankName.Text = (billingDetails as BankAccount).BankName;
} else {
    editExpYear.Text = (billingDetails as CreditCard).ExpYear.ToString();
}

The downside of manual data binding is that it can be tedious to implement, especially for complex objects.

9.5.2. Using data-bound controls

In this case, you rely on the support of built-in data binding for public properties and collections. Here’s an example for Name:

textBoxName.DataBindings.Add("Text", user, "Name");

In this example, the Windows Forms control textBoxName is data bound to the property Name of the User instance. When binding collections, you can use the control’s DataSource property:

dataGridView.DataSource = user.BillingDetails;

The DataGridView control uses the BillingDetails collection as data source. But this solution is limited: for example, it doesn’t support polymorphism, which means you can only edit the properties of the class BillingDetails. You can’t edit the properties of the subclasses BankAccount and CreditCard.

You can use numerous helper classes and extensions to improve this support: ObjectDataSource, BindingSource, BindingList&lt;T&gt;, IEditableObject, INotifyPropertyChanged, and so on. We suggest that you look at these APIs to see which ones suit your needs.

If you value simplicity in your domain model and still want to do powerful data binding, you can implement wrapping classes (using the Adapter pattern) that represent a presentation model:

BillingDetailsWrapper detailsWrapper = new BillingDetailsWrapper(details);
editBillingDetails.DataSource = detailsWrapper;

In this case, you have two classes with specific purposes, which gives you more control: the entity keeps the focus on its business value, and the wrapper provides data-binding capabilities on top of the entity. The downside is extra work, because you have two classes to implement instead of one.

Another benefit of using wrapper classes is that you can add properties for reporting purposes. A common example is to add a FullName property that returns the first name and the last name of a User as a single string.
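A sketch of such a wrapper follows; the minimal User class stands in for the real entity, and UserWrapper is our invented name:

```csharp
public class User {
    public string Firstname;
    public string Lastname;
}

// Presentation-model wrapper (Adapter pattern): adds binding-friendly
// read-only properties without touching the entity itself.
public class UserWrapper {
    private readonly User user;

    public UserWrapper(User user) { this.user = user; }

    // Combines two entity properties into one display value.
    public string FullName {
        get { return user.Firstname + " " + user.Lastname; }
    }
}
```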

9.5.3. Data binding using NHibernate

If you think about the way NHibernate works, you’ll realize that it already does a kind of data binding. But instead of binding an object to the GUI, it binds the object to the database. When you load an entity, it fills the entity with data; and when you save the entity, it pushes the data back to the database.

The part of NHibernate responsible for this is the MetaData API. You can leverage this API to help automate binding entities to a GUI. The following code is based on that previously shown in section 3.4.10, where we discussed working with MetaData in more depth:

User user = UserDAO.Load(userId);
NHibernate.Metadata.IClassMetadata meta =
sessionFactory.GetClassMetadata( typeof(User) );
string[] metaPropertyNames = meta.PropertyNames;
object[] propertyValues = meta.GetPropertyValues(user);
for (int i=0; i<metaPropertyNames.Length; i++) {
    Label label = new Label();
    label.Text = metaPropertyNames[i];
    TextBox edit = new TextBox();
    edit.Text = propertyValues[i].ToString();
    // ... add label and edit to the form's Controls collection
}

This simplistic implementation retrieves a user’s data and generates labels and text boxes to display it. For brevity, we haven’t written code to set the position of these controls on the form.

The interface IClassMetadata also has the method SetPropertyValues(object entity, object[] values); it can be used to copy the data from the GUI back to the entity. Note that you must keep the values in the same order as when you loaded them.

Although this approach seems powerful, it has several drawbacks that aren’t acceptable in a production application. Even with a well-designed algorithm, the resulting layout of the GUI is far from perfect. You may have problems with the formatting of values (such as dates). There are better controls than TextBox for some types of data (for example, DateTimePicker). Finally, this approach requires extra work to support references to other POCOs and collections.

It’s possible to solve these issues with some effort; and this approach can help when you’re prototyping an application. Therefore, you should add it to your toolbox.

9.5.4. Data binding using ObjectViews

ObjectViews is an open source library written specifically to help data bind POCOs to .NET Windows controls. It’s largely outside the scope of this book to explore this library, but it’s worth mentioning that it supports data binding of both individual POCOs and collections.

At the time of writing, ObjectViews is based on .NET 1.1 and won’t evolve further. You can download this library (with a helpful example application) from its project site.

9.6. Filling a DataSet with entities’ data

DataSets are widely used, mostly by data-centric applications that leverage wizards in tools like Visual Studio .NET to generate code. But they’re quite different from POCOs. If, for some reason, your domain model must communicate with a component that uses DataSets, you’ll have to bridge the gap.

Before you begin, remember that you can execute classic ADO.NET code by either opening a database connection yourself or using the one NHibernate has already opened, available through the ISession.Connection property. In this case, be careful not to work with stale data or to change data without invalidating the related entries in NHibernate’s second-level cache. You may also consider rewriting the DataSet-based component to work with entities, for better consistency. If neither of these options is applicable, you’ll have to convert your entities to and from DataSets.
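As a hedged sketch of the first option, the following reuses NHibernate’s open connection for plain ADO.NET work. The SQL statement and table are assumptions; enlisting the command in the current NHibernate transaction (if one is active) keeps both APIs working against the same transaction:

```csharp
// Sketch: classic ADO.NET over NHibernate's connection.
// The query, table, and the open 'session' are assumptions.
using (IDbCommand cmd = session.Connection.CreateCommand())
{
    cmd.CommandText = "SELECT Id, Name FROM Users";
    if (session.Transaction != null && session.Transaction.IsActive)
        session.Transaction.Enlist(cmd);   // share NHibernate's transaction

    using (IDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader["Name"]);
    }
}
```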

In the following sections, we’ll consider going from entities to a DataSet filled with their data. You shouldn’t have any problem reversing this process.

9.6.1. Converting an entity to a DataSet

A DataSet is an in-memory data container that mimics the structure of a relational database; filling it means adding rows to its tables. It’s relatively easy to figure out the code required to do this. Here, we assume you’re working with a typed DataSet, because typed DataSets are easier to manipulate.

Listing 9.3 contains a method that does this work for the Item entity. It’s complete in that it handles the simple properties, the Seller reference to the User entity, and the Bids collection.

Listing 9.3. Filling a DataSet with the content of an entity

You use a collection to keep track of the entities that are currently being added; this is required to avoid infinite recursion when there is a circular reference. If the entity is already in the DataSet, its row must be updated; otherwise, a new row must be created.

Filling the simple properties is straightforward. Handling references to other entities is more complex: you must either retrieve the referenced entity’s row or add it if it isn’t in the DataSet yet.

Handling collections requires that you first make sure the collection is loaded (unless you want it to remain lazily loaded, which isn’t the case here). Then you add the bids one by one, adding each row only if the DataSet doesn’t already contain that entity. Finally, you remove the entity from the in-progress collection, because it has been processed.
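The steps above can be sketched as follows. This is not the book’s actual Listing 9.3: the entity members (Id, Name, Seller, Bids) and the typed DataSet (AuctionDataSet, with its generated FindById, NewItemRow, and AddItemRow members) are assumptions for illustration:

```csharp
// Sketch of the entity-to-DataSet conversion described above.
// The entities and the AuctionDataSet typed DataSet are hypothetical.
private readonly ArrayList inProcess = new ArrayList();

public void Fill(AuctionDataSet ds, Item item)
{
    if (inProcess.Contains(item))
        return;                           // break circular references
    inProcess.Add(item);

    // Update the existing row, or create a new one
    AuctionDataSet.ItemRow row = ds.Item.FindById(item.Id);
    bool isNew = (row == null);
    if (isNew)
        row = ds.Item.NewItemRow();

    // Simple properties
    row.Id = item.Id;
    row.Name = item.Name;

    // Reference to another entity: make sure its row exists first
    Fill(ds, item.Seller);                // similar overload for User
    row.SellerId = item.Seller.Id;

    if (isNew)
        ds.Item.AddItemRow(row);

    // Collection: force loading, then add the bids one by one
    foreach (Bid bid in item.Bids)
        Fill(ds, bid);                    // similar overload for Bid

    inProcess.Remove(item);               // this entity is processed
}
```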

If you’re using a code generator as explained earlier in this chapter, you may be able to generate this code for all your entities. Doing so will save you a lot of time.

Now let’s see how NHibernate can help you achieve the same result more quickly.

9.6.2. Using NHibernate to assist with conversion

If you’re working with a non-typed DataSet, you can use an approach similar to the one explained in section 9.5.3: you can extract the class names and the property names and use them as table names and column names.

NHibernate itself approximates this idea with the ToString(object entity) method of the NHibernate.Impl.Printer class; take a look at it before starting your own implementation.

If you succeed in implementing this approach, it will be generic enough to work with any entity, because you’ll only be manipulating metadata. But it also means the domain model dictates the schema of the DataSet. You can solve this issue by using the mapping between the domain model and the database (because the DataSet schema is generally based on the database schema).
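As a rough sketch of this metadata-driven idea (not the Printer class itself), the following builds a one-row DataTable from any entity. It handles only simple, non-association properties; GetClassMetadata, GetPropertyType, and IType.ReturnedClass are the NHibernate members assumed here:

```csharp
// Sketch: building a non-typed DataTable from NHibernate metadata.
// Class and property names become the table and column names.
public DataTable ToDataTable(ISessionFactory factory, object entity)
{
    IClassMetadata meta = factory.GetClassMetadata(entity.GetType());
    DataTable table = new DataTable(entity.GetType().Name);

    // One column per mapped property, typed from the mapping
    foreach (string name in meta.PropertyNames)
        table.Columns.Add(name, meta.GetPropertyType(name).ReturnedClass);

    table.Rows.Add(meta.GetPropertyValues(entity));
    return table;
}
```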

With this ability to communicate with a component using DataSets, you’re finished implementing a real-world domain model.

9.7. Summary

Writing real-world domain models can be tricky because of the influence of the environment. We hope this chapter has helped you understand the process.

The first step is to implement the domain model and the database and write the mapping between them. Until this chapter, you wrote them manually; now, you know how to generate them. You even know how to automate the migration of the database as the domain model evolves.

This chapter explained how to handle legacy databases when you’re writing the mapping. NHibernate supports the mapping of natural and composite keys. As a last resort, you can implement user types to handle custom situations. It’s also possible to work with a database using triggers.

After explaining how to implement and map the domain model’s data, we moved to its business logic. We explained what persistence ignorance means and how to write a clean domain model that’s free of unwanted dependencies. Then, we explained how the different kinds of business logic should be implemented. We also gave you some advice about errors to avoid.

When you’ve completed the domain model, you need to display it. This is where data binding comes into play. As you saw, doing it correctly can require quite a bit of work.

We completed this chapter by looking at how you can obtain a DataSet from an entity’s content. Although this process may require a lot of time at first, it can be automated.

This chapter was just an introduction to the real world of domain models. You may need to do some research to find the perfect answer for your needs, and we hope the resources we’ve mentioned will keep you busy for a while.

Now, it’s time to move to the persistence layer. So far, you’ve been writing simple, short persistence operations that act on similar entities. Let’s step back and look at the architectural issues that accompany writing a functional persistence layer in the real world.