Tigraine

Daniel Hoelbling-Inzko talks about programming

Mapping Sql-Time to TimeSpan with NHibernate

Funny how long you can use NHibernate + Fluent NHibernate on greenfield applications without using all of its mapping features. But one little project that needs to talk to a legacy database and you run into all kinds of trouble.

Today I had to map the time datatype present in MsSql2008. According to the official Microsoft documentation, the preferred CLR type for time is TimeSpan.

Unfortunately, by default NHibernate believes that the best way to map TimeSpan is to transform it to a bigint.

Fortunately the solution is rather trivial:

Map(p => p.CreatedOn, "CreationTime")
.CustomType("TimeAsTimeSpan");
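For context, that Map call lives inside a Fluent NHibernate ClassMap. A minimal sketch of the surrounding pieces (the Appointment entity and class names are made up for illustration; TimeAsTimeSpan is the built-in NHibernate type name):

```csharp
// Hypothetical entity: CreatedOn holds a time-of-day value,
// stored in a time column named "CreationTime".
public class Appointment
{
    public virtual int Id { get; set; }
    public virtual TimeSpan CreatedOn { get; set; }
}

public class AppointmentMap : ClassMap<Appointment>
{
    public AppointmentMap()
    {
        Id(p => p.Id);
        // Without CustomType, NHibernate would persist the TimeSpan as a bigint.
        Map(p => p.CreatedOn, "CreationTime")
            .CustomType("TimeAsTimeSpan");
    }
}
```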

Filed under net, programmierung, nhibernate

NHibernate Composite-Id mapping gotcha

I am currently working on a new system that populates a legacy database, so I am knee deep in relational SQL weirdness, trying to fight it with arcane NHibernate mappings.

This gem just cost me over an hour of work:

//Inside a ClassMap
CompositeId()
    .KeyProperty(p => p.Id)
    .KeyProperty(p => p.POSITION);
Map(p => p.Id, "Id");
Map(p => p.POSITION, "Position");

Trying to save this entity results in an Exception stating: Invalid Index 3 for SqlParameterCollection with Count=3

Not even NHibernate Profiler can detect that you are doing something wrong here, because NHibernate fails while constructing the SqlCommand, before anything is sent to the database for a profiler to pick up on.

What happened is that I was mapping Id and Position twice, since the CompositeId already counts as one mapping.

So in order to make this work I had to remove the Map() instructions and specify the column name in the KeyProperty of CompositeId:

CompositeId()
    .KeyProperty(p => p.Id, "Id")
    .KeyProperty(p => p.POSITION, "Position");

Hope this helps.

Filed under net, programmierung, nhibernate

Eliminating SELECT N+1 in NHibernate without getting duplicate root entities

NHibernate makes it easy to do a lot of things in a natural and OO way without having to constantly think about tables and rows.

It just falls short on some occasions when you do something in your domain model that is fine from a design and OO standpoint but kills you in terms of database performance.

I ran into this problem with a fairly common use-case: iterating over a list of orders and calling a method on them that operates on their OrderLines collection. Everyone knows you don’t do that: every time that method runs, NHibernate sees that the OrderLines collection hasn’t been fetched yet and issues a separate SELECT query for each order so you can do your calculation.

In traditional systems this is the perfect place to use a stored procedure and push the calculation logic into the sproc. It’s just not something I fancy doing. You end up with your code scattered around the system, and at times your calculations simply don’t work inside the database.

Putting code in the database also makes it even harder to deploy the application because you have to think about deploying the database code too.

Well, now back to topic. If you run the following query on an Order table that has many OrderLines, NHProf will warn you that you are doing a SELECT N+1.

var orders = session.CreateCriteria(typeof (Order))
    .List<Order>();

foreach(var order in orders)
{
    order.DoSomethingWithOrderLines();
}

The way to prevent this is to tell NHibernate explicitly to fetch all OrderLines through a Join.

session.CreateCriteria(typeof(Order))
    .SetFetchMode("Lines", FetchMode.Eager)
    .List<Order>();

This works, but you’ll end up with duplicate orders being returned by NHibernate. This is because the query joins each OrderLine with its Order, resulting in one Order row per OrderLine (and therefore a lot of duplicates).

But NHibernate is smart, and the information you are getting from the query is enough to construct the right objects, although some unnecessary data is coming over the wire. As with most things, you only need to tell NHibernate how to do it right. In our case that’s to only return distinct orders instead of duplicates through a result transformer:

var orders = session.CreateCriteria(typeof(Order))
    .SetFetchMode("Lines", FetchMode.Eager)
    .SetResultTransformer(new DistinctRootEntityResultTransformer())
    .List<Order>();

The result-transformer will eliminate all duplicate root elements (in our case Order) while populating the child collections accordingly, resulting in only one query being run.

Filed under net, programmierung, nhibernate

Untangling the dependency ball! Windsor + NServiceBus + Caliburn + Fluent Nhibernate in one package


Unfortunately nu is still falling short on one thing: Making sure that all the stuff you install is actually compatible with the other stuff you have already installed. There is a ticket for this and I’m fairly confident this will get resolved (please vote the ticket up), but for now I was back to figuring out what version of what framework to use to make my app compile.

As always, the main problem was Castle.Core, being present in 3 different versions. (NSB used version 1.1, Caliburn 1.2 and the latest Windsor release targets 2.5)

I decided to back down and use 1.2, since there is a NHibernate gem for 1.2 and a Windsor gem for 1.2. Keep in mind that I’m now using a NHibernate 3.0 alpha, so think twice about using this “stack” in production.

Anyway, this is a collection of:

  1. NServiceBus 2.0 .NET 4 (2.0.0.1219)
  2. NServiceBus.ObjectBuilder.CastleWindsor
  3. Castle.Windsor (2.1)
  4. Caliburn 2.0 (still unreleased from the trunk)
  5. NHibernate 3.0.0.1002
  6. FluentNhibernate 1.1 (Updated to NHibernate 3.0)
  7. AutoMapper

Disclaimer: The whole thing is built for .NET 4.0 and works on my machine. Don’t blame me if it’s broken for you.

Anyway. You can download the whole package of libraries here: castle-stack.rar

NHibernate removes items from Many-To-Many upon Update of Entity due to Model Binding

Imagine the following scenario:

[diagram: Township with two many-to-many collections mapped to Region]

Township has two m:n collections mapped to Region. My Controller has special actions for updating these collections, while a generic Edit method takes care of updating normal properties on Township. The code in question looks quite innocent:

[HttpPost]
public virtual ActionResult Edit([Bind]T item)
{
    if (!ModelState.IsValid) return View(item);
    using (var trans = session.BeginTransaction())
    {
        session.Update(item);
        trans.Commit();
    }
    return RedirectToAction("List");
}

Well, the problem is quickly found using NHProf:

[NHProf screenshot: DELETE statements issued against the Township/Region association tables]

Whenever I updated the Township entity, all its associated Regions were cleared.

Turns out the problem lies with the ModelBinder in MVC2: since it reconstructs a new Township item and populates it with values from the request, there is no way for MVC to fill the WinterRegions and SummerRegions collections. So NHibernate got empty collections, assumed I had removed all items from them, and decided to persist that removal to the database, resulting in a DELETE.

There are two solutions to the problem: a) turn off Cascade.All, or b) fill the collections before the update.
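For completeness, option (a) would mean not cascading collection changes from Township at all. A hedged sketch of what that could look like in a Fluent NHibernate ClassMap (the table and property names are assumed from the scenario above, and the post doesn’t show its actual mappings):

```csharp
// Hypothetical TownshipMap: no .Cascade.All() on the collections,
// so saving a Township no longer touches the association tables.
public class TownshipMap : ClassMap<Township>
{
    public TownshipMap()
    {
        Id(t => t.Id);
        HasManyToMany(t => t.WinterRegions).Table("TownshipWinterRegion");
        HasManyToMany(t => t.SummerRegions).Table("TownshipSummerRegion");
        // Changes to the Region collections must now be saved explicitly.
    }
}
```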

Since I already used the Cascade Behavior in other places I decided to go with b and select the entity prior to updating it. The resulting code looks like this:

[HttpPost]
public override ActionResult Edit([Bind]Township item)
{
    using (var trans = session.BeginTransaction())
    {
        var township = session.Get<Township>(item.Id);
        session.Evict(township);
        item.WinterRegions = township.WinterRegions;
        item.SummerRegions = township.SummerRegions;
        session.Update(item);
        trans.Commit();
    }
    return RedirectToAction("List");
}

Notice that it is important to first evict the fetched entity from the session, otherwise you’ll get an Exception stating that the same identifier is already associated with this session cache.

To be honest, I’m not particularly fond of this solution; if anyone can point out a better one, please leave a comment or email me. While at it: it would be nice to be able to change the cascade behavior of entities for a single session (like FetchMode for one criteria).

Filed under net, programmierung, nhibernate

Storing binary data in NHibernate / ActiveRecord

I believe the simplest way to store binary data is to just put it in the database. Whenever I’ve agreed to write data to disk instead, I’ve had issues with deployment, administration or disaster recovery.

Simply put: Once you have a dependency from your database to your file system, you no longer have the luxury of only thinking about recovering the database. You now need to keep two pieces of your system “safe”, both requiring a completely different toolset than the other.

Besides the obvious second point of headache for backup/recovery, you also bring yourself into a world of hurt for deployment / maintenance scenarios.
Filesystem access rights can be a huge pain in the ass, and having to set them right (and keep them that way) is usually a time-bomb waiting to go off.

So, storing your binary data in the db solves many problems, but some new ones arise. Mostly implementation details, but I’d like to show you some things to keep in mind when writing binary data to db.

NHibernate supports no lazy loading of instance fields

While with conventional ADO.NET I’d just put the binary data in a column inside the table it belongs to, NHibernate requires you to do things differently. If you map your data like that, NHibernate will fetch it whenever you read objects from that table, meaning that you’ll be querying large binary data fields for no reason, causing your application’s performance to degrade significantly over time.

 

What you want is to have NHibernate fetch that field only if it is accessed (lazy load it), and that’s not possible for fields inside a class, but it is possible for references. So your database schema should look like this:

[diagram: Invoice table referencing a separate BinaryData table]

And your mapping will look similar to this (I’ll use ActiveRecord for easier understanding):

[ActiveRecord]
public class Invoice : ActiveRecordBase<Invoice>
{
    [PrimaryKey]
    public int Id { get; set; }

    [BelongsTo(Lazy = FetchWhen.OnInvoke, Cascade = CascadeEnum.SaveUpdate)]
    public BinaryData ScannedInvoice { get; set; }
}

[ActiveRecord]
public class BinaryData : ActiveRecordBase<BinaryData>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property(ColumnType = "BinaryBlob", SqlType = "IMAGE", NotNull = true)]
    public byte[] Data { get; set; }
}

Now whenever your Invoice is saved/inserted, NHibernate will also check whether BinaryData has to be updated/inserted, while only loading the binary field if you actually access the Invoice.ScannedInvoice property.

The fairy tale of binary blob fields

One of the main advantages students get from being members of imagineClub is that they get access to uploaded course materials through the website. Naturally, the new site has to support file upload and download somehow, and yesterday I started implementation of that feature.

In theory this sounds really simple, especially since the file upload in MonoRail is so trivial I figured it wouldn’t be a problem to implement.

One major thing to consider when designing a file upload feature is the question: Save to disk or save to database? Let’s look at the two options:

Save to disk:

Pro: Very easy
Con: Requires metadata to be kept in the database. Could go out of sync with the db. Requires backup. Requires special permissions.

Save to db:

Pro: Zero setup. Data all in one place, backup hugely simplified. Enforces data integrity
Con: Non-trivial implementation.

Now, I naturally went with the db option. Deployment is hugely simplified if you don’t need to look at file permissions, and most hosters have databases backed up anyway. So if things go south, the only thing I need to recover the site is the database file.

Some searching revealed that binary data could be mapped to the database through AR quite easily:

[Property(ColumnType = "BinaryBlob", SqlType = "varbinary(MAX)")]
public byte[] BinaryData { get; set; }

Problem with that is that it crashed ALL of my database-dependent unit-tests:

------------ System.Data.SQLite.SQLiteException : SQLite error
near "MAX": syntax error

Apparently SQLite can’t figure out that MAX thing and crashes. Since it would accept a numeric value instead, I looked at the SQL Server 2008 documentation for varbinary to find out what MAX would be. Turns out it’s exactly 2147483647 (2^31-1), so my natural reaction was to change the SqlType to varbinary(2147483647) instead of MAX. Now SQLite can interpret it and all tests run great again, but creating the schema on SQL Server isn’t possible any more due to the following (odd) error:

The size (2147483647) given to the column 'BinaryData' exceeds the maximum allowed for any data type (8000).

So, what we just saw is a leaky abstraction inside the ORM. But NHibernate never claimed to abstract the DB completely away from me, so we won’t hold that against it. NHibernate explicitly supports these scenarios: in a pure NHibernate setup it’s just a matter of having two different mapping files, one mapping to the appropriate SQLite datatype and the other to the Sql2008 datatype varbinary(MAX).
But I’m not using NHibernate directly here; I’m using ActiveRecord, which handles mapping through attributes on the data classes, and I’ve no intention of using #ifdef statements anywhere in my code.

The problem here is mainly that whenever you are trying to use two different RDBMS at once you are limiting yourself to the least common denominator, and you have to deal with that.


I won’t be able to use advanced Sql2008 features, and I also won’t be able to use anything fancy inside SQLite either.

The least common denominator in this case is the datatype IMAGE, something Microsoft explicitly discourages in their documentation:

[screenshot: Microsoft documentation note that the image data type will be removed in a future version of SQL Server]

This puts me in a delicate position, since the imagineClub website is hosted on a server I don’t control. I could just wake up one morning and see the iC website down because the hosting company decided to upgrade all users to 2010 (or whatever version the next SQL Server will have).


And I know, usually providers send out warning for stuff like this, but I doubt that through all the structural changes with imagineClub lately they even know where to send those warnings to.

So, long story short: use IMAGE over varbinary(MAX) if you plan on doing in-memory SQLite testing, but keep in mind that your app will break when you upgrade to a newer version of SqlServer.
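Concretely, that means the attribute from above changes to the following (same property, just with the SqlType swapped to the least common denominator, matching the IMAGE mapping shown in the earlier post):

```csharp
// IMAGE works on both SqlServer 2008 and in-memory SQLite,
// unlike varbinary(MAX), which SQLite cannot parse.
[Property(ColumnType = "BinaryBlob", SqlType = "IMAGE")]
public byte[] BinaryData { get; set; }
```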

Update: Looks like Krzysztof Kozmic had the same issues and found a quite clever solution for that. I’m not totally clear on how to do this with ActiveRecord, but it’s a very pragmatic approach to a problem that seems to not have a perfect solution anyway.

ActiveRecord gotchas when testing with an in memory database.

If the title sounds familiar to you, it’s intentional. After having to deal with this in pure NHibernate it came around to also bite me with ActiveRecord.

In short: in-memory SQLite will drop the schema whenever you close the NHibernate ISession object (since ActiveRecord uses NHibernate behind the scenes this poses a problem to us).

So, assuming you have set up ActiveRecord using an InPlaceConfigurationSource similar to this:

IDictionary<string, string> properties = new Dictionary<string, string>();
properties.Add("connection.driver_class", "NHibernate.Driver.SQLite20Driver");
properties.Add("dialect", "NHibernate.Dialect.SQLiteDialect");
properties.Add("connection.provider",
    "NHibernate.Connection.DriverConnectionProvider");
properties.Add("connection.connection_string", "Data Source=:memory:;Version=3;New=True;");
properties.Add("show_sql", "true");
properties.Add("proxyfactory.factory_class",
    "NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle");

var source = new InPlaceConfigurationSource();
source.Add(typeof (ActiveRecordBase), properties);

ActiveRecordStarter.Initialize(source, typeof (Member).Assembly.GetTypes());
ActiveRecordStarter.CreateSchema();

You will not be able to run any queries against it because there is no schema present. In fact the code will consistently blow up with a SQLiteException stating “no such table: ….”. Unfortunately it’s not really possible to change the SessionFactory implementation here because that code is inside ActiveRecord.

Thank god I found this handy guide that suggested subclassing the DriverConnectionProvider class to replace the CloseConnection call with a fake like this:

public class SqLiteInMemoryTestingConnectionProvider : NHibernate.Connection.DriverConnectionProvider
{
    public static System.Data.IDbConnection Connection = null;

    public override System.Data.IDbConnection GetConnection()
    {
        if (Connection == null)
            Connection = base.GetConnection();
        return Connection;
    }

    public override void CloseConnection(System.Data.IDbConnection conn)
    {
    }
}

I then had to pass that new ConnectionProvider into NH through the configuration and all was well:

properties.Add("connection.provider",
    "ImagineClub.Tests.SqLiteInMemoryTestingConnectionProvider, ImagineClub.Tests");

Only catch is that the AR initialization is painfully slow, so I wrote a base class for my xUnit tests that makes sure the AR Init is only run once and that ActiveRecordStarter.CreateSchema() runs before every test (xUnit runs the test class constructor before each test).

The final implementation for all my tests looks like this:

public class ActiveRecordInMemoryTestBase
{
    public ActiveRecordInMemoryTestBase()
    {
        if (!ActiveRecordStarter.IsInitialized)
            Initialize();
        ActiveRecordStarter.CreateSchema();
    }

    private static void Initialize()
    {
        IDictionary<string, string> properties = new Dictionary<string, string>();
        properties.Add("connection.driver_class", "NHibernate.Driver.SQLite20Driver");
        properties.Add("dialect", "NHibernate.Dialect.SQLiteDialect");
        properties.Add("connection.provider",
            "ImagineClub.Tests.SqLiteInMemoryTestingConnectionProvider, ImagineClub.Tests");
        properties.Add("connection.connection_string", "Data Source=:memory:;Version=3;New=True;");
        properties.Add("show_sql", "true");
        properties.Add("proxyfactory.factory_class",
            "NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle");

        var source = new InPlaceConfigurationSource();
        source.Add(typeof (ActiveRecordBase), properties);

        ActiveRecordStarter.Initialize(source, typeof (Member).Assembly.GetTypes());
        ActiveRecordStarter.CreateSchema();
    }
}

public class SqLiteInMemoryTestingConnectionProvider : NHibernate.Connection.DriverConnectionProvider
{
    public static System.Data.IDbConnection Connection = null;

    public override System.Data.IDbConnection GetConnection()
    {
        if (Connection == null)
            Connection = base.GetConnection();
        return Connection;
    }

    public override void CloseConnection(System.Data.IDbConnection conn)
    {
    }
}

Caution: If you copy/paste the above, make sure to change the namespace and assembly name of SqLiteInMemoryTestingConnectionProvider to match yours.

So a database-dependent unit test would look like this:

public class DatabaseTest : ActiveRecordInMemoryTestBase
{
    [Fact]
    public void DatabaseIsEmpty()
    {
        Assert.Equal(0, Member.FindAll().Length);
    }
}

Initial runs are quite slow due to the AR Init, but the whole test-suite should run quite fast since only the schema creation is run before each test.

Running from trunks can be tough

I just spent almost 10 hours running in circles collecting different releases of NHibernate, Fluent NHibernate and Castle, in a rather futile attempt to make them work together.

NHibernate is currently in 2.1 Beta1 at revision 2.1.0.2001 referencing the latest Castle.DynamicProxy2 at revision 2.1.0.0.

Fluent NHibernate’s current trunk compiles against NHibernate 2.1.0.1001 that in turn is compiled against Castle.DynamicProxy2 2.0.3 thus breaking.

So, while the NHibernate project and the Castle project managed to match their versions pretty well, the Fluent NHibernate trunk did not.

So, the obvious choice would be to just take all the assemblies that are packed with Fluent NHibernate and work from that. But by doing so I would then lack Castle MicroKernel and Windsor, two libraries that are not packed with Fluent NHibernate.

At that point I gave up and simply recompiled the Fluent NHibernate trunk with the latest NHibernate trunk.

In case you need Fluent NHibernate with Castle Windsor, I packed my results for you: FluentNHibernateWithCastle.7z

Hope this helps.

Filed under net, castle, nhibernate

Fluent NHibernate gotchas when testing with an in memory database.

What I love most about programmatic configuration is that it’s close to the test.
While we used to carry dozens of XML files around for testing, with DSL-based configuration the configuration usually sits right next to the test fixture, instead of residing in some arbitrary XML file that only insiders can associate with the test.

The standard sample for using SQLite and Fluent NHibernate usually looks like this:

return
    Fluently
        .Configure()
        .Database(SQLiteConfiguration.Standard.UsingFile("mydb.db3").ShowSql())
        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SessionFactory>())
        .ExposeConfiguration(SaveSchema)
        .BuildSessionFactory();

Where SaveSchema is a method that does a database rebuild.
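SaveSchema itself isn’t shown in the sample. A minimal sketch of what such a method could look like, assuming the usual NHibernate SchemaExport approach (this is my reconstruction, not the post’s actual code):

```csharp
// Hypothetical SaveSchema: rebuilds the database schema from the mappings.
// Passed to ExposeConfiguration, so it receives the NHibernate Configuration.
private static void SaveSchema(Configuration config)
{
    var export = new SchemaExport(config);
    export.Drop(false, true);   // drop any existing schema (don't echo the script)
    export.Create(false, true); // recreate the schema from the mappings
}
```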

Now, Fluent NHibernate has in-memory databases built into the API. Just remove the UsingFile directive and replace it with:

.Database(SQLiteConfiguration.Standard.InMemory().ShowSql())

Charming, isn’t it? The only problem is that you won’t be able to do anything with that DB since there is no schema present.
The in-memory database exists per session, so once you close the ISession the db is gone. Since the schema export in most samples operates in its own ISession, subsequent queries will still hit a blank database, and you’ll get an error stating there is no such table.

So my SessionFactory implementation had to change, since I needed to keep the configuration around for doing the schema export:

public class SessionFactory
{
    public static ISessionFactory CreateSessionFactory()
    {
        return
            Fluently
                .Configure()
                .Database(SQLiteConfiguration.Standard.InMemory().ShowSql())
                .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SessionFactory>())
                .ExposeConfiguration((c) =>  SavedConfig = c)
                .BuildSessionFactory();
    }

    private static Configuration SavedConfig;

    public static void BuildSchema(ISession session)
    {
        var export = new SchemaExport(SavedConfig);
        export.Execute(true, true, false, false, session.Connection, null);
    }
}

And my tests then use another factory method to construct the ISession object:

public static ISession CreateSession()
{
    var factory = SessionFactory.CreateSessionFactory();
    var session = factory.OpenSession();
    SessionFactory.BuildSchema(session);

    return session;
}

Hope this helps; quite an annoying problem, and IMO a far from perfect solution. Someone on the FNH mailing list suggested looking at the OneToManyIntegrationTester class, but I couldn’t really extract any terribly useful information from there.

Filed under net, programmierung, nhibernate
