For many reasons we have grown to love our ORMs. We enjoy the simplicity of Entity Framework and the way it separates us from the old, unappreciated world of stored procedures and T-SQL. We also love NHibernate for its maturity and the breadth of features it provides.
What they give us is simplicity and a neat level of abstraction over the database access code, which lets us develop faster, all for free. But we all know that when someone gives us something for free, there is usually a catch. Nothing is free in this world. In this case the catch is a ton of unnecessary overhead. Even if we are experienced with a particular ORM and know how to use it optimally, we are still letting a black-box query builder deep inside the guts of the ORM generate our SQL for us. And while that is fine for simple CRUD operations, it can easily bite you by turning your four-line LINQ query into 10,000 lines of SQL. Been there, done that. Sad but true, and I regret it.
But moving on: at the time I wrote that monstrous beast of a query, I could have made it faster by splitting it in two. I would have done so had I known it was possible. Even then, however, I wouldn't have come close to what I could have achieved with a simple data reader and a stored procedure. The reasons ORMs are slower vary between versions and brands of relational mapper, but here are a few to consider:
- Query building mechanism complexity
- Inefficient “Joining” on related tables
- Custom concurrency checking mechanisms (think: Change Tracking)
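The last point is worth illustrating. Entity Framework, for instance, keeps a change-tracking snapshot for every entity it materializes, even when you only want to read the data. Below is a minimal sketch of how that overhead can be avoided with EF 6's AsNoTracking(); the Failure entity, the AppContext class and the column names are my own assumptions, not taken from any real schema:

```csharp
using System.Data.Entity; // Entity Framework 6
using System.Linq;

// Hypothetical entity and context, for illustration only.
public class Failure
{
    public int FailureId { get; set; }
    public int AccountId { get; set; }
}

public class AppContext : DbContext
{
    public DbSet<Failure> Failures { get; set; }
}

public static class ReadOnlyQueries
{
    public static Failure[] RecentFailures(AppContext db, int accountId)
    {
        // AsNoTracking() skips the change-tracking snapshot for each
        // materialized entity, which adds up on large read-only result sets.
        return db.Failures
                 .AsNoTracking()
                 .Where(f => f.AccountId == accountId)
                 .OrderBy(f => f.FailureId)
                 .Take(50)
                 .ToArray();
    }
}
```

This does not make the generated SQL any better, of course; it only trims the per-row bookkeeping on the client side.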
So why are we using ORMs at all? Is it laziness? A lack of SQL skills? A combination of both? In my opinion, only a developer who understands the semantics of database engines, how they optimize queries and how different types of indexes work, should be allowed to use an ORM. If you have never done it, run the SQL profiler to see what the ORM actually generates and sends to the database. Let me give you an example of what you could see:
var failsForAcct1001 = db.Failures.Where(x => x.AccountId == 1001);
return failsForAcct1001.OrderBy(x => x.FailureId).Take(50);
Translated by ORM to:
select External1.FailureId, External1.col2, External1.col3
from (
    select Inner1.FailureId, Inner1.col2, Inner1.col3
    from Failures Inner1
    where Inner1.AccountId = 1001
) External1
order by External1.FailureId
limit 50
Let's face it: the needless nested subquery is far from optimal, and depending on table size and indexing it can be extremely slow.
Now, here is what should have been generated:
select External1.FailureId, External1.col2, External1.col3
from Failures External1
where External1.AccountId = 1001
order by External1.FailureId
limit 50
But that didn't happen, because we decided to outsource our SQL building to the ORM. A poor choice, if you ask me.
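When the generated SQL is the problem, one escape hatch is to hand the database the exact query yourself while still letting the ORM materialize the results. A sketch, assuming an EF 6 DbContext called db and a Failure entity with the columns from the example above (the SQL dialect simply mirrors the article's hand-written query):

```csharp
using System.Data.SqlClient;
using System.Linq;

// Database.SqlQuery<T> (EF 6) sends the literal SQL to the server,
// bypassing the LINQ-to-SQL translation layer entirely.
var failures = db.Database
    .SqlQuery<Failure>(
        @"select FailureId, col2, col3
          from Failures
          where AccountId = @accountId
          order by FailureId
          limit 50",
        new SqlParameter("@accountId", 1001))
    .ToList();
```

You lose composability and compile-time checking of the query, but you get exactly the SQL you wrote, which is the whole point here.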
I like Ted Neward's line that object-relational mapping is the Vietnam of Computer Science, and it is very much true for most ORMs. They are easy to start with and quickly solve 80 to 90 percent of the problems you would otherwise have to deal with manually. The trouble is the remaining 10 to 20 percent: the rare data manipulation that is simply not supported or requires a workaround.
In the end, though, it all comes down to the use case. If you are building a small internal application with a simple database schema, or even a medium-sized application whose load you can predict, feel free to help yourself to an ORM. Tools are there to be used, after all! Still, remember that an ORM is a hammer and not every problem is a nail, so for more demanding applications stick to the good old DataReader and stored procedures. With a bit of architectural finesse, direct access to SQL can be an intuitive and simple way of working with data. Check this great article on using the repository pattern with ADO.NET; it shows how to get the unit of work, transactions and all the other good stuff you need in your application.
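For the avoidance of doubt, the DataReader-plus-stored-procedure approach is not much code. Here is a minimal sketch; the connection string, the procedure name dbo.GetFailuresForAccount and the column layout are all assumptions for illustration:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Hypothetical DTO matching the columns the procedure returns.
public class Failure
{
    public int FailureId { get; set; }
    public int AccountId { get; set; }
}

public static class FailureRepository
{
    public static List<Failure> GetFailuresForAccount(
        string connectionString, int accountId)
    {
        var results = new List<Failure>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetFailuresForAccount", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@AccountId", accountId);
            conn.Open();
            // The reader streams rows forward-only, with no change tracking
            // and no query translation: what the procedure does is what runs.
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    results.Add(new Failure
                    {
                        FailureId = reader.GetInt32(0),
                        AccountId = reader.GetInt32(1)
                    });
                }
            }
        }
        return results;
    }
}
```

Wrap a handful of methods like this behind a repository interface and you have most of what the ORM was giving you, minus the surprises.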
Below you can add your comments and share your experiences with ORMs or any thoughts you may have on this matter.