The usual advice for using EF Core in a web application is to use a scoped DbContext per HTTP request. This is even the suggested default for web applications in the Microsoft documentation. I recently encountered the argument that this is a bad idea and that you should not use the DbContext as a scoped service, because it will affect behaviour across your service boundaries. This seems to contradict the advice I usually read, but I do find the argument quite convincing in some respects.

So the comparison here would be between injecting the DbContext as a scoped service and injecting a DbContextFactory and creating a new DbContext inside my classes every time I use the database.
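To make the comparison concrete, this is roughly what the two styles look like, assuming EF Core 5+ and SQL Server; AppDbContext, OrderService, the Orders set and connectionString are placeholder names, not anything prescribed by EF:

    // Option A: one scoped DbContext, shared by everything in the HTTP request.
    builder.Services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(connectionString));

    // Option B: register a factory and create short-lived contexts on demand.
    builder.Services.AddDbContextFactory<AppDbContext>(options =>
        options.UseSqlServer(connectionString));

    // A consumer of option B:
    public class OrderService
    {
        private readonly IDbContextFactory<AppDbContext> _factory;

        public OrderService(IDbContextFactory<AppDbContext> factory) => _factory = factory;

        public async Task<Order?> GetOrderAsync(int id)
        {
            // Each call gets a fresh context, with no tracked state left over
            // from anything else that ran earlier in the request.
            await using var db = _factory.CreateDbContext();
            return await db.Orders.FindAsync(id);
        }
    }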

The main argument against a scoped DbContext is that the context has state: what happens earlier in other classes can affect code that runs later in a different class if both use the same context. For example, you might get already-tracked (cached) entities back when you query the database, which can be unexpected and lead to different results.

Essentially the behaviour of your code can vary depending on what happened previously in the scope (the current HTTP request).
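As a hypothetical illustration (Customer with Id and Name is an invented entity; db is the scoped AppDbContext that both classes receive from DI):

    // "Service A" loads a customer and edits it in memory, but never saves.
    var customer = await db.Customers.SingleAsync(c => c.Id == 42);
    customer.Name = "edited but not saved";

    // "Service B", later in the same request, queries the same customer again.
    var again = await db.Customers.SingleAsync(c => c.Id == 42);

    // The query still hits the database, but identity resolution hands back the
    // already-tracked instance, so Service B observes Service A's unsaved change.
    Console.WriteLine(ReferenceEquals(customer, again)); // True
    Console.WriteLine(again.Name);                       // "edited but not saved"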

Having a context per request seemed natural at first, but I find the argument compelling that it kinda breaks the boundaries between my services if they share the context via DI. The more abstract question behind this is whether the unit of work is actually equivalent to the request or not.

Would always using context factories via DI make sense for an ASP.NET Core application and lead to a stronger decoupling of services? Sharing the context would still be possible, but explicit, as a method parameter; you would not add the DbContext itself via DI at all. Or am I overthinking this, and is the context per request still the best idea in most cases?
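What I have in mind for the factory style is roughly this; CheckoutService, InventoryService, Orders and StockItems are made-up names, just to show the shape of explicit sharing:

    public class CheckoutService
    {
        private readonly IDbContextFactory<AppDbContext> _factory;
        private readonly InventoryService _inventory;

        public CheckoutService(IDbContextFactory<AppDbContext> factory, InventoryService inventory)
        {
            _factory = factory;
            _inventory = inventory;
        }

        public async Task PlaceOrderAsync(Order order)
        {
            // The unit of work is owned here, not implicitly by the request.
            await using var db = _factory.CreateDbContext();

            db.Orders.Add(order);

            // Sharing is still possible, but explicit: the callee receives the
            // context as a parameter instead of resolving it from DI.
            await _inventory.ReserveStockAsync(db, order);

            await db.SaveChangesAsync();
        }
    }

    public class InventoryService
    {
        public async Task ReserveStockAsync(AppDbContext db, Order order)
        {
            var item = await db.StockItems.SingleAsync(s => s.ProductId == order.ProductId);
            item.Reserved += order.Quantity;
            // SaveChanges is left to whoever owns the context / unit of work.
        }
    }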

  • Querying, modifying and saving data using a single DbContext instance is guaranteed to be transactional; your example seems to be working around that without specifying why.
    – devnull
    Commented Mar 22 at 11:01
  • @devnull I removed it for now; I'll have to check the details again. I mean cases where you have a DbContext and then manually start a transaction, not the automatic behaviour of the DbContext. If you query an entity before the manual transaction is started, it is cached when you query it again within that transaction, as far as I understand. Commented Mar 22 at 11:14
  • have you read Using transactions?
    – devnull
    Commented Mar 22 at 11:20
  • @devnull yes, and the main thing I'm unsure about now is whether savepoints change this. But in the absence of savepoints EF Core should create sequential transactions in this case, and that should lead to the problem I described. Commented Mar 22 at 11:22
  • see Tracking queries. Not caching, per se, but it's a documented behavior that you can tailor your case around (a short sketch follows these comments).
    – devnull
    Commented Mar 22 at 11:34
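To make the tracking-query behaviour mentioned in the last comment concrete, a minimal sketch (using the same invented Customers set as above): a no-tracking query skips identity resolution and always materializes fresh instances from the database.

    // AsNoTracking returns new objects built from the database values, so any
    // unsaved in-memory edits made elsewhere in the request are not visible here.
    var fresh = await db.Customers
        .AsNoTracking()
        .SingleAsync(c => c.Id == 42);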

2 Answers


The more abstract question behind this is whether the unit of work is actually equivalent to the request or not.

This is really the crux of the issue.

From the perspective of a (micro-)service architecture, I would expect each external request to be a transactional/UoW boundary. Treating internal "services" of the external service as boundaries seems strange to me. If that were true, why aren't those service calls treated as external endpoints?

There is usually a lot of "middleware" (error handling, logging, etc.) around each service request boundary. If you treat the ASP.NET Core request as the boundary, this already works out of the box. If you treat internal services as boundaries, you would need to implement this "middleware" yourself.

Or am I overthinking this and the context per request is still the best idea in most cases?

It is not a bad idea to think about things. But I don't see the arguments as strong enough to deviate from the defaults. There might be some edge cases where handling the DbContext lifetime per service is a good idea, but I think those aren't common enough to bother with unless absolutely necessary.


The DbContext is not thread-safe and doesn't support parallelism. If the "services" (it's not very clear what they are) process the request data in parallel or on multiple threads, then yes, it would make sense to use separate instances. Otherwise, it's best to stick with the default, since it works on the assumption that the request queries or modifies an aggregate (or a list of them) in a single transaction.
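For example, a minimal sketch of the parallel case, assuming an injected IDbContextFactory<AppDbContext> and made-up ReportDto, Orders and Customers names:

    public record ReportDto(int OrderCount, int CustomerCount);

    public async Task<ReportDto> BuildReportAsync(IDbContextFactory<AppDbContext> factory)
    {
        // Running both counts on one shared scoped context would throw an
        // InvalidOperationException about a second operation being started
        // before the first one completed; separate contexts avoid that.
        var orders = CountAsync(factory, db => db.Orders.CountAsync());
        var customers = CountAsync(factory, db => db.Customers.CountAsync());

        await Task.WhenAll(orders, customers);

        return new ReportDto(orders.Result, customers.Result);
    }

    private static async Task<int> CountAsync(
        IDbContextFactory<AppDbContext> factory,
        Func<AppDbContext, Task<int>> query)
    {
        // Each parallel branch gets its own short-lived context instance.
        await using var db = factory.CreateDbContext();
        return await query(db);
    }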

  • you hit the nail on the head: the Scoped lifetime won't work if you have multithreading, for the same reason a Singleton lifetime doesn't work when you have more than one request at a time.
    – Ewan
    Commented Mar 22 at 12:02
  • @Ewan arguably, one could implement a thread-bound scope, but that's a circle of hell I wouldn't want to find myself in :)
    – devnull
    Commented Mar 22 at 12:06
  • circles of hell sound like a normal day for EF :)
    – Ewan
    Commented Mar 22 at 12:12
  • @Ewan: or any ORM... or any object oriented code using a relational database. Commented Mar 22 at 13:19
  • @GregBurghardt no... just EF, EF core in particular
    – Ewan
    Commented Mar 22 at 13:21
