
I have an ASP.NET web application dashboard that is used to send notifications to several .NET desktop clients. The current implementation is that the web app writes the new notification to a database, and the clients poll this database every 30 seconds looking for notifications flagged as 'unread'.
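Roughly, each client runs a loop like the sketch below. This is an illustrative approximation rather than the actual code; the `Notification` type, the database/container names, the user ID, and the `IsRead` flag are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Placeholder document shape; the real schema differs.
public class Notification
{
    public string id { get; set; }
    public string UserId { get; set; }
    public string Message { get; set; }
    public bool IsRead { get; set; }
}

public static class PollingClient
{
    public static async Task Main()
    {
        using CosmosClient client = new CosmosClient("<connection-string>");
        Container container = client.GetContainer("AppDb", "Notifications");

        while (true)
        {
            // Look for notifications still flagged as unread for this user.
            QueryDefinition query = new QueryDefinition(
                    "SELECT * FROM c WHERE c.UserId = @user AND c.IsRead = false")
                .WithParameter("@user", "user-123");

            using FeedIterator<Notification> feed =
                container.GetItemQueryIterator<Notification>(query);

            while (feed.HasMoreResults)
            {
                foreach (Notification n in await feed.ReadNextAsync())
                    Console.WriteLine($"New notification: {n.Message}");
            }

            await Task.Delay(TimeSpan.FromSeconds(30)); // 30-second poll interval
        }
    }
}
```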

This isn't the most efficient approach: completing a notification send operation to all clients takes 30-45 minutes, mainly because of the slow DB write operations, which create a new entry for each user the notification needs to be sent to.

What alternative strategy would be most efficient for notification send operations that occur once every few days?

You can refer to the API sequence diagram of the DB write operation from the web app dashboard side below.

[API sequence diagram of the DB write operation]

  • "completing a notification send operation to all clients takes 30-45 minutes" How many users are we talking about here? Unless we're talking literal millions, your database performance seems WAY disproportionate here. Changing your design to work around such a clear and present problem is not a good approach and is likely to lead to many obstructions and hindrances down the line. – Flater, Apr 19, 2021 at 9:27
  • If the clients poll the DB, these are not really push notifications. – Davide Visentin, Apr 19, 2021 at 10:17
  • @Flater The total users amount to around 30k. As far as the time is concerned, that is what was reported. The DB write operations become the bottleneck, with the DB hosted on Azure Cosmos DB limited to 400 RU/s. – Pavan Rajkumar, Apr 19, 2021 at 10:48
  • @DavideVisentin You are right; it's not a push mechanism. I suppose the title is misleading in that I want to change the existing mechanism to a "push" mechanism. I'm looking to know what standard design pattern is used for this scenario. – Pavan Rajkumar, Apr 19, 2021 at 10:48
  • @PavanRajkumar: 400 RU/s is the free tier. Trying to run what is clearly an enterprise-grade notification platform on a free tier is the equivalent of trying to run your application on a Raspberry Pi. It's just not going to happen with any reasonable degree of performance. – Flater, Apr 19, 2021 at 11:50

1 Answer


Based on the points established in comments, it appears you're trying to fix the wrong thing about your problem.

The total users amount to around 30k. As far as the time is concerned, that is what was reported. The DB write operations become the bottleneck, with the DB hosted on Azure Cosmos DB limited to 400 RU/s.

400 RU/s is the free tier, and is woefully underpowered for what is clearly an enterprise-grade communications platform.

Taking a very conservative estimate of 4 RU per insert, 400 RU/s gives you 100 inserts/s, and thus at the very least 5 minutes (30,000 / 100 = 300 s) to complete the 30k-user insert batch. And that's discounting the network overhead and general processing delay, which will massively dominate the per-insert performance if the inserts are issued one at a time, as your diagram suggests.

Overall, I'm surprised you even get it done within the hour, to be honest.
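As an aside: if the dashboard awaits each insert sequentially, the SDK's bulk mode can at least amortize that per-request overhead. A minimal sketch, assuming the v3 Microsoft.Azure.Cosmos SDK and a hypothetical Notification document partitioned by user ID:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical document shape; your real schema differs.
public record Notification(string id, string UserId, string Message, bool IsRead);

public static class BulkSend
{
    public static async Task SendAsync(IReadOnlyList<Notification> notifications)
    {
        // With AllowBulkExecution enabled, the SDK groups concurrent point
        // writes into fewer service requests, trimming per-insert overhead.
        var options = new CosmosClientOptions { AllowBulkExecution = true };
        using var client = new CosmosClient("<connection-string>", options);
        Container container = client.GetContainer("AppDb", "Notifications");

        var tasks = new List<Task>(notifications.Count);
        foreach (Notification n in notifications) // one document per target user
            tasks.Add(container.CreateItemAsync(n, new PartitionKey(n.UserId)));

        await Task.WhenAll(tasks);
    }
}
```

Note that bulk mode only trims the network and processing overhead; every write still pays its RU cost, so the 400 RU/s cap remains the hard ceiling.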

You can calculate the RU cost per insert statement if you want to fine-tune my back-of-the-napkin approximation.
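With the .NET SDK this is easy to measure: every response reports the request charge of the operation it performed. A minimal sketch, reusing the hypothetical `container` and `Notification` placeholders from above:

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Every SDK response exposes the RU charge of the operation it performed.
ItemResponse<Notification> response = await container.CreateItemAsync(
    notification, new PartitionKey(notification.UserId));

Console.WriteLine($"This insert cost {response.RequestCharge} RU");
```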

The Cosmos DB pricing page also gives you an idea of how many RUs are customary. The lowest paid tier is listed as 5,000 RU/s, which would already cut your 45-minute time down to 3.6 minutes (assuming performance scales linearly with RU, i.e. a factor of 5000/400 = 12.5, and ignoring network differences etc.).

The performance you want is just not going to happen with the free and severely restricted database tier you've opted to use. Either upgrade your database or downgrade your expectations.
