Kuba's space

Creating tasks in batches

Recently I've learned that you have no control over a task once it has started. Awaiting it is just a way to synchronize and get the result. That naturally raises a question: how do you split asynchronous calls into batches?

A case from the trenches

Let's say we have a collection of newly created entities that we want to send to an external service. Unfortunately, its API isn't convenient: it requires one save call per element.

Sadly, we don't know in advance how large the collection is, so it's hard to predict how many calls we will have to make. I had no idea how to solve this issue.

Bright colleagues to the rescue! They pointed me to a neat piece of code in one of our repositories and explained it to me until I finally got it. That part was tricky; it took me a few tries. The solution goes like this:

async Task ExecuteWithMaxParallelism<T>(
    IEnumerable<T> items,
    Func<T, Task<T>> externalCall
) {
    const int maxParallelism = 10;

    // A thread-safe queue: each item can be dequeued by exactly one worker.
    var queue = new ConcurrentQueue<T>(items);

    // Spawn a fixed number of workers. Each one keeps pulling items
    // and calling the external service until the queue runs dry.
    var tasks = Enumerable.Range(0, maxParallelism)
        .Select(
            _ => Task.Run(
                async () => {
                    while (queue.TryDequeue(out var item)) {
                        await externalCall(item);
                    }
                }
            )
        )
        .ToList(); // materialize so all workers start immediately

    await Task.WhenAll(tasks);
}
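To see it in action, here is a minimal, self-contained sketch. The entity list and the save callback are made up for illustration, and the external service is simulated with Task.Delay:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical entities; a ConcurrentBag records what was "saved".
var entities = Enumerable.Range(1, 100).ToList();
var saved = new ConcurrentBag<int>();

await ExecuteWithMaxParallelism(
    entities,
    async entity => {
        await Task.Delay(5);   // stand-in for the real save call
        saved.Add(entity);
        return entity;
    }
);

Console.WriteLine(saved.Count);  // prints 100: every entity was sent exactly once

// The method from the post, repeated here so the sketch compiles on its own.
async Task ExecuteWithMaxParallelism<T>(
    IEnumerable<T> items,
    Func<T, Task<T>> externalCall
) {
    const int maxParallelism = 10;
    var queue = new ConcurrentQueue<T>(items);

    var tasks = Enumerable.Range(0, maxParallelism)
        .Select(_ => Task.Run(async () => {
            while (queue.TryDequeue(out var item)) {
                await externalCall(item);
            }
        }))
        .ToList();

    await Task.WhenAll(tasks);
}
```

Note that only ten Task.Delay calls are ever in flight at once, no matter how many entities the collection holds.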

The idea behind this piece of code is straightforward. It creates a fixed number of tasks and lets each of them make external calls at its own pace.

Let's go through it step by step.

First, we create a ConcurrentQueue with all the items we have to save. Using the concurrent version of the Queue guarantees that each item is dequeued, and therefore sent, exactly once.
After that, we spawn a collection of ten tasks.
Each of those tasks then keeps picking an item from the queue and making a request until the queue is empty.
Finally, we await Task.WhenAll to synchronize on the result.

Using this method, we were able to limit the number of concurrent requests, which, in turn, improved the performance of our service.
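For comparison, the same cap can be expressed with SemaphoreSlim, a common alternative when you want to start one task per item up front instead of a fixed pool of workers. This is my own sketch, not the code from our repository; the entity list and the fake save call are again made up:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical entities; a ConcurrentBag records what was "saved".
var saved = new ConcurrentBag<int>();

await ExecuteWithSemaphore(
    Enumerable.Range(1, 25),
    async entity => {
        await Task.Delay(5);   // stand-in for the external call
        saved.Add(entity);
        return entity;
    }
);

Console.WriteLine(saved.Count);  // prints 25

// One task per item, but the semaphore allows at most 10 in-flight calls.
async Task ExecuteWithSemaphore<T>(
    IEnumerable<T> items,
    Func<T, Task<T>> externalCall
) {
    const int maxParallelism = 10;
    using var gate = new SemaphoreSlim(maxParallelism);

    var tasks = items.Select(async item => {
        await gate.WaitAsync();           // wait for a free slot
        try {
            await externalCall(item);
        } finally {
            gate.Release();               // free the slot for the next caller
        }
    }).ToList();

    await Task.WhenAll(tasks);
}
```

The trade-off: this version allocates a task per item, while the queue-based version allocates only ten tasks regardless of the collection's size.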
