Distributed caching in ASP.NET Core

Distributed caching is an interesting tool for IT companies. For this reason, and reaffirming our commitment to supporting innovation and knowledge in new technologies, in this article we are going to talk about distributed caching in .Net Core, specifically with Azure services.

The performance and scalability of an application can be significantly improved by using a cache. Data that changes little over time is ideal for caching, since copies of it can be retrieved much faster than from the original source.

.Net Core supports several caching mechanisms, both in memory and distributed. Applications hosted in the cloud are ideal candidates for a distributed cache, where data is stored in a shared repository and is available to all of the application's servers. The purpose of this article is to serve as a guide and to compare the distributed cache implementations that we use at Trans Solutions Systems and that .Net Core provides natively, namely Redis Cache and SQL Server, deployed as services in Azure.

Services in Azure

To benchmark both implementations, we create the following services in Azure with basic configurations:

  • Redis Cache

A single node without replication.

  • SQL Database

Standard tier, without replication.

.Net Core benchmark project

We proceed to create a new .Net Core 1.1 console project and add the following packages:

  • BenchmarkDotNet

Utility package for creating and running benchmarks.

  • Microsoft.Extensions.Caching.Abstractions

Cache abstractions package for .Net Core.

  • Microsoft.Extensions.Caching.Redis

Implementation of distributed cache with Redis.

  • Microsoft.Extensions.Caching.SqlServer

Implementation of distributed cache with SQL Server.

  • Newtonsoft.Json

Library for JSON format conversions.

The .Net Core distributed cache only works with string or byte[] values. Therefore, to extend its use to any object, extension methods are created for the IDistributedCache interface that convert objects to and from JSON.
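
A minimal sketch of what these extensions might look like, using Newtonsoft.Json; the method names SetObject and GetObject are our own choice for this example:

```csharp
using Microsoft.Extensions.Caching.Distributed;
using Newtonsoft.Json;

// Extension methods that serialize objects to JSON so that any type
// can be stored through the string-based IDistributedCache API.
public static class DistributedCacheJsonExtensions
{
    public static void SetObject<T>(this IDistributedCache cache, string key, T value)
    {
        // Serialize the object and store it as a string entry.
        var json = JsonConvert.SerializeObject(value);
        cache.SetString(key, json);
    }

    public static T GetObject<T>(this IDistributedCache cache, string key)
    {
        // Read the string entry and deserialize it back to the requested type.
        var json = cache.GetString(key);
        return json == null ? default(T) : JsonConvert.DeserializeObject<T>(json);
    }
}
```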

Next, we proceed to create a POCO class used to send data to and read data back from the distributed cache.
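
For example, a simple CacheItem class such as the following (the property names are illustrative assumptions):

```csharp
using System;

// A simple POCO used as the payload stored in and read from the cache.
public class CacheItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedAt { get; set; }
}
```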

Using attributes from the BenchmarkDotNet package, a class is created that will run the benchmarks for Redis and SqlServer.
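
A possible skeleton for this class is shown below; the connection strings, schema and table names are placeholders for the Azure services, and the benchmark methods themselves are added further on:

```csharp
using BenchmarkDotNet.Attributes;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Redis;
using Microsoft.Extensions.Caching.SqlServer;
using Microsoft.Extensions.Options;

// Holds one IDistributedCache instance per provider.
public partial class DistributedCacheBenchmarks
{
    private IDistributedCache _redisCache;
    private IDistributedCache _sqlServerCache;

    [GlobalSetup] // [Setup] in older BenchmarkDotNet versions
    public void Setup()
    {
        // Redis provider pointing at the Azure Redis Cache instance.
        _redisCache = new RedisCache(Options.Create(new RedisCacheOptions
        {
            Configuration = "<azure-redis-connection-string>",
            InstanceName = "benchmark"
        }));

        // SQL Server provider pointing at the Azure SQL Database.
        _sqlServerCache = new SqlServerCache(Options.Create(new SqlServerCacheOptions
        {
            ConnectionString = "<azure-sql-connection-string>",
            SchemaName = "dbo",
            TableName = "TestCache"
        }));
    }
}
```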

Custom Cache Methods

Custom methods are created that will be consumed by each implementation (see the sketch after this list):

  • CreateData – Generates an object with random data of the CacheItem class.
  • SetAndGetSingleItem – Evaluates the writing and reading of a single element.
  • SetAndGetNItems – Evaluates the writing and reading of a collection of N elements.
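
One possible shape for these helpers is a static class that receives the IDistributedCache instance as a parameter, so both providers can reuse the same logic; the exact structure in a real project may differ:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Caching.Distributed;

public static class CacheBenchmarkHelpers
{
    private static readonly Random Random = new Random();

    // Generates a CacheItem filled with random data.
    public static CacheItem CreateData()
    {
        return new CacheItem
        {
            Id = Random.Next(),
            Name = Guid.NewGuid().ToString(),
            CreatedAt = DateTime.UtcNow
        };
    }

    // Writes a single item to the cache and reads it back.
    public static CacheItem SetAndGetSingleItem(IDistributedCache cache)
    {
        var item = CreateData();
        cache.SetObject("single-item", item);
        return cache.GetObject<CacheItem>("single-item");
    }

    // Writes a collection of N items to the cache and reads it back.
    public static List<CacheItem> SetAndGetNItems(IDistributedCache cache, int count)
    {
        var items = new List<CacheItem>();
        for (var i = 0; i < count; i++)
        {
            items.Add(CreateData());
        }

        cache.SetObject("n-items", items);
        return cache.GetObject<List<CacheItem>>("n-items");
    }
}
```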

Then, the specific benchmark methods for Redis and SqlServer are created (sketched after this list):

  • Write and read an element.
  • Write and read a collection of 10 elements.
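
Continuing the benchmark class sketched earlier, the provider-specific methods could look like this (the method names are our own for this example):

```csharp
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;

// Benchmark methods added to the DistributedCacheBenchmarks class sketched above.
// Each one exercises a single provider through the shared helpers.
public partial class DistributedCacheBenchmarks
{
    [Benchmark]
    public CacheItem RedisSetAndGetSingleItem() =>
        CacheBenchmarkHelpers.SetAndGetSingleItem(_redisCache);

    [Benchmark]
    public List<CacheItem> RedisSetAndGet10Items() =>
        CacheBenchmarkHelpers.SetAndGetNItems(_redisCache, 10);

    [Benchmark]
    public CacheItem SqlServerSetAndGetSingleItem() =>
        CacheBenchmarkHelpers.SetAndGetSingleItem(_sqlServerCache);

    [Benchmark]
    public List<CacheItem> SqlServerSetAndGet10Items() =>
        CacheBenchmarkHelpers.SetAndGetNItems(_sqlServerCache, 10);
}
```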

Finally, we integrate the code into the main program.
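
The main program simply hands the benchmark class to the BenchmarkDotNet runner, for example:

```csharp
using BenchmarkDotNet.Running;

// Entry point: runs all benchmarks defined in DistributedCacheBenchmarks.
public class Program
{
    public static void Main(string[] args)
    {
        BenchmarkRunner.Run<DistributedCacheBenchmarks>();
    }
}
```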

It is necessary to create the SQL Server database and the cache table; the table can be generated with the dotnet sql-cache create command (for example, dotnet sql-cache create "<connection string>" dbo TestCache, matching the schema and table names assumed in the sketches above).

When the program is executed, each benchmark is evaluated 16 times independently.

Conclusion

This exercise has allowed us to become familiar with the distributed cache options that .Net Core provides, using services deployed in Azure, and with tools for running benchmark-based comparisons of their native implementations.
Based on these results, Redis appears to be the best alternative for a distributed cache with .Net Core. However, the exercise should be repeated, adding other caching platforms that integrate with the framework.
