How to use ElastiCache, Memcached, or Runtime.Caching in C#

All software projects struggle to cope with demand as the size and density of the application increase. Database performance is often a key player, and it is critical to design the database with performance in mind.
One way to make the database run faster is not to use it at all. Or rather, to use it only when needed and cache the data that has not changed yet.

Runtime Cache
Regardless of the language you use, there are always many caching tools. Runtime.Caching is one of them.
If you are in the .NET world, this may be the quickest way to get started with caching. By default, the cache is stored locally in the memory of the web server.
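As a minimal sketch of the local approach, assuming a reference to the System.Runtime.Caching assembly (the key name and five-minute expiry below are illustrative choices, not part of the library's defaults):

```csharp
using System;
using System.Runtime.Caching;

public static class RuntimeCacheDemo
{
    private static readonly ObjectCache Cache = MemoryCache.Default;

    // Returns the cached value, or loads it (e.g. from the database)
    // and caches it for five minutes.
    public static string GetGreeting()
    {
        var cached = Cache["greeting"] as string;
        if (cached != null)
            return cached; // cache hit: no database round trip

        var value = "Hello from the database"; // imagine an expensive query here
        var policy = new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
        };
        Cache.Set("greeting", value, policy);
        return value;
    }
}
```

The first call pays for the load; subsequent calls within the next five minutes are served straight from the web server's memory.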

However, if you have multiple web servers and wish to keep the cache instances consistent across the servers, it gets complicated with Runtime.Caching. Memcached is one of the options to tackle this problem.

“Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.”

Here is a blog about how to install memcached on a 64-bit Windows machine. Here is the version of memcached I used.
Once installed and started, memcached can be accessed via telnet on port 11211, if you have installed with the default settings.

telnet localhost 11211

Here are more telnet commands to interact with memcached.
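For instance, storing and reading back a value in that telnet session looks like this (the first line is `set <key> <flags> <exptime> <bytes>`, followed by the data on its own line):

```
set greeting 0 0 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END
```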

The next part is to access memcached from a C# application. I used EnyimMemcached in this part. Their wiki page is handy.
Briefly, the NuGet package needs to be installed; the rest is straightforward and the code is self-explanatory.
One thing to keep in mind is that Enyim serializes objects before storing them in memcached, so make sure the classes that you cache have the [Serializable()] and [DataContract()] attributes.
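A minimal sketch of storing and reading an object with EnyimMemcached (the class, key, and value below are illustrative; it assumes a running memcached and a configured enyim.com/memcached section in app.config):

```csharp
using System;
using Enyim.Caching;
using Enyim.Caching.Memcached;

[Serializable]
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class EnyimDemo
{
    static void Main()
    {
        // Reads the enyim.com/memcached section from app.config by default.
        using (var client = new MemcachedClient())
        {
            var product = new Product { Id = 1, Name = "Widget" };

            // Store returns false (rather than throwing) when the item
            // cannot be serialized or stored.
            bool stored = client.Store(StoreMode.Set, "product:1", product);
            Console.WriteLine(stored);

            var fromCache = client.Get<Product>("product:1");
            Console.WriteLine(fromCache != null ? fromCache.Name : "(cache miss)");
        }
    }
}
```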

By default, EnyimMemcached does not throw an exception if it cannot store an item in the cache. This makes debugging a little difficult, especially if you are not sure which of your 100 classes is not serializable. Hence, the logging features are your friend during development. If I get enough time, I will explain the logging part as well.

It turned out that memcached becomes unresponsive after a large number of active cluster clients; a few hundred or a few thousand, depending on your infrastructure. Hence, in this project, I tried to preserve and reuse the cluster client.
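One way to reuse the cluster client (an illustrative sketch, not the only pattern) is a single shared instance, so the whole application shares one connection pool instead of opening new connections per request:

```csharp
using Enyim.Caching;

// Creating a MemcachedClient per request floods the cluster with
// connections; a single static instance is created once and reused
// for the lifetime of the application.
public static class CacheClient
{
    private static readonly MemcachedClient Instance = new MemcachedClient();

    public static MemcachedClient Client
    {
        get { return Instance; }
    }
}
```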

Amazon ElastiCache
When more caching space is needed, more memcached clusters can be added. It is not difficult to manage several memcached clusters on multiple servers, yet Amazon has made it even easier with Amazon ElastiCache. It is basically memcached in the cloud. Amazon’s getting started page is very useful if you are not familiar with ElastiCache.

It does not take much to switch a memcached application to ElastiCache; changing the endpoint is enough. Beware that ElastiCache can be accessed only from within Amazon Web Services! In my case, the application is hosted on an AWS EC2 instance.
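With EnyimMemcached, that switch is a configuration change. Assuming a server entry in app.config, it might look like this (the endpoint address below is a made-up placeholder; use your own cluster's configuration endpoint):

```xml
<enyim.com>
  <memcached>
    <servers>
      <!-- hypothetical ElastiCache endpoint; replace with your own -->
      <add address="mycluster.abc123.cfg.use1.cache.amazonaws.com" port="11211" />
    </servers>
  </memcached>
</enyim.com>
```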

Diagnostic and Logging
I used NLog to log the EnyimMemcached messages. NLog is ready to go after installing the NuGet package and adding some configuration.

Later, I found it difficult to track all the log messages in a text file. Logentries is a good fit for this situation; its web interface makes it easy to find the exact log message when you need it.
I used Logentries as a new NLog target. Logentries has an easy-to-follow document about how to do this. Again, it is a matter of installing the NuGet package and adding some configuration.
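An NLog.config along these lines routes messages both to a file and to Logentries. This is only an illustrative shape, assuming the LogentriesNLog package is installed; the exact target attributes (and where the Logentries token goes) come from the Logentries documentation:

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd">
  <extensions>
    <!-- shipped by the Logentries NLog NuGet package -->
    <add assembly="LogentriesNLog" />
  </extensions>
  <targets>
    <target name="file" type="File" fileName="enyim.log" />
    <target name="logentries" type="Logentries" debug="true" />
  </targets>
  <rules>
    <!-- route Enyim's internal messages to both targets -->
    <logger name="Enyim.*" minlevel="Debug" writeTo="file,logentries" />
  </rules>
</nlog>
```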

In conclusion, this C# project stores cache items either in local memory with Runtime.Caching, in a memcached server, or at an ElastiCache endpoint. The diagnostic data can be logged to a text file or to Logentries.
I added some MSTest unit tests and also a few methods to push the cache servers a little and check how they perform under pressure. Guess who the winner is!
If you still feel like you want to see the code, download or clone it on GitHub.

MSSQL vs Aurora vs DynamoDB vs ElastiCache

Just a few years ago, the number of database technologies could almost be counted on the fingers of one hand. Nowadays there seem to be that many options for each individual use case.
As ever, the abundance of options adds to the complexity of making a choice. The type of data to be stored, its volume, and the way it is used, accessed, and stored all need to be considered carefully before jumping on board with one (or several) database technologies.
In many cases, it is difficult to cover all data storage needs with a single technology, and you can end up with a data architecture consisting of several technologies.
There is something hypnotic about the intricacy of a harmonious mixed data architecture.
Mixed Data Architecture (Polyglot Persistence by Alex Garland, April 2015)

This blog is about a performance comparison of just four relational and NoSQL database technologies: Microsoft SQL Server, Aurora, DynamoDB, and memcached on ElastiCache, all of which are available on Amazon Web Services.
You can read more about database technologies at “Don’t get distracted by new database technology“.

Knowing how different database technologies behave under different workloads helps in choosing the right one. As we needed a speed comparison of the few technologies we were considering, I started by setting up small applications for the create, read, update, and delete (CRUD) operations with each database technology, and executed the applications on EC2 to provide the same connectivity for all databases. All the databases were hosted on AWS and accessed from a Windows EC2 instance.
The performance test was run for 100 and then 1000 records, and the results are displayed in the following graphs.
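The shape of each test application can be sketched as follows. The IRepository interface and the in-memory stand-in are illustrative inventions for this sketch; the real runs plugged in an implementation per database, and a Stopwatch timed each CRUD batch:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Illustrative abstraction over each database technology under test.
public interface IRepository
{
    void Create(int id, string value);
    string Read(int id);
    void Update(int id, string value);
    void Delete(int id);
}

// In-memory stand-in so the harness itself can run; the real tests
// used MSSQL, Aurora, DynamoDB, and ElastiCache implementations.
public class InMemoryRepository : IRepository
{
    private readonly Dictionary<int, string> _store = new Dictionary<int, string>();
    public void Create(int id, string value) { _store[id] = value; }
    public string Read(int id) { return _store[id]; }
    public void Update(int id, string value) { _store[id] = value; }
    public void Delete(int id) { _store.Remove(id); }
}

public static class Benchmark
{
    // Runs a full CRUD pass over the given number of records
    // and returns the elapsed time in milliseconds.
    public static long TimeCrud(IRepository repo, int records)
    {
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < records; i++) repo.Create(i, "value-" + i);
        for (int i = 0; i < records; i++) repo.Read(i);
        for (int i = 0; i < records; i++) repo.Update(i, "updated-" + i);
        for (int i = 0; i < records; i++) repo.Delete(i);
        watch.Stop();
        return watch.ElapsedMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine("100 records: " + TimeCrud(new InMemoryRepository(), 100) + " ms");
        Console.WriteLine("1000 records: " + TimeCrud(new InMemoryRepository(), 1000) + " ms");
    }
}
```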

Database Performances for 100 Records
Database Performances for 1000 Records

I was not surprised by how slowly MS SQL performed; however, I did not expect Aurora to do so well, even compared to the NoSQL DynamoDB. On the other hand, even though ElastiCache is not in the same league as the other three, having it in the graph still puts things in perspective.

These results have helped us distribute our database across three technologies and try to get the most out of each. I hope this will be useful to someone.