Backpack and Digitalme

Today we had the first meeting of the Backpack Workgroup. This is very welcome news for us at Digitalme, as we have been working hard to keep the Backpack alive for the last few years.

I haven’t been involved with the Backpack long enough to tell its story, but Doug’s blog covers it. Instead, in my own words, I will try to talk a little about what Digitalme has been doing with the Backpack and why.

We believed in what the Backpack represents, and when it didn’t get the attention it deserved, we took it on. So far we have:

  • Revamped the user interface and made it mobile friendly.
  • Migrated from Persona to an in-house authentication system.
  • Fixed the Connect API.
  • Fixed many other issues reported by the community.

Most importantly, we now have an active code review process in place and we are ready to accept contributions from the technical members of the community on GitHub.

Digitalme does not have a vision of owning the Backpack. In fact, the beauty of the Backpack is that it is not owned by a business and is built for the public. It can be successful only if it is backed and used by the community. Purely out of passion, Digitalme, with support from Mozilla, has looked after this cool product while it was not well maintained. We believe that it still has potential, but we now need a helping hand.

I am a technical person, so I am not good at putting future visions forward. Jason’s Credential Switch Guarantee is one way forward, and many of us have various ideas about where the Backpack should head. Today, the general consensus was that we don’t want to build a feature-heavy Backpack to the point that it competes with other badge issuer and displayer platforms. Instead, it can be the glue that connects these platforms together: highlighting what features are available in each issuer and displayer, and guiding the badge earner in the direction that makes their badge journey smoother. Developments such as multiple email support, compliance with Open Badges 2.0 and a badge registry could be some of the first steps in this direction.

The key point is that the future of the Backpack should be shaped and supported by the badge community in the way that everyone finds it useful.

How to use ElastiCache, Memcached or Runtime.Caching in C#

All software projects struggle to cope with demand as the size and load of the application increase. Database performance is often a key factor, and it is critical to design the database with performance in mind.
One way to make the database run faster is not to use it at all; or rather, to use it only when needed and cache the data that has not yet changed.

Runtime Cache
Regardless of the language you use, there are always many caching tools. Runtime.Caching is one of them.
If you are in the .NET world, this may be the quickest way to get started with caching. By default, the cache is stored locally in the memory of the web server.
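
For example, here is a minimal sketch of the cache-aside idea with MemoryCache; the Product class and the GetProductFromDatabase call are made-up placeholders for the real data access code.

using System;
using System.Runtime.Caching;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductCache
{
    // MemoryCache.Default lives in the memory of the web server itself.
    private static readonly ObjectCache Cache = MemoryCache.Default;

    public static Product GetProduct(int id)
    {
        string key = "product:" + id;

        var cached = Cache.Get(key) as Product;
        if (cached != null)
        {
            return cached; // cache hit, no database round trip
        }

        // Cache miss: read from the database and keep the result for 10 minutes.
        Product product = GetProductFromDatabase(id);
        Cache.Set(key, product, DateTimeOffset.UtcNow.AddMinutes(10));
        return product;
    }

    private static Product GetProductFromDatabase(int id)
    {
        // Placeholder for the real database call.
        return new Product { Id = id, Name = "Example" };
    }
}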

Memcached
However, if you have multiple web servers and wish to keep the cache consistent across them, things get complicated with Runtime.Caching. Memcached is one of the options for tackling this problem.

“Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.”

Here is a blog about how to install memcached on a 64-bit Windows machine, and here is the version of memcached I used.
Once installed and started, memcached can be accessed via telnet on port 11211, if you have installed it with the default settings.

telnet localhost 11211

Here are more telnet commands to interact with memcached.
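
For instance, storing and reading a value with a made-up key looks roughly like this; the upper-case lines are the server’s replies:

set greeting 0 900 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END

The set command takes a key, flags, an expiry in seconds and the length of the data in bytes; the stats command prints the server statistics.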

EnyimMemcached
The next part is to access memcached from a C# application. I used EnyimMemcached for this part. Their wiki page is handy.
Briefly, the NuGet package needs to be installed; the rest is straightforward and the code is self-explanatory.
One thing to keep in mind is that Enyim serializes the objects before storing them in memcached, so make sure the classes that you cache have the [Serializable()] and [DataContract()] attributes.

By default, EnyimMemcached does not throw an exception if it cannot store an item in the cache. This makes debugging a little difficult, especially if you are not sure which one of the 100 classes is not serializable. Hence, the logging features are your friend during development. If I get enough time, I will explain the logging part as well.

It turned out that memcached becomes unresponsive after a large number of active cluster clients; a few hundred or a few thousand, depending on your infrastructure. Hence, in this project, I tried to preserve and re-use the cluster client.
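
Putting the pieces together, a rough sketch could look like the following; the CachedReport class and the key names are purely illustrative, and the client is configured in code here rather than in app.config.

using System;
using Enyim.Caching;
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

[Serializable]
public class CachedReport
{
    public int Id { get; set; }
    public string Body { get; set; }
}

public static class MemcachedStore
{
    // Create the client once and re-use it; creating a new client per request is
    // what eventually leaves memcached with too many active connections.
    private static readonly MemcachedClient Client = CreateClient();

    private static MemcachedClient CreateClient()
    {
        var config = new MemcachedClientConfiguration();
        config.AddServer("localhost:11211");
        return new MemcachedClient(config);
    }

    public static void Save(CachedReport report)
    {
        // Store returns false instead of throwing when the item cannot be stored,
        // for example when the class is not serializable.
        bool stored = Client.Store(StoreMode.Set, "report:" + report.Id, report);
        if (!stored)
        {
            Console.WriteLine("report:" + report.Id + " could not be stored - check the logs.");
        }
    }

    public static CachedReport Load(int id)
    {
        return Client.Get<CachedReport>("report:" + id);
    }
}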

Amazon ElastiCache
When more caching space is needed, more memcached clusters can be added. It is not difficult to manage several memcached clusters on multiple servers, yet Amazon has made it even easier with Amazon ElastiCache. It is basically memcached in the cloud. Amazon’s getting started page is very useful if you are not familiar with ElastiCache.

It does not take much to switch a memcached application to ElastiCache; just changing the endpoint is enough. Beware that ElastiCache can be accessed only from within Amazon Web Services! In my case the application is hosted on an AWS EC2 instance.
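
As a rough illustration, the only code change from the local setup above is the server address; the endpoint below is invented, so replace it with your own cluster’s configuration endpoint.

var config = new MemcachedClientConfiguration();
// Hypothetical ElastiCache endpoint - replace with your own cluster's address.
config.AddServer("my-cache-cluster.abc123.cfg.use1.cache.amazonaws.com:11211");
var client = new MemcachedClient(config);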

Diagnostics and Logging
I used NLog to log the EnyimMemcached messages. NLog is ready to go after installing the NuGet package and some configuration.
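
As a minimal sketch, once the package is installed a logger can be obtained and used like this; which targets the messages end up in (a file, the console, and so on) is decided by the NLog configuration.

using NLog;

public class CacheDiagnostics
{
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public void StoreFailed(string key)
    {
        // Written to whichever targets and rules are set up in NLog.config.
        Logger.Warn("Could not store item {0} in the cache", key);
    }
}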

Logentries
Later, I found it difficult to track all the log messages in a text file. Logentries is a good fit for this situation; its web interface makes it easy to find the exact log message when you need it.
I used Logentries as a new NLog target. Logentries has an easy-to-follow document about how to do this. Again, it is a matter of installing the NuGet package and some configuration.

In conclusion, this C# project stores cache items either in local memory with Runtime.Caching, in a memcached server or at an ElastiCache endpoint. The diagnostic data can be logged to a text file or to Logentries.
I added some MSTest unit tests and also a few methods to push the cache servers a little and check how they perform under pressure. Guess who the winner is!
If you still feel like you want to see the code, download or clone it from GitHub.

MSSQL vs Aurora vs DynamoDB vs ElastiCache

Just a few years ago, the number of database technologies could almost be counted on the fingers of one hand. Nowadays there are almost that many options for each individual use case.
As ever, the abundance of options contributes to the complexity of making a choice. The type of data that will be stored, its volume, and the way it is used, accessed and stored all need to be well considered before jumping on board with one (or several) database technologies.
In many cases, it is difficult to use a single technology for all data storage needs, and you can end up with a data architecture consisting of several technologies.
There is something hypnotic about the intricacy of a harmonious mixed data architecture.
Image: Mixed Data Architecture (Polyglot Persistence by Alex Garland, April 2015)

This blog post is about the performance comparison of just four relational and NoSQL database technologies: Microsoft SQL Server, Aurora, DynamoDB and memcached on ElastiCache, all of which are available on Amazon Web Services.
You can read more about database technologies at “Don’t get distracted by new database technology“.

Knowing how different database technologies perform for different functions helps in choosing the right one. As we needed a speed comparison of the few technologies we were considering, I started by setting up small applications for the create, read, update and delete (CRUD) operations with each database technology and executed them on EC2, to provide the same connectivity for all databases. All the databases were hosted on AWS and accessed from a Windows EC2 instance.
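
Roughly speaking, the harness was little more than a stopwatch around each operation; here is a simplified sketch, where the InsertRecord call in the usage comment stands in for the technology-specific CRUD code.

using System;
using System.Diagnostics;

public static class CrudBenchmark
{
    // Runs one CRUD operation 'count' times and reports the elapsed time in milliseconds.
    public static long Measure(string name, int count, Action<int> operation)
    {
        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            operation(i);
        }
        stopwatch.Stop();

        Console.WriteLine("{0}: {1} records in {2} ms", name, count, stopwatch.ElapsedMilliseconds);
        return stopwatch.ElapsedMilliseconds;
    }
}

// Usage, with InsertRecord standing in for the technology-specific create code:
// CrudBenchmark.Measure("MSSQL insert", 1000, i => InsertRecord(i));
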
The performance test was run for 100 and 1000 records consecutively and the results are displayed in the following graphs.

Database Performances for 100 Records
Database Performances for 1000 Records

I was not surprised by how slow MS SQL performed; however, I did not expect Aurora to do so well, even compared to the NoSQL DynamoDB. On the other hand, even though ElastiCache is not in the same league as the other three, having it in the graph still puts things in perspective.

These results have helped us distribute our data across three technologies and get the most out of each. I hope this will be useful to someone.

Autosaving Text Editor with JavaScript

I was looking for a simple JavaScript-only auto-saving text editor that would work smoothly with jQuery and Ajax calls.
After spending some time searching, I decided to combine a few existing pieces of code and write the rest from scratch.
This is a starting point and can probably be improved in many ways. Please let me know your suggestions in the comments.

-Works with input and textarea items, by adding the class ‘InputContainer’
-A JavaScript save function can be defined for each item
-Automatically saves every 20 seconds of non-stop typing
-Automatically saves after 2 seconds of no typing
-Saves when the user clicks out of the text box
-Saves before closing the page
-Activates the autosave while typing and deactivates it afterwards to avoid overloading
-Every save operation resets the timer, to make sure the different autosave points don’t conflict or save too many times

View the demo
Download the source code or find it on GitHub.

Feel free to use/change the code as you need.
Thanks to Christian C. Salvadó for the typewatch code.

ARM Assembler Emulator with C

A while ago I wanted to learn ARM assembly, and after some searching for existing tools to execute and debug instructions, I finally decided that the best way to learn an assembly language would be to write an emulator for it.
C seemed a good choice: being a mid-level language, it provides high-level constructs while still exposing low-level details that are closer to assembly language.

If you need more information about ARM assembly, Pete Cockerell’s ARM Assembly Language Programming has almost all the information one may need.

It is not a complete emulator, but it has the following functionality to kick-start the journey of learning ARM assembly.

-Pipeline: size 4
-Reserves memory locations for interrupt instructions and user instructions
-Counts the cycles for each instruction and the total cycles
-Data processing instructions: sets the overflow and carry flags
-Branch instructions: uses the stack to store the link data if needed
-Data transfer instructions
-Interrupt instructions: executes the interrupt instructions in the input instruction set or accepts a hardware interrupt before fetching a new instruction
-Prints out the registers, user-accessible memory, stack memory and the status flags. The “debugEnabled” and “instructionLogEnabled” variables can be switched on for further information in the output.

Please download the source code or access it on GitHub below. Feel free to use or edit it as you wish.
The ‘ArmEmulator.c’ file in the ‘src’ folder includes all the required code. I used gcc to compile the C code; after installing it, you can just edit the folder path in ‘Compile_ArmEmulator.bat’ and use it to compile the code.

Download the source code or find it on GitHub.

Hello World!

I am very bad at writing.
Yet I have decided to set up a blog and scribble a few sentences about the pieces of work I finally figure out after an initial struggle and/or some online research. This is just an act of appreciation and payback to all the online resources that no developer could do without.
So, as a rookie developer would do when starting a new programming language: Hello world!
Huge thanks to Metin Yilmaz for the encouragement to start this and also the aesthetic touch to make the blog look nice. Edward Stott, you have an elegant way of arranging words, thank you very much for your help with writing.

Cheers