Saturday 29 October 2016

Version 2.0 in more detail

I've had several enquiries asking for more detail about version 2.0 of cqrs.net, and found myself sharing the same answers, so here they are in one place.

Currently cqrs.net is two things. Firstly, a core framework that provides a solid structure to build on in pretty much any way you want to. Secondly, some tooling that removes a lot of hand-written code IF you want a strong domain-driven design (DDD) structure over the top of a CQRS base. This is an important point. As with all software there are compromises, be it architecture, maintainability, technical debt, ease of support or deployment.

Given most of DDD is repetition in the form of creating classes that follow a certain set of rules and patterns to create separation and order, much of what a developer ends up typing (time with their fingers on the keyboard) isn't actually problem-solving code... it's creating the structure for that separation.

Currently our tooling tackles this issue by allowing you to define (using UML) your commands, events, aggregates, queries and some thin façade services that provide public access to these generally internal concepts. Once defined, you can use the code generation built into our tooling to have the scaffold of all your classes created for you. Literally all you need to code is the actual business logic itself... usually some form of if or switch statement that asserts some condition and then publishes an event. Really, that is what developers are left doing when you take away the need to code structure - obviously there are things like calling third-party services, sending emails or communicating with firmware, but basic process and work-flow control should be fast to write.
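
To give a sense of how little is left to write by hand, here's a minimal sketch of that hand-written part. The class names, the Handle method and the Publish helper are all hypothetical stand-ins, not the actual scaffold CQRS.NET generates - the point is simply that the business logic boils down to a guard clause and an event.

```csharp
using System;

// Hypothetical command and event shapes - in practice these would be generated from your UML model.
public class PlaceOrderCommand
{
    public Guid OrderId { get; set; }
    public decimal Total { get; set; }
}

public class OrderPlacedEvent
{
    public Guid OrderId { get; set; }
    public decimal Total { get; set; }
}

public class OrderRejectedEvent
{
    public Guid OrderId { get; set; }
    public string Reason { get; set; }
}

public partial class OrderAggregate
{
    // The only hand-written piece: assert a condition, then publish the appropriate event.
    public void Handle(PlaceOrderCommand command)
    {
        if (command.Total <= 0)
        {
            Publish(new OrderRejectedEvent { OrderId = command.OrderId, Reason = "Total must be positive." });
            return;
        }

        Publish(new OrderPlacedEvent { OrderId = command.OrderId, Total = command.Total });
    }

    // Stand-in for whatever event-raising mechanism the generated base class actually provides.
    private void Publish(object @event) { /* applied/raised by the framework */ }
}
```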

While writing our documentation, it really hit home how crude and basic our tooling was; just look at our first attempt at https://github.com/Chinchilla-Software-Com/CQRS/wiki/Tutorial-0:-Quick-Northwind-sample. It's all branch, node and leaf based in a tree... not really that visual.

What version 2.0 is focused on is taking our tooling to the next step by adding visual work-flows, similar to what is seen in the blog post at https://cqrs-net.blogspot.co.nz/2016/10/version-20.html. This means you can drop a command, an aggregate and several events onto a visual view and connect them together to define, visually, that we start with this command, it will be handled by this aggregate, and it will return one or more of the following events.

From there the code generation will create your scaffold, again leaving only the code that reads the incoming command, does whatever if/switch logic is needed and, depending on the path chosen, publishes the event(s) required. The advantage here is that less technical people can be involved in the process of defining what the work-flow should do. The diagram then becomes a promise from the development team that this is what you have all agreed will be done. Other possible work-flows are out of scope (probably to be done later) - thus scope creep is avoided and unintentional work-flow changes don't occur. If you do need to be agile and modify the work-flow, the consequences of doing so are very visually apparent and easily spotted. This will all be backwards compatible with our existing tooling, so if you started with the branch/node/leaf based tooling you won't be wasting time. You'll be able to use whichever part of the tooling is most suitable to you and your needs at the time.

With version 2.0 we also aim to have our akka.net modules supported - we're currently still testing and developing the module as the akka.net project moves forward into a production-ready state.

We already have some improvements around simpler implementations of event and data stores using SQL, and more Azure service-bus options (EventHubs and topics will be supported out of the box).

Version 3 is where we'll be redefining some internal workings of the tooling (a simple migration path is a requirement, so this might take some time) to prepare us for future development, which includes .net core. So that would be the earliest you'll see .net core become active on our road map. We're also very dependent on our third-party dependencies, like Azure and MongoDB.


Saturday 1 October 2016

Sensible micro-services

Micro Servicing Properly

One of the biggest mistakes you can make is getting the granularity of your micro-service wrong. Where most developers go wrong is thinking only about the code, and not about the operational resourcing required. By this I mean technical support, ensuring up-time, scaling easily, reconfiguring without recompiling etc. Think like you're the guy who has to wake up at 3:30 am on a Sunday morning because a database is being thrashed or something has gone offline. When you start thinking like that guy, you realise smaller can (but not always) be better.



Is it too big?


Take the following simple workflow where the main objective is some data analytics with the result stored. The steps might be:
  1. Locate the objects that need to be analysed.
  2. Load the data for each object (let's assume this is external data, even if by external we mean a database, a flat file or some other such source).
  3. Analyse the data for each object.
  4. Store the result of each object's analysis.
To many developers this might seem like a simple case of (sketched in code below):
  1. Execute a DB query to get the objects.
  2. For each item (maybe in parallel with PLINQ or async/await in C#):
    1. Load the object's data.
    2. Sum or average the values loaded in 2.1.
    3. Store the result in the database.
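
For illustration only, here's a hedged sketch of what that "one method per report" approach tends to look like. The IDatabase and IExternalDataSource abstractions, class names, table names and SQL are all made up for the example, not part of any real framework:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// A made-up report object, purely for illustration.
public class ReportItem
{
    public Guid Id { get; set; }
}

public class NaiveAnalyticsReport
{
    private readonly IDatabase database;          // hypothetical database abstraction
    private readonly IExternalDataSource source;  // hypothetical external system (accounting, CRM, ERP, ...)

    public NaiveAnalyticsReport(IDatabase database, IExternalDataSource source)
    {
        this.database = database;
        this.source = source;
    }

    // Roughly the "10 lines per report" approach: everything in one method.
    public async Task RunAsync()
    {
        // 1. Execute a DB query to get the objects.
        IEnumerable<ReportItem> items = await database.QueryAsync<ReportItem>("SELECT Id FROM ReportItems");

        // 2. For each item...
        foreach (ReportItem item in items)
        {
            // 2.1 Load the object's data from the external system.
            IReadOnlyList<decimal> values = await source.LoadValuesAsync(item.Id);

            // 2.2 Sum or average the values loaded in 2.1.
            decimal average = values.Count > 0 ? values.Average() : 0m;

            // 2.3 Store the result straight back into the database.
            await database.ExecuteAsync("INSERT INTO Results (Id, Average) VALUES (@id, @avg)", item.Id, average);
        }
    }
}

// Hypothetical interfaces so the sketch stands on its own.
public interface IDatabase
{
    Task<IEnumerable<T>> QueryAsync<T>(string sql);
    Task ExecuteAsync(string sql, params object[] args);
}

public interface IExternalDataSource
{
    Task<IReadOnlyList<decimal>> LoadValuesAsync(Guid id);
}
```
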
This seems perfectly sensible for a report that is as simple as loading data, averaging the values and saving the result, especially when the next report has a slightly different requirement and you have 15 of these reports to write. But think like that guy who's been woken up at 3:30 am with a report from the customer that the reports aren't working.

Imagine in this case it's a (relatively) simple issue: a report that's killing your database by locking tables and trying to write too much data. On its own that only slows down how long the data takes to be written - except you wrote all of these reports, in a total of 10 lines each, in one method. It's not hard. It's only 10 lines of code per report. But now all your requests are failing to save, and worse still... the data sourced in step 2 comes from a remote system, like an accounting, CRM or ERP system. If the system isn't brought back up now, all the data analysis will be missing for today.

How to think better


If this had been built at a smaller, more fine-grained level, say with each step as a different service, then you start getting into some really cool possibilities.

Take our poor support person looking at a thrashed database at 3:30 am. If each step was a separate micro-service, with something like a service/message bus between each service, then the resulting data from step 3 would have been saved into a queue. When the database had recovered, all those saved messages could be processed and there would have been no loss of data. The micro-service running step 3 could also have gone offline, as the data loaded from step 2 would also have been saved into a queue. You've now added fault tolerance to two steps in your reporting system. The poor support person could go back to bed knowing the reports would still be accurately saved at some point.
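
As a hedged sketch of that decoupling, here's what steps 3 and 4 might look like with a queue between them. The IMessageBus and IDatabase abstractions and the message contract are hypothetical - swap in whatever bus (Azure Service Bus, RabbitMQ, etc.) and data access you actually use:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical message contract between step 3 (analysis) and step 4 (storage).
public class AnalysisResultMessage
{
    public Guid ObjectId { get; set; }
    public decimal Average { get; set; }
}

// Hypothetical bus abstraction; in practice this sits over a real service/message bus.
public interface IMessageBus
{
    Task PublishAsync<T>(T message);
    void Subscribe<T>(Func<T, Task> handler);
}

// Same hypothetical database abstraction as the earlier sketch.
public interface IDatabase
{
    Task ExecuteAsync(string sql, params object[] args);
}

// Step 3: analyse, then hand the result to the bus rather than writing to the database directly.
public class AnalysisService
{
    private readonly IMessageBus bus;

    public AnalysisService(IMessageBus bus) { this.bus = bus; }

    public async Task AnalyseAsync(Guid objectId, decimal[] values)
    {
        decimal sum = 0m;
        foreach (decimal value in values) sum += value;
        decimal average = values.Length > 0 ? sum / values.Length : 0m;

        // If the database is down, this message simply waits in the queue.
        await bus.PublishAsync(new AnalysisResultMessage { ObjectId = objectId, Average = average });
    }
}

// Step 4: a separate micro-service that drains the queue and persists results.
public class StorageService
{
    public StorageService(IMessageBus bus, IDatabase database)
    {
        bus.Subscribe<AnalysisResultMessage>(async message =>
        {
            // If this throws (database thrashed or offline), the message is not lost;
            // the bus redelivers it once the database recovers.
            await database.ExecuteAsync(
                "INSERT INTO Results (Id, Average) VALUES (@id, @avg)",
                message.ObjectId, message.Average);
        });
    }
}
```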

How to think global

With micro-servicing at this smaller, more finely grained level, you can start using resourcing effectively. Steps 1 and 2 might not require much CPU resourcing, as they spend a lot of time waiting on databases and external data, but they need fast network access. Step 3, however, might require a lot of CPU resourcing as it processes and analyses the data as quickly as possible. Step 4 again only requires network access, but this time it's internal network access to the database.
With this thinking you can now start moving some of the micro-services around. Take the micro-services running step 2: they can be moved to other parts of the world, closer to the external service they are getting data from, making them faster and cheaper. The micro-services running step 3 could be moved somewhere with cheap CPU-cycles-per-minute costs. And step 4 might actually be on premise.

When you start thinking beyond just those 10 lines of code, the world opens up and expands greatly.

Version 2.0

New things to come.

We've been a little quiet as we realised, while writing our tutorials, that our UI wasn't as good as it could be. In fact it was quite cumbersome for those just getting started. Soooo, we decided to start writing a Visual Studio extension. This has meant a few small changes here and there, and that means some new things are coming.

UML and Profile Updates

The UML profiles are getting an update with new stereotype properties, but should be backwards compatible.

Build By Diagram

We've put a heap of effort into enabling developers to code less and model faster, so we've made a heap of improvements around building workflows and relationships using the built-in diagram editor tools in Visual Studio. This means all your classes are written for you via automation, using even more tools that come built into Visual Studio. Why code when you can manage how your classes and methods work with each other via drag and drop?



Solution and Project Templates

You can now find a CQRS solution template in the online Visual Studio gallery.

Roadmap

We're working to polish off the new UI and editors to make the framework far more usable and accessible, which means we'll re-write all the tutorials (to iron out any minor bugs) and then release version 2 of the UI tooling. We expect this to be finished by the end of the year.

Anyone wanting an early preview and play around, contact us and we'll invite you to our private beta programme.