
Friday, February 05, 2016

The biggest IT changes in the last 5 years: The re-emergence of data flow design

My first exposure to data flow as an IT design paradigm came around 1983/84, in the form of Stevens, Myers and Constantine's work on "Structured Design", which dates from 1974.


I remember at the time finding the idea really appealing, and yet the forces at work in industry and in academic research pulled mainstream IT design towards non-flow-centric paradigms. Examples include Stepwise Decomposition/Structured Programming (e.g. Dijkstra), Object Oriented Design (e.g. Booch) and Relational Data Modelling (e.g. Codd).



Over the years, I have seen pockets of mainstream IT design terminology emerge that have data flow-like ideas in them. Some recent relevant terms are Complex Event Processing and stream processing.

Many key dataflow ideas are built into Unix. Yet creating designs that leverage line-oriented data formats, piped through software components both local and remote, everything from good old 'cat' to GNU Parallel and everything in between, has never, to my knowledge, been given a design name reflective of just how incredibly powerful and commonplace it is.
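To make that concrete, here is a minimal sketch of the Unix-pipeline idea expressed as Python generators. Each stage consumes lines and yields lines, just as 'cat', 'grep' and friends do; the stage names and sample data are my own illustrative choices, not any established library:

    # A minimal dataflow sketch: each stage is a generator that consumes
    # lines from the previous stage and yields lines to the next, mirroring
    # a Unix pipeline such as: cat access.log | grep ERROR | cut -d' ' -f1
    def cat(lines):
        for line in lines:
            yield line.rstrip("\n")

    def grep(pattern, lines):
        for line in lines:
            if pattern in line:
                yield line

    def cut(field, lines):
        for line in lines:
            yield line.split()[field]

    # Wire the stages together; nothing runs until the sink pulls data,
    # so the whole pipeline streams lazily, like processes joined by pipes.
    sample = ["2016-02-05 ERROR disk full\n", "2016-02-05 INFO ok\n"]
    for timestamp in cut(0, grep("ERROR", cat(sample))):
        print(timestamp)

The interesting design property is the same one pipes give you: each stage knows nothing about its neighbours beyond the line-oriented data flowing between them.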

Things are changing, I believe, thanks to cloud computing and multi-core parallel computing in general. AWS Data Pipeline, Google Dataflow and Google TensorFlow are good examples. Also bubbling away under the radar are things like FBP (Flow-Based Programming), the buzz around Elixir, and related ideas such as shared-nothing architectures.
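For flavour, here is a minimal, illustrative sketch of the FBP/shared-nothing idea in Python: components run as separate processes with no shared state and communicate only by passing messages over queues. The component names are mine, not from any particular FBP toolkit:

    # A minimal flow-based-programming sketch: each component is a separate
    # process sharing nothing; data moves only through queues.
    from multiprocessing import Process, Queue

    SENTINEL = None  # marks end-of-stream

    def producer(out_q):
        for n in range(10):
            out_q.put(n)
        out_q.put(SENTINEL)

    def doubler(in_q, out_q):
        while (item := in_q.get()) is not SENTINEL:
            out_q.put(item * 2)
        out_q.put(SENTINEL)

    def consumer(in_q):
        while (item := in_q.get()) is not SENTINEL:
            print(item)

    if __name__ == "__main__":
        q1, q2 = Queue(), Queue()
        # Wire the network: producer -> doubler -> consumer
        stages = [Process(target=producer, args=(q1,)),
                  Process(target=doubler, args=(q1, q2)),
                  Process(target=consumer, args=(q2,))]
        for p in stages:
            p.start()
        for p in stages:
            p.join()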

A single phrase is likely to emerge soon, I think. Many "grey beards", from JSD (Jackson Structured Design), to IBM MQSeries (asynchronous messaging), to Ericsson's AXE-10 Erlang engineers, to Unix pipeline fans, will do some head-scratching of the "Hey, we were doing this 30 years ago!" variety.

So it goes.

Personally, I am very excited to see dataflow re-emerge into the mainstream. I naturally lean towards thinking in terms of dataflow anyway, and I can only benefit from all the cool new tools/techniques that come with the mainstreaming of any IT concept.

Thursday, February 04, 2016

The 'in', 'on' and 'with' questions of software development

I remember when the all-important question for a software dev person looking at a software component/application was "What is it written in?"

Soon after that, a second question became very important: "What does it run on?"

Nowadays, there is a third, really important question: "What is it built/deployed with?"

"In" - the programming language of the component/app itself
"On" - the run-time-integration points e.g. OS, RDB, Browser, Logging
"With" - the dev/ops tool chain eg. source code control, build, regression, deploy etc.

In general, we tend to underestimate the time, cost and complexity of all three :-) However, the "With" category is the toughest to manage because it is, by definition, scaffolding used as part of the creation process, not part of the final creation.
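As an illustrative sketch, the answers to the three questions for an imaginary small web application might look like this. Every technology named here is a hypothetical example, just to make the categories concrete:

    # Hypothetical answers to the three questions for an imaginary app;
    # the specific technologies are illustrative, not prescriptive.
    questions = {
        "in":   ["Python"],                           # what it is written in
        "on":   ["Linux", "PostgreSQL", "nginx"],     # what it runs on
        "with": ["git", "make", "pytest", "Ansible"], # what it is built/deployed with
    }
    for q, answers in questions.items():
        print(q, "->", ", ".join(answers))

Note how the "with" row never ships to users, yet every entry in it can break the project just as surely as the other two.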

Tuesday, February 02, 2016

Blockchain this, blockchain that...

It is fun watching all the digital chatter about blockchain at the moment. There is wild stuff at both ends of the spectrum, i.e. "It is rubbish. It will never fly. All hype. Nothing new here. Forget about it." at one end and "Sliced bread has finally met its match! Let's appoint a CBO (Chief Blockchain Officer)!" at the other.

Here is the really important bit, I think: the blockchain shines a light on an interesting part of the Noosphere, the place where trust in information is something that can be established without needing a central authority.

That's it. Everything about how consensus algorithms work, how long they take to run and how computationally expensive they are is secondary, and the S-curve will out (http://en.wikipedia.org/wiki/Innovation), i.e. that implementation stuff will get better and better.
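To illustrate the core trust idea, here is a minimal hash-chain sketch in Python. It is nothing like a production blockchain, and consensus (the hard, evolving part) is deliberately omitted; it shows only how each record commits to everything before it, so tampering with history breaks every later link:

    # A minimal hash-chain sketch: each block commits to the previous
    # block's hash, so altering any record invalidates all later hashes.
    import hashlib

    def block_hash(prev_hash, data):
        return hashlib.sha256((prev_hash + data).encode()).hexdigest()

    def build_chain(records):
        chain, prev = [], "0" * 64  # genesis: all-zero previous hash
        for data in records:
            h = block_hash(prev, data)
            chain.append((data, h))
            prev = h
        return chain

    def verify(chain):
        prev = "0" * 64
        for data, h in chain:
            if block_hash(prev, data) != h:
                return False
            prev = h
        return True

    chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
    print(verify(chain))                            # True
    chain[0] = ("alice pays bob 50", chain[0][1])   # tamper with history
    print(verify(chain))                            # False: the chain no longer checks out

Anyone holding a copy of the chain can run the verification for themselves; no central authority has to vouch for the records.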

Unless, of course, there proves to be some hard limit imposed by information theory that cannot be innovated around, e.g. something analogous to the CAP Theorem or the Entropy Rate Theorem or some such.

To my knowledge, no such fundamental limits are on the table at this point. Thus the innovators are free to have a go, and that is what they will do.

The nearest thing to a hard limit that I can see on the horizon is the extent to which the "rules" found in the world of contracts/legislation/regulation can be expressed as "rules" that machines can work with. This is not so much an issue for the trust concepts of blockchain as it is for the follow-on concept of Smart Contracts.