Monday, February 20, 2017

Paper Comp Sci Classics

Being a programmer/systems architect/whatever brings with it a big reading load just to stay current. For me, that used to mean consuming lots of physical books and periodicals. Nowadays, less so, because there is so much good material online. The glory days of paper-based publications are never coming back, so I think it's worth taking a moment to give a shout out to some of the classics.

My top three comp sci books, the ones I will never throw out, are:
- The C Programming Language by Kernighan and Ritchie
- Structure and Interpretation of Computer Programs by Abelson and Sussman
- Gödel, Escher, Bach by Hofstadter

Sadly, I did dump a lot of classic magazines:-/ Byte, Dr. Dobb's, PCW....

Your turn:-)


Friday, January 27, 2017

ChatOps, DevOps, Pipes and Chomsky

ChatOps is an area I am watching closely - not because I have a core focus on DevOps per se, but because Conversational User Interfaces are a very interesting area to me, and ChatOps is part of that.

Developers - as a gene pool - have a habit of developing very interesting tools and techniques that save time down in the "plumbing", deep down the stack where no end-user ever treads.

Some of these tools and techniques stay there forever. Others bubble up and become important parts of the end-user-facing feature sets of applications and/or important parts of the application architecture, one level beneath the surface.

Unix is full of programs, patterns etc. that followed this path. This is from Doug McIlroy in *1964*:

"We should have some ways of coupling programs like garden hose--screw in another segment when it becomes when it becomes necessary to massage data in another way."

That became the Unix concept of a bounded-buffer "pipe" and the now-legendary "|" command-line operator.

For a long time, the Unix concept of pipes stayed beneath the surface. Today, it is finding its way into front ends (graphics editing pipelines, audio pipelines) and into application architectures (think Google/Amazon/Microsoft cloud-hosted pipelines).
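
As a minimal sketch of that coupling in Python (assuming a Unix-like system with ls, sort and head on the path): each stage's standard output is screwed into the next stage's standard input, with the kernel providing the bounded buffer in between.

```python
import subprocess

# Rough equivalent of the shell pipeline:  ls | sort | head -3
ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort"], stdin=ls.stdout, stdout=subprocess.PIPE)
head = subprocess.Popen(["head", "-3"], stdin=sort.stdout, stdout=subprocess.PIPE)

ls.stdout.close()    # let ls receive SIGPIPE if a downstream stage exits early
sort.stdout.close()

print(head.communicate()[0].decode())
```

The shell's "|" does exactly this wiring for you - which is the whole point: the coupling lives in the plumbing, not in the programs.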

Something similar may happen with Conversational User Interfaces. Some tough nuts might end up being cracked down in the plumbing layers by DevOps people, for their own internal use, and then bubble up....

The one that springs to mind is that we will need to get to the point where hooking new sources/sinks into ChatBots doesn't involve breaking out the programming tools and the API documentation. The CUI paradigm itself might prove to be part of the solution to the integration problem.

For example, what if "zeroconf" for any given component meant that you were guaranteed to be able to chat to it - not with a fully fledged set of application-specific dialog commands, but with a basis set of dialog components from which a richer dialog could be bootstrapped?

Unix bootstrapped a phenomenal amount of integration power from the beautifully simple concept of standard streams for input, output and error. A built-in linguistic layer on top of that, for chatting about how to chat, is an interesting idea. Meta chat. Talks-about-talks. That sort of thing.
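
Purely as a hypothetical sketch - the verbs and message shapes below are invented, not any real protocol - a guaranteed basis set of meta-chat verbs might look like this:

```python
# Hypothetical sketch only: every component, by analogy with
# stdin/stdout/stderr, is guaranteed to answer a tiny basis set of
# meta-chat verbs, from which its richer dialog can be bootstrapped.

class ChatComponent:
    def __init__(self, name, dialog):
        self.name = name
        self.dialog = dialog  # application-specific utterances -> responses

    def chat(self, utterance):
        # The guaranteed basis set: chat about how to chat.
        if utterance == "who are you?":
            return self.name
        if utterance == "what can we talk about?":
            return sorted(self.dialog)
        # Everything else falls through to the application-specific dialog.
        return self.dialog.get(utterance, "no idea - try 'what can we talk about?'")

build_bot = ChatComponent("build-server", {
    "build main": "ok, building the main branch...",
    "status?": "last build: green",
})

print(build_bot.chat("what can we talk about?"))  # ['build main', 'status?']
print(build_bot.chat("status?"))                  # last build: green
```

A new component dropped into the chat room would need zero integration code before you could at least discover what it can do - the rest of the dialog bootstraps from there.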

Dang, just as Chomsky's universal grammar seems to be gathering dissenters...:-)


Wednesday, December 21, 2016

The new Cobol, the new Bash

Musing, as I do periodically, on what the Next Big Thing in programming will be, I landed on a new (to me) thought.

One of the original design goals of Cobol was English-like nontechnical readability. As access to NLP and AI continues to improve, I suspect we will see a fresh interest in "executable pseudo-code" approaches to programming languages.

In parallel with this, I think we will see a lot of interest in leveraging the NLP/AI behind chat-bot CUIs in command-line programming environments such as the venerable bash shell.

It is a short step from there, I think, to a read-eval-print loop for an English-like programming environment that is both the programming language and the operating system shell.
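
A toy sketch of the idea in Python - the phrase table is invented for illustration, and a real system would sit an NLP layer where the exact string matching is:

```python
# Hypothetical sketch: a read-eval-print loop where English-ish phrases
# are the "programming language" and the effect is shell-like.
import subprocess

PHRASES = {
    "show me the files here": ["ls", "-l"],
    "where am i": ["pwd"],
    "who is logged in": ["who"],
}

def repl():
    while True:
        line = input("english> ").strip().lower()
        if line in ("quit", "goodbye"):
            break
        command = PHRASES.get(line)
        if command is None:
            print("I don't know that phrase yet.")
        else:
            subprocess.run(command)

if __name__ == "__main__":
    repl()
```

Replace the lookup table with something that actually understands English and you have, in embryo, both the language and the shell.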

Hmmm....

Friday, November 25, 2016

Recommender algorithms R Us

Tomorrow, at congregation.ie, my topic is recommender algorithms, although, at first blush, it might look like my topic is the role of augmented reality in hamster consumption.

A Pokémon ate my hamster.

Friday, November 04, 2016

J2EE revisited

The sheer complexity of the JavaScript ecosystem at present is eerily reminiscent of the complexity that caused many folk to balk at J2EE/DCOM back in the day.

Just sayin'.

Wednesday, October 19, 2016

Nameless things within nameless things

So, I got to thinking again about one of my pet notions - names/identifiers - and the unreasonable amount of time IT people spend naming things, then mapping them to other names, then putting the names into categories that are .... named.... aliasing, mapping, binding, bundling, unbundling, currying, lambda-izing, serializing, reifying, templating, substituting, duck-typing, shimming, wrapping...

We do it for all forms of data. We do it for all forms of algorithmic expression. We name everything. We name 'em over and over again. And we keep changing the names as our ideas change, and the task to be accomplished changes, and the state of the data changes....

It gets overwhelming. And when it does, we have a tendency to make matters worse by adding another layer of names. A new data description language. A new DSL. A new pre-processor.

Adding a new layer of names often *feels* like progress. But it often is not, in my experience.

Removing the need for layers of names is one of the great skills in IT, in my opinion. It is so undervalued that the skill doesn't, um, have a name.

I am torn between thinking that this is just *perfect* and thinking it is unfortunate.

Wednesday, October 05, 2016

Semantic CODECs

It occurred to me today that the time-honored mathematical technique of taking a problem you cannot solve and re-formulating it as a problem you can solve (perhaps in a completely different domain) is undergoing a sort of Cambrian explosion.

For example, using big data sets and deep learning, machines are getting really good at parsing images of things like cats.

The more general capability is to use a zillion images of things-like-X to classify a new image as either like-an-X or not-like-an-X, for any X you like.

But X is not limited to things we can take pictures of. Images don't have to come from cameras. We can create images from any abstraction we like. All we need is an encoding strategy... a Semantic CODEC, if you will.

We seem to be hurtling towards large infrastructure that is specifically optimized for image classification. It follows, I think, that if you can recast a problem as an image-recognition problem - even if it has nothing to do with images - you get to piggy-back on that infrastructure.
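
A minimal sketch of the encoding half of such a CODEC, assuming numpy and Pillow are installed - the encoding here (raw bytes rendered as a grayscale square) is the simplest one imaginable, and the /bin/ls example assumes a Unix system:

```python
# Render data that has nothing to do with images as an image, so that
# image-classification infrastructure can be pointed at it.
import math
import numpy as np
from PIL import Image

def bytes_to_image(data: bytes) -> Image.Image:
    side = math.ceil(math.sqrt(len(data)))
    padded = data.ljust(side * side, b"\x00")                  # pad to a square
    pixels = np.frombuffer(padded, dtype=np.uint8).copy().reshape(side, side)
    return Image.fromarray(pixels, mode="L")                   # one byte = one grey pixel

# e.g. encode an executable, a log file, a genome... then hand the
# resulting image to whatever image classifier you have lying around.
img = bytes_to_image(open("/bin/ls", "rb").read())
img.save("ls.png")
```

The decoding half, and an encoding that actually preserves the structure you care about, is where the interesting design work would be.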

Hmmmmm.