Wednesday, December 21, 2016

The new Cobol, the new Bash

Musing, as I do periodically, on what the Next Big Thing in programming will be, I landed on a new (to me) thought.

One of the original design goals of Cobol was English-like nontechnical readability. As access to NLP and AI continues to improve, I suspect we will see a fresh interest in "executable pseudo-code" approaches to programming languages.

In parallel with this, I think we will see a lot of interest in leveraging NLP/AI from chat-bot CUIs in programming command line environments such as the venerable bash shell.

It is a short step from there, I think, to a read-eval-print loop for an English-like programming environment that is both the programming language and the operating system shell.
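A sketch of what such a loop might feel like, with a hand-written phrase table standing in for real NLP (all the phrases and behavior here are invented for illustration):

```python
# Toy sketch: a read-eval-print loop that maps English-like phrases
# to shell-style actions. A real system would use NLP to understand
# the utterance; a hand-written phrase table stands in for it here.
import os

PHRASES = {
    "where am i": lambda: os.getcwd(),
    "list the files here": lambda: ", ".join(sorted(os.listdir("."))),
}

def evaluate(utterance: str) -> str:
    """Look the normalized phrase up and run its action."""
    action = PHRASES.get(utterance.strip().lower())
    return action() if action else "Sorry, I don't know that phrase yet."

if __name__ == "__main__":
    while True:
        try:
            line = input("say> ")
        except EOFError:
            break
        print(evaluate(line))
```

The interesting part is that the same loop is simultaneously the "language" and the "shell" - the phrase table is just a stand-in for whatever NLP machinery eventually fills that slot.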


Friday, November 25, 2016

Recommender algorithms R Us

Tomorrow, my topic is recommender algorithms, although at first blush it might look like my topic is the role of augmented reality in hamster consumption.

A Pokemon ate my hamster.
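For anyone who can't make it, the heart of many recommender algorithms can be sketched in a few lines: the classic "people who liked X also liked Y" co-occurrence idea. (Toy data, invented purely for illustration.)

```python
# Minimal co-occurrence recommender sketch: recommend items liked
# by users who share at least one liked item with you, ranked by
# how often those items co-occur.
from collections import Counter

ratings = {
    "alice": {"pokemon go", "minecraft"},
    "bob": {"pokemon go", "hamster sim"},
    "carol": {"minecraft", "hamster sim"},
}

def recommend(user: str) -> list[str]:
    """Rank items liked by overlapping users, excluding items
    the user already has."""
    mine = ratings[user]
    scores = Counter()
    for other, theirs in ratings.items():
        if other != user and mine & theirs:
            scores.update(theirs - mine)
    return [item for item, _ in scores.most_common()]

print(recommend("alice"))  # "hamster sim" co-occurs via bob and carol
```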

Friday, November 04, 2016

J2EE revisited

The sheer complexity of the JavaScript ecosystem at present is eerily reminiscent of the complexity that caused many folk to balk at J2EE/DCOM back in the day.

Just sayin'.

Wednesday, October 19, 2016

Nameless things within nameless things

So, I got to thinking again about one of my pet notions - names/identifiers - and the unreasonable amount of time IT people spend naming things, then mapping them to other names, then putting the names into categories that are .... named.... aliasing, mapping, binding, bundling, unbundling, currying, lambda-izing, serializing, reifying, templating, substituting, duck-typing, shimming, wrapping...

We do it for all forms of data. We do it for all forms of algorithmic expression. We name everything. We name 'em over and over again. And we keep changing the names as our ideas change, and the task to be accomplished changes, and the state of the data changes....

It gets overwhelming. And when it does, we have a tendency to make matters worse by adding another layer of names. A new data description language. A new DSL. A new pre-processor.

Adding a new layer of names often *feels* like progress. But it often is not, in my experience.

Removing the need for layers of names is one of the great skills in IT, in my opinion. It is so undervalued that the skill doesn't, um, have a name.

I am torn between thinking that this is just *perfect* and thinking it is unfortunate.

Wednesday, October 05, 2016

Semantic CODECs

It occurred to me today that the time-honored mathematical technique of taking a problem you cannot solve and re-formulating it as a problem (perhaps in a completely different domain) that you can solve, is undergoing a sort of Cambrian explosion.

For example, using big data sets and deep learning, machines are getting really good at parsing images of things like cats.

The more general capability is to use a zillion images of things-like-X to classify a new image as either like-an-X or not-like-an-X, for any X you like.

But X is not limited to things we can take pictures of. Images don't have to come from cameras. We can create images from any abstraction we like. All we need is an encoding strategy....a Semantic CODEC if you will.

We seem to be hurtling towards large infrastructure that is specifically optimized for image classification. It follows, I think, that if you can re-cast a problem into an image recognition problem - even if it has nothing to do with images - you get to piggy-back on that infrastructure.
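As a sketch of the encoding half of such a Semantic CODEC, here is a toy scheme (invented for illustration) that lays arbitrary bytes out as a 2-D grayscale "image" that image infrastructure could then be pointed at:

```python
# Minimal "Semantic CODEC" sketch: re-cast non-image data (here,
# raw bytes) as a 2-D grid of pixel intensities (0-255). A real
# pipeline would hand the grid to an image classifier.
import math

def encode_as_image(data: bytes, width: int = 16) -> list[list[int]]:
    """Lay the bytes out row by row as pixel intensities,
    zero-padding the final row to a full rectangle."""
    height = math.ceil(len(data) / width) or 1
    padded = data.ljust(height * width, b"\x00")
    return [list(padded[r * width:(r + 1) * width]) for r in range(height)]

def decode_from_image(grid: list[list[int]], length: int) -> bytes:
    """Invert the encoding: flatten the pixels and trim the padding."""
    flat = bytes(px for row in grid for px in row)
    return flat[:length]

message = b"A Pokemon ate my hamster."
img = encode_as_image(message)
assert decode_from_image(img, len(message)) == message
```

The encoding here is deliberately dumb - the interesting design work in a real Semantic CODEC is choosing a layout where the *semantic* structure of the data shows up as *visual* structure the classifier can latch onto.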


Wednesday, September 21, 2016

The next big exponential step in AI

Assuming, for the moment, that the current machine learning bootstrap pans out, the next big multiplier is already on the horizon.

As more computing is expressed in forms that require super-fast, super-scalable linear algebra algorithms (a *lot* of machine learning techniques do this), it becomes very appealing to find ways to execute them on quantum computers. Reason being, exponential speed-ups are possible in the parallel execution of certain operations.
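To make the linear algebra connection concrete, here is a sketch of a single neural-network layer as a plain matrix-vector product (nonlinearity omitted). Every row's dot product is independent work, which is exactly what parallel - and, someday, quantum - hardware exploits:

```python
# Why linear algebra is the bottleneck: a neural-network layer is,
# at its core, a matrix-vector product. Each entry of the output
# depends only on its own row of weights, so all rows can be
# computed in parallel.
def dense_layer(weights, vec):
    """out[i] = sum_j weights[i][j] * vec[j] - rows are independent."""
    return [sum(w * v for w, v in zip(row, vec)) for row in weights]

W = [[1.0, 2.0],
     [3.0, 4.0]]
x = [1.0, 1.0]
print(dense_layer(W, x))  # each row's dot product is independent work
```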

There is a fine tradition in computing of scientists getting ahead of what today's technology can actually do. Charles Babbage, Ada Lovelace, Alan Turing, Doug Engelbart, Vannevar Bush - all worked out computing stuff that was way ahead of the reality curve, and then reality caught up with their work.

If/when quantum computing gets out of the labs, the algorithms will already be sitting in the Machine Learning libraries ready to take advantage of them, because forward looking researchers are working them out, now.

In other words, it won't be a case of "Ah, cool! We have access to a quantum computer! Let's spend a few years working out how best to use it." Instead it will be "Ah, cool! We have access to a quantum computer! Let's deploy all the stuff we have already worked out and implemented, in anticipation of this day."

It reminds me of the old adage (attributable to Pólya, I think) about "solving for N". If I write an algorithm that can leverage N compute nodes, then it does not matter that I might only be able to deploy it with N = 1 because of current limitations. As soon as new compute nodes become available, I can immediately set N = 2 or 2000 or 20000000000 and run stuff.
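A minimal sketch of "solving for N" in code, with N as an ordinary parameter (thread workers here, purely for illustration):

```python
# "Solving for N": write the algorithm against N workers, then let
# N be whatever the hardware of the day allows. N is an ordinary
# parameter of the same code path, whether it is 1 or 20000.
from concurrent.futures import ThreadPoolExecutor

def solve_for_n(work_items, worker, n: int):
    """Apply `worker` to every item using a pool of n workers;
    results come back in input order."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(worker, work_items))

def square(x: int) -> int:
    return x * x

# Today N might be 1; tomorrow, only this number changes.
print(solve_for_n(range(8), square, n=1))
print(solve_for_n(range(8), square, n=4))
```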

With the abstractions being crafted around ML libraries today, the "N" is being prepped for some very large potential values of N.

Thursday, September 15, 2016

Deep learning, Doug Engelbart and Jimi Hendrix

The late great Doug Engelbart did foundational work in many areas of computing and was particularly interested in the relationship between human intelligence and machine intelligence.

Even a not-so-smart machine can augment human productivity if simple cognitive tasks can be handled by the machine. Reason being, machines are super fast. Super fast can compensate for "not-so-smart" in many useful domains: simple totting up of figures, printing lots of copies of a report, shunting lots of data around, whatever.

How do you move a machine from "not-so-smart" to "smarter" for any given problem? The obvious way is to get the humans to do the hard thinking and come up with a smarter way. It is hard work, because the humans have to oscillate between smart thinking and thinking like not-so-smart machines: ultimately the smarts have to be fed to the not-so-smart machine as grindingly meticulous instructions, written in computer-friendly (read "not-so-smart") programs. Simple language, because machines can only grok simple language.

The not-so-obvious approach is to create a feedback loop where the machine can change its behavior over time by feeding outputs back into inputs. How to do that? Well, you got to start somewhere so get the human engineers to create feedback loops and teach them to the computer. You need to do that to get the thing going - to bootstrap it....

then stand back....

Things escalate pretty fast when you create feedback loops! If the result you get is a good one, it is likely to be *a lot* better than your previous best because feedback loops are exponential.
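A tiny illustration of such an engineered feedback loop: a one-weight model that learns y = 3x by feeding its own output error back into its behavior. (A plain gradient-style update, invented for illustration.)

```python
# Minimal engineered feedback loop: the machine's output error is
# fed back to adjust its own behavior (its one weight). The humans
# design the loop; the machine then improves itself by running it.
def bootstrap_feedback_loop(samples, steps: int = 200, rate: float = 0.01) -> float:
    weight = 0.0  # the machine's initial, not-so-smart behavior
    for _ in range(steps):
        for x, target in samples:
            output = weight * x
            error = target - output     # the output, observed...
            weight += rate * error * x  # ...fed back as an input
    return weight

data = [(x, 3 * x) for x in range(1, 5)]
learned = bootstrap_feedback_loop(data)
assert abs(learned - 3.0) < 1e-3  # the loop converges on y = 3x
```

The humans build the loop once; after that, each pass through it compounds on the last - which is exactly why things escalate so fast.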

Engelbart's insight was to recognize that the intelligent, purposeful creation of feedback loops can be a massive multiplier: both for human intellect at the species level, and at the level of machines. When it works, it can move the state of the art of any problem domain forward, not by a little bit, but by *a lot*.

A human example would be the invention of writing. All of a sudden knowledge could survive through generations and could spread exponentially better than it could by oral transmission.

The hope and expectation around Deep Learning is that it is basically a Doug Engelbart Bootstrap for machine intelligence. A smart new feedback loop in which the machines can now do a vital machine intelligence step ("feature identification") that previously required humans. This can/should/will move things forward *a lot* relative to the last big brouhaha around machine intelligence in the Eighties.

The debates about whether or not this is really "intelligence" or just "a smarter form of dumb" will rage on in parallel, perhaps forever.

Relevance to Jimi Hendrix? See