It’s a good exercise to ask, “What’s the smallest thing that would work?” Understanding the limits is a key step in the creation of anything.
In that spirit, what is the minimum description of software? This question has been the domain of computer language designers for decades. What’s the minimum syntax? I think Lisp can be declared the winner. What’s the minimum required description of a language? That’s still being worked out.
What is the minimum required computing interface? That one we might be able to answer. The public clouds are touching on the answer in the form of AWS Lambda or Azure Service Fabric, inspired (I assume) by the general trend toward micro-services. There is a notion of an event that can be accepted, and a pre-defined computation that should be executed with the event. In the case where no event is required, you just have a computation that needs to be run to completion.
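To make that concrete, here is a minimal sketch of such an interface in Python. The names (Event, Computation, invoke) are my own for illustration, not any cloud provider's actual API; the point is only that the whole contract fits in a handful of lines.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional


@dataclass
class Event:
    """An illustrative event: just a kind and a payload."""
    kind: str
    payload: Any


# The entire interface: a pre-defined computation that takes an event
# (or None, for jobs that simply run to completion) and returns a result.
Computation = Callable[[Optional[Event]], Any]


def invoke(computation: Computation, event: Optional[Event] = None) -> Any:
    """Run the computation, optionally in response to an event."""
    return computation(event)


# Event-driven case: a computation executed with the event it received.
print(invoke(lambda e: f"handling {e.kind}", Event("file.created", "report.csv")))

# No-event case: a computation that just needs to run to completion.
print(invoke(lambda _: sum(range(1_000_000))))
```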
Any computational job could be described as an event to respond to and a job to run. Event-driven programming is a powerful paradigm that has been around for a while. It requires an effective way to express the kinds of events and a program that accepts them.
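As a sketch of what "kinds of events plus a program that accepts them" might look like, here is a hypothetical registry that maps an event kind to its handler. Again, the names are made up for illustration.

```python
from typing import Any, Callable, Dict

# Hypothetical registry mapping an event kind to the program that accepts it.
handlers: Dict[str, Callable[[Any], Any]] = {}


def on(kind: str):
    """Register a handler for a given kind of event."""
    def register(fn: Callable[[Any], Any]) -> Callable[[Any], Any]:
        handlers[kind] = fn
        return fn
    return register


@on("image.uploaded")
def make_thumbnail(payload: Any) -> str:
    return f"thumbnail for {payload}"


def dispatch(kind: str, payload: Any) -> Any:
    """Accept an event and run the computation registered for its kind."""
    return handlers[kind](payload)


print(dispatch("image.uploaded", "cat.png"))
```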
This is a very light definition, which makes it an attractive interface. However, there are some aspects of computing it doesn't take into consideration. The biggest one is how the run time of a job scales: often, if you can process one event in 1 second, you can process 10 events together in 5. This is common in database or distributed applications, where the computation involves creating lots of intermediate sets. Caching techniques can mitigate the cost, but it's usually far more effective to do lots of the same things together. This suggests that the interface needs to be expanded.
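One way that expansion might look, as a sketch: let the computation accept a batch of events rather than a single one, so the expensive intermediate work is paid once per batch instead of once per event. The helper names below are hypothetical stand-ins.

```python
from typing import Any, Callable, Dict, List, Sequence

# Hypothetical expanded interface: the computation accepts a *batch* of
# events, so expensive intermediate work can be shared across the batch.
BatchComputation = Callable[[Sequence[Any]], List[Any]]


def build_lookup_table() -> Dict[str, int]:
    """Stand-in for the costly intermediate sets (e.g. a big query or join)."""
    return {word: len(word) for word in ("alpha", "beta", "gamma")}


def handle_batch(events: Sequence[str]) -> List[int]:
    table = build_lookup_table()               # paid once per batch...
    return [table.get(e, -1) for e in events]  # ...reused for every event


# Ten events cost one lookup-table build instead of ten.
print(handle_batch(["alpha", "gamma", "delta"]))
```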