Inextricably linked to the trend of hyperautomation, autonomous systems have captured our collective imagination for quite a while.
In fact, we argue this goes back to the first industrial revolution, when we were first confronted with mainstream ‘machines’ taking over human roles. There have been many movies and books on the topic too. Many of us will remember ‘I, Robot’, which was made not that long ago (2004) and is set not that far away either (2035). It touches on the core of our fascination with autonomous systems: who, exactly, is the boss?
In ‘I, Robot’, humans and robots have become deeply intertwined. The theme is mixed with a form of ‘racial’ segregation, in this case humans versus robots. The film plays with the idea that we may have created the perfect conditions for our own collective enslavement, and for most of its running time it paints a grim picture of where we may be heading.
This is only one way to look at robots taking over the world, though. According to Yuval Noah Harari, an Israeli historian, it could simply be what comes after RNA and DNA: a further codification of information into more efficient storage modes, be that bits, bytes, qubits, or wherever computing takes us next. In his view (for a lengthy read, refer to his book Homo Deus), this is a natural part of evolution. And guess what? Species disappear all the time. Why wouldn’t we (in our current form)?
The above paragraph probably sets off your survival instincts to some degree. This certainly holds true for our collective response to the autonomous systems trend. Still, it is important to think about it and consciously evaluate what is happening. Therefore, allow us to define the trend first, in more objective terms, and subsequently reflect on what this might mean for integrations.
Here is a definition that is quite broad and good at capturing what falls under this trend, in a direct sense:
‘An autonomous system is one that can achieve a given set of goals in a changing environment—gathering information about the environment and working for an extended period of time without human control or intervention. Driverless cars and autonomous mobile robots (AMRs) used in warehouses are two common examples.
Autonomy requires that the system be able to do the following:
- Sense the environment and keep track of the system’s current state and location.
- Perceive and understand disparate data sources.
- Determine what action to take next and make a plan.
- Act only when it is safe to do so, avoiding situations that pose a risk to human safety, property or the autonomous system itself.’
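The four requirements above can be sketched as a simple sense-plan-act loop. This is a toy illustration only: the class and function names are our own invention, not from any real robotics framework, and a real system would use far richer state and safety models.

```python
# Minimal sense-plan-act loop for a hypothetical one-dimensional agent.
# All names here are illustrative, not from any real framework.

from dataclasses import dataclass

@dataclass
class State:
    position: float   # where the agent believes it is
    goal: float       # where it wants to be

def sense(state: State, reading: float) -> State:
    """1. Sense the environment and track the current state/location."""
    return State(position=reading, goal=state.goal)

def plan(state: State) -> float:
    """2-3. Interpret the data and decide the next action (a movement step)."""
    error = state.goal - state.position
    return max(-1.0, min(1.0, error))  # bounded step toward the goal

def act(step: float, path_clear: bool) -> float:
    """4. Act only when it is safe to do so; otherwise stay put."""
    return step if path_clear else 0.0

state = State(position=0.0, goal=5.0)
for reading in [0.0, 1.0, 2.0, 3.0, 4.0]:
    state = sense(state, reading)
    state.position += act(plan(state), path_clear=True)
print(round(state.position, 1))  # the agent has inched toward its goal
```

The point of the separation is that the safety check in `act` can veto whatever `plan` proposes, mirroring the fourth requirement in the definition.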
There are lots of other familiar examples as well, like autonomous vacuum cleaners, automatic pool cleaners, smart fridges, and other smart household appliances.
Many of the technologies that must still improve for autonomous systems to advance beyond where we stand today are already covered in other trend blogs we wrote, such as artificial intelligence, machine learning, 5G, blockchain, cloud, edge computing, and next-generation cyber security.
In addition to these technical developments, we also need to improve regulatory frameworks, policies and procedures globally, to provide a safe legal environment for the autonomous systems to operate in.
What is holding these robots back?
In terms of the above-mentioned survival instinct: in reality, we are pretty scared of giving any autonomous system real autonomy, especially when it comes to our safety, or where it can control us explicitly. We are generally okay with robots operating parts of factories, acting as ‘cute’ receptionists, vacuuming our floors, or helping the elderly feel less alone. Provided not too many jobs are made redundant, probably.
But autonomous cars? No thank you. It’s a true nightmare for the driverless car industry and the biggest bottleneck for its advancement; refer to this study.
Technologically, they were ready a while ago. Anecdotes of hacked brakes, perceived ‘odd’ decision making in terms of which person to hit or kill, and grey areas regarding responsibility for human safety between machine and person can be found everywhere.
In many ways, this is interesting. First of all statistically, of course, since we humans are anything but flawless. In the example of autonomous cars, and in line with the study linked above, it is probably the machine that wins in the long run in terms of accident and fatality rates. There is hope for adoption in time, though. Certain societies and demographics appear more willing to embrace autonomous vehicles: this article mentions China geographically, and young male millennials demographically.
Aside from time, a stranger factor on the road to adoption is that we appear to accept autonomous vehicles more readily if they exhibit human characteristics. This shows, once more, that we are fundamentally wired differently from machines. It appears we need to rewire ourselves to learn when to trust an autonomous system and when human judgement (still) prevails, which may run counter to our emotional response. As explained here, it is obviously no good to use a system's 'likability' as a yardstick for how much reliance to place on it (plainly stated: thinking a car that is nice to you is safer than a car that isn't).
Autonomous systems, magic and integrations
Alexander Mankowsky, a futurologist at Daimler in Stuttgart, Germany, says that a key message is for consumers to not believe in magic insofar as autonomous vehicles are concerned.
It is here that we see a parallel with how integrations work: in our experience, people expect APIs to just talk to each other, like magic, and in reality they almost never do.
Specifically, autonomous systems integrations (let alone autonomous integrations, i.e. the ‘magic’ described above) are an underdeveloped field. For example, some big technology players suggest simply contacting consultants, which we feel is a placeholder, catch-all solution anyone could offer, anytime, for any problem. Therefore, there is more work to be done.
We expect that autonomous system integrations will largely play out in the IoT domain, with sensors and dedicated IoT devices attached to the autonomous system emitting specific signals, to be picked up by equally specific receptors. These ‘direct’ autonomous system integrations are probably not our cup of tea (yet).
We do see ‘traditional’ integrations players like ourselves active in the space of integrating information generated by autonomous systems, into other core organisational systems. In fact, we have a few live use cases that border on this already. We are fascinated by this trend and therefore hope to work with these non-traditional sources of data more and more. This will also allow us to gradually build up our knowledge for the next evolution of autonomous systems, and integration requirements.
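To make the idea of integrating autonomous-system data into core organisational systems a little more concrete, here is a minimal sketch. The payload fields and target schema are entirely hypothetical; a real integration would depend on the device vendor and the core system in question.

```python
# Hypothetical sketch: normalising a telemetry message from an autonomous
# system (say, a warehouse robot) into a record for a core business system.
# Field names and the target schema are invented for illustration.

import json
from datetime import datetime, timezone

def to_core_record(raw: str) -> dict:
    """Map a robot telemetry message onto a core-system work-log entry."""
    payload = json.loads(raw)
    return {
        "asset_id": payload["robot_id"],
        "event_type": payload["event"],
        "recorded_at": datetime.fromtimestamp(
            payload["ts"], tz=timezone.utc
        ).isoformat(),
        "details": {"zone": payload.get("zone", "unknown")},
    }

message = '{"robot_id": "AMR-7", "event": "pallet_moved", "ts": 1700000000, "zone": "B2"}'
record = to_core_record(message)
print(record["asset_id"], record["event_type"])
```

The substance of such integrations is rarely the transport; it is this kind of mapping between the device's vocabulary and the organisation's.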
So then we resort to the true science fiction question: can we expect to see autonomous integrations? In other words, can we make our own jobs redundant? Will the integrations code and configure themselves?
The answer at this stage would be: possibly. There are tools available that leverage artificial intelligence to build software.
Having said that, at the moment these could not simply be deployed to do what Harmonizer does. However, as they mature and are trained in the specific domain of integrations, this might change.
- They would have to learn how to search for, read, and understand open API documentation online, and translate this into requirements, semantically understanding the text fed to them by a prospect.
- They would then need to figure out the data model and interaction of both/all systems at either end of the integration, and through learning from past integrations, suggest the best technical approach to deliver on the integration requirements.
- Finally, they would need to translate this to code. Next they would test the solution. Once the customer presses the “go” button, they could switch to production.
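The first step above already has a tractable starting point: machine-readable API documentation. As a rough sketch, here is how candidate operations could be pulled out of an OpenAPI document as raw material for requirements. The spec below is a toy example of our own, not a real API.

```python
# Hypothetical sketch of step one: reading an OpenAPI document and
# extracting candidate endpoints as rough integration "requirements".
# The spec dictionary is a toy example, not a real API.

spec = {
    "openapi": "3.0.0",
    "paths": {
        "/customers": {
            "get": {"summary": "List customers"},
            "post": {"summary": "Create a customer"},
        },
        "/orders/{id}": {
            "get": {"summary": "Fetch an order"},
        },
    },
}

def extract_operations(spec: dict) -> list:
    """Flatten an OpenAPI 'paths' object into (method, path, summary) triples."""
    ops = []
    for path, methods in spec.get("paths", {}).items():
        for method, details in methods.items():
            ops.append((method.upper(), path, details.get("summary", "")))
    return ops

for method, path, summary in extract_operations(spec):
    print(f"{method} {path}: {summary}")
```

The genuinely hard parts, matching data models across systems and choosing a technical approach from past integrations, start only after this kind of inventory exists.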
As always, we welcome your thoughts on the matter.
Photo by Alex Knight on Unsplash