The death of intuition?
Whether it's maps on our smartphones or machines that cook the perfect dish at the touch of a button, there is barely an element of decision-making in our lives left untouched by technology and, more recently, artificial intelligence (AI).
But what if all of the technology disappeared tomorrow?
No more sensors. No more cameras. No more computers. Such a scenario may seem fanciful, but it warrants consideration. Not because it's necessarily likely to happen, but because considering it leads us to ask fundamental questions about how we engage with technology:
• Where has the impact of technology been most felt?
• How has it changed the way in which we do our jobs?
• How positive has its influence really been?
• Why has technology proliferated into almost every aspect of our lives?
Whilst the above questions seem to have obvious answers, some in fact do not, and they are also seldom asked. In many cases, we've run headlong into technology adoption without clear answers to them, and right now we are doing exactly the same thing with AI, despite calls for a pause in development.
So as we continue to adopt more and newer technologies and apparently achieve satisfactory outcomes, will this reduce the need to rely on our intuition?
What is intuition – and why does it matter?
Simply put, intuition is the recognition of patterns stored in memory. Ideally it can then be used, either consciously or subconsciously, to make a decision and take an appropriate course of action. Whether the decision-making is fast or slow, we all rely on our intuition daily, and it is constantly updating based on new information we receive. Developing it has the potential to improve confidence and produce outcomes in scenarios where there isn't time to assess objective data or where technology isn't available. Yet despite this, most of us dedicate little to no time to formally improving it.
So what does this have to do with technology?
Well, technology has the ability to provide value to decision-making through at least one of three means: 1) supplementation, 2) replacement, or 3) refinement. Supplementation can be as simple as adding a new piece of information to shed new light on a problem or question, whereas replacement may provide a new process or system to enhance efficiency or reduce resources.
Most technology has focused on the first two means in order to make us smarter and more efficient – and a lot of the time it has done so. As technology has continued to develop, it could be argued that it has achieved this to the detriment of our intuition. We've become increasingly connected to the technological world and receive far more external relative to internal feedback on our actions than at any other time in human history. This is a problem because, when it comes to intuition, for many problems it's a case of 'use it or lose it' (have you tried to navigate without your smartphone map after relying on it for years?). And right now, there is still a whole range of circumstances where a replacement or supplementation technology isn't available, or simply isn't performing better than our intuition.
As a result, it's somewhat surprising how little attention technology developers have paid to the third means: the refinement of human intuition. In a world currently fixated on AI-generated prediction and automation, perhaps an equally influential, largely unrealised way in which we can benefit from technology is staring us right in the face. So what are some of the key questions we should be asking ourselves before we decide whether to use technology to supplement, replace or refine?
Is the technology better than what we have now?
It stands to reason that if we're going to replace our intuition with some form of technology, then it should improve on what we already have at our disposal. On simple problems, opportunities are easy enough to identify. No one would say that the human eye can measure the running speed of an athlete better than GPS technology, and having the data available in near real-time is another huge reason behind the latter's popularity. Other times, things are less clear. In field-based settings, where it is hard to see exactly how either intuition or the technology is working, it may be difficult to determine which is performing better on a given task. 'New' technology can also sometimes be conflated with 'better', simply because it has a flashy label, or because our competitors have already adopted it.
Right now, speed and efficiency are the main selling points for lots of technologies. Whether it's information, a service or a decision on demand: speed and ubiquitous access are king. Sometimes, even when a technology isn't faster or more efficient than we are, we willingly adopt it simply to rid ourselves of the burden of a task or process. But is efficiency alone enough to make adoption of a technology worthwhile? Measuring a 100 m running circuit using a GPS system is certainly not more accurate than a tape measure, but it's unquestionably faster. Computers are an obvious technology that helps us every day due to their speed in performing computationally difficult tasks. But in some cases, speed has simply superseded quality, reflecting our desire for everything on demand: from fast food to streaming services, even to our own personal development. This also reflects the challenge faced by new disruptive tech that doesn't meet this criterion: contrast the time taken to charge an electric car versus filling the tank.
More often than not though, defining what 'better' actually means relies on context. A highly sophisticated, expensive tool may provide more detailed, valid information than a human currently offers, but is it sufficiently better to warrant the financial investment? How much better would you need an important report to be to tolerate it taking overnight versus one hour to generate? The trade-off between a minimum satisfactory solution and the time taken to reach the 'best' answer is a challenge well handled by computer science and something we face in our own decision-making every day (see here for more on satisficing).
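To make that trade-off concrete, here is a minimal sketch in Python. Nothing in it comes from any particular tool: the options, quality scores and the 0.8 'good enough' threshold are all invented for illustration. A satisficer accepts the first option that clears the bar; an optimiser evaluates everything before choosing.

```python
import random

def evaluate(option):
    """Stand-in for an expensive evaluation (e.g. producing and reviewing a report variant)."""
    return option["quality"]

def satisfice(options, good_enough=0.8):
    """Return the first option whose quality clears the threshold, plus the evaluation count."""
    for checked, option in enumerate(options, start=1):
        if evaluate(option) >= good_enough:
            return option, checked
    return None, len(options)  # nothing cleared the bar

def optimise(options):
    """Evaluate every option and return the best one, plus the evaluation count."""
    return max(options, key=evaluate), len(options)

# Invented pool of candidate solutions with random quality scores.
random.seed(1)
options = [{"id": i, "quality": random.random()} for i in range(1000)]

chosen, cost = satisfice(options)
best, full_cost = optimise(options)
print(f"Satisficing: quality {chosen['quality']:.2f} after {cost} evaluation(s)")
print(f"Optimising:  quality {best['quality']:.2f} after {full_cost} evaluations")
```

On most runs the satisficer settles after a handful of evaluations, while the optimiser pays the full cost for a marginally better answer – the same calculation we make when deciding whether a quicker, 'good enough' technology is worth adopting.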
The good news is that defining your own 'better' is up to you and your organisation. If the accuracy of a new device or system is more important than its cost or efficiency, then this can be embedded into your technology adoption or acquisition process. Being clear on what the organisation values from technology is a simple but seldom undertaken exercise, especially when we consider that managing a tech strategy is already a full-time job for many organisations. Without one, and with increasingly regular decisions about what to adopt or upgrade, it is likely only a matter of time before a big mistake is made.
Do we need to know how the technology works?
If a new robot performs a difficult surgery that I happen to need at an almost perfect success rate, then I don't care how it works: I just care that it does. Contrastingly, if a computer-generated analysis run by a bank tells me that my mortgage application has been rejected, then I absolutely want to know why, so I can understand which features of the process are biased against me and take steps to improve my chances next time (or find another provider!).
The question of whether we need to be able to develop a working understanding of technology is perhaps more prominent now than ever before due to the recent rise of AI. But due to AI’s inherent black box nature, very few if any of us will ever be able to possess this understanding, which of course also has profound implications for the ethics of decision making, particularly when AI outputs are biased or just plain wrong.
Sport is not immune from these challenges. If AI indicates that an athlete is at a high risk of injury and shouldn’t play in an upcoming match, then the athlete will likely want to know how that recommendation was made: which information was used to generate it and how this was weighted, particularly if they might disagree with it. After all, it’s their career and livelihood on the line.
The problem is that when it comes to refining intuition through comparisons with AI, it's not enough to know that we were right or wrong: we need to know why we were wrong. If we don't, then we cannot learn and revise our assumptions when the next opportunity arises. Consequently, black-box approaches cannot be used to refine intuition, unless determining the outcome alone is all that matters.
Of course, there are times when not knowing how AI or technology works may not be a problem. Most of us rely on these systems all of the time; from spam filters, to rideshare apps, smart homes and media recommendations. But relying on technology in scenarios where we actually need to understand the 'why' comes with some warnings. The first bears repeating from above: if you don't understand how it works, then you can't use it to refine your own intuition. This means that in many cases, if a technology is replaced or disappears, you can't replace the expertise. Second, if you're going to hand over a decision or process to a technology that you don't fully understand, be prepared for the fallout when something goes wrong: you won't be able to probe for a solution, or point to something clear and tangible to blame. Third, be ready for the possibility of being replaced once your organisation works out that the technology is better at a given task or process than you are. Fourth, be clear on what utilising that technology means for your responsibility over its outcomes, particularly if they're poor.
Can the technology handle change and complexity?
The last thing anyone wants to do is hand over a manually conducted process to technology or AI, only for it to fail as soon as circumstances change. This is a major frustration for people working in sport, as once managers and staff move on, entire systems and processes can change overnight. New ways of measuring the same phenomena are constantly being developed, and many co-exist at the same time. In football, for example, both optical and GPS-based tracking ostensibly measure the same thing, yet right now, for various reasons, professional teams utilise both, with staff scrambling to rewrite algorithms that can render the two forms of data interchangeable. AI is also limited in knowing what it doesn't know: it can only use the data it has access to and relies on human intervention to find new sources.
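For a sense of what 'rendering the two interchangeable' can involve at its simplest, here is a hedged Python sketch: the paired match totals and variable names are invented, and real calibration pipelines are far more involved, but the basic idea of fitting a mapping between overlapping measurements is similar.

```python
import numpy as np

# Hypothetical illustration: the same matches measured by both systems give paired totals,
# e.g. high-speed running distance per player per match. All numbers below are invented.
gps_distance     = np.array([620.0, 710.0, 540.0, 880.0, 760.0, 450.0])  # metres, GPS
optical_distance = np.array([655.0, 748.0, 575.0, 921.0, 799.0, 480.0])  # metres, optical

# Fit a simple linear mapping: optical ~ a * gps + b, using the paired sessions.
a, b = np.polyfit(gps_distance, optical_distance, deg=1)

def gps_to_optical(value_m):
    """Convert a GPS-derived distance onto the optical system's scale."""
    return a * value_m + b

print(f"optical ~ {a:.3f} * gps + {b:.1f}")
print(f"A 700 m GPS reading maps to roughly {gps_to_optical(700.0):.0f} m optical")
```

Even a mapping like this has to be refitted whenever a system, provider or rule changes, which is precisely the kind of churn that frustrates practitioners.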
This inability to cope seamlessly with change and complexity is one of the main limitations of technology right now compared to humans. We are often able to update our mental models based on new experiences, even if we don't know exactly when it has happened. The problem is, we're currently living in a quasi-future state in which we increasingly rely on hybrid models of tech-generated data combined with human intuition and expertise. With landscapes changing so quickly, one of the key challenges we currently face is ensuring best practice now, whilst also being prepared for any future environments that could emerge overnight. How does a professional franchise operationalise having a form of technology available to players at the major league level, but not in the minor leagues? How does a governing body ensure that the same technology and processes implemented in their US-based programs filter over into their African-based academies? It's an awkward in-between period of human history in which we're still learning how to interact with technology, while acutely aware that we're making many of the tasks we've performed for hundreds, if not thousands, of years redundant. Yet underpinning all of this is a deep-seated cognisance that certain types of technology are simply not there yet, and may never be.
Can the technology actually help to refine intuition?
Whilst new opportunities to supplement and replace humans will continue to emerge, at its best, technology can also be actively involved in refining human intuition. The most obvious way it can help to achieve this is in situations where a technology solution is shown to perform better than the human, typically by emphasising information sources not readily available or interpretable by the decision maker. For example, the human may not be weighting certain types of information as heavily as they should, or may be discounting the influence of others. Of course, the reverse can also be true: when a technology is performing below the level of a human, what is it about the human's intuitive judgement that can be used to refine the technology?
In some cases, direct comparison of performance per se is not the goal. Simply using technology to develop a model of the human's intuition is itself of value (see here for an example from football scouting). This can help to identify the human's traits and preferences, and can have implications for how organisations build successful teams of people with complementary skillsets. Most decision-makers also cannot be considered reliable narrators with respect to why they made a correct (or incorrect) decision. It may be of particular use to conduct this modelling exercise on individuals who are known experts, as it could provide clues to leverage in developing the skills of others.
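As a rough sketch of what the simplest such model of a decision-maker might look like, the Python below regresses a scout's past overall ratings on the cues they had available, surfacing the weights they appear to apply implicitly. The cues, ratings and labels are all invented, and ordinary least squares stands in here for whatever method a real study would use.

```python
import numpy as np

# Hypothetical cues per prospect: [sprint speed (z-score), passing accuracy (z), aerial duels won (z)]
cues = np.array([
    [ 1.2,  0.3, -0.5],
    [-0.4,  1.1,  0.2],
    [ 0.8,  0.9,  1.0],
    [-1.1, -0.6, -0.2],
    [ 0.1,  1.4, -0.8],
    [ 0.9, -0.3,  1.3],
])
scout_rating = np.array([7.5, 7.0, 8.8, 4.5, 6.8, 7.9])  # the scout's overall 1-10 ratings (invented)

# Ordinary least squares with an intercept term: rating ~ baseline + weighted sum of cues.
X = np.column_stack([np.ones(len(cues)), cues])
weights, *_ = np.linalg.lstsq(X, scout_rating, rcond=None)

labels = ["baseline", "sprint speed", "passing accuracy", "aerial duels"]
for label, w in zip(labels, weights):
    print(f"{label:>17}: {w:+.2f}")
```

Even a toy model like this makes the scout's implicit priorities visible and discussable, which is often more useful than asking them to explain their own reasoning after the fact.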
But all of these opportunities come with a few caveats. If we're going to use technology to help refine intuition in complex settings, then the environment in which the technology operates must be valid and represent its real-world use case. It should ideally be capable of providing feedback to the human in a format they can use; something that is not front and centre in the development of many technologies. For instance, whilst observing a swaying bridge we may never be able to tell that it is about to collapse through audio-visual cues alone, but if we had access to readings from a strain gauge with pre-identified critical load thresholds that supported those cues, then we might. Without validity of a cue and its environment, we cannot be sure that any learnings will transfer to practice. This is a problem for a lot of technologies whose validity remains untested; virtual reality being the most obvious example that comes to mind.
Perhaps it is unsurprising, then, that there is almost no research investigating how technology can be used to refine intuition (see here and here for more on the topic). Part of the reason is that most technologies are still so new, but another is that in real-world settings it is still very difficult to measure and define intuition and expertise. We also need repetitions to develop intuition, and these are hard to come by for rare, high-pressure scenarios that are difficult to replicate. The conundrum is that the ability to recognise that a situation is rare and poses a novel challenge is one of the hallmarks of authentic expertise. Learning situations or technologies designed to develop intuition also need to consider the human learner: the type of practice they enjoy, their level of engagement and motivation, and the self-regulatory processes they use. Of course, these things partly explain why some people develop expertise better than others, irrespective of the learning stimulus provided.
But despite these many unknowns, the promotion of technology for intuition refinement is an area worth pursuing, not only for the reasons already discussed in this article, but for a far more meaningful one: retaining human characteristics. Improving our capabilities as humans and reducing our reliance on technology when there is no good justification for it is advisable for all of us, from the staff member who feels parts of their job are under threat, to the fans who start switching off due to technology-induced deleterious changes to their sport. It also helps us to correctly identify the right problems and processes for supplementation and replacement, allowing us to take back control of how we experience work, ideally leading to greater opportunities for creativity, which I think we'd all agree is cause for optimism. After all, on its own technology doesn't want anything from us: it's merely a tool. We just need to harness it appropriately.
So, in an age of technology are we witnessing the death of intuition after all? Perhaps. But if we see technology not as a threat or replacement, but as an opportunity for the development of true expertise, the opposite may just be true.