Since we have heard so much about 'artificial intelligence' (AI) as a match for human thinking, it is fair to put the two to a test. One way to do so, at the start of a new year, is to compare predictions of how the new year may turn out. The Economist undertook such an exercise by posing a number of questions to a massive unsupervised language model, GPT-2 (Generative Pretrained Transformer 2). Addressing the China-US trade impasse, the future of the European Union (EU), the AI future (particularly in displacing humans), and whether US President Trump would be re-elected, among other issues, some of GPT-2's answers to contemporary affairs, whether they turn out to be correct or not, still exposed a clear challenge to humans.
Over the China-US spat, GPT-2's response conveyed as much calibration as a human mind's, even going beyond it: the model refused to pick sides, as humans tend to do. By proposing a "more balanced relationship" and "a more competitive world," it captured the imbroglio of the past two years well. That these responses elevated China more than the United States, depicting US influence, or its spread-effect and network, as more circumscribed than ever before, says much about how AI forthrightness and relative neutrality make humans pale by comparison. While we have to wait and see how AI-stamped outcomes fare against 2020 realities at the end of the year, it is fair to say the AI responses, compared with the expert human's, leave food for Homo sapiens to chew on at the cusp of an automated age!
The response on West Europe's future confirms the proposition. With West Europe seen as an agent of 2020 change, much as GPT-2 saw China, their trajectories differed markedly: whereas the Chinese view captured a country on the ascendant, the European counterpart was in the midst of an across-the-board decline. Particularly with Great Britain exiting the continent, the "major changes" afflicting another salient 20th-century actor might make GPT-2 look like an agent out to destabilise leading global countries. That might be the consequence, but at least the machine is saying out loud what many fear even to whisper. GPT-2's response was as sophisticated and analytical as those of many seasoned human observers, and built upon multiple variables (as evident from the multiple factors assessed), again raising the question of whether human input might even be necessary in a murky future when machines may prove more sustainable.
GPT-2 saw AI knowledge and machines both as complementing human needs and as a replacement for them. It similarly forecast AI usage to be both helpful and a threat. Once again, we get responses seasoned observers could have made, indicating how human beings should be more than alert about their jobs heading south. Without emotions, these AI contraptions handle sensitive issues better. Take democracy, for instance: since we all have our own 'take' on it (whether we like it, dislike it, or remain ambivalent), GPT-2 saw democracy as instantly threatened, not so much by how it is practised as by our own interpretative impositions. Circulating misinformation, particularly at election-time, illustrates our blatantly subjective intervention, something automation safeguards against. Stained democracy may seem remedied, particularly if 'our' candidate wins the election, yet the new GPT-2 generation of machines offers a cleaner, clearer and quicker appraisal, more directly, and without our sentimental baggage.
One final question (of the many other issues discussed) is about the 2020 US presidential election results. Here we find humans toeing the line more dramatically behind our preferred outcomes, while GPT-2's January 2020 prediction against Trump winning in November may be the boldest outcome announcement yet made. With such a clear-cut yes-no answer over a single, closed event, we will also know in November whether GPT-2's machine knowledge surpasses the human's.
If it does, we have all the more reason to take precautionary action, for example, reinventing our own skills so that they become more relevant and remain fluid enough for the flux typically expected with knowledge, innovation, or technological development: we would not like to be stumped by a machine, especially over our own job skills. If we don't, it is quite likely that not the machines themselves, but the people behind them, may emerge as the winners. We must bear in mind that every machine is given latitude by its own creator; AI contraptions therefore only go as far as their creators' intentions. To put the message bluntly: AI ignorance can become very costly, though AI know-how does not necessarily guarantee survival in an evolving society, nor constitute the final word. We must learn and keep all options open in any human-machine relationship, and we must begin straightaway: AI contraptions are not going to invade us one day as the Japanese attacked Pearl Harbor one December day in 1941; they are already partly inside our house, and partly altering our deeply entrenched habits in one way or another. We can imagine ourselves very much like the individuals who suddenly shifted from their typical horse-carriage transportation to the first automobile: joy and excitement, rather than fear and apprehension, should characterise any such transition.
Turning to the substance of the predictions, 2020 is already off to a less than glowing start, given the huge tides lashing at its gates. Yes, China and the United States did eke out an agreement, but the very subtitle, Phase 1, suggests how incomplete and reversible it is: Trump needs to keep the pressure on China to get the votes he seeks in November; and China, fully alert to how the United States functions against an election calendar, is not going to hand the United States an agreement on a silver platter just like that. The agreement is good for the short term, since an all-out blow-up would be detrimental; but over the long haul, the gap between the two will widen beyond repair (and beyond what it is today).
In Europe the case is similar. Britain will have to wait no longer than this unfolding weekend to sever the European umbilical cord. Neither side has made back-up plans strong enough to compensate for the losses; and the fear of replication may grow louder than the need to simply hang on together. With populism rising and Angela Merkel's departure in the wings, the European boat is more likely to be rocked than to remain stable. No machine can digest that dynamic, nor any of the others discussed, into a solution viable enough to last.
Nevertheless, as far as the age-old human-machine relationship is concerned, machines do win out, only this time far faster and in more arenas than the single machine-dictated context. Yet this shift comes so silently, and, in case we have not noticed, has already been impacting us since at least the Great Recession of 2008-11, that the dramatic event we await might obscure the many lower-level shifts already enslaving humans. The genie has been out of the bottle for quite a while, and if concern, rather than alarm, is all we have responded with, our chances of surviving the storm may be better than not.
Most of all, 2020 may be rocked by many, many lesser issues, as with Iran, the deteriorating economy, climate-change dynamics, and election-triggered developments. Even machine knowledge might not bail us out of such an overloaded expression of self-seeking pursuits. On balance, automation re-energises the human 'survival of the fittest' instinct, adopting a casus belli that is more overtly intellectual than physical. Damages may be more widespread, although the conflict tools may be more representative than any others we have utilised before.
Dr. Imtiaz A. Hussain is Dean (Acting), School of Liberal Arts and Social Sciences (SLASS), and Head, Global Studies & Governance Program, Independent University, Bangladesh.