January 22, 2025
Artificial Intelligence (AI) rightfully occupies the spotlight in discussions of modern thought. Such centre-stage treatment breeds a duplication of boilerplate, so I will try not to bore you with a repetitious relaying of regurgitated facts.
This piece seeks to illustrate the rapidity of AI's rise and the need to temper that rise with ethical considerations.
A thought once uttered tends to force its realisation on the world. The genesis of the idea of AI is difficult to trace. Students of the humanities and mythology may cite Hephaestus and his sentient creations as its birthplace.
Readers of modern history, and others with an interest in the field, may be led to the writings of Alan Turing or, more appropriately, to the Dartmouth workshop: a series of discussions held in 1956 among leading minds in mathematics, computing, information theory and related fields on how to equip computers with the ability to mimic human intelligence. The phenomenon was granted its now ubiquitous moniker, 'Artificial Intelligence', by the workshop's proposer, Professor John McCarthy.
Another major milestone in the display of machine intelligence was the creation of the AlphaZero program in 2017, which, after only four hours of self-play, became the most advanced chess engine of its time. However, the proverbial 'shot heard around the world' has to be the advent of generative AI (GAI) in the form of the GPT models, culminating in the release of ChatGPT to the public at large.
Now that the pervasiveness of AI is an undeniable reality, it may be helpful to delve briefly into its implications, actualised and potential, both positive and petrifying.
AI has made huge strides in energy, medicine, transport, agriculture and countless other fields. Some innovations credited to AI would not have been practically possible through human intelligence alone.
A notable example is AI's discovery of the antibiotic 'Halicin' in 2020, which, without AI's ability to process data rapidly and at far lower cost, would have been 'prohibitively expensive'. In medical diagnostics, AI has been able to read and analyse medical data and detect diseases and genetic disorders much earlier and more accurately than medical professionals.
In China, AI is being used to monitor and enhance the efficiency of energy, utilities and communications for entire cities, as in the electricity grid management system in Guangzhou, which has reaped many practical benefits.
Fear-mongering is not a practice I am inclined towards, but with the benefits come risks, and they are serious. Search engine algorithms powered by machine learning tailor each user's experience.
However, this produces a form of content restriction tantamount to censorship, leading to severe perception bias or even extremism, and there appears to be no escape. There is no inherent programming for morality, sympathy or other such 'human' mechanisms of self-regulation, which makes the unbridled development of AI solely for efficiency a terrifying thought.
While the dystopian future brought about by Skynet is an extreme example, we have, in the short life of AI, seen the drawbacks of unregulated and unmoderated use.
In 2016, Microsoft's 'Tay' chatbot was, within hours of deployment, nurtured into a propagator of hate speech and had to be taken down. Better minds than mine admit something which, coupled with the feats of which AI is capable, I personally find terrifying: we simply do not understand the method to AI's madness.
This brings us to the ethical concerns arising from the fact that we are now set to employ daily a resource that is beyond our comprehension and, so far, incapable of explaining its reasoning to us. Our trepidation is exacerbated by the fact that AI is evolving so fast that it is challenging to establish any rigid standards for its oversight.
But such standards are necessary all the same, because the uptick in over-reliance is already having devastating practical effects. With overuse diminishing the need to retain information, young people entering their academic and professional lives are becoming 'conceptually impotent prompt experts'.
Luckily, the challenges of regulation have not deterred those at the helm of affairs in government and industry from laying down malleable guidelines to ensure the ethical use of AI.
In the healthcare sector, entities such as the Food and Drug Administration in the US, the UK’s National Health Service and China’s National Medical Products Administration have adopted various guidelines for regulating AI use in the medical industry, particularly in medical devices.
In all these jurisdictions we are seeing a periodic formulation of guidelines or regulations, leading to the adoption of relatively harmonised frameworks for responsible use and personal data protection.
The legal industry has also recognised this need. The American Bar Association, the New York Bar Committee on Professional Ethics and the State Bar of California in the US have issued opinions on the use of GAI for generating legal work to ensure compliance with existing codes of ethics.
In the UK, the Bar Council of England and Wales has issued guidance to barristers to ensure accountability and compliance with its ethical standards; affirmative rules requiring disclosure of the use of AI in the preparation of materials are also under debate.
As with the medical field, such regulations are not reinventing the wheel. They simply seek to ensure that existing standards of confidentiality, professional competence, conflict avoidance, communication and supervision are appropriately applied to a changing legal landscape powered by AI.
The commonality we see is that the current regulation of AI use is being done primarily through a soft law approach (guidelines and recommendations) rather than through hard law (stringent and strictly enforceable statutory regimes), with the notable exception of the European Union (EU), which enacted a comprehensive AI law in 2024.
While it augurs well that the need for regulation is being recognised and acted upon, that recognition seems largely confined to the few nations pioneering the development of AI.
This is an unfortunate misstep by countries such as ours for the simple reason that, even though we are not materially contributing to the development of AI, its use is prevalent everywhere, and the ethical concerns loom in tandem.
We need regulators to formulate and adopt appropriate regimes, beginning with a soft law approach, while avoiding a rigidity that cannot practically be adhered to.
The writer is a litigation and transactional lawyer from Lahore who is licensed to appear before the high courts of Pakistan.
Disclaimer: The viewpoints expressed in this piece are the writer's own and don't necessarily reflect Geo.tv's editorial policy.
Originally published in The News