The advent of microelectronics and AI has opened the door to the development and application of neural implants – devices inserted into the brain or nervous system to monitor or directly influence neural activity. Such technology is revolutionising the treatment of neurological disorders, but it is also creating complex ethical and legal challenges. This article aims to stimulate a dialogue on these complexities and to propose policy reforms, with a focus on the Gulf Cooperation Council (GCC) region.
With the financial backing of some of the world’s wealthiest investors, companies such as Kernel and Elon Musk’s Neuralink are leading the way in this emerging field. Recent years have seen the launch of multiple devices worldwide, such as the WaveWriter Alpha Spinal Cord Stimulator Systems following regulatory approval in the EU. With these advancements, however, come new questions, primarily about the ethical deployment of AI algorithms in neural implants and the intellectual property rights surrounding the technology.
AI ethics and neural implants
Neural implants driven by AI could revolutionise healthcare and fields beyond it. They offer an unprecedented opportunity for the treatment of neurological conditions such as Parkinson’s disease and epilepsy. Moreover, they promise to bring about profound changes in areas such as cognitive enhancement and even direct brain-to-computer interaction. However, interfacing these devices with the human brain raises fundamental ethical questions. For instance, how can we protect against misuse, such as unauthorised access to, or alteration of, confidential neural data?
If algorithms used for tailored advertising, insurance premium calculations, or potential partner matching could access neural information – for example, neuron activity patterns tied to specific attention states – their predictive power would see a significant boost. What’s more, neural devices linked to the Internet could open up the possibility for individuals or entities (hackers, corporations, or governmental agencies) to observe, or even influence, a person’s mind.
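To make the first concern concrete, here is a deliberately simplified sketch of how a decoded attention state could be folded into an ordinary targeting model. Everything in it – the feature, the weights, the function name – is hypothetical and purely illustrative, not a description of any real system.

```python
import math

def click_probability(browsing_score: float, attention_score: float) -> float:
    """Toy logistic scoring model for targeted advertising.

    browsing_score: a conventional behavioural feature (e.g. derived from browsing history).
    attention_score: a hypothetical feature in [0, 1] decoded from neural activity patterns.
    The weights below are invented for illustration only.
    """
    w_browsing, w_attention, bias = 1.2, 2.5, -1.0
    logit = bias + w_browsing * browsing_score + w_attention * attention_score
    return 1.0 / (1.0 + math.exp(-logit))

# Same browsing profile, with and without access to a decoded attention state:
print(round(click_probability(0.6, 0.0), 2))  # neural signal unavailable
print(round(click_probability(0.6, 0.9), 2))  # neural signal available: markedly higher score
```

Even this crude example shows why neural signals would be commercially attractive: a single additional feature can dominate the score.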
Research surveying adaptive deep brain stimulation (aDBS) experts suggests that, although the use of aDBS systems for enhancement may be a distant prospect, it is not an impossible one: a substantial majority (70%) acknowledged its potential. At the same time, these specialists expressed deep ethical concerns related to safety and security, the perception of enhancement as unnecessary or unnatural, and matters of fairness, equality, and distributive justice.
Call to action for policymakers, research organisations, and scientific communities, as well as companies involved in developing, manufacturing, or marketing AI or neurotechnology: we must develop robust ethical frameworks grounded in respect for autonomy, beneficence, non-maleficence, and justice. These principles can guide the creation of safeguards to ensure that AI technologies used in neural implants are designed and deployed ethically. To limit the risk of misuse, we propose that the sale, commercial transfer, and use of neural data be strictly regulated. Such regulations, which would also limit the possibility of people giving up their neural data or having neural activity written directly into their brains in exchange for financial reward, could be linked to existing legislation that prohibits the sale of human organs, such as the 1984 US National Organ Transplant Act.
Intellectual property rights
The convergence of AI and neural implants presents unique complexities within the current intellectual property (IP) law framework. Consider, for example, the predicament that arises when a company providing a neural implant solution declares bankruptcy. Does the user, whose quality of life has been considerably enhanced by the implant, retain the product? Or does the firm, as the holder of the IP rights, have the power to retrieve it?
Traditional IP laws were designed with only human inventors in mind. This raises a multitude of questions and challenges in an era of escalating reliance on AI systems, whether they contribute to an invention or create one outright.
As neurotechnology merges with AI, the classical definition of an inventor as a ‘natural person’ becomes insufficient. Can an AI be classified as an inventor, particularly when it contributes significantly to the creation of a neural device or technique? Different jurisdictions define inventorship differently. In the US, an inventor must be a natural person, while other jurisdictions reference the Paris Convention for the Protection of Industrial Property, which is commonly read as requiring that an inventor be human. Strictly speaking, however, the Paris Convention only specifies the right of an inventor to be named as such in the patent. Could this right, then, be extended to an AI system?
Another issue is disclosure requirements. AI-driven innovations are often the product of black-box operations inside the machine, which makes it difficult, if not impossible, to disclose them in sufficient detail to satisfy existing law. Patents, copyright, and trademarks may therefore not be enough to protect an AI-related invention.
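A minimal sketch makes the disclosure problem tangible. Suppose an AI system ‘designs’ part of a device’s decision logic by training a small neural network; the dataset, library, and network size below are assumptions chosen for illustration. The only complete record of what was created is a pile of numeric parameters, which sits uneasily with the written-description expectations of patent law.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Stand-in for an AI-designed component: a small network trained on synthetic data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X, y)

# The only complete "description" of the learned behaviour is its raw parameter set:
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"Learned parameters: {n_params}")  # several thousand numbers, with no legible design rationale
```

Listing those parameters verbatim neither teaches a skilled person how the invention works nor allows reproduction without the training data and pipeline, which is precisely the gap existing disclosure rules struggle to bridge.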
The development of neurotechnology also frequently involves collaboration between AI developers, neuroscientists, and biomedical engineers. Current IP laws may not adequately address such collaborative innovation, potentially leading to disputes.
Call to action for policymakers and international bodies: this area of uncertainty demands comprehensive policy discussions and international convergence. It is a balancing act: on one side, inventors’ IP rights must be protected to encourage innovation; on the other, users’ rights to health and well-being must be preserved. This uncertainty should also be seen as a chance to explore new rules for IP protection. For example, policymakers could reconsider the definition of an ‘inventor’, incorporate ethical considerations directly into IP rights, and, under certain circumstances, adjust patent laws to include provisions for mandatory licensing.
The broader global policy landscape & digital divide
It’s crucial to recognise that the technology gap is likely to widen further as developed countries – the frontrunners in AI – extend their lead in its adoption and use. Back in 2018, McKinsey’s forecasts for 2030 suggested that AI could deliver an additional 20 to 25 percent in net economic benefit for developed countries, compared with only 5 to 15 percent for developing nations. Given the trend identified in that study, it is not hard to surmise that this disparity has likely grown even further since then.
Developed countries not only have a solid technological base for AI but also substantial incentives to invest in the sector. These nations enjoy superior digital infrastructure, widespread Internet accessibility, affordable and rapid broadband, a workforce adept at integrating new knowledge, more flexible labour market structures, and a higher propensity for innovation.
Yet it is widely understood that when scientific or technological choices are rooted in a narrow range of systemic, structural, or societal norms and ideas, the resulting technology can favour specific groups while disadvantaging others. Hence the question: how effective are the present-day regulations and laws of developed countries at addressing bias?
Although it is hard to pinpoint exactly where bias creeps into today’s AI models, DataRobot’s 2022 State of AI Bias Report found that more than one in three organisations surveyed (mostly multinational CIOs, IT directors, managers, data scientists, and development leads) had experienced challenges or direct business impact due to AI bias in their algorithms. The research also found that 77 percent of organisations had an AI bias or algorithm test in place before the bias was discovered. Despite significant focus and investment in removing AI bias across the industry, organisations still struggle with many challenges in eliminating the problem. These include:
- Understanding the reasons for a specific AI decision.
- Understanding the patterns between input values and AI decisions.
- Developing trustworthy algorithms.
- Determining what data is used to train AI.
These challenges carry over to the global stage, heightening existing concerns over the digital divide.
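For readers unfamiliar with what an ‘AI bias or algorithm test’ might involve in practice, here is a minimal, purely illustrative screening check of the kind such audits often start from: comparing positive-outcome rates across groups. The data, probabilities, and group labels below are synthetic assumptions, not a legal or regulatory standard.

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups (coded 0 and 1)."""
    return abs(predictions[group == 0].mean() - predictions[group == 1].mean())

# Synthetic model outputs: 1 = favourable decision, 0 = unfavourable.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
predictions = (rng.random(1000) < np.where(group == 0, 0.70, 0.55)).astype(int)

gap = demographic_parity_gap(predictions, group)
print(f"Demographic parity gap: {gap:.2f}")  # a sizeable gap would prompt the deeper questions listed above
```

A check like this flags a disparity but says nothing about its cause, which is exactly the first two challenges in the list above: understanding why the model decided what it did, and how inputs map to decisions.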
Could the GCC lead?
With its burgeoning technological advancements and keen focus on innovation, the GCC region might offer a novel perspective on ethical and IP best practices in neurotechnology.
Internet penetration and quality: The region’s robust Internet infrastructure is an essential prerequisite for harnessing the potential of AI and advanced neurotechnology. AI systems often rely on the rapid transmission of large amounts of data, and neurotechnology devices, especially those connected to networks for monitoring or control purposes, require stable and fast connections to function efficiently and effectively (a rough estimate of the data rates involved is sketched at the end of this section).
Cosmopolitan cities and tech hubs: The global metropolises of Dubai and, potentially in the coming years, Riyadh and Doha are quickly becoming the tech hubs of the Middle East. Their diverse populations provide a wide range of perspectives that are essential in addressing the ethical and societal impacts of neurotechnology.
Relatively new and agile nations: The relative youth of these countries provides them with a unique advantage. They have the opportunity to adopt forward-thinking policies without the inertia of longstanding traditions and norms. This agility can enable the GCC region to respond more swiftly and effectively to the evolving challenges presented by neurotechnology.
Given the region’s current reliance on foreign AI algorithms and technologies, GCC nations have an impetus to invest in domestic innovation. By fostering a robust, homegrown tech industry, these countries can ensure that their AI applications, including neurotechnology, are sensitive to their unique cultural, societal, and legal contexts.
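To give a rough sense of the data rates referred to above, the back-of-envelope calculation below estimates the raw stream from a hypothetical high-channel-count implant. The channel count, sampling rate, and bit depth are assumed figures chosen for illustration, not the specification of any real device.

```python
# Back-of-envelope estimate of the raw data rate for a hypothetical networked implant.
channels = 1024          # assumed number of recording electrodes
sample_rate_hz = 30_000  # assumed samples per channel per second (spike-band recording)
bits_per_sample = 16     # assumed ADC resolution

raw_bits_per_second = channels * sample_rate_hz * bits_per_sample
print(f"Raw stream: {raw_bits_per_second / 1e6:.0f} Mbit/s")  # roughly 492 Mbit/s before any compression
```

Even with heavy on-device processing and compression, figures of this order illustrate why stable, high-bandwidth connectivity is a prerequisite for networked neurotechnology rather than a luxury.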
Conclusion
While many challenges lie ahead in navigating the ethical and IP implications of neurotechnology, the GCC region, with its rapid technological progress and commitment to innovation, could spearhead effective solutions. As we move into this exciting and uncharted territory, it will be crucial for international discourse to include diverse voices and perspectives, including those from the GCC region.