Artificial Intelligence (AI) and space are both popular subjects in the current policy climate. AI techniques are being applied to space datasets and accelerating progress in the satellite and space industry through natural language processing, machine vision and advanced analytics. The combination of AI and space could play an integral role in increasing global connectivity and closing the digital divide.
AI space services face the same problems as terrestrial AI services. They are exposed to the same policy challenges when delivered through a fibre network as when transmitted wirelessly from a satellite. However, delivery via space sometimes amplifies terrestrial challenges by creating another channel for access and delivery across borders. As more data is transferred to and processed in space, the jurisdictional and territorial grey area of space may create legal challenges from data ownership and data protection perspectives.
Data, infrastructure and ethics are the keys to enabling AI. A successful AI ecosystem requires high-quality, usable datasets and policies that support them. Infrastructure is also necessary to store and process data – for example, through cloud computing networks. Additionally, broadband connectivity is needed to connect and deliver AI services. Huge digital inequalities exist and prevent regions of the world from leveraging new technology. AI space services could play a pivotal role in connecting underserved communities to broadband and supporting further digitisation. Finally, AI can be used to further the UN’s Sustainable Development Goals (SDGs) through improved climate change modelling, use of earth imaging for better refugee response, economic forecasting and targeting of development aid. However, an ethical approach to AI is critical for trust in these services, and policy-makers must keep this, as well as different stakeholder perspectives, in mind when defining the boundaries of good policy.
What does this mean for policy?
Companies using AI in space will try to offer their services over as large a territory as possible, meaning they will face cross-border challenges. National regulators may not be comfortable with new services, especially if they are delivered remotely, potentially without a legal nexus in their country through which they can exercise oversight or control. Regulators may become creative in finding ways to assert themselves and protect their markets from unwanted intrusion, and this could inhibit the further development of AI in space.
While most data protection laws take a territorial approach to governing data flows, this does not apply to space, which is understood as a territorial grey area and not part of any sovereign state. For example, the GDPR governs transfers to “a third country, a territory or one or more specified sectors within a third country, or an international organisation”. When data is transferred to a spacecraft, however, it is not clear to what territory it has been transferred. While most data transmitted to a satellite will return to earth, it remains unclear how data processing and storage in space will be governed. We can expect equally interesting questions about data ownership in space to arise. Geospatial and earth imaging data have been collected for commercial and government purposes for decades. However, such data is increasingly owned by third parties instead of governments, meaning data ownership in space will become more complicated to navigate.
Data and AI also have national security implications. Governments will likely become more concerned with exerting control and asserting ownership over the geospatial data of their countries. As AI is increasingly viewed through a security lens, governments may exert more control over industry through foreign investment policies, security screenings and export controls that could harm industry. Additionally, the growing role of security institutions is both positive and negative for the private sector: it is likely to increase government investment and support, while also raising the level of strict regulation.
AI is still an evolving field, and governments must be cautious not to take regulatory actions that could diminish the future benefits of the technology. To avoid this, policy-makers should consider focusing on promoting data. Data is the key to AI development, and governments need to ensure they have the right policies to promote a robust, competitive and collaborative digital economy. On this point, it is important that policy-makers collaborate with all sectors. Dynamic exchanges between all parties and sectors will promote development and a deeper understanding of how to craft effective regulatory responses.
Additionally, attention should be given to current legislative gaps. AI does not necessarily need new specialised laws; rather, existing laws – such as anti-discrimination law – should be applicable to AI without major legal interventions. Policy-makers should encourage international cooperation to manage cross-border challenges. Many countries have valuable experience that can help regulators converge their policies and reduce barriers to international commerce. Finally, policy-makers should seek to balance commercial and security interests: security implications should be weighed against the social benefits of commercial development.