AI development is risky because modern AI tools push ethical boundaries under existing legal frameworks that were never designed for them. How regulators choose to respond, however, varies greatly between countries and regions.

The recent AI Action Summit in Paris highlighted these regulatory differences. Notably, its final statement focused on inclusiveness and openness in AI development, mentioning safety and trustworthiness only in broad terms and leaving specific AI-related risks, such as security threats or existential dangers, unaddressed. Drafted by 60 nations, the statement was not signed by either the US or the UK, which shows how little consensus exists in this space.

Viktorija Lapenyte, Head of Product Legal Counsel at Oxylabs.

How different regulators tackle AI risks
