Effective rules for AI, but without caging the future

Artificial intelligence promises much and frightens us. From this fear comes the demand for rules to limit the risks of its applications. There have already been incidents: accidents caused by self-driving cars and the generation of false and defamatory news. But the fears go further, from the manipulation of consciences and the alteration of democratic processes to the more prosaic management of traffic, up to the concern that absorbs and summarizes them all: uncontrollability.

Regulating this field is both necessary and enormously difficult, because the limits of its scope are as intangible as they are shifting, and because artificial intelligence knows no borders while regulations do. The United States, Europe, and China are moving with different approaches but on the basis of partly similar principles, and a constructive relationship among them is both possible and necessary. The work ahead is enormous and fascinating: assessing what is already covered by existing rules; evolving paradigms that have accompanied us until now, such as the criteria for liability, the attribution of works, and data management; and creating new rules where necessary, respecting technological neutrality and without caging the future, only to have to chase after it in haste.
