Within a year, the number of lawsuits surrounding the use of AI increased by 60%

Over the past year, the number of court decisions in disputes surrounding the use of AI increased by 60%. The main categories of disputes were the use of the technology for debt collection, grants for IT development, and agreements for the development of IT products. Lawyers say that Russia still has no full-fledged legal regulation in this area, nor settled approaches in judicial practice, so judges have virtually nothing to rely on when making decisions. At the same time, the proceedings could "prepare the foundation for changes in the regulatory environment."

From March 2023 to March 2024, Russian courts issued 406 decisions in disputes surrounding the use of AI – a 60% increase year-on-year, according to a study by RTM Group. Its authors note that in the analyzed period, out of 165 judicial acts, in 53% of cases the claims were satisfied in full or in part, and in 34% they were refused. A year ago, the situation was actually the opposite – 31% and 55.5%, respectively.

"The shift in the share of satisfied claims was a consequence of the growth in the overall number of claims, and it is too early to speak of a trend. Settled approaches in judicial practice have not yet been developed, so judges have no opportunity to base their decisions on comparable precedents," explains Kirill Nikitin, head of a directorate at the Vegas Lex law firm.

The study also names the main categories of disputes: the provision of grants for the development of IT products using AI; disputes over licensing agreements or contracts for the development of software that uses AI; and administrative offenses involving calls or mailings made with the help of AI. The authors emphasize that during the period under review the last category became the most common: "The main offenses are found in the activities of banks and debt collectors."

"About 50% of cases are administrative violations, with fines usually around 100 thousand rubles. Then come civil disputes, about 40%. For grants, the average amount in dispute is about 12 million rubles; for contracts, 240 thousand rubles," says Evgeniy Tsarev, manager of RTM Group. He adds that copyright disputes are the rarest, accounting for only about 5%. "Attention to the topic of AI is growing, and within a year we expect litigation to increase by at least 80%," believes Mr. Tsarev.

MTS, Yandex, Axenix, Just AI, VK declined to comment. Sber, Visionlabs and the Alliance in the field of AI (Sber, Gazprom Neft, Yandex, MTS, VK, etc.) did not respond.

RekFaceys, a developer of biometric solutions using AI, clarified to Kommersant that its clients receive non-exclusive rights to deploy the product on their own infrastructure: "Users of our AI-based software are protected by the law on biometrics, where roles are clearly distributed and the areas of responsibility of market participants are designated. In the commercial biometrics space, we do not expect a significant increase in litigation."

"Until full-fledged legal regulation of any industry emerges, the rules are established, among other things, at the level of judicial practice," notes Kirill Nikitin. Roman Lukyanov, managing partner of Semenov & Pevzner, considers it unlikely that the emerging practice will negatively affect AI developers. Rather, it is a matter of "preparing the foundation for changing the regulatory environment," he believes: "For example, collection organizations received the right to call debtors through an automated intelligent agent only from February 1, 2024. Before that moment, the active implementation of AI technologies conflicted with the legislation on the use of such technologies that was in force at the time the study was prepared."

"The agreements of AI development companies already include provisions on the responsibility of the developer and the end user," explains Daria Nosova, partner at the O2 Consulting law firm. "As a rule, the distribution of responsibility depends directly on who carries out the training and subsequent adjustment of the artificial intelligence system. In that case, the developer disclaims responsibility and shifts it, along with the risks associated with the use of AI, to the user who trains the system independently."

Alexey Zhabin, Anna Zanina
