Abstract:
The growth of Artificial Intelligence (AI) poses several challenges, not only in technological
but also in legal, ethical, and social dimensions. Initiatives are being undertaken globally to
formulate suitable legal mechanisms to regulate AI. The European Union (EU) adopted the
world’s first-ever comprehensive legal framework for AI, the AI Act, which entered into force
on 1 August 2024. The primary purpose of the Act is to protect fundamental rights, democracy, the rule
of law, and the environment from the harmful effects of AI, while supporting innovation and
promoting trustworthy AI. The research aims to examine the mechanisms adopted under the
Act by addressing questions regarding how comprehensively the term ‘AI’ is defined and how
effective the risk-based classification introduced by the Act is. The study explores the Act's
risk-based approach to regulating different AI applications, ranging from minimal-risk systems
to applications that are banned entirely, according to the level of risk they pose to society. The
research analyses how such a method helps address the existing challenges posed by AI
across different fields. The study is primarily designed as doctrinal legal research,
utilizing the black-letter approach and adopting a qualitative methodology to
evaluate the effectiveness of the Act in addressing current AI challenges, with a focus limited
to the identified areas. Since this Act is a brand-new addition, there is a dearth...