Comparative Legal Analysis Across Jurisdictions
The research uses a juridical-normative and comparative legal approach. It examines key regulatory frameworks, including:
- The European Union Artificial Intelligence Act (2024).
- The White House Blueprint for an AI Bill of Rights (2022).
- The OECD AI Principles (2019, updated 2024).
Across these frameworks, a consistent pattern emerges: AI systems are treated as objects of regulation, not subjects of law. Legal obligations are assigned to providers, developers, deployers, and operators of the technology. In the EU AI Act, for example, obligations focus on risk classification, compliance requirements, and oversight mechanisms. There is no recognition of AI as having legal personality. Similarly, U.S. policy frameworks emphasize transparency, non-discrimination, and human oversight, rather than attributing intent to machines.
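To make the tiered logic of the EU AI Act concrete, the sketch below models it in code. It is an illustrative simplification, not the Act's legal test: the use-case labels and the lookup mapping are hypothetical, while the actual prohibited practices and high-risk categories are defined in Article 5 and Annex III of the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the EU AI Act's structure."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk: conformity assessment and oversight required"
    LIMITED = "limited risk: transparency obligations apply"
    MINIMAL = "minimal risk: voluntary codes of conduct"


# Hypothetical use-case labels for illustration only; the Act defines
# these categories in far more detail than a lookup table can capture.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"credit_scoring", "recruitment_screening", "medical_triage"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}


def classify_use_case(use_case: str) -> RiskTier:
    """Map a use-case label to an illustrative risk tier.

    The resulting obligations attach to the provider or deployer of
    the system, never to the AI system itself.
    """
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    for case in ("credit_scoring", "chatbot", "spam_filter"):
        print(f"{case}: {classify_use_case(case).value}")
```

Even in this toy form, the structure reflects the article's point: every tier translates into duties for identifiable human or corporate actors, not for the machine.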
When No Human Makes the Decision
For centuries, legal responsibility has been grounded in the assumption that only humans or legally recognized entities can form intent, exercise control, and therefore be held accountable. However, AI systems today operate through complex algorithms and machine learning processes that can produce harmful outcomes without a specific human choosing each action. According to the study, no jurisdiction currently recognizes AI as a legal entity capable of bearing responsibility. “Autonomous systems do not have moral agency, intent, or legal will,” the author explains. As a result, legal responsibility cannot be imposed directly on the technology itself.
Shift From Fault to Risk
One of the study’s central findings is a paradigm shift from fault-based liability to risk-based responsibility. Traditional fault-based systems require proof of negligence or intent. However, AI decisions are often opaque, sometimes described as “black box” processes, making it extremely difficult to establish causation or identify individual fault when harm occurs.
In response, regulators are increasingly adopting a preventive model. Instead of asking who is to blame after damage occurs, the law now focuses on:
- Risk management systems.
- Algorithmic audits.
- Documentation and transparency requirements.
- Ongoing monitoring and compliance.
This risk-based approach assigns responsibility to those who design, train, deploy, or control AI systems, even if no clear fault can be demonstrated.
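To see what documentation and monitoring duties can look like in practice, consider a minimal decision audit trail. The sketch below is hypothetical and not drawn from the study or any specific regulation; the model name, deployer, and log format are invented. It records which accountable party deployed which model version, along with a fingerprint of the inputs, so that a harmful outcome can be traced to a responsible actor even when no individual chose the action.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One entry in an append-only audit trail for an automated decision."""
    timestamp: str      # when the decision was produced (UTC, ISO 8601)
    model_version: str  # which model or configuration was responsible
    input_hash: str     # fingerprint of the inputs (avoids storing raw data)
    output: str         # the decision the system produced
    deployer: str       # the accountable organization, never the AI itself


def record_decision(model_version: str, inputs: dict, output: str,
                    deployer: str, log_path: str = "audit_log.jsonl") -> DecisionRecord:
    """Append a decision record so auditors can later reconstruct events."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        output=output,
        deployer=deployer,
    )
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record


if __name__ == "__main__":
    rec = record_decision(
        model_version="loan-scorer-2.3",  # hypothetical model identifier
        inputs={"income": 42000, "term_months": 36},
        output="application declined",
        deployer="ExampleBank Ltd.",      # hypothetical accountable deployer
    )
    print(rec)
```

The design choice matters more than the code: each record names an organization, not the algorithm, as the responsible party, which is exactly how the surveyed frameworks allocate liability.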
Impact on Business and Technology Sectors
For technology companies and startups, the study signals a shift in compliance expectations. Organizations deploying AI cannot rely solely on contractual disclaimers. They must implement robust risk assessment procedures, maintain transparent documentation, and ensure accountability mechanisms are embedded in system design. The move toward preventive governance also means that liability may arise from failure to manage risk adequately, even if no intentional wrongdoing exists.
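One way to read “accountability mechanisms embedded in system design” is a built-in human-oversight path. The sketch below is again hypothetical (the threshold, names, and routing rule are invented for illustration): decisions the system is not confident about are never returned autonomously but are escalated to a human reviewer, keeping a legally recognizable decision maker in the loop.

```python
from typing import Callable

# Hypothetical cutoff: below this confidence the system must defer
# to a human reviewer instead of acting autonomously.
REVIEW_THRESHOLD = 0.85


def decide_with_oversight(score: float, confidence: float,
                          escalate: Callable[[float], str]) -> str:
    """Return an automated decision only when confidence is high enough;
    otherwise route the case to a human reviewer.

    Putting the escalation path inside the decision function makes
    oversight a design property rather than an after-the-fact disclaimer.
    """
    if confidence < REVIEW_THRESHOLD:
        return escalate(score)  # an accountable human makes the call
    return "approve" if score >= 0.5 else "decline"


if __name__ == "__main__":
    def human_review(score: float) -> str:
        return f"escalated to human reviewer (score={score})"

    print(decide_with_oversight(score=0.7, confidence=0.95, escalate=human_review))
    print(decide_with_oversight(score=0.7, confidence=0.60, escalate=human_review))
```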
Author Profile
Sumiyati is a legal scholar at Politeknik Negeri Bandung. Her research focuses on law and technology, AI governance, digital regulation, and the evolution of legal responsibility in autonomous systems.
Sources
Sumiyati. “Legal Responsibility in Systems without Human Decision Makers.” International Journal of Law Analytics (IJLA), Vol. 4, No. 1, 2026, pp. 105–118. DOI: https://doi.org/10.59890/ijla.v4i1.160. URL: https://slamultitechpublisher.my.id/index.php/ijla
