The European Parliament's Internal Market and Consumer Protection Committee (IMCO) voted on Tuesday to continue working on the AI Liability Directive, despite the European Commission's decision to withdraw the policy from its 2025 work program.
The Commission announced its intent to withdraw the AI Liability Directive last week, citing "no foreseeable agreement" on the proposal and stating it would assess whether to table another proposal or choose a different approach.
The AI Liability Directive, originally proposed in September 2022, aimed to establish uniform rules for non-contractual civil liability for damage caused by AI systems and reduce the burden of proof for victims seeking compensation.
The EU AI Liability Directive should remain a key policy objective. It provides necessary safeguards, ensuring accountability and transparency in AI development. Without it, Europe risks falling behind in protecting citizens' rights and safety. By addressing AI's legal and ethical concerns, the directive strengthens public trust in innovation while positioning the EU as a leader in responsible AI governance.
Strict AI regulation risks stifling innovation and hindering Europe's ability to compete with global leaders. By focusing too heavily on oversight, Europe risks creating an environment where AI startups struggle under the weight of compliance costs, driving talent and investment elsewhere. Instead of restrictive policies, the EU should foster a more flexible, innovation-driven approach to AI development.