EU AI Act (approved March 2024, phased enforcement 2024-2027): the world's first comprehensive AI law, built on a risk-based approach. It applies to any AI system deployed in the EU, including those from developers outside the EU.
4 risk tiers:
1. Unacceptable risk: BANNED OUTRIGHT
- Social scoring (China-style).
- Exploiting vulnerabilities (children, disability).
- Biometric categorization by race/religion/sexual orientation.
- Emotion recognition in the workplace or education.
- Real-time remote biometric identification in public spaces (a few narrow exceptions for law enforcement).
- Untargeted scraping of facial images to build databases.
2. High risk: STRICT REQUIREMENTS
- AI used in: hiring, education admissions, credit scoring, law enforcement, migration, critical infrastructure, medical devices, biometric identification.
- Obligations:
- Risk management system: identify, assess, and mitigate risk across the lifecycle.
- Data governance: training data quality, bias mitigation.
- Technical documentation: how the model works, data, training.
- Logging: automatic event logs.
- Transparency: users know they are interacting with AI.
- Human oversight: humans can intervene.
- Accuracy, robustness, and cybersecurity requirements.
- Conformity assessment before deployment.
- CE marking, as for physical products.
- Registration in the EU database.
3. Limited risk: TRANSPARENCY
- Chatbots, deepfakes, emotion recognition, biometric categorization.
- Obligations: disclose to users that they are interacting with AI; AI-generated content must be labeled.
4. Minimal risk: NO RESTRICTIONS
- Spam filters, game AI, ordinary recommendation systems.
- Voluntary codes of conduct.
General-Purpose AI (GPAI): a separate tier (foundation models such as GPT-4, Claude, Gemini):
- All GPAI:
- Technical documentation.
- Copyright compliance (transparency about training data).
- Summary of training content.
- GPAI with systemic risk (> 10²⁵ FLOPs of training compute, i.e. the top models):
- Model evaluation + adversarial testing.
- Risk mitigation documentation.
- Incident reporting.
- Cybersecurity protection.
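The 10²⁵ FLOPs threshold can be sanity-checked with the widely used approximation that training compute ≈ 6 × parameters × training tokens. A minimal sketch; the model sizes below are hypothetical illustrations, not figures from the Act:

```python
# Rough check against the EU AI Act systemic-risk threshold of 1e25 FLOPs.
# Uses the common approximation: training FLOPs ≈ 6 * N_params * N_tokens.
# The example model sizes are hypothetical, for illustration only.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the AI Act

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens

def is_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15T tokens:
print(training_flops(70e9, 15e12))    # 6.3e24, just below the threshold
print(is_systemic_risk(70e9, 15e12))  # False

# A hypothetical 1T-parameter model trained on 20T tokens:
print(is_systemic_risk(1e12, 20e12))  # True (1.2e26 FLOPs)
```

In practice the classification is made by the EU AI Office, but this back-of-the-envelope estimate shows why only the largest frontier training runs cross the line.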
Enforcement timeline:
- Feb 2025: prohibited AI + AI literacy rules.
- Aug 2025: GPAI rules.
- Aug 2026: high-risk rules (main compliance deadline).
- Aug 2027: remaining high-risk provisions.
Penalties:
- Prohibited AI: up to €35M or 7% of global revenue (whichever is higher).
- High-risk violations: up to €15M or 3%.
- Incorrect information: up to €7.5M or 1.5%.
- Larger than GDPR fines.
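The "whichever is higher" rule means the applicable cap scales with company size. A minimal sketch using the tier caps listed above; the example revenue figure is hypothetical:

```python
# Fine caps under the EU AI Act: the higher of a fixed amount or a
# percentage of global annual revenue applies.

FINE_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),   # €35M or 7%
    "high_risk": (15_000_000, 0.03),       # €15M or 3%
    "incorrect_info": (7_500_000, 0.015),  # €7.5M or 1.5%
}

def max_fine(tier: str, global_revenue_eur: float) -> float:
    """Return the maximum applicable fine: whichever is higher."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_revenue_eur)

# For a hypothetical company with €2B global revenue, a prohibited-AI
# violation is capped by the revenue-based amount:
print(max_fine("prohibited_ai", 2_000_000_000))  # 140000000.0 (€140M = 7%)

# For a small company (€50M revenue), the fixed cap dominates:
print(max_fine("prohibited_ai", 50_000_000))     # 35000000 (€35M)
```

For any company with revenue above €500M, the percentage cap on prohibited-AI violations exceeds the €35M floor.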
Concrete obligations for AI engineering teams:
1. Classify each AI system into a tier: self-assess or consult legal.
2. Inventory: a list of all AI systems being built or deployed.
3. Documentation pipeline: model card, data sheet, and risk assessment for each system.
4. Bias audits with regular testing for high-risk systems.
5. Human oversight: UI designed for human review and override.
6. Logging infrastructure: event logs with the required retention.
7. Transparency UI: "you are chatting with an AI" disclosure, labels on AI-generated content.
8. Incident response plan: for GPAI and high-risk systems.
9. User rights implementation: contest decisions, explanations, data access.
10. Supplier due diligence: when using 3rd-party AI (OpenAI, Anthropic), verify their compliance.
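Obligations 1-3 above (classification, inventory, documentation) can be backed by a simple machine-readable record per system. A sketch; the field names and gap checks are illustrative assumptions, not an official EU schema:

```python
# Sketch of an AI-system inventory entry covering risk classification,
# documentation links, and oversight status. Fields are illustrative,
# not an official EU AI Act schema.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: RiskTier
    model_card_url: str = ""       # documentation pipeline (obligation 3)
    human_oversight: bool = False  # obligation 5
    logging_enabled: bool = False  # obligation 6
    third_party_providers: list[str] = field(default_factory=list)  # obligation 10

    def compliance_gaps(self) -> list[str]:
        """Flag obviously missing items for high-risk systems."""
        gaps = []
        if self.risk_tier == RiskTier.HIGH:
            if not self.model_card_url:
                gaps.append("missing model card")
            if not self.human_oversight:
                gaps.append("no human oversight")
            if not self.logging_enabled:
                gaps.append("no event logging")
        return gaps

# Hiring falls under the Act's high-risk categories:
cv_screener = AISystemRecord(
    name="cv-screener",
    purpose="rank job applications",
    risk_tier=RiskTier.HIGH,
    third_party_providers=["OpenAI"],
)
print(cv_screener.compliance_gaps())
# ['missing model card', 'no human oversight', 'no event logging']
```

Keeping such records in version control gives an auditable inventory and makes conformity-assessment preparation a diff review rather than an archaeology project.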
Practical steps for engineering teams:
- Label AI-generated content with C2PA watermarks, SynthID.
- Clear user disclosure in the chat UI.
- Log all LLM interactions with 6+ month retention.
- Documentation templates (model cards following a standard).
- Impact assessment templates (DPIA + Fundamental Rights Impact Assessment, FRIA, for high-risk).
- Fairness metric tracking in production monitoring.
Conflicts with business goals:
- High-risk AI is slower to deploy (conformity assessment).
- GPAI training data transparency conflicts with the use of copyrighted data.
- Some features are banned outright (emotion recognition in hiring), with direct business model impact.
Strategies:
- Design for compliance from the start; don't retrofit.
- Regional variants: limit high-risk features in the EU, offer full features elsewhere.
- Work with legal early: a lawyer who understands the AI Act is an asset.
- Monitor guidance: the EU AI Office publishes technical guidance continuously.