Show HN: Open-Source Article 12 Logging Infrastructure for the EU AI Act
42 points by systima 1 day ago | 7 comments
EU legislation (which in many cases also affects UK and US companies) requires the ability to faithfully reconstruct agentic AI events.

I've worked in a number of regulated industries off & on for years, and recently hit this gap.

We already had strong observability, but if someone asked me to prove exactly what happened for a specific AI decision X months ago (and demonstrate that the log trail had not been altered), I could not.

The EU AI Act has already entered into force, and its Article 12 kicks in this August, requiring automatic event recording and six-month retention for high-risk systems. Many legal commentators have suggested this reads more like an append-only-ledger requirement than standard application logging.

With this in mind, we built a small, free, open-source TypeScript library for Node apps using the Vercel AI SDK that captures every inference call as an append-only log.

It wraps the model in middleware, automatically logs every inference call to structured JSONL in your own S3 bucket, chains entries with SHA-256 hashes for tamper detection, enforces a 180-day retention floor, and provides a CLI to reconstruct a decision and verify integrity. There is also a coverage command that flags likely gaps (in practice omissions are a bigger risk than edits).
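The hash-chaining idea above can be sketched in a few lines. This is an illustrative, self-contained example (the entry shape, `GENESIS` sentinel, and function names are mine, not the library's API): each entry embeds the previous entry's SHA-256 hash, so editing any entry breaks every subsequent link.

```typescript
// Hypothetical sketch of hash-chained append-only logging, not the library's actual API.
import { createHash } from "node:crypto";

interface LogEntry {
  seq: number;
  timestamp: string;
  payload: unknown; // structured inference record
  prevHash: string; // hash of the previous entry ("GENESIS" for the first)
  hash: string;     // SHA-256 over this entry's canonical form
}

function entryHash(seq: number, timestamp: string, payload: unknown, prevHash: string): string {
  return createHash("sha256")
    .update(JSON.stringify({ seq, timestamp, payload, prevHash }))
    .digest("hex");
}

function append(log: LogEntry[], payload: unknown, timestamp: string): LogEntry {
  const prevHash = log.length ? log[log.length - 1].hash : "GENESIS";
  const seq = log.length;
  const entry: LogEntry = { seq, timestamp, payload, prevHash, hash: entryHash(seq, timestamp, payload, prevHash) };
  log.push(entry);
  return entry;
}

// Verify the chain: every hash must recompute, and every prevHash must match its predecessor.
function verify(log: LogEntry[]): boolean {
  return log.every((e, i) => {
    const prev = i === 0 ? "GENESIS" : log[i - 1].hash;
    return e.prevHash === prev && e.hash === entryHash(e.seq, e.timestamp, e.payload, prev);
  });
}

const log: LogEntry[] = [];
append(log, { model: "gpt-4o", output: "approve" }, "2025-01-01T00:00:00Z");
append(log, { model: "gpt-4o", output: "deny" }, "2025-01-01T00:01:00Z");
console.log(verify(log)); // true
(log[0].payload as { output: string }).output = "deny"; // tamper with an old entry
console.log(verify(log)); // false
```

Verification is linear in the log length, which is why a separate coverage check for omissions matters: a truncated-but-intact prefix still verifies.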

The library is deliberately simple: TS, targeting Vercel AI SDK middleware, S3 or local fs, linear hash chaining. It also works with Mastra (agentic framework), and I am happy to expand its integrations via PRs.
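To show the middleware-wrapping shape, here is a deliberately simplified, self-contained sketch using local stand-in types; the real Vercel AI SDK middleware surface (`wrapLanguageModel` and friends) differs in detail, and the audit sink here is a placeholder for the JSONL/S3 writer.

```typescript
// Stand-in types; the real Vercel AI SDK middleware interface is richer than this.
interface GenerateResult { text: string; }
type Generate = (prompt: string) => Promise<GenerateResult>;

// Hypothetical audit sink: in-memory here, structured JSONL in S3 in the real library.
const records: Array<{ prompt: string; text: string; at: string }> = [];

// Wrap a model call so every inference is recorded before the result is returned.
function withAuditLog(doGenerate: Generate): Generate {
  return async (prompt) => {
    const result = await doGenerate(prompt);
    records.push({ prompt, text: result.text, at: new Date().toISOString() });
    return result;
  };
}

const model: Generate = async (p) => ({ text: `echo: ${p}` });
const audited = withAuditLog(model);
audited("hello").then((r) => console.log(r.text, records.length));
```

The wrapper pattern is what makes the logging automatic: callers use the wrapped model exactly as before, so coverage does not depend on developers remembering to log at each call site.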

Blog post with link to repo: https://systima.ai/blog/open-source-article-12-audit-logging

I'd value feedback, thoughts, and any critique.




Anyone can generate an alternative chain of SHA-256 hashes. Perhaps you should consider timestamping, e.g. https://opentimestamps.org/. As for what the regulation says, I haven't looked, but perhaps it doesn't require the system to be actually tamper-proof.

Thanks for the thoughts and feedback.

Fair point on the reconstruction attack.

The library is deliberately scoped as tamper-evident, not tamper-proof; it detects modification but does not prevent wholesale chain reconstruction by someone with storage access. The design assumes defence-in-depth: S3 Object Lock (Compliance mode) at the infrastructure layer, hash chain verification at the application layer.

External timestamping (OpenTimestamps, RFC 3161) would definitely add independent temporal anchoring and is worth considering as an optional feature. From what I can see, Article 12 does not currently prescribe specific cryptographic mechanisms (but of course the assurance level would increase with it).

On the regulatory question: Article 12 requires "automatic recording" that enables monitoring and reconstruction, and current regulatory guidance does not require tamper-proof storage, only trustworthy, auditable records. The hash chain plus immutable storage is designed to meet that bar, but the point you raise is a good and thoughtful one.


Good one. Quick question: if you store it hash-chained, how are you handling GDPR erasure requests? Isn't GDPR supposed to require erasure within 30 days, rather than retention for 180? Do you recreate the chain, use some sort of pseudonymization, or something else?

Great question.

voxic11 is right that the AI Act creates a legal obligation that provides a lawful basis for processing under GDPR Article 6(1)(c).

To add to that, Article 17(3)(b) specifically carves out an exemption to the right to erasure where retention is necessary to comply with a legal obligation.

(So the defence works at both levels; you have a lawful basis to retain, and erasure requests don’t override it during the mandatory retention period).

That said, GDPR data minimisation (Article 5(1)(c)) still constrains what you log.

The library addresses this at write time today: the pii config lets you SHA-256-hash inputs/outputs before they hit the log and apply regex redaction patterns, so personal data need never enter the chain in the first place.

This enables the pattern of “Hash by default, only log raw where necessary for Article 12”.
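The "hash by default, redact the rest" pattern described above can be sketched as follows. The specific regexes, placeholder tokens, and function names are illustrative, not the library's pii config.

```typescript
// Hedged sketch of write-time PII handling; patterns and names are illustrative only.
import { createHash } from "node:crypto";

// Regex redaction patterns applied before anything reaches the log.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"], // email addresses
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],         // US SSN-style numbers
];

function redact(text: string): string {
  return REDACTIONS.reduce((t, [re, sub]) => t.replace(re, sub), text);
}

// Hash-by-default: the chain stores only a digest of the raw content.
function hashContent(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

const prompt = "Approve loan for jane@example.com, SSN 123-45-6789?";
console.log(redact(prompt));             // "Approve loan for [EMAIL], SSN [SSN]?"
console.log(hashContent(prompt).length); // 64
```

Storing the digest still lets an auditor later confirm that a given piece of raw content (if produced from elsewhere) matches what was logged, without the log itself ever holding the personal data.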

For cases where raw content must be logged (e.g., full decision reconstruction for a regulator), we’re planning a dual-layer storage approach. The hash chain would cover a structural envelope (timestamps, decision ID, model ID, parameters, latency, hash pointers) while the actual PII-bearing content (input prompts, output text) would live in a separate referenced object.

Erasure would then mean deleting the content object, and the chain would stay intact because it never hashed the raw content directly.

The regulator would also therefore see a complete, tamper-evident chain of system activity.
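The dual-layer idea can be made concrete with a small sketch (envelope fields, store shape, and names are hypothetical, not the planned implementation): the chain hashes only an envelope holding a content *reference*, so deleting the content object leaves every chain link valid.

```typescript
// Illustrative dual-layer sketch: hash chain over envelopes, erasable content store.
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

interface Envelope {
  decisionId: string;
  timestamp: string;
  contentRef: string; // digest of the content object, not the content itself
  prevHash: string;
  hash: string;
}

const contentStore = new Map<string, string>(); // ref -> raw content (erasable)
const chain: Envelope[] = [];

function record(decisionId: string, timestamp: string, content: string): void {
  const contentRef = sha256(content);
  contentStore.set(contentRef, content);
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const body = JSON.stringify({ decisionId, timestamp, contentRef, prevHash });
  chain.push({ decisionId, timestamp, contentRef, prevHash, hash: sha256(body) });
}

// Chain verification never touches the content store, so erasure cannot break it.
function verifyChain(): boolean {
  return chain.every((e, i) => {
    const prev = i === 0 ? "GENESIS" : chain[i - 1].hash;
    const body = JSON.stringify({ decisionId: e.decisionId, timestamp: e.timestamp, contentRef: e.contentRef, prevHash: prev });
    return e.prevHash === prev && e.hash === sha256(body);
  });
}

record("d1", "t1", "raw prompt with PII");
record("d2", "t2", "another prompt");
contentStore.delete(chain[0].contentRef); // GDPR erasure of the content object
console.log(verifyChain()); // true — chain intact despite erasure
```

One caveat worth noting in a real design: a bare SHA-256 of low-entropy content is guessable, so a production contentRef would likely need a salt or random ID to avoid leaking erased content by dictionary attack.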


Thanks both for the replies. Couldn't you make it simpler: encrypt the data, store the encryption key separately, and move the raw data to cold storage. If a user wants to erase, delete the encryption key, avoiding a massive recompute from cold storage. Do you think this is a better approach? It's not efficient, but at large scale (petabytes) it could work. Developers make mistakes, though: if they miss encrypting something due to a bug in the code and want to fix it, the hash chaining will be a problem.
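The scheme described above can be sketched with per-record keys and AES-256-GCM from node:crypto; the store names and record shape are illustrative, not anyone's actual implementation.

```typescript
// Illustrative crypto-shredding sketch: one key per record, erase by deleting the key.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const keyStore = new Map<string, Buffer>(); // recordId -> key (deletable)

function encryptRecord(recordId: string, plaintext: string) {
  const key = randomBytes(32);
  const iv = randomBytes(12);
  keyStore.set(recordId, key);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptRecord(recordId: string, blob: { iv: Buffer; ciphertext: Buffer; tag: Buffer }): string | null {
  const key = keyStore.get(recordId);
  if (!key) return null; // key shredded: record is permanently unrecoverable
  const decipher = createDecipheriv("aes-256-gcm", key, blob.iv);
  decipher.setAuthTag(blob.tag);
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]).toString("utf8");
}

const blob = encryptRecord("r1", "sensitive prompt text");
console.log(decryptRecord("r1", blob)); // "sensitive prompt text"
keyStore.delete("r1");                  // erasure request: shred the key
console.log(decryptRecord("r1", blob)); // null
```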

IMO what you’re describing is essentially crypto-shredding.

It would definitely work (and when dealing with petabyte levels of data the simplicity of only having to delete the key is convenient).

We’re leaning toward the dual-layer separation I described, though (metadata separate from content), mainly because crypto-shredding means every read (including regulatory reconstruction) depends on a key store.

In my view that’s a significant dependency for an audit log whose whole purpose is reliable reconstructability, whereas dual-layer lets the chain stand on its own.

Your point about developer mistakes is fair. It applies to dual-layer too, as your example shows, but I’d say crypto-shredding isn’t immune to mistakes either: deleting the key only works if the key and plaintext never accidentally leaked elsewhere (logs, backups, etc.).


GDPR permits retention where necessary for compliance with a legal obligation (Article 6(1)(c)).

The AI Act qualifies as such a legal obligation.




