Chrome now includes a native on-device LLM (Gemini Nano), starting in version 138. I've been building with it since the origin trials; it's powerful, but the official Prompt API is still a bit awkward:
- Enforces sessions even for basic usage
- Requires user-triggered downloads
- Lacks type safety or structured error handling
So I open-sourced a small TypeScript wrapper I originally built for other projects to smooth over the rough edges:
GitHub: https://github.com/kstonekuan/simple-chromium-ai
npm: https://www.npmjs.com/package/simple-chromium-ai
- Stateless prompt() method inspired by Anthropic's SDK
- Built-in error handling and Result-based .Safe.* variants with neverthrow
- Token usage checks
- Simple initialization with a helper for the model download (which must be triggered by a user action)
It’s intentionally minimal for hacking and prototyping. If you need fine-grained control (e.g. streaming, memory control), use the native API directly:
https://developer.chrome.com/docs/ai/prompt-api
Would love to hear what people build with it or any feedback!
Comments URL: https://news.ycombinator.com/item?id=44482710
Points: 12
# Comments: 1