We’ve built an AI risk assessment tool designed specifically for GenAI/LLM applications. It's still early, but we’d love your feedback. Here’s what it does:
1. It performs comprehensive AI risk assessments by analyzing your codebase against AI regulations, frameworks, or even your internal policies. It identifies potential issues and suggests fixes directly through one-click PRs (see the first sketch after this list).
2. The first framework the platform supports is the OWASP Top 10 for LLM Applications 2025; ISO 42001 and custom policy documents are coming next.
3. We're a small, early-stage team, so the free tier offers 5 assessments per user. If you need more, just reach out; happy to help.
4. Sign-in via GitHub is required. We request read access to scan your code and write access to open PRs with fix suggestions (see the second sketch below).
5. We're looking for design partners to collaborate with us. If you're building compliance-by-design AI products, we'd love to chat.
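To make the kind of finding concrete, here's a rough sketch of our own (illustrative only, not the tool's actual output). It shows a pattern the OWASP Top 10 for LLM Applications 2025 calls Improper Output Handling, where raw model output reaches a shell, alongside one possible fix; the function and task names are made up for the example:

    # Illustrative only: the kind of pattern an OWASP LLM Top 10 scan could flag.
    import subprocess

    def llm_suggest_command(task: str) -> str:
        """Stand-in for a real LLM call; treat the return value as untrusted."""
        return f"echo working on {task}"  # a hostile prompt could yield "rm -rf /"

    def run_task_unsafe(task: str) -> str:
        # FLAGGED (Improper Output Handling): raw model output runs in a shell.
        cmd = llm_suggest_command(task)
        return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

    ALLOWED = {"list files": ["ls", "-l"], "disk usage": ["df", "-h"]}

    def run_task_safe(task: str) -> str:
        # SUGGESTED FIX: treat model output as advisory and execute only
        # allow-listed argument vectors, with shell=False (the default).
        if task not in ALLOWED:
            raise ValueError(f"task {task!r} is not on the allow-list")
        return subprocess.run(ALLOWED[task], capture_output=True, text=True).stdout

    if __name__ == "__main__":
        print(run_task_safe("list files"))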
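And since people often ask why write access is needed: here's a minimal sketch of the PR-opening step using the PyGithub library, with placeholder token, repo, and branch names (our own illustration under those assumptions, not our actual implementation; the fix branch is assumed to already be pushed):

    # Placeholder names throughout; shows the flow, not our implementation.
    from github import Github

    gh = Github("ghp_your_token_here")           # token granted at GitHub sign-in
    repo = gh.get_repo("your-org/your-llm-app")  # read access: fetch code to scan

    # Write access: open the one-click PR from an already-pushed fix branch.
    pr = repo.create_pull(
        title="Fix: stop executing raw LLM output in a shell",
        body="Automated fix suggestion from an AI risk assessment.",
        head="fix/improper-output-handling",
        base="main",
    )
    print(pr.html_url)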
Product URL: https://www.gettavo.com/app
we'd really appreciate feedback on:
- what you like
- what you don't like
- what you'd like to see as the next major feature
- bugs
- any other feedback
Feel free to comment here or reach out directly: email percyding@gettavo.com, LinkedIn: https://www.linkedin.com/in/percy-ding-a43861193/