I'm sharing my project to control 3D models with voice commands and hand gestures:
- use voice commands to change interaction mode (drag, rotate, scale, animate)
- use hand gestures to control the 3D model
- drag/drop to import other models (only GLTF format supported for now)
Created using threejs, mediapipe, web speech API, rosebud AI, and Quaternius 3D models
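The voice-driven mode switching described above can be sketched roughly as follows. This is a minimal illustration assuming the browser Web Speech API (`webkitSpeechRecognition`); the mode names come from the post, but `parseModeCommand` and the wiring are hypothetical, not the repo's actual code.

```javascript
// Interaction modes named in the post.
const MODES = ["drag", "rotate", "scale", "animate"];

// Pure helper: pick a known mode keyword out of a spoken transcript,
// or return null if none is present.
function parseModeCommand(transcript) {
  const words = transcript.toLowerCase().split(/\s+/);
  return MODES.find((m) => words.includes(m)) ?? null;
}

// Browser-only wiring (guarded so the sketch also loads outside a browser).
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  const rec = new webkitSpeechRecognition();
  rec.continuous = true;
  rec.onresult = (e) => {
    const text = e.results[e.results.length - 1][0].transcript;
    const mode = parseModeCommand(text);
    if (mode) {
      // In the real app this would update the gesture handler's state.
      console.log("switching interaction mode to:", mode);
    }
  };
  rec.start();
}
```

Keeping the transcript parsing as a pure function makes it easy to unit-test without a microphone or a browser.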
GitHub repo: https://github.com/collidingScopes/3d-model-playground
Demo: https://xcancel.com/measure_plan/status/1929900748235550912
I'd love to get your feedback! Thank you
Comments URL: https://news.ycombinator.com/item?id=44170694
Points: 24
# Comments: 8