We built BSE (Bramble Semantic Engine) – a semantic compressor that transforms natural inputs into low-dimensional structured representations.
It's designed as a preprocessing engine for LLMs, capable of reducing long inputs into compact, logic-preserving forms across:
1. Language
- Extracts SVO (Subject, Verb, Object) structure
- Captures modifiers (adjectives/adverbs)
- Restores pronouns from short-term memory
- Detects questions
- Computes:
  - Compression Rate (%)
  - Semantic Loss (%)
- Compares sentence compression outputs via SDC:
  - Subject-Subject, Verb-Verb, Object-Subject similarity
  - Sentence distance
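BSE's parser and metrics aren't public, so here is a minimal sketch of what the two reported numbers could look like, assuming the compressed form is just the SVO triple (the triple is hand-supplied below in place of a real parser) and using dropped content words as a crude stand-in for semantic loss:

```python
def compress(sentence, svo):
    """Collapse a sentence to its SVO triple.

    `svo` is a hand-supplied (subject, verb, object) tuple standing in
    for BSE's actual extractor, whose internals are not public.
    """
    return " ".join(svo)

def tokens(text):
    """Lowercased words with surrounding punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def compression_rate(original, compressed):
    """Percentage of characters removed by compression."""
    return 100.0 * (1 - len(compressed) / len(original))

def semantic_loss(original, compressed):
    """Crude proxy metric: share of unique words dropped."""
    dropped = tokens(original) - tokens(compressed)
    return 100.0 * len(dropped) / len(tokens(original))

sentence = "The tired old dog slowly chased the bright red ball."
triple = ("dog", "chased", "ball")  # assumed parser output
short = compress(sentence, triple)
print(short)  # dog chased ball
print(f"compression: {compression_rate(sentence, short):.1f}%")
print(f"semantic loss: {semantic_loss(sentence, short):.1f}%")
```

A real implementation would score semantic loss with embeddings rather than word overlap; the point here is only the shape of the pipeline: parse, collapse, then measure what the collapse cost.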
2. Image
- Crops and weights center-priority patches
- Converts them into 100x100 weighted matrices
- Visualizes:
  - R, G, B channels
  - Brightness
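The post doesn't say which falloff curve the center-priority weighting uses, so this sketch assumes a Gaussian kernel over a 100x100 grid, normalized so the center weighs 1.0:

```python
import numpy as np

def center_weight_matrix(size=100, sigma=0.35):
    """Build a size x size weight map that peaks at the image center.

    The Gaussian falloff and sigma are assumptions; the post only says
    patches are "center-priority" weighted, not which kernel BSE uses.
    """
    ax = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return w / w.max()  # normalize so the peak weight is 1.0

def weighted_patch(channel, weights):
    """Apply the weight map to one color channel (values in [0, 1])."""
    return channel * weights

w = center_weight_matrix()
print(w.shape)              # (100, 100)
print(w[50, 50] > w[0, 0])  # center outweighs the corner
```

Applying `weighted_patch` per R, G, B channel yields the three weighted matrices that the demo visualizes; brightness would be a weighted average of the channels.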
3. Audio
- Decomposes audio into pitch & intensity across frequency bands
- Returns normalized 2D matrices
- Visualizes them as grayscale spectro-patches
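The band layout and frame sizes are not specified in the post, so this sketch assumes windowed FFT frames summed into equal-width frequency bands, producing the normalized 2D (frames x bands) matrix that would be rendered as a grayscale spectro-patch:

```python
import numpy as np

def band_energies(signal, n_bands=8, frame=1024, hop=512):
    """Frame a mono signal and sum FFT magnitude per frequency band.

    Equal-width bands, Hann windowing, and these frame sizes are all
    assumptions; the post only says BSE returns pitch & intensity
    across frequency bands as a normalized 2D matrix.
    """
    frames = [signal[i:i + frame]
              for i in range(0, len(signal) - frame + 1, hop)]
    mat = []
    for f in frames:
        spectrum = np.abs(np.fft.rfft(f * np.hanning(frame)))
        bands = np.array_split(spectrum, n_bands)  # equal-width bands
        mat.append([b.sum() for b in bands])
    mat = np.array(mat)
    return mat / max(mat.max(), 1e-12)  # normalize to [0, 1]

# A pure 440 Hz tone should concentrate energy in the lowest band.
sr = 16000
t = np.linspace(0, 0.5, int(sr * 0.5), endpoint=False)
m = band_energies(np.sin(2 * np.pi * 440 * t))
print(m.shape)  # (frames, n_bands)
```

Each row is one time slice and each column one band, so the matrix maps directly to a grayscale image with intensity as brightness.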
Live demo (Gradio): https://huggingface.co/spaces/Sibyl-V/BSE_demo
Feedback welcome on:
- Compression logic
- Use cases (LLM fine-tuning, retrieval, alignment)
- Design of the multi-modal structured output
Built in 48 hours by a solo dev & their black nine-tailed fox partner. Let us know what you'd improve — and what scares you.
Comments URL: https://news.ycombinator.com/item?id=43670527
Points: 6
# Comments: 0