Ask HN: How do you maintain personal annotations for code you don't control?

I spend significant time reading and understanding codebases that I don't control (open source libraries, internal legacy systems, etc.). As I build understanding, I need to document my insights, gotchas, and mental models - but these notes are purely personal and shouldn't be part of the actual codebase.

My challenges:

1. These annotations need to be tightly coupled with specific locations in the source code (particular functions, variables, or even specific lines)

2. The underlying code changes regularly (new versions, updates from maintainers) which can break the connection between my notes and the code

3. My notes are private - they include half-formed thoughts, questions, and sometimes critical observations that wouldn't be appropriate as public comments

4. I want to preserve this knowledge across different machines and working environments

I've tried various approaches:

- Local IDE bookmarks (lost between sessions)
- Separate markdown files (hard to maintain precise code references)
- Private forks with comments (becomes unmaintainable as source evolves)
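For the "precise code references" problem specifically, one direction I've been toying with is content-based anchoring, roughly how patch tools relocate hunks: store a few surrounding lines with each note, then fuzzy-match that window against newer versions of the file. A minimal sketch (the helper names and the 0.6 threshold are just illustrative, not from any existing tool):

```python
import difflib

def make_anchor(source: str, line_no: int, context: int = 2) -> dict:
    """Capture a context window around a 1-based line, plus the
    note's offset within that window, as a content-based anchor."""
    lines = source.splitlines()
    start = max(0, line_no - 1 - context)
    end = min(len(lines), line_no + context)
    return {"window": lines[start:end], "offset": line_no - 1 - start}

def relocate(anchor: dict, new_source: str, threshold: float = 0.6):
    """Slide the anchor window over a newer version of the file and
    return the 1-based line the note should now point at, or None
    if the best match is too weak (code removed or heavily rewritten)."""
    new_lines = new_source.splitlines()
    w = len(anchor["window"])
    target = "\n".join(anchor["window"])
    best_score, best_at = 0.0, None
    for i in range(max(1, len(new_lines) - w + 1)):
        candidate = "\n".join(new_lines[i:i + w])
        score = difflib.SequenceMatcher(None, target, candidate).ratio()
        if score > best_score:
            best_score, best_at = score, i
    if best_at is None or best_score < threshold:
        return None
    return best_at + anchor["offset"] + 1
```

The idea is that a note survives lines being inserted above it, and degrades gracefully (returns None instead of pointing at the wrong code) when the anchored region is rewritten. It obviously breaks down on large refactors, which is part of why I'm asking.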

I'm curious how others solve this problem. Do you have a systematic approach for maintaining personal annotations on code that's not under your control? How do you handle the challenge of the code evolving while keeping your notes relevant?

Would especially love to hear from people working with large codebases or those who regularly need to dive deep into external dependencies.


Comments URL: https://news.ycombinator.com/item?id=42514803

Points: 7

# Comments: 6


Created 6 months ago | Dec 26, 2024, 13:50:12

