About the only time I find myself using regular Wikipedia these days is if I need to know whether someone has died since August 2025, when this ZIM dump was created.
I have all of wikipedia in a single 156 GB text file. in my .zshrc i have

    fastWikiLookup() { cat ~/wikipedia.txt | grep "$@" }

If you want a free and massive performance optimization, remove the cat:

    fastWikiLookup() { grep "$@" ~/wikipedia.txt }

Reading and piping 156 GB of data to another process every time you want to look something up is a somewhat nontrivial amount of work. Grep can read the file directly, which should result in a pretty damn good speedup.
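If you want to see the difference for yourself, something like this should do it (a rough sketch; it assumes the file really is at ~/wikipedia.txt, and "Ada Lovelace" is just a stand-in query):

    # pipe through cat, then read directly; output is discarded so the
    # terminal isn't the bottleneck
    time (cat ~/wikipedia.txt | grep "Ada Lovelace" >/dev/null)
    time grep "Ada Lovelace" ~/wikipedia.txt >/dev/null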
i thought it was obvious i was joking but yeah, that would be faster
It was obvious and I was being a bit of a dummy this morning. Mea culpa.
Considering the community, I think catting 156 GB to grep and calling it fastWikiLookup is a subtle joke about how absurd this is.

Yeah, I was being pretty thick earlier today. Oopsie!
rip Greg would shine here
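For anyone out of the loop: Greg is ripgrep, aka rg. A sketch of the same hypothetical lookup with rg swapped in, assuming it's installed:

    # same hypothetical lookup, with ripgrep doing the scanning
    fastWikiLookup() { rg "$@" ~/wikipedia.txt }

Its SIMD-accelerated literal search should tear through a single huge file noticeably faster than plain grep.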
Poor Greg. 😔🪦
You know what? I’m leaving it.
RIP ripgrep.
Long live rip Greg.
How fast is it, really? How do you differentiate between topics?
the topic differentiation technology doesn’t exist yet, so i just hit ctrl+c about a second after i hit enter
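if you want to automate the ctrl+c, either of these should work (a sketch, still using the hypothetical function from above; note that timeout is GNU coreutils, so it may not exist on stock macOS):

    # stop after the first 5 matches instead of eyeballing it
    fastWikiLookup() { grep -m 5 "$@" ~/wikipedia.txt }

    # or kill the search after one second, ctrl+c style
    fastWikiLookup() { timeout 1 grep "$@" ~/wikipedia.txt }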
lol database engineers around the world who've built very complex ingest and query systems all of a sudden got very mad at your comment and they're not sure why
That’s the neat part …
what the hell I want that
Holy…
my shit is optimized AF