[-] fossilesque@mander.xyz 19 points 19 hours ago

If someone has a transcript bot, pls DM me

500 points · Good Design (thelemmy.club)
submitted 21 hours ago (last edited 21 hours ago) by fossilesque@mander.xyz to c/science_memes@mander.xyz
832 points · Astronauts are funny (thelemmy.club)
[-] fossilesque@mander.xyz 2 points 23 hours ago* (last edited 23 hours ago)

Sorry, I can't perfect the formatting of this post right now cuz I gotta run, but I will later. Don't miss this one: this plugin is SO GOOD.

https://www.reddit.com/r/ObsidianMD/comments/1shntdn/new_plugin_llm_wiki_turn_your_vault_into_a/

From the link:


Inspired by Andrej Karpathy’s post, I wanted to use an LLM to talk to my notes — without having to send them to OpenAI, Anthropic, or Google. I also wanted to see if the whole thing could work with local models, on regular hardware.

LLM Wiki is the result. It reads your vault, extracts people, ideas, and connections from your notes, and lets you ask questions in natural language. Answers stream back with clickable links to the source notes so you can verify everything.
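To picture what "extracts people, ideas, and connections" might produce, here is a minimal sketch of an entity/connection store in Python. All class and field names here are illustrative assumptions for this sketch, not the plugin's real schema:

```python
# Illustrative data model for an extracted knowledge graph. Entities keep a
# pointer back to their source note so answers can link to it for verification.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    name: str
    kind: str          # e.g. "person", "tool", "concept"
    source_note: str   # vault path, so answers can link back to the note

@dataclass
class KnowledgeGraph:
    entities: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (from_name, relation, to_name)

    def add(self, entity):
        self.entities[entity.name] = entity

    def connect(self, a, relation, b):
        self.edges.append((a, relation, b))

    def neighbors(self, name):
        """Names directly connected to `name`, in either direction."""
        out = set()
        for a, _, b in self.edges:
            if a == name:
                out.add(b)
            elif b == name:
                out.add(a)
        return out
```

A chat answer can then walk `neighbors()` for the entities mentioned in a question and cite each hit's `source_note`.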

It runs on Ollama by default: free, local, and your notes never leave your machine. If you want more power and less privacy, cloud providers are available as an option, using your own API keys.

What it does

Extracts knowledge — entities (people, organizations, tools, books, places), concepts (ideas, theories, frameworks), and the connections between them
Answers questions in natural language — a chat interface grounded in your own notes, with source links
Hybrid search — combines keyword matching, semantic similarity, and vault structure to find the right context, even when your question uses different words
Knows when it doesn’t know — if your vault doesn’t have enough on a topic, it says so instead of making things up
Generates wiki pages — structured markdown pages for every entity, concept, and source, compatible with Obsidian Bases
Keeps up with your writing — saving a note triggers background re-extraction, no manual re-indexing needed
Multi-turn conversations — chats are saved and resumable
Multiple providers — Ollama (local, free) by default; OpenAI, Anthropic, and Google available in settings
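The hybrid-search idea above can be sketched as a weighted blend of three signals. The weights, signal choices, and confidence threshold below are assumptions for illustration, not the plugin's actual implementation:

```python
# Illustrative hybrid-search scoring: blend keyword overlap, embedding
# similarity, and a structural bonus for well-linked notes.
import math

def keyword_score(query_terms, note_terms):
    """Fraction of query terms that literally appear in the note."""
    if not query_terms:
        return 0.0
    hits = sum(1 for t in query_terms if t in note_terms)
    return hits / len(query_terms)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query_terms, query_vec, note, w_kw=0.4, w_sem=0.5, w_link=0.1):
    """Weighted blend: keywords + semantics + vault structure (backlinks)."""
    return (w_kw * keyword_score(query_terms, note["terms"])
            + w_sem * cosine(query_vec, note["embedding"])
            + w_link * min(note["backlinks"], 5) / 5)
```

The semantic term is what lets a question phrased in different words still land on the right note, and a "knows when it doesn't know" behavior falls out naturally: if no note's score clears some threshold, answer that the vault doesn't cover the topic instead of generating one.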

Screenshots

Main interface

Sources

Settings

Quick start

Install Ollama and pull two models (~5 GB total):
    ollama pull qwen2.5:7b
    ollama pull nomic-embed-text
Install LLM Wiki from Community Plugins or GitHub (https://github.com/domleca/llm-wiki)
Run "LLM Wiki: Run extraction now" from the command palette
Run "Ask knowledge base" and ask your first question
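Under the hood, a plugin like this talks to the local Ollama server over HTTP. As a sketch of what such a call looks like, here is a stdlib-only request builder; `/api/generate` with `model`/`prompt`/`stream` fields is Ollama's documented REST API on its default port 11434, while the helper function itself is just illustrative:

```python
import json
import urllib.request

def build_ollama_request(prompt, model="qwen2.5:7b",
                         host="http://localhost:11434"):
    """Build a non-streaming generate request for a local Ollama server."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    return urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# To actually send it (requires Ollama running locally):
# with urllib.request.urlopen(build_ollama_request("Say hi")) as resp:
#     print(json.loads(resp.read())["response"])
```

Because the host is localhost, the prompt (and your note content inside it) never crosses the network, which is the whole privacy point of the default setup.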

Privacy

With Ollama (default): everything stays on your machine, nothing is sent anywhere
Cloud providers are opt-in and clearly labeled
No telemetry, analytics, or tracking

GitHub: https://github.com/domleca/llm-wiki

Feedback welcome — especially on extraction quality and search relevance. This is v1.0 and I’d love to hear what works and what doesn’t.

[-] fossilesque@mander.xyz 2 points 1 day ago* (last edited 1 day ago)

👵 I made your internet. This bit is much older (2006), and it's got many variants that came after. Looks like gregnant was 2016.

https://knowyourmeme.com/memes/how-is-babby-formed

[-] fossilesque@mander.xyz 6 points 1 day ago

Throwing flux ropes

[-] fossilesque@mander.xyz 5 points 1 day ago

It's the Socratic method.

18 points · Zotero (youtu.be)
[-] fossilesque@mander.xyz 1 points 1 day ago

Japanese has a large number of pronouns, differing in use by formality, gender, age, and relative social status of speaker and audience. Further, pronouns are an open class, with existing nouns being used as new pronouns with some frequency.

[-] fossilesque@mander.xyz 51 points 1 day ago

RIP Harambe

43 points · 代名詞 ("pronouns") (thelemmy.club)
[-] fossilesque@mander.xyz 49 points 1 day ago

Am I Pragnent?

684 points · I want to believe (thelemmy.club)
555 points · Land where (thelemmy.club)

fossilesque

0 post score
0 comment score
joined 3 years ago