-32
submitted 15 hours ago by PumpkinDrama@reddthat.com to c/linux@lemmy.ml
[-] artyom@piefed.social 34 points 14 hours ago* (last edited 14 hours ago)

Dear God, please don't. FF does not want your AI slop bug reports. You people are ruining open source.

[-] Hexarei@beehaw.org 2 points 9 hours ago

Especially from a 7b model

[-] utopiah@lemmy.ml 0 points 4 hours ago

This makes me genuinely curious, who thought that would be a good idea?

It feels like a lot of "contribution" to open source suddenly is fueled by AI hype. Is it a LinkedIn/TikTok "trick" that is being amplified that somehow one will get a very well paid job at a BigTech company if they somehow have a lot of contributions on popular projects?

Where does this trend actually come from?

Did anybody doing so ever bother checking contribution guidelines to see which tasks should actually be prioritized and if so with which tools?

This seems like a recurring pattern so it's not a random idea someone had.

[-] Hexarei@beehaw.org 3 points 9 hours ago

run a local LLM like Claude!

Look inside

"Run ollama"

Ollama will almost always be slower than running vLLM or llama.cpp; nobody should be suggesting it for anything agentic. On most consumer hardware, the availability of llama.cpp's --cpu-moe flag alone is absurdly good, and it's worth the effort to familiarize yourself with llama.cpp instead of Ollama.
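For reference, a sketch of what that can look like (the model path, layer count, and context size are placeholders, not recommendations; check `llama-server --help` for the flags your build supports):

```shell
# Illustrative only. --cpu-moe keeps the MoE expert tensors in system RAM
# while -ngl offloads the remaining layers to the GPU, which can make large
# MoE models usable on consumer cards with limited VRAM.
llama-server \
    -m ./models/some-moe-model-q4_k_m.gguf \
    --cpu-moe \
    -ngl 99 \
    -c 8192 \
    --port 8080
```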

[-] ctrl_alt_esc@lemmy.ml 1 points 2 hours ago

I have used Ollama so far and it's indeed quite slow. Can you recommend a good guide for setting up llama.cpp (on Linux)? I have Ollama running in a Docker container with Open WebUI; that kind of setup would be ideal.

[-] Hexarei@beehaw.org 1 points 2 hours ago

I just run the llama-swap docker container with a config file mounted, set to listen for config changes so I don't have to restart it to add new models. I don't have a guide besides the README for llama-swap.
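For anyone curious, a minimal sketch of what such a llama-swap config can look like (the model name, binary path, and GGUF file are made-up placeholders; the llama-swap README documents the actual schema, including the `${PORT}` macro it substitutes at launch):

```yaml
# config.yaml mounted into the llama-swap container (illustrative only)
models:
  "example-7b":
    cmd: |
      /usr/local/bin/llama-server
      --port ${PORT}
      -m /models/example-7b-q4_k_m.gguf
      -ngl 99
```

llama-swap then exposes one OpenAI-compatible endpoint and starts or swaps the matching llama-server process based on the `model` field of each request.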

[-] org@lemmy.org 14 points 13 hours ago

Pretty sure if you have to ask how to do it, you’re not qualified to do it.

[-] TachyonTele@piefed.social 4 points 12 hours ago
[-] hendrik@palaver.p3x.de 4 points 14 hours ago* (last edited 14 hours ago)

Did you forget the body text? Or is this some bug? Looks like a question here, and like an AI fabricated tutorial in the original version of this cross-post.

[-] ZWQbpkzl@hexbear.net 2 points 13 hours ago

You'll have to be more specific about how Anthropic is debugging Firefox. There are many possible setups. In general, though, you'll need:

  • an LLM model file
  • some OpenAI-compatible server, e.g. LM Studio, llama.cpp, Ollama
  • some sort of client for that server; there's a myriad of options here. OpenCode is the most like Claude, but there are also more modular, programmatic clients, which might suit a long-term task
  • the Firefox source code and/or an MCP server via some plugin.

You'll also need to know which models your hardware can run. "Smarter" models require more RAM. Models can run on both CPUs and GPUs, but they run much faster on a GPU if they fit in its VRAM.
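As a rough rule of thumb (an approximation that ignores KV cache and runtime overhead): weight memory ≈ parameter count × bits per weight ÷ 8. In shell arithmetic:

```shell
# Back-of-the-envelope estimate of weight memory in GB.
# params_b: parameters in billions; bits: quantization bits per weight.
estimate_gb() {
    params_b=$1
    bits=$2
    # bytes per param = bits / 8; billions of params -> GB (integer math)
    echo $(( params_b * bits / 8 ))
}

estimate_gb 7 16   # fp16 7B model: ~14 GB of weights
estimate_gb 7 4    # Q4 7B model:   ~3 GB of weights, plus KV cache/overhead
```

So a Q4-quantized 7B model fits comfortably on an 8 GB GPU, while the fp16 version does not.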

[-] etchinghillside@reddthat.com 0 points 13 hours ago

Props for putting something together and not burying it in a 20 minute YouTube video.

My mind initially went to OpenCode. I'm not familiar with lite-cc; any reason you opted for that? Is it just kinder on smaller local models?

[-] hendrik@palaver.p3x.de 2 points 5 hours ago* (last edited 5 hours ago)

Judging by the GitHub repo, it's the very basic cousin, written (vibe-coded) in Python. It doesn't do planning or anything; it just prefaces your prompt with a system prompt telling the model it's a coding assistant, and gives it tool access to read and write files and execute commands.

And it seems no human uses it; there are no interactions like bug reports, PRs, or people starring the repo.

this post was submitted on 23 Mar 2026