Asklemmy
A loosely moderated place to ask open-ended questions
If your post meets the following criteria, it's welcome here!
- Open-ended question
- Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
- Not a support question about using Lemmy: for that, see the list of support communities and tools for finding communities below
- Not ad nauseam inducing: please make sure it is a question that would be new to most members
- An actual topic of discussion
Looking for support?
Looking for a community?
- Lemmyverse: community search
- sub.rehab: maps old subreddits to fediverse options, marks official as such
- [email protected]: a community for finding communities
~Icon~ ~by~ ~@Double_[email protected]~
Not really, it's been pretty useless for me. But I'm also a very senior developer; I've been coding for 18 years, so more often than not I'm stuck on a problem much bigger than the best AI can possibly handle, just in the amount of context needed to find out what's wrong.
It's still much faster for me to just write the code than to explain what I want to an AI. IDE snippets and completion just make it super quick. Writing out code is not a bottleneck for me; if anything I shit out code and shell commands without a thought. It comes out like it's regular speech.
I'm also at the point where I Google things and end up finding my own answer from 5 years ago, or my own question from 5 years ago with still zero answers.
I do see my juniors using Copilot a good bit though.
I've only worked for about a year as a coder. I've used LLMs extensively for work. I kinda feel bad that I might be lazing out on actually learning how to do it myself.
AI chatbots are sometimes quicker than using official library documentation. I daresay usually quicker, for anything but documentation that I know really well already.
I haven't spent my own money on a development tool in a long time, but I find it worth a few of my employer's dollars.
It's hardly life-changing, but it's convenient.
I can't comment on its mistakes or hallucinations, because I am a godlike veteran programmer - I can exit Vim - and so I - so far - have immediately recognized when the AI is off track, and have been able to trivially guide it back toward the solution I'm looking for.
The chatbot version? Meh, sometimes, but I don't use it often.
The IDE integrated autocompletion?
I'll stab the MFer that tries to take that away.
So much time saved for things that used to just be the boring busywork parts of coding.
And while it doesn't happen often, the times it preempts my own thinking for what to do next is magic feeling.
I often use the productivity hack of leaving a comment describing what I'm doing next before I stop for the day, and it's very cool when I sit down to start work and see a completion that's 80% there. Much faster to get back into the flow.
I will note that I use it in a mature codebase, so it matches my own style and conventions. I haven't really used it in fresh projects.
Also AMAZING when working with popular APIs or libraries I'm adding in for the first time.
Edit: I should also note that I have over a decade of experience, so when it gets things wrong it's fairly obvious and easily fixed. I can't speak to how useful or harmful it would be as a junior dev. I will say that sometimes when it is wrong it's because it is trying to follow a more standard form of a naming convention in my code vs an exception, and I have even ended up with some productive refactors prompted by its mistakes.
Which IDE integration? I like the leaving a prompt for tomorrow idea
Visual Studio.
And yeah, forget where I picked up the "leave the function unfinished with a comment" trick but it's been a great way to jump back in.
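The "leave the function unfinished with a comment" trick might look something like this (function and field names here are made up for illustration, and the suggested completion is just the kind of thing an assistant typically offers):

```python
# State of the file at the end of the day: a finished helper plus a
# comment describing tomorrow's next step.

def load_orders(path):
    """Read one order per line as 'id,quantity' pairs."""
    with open(path) as f:
        return [line.strip().split(",") for line in f if line.strip()]

# TODO tomorrow: total_quantity(orders) -> sum the quantity fields as ints
# When you reopen the file, a completion engine will often suggest
# something close to:
def total_quantity(orders):
    return sum(int(qty) for _, qty in orders)
```

Even when the suggestion is only 80% right, it is a running start back into the problem.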
Depends on whether you want to work with existing code. LLMs tend to be good at generating small code snippets, but not good at understanding or finding errors in existing code.
Not really. Writing code is the easy part; it's not the rate-limiting step. The hard part is getting requirements out of customers, who rarely know what they want. I don't need to push out more code and features faster; that would make things into unmaintainable spaghetti.
I might send it a feature list and ask it "what features did they forget?" or "Can you suggest more features?", or even better -- "which features are the least important for X and can be eliminated?". In other words, let it do the job of middle-management and I'll just do the coding myself.
Anyway, ChatGPT blocks my country (I've confirmed it's on their end).
I’ve found it almost uniformly useless. Dangerous, even, because it produces output that looks good at first glance. But it only understands the line I typed and tries to figure out what I’m likely to keep typing. It doesn’t have a clue what problem I’m solving, what domain I’m working in, the scope I’m concerned with, and myriad other things that are what is actually important when writing software.
The one area I’ve found it useful is turning code comments into real code. If I’m coding in a language I’m not super familiar with, I can pseudo code what I’m trying to do in descriptive comments and it will often suggest a block of code that follows coding conventions and does what I asked for. Is it better than just learning the damn language? No. But it’s a handy tool to have for that one time a year I need to touch that program written in Go.
Yes and no. I compare it to a graphing calculator: I know how to graph a parabola by hand already, but I don't want to have to do it over and over again. That's just busy work for me.
LLMs are similar that way. There’s often a lot of boilerplate to get out of the way that’s just busy work to write over and over again. LLMs are great at generating some of that scaffolding.
LLMs have also become a lot more helpful as Google search has gotten worse over time.
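The scaffolding meant above is the sort of setup that's nearly identical across dozens of small tools, like an argument-parsing skeleton (names here are illustrative, not from any particular project):

```python
import argparse

def build_parser():
    # Repetitive setup that looks the same in almost every small CLI tool:
    # an LLM can churn this out so you can get to the interesting part.
    parser = argparse.ArgumentParser(description="Example tool scaffold")
    parser.add_argument("input", help="input file to process")
    parser.add_argument("-o", "--output", default="out.txt",
                        help="where to write results")
    parser.add_argument("-v", "--verbose", action="store_true")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    if args.verbose:
        print(f"reading {args.input}, writing {args.output}")

if __name__ == "__main__":
    main()
```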
Mostly as a search engine. I have it set up to only respond with answers it has web sources for. Code completion like Copilot can be useful; however, 90% of the completions aren't really saving me any time. The other 10% are awesome though.
So I could easily drop copilot but ChatGPT or HuggingChat used like search engines are awesome.
Try perplexity.ai, as a search engine I think it's better than ChatGPT. But as a creation tool, it lags behind.
Yep! It’s the best autocorrect I’ve ever used, and it does a decent job explaining config files when needed. Just don’t let any unvetted code in because it can have some quirky bugs
Absolutely. I just built a little proof of concept thing where I loaded some GIS data into a google map to display the major rivers of the world.
ChatGPT, the v4 that I pay $20/mo for, was like someone with deep knowledge of all the technologies and APIs involved.
I’m gonna post a link to screenshots of the convo so you can see exactly how it went.
Not the whole thing because it's longer than I remember.
But just consider how long it would have taken me to answer each of those questions just by googling and reading old forums and stack overflow posts.
Much like sitting next to someone with experience, a question that could take me hours to answer on the internet took me only seconds to answer by asking directly. GPT's responses are still long, so it's not pure conversational style, but the longer responses aren't wasted fluff. It's all relevant to what I asked.
Natural language as a way to query a knowledge base is enormously useful. Especially for something that requires update of existing knowledge as often as tech work.
Natural language as a way to query a knowledge base is enormously useful.
Great post. I want to highlight your sentence above as a key point, for folks trying to come to grips with where and how to use the current generation of AI.
Yes, by far the most useful thing is stuff like API and keyword documentation for poorly documented code. It's literally the promise of self-generating docs for tedious shit.
And it’s docs that form themselves around a specific question! It’s incredible!
I tried to use Copilot but it just kept getting in the way. The advanced autofill was nice sometimes, but it's not like I'm making a list of countries or some mock data that often...
As far as generated code... especially with HTML/CSS/JS frontend code, it consistently output extremely inaccessible code. Which is baffling considering how straightforward the MDN, web.dev, and WCAG docs are. (Then again, LLMs can't really understand when an inaccessible pattern is only being used to demonstrate an onclick instead of a semantic a, or to explain aria-* attributes...)
It was so bad so often that I don't use it much for languages I'm unfamiliar with either. If it puts out garbage where I'm an expert, I don't want to be responsible for it where I have no knowledge.
I might consider trying an LLM that's much more tuned to a single language or purpose. I don't really see these generalized ones being popular long run, especially once the rose-tinted glasses come off.
As someone who is just getting started in a new language (Rust), it can be very helpful when trying to figure out why something doesn't work, or maybe some tips I don't know (even if it gets confused sometimes).
However, for my regular languages and work, I imagine it would be a lot slower.
I mostly use shell-gpt and ask it trivial questions. Saves me the time of switching to a browser. I have it always running in a tmux pane. As for code, I found it helpful for getting started when writing a functionality, but the actual engineering part should be done manually imo. As for spending money on it, depends on how you benefit from it. I spend about 50c on my OpenAI API key, but I know a friend who used Ollama (I think with some Mistral derivative) locally on a gaming laptop with decent enough results.
I'm pretty sure even if it was helpful they wouldn't use it out of principle. Shit's basically plagiarism laundering.
EDIT: Oh you're talking about devs who use Lemmy, not the Lemmy devs.
I'm no real dev, but yes.
Even the free version is helpful.
It’s sometimes helpful when working with libraries that are not well documented. Or to write some very barebones and not super useful tests if I’m that lazy. But I’m not going to let it code for me. The results suck and I don’t want to become a „prompt engineer“.
Hmm well for research it can give me good pointers, when I am going into a new field.
For actual coding it's mostly useless for the moment. It's not trained to be productive, so it doesn't know what to focus on and tends to be overly verbose. Its internal model of what's going on is also quite shaky.
It feels like working with clay: I have to somehow get the code the LLM generates into the shape I need. But it's like watching a movie in super slow mo, and the clay is too wet and keeps falling apart.
Furthermore, it cannot handle anything more than relatively low complexity code. Sure it can give you a function for drawing a circle. But architecture and code smell are things it doesn't understand.
So after using it for a year I must say that I don't use it for actual coding. I use it mostly to get an overview of fields I'm not that much into. For example lately I've looked into quantum field theory again, and Rust for the first time. I know it spouts a lot of nonsense but I can still get the gist of it.
Still relying on good ol Bessie 🧠
More of a hobbyist, but it helps find that typo I made earlier that went unnoticed. And for command-line utilities it's nice being able to ask for what you want to do and have it provide the parameters you're looking for right away.
Basically made Stack Overflow useless for me. Great for pasting error messages. I don't really find it useful for actually writing the code tho, unless it's standard boilerplate stuff.
ChatGPT will mock up a Python script pretty quickly given a basic English description and reference materials like API docs, sparing me the burden of doing something tedious, but that's about the extent of its utility for me.
It helps me write emails that are less nerdy.
After I write code, I pass it through Claude for "review". But normally I write the code by myself.
Yes, it's extremely powerful. For example I recently had an idea for a script to merge cbz files into a single document so I wouldn't have to clutter my ereader with many individual chapters. The LLM had almost no issue writing the whole thing with only one revision, spent less time and thought on that than I did googling around to see if there were existing solutions. It's really nice being able to create programs like that just on the high level concept and without mentally getting into the weeds of implementation details. If I wrote it myself I would have had to refresh my memory on stuff like regex and sorting syntax and it would have been way more time and effort. It basically lets me write custom scripts for any trivial problem where they could be useful where otherwise it might be too much trouble.
I've used ChatGPT and Google's new one, whose name eludes me.
In cases where I absolutely have to write in a language or structure I hate, I prototype in an AI to speed up the experience so I can stop sooner. It saves me so much time doing something I hate.
I’ve had the most luck with using ChatGPT for troubleshooting my existing code. I typically tend to lean more towards creative coding, and can provide it with my source code and a casual explanation of the issue and it can often explain how to manipulate things in a way I want.
I’ve relied on it a lot less for code generation and found it to be much more useful as a tutor for concepts that I can rework myself. I haven’t spent much time with Copilot since most of my projects are aiming for an uncommon goal.
Where I’ve found it to be less than useful in code generation is I’ll get caught in a loop where it’s trying an approach I’m not familiar with, so I feed it back the errors I’m getting and hoping it can solve it on its own, but it rarely is able to.
I don’t code professionally, but I’d probably hesitate to use it for anything used in production just based on what I’ve experienced.
I'm a new DM (and new to TTRPGs in general). I'm using bard and chatgpt to keep track of homebrew stuff.
I'm running an almost completely custom system, adapted to ASOIAF. Races (renamed to origins), classes, backgrounds, feats, etc. extra mechanics like duelling systems and large battle simulations, and faction interaction systems. It's a lot, and I find it easier for me to have the bot spray solutions to whatever issue I run into, then grab the one that might work, and refine it until it might sound fun. I need to get a system in order to keep track of my campaign, though. Tried WorldAnvil and honestly, I don't need that many tools. Might go back to Notion and keep track of all the factions and characters that way. Gonna be a lot of work though.
Tried WorldAnvil and honestly, I don't need that many tools. Might go back to Notion and keep track of all the factions and characters that way. Gonna be a lot of work though.
Obsidian has been great for me to keep track of all my worldbuilding notes for Pathfinder 2e
Obsidian.md? Did you get it from GitHub?
Not from GitHub as it's not FOSS. It does have a far more open approach than Notion though.
Hm, imma look into it. Shame it isn't FOSS. Thanks for the tip!
It's a decent general purpose data formatter (like "convert this giant json to yaml") but there are other ways to do that.
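One of the "other ways" is a few lines of Python instead of pasting a giant blob into a chatbot; this sketch assumes PyYAML is installed (pip install pyyaml):

```python
import json
import yaml  # PyYAML, a third-party package

def json_to_yaml(json_text):
    # Round-trip through Python objects; sort_keys=False keeps the
    # original key order instead of alphabetizing
    return yaml.safe_dump(json.loads(json_text), sort_keys=False)
```

Unlike an LLM, this won't silently drop or mangle a field halfway through a large document.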
It's OK for asking questions of documentation, as long as you don't take anything at face value. Really, if you understand 90% of something it's not bad at giving you the missing 10%. And it makes me a bit faster when I go back to bash, the anti-bicycle, after a break.
And I find myself not writing as many IDE snippets because AI is good at super repetitive stuff like, "wrap this promise in an async function." That's not the best example but it's what I could think of quickly.
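The promise example above is JavaScript, but a rough Python analogue of the same repetitive wrapper boilerplate is turning a blocking function into an async one (the function name here is made up for illustration):

```python
import asyncio

def fetch_blocking(url):
    # Stand-in for a slow, blocking call
    return f"response from {url}"

async def fetch(url):
    # Run the blocking call on a worker thread so the event loop stays free
    return await asyncio.to_thread(fetch_blocking, url)

result = asyncio.run(fetch("https://example.com"))
```

Mechanical transformations like this are precisely the "super repetitive stuff" where a completion engine shines.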