I'm open to a conversation discussing the pros and cons of large language models. Whilst I use AI coding tools myself, I also consider myself quite a sceptic, and often share articles critical of these tools.
Things are getting easier. Many JavaScript runtimes now support TypeScript out of the box.
Back in the day, I used CakePHP to build websites, and it had a tool that could "bake" all the boilerplate code.
You could use a snippet engine or templates with your editor, but unless you get a lot of reuse out of them, it's probably easier and quicker to use an LLM for the boilerplate.
I also make use of ‘⚠’ to mark significant/blocking comments and bullet points. Other labels, similar to conventional comment prefixes such as “thought:” or “note:”, can indicate the priority and significance of other comments.
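To illustrate, a set of review comments using this kind of scheme might look something like the following. The labels are borrowed loosely from the conventional comments format; the specific comments are made-up examples:

```
⚠ issue (blocking): the retry loop never backs off, so this could hammer the API
thought: could we reuse the existing cache layer here instead?
note: this constant was renamed in v2, just flagging for awareness
```

The ‘⚠’ marker makes the blocking item easy to spot when scanning, while the lower-priority labels signal that a reply is welcome but not required.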
Thank you for introducing me to conventional comments! I hadn't heard of them before, and I can see how they'd be really useful, particularly in a neurodiverse team.
Vulkan?
I'm not too familiar with either, but this article goes into more detail: A Comparison of Modern Graphics APIs
How does one measure code quality? I'm a big advocate of linting, and have used rules including cyclomatic complexity, but is that, or tools such as SonarQube, an effective measure of quality? You can write code that passes those checks, but what if it doesn't address the acceptance criteria — is it still quality code then?
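As a concrete example of the kind of check I mean, here is a minimal sketch of an ESLint configuration enforcing a cyclomatic complexity cap (the thresholds are arbitrary, and the flat-config shape assumes ESLint 9):

```javascript
// eslint.config.js — a sketch, not a recommended baseline
export default [
  {
    rules: {
      // warn when a function's cyclomatic complexity exceeds 10
      complexity: ["warn", { max: 10 }],
      // cap block nesting depth as a related structural check
      "max-depth": ["warn", 4],
    },
  },
];
```

These rules measure structural properties of the code; nothing in them can tell you whether the code actually satisfies the acceptance criteria.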
The author of the article is Dan Abramov, the co-creator of Redux and a prominent React contributor. Putting aside what you may think of vibe coding, there is little doubt that he is an experienced developer who knows what he is doing.
The author does make some good points about colours as visual cues, instead of just making things look colourful. I have to admit that, prior to reading this post, I always picked my themes on aesthetics alone, but it has made me think about colour as utility.
These are all very good points, and there is value in having a common framework for front-end development. However, I would argue React isn't always the right tool for the job, yet it has become the dominant framework, and that dominance is being further bolstered by generative AI.
This isn’t about React being the best tool, or about its model being good for LLMs (I don’t see any evidence for that at all). It’s about React being past the point where network effects leave room for viable alternatives.
So even if a better framework came along, and ideally one that's not owned by Meta, it would be very difficult for it to take hold because of this.
My understanding is that an example of a hypothesis is that users want a feature. The experiment is putting that feature in front of users, or performing user research, which then allows you to validate whether the hypothesis is true.
This does remind me there was a time when websites having the W3C validation badges was all the rage.
Expensive as hell! 🤑