*goes looking for the issue*
Hmm. I would believe that there are efficiency gains from doing one large insert rather than many small ones: there are probably optimizations one can take advantage of when rebuilding indexes, and it'd be nice for database users to have a way to leverage that.
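For context, a minimal sketch of the difference using psycopg2, with a made-up DSN, table, and columns:

```python
# Minimal sketch; the DSN, table name, and columns are hypothetical.
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=example")
rows = [(i, f"name-{i}") for i in range(10_000)]

with conn, conn.cursor() as cur:
    # Many small inserts: one statement (and one round trip) per row.
    for row in rows:
        cur.execute("INSERT INTO items (id, name) VALUES (%s, %s)", row)

with conn, conn.cursor() as cur:
    # One large insert: execute_values expands the rows into multi-row
    # VALUES statements (page_size rows per statement), so the server
    # processes the whole batch in a handful of INSERTs.
    execute_values(cur, "INSERT INTO items (id, name) VALUES %s", rows,
                   page_size=10_000)
```

The second form gives the server far more rows per statement to work with, which is presumably where any index-maintenance gains would come from.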
On the other hand, I can also believe that DBMSes might hold locks while running a query, and permitting queries of unbounded (or very large) size and complexity might create problems for concurrent users, since a lock could be held for a long time.
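If long-held locks were the worry, one common mitigation is committing in fixed-size batches so no single transaction holds its row locks for the whole load. A rough sketch, again with hypothetical names:

```python
# Rough sketch: commit every BATCH rows so no transaction holds its row
# locks for the duration of the entire load. Names are hypothetical.
import psycopg2
from psycopg2.extras import execute_values

BATCH = 1_000
conn = psycopg2.connect("dbname=example")
rows = [(i, f"name-{i}") for i in range(100_000)]

cur = conn.cursor()
for start in range(0, len(rows), BATCH):
    chunk = rows[start:start + BATCH]
    execute_values(cur, "INSERT INTO items (id, name) VALUES %s", chunk)
    conn.commit()  # releases the locks taken by this batch
cur.close()
conn.close()
```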
EDIT: Hmm. Lock granularity probably isn't the issue:
https://stackoverflow.com/questions/758945/whats-the-fastest-way-to-do-a-bulk-insert-into-postgres
Any lock granularity issues would also apply to transactions.
It might just be a concern about how the query-processing code scales with very large statements.
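COPY is the usual recommendation for large Postgres loads (and I think that's where the linked thread lands), and it sidesteps the question of parsing and planning one enormous INSERT. A sketch, with the same hypothetical table:

```python
# Sketch of a COPY-based load; the DSN, table, and columns are hypothetical.
import io
import psycopg2

conn = psycopg2.connect("dbname=example")
rows = [(i, f"name-{i}") for i in range(10_000)]

# Stream a tab-separated buffer via COPY ... FROM STDIN, so the data path
# bypasses per-statement parsing and planning entirely.
buf = io.StringIO("".join(f"{i}\t{name}\n" for i, name in rows))
with conn, conn.cursor() as cur:
    cur.copy_expert("COPY items (id, name) FROM STDIN", buf)
conn.close()
```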