18
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
this post was submitted on 19 Oct 2025
18 points (100.0% liked)
TechTakes
2463 readers
91 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 2 years ago
MODERATORS
NeurIPS is one of the big conferences for machine learning. Having your work accepted there is purportedly equivalent to getting a paper published in a top-notch journal in physics (a field that holds big conferences but treats journals as more the venues of record). Today I learned that NeurIPS endorses peer reviewers asking questions to chatbots during the review process. On their FAQ page for reviewers, they include the question
And their response is not shut the fuck up, the worms have reached your brain and we will have to operate. You know, the bare minimum that any decent person would ask for.
"Yeah, go ahead, ask 'Grok is this true', but pretty please don't use the exact words from the paper you are reviewing. We are confident that the same people who turn to a machine to paraphrase their own writing will do so by hand first this time."
"Having positioned yourself at the outlet pipe of the bullshit fountain and opened your mouth, please imbibe responsibly."
Far be it for me to suggest that NeurIPS taking an actually ethical stance about bullshit-fountain technology would call into question the presentations being made there and thus imperil their funding stream. But, I mean, if the shoe fits....
I did not think anything could make me sympathetic to the authors who put 0.1pt white text in their papers so that any reviewer lazy enough to use an LLM would get prompt injected, but here we are.
Highlight the space just after the abstract of my own most recent arXiv preprint for a surprise. :-)