Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
It's hard to say what Instagram does behind closed doors, so who knows. Write access to the production database is almost certainly limited to a select few, but it wouldn't be unusual for developers to have read access, or access to a replicated database, for testing, for compiling reports that more business-focused people in the company might be interested in, or just for understanding the impact of changes they might introduce. I wouldn't be entirely surprised if companies like Instagram, which handle more sensitive data, are a little more careful about what every random developer has access to, but I also wouldn't be surprised if a decent number of people had this level of access, and I'd probably be more surprised if they were watched carefully than if they weren't.

Regardless, in some sense I don't think it's particularly relevant -- you should assume that your DMs on platforms like Instagram and Twitter and whatever are accessible to people working on those platforms, so I think the concerns that the original comment here brings up are perfectly valid, even if it happens to not be the case in this particular instance. This is certainly a company secret for MANY companies, e.g.:
I think it's worth being aware. I'm pretty sure the average person doesn't really think about this and just assumes their DMs are completely private, or that maybe they're seen by Facebook or Google in some automated way for advertising... But it's also possible for employees to do problematic things, or for the information to leak if the service is compromised. I don't think the average person realizes that their messages are probably just sitting as plain text in a database, where they can be read pretty trivially by anyone with access to it.
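To make the "plain text in a database" point concrete, here's a minimal sketch using SQLite. The table name and columns are invented for illustration; real platforms have far more elaborate schemas, but the principle is the same: if the message body is stored unencrypted, any read query exposes it.

```python
import sqlite3

# Hypothetical, toy version of a DM table where message bodies are
# stored as plaintext (table/column names are made up for this example).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dms (sender TEXT, recipient TEXT, body TEXT)")
db.execute("INSERT INTO dms VALUES ('alice', 'bob', 'see you at 8?')")

# Anyone with read access to this database (or a replica of it)
# can see the full message contents with an ordinary query:
rows = db.execute("SELECT sender, body FROM dms").fetchall()
print(rows)  # [('alice', 'see you at 8?')]
```

No "hacking" involved -- read access plus plaintext storage is all it takes.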
Yeah, absolutely? You have a user sending questionable messages to people / potential spam, you check the rest of their history for more context. That sounds perfectly sensible to me.
I don't use Instagram, so I'm not super aware of this particular case. It looks like the e2e-encrypted chats are a very recent feature (September 2022, it seems? It also doesn't appear to have been a feature at some point in 2021), so it's possible this supposed incident happened before that feature rolled out, and it's unclear to me whether it's the default. Either way, the feature seems recent enough that it might not be relevant to the original situation. It would make a difference in the future, though!
If they're truly end-to-end encrypted then, sure, there probably wouldn't be much use for moderation tools beyond keeping track of the metadata, but that might still be useful for determining whether somebody is harassing or spamming people en masse (plus it seems like really good data for advertising). It looks like they do have some moderation tools for encrypted messages:
https://help.instagram.com/753893408640265?helpref=faq_content
But it sounds like it's intentional on the user's part (i.e., when they report a DM, it sends the message plus some context to Instagram, which Instagram otherwise wouldn't be able to read). Who knows if there's anything about metadata.
Bit of a tangent, but of course end-to-end encryption can also only go so far. You're still trusting Instagram's claims, and trusting that they implemented it correctly. I'd probably believe them, but there are a lot of places to play tricks on users, especially when you don't know what code is running on your device. It might not be particularly hard for them to push an update that reports your private keys back to them, for instance, and there are often security sacrifices made for convenience (maybe your phone automatically shares encryption keys with a new browser login so you can read your message archive, or something along those lines).
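A toy sketch of why the client is the weak point: in a Diffie-Hellman-style key exchange (the rough idea behind most E2EE, massively simplified here with insecure toy parameters -- real systems use vetted protocols and libraries, not hand-rolled code like this), the private key only ever lives on the user's device. The math keeps the server out, but a malicious client update could simply upload that private key.

```python
import secrets

# Toy, INSECURE parameters for illustration only.
P = 2**61 - 1  # a Mersenne prime; far too small for real cryptography
G = 2

# Each client generates a private key that never leaves the device...
alice_priv = secrets.randbelow(P - 2) + 1
bob_priv = secrets.randbelow(P - 2) + 1

# ...and publishes only the corresponding public value.
alice_pub = pow(G, alice_priv, P)
bob_pub = pow(G, bob_priv, P)

# Both sides derive the same shared secret; the server, seeing only
# the public values, cannot (easily) compute it.
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)
assert alice_shared == bob_shared
```

The catch is that `alice_priv` sits in memory inside code the platform ships and updates. The protocol can be perfect and it doesn't matter if a client update quietly sends that value home -- which is why "trust the client" is the part you can't verify.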
I would agree with this regardless, of course :). There's plenty of ways to glean this information without a rogue Instagram employee behind the scenes.