• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: February 10th, 2024



  • we currently have our own solution that sends emails with custom text explaining why people were rejected and what they can do next. if we add rejection reasons to lemmy when rejecting applications, we’ll have to review whether the built-in solution could adequately replace this functionality.

    our current solution rejects applications and then deletes the user from the database, so that they can sign up again if they want. denied applications otherwise only get deleted after a week or so, and an appeal process would require support tickets and a lot more of our time to address them.

    our application process is fully automatic: it only checks that certain words are provided in the answer and that the email address isn’t disposable.
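A minimal sketch of what such an automatic check might look like. This is an illustration only, not Lemmy.World’s actual implementation: the keyword list, the disposable-domain list, and the function name are all made up for the example.

```python
# Hypothetical automated application check: require certain keywords in the
# applicant's answer and reject disposable email domains. All lists below
# are placeholder examples, not the real configuration.

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.org"}  # example entries
REQUIRED_KEYWORDS = {"rules", "agree"}  # example required words

def check_application(answer: str, email: str) -> bool:
    """Return True if the application can be auto-approved."""
    words = set(answer.lower().split())
    if not REQUIRED_KEYWORDS.issubset(words):
        return False  # required words missing from the answer
    domain = email.rsplit("@", 1)[-1].lower()
    return domain not in DISPOSABLE_DOMAINS  # reject disposable providers
```

In practice a real check would also normalize punctuation and keep the disposable-domain list updated from an external source.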



  • The screenshot in my previous comment is directly from their abuse form at https://abuse.cloudflare.com/csam. Your email is specifically about their proactive scanner, not about abuse reports that are submitted.

    They also explicitly state on their website that they forward any received CSAM reports to NCMEC:

    Abuse reports filed under the CSAM category are treated as the highest priority for our Trust & Safety team and moved to the front of the abuse response queue. Whenever we receive such a report, generally within minutes regardless of time of day or day of the week, we forward the report to NCMEC, as well as to the hosting provider and/or website operator, along with some additional information to help them locate the content quickly.





  • unless you operate the instance that is being used to send this material, you can generally only work with the content that is being posted/sent in PMs. almost all identifying information is stripped when it leaves the local instance to be federated to other instances. even if there were a group of instances collaborating on e.g. a shared blocklist, abusers would just switch to other instances that aren’t part of the blocking network. there’s a reason why it’s not recommended to run a lemmy instance with open signups unless you have additional anti-spam measures and a decently active admin team. smaller instances tend to have fewer prevention measures in place, which puts a burden on everyone else in the fediverse on the receiving end of such content. unfortunately this is not an easy problem to solve without giving up (open) federation.





  • unfortunately we can’t just apply the update quickly, as it introduces sending emails for rejected applications. we already send rejection emails separately with custom text, while the text implemented in the update is currently not configurable.

    i’ll see if we can deploy an updated lemmy-ui without updating lemmy this weekend, but i need to check if there were any api changes first, as we’d then have to backport those to lemmy.

    we already applied the security patch about 2 weeks ago.



  • The woman depicted is very likely the target of harassment.

    Agreed, but there is no proof of this. We also don’t know their true identity to check with them directly.

    Sharing the images depicting violence is tantamount to a threat of violence.

    The images did not depict violence directly; it was a gory image of a dead person. It was very likely sent by a copycat not involved in the original harassment campaign, intended to fuck up fediverse users more than anything else, and did not appear to imply any kind of threat.

    you would have wasted 15 minutes

    Filing a proper report would require a lot more than 15 minutes. First we have to collect all the relevant information we have available and compile it into a format that can be submitted. Once we have that, we have to identify a police department to report this to. We are legally based in NL, as that’s where our non-profit Fedihosting Foundation is located. I’m based in Germany, so it would also be an option to report it here. The depicted person is claimed to be in Canada, so maybe it should be reported to a police department there. Or maybe to all of them.

    All of this would easily add up to 2 hours or more if you want to do it properly and not just fill out 3 online forms saying “hey there is someone sending spam”.
    If this were a paid job and I were doing it during working hours I wouldn’t mind, but all the time I spend here comes out of my personal time, the same as for everyone else on our team, and the same as you’ll see with most other fediverse instances.

    perhaps Nicole has been trying to get a restraining order against some creep but has been unable to due to lack of evidence.

    If we receive a request for information from (real) law enforcement we’ll be more than happy to provide relevant data, but doing this proactively, for the (perceived as low) chance of it somehow being linked from a random police report, is a fairly high time investment, as described above.



  • I don’t know if others have, I only know that we (Lemmy.World, Fedihosting Foundation) have not reported it to the police.

    I don’t have high hopes that the police would be able to do anything about this. The harassment against the person shown in the images would likely have to be reported by them directly for the police to take it up.
    For random online spam, meaning the harassment of fediverse users receiving the PMs, that seems like it would be an extremely low priority for police. It’s also likely difficult or impossible to follow up on, considering that the person sending the PMs most likely used a VPN to access these accounts.