SocialAutopsy Wants To Stop Cyberbullying by Making It Much Worse

Sometimes when approaching a problem you come up with a solution where the cure is worse than the disease. Cyberbullying is particularly prone to this: a serious issue is regularly derailed by terrible legislation and by suggestions that we abolish anonymity on the Internet altogether (I suppose people are never harassed online by people they know in real life!). At best these bad ideas are well meaning but naive; at worst they’re a cynical attempt to capitalize on the suffering of other people. Enter SocialAutopsy, a startup that was seeking $75,000 on Kickstarter (before the campaign was suspended) with the stated goal of ending cyberbullying by…basically giving people more tools to carry it out.

There have been so many amazing organizations before us who have made (and continue to make) significant strides and efforts to not only create awareness, but to push through to legislation to affect change.  

But we thought, “instead of trying to stop it, why don’t we try to capture it?” 

Because what’s the number one defense people use when they are making awful, nasty comments online? (Author’s note: They link to an image of the First Amendment)

Can’t argue that one, so let’s instead help them magnify those freedoms. Let’s launch a database where we capture them exercising those rights and create digital records for them that anyone can access.

In their Kickstarter video, founder Candace Owens declares it to be “The first ever search database that aggregates people’s social behavior and creates real profiles for them…we attach their words to their places of employment and anybody in the entire world can search for them.”

If this is starting to sound ethically questionable to you, you’re probably coming to the realization that SocialAutopsy is in the business of researching and broadcasting personally identifiable information about people and making it available to the public. In lay terms, SocialAutopsy is a platform for doxing.

On its own this would be problematic, but in a follow-up Kickstarter FAQ video, Owens explicitly states that the service is targeted at minors: “We stand by the fact that we think the best time to be on our database is when you are a minor because…there’s no harm, no foul when you’re a minor, right?” So now we have a doxing database that, by the founder’s own words, is geared toward minors because, hey, minors can take the hits since they aren’t yet looking for a job or applying to college. I guess 16- and 17-year-olds are acceptable collateral damage, but I digress.

Owens then turns the irony up to eleven in the same video, adding, “By the way, we don’t want to exclude minors because minors are the ones killing themselves because of what they’re reading online, right? We’re seeing ten-year-olds, eleven-year-olds, twelve-year-olds that are committing suicide; suicides are on the rise for kids that are really young.”

Yes, you read that correctly. In order to provide tools to combat cyberbullying among minors, we need to dox people who say things deemed “hate speech” (more on that in a moment). Even if, by some impossibly perfect process of due diligence, only real bullies ended up in this database, this amounts to advocating fighting fire with fire. One could also make a strong case that it runs afoul of the Children’s Online Privacy Protection Act; that would warrant an entire blog post on its own, but the SocialAutopsy team seemed to have no knowledge of the law prior to the backlash.

What’s more, despite Owens insisting that you can’t search this public database by specific keywords to find people using particular racist or sexist slurs, it’s still publicly available and searchable by name. There is basically nothing stopping people from acting maliciously on this information. Owens’ response to this legitimate concern?

In so many words: ¯\_(ツ)_/¯

There’s also a more sinister side to this type of platform. Candace Owens seems to think that the threat of a public record will discourage people from saying mean things online, at least when they’re posting under their real names. That’s debatable at best: look at any Facebook comment thread on a political news story and you’ll find plenty of people who don’t care about the ramifications of what they say under their real names.

More importantly, does Owens think that bullies and trolls won’t just as quickly use this new tool she’s kindly provided for them? What’s to stop Joe Blow from impersonating John Smith on Facebook, making a slew of racist and hateful posts, and then immediately rushing to submit John’s name to this database? Sure, John can contact SocialAutopsy and say he was impersonated, but how does the company know John didn’t create a sockpuppet account specifically to harass other people? What if he can’t easily be identified by a more active Facebook account? What if Joe Blow submits his own postings and accuses John of being the one abusing SocialAutopsy to harass him?

Do you really think, Candace, that the bullies who threaten and terrorize people online won’t immediately take advantage of the lucrative opportunity you’re providing to smear and destroy the reputations of their victims? If so, then for all the time and money you’ve put into this endeavor, you’re incredibly naive.

If you want an idea of the quality control SocialAutopsy is currently providing for its (alleged) 20,000 public listings, developer Izzy Galvez managed to introduce Kirby, the Nintendo character, into the equation.

If this gets past SocialAutopsy’s filters while the site is in beta, what happens when it launches? Beta or not, for a platform that relies on extremely rigorous curation of user submissions, its standards are wanting.

To make matters worse, the submitted data is, as of right now, publicly indexed on Google and easily accessible.

Now, this could potentially be an easy fix with robots noindex directives (a nofollow tag alone wouldn’t keep pages out of search results), but the fact that it was never addressed in the first place is alarming. What’s more, whoever is running the SocialAutopsy account has tweeted that all this chatter is doing is pointing out their weak points. Candace, you are operating a platform meant to curb online harassment. Shouldn’t you have known about this from the beginning? It raises the question of what other security flaws exist if due diligence didn’t even catch something this simple. It’s worth noting, by the way, that despite the lofty claim of 22,000 profiles, a Google site: search displays only 114; some of those have already been confirmed as real people. I wonder how many of them consented to being part of the beta.
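To underline how simple the fix would be: here’s a minimal sketch, assuming a hypothetical Flask app serving the profile pages (SocialAutopsy’s actual stack isn’t public), of keeping beta pages out of search results with a noindex directive:

```python
# Minimal sketch, not SocialAutopsy's actual code: a hypothetical
# Flask app that marks every profile page noindex/nofollow so
# compliant search engines won't index it during the beta.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/profile/<profile_id>")
def profile(profile_id):
    resp = make_response(f"Profile page for {profile_id}")
    # X-Robots-Tag is honored by Google, Bing, and other major
    # crawlers; an equivalent <meta name="robots"
    # content="noindex, nofollow"> tag in the HTML works too.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp

if __name__ == "__main__":
    app.run()
```

Note that a noindex header only takes effect the next time a page is crawled; the 114 profiles already in Google’s index would also need to be purged through Google’s URL removal tools.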

Owens also alludes to a legal team in her Kickstarter, a team I don’t envy given the questionable legal knowledge of the rest of SocialAutopsy. Brittany, one of the team members, had this to say in the comments:

I just want to stress we are only publishing hate speech as defined by the law. “hate speech is any speech, gesture or conduct, writing, or display which is forbidden because it may incite violence or prejudicial action against or by a protected individual or group, or because it disparages or intimidates a protected individual or group.” Hate speech isn’t saying “I hate the jets”, hate speech is saying “Any person who supports the Jets I am going to have them lynched because they are a dirty explicits.” Big difference between having a difference in opinion and actually threatening someone.

Let’s leave aside the fact that this hate speech definition was copied wholesale from Wikipedia. The bigger problem is that, in the United States, there is no legal definition of hate speech. “Hate speech” is a fluid concept that means different things to different people, and there is no hate-speech exception to the First Amendment. There is speech that could be considered hate speech by reasonable, common consensus, but you explicitly identified hate speech as “defined by the law” when it isn’t. At best you haven’t done your legal due diligence despite already being in beta (not to mention the sensitive nature of the industry this startup is in), and at worst you’re lying by omission. What gets flagged as hate speech is entirely subjective, making abuse all the more likely.

Of course, none of this touches on the fact that the platform won’t do a thing about anonymous online harassment. Or people who think to change their usernames or display names. Or people using proxies or VPNs. Or harassment on platforms where people are simply less likely to use real names. Brittany also notes that “We are staying away from posts from places like 4chan and twitter, simply because there are more fake profiles.” So SocialAutopsy is jettisoning Twitter even though they explicitly mention it in their Kickstarter video. Good thing people don’t get harassed on Twitter!

Popehat speculated incredulously on Twitter about whether this was a troll, but I’ve never met a troll determined enough to set up a fake Kickstarter campaign, social media accounts, a professional-looking video, a website, and a parent company. This is definitely the real deal, and no matter how well meaning it may or may not be (people have raised this question in more detail), SocialAutopsy is a mess. I’m obviously not alone in this line of thought, given the snail’s pace at which the company raised just $4,000 (mostly from backers where the company is headquartered, so it’s a reasonable assumption that this came from friends and family).

The tl;dr is that asking for $75,000 for a startup with this many legal, logical and ethical holes is nothing short of lunacy.

Online harassment is a serious issue that needs real solutions. What it doesn’t need is petrol dumped on the fire.