Define what’s okay, what’s not and what happens when users don’t comply.

Determining what your network will and won’t tolerate is arguably the most difficult task. In our case, we didn’t introduce safety and moderation measures because we had to. We did it because we believed that making Wizz a safe place would make users feel more comfortable engaging.

I recommend bringing together a diverse group of people to answer this question. Together, you should identify behaviors that the majority of you could agree are universally offensive, inappropriate or misleading, independent of whether you would personally be offended by them.

It will be important to consider your user base. Because ours is primarily 13- to 21-year-olds, we were particularly conscious of users’ emotional safety. Some of the things we determined would result in temporary or permanent suspension include content or comments that are bullying, humiliating, mean or insulting, or that include defamation, profanity or hate speech. We banned behaviors intended to harass or threaten users, as well as pretending to be someone else or lying about your age. And we don’t allow pictures, videos or comments that are violent, are against the law (such as drug use) or are sexually oriented.

After establishing these guidelines, the next step is enforcing them at scale. Know that this will inevitably result in some pushback in the short term. Some of the very public feedback we received upfront sounded like this:

“I’m tired of my posts getting taken down because they’re seen as inappropriate.”

“Stop being such snowflakes about small things. I cannot stress enough how many times I’ve been banned for a small cuss word.”

“I’ve met some chill people on this app, but Wizz just needs to loosen up on the rules a little.”

Introduce age and/or other identity verification measures.

The first safety measure we introduced was age verification. The most important identifiers will vary by network, though. For instance, the dating app Hinge only allows users to change their age once after submitting their profile, as this factor is fact-based but also of particular importance to its users. While Hinge doesn’t currently have a way of verifying this information, false entries can come back to haunt users whose dates complain they’ve been misled or even report them to the app. Hinge further introduced a Selfie Verification feature to prevent fake accounts and catfishing: users are prompted to upload a video of themselves, and the app then analyzes it to verify they are the person on their profile.

For our app, it was critical to verify users’ age since it pairs teen users with people no more than one year younger or older than them. For instance, a 16-year-old can only interact with people between 15 and 17 years old. Technology can help achieve this level of granularity while preserving users’ privacy. (In our case, we integrated facial-age estimation technology, where users take a selfie during the app onboarding process.) While these kinds of measures can result in upfront user churn, they can create more engagement among those who stay because they feel comfortable expressing themselves.

Moderate offending content, ideally before users see it.

While there’s much debate around whether social media apps have a societal or moral responsibility to monitor what people do and say on their networks, it’s easy to agree that poor user experiences affect user adoption, retention and revenue. Our theory was that if conversations are great (not offensive, bullying or otherwise), then engagement will be great too.

Content moderation can be tricky, as it traditionally happens after the offending content has been seen by some users. The ideal scenario is for moderation to take place in real time or before content has been seen, and for it to be executed with little to no error. AI moderation technologies can help with this process, as they can flag and disable offending content before it’s distributed.
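The pre-distribution flow the article describes, flagging and disabling offending content before other users see it, reduces to a simple gate: score each post before it goes out, publish it if it looks safe, and hold it for review otherwise. The sketch below is illustrative only; `score_toxicity` is a hypothetical placeholder for a real AI moderation model, and the threshold is an assumption, not Wizz’s actual implementation.

```python
from dataclasses import dataclass, field

def score_toxicity(text: str) -> float:
    """Placeholder model: returns a 0-1 risk score.

    A real system would call a trained classifier here; this stub
    just flags a couple of obviously offensive words.
    """
    blocked = {"idiot", "loser"}
    return 1.0 if set(text.lower().split()) & blocked else 0.0

@dataclass
class ModerationGate:
    threshold: float = 0.8          # scores at or above this are held back
    held: list = field(default_factory=list)

    def submit(self, text: str) -> bool:
        """Return True if the post may be distributed immediately."""
        if score_toxicity(text) >= self.threshold:
            self.held.append(text)  # queue for human review instead of publishing
            return False
        return True

gate = ModerationGate()
print(gate.submit("great game last night"))  # True: distributed right away
print(gate.submit("you are a loser"))        # False: held for review
```

Because the check runs at submission time, offending posts are never shown to other users; the trade-off is that every borderline post waits on the classifier (and possibly a human reviewer) before appearing.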
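The age-banding rule mentioned in the article, pairing users no more than one year younger or older (so a 16-year-old only sees 15- to 17-year-olds), amounts to a window check on verified ages. This is a minimal sketch under that assumption, not the app’s real matching code:

```python
MAX_AGE_GAP = 1  # users may be paired at most one year apart

def can_pair(age_a: int, age_b: int) -> bool:
    """True if two verified ages fall within the allowed window."""
    return abs(age_a - age_b) <= MAX_AGE_GAP

print(can_pair(16, 15))  # True
print(can_pair(16, 17))  # True
print(can_pair(16, 18))  # False
```

Note the check is only as trustworthy as the ages behind it, which is why the article pairs it with verification (facial-age estimation at onboarding) rather than self-reported birthdays.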