Whenever an influencer applies to your campaign, we use advanced image analysis to determine whether their content might be unsafe for some brands. The results of this analysis appear on the platform as “content warnings”.
There are 3 types of content warnings:
- Suggestive content – Indicates the influencer has content on their Instagram account that could be considered “racy” or “adult” in nature.
- Profanity – Indicates the influencer uses profane words on their Instagram account.
- Violence – Indicates the influencer has violent content on their Instagram account. This warning can also be triggered by content that is medical in nature (an image of a surgery, for example).
Each of these warnings can have one of four ratings:
- Blank (if there is no warning, nothing will appear)
- Low risk
- Medium risk
- High risk
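If you track these ratings in your own tooling (a spreadsheet export you've built, for example), they can be modeled as a simple mapping from warning type to risk level. The sketch below is a minimal illustration only; the `WarningType`, `RiskLevel`, and `highest_risk` names are hypothetical and not part of the platform.

```python
from enum import Enum
from typing import Dict, Optional

class WarningType(Enum):
    SUGGESTIVE = "suggestive_content"
    PROFANITY = "profanity"
    VIOLENCE = "violence"

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# An influencer's content warnings: each warning type maps to a risk
# level, or is absent entirely (the "blank" case, where nothing appears).
ContentWarnings = Dict[WarningType, RiskLevel]

def highest_risk(warnings: ContentWarnings) -> Optional[RiskLevel]:
    """Return the most severe rating across all warning types,
    or None if the influencer has no warnings at all."""
    order = [RiskLevel.LOW, RiskLevel.MEDIUM, RiskLevel.HIGH]
    rated = [order.index(level) for level in warnings.values()]
    return order[max(rated)] if rated else None
```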
These content warnings can be found at the bottom of the right sidebar when viewing an influencer’s content:
Content warnings are most valuable when you’re curating a list of applicants to accept into your campaign. Instead of reviewing every influencer for brand safety, you can use risk level filters to organize influencers and decide what to do with each group. For example, you might bulk reject all influencers with high risk, manually review everyone with medium risk, and skip review for anyone with low or no risk, as in the sketch below.
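As a concrete illustration of that triage policy, here is a small sketch that builds on the hypothetical types from the previous example; the `Applicant` record and the accept/review/reject buckets are assumptions for illustration, not platform features.

```python
from dataclasses import dataclass, field

@dataclass
class Applicant:
    # Hypothetical applicant record; `warnings` uses the mapping above.
    name: str
    warnings: ContentWarnings = field(default_factory=dict)

def triage(applicants: list) -> dict:
    """Sort applicants into buckets using the example policy:
    bulk reject high risk, manually review medium risk, accept the rest."""
    buckets = {"reject": [], "review": [], "accept": []}
    for applicant in applicants:
        risk = highest_risk(applicant.warnings)
        if risk is RiskLevel.HIGH:
            buckets["reject"].append(applicant)
        elif risk is RiskLevel.MEDIUM:
            buckets["review"].append(applicant)
        else:  # low risk, or no warnings at all
            buckets["accept"].append(applicant)
    return buckets

applicants = [
    Applicant("creator_a", {WarningType.VIOLENCE: RiskLevel.HIGH}),
    Applicant("creator_b", {WarningType.PROFANITY: RiskLevel.MEDIUM}),
    Applicant("creator_c"),
]
result = triage(applicants)
# creator_a -> reject, creator_b -> review, creator_c -> accept
```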