How Does CleanApp Handle Fake Reports & False Positives?

One of the most common critical questions we get when we introduce people to incentivized waste/hazard reporting is this:

How does CleanApp handle fake reports & false positives?

There are several additional ways of asking this question:

  • “What stops people from just dumping trash, only to report that same trash, so they earn some @Littercoin?”
  • Won’t this actually make the trash problem worse?
  • How could we ever verify whether something was pre-existing trash/hazard or fake trash?
  • Wouldn’t kids (or “homeless people,” or poor people, or [insert a typically suspected and marginalized group here]) abuse the system?

Fake Reporting is Dumb But Rational

Faking CleanApp Reports to game the system is a legitimate concern, because “cheating” or “market-exploiting” behavior is grounded in perfectly rational economic theory. If the potential earnings outweigh the labor cost of generating a “fake report,” logic suggests the system should collapse under the weight of fake reporting.
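To make that cost-benefit logic concrete, here is a minimal sketch in Python. The function name and all the numbers are our own illustrative assumptions, not CleanApp parameters:

```python
# Minimal sketch of the rational-cheater calculus described above.
# All values are illustrative assumptions, not CleanApp parameters.

def faking_is_rational(reward: float,
                       p_undetected: float,
                       labor_cost: float,
                       penalty: float) -> bool:
    """True if the expected payoff of a fake report exceeds its expected cost."""
    expected_gain = reward * p_undetected
    expected_cost = labor_cost + penalty * (1 - p_undetected)
    return expected_gain > expected_cost

# Weak detection and no penalty: cheating "pays" on paper.
print(faking_is_rational(reward=1.0, p_undetected=0.9,
                         labor_cost=0.2, penalty=0.0))   # True
# Better detection plus a modest penalty flips the calculus.
print(faking_is_rational(reward=1.0, p_undetected=0.3,
                         labor_cost=0.2, penalty=1.0))   # False
```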

However, notwithstanding the limited-iteration game-theoretical prediction that CleanAppers would uniformly “game the system” for individual financial gain, our research and observations show remarkably different behavior patterns.

For instance, trash/waste/litter is one of the easiest commodity classes around which a rogue actor could design, implement, and/or capture an artificial data market, because verification is still so difficult. Yet so much stigma attaches to waste that there have been no known attempts at gamesmanship within existing reward-based platforms (e.g., Litterati, OpenLitterMap) or among market actors in this space more generally.

We take the problem of false positives very seriously because, even at limited scale, they introduce transactional friction and costs.

Fake Reporting is a Known Problem

The same concern with false reporting was raised with Wikipedia, OpenStreetMap, Wikinews, and numerous other reward-based crowd-sourcing projects where a premium attaches not just to accuracy, but to the highest verifiable forms of accuracy and consensus-based “truth.”

We follow leading streams of research on “cheating” in this context, and we continue to push our own research in this direction. Our research shows that for litter/hazard reporting, there are numerous rational and irrational compliance pulls towards accurate reporting behavior. Despite these findings, we know that more can always be done to make our processes even more resilient to bad-faith actors.

So, our short answer is that, “yes,” fake reporting is a very powerful logical conundrum and a known theoretical possibility. However, while vandalism and false-positive reporting happen in many contexts (from Wikipedia, to social-media bot behavior, to litter reporting), statistically and in practice, they are not the death knell one imagines at first.

Furthermore, what Wikipedia, social-media anti-bot measures, and theoretical/applied research on cheating all show is that it is actually quite easy to minimize the effect of even outright intentional vandalism with minor structural and incentive-based adjustments to the overall operational logic.

One of these adjustment techniques is the introduction of different verification mechanisms and transactional planes above, and alongside, report-response transactions, as in the sketch below.
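For illustration only, here is a hedged sketch of what one such verification plane could look like: a reward transaction clears only after a quorum of independent verifiers confirms a report, and reporters cannot verify themselves. The data model, names, and quorum size are hypothetical assumptions, not CleanApp’s actual protocol:

```python
# Hypothetical verification plane sitting above report-response
# transactions. Quorum size and data model are assumptions.

from dataclasses import dataclass, field

MIN_CONFIRMATIONS = 3  # assumed quorum size

@dataclass
class Report:
    report_id: str
    reporter: str
    confirmations: set = field(default_factory=set)

def confirm(report: Report, verifier: str) -> None:
    """Record one independent confirmation; self-verification is ignored."""
    if verifier != report.reporter:
        report.confirmations.add(verifier)

def reward_releasable(report: Report) -> bool:
    """The reward transaction clears only once the quorum is met."""
    return len(report.confirmations) >= MIN_CONFIRMATIONS

r = Report("r-001", reporter="alice")
for v in ["bob", "carol", "alice", "dave"]:
    confirm(r, v)
print(reward_releasable(r))  # True: alice's self-vote was ignored, 3 others confirmed
```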

To Drown Out Fake Reports, Submit More Good Reports

Here is what it will take to answer this question conclusively: the only way to find an incentive structure that is strong enough to nudge users towards progressively higher reporting rates, yet small enough that it does not incentivize cheating, is to deploy CleanApp globally.

Only then can we use different pricing/market mechanisms to figure out an efficient middle ground between the demand for hyper-accurate real-time reporting and the supply of reward-backed CleanApp reports.

These types of market adjustments to discourage spam/bots/vandalism are performed every day in millions of other contexts.
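As one hypothetical example of such an adjustment, a platform could decay the per-report reward as the supply of recent reports in an area rises, so flooding a single spot with fake reports quickly stops paying. The constants below are illustrative assumptions, not real CleanApp parameters:

```python
# Sketch of a supply-sensitive reward curve. Constants are assumptions.

import math

BASE_REWARD = 1.0   # assumed payout for the first report in a quiet area
DECAY = 0.15        # assumed discount per recent report in the same area

def reward_for_area(recent_reports_in_area: int) -> float:
    """Exponentially discount rewards where report supply is already high."""
    return BASE_REWARD * math.exp(-DECAY * recent_reports_in_area)

for n in (0, 5, 20, 50):
    print(n, round(reward_for_area(n), 3))
# 0 -> 1.0, 5 -> 0.472, 20 -> 0.05, 50 -> 0.001: spam stops paying fast
```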

We’re confident in our (CleanApp’s, but also, humanity’s) ability to figure out an efficient yet flexible alignment of interests on waste/hazard reporting-response processes so as to minimize the disruption of fake reporting.