Whether you have heard it called Human Review, Manual Review or Content Review, it all means the same thing: a process conducted by a person or a team of people to evaluate, edit, and approve or decline certain instances of data.
The goal of Human Review is to insert human judgement into a computing process that can't be fully trusted to operate on its own without supervision. Human Review is a mechanism that builds trust in systems whose reliability can't be taken for granted. This is usually the case with:
- Complex environments: when the space of possibilities is so vast it is impossible to foresee all that may happen.
- High-stakes environments: when the risk and/or cost of making an incorrect decision is high.
- Non-deterministic processes: when the behavior of the systems in charge of making decisions is not completely predictable.
- Black-box processes: when the causes of a given decision are difficult or impossible to discover (low explainability).
Because Human Review acts as the gatekeeper of a process, it is typically placed towards the end of it, as a final step. Human Review validates that the information we have can be trusted and that the corresponding actions we want to take are adequate.
Types of Human Review
Human Review can exist in more than one configuration, depending on:
- Scope: what is the extent of each review?
- Source: who has triggered this review?
Scope of Human Review
In its simplest form, a Human Review is nothing else than an approval process: correct decisions are approved and erroneous ones are rejected. A Human Review step of this kind would prevent undesired actions from being taken.
That said, the extent of this review process can be expanded. For instance, consider what to do with rejections in the previous example. In some cases, we might be fine with simply discarding incorrect decisions. In others, we might want to correct erroneous decisions so they can still go through and trigger subsequent actions.
It could even be that we decide to correct those mistaken decisions not because fixing each individual mistake is impactful (it likely isn't at scale), but because we want to understand what went wrong so we can prevent future mistakes in earlier steps of the process. This is a common strategy in Human in the Loop Machine Learning settings, where humans help ML models improve as bad predictions are spotted and addressed through model retraining.
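To make these scope options concrete, here is a minimal Python sketch of a review step that can approve, reject, or correct a decision, and keeps corrections around so they can later feed model retraining. All names are hypothetical illustrations, not an actual API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional


class Outcome(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    CORRECT = "correct"


@dataclass
class Review:
    item_id: str
    outcome: Outcome
    corrected_value: Optional[dict] = None  # only set when outcome is CORRECT


# Corrections double as labelled examples for a future retraining run.
retraining_examples: list[Review] = []


def apply_review(review: Review, execute: Callable[[str, Optional[dict]], None]) -> None:
    """Apply a single review. `execute` stands in for whatever downstream
    system acts on an approved (or corrected) decision."""
    if review.outcome is Outcome.APPROVE:
        execute(review.item_id, None)  # the original decision goes through unchanged
    elif review.outcome is Outcome.CORRECT:
        execute(review.item_id, review.corrected_value)  # the fixed decision still goes through
        retraining_examples.append(review)  # and the fix becomes training signal
    # REJECT: the decision is simply discarded; no downstream action is taken
```

The narrower "approval only" configuration is just this sketch without the CORRECT branch.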
Sources of Human Review
Human Review becomes a necessary step when we are taking actions on the basis of information and/or decisions we can't trust.
These sources of information and decisions are untrustworthy because their behaviour isn't reliably predictable. Some examples are:
- Users. Internal processes that take actions as a function of user activity shouldn't be blindly trusted. For example, imagine if anyone could censor any user on Facebook merely by reporting them.
- External data. Internal processes basing their actions on external information outside of the company's control are susceptible to mistakes. There is a reason why Quality Control processes are no stranger to web scraping setups.
- Machine Learning. Low interpretability, limited visibility and unpredictable behaviour are common traits of ML-based automation processes, which may warrant a human-based gatekeeper or circuit-breaker (see the sketch below).
Active Learning might be interpreted as another source of Human Review cases in Human in the Loop ML setups. Its goal is to prioritize what to review so as to maximize ML model improvement. However, because it acts as a triaging mechanism for an originating ML prediction rather than an actual source, we are keeping it outside of this taxonomy.
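As a rough illustration of the ML case, and of how an Active-Learning-style triage can sit on top of it, here is a small Python sketch that auto-approves high-confidence predictions and queues the rest for Human Review, ordered so the most uncertain cases are seen first. The threshold and field names are assumptions for illustration only:

```python
from dataclasses import dataclass


@dataclass
class Prediction:
    item_id: str
    label: str
    confidence: float  # model's probability for the predicted label, in [0, 1]


CONFIDENCE_THRESHOLD = 0.95  # assumed cut-off; tune per use case


def triage(predictions: list[Prediction]) -> tuple[list[Prediction], list[Prediction]]:
    """Split predictions into auto-approved ones and a human review queue."""
    auto_approved = [p for p in predictions if p.confidence >= CONFIDENCE_THRESHOLD]
    needs_review = [p for p in predictions if p.confidence < CONFIDENCE_THRESHOLD]
    # Active-Learning-style prioritisation: surface the least confident cases first,
    # since correcting them is expected to teach the model the most.
    needs_review.sort(key=lambda p: p.confidence)
    return auto_approved, needs_review
```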
Examples of Human Review
Below we illustrate a few practical examples of Human Review we've observed:
- Human Review in Machine Learning: an ML model doing the tedious work of spotting leaks in a gas pipeline, with a human review step that verifies the findings before dispatching a workforce to fix the issue.
- Human Review in Content Moderation: a content review step that approves or rejects user reports, in order to prevent malicious usage of the platform's reporting functionality.
- Human Review in Payments Fraud: a manual review step for Fraud agents who need to investigate payments flagged by their Fraud rules engine.
- Human Review in Workflow Automation: a complex and sophisticated execution flow might automate much of the work a human operator would otherwise have performed, but we still want a human in charge for peace of mind.
- Human Review in RPA: RPA bots aren't safe from encountering corner cases that block execution. Manual review processes offer a viable option to unblock these.
- Human Review in Web Scraping: similar to RPA, scraping bots will fetch data that isn't guaranteed to be structured as expected, which puts the integrity of the data at risk. Systematizing Quality Assurance through Human Review of such data helps ensure ingested data is trustworthy.
Setting up a Human Review process
Typically, a Human Review system will need to support the following: creating review tasks, dispatching them to the next available worker in the team, conveniently visualizing whatever information and decisions need to be approved, and propagating the outcomes of each review to the right systems downstream.
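For intuition, here is a toy Python sketch of those needs: creating tasks, handing them to the next available worker, and propagating each outcome downstream. The class and method names are hypothetical and this is not the Human Lambdas API, just a minimal in-memory illustration:

```python
import queue
from dataclasses import dataclass
from typing import Any, Callable, Optional


@dataclass
class ReviewTask:
    task_id: str
    payload: dict[str, Any]                    # the information the reviewer needs to see
    outcome: Optional[dict[str, Any]] = None   # filled in once the review is done


class ReviewQueue:
    """A toy, in-memory stand-in for a Human Review backend."""

    def __init__(self, on_complete: Callable[[ReviewTask], None]):
        self._pending: "queue.Queue[ReviewTask]" = queue.Queue()
        self._on_complete = on_complete         # e.g. a webhook or message-bus call

    def create_task(self, task: ReviewTask) -> None:
        self._pending.put(task)

    def claim_next(self) -> ReviewTask:
        """Called by the next available worker to pick up a task."""
        return self._pending.get()

    def complete(self, task: ReviewTask, outcome: dict[str, Any]) -> None:
        task.outcome = outcome
        self._on_complete(task)                 # push the result to the right downstream system
```

In a real setup, the in-memory queue would be backed by durable storage, a UI would cover the visualization need, and `on_complete` would be an integration (such as a webhook) into the systems that act on each outcome.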
At Human Lambdas this is our bread and butter. If you're curious about how we can help you run your Manual Review processes, don't hesitate to reach out to us.