What is Human Review?

September 21, 2020

Bernat Fages

Whether you have heard it called Human Review, Manual Review or Content Review, it all means the same thing: a process in which a person or a team of people evaluates instances of data, then edits, approves or declines them.

The goal of Human Review is to insert human judgement into a computing process that can't be fully trusted to operate on its own. It is a mechanism for building trust in systems whose reliability cannot be taken for granted. This is usually the case with:

Because Human Review acts as the gatekeeper of a process, it is typically placed towards the end, as a final step. It validates that the information we have can be trusted and that the actions we want to take on it are adequate.

Types of Human Review

Human Review can exist in more than one configuration, depending on two factors: the scope of the review, and the source of the cases that need reviewing.

Scope of Human Review

In its simplest form, a Human Review is nothing more than an approval process: correct decisions are approved and erroneous ones are rejected. A Human Review step of this kind prevents undesired actions from being taken.
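As a sketch of that simplest configuration, consider a single gate function. The names here (ReviewItem, human_review) are illustrative, not any particular product's API: approved items flow on, rejected ones go no further.

```python
# A minimal sketch of an approval-style review gate.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewItem:
    item_id: str
    payload: dict        # the data or decision under review
    approved: bool = False

def human_review(item: ReviewItem, approve: bool) -> Optional[ReviewItem]:
    """Approve or reject one item; rejected items are simply blocked."""
    if approve:
        item.approved = True
        return item      # flows on to the next step of the process
    return None          # the undesired action never happens
```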

That said, the scope of this review process can be expanded. Consider, for instance, what to do with the rejections in the previous example. In some cases we might be okay with simply discarding incorrect decisions. In others we might want to correct them, so the fixed decisions can still go through and trigger subsequent actions.

We might even decide to correct those mistaken decisions not because fixing each individual mistake is impactful (at scale, it likely isn't), but because we want to understand what went wrong so we can prevent similar mistakes in earlier steps of the process. This is a common strategy in Human in the Loop Machine Learning settings, where humans help ML models improve: bad predictions are spotted, corrected, and addressed through model retraining.
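Here is a minimal sketch of that feedback loop, assuming a simple classification setting; the function name and the in-memory list are illustrative stand-ins. The reviewer's correction both unblocks the item and is logged as a labeled example for the next retraining run.

```python
# Hypothetical human-in-the-loop correction step: a wrong prediction is
# fixed by the reviewer, allowed to proceed, and recorded so the model
# can be retrained on it later.
retraining_examples = []  # accumulates (input, corrected_label) pairs

def review_prediction(item, model_prediction, reviewer_label):
    if reviewer_label == model_prediction:
        return model_prediction                 # approved as-is
    # The model was wrong: log the correction for retraining...
    retraining_examples.append((item, reviewer_label))
    # ...and still let the corrected decision go through downstream.
    return reviewer_label
```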

Sources of Human Review

Human Review becomes a necessary step when we are taking actions on the basis of information and/or decisions we can't trust.

These sources are untrustworthy because their behaviour isn't reliably predictable. Some examples are:

Active Learning might be interpreted as another source of Human Review cases in Human in the Loop ML setups. Its goal is to prioritize what gets reviewed so as to maximize ML model improvement, as sketched below. But because it acts as a triaging mechanism on top of an originating ML prediction, rather than as a source in its own right, we keep it outside this taxonomy.
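To illustrate what such triaging could look like, here is a hedged sketch of uncertainty sampling, one common Active Learning strategy; the function name and the binary-probability input format are assumptions. Items the model is least sure about are sent to review first, since correcting those tends to teach it the most.

```python
# Illustrative uncertainty sampling for binary classification: route
# the predictions closest to the 0.5 decision boundary to Human Review.
def prioritize_for_review(predictions, budget):
    """predictions: list of (item, probability_of_positive) pairs;
    budget: how many items the review team can handle."""
    by_uncertainty = sorted(predictions, key=lambda p: abs(p[1] - 0.5))
    return [item for item, _ in by_uncertainty[:budget]]

# e.g. prioritize_for_review([("a", 0.51), ("b", 0.98)], budget=1) -> ["a"]
```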

Examples of Human Review

Below we illustrate a few practical examples of Human Review we've observed:

Setting up a Human Review process

Typically, a Human Review system will need to support four things: creating review tasks, dispatching them to the next available worker on the team, conveniently visualizing whatever information and decisions need to be approved, and propagating the outcome of each review to the right systems downstream.
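To make those four needs concrete, here is a bare-bones sketch of such a system; the queue, the print-based rendering and the downstream callback are stand-ins, not Human Lambdas' actual API.

```python
# Minimal review pipeline covering the four needs above.
import queue

tasks = queue.Queue()

def create_task(data):
    """1. Create a review task."""
    tasks.put(data)

def dispatch_task():
    """2. Hand the next task to the next available worker."""
    return tasks.get()

def render(task):
    """3. Visualize the information the reviewer must judge."""
    print(f"Review needed: {task}")

def propagate(task, outcome, downstream):
    """4. Push the review outcome to the right system downstream."""
    downstream(task, outcome)
```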

At Human Lambdas this is our bread and butter. If you're curious about how we can help you run your Manual Review processes, don't hesitate to reach out to us.