Edit Review Improvements/Proposed Huggle improvements

Edit Review Improvements (ERI) is a WMF Collaboration Team project seeking to improve the edit-review process generally and, in particular, to reduce the negative effects current processes can have on new editors to the wikis. Huggle is a powerful and popular edit-review and anti-vandalism tool. While semi-automated patrol tools like Huggle are vitally important to protecting the wikis’ integrity, a body of research suggests that the use of such tools can have the unintended consequence of discouraging and even driving away good-faith new editors.

To address this issue, the Collaboration Team is researching a suite of improvements to Huggle. By providing ORES edit-scoring data in a new, user-friendly display, these improvements will help Hugglers target their efforts more efficiently. The new tools will also enable reviewers to know when an edit under review was made by a wiki newcomer, and whether the newcomer was acting in good faith. With this information in hand, reviewers will be able to calibrate their responses accordingly.

Community participation is vital to this project’s success, given Huggle’s history as a community-created and community-maintained tool. This page lays out the project's objectives and proposed features as a first step towards getting that community feedback.

Goals

[Mockup of the proposed new Huggle features. Note the additional row of icons next to the standard Huggle icons in the edit queue.]

While a thorough review of Huggle could doubtless turn up a long list of possible improvements, a total rewrite of the tool is not within the scope of the ERI project. The following goals are consistent with ERI and have been formulated with an eye both toward empowering Hugglers and helping them to provide a better review experience for new users.

To help Huggle users fight vandalism more effectively:

  • Provide a clear and user-friendly display of ORES edit-quality predictions, enabling reviewers to target problem edits more efficiently.
  • Provide better data and a more user-friendly display of editor history, thereby enabling reviewers to see individual edits in context and to more easily patrol a vandal’s past edits.
[Detail view of the Queue window. The new column of icons will help reviewers more easily spot problem edits and better understand editors’ experience level and intentions.]
  • Improve reviewers’ ability to create custom filters by including more data types as options, enabling reviewers to build edit queues more tailored to their interests.
  • Switch Huggle over to ingest the new ReviewStream feed, improving overall speed and making future feature development easier (see the sketch after this list).
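
ReviewStream is still being designed, so any consumer code is necessarily speculative. If the feed ends up resembling the existing EventStreams service (server-sent events, one JSON object per change), Huggle’s ingestion loop might look roughly like the Python sketch below; the stream URL and the hand-off function are placeholders, not real endpoints.

    import json
    import requests

    # Placeholder URL: ReviewStream's endpoint and schema are not yet final.
    # The "data:" framing below matches the existing EventStreams SSE format.
    STREAM_URL = "https://stream.wikimedia.org/v2/stream/reviewstream"

    def consume_stream():
        """Read server-sent events and hand each change to the edit queue."""
        with requests.get(STREAM_URL, stream=True) as resp:
            for line in resp.iter_lines():
                if line.startswith(b"data: "):
                    change = json.loads(line[len(b"data: "):])
                    enqueue(change)  # hypothetical hand-off into Huggle's queue

    def enqueue(change):
        # Stand-in for Huggle's real queue logic.
        print(change.get("title"), change.get("user"))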
 
[In the new icon system, each icon communicates three things about an edit: color indicates edit quality, facial expression denotes the good- or bad-faith prediction, and shape tells which edits were made by Newcomers.]

To help Hugglers provide a better review experience to good-faith newcomers:

  • Provide a clear indication of editors’ experience level, enabling reviewers to easily spot newcomers in the edit queue.
  • Provide a clear and user-friendly display of ORES good-faith predictions, enabling reviewers to identify edits that, while they may have problems, were made in good faith.
  • Enable reviewers, if they choose, to filter out edits by good-faith newcomers (leaving such edits for reviewers more focused on supporting new users).

Planned improvements


User-friendly display of Quality, Intent and Experience data

Goal

Enable reviewers to prioritize their work more effectively and to recognize when an edit under review was made by a newcomer who, while struggling, was working in good faith.

Proposed features

In the edit queue, a new column of icons will add to and run alongside the existing Huggle edit icons. The new icon system will be based on newly standardized classifications designed to make the following information easy to understand and use:

  • Edit-quality predictions: Machine-learning predictions of edit quality are presented in one of four categories, ranging from “Very likely good” to “Very likely has problems.”[1]
  • User-intent predictions: Machine-learning predictions of whether or not an edit was made in good faith are ranked using three levels that range from “Very likely good faith” to “Likely bad faith.”
  • User experience level: To provide more visibility for edits made by new users, experience level is classified as one of the following (see the sketch after this list):
    • Newcomer: fewer than 10 edits and 4 days of activity
    • Learner: more edits and days of activity than a Newcomer, but fewer than an Experienced user. (On English Wikipedia, this corresponds to autoconfirmed status.)
    • Experienced user: more than 30 days of activity and 500 edits. (On English Wikipedia, this corresponds to extended confirmed status.)
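
The proposal fixes these category labels but not the underlying score thresholds. As a rough sketch, the Python below shows how raw ORES probabilities and simple user statistics might be bucketed into the classifications above. The numeric cutoffs and the middle quality/intent labels are hypothetical placeholders; only the experience thresholds come from the list above.

    # Sketch only: the cutoffs and middle labels are hypothetical placeholders,
    # not decided values. Lists are keyed on ORES probabilities, highest first.
    QUALITY_LABELS = [  # keyed on the ORES "damaging" probability
        (0.90, "Very likely has problems"),
        (0.60, "Likely has problems"),
        (0.30, "May have problems"),
        (0.00, "Very likely good"),
    ]

    INTENT_LABELS = [  # keyed on the ORES "goodfaith" probability
        (0.70, "Very likely good faith"),
        (0.35, "May be bad faith"),
        (0.00, "Likely bad faith"),
    ]

    def quality_category(p_damaging):
        """Map the ORES 'damaging' probability to one of the four quality labels."""
        return next(label for cutoff, label in QUALITY_LABELS if p_damaging >= cutoff)

    def intent_category(p_goodfaith):
        """Map the ORES 'goodfaith' probability to one of the three intent labels."""
        return next(label for cutoff, label in INTENT_LABELS if p_goodfaith >= cutoff)

    def experience_level(edit_count, days_active):
        """Classify a user per the thresholds in the list above
        (exact boundary handling is an assumption)."""
        if edit_count < 10 and days_active < 4:
            return "Newcomer"
        if edit_count >= 500 and days_active >= 30:
            return "Experienced user"
        return "Learner"

    # e.g. quality_category(0.97) -> "Very likely has problems"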

When the user rolls over any edit, a tooltip displaying full descriptive information about the edit will appear. (See the mockups for more detail.)

Better User Info data

Goal
 
[Mockup of the User Info window showing enhanced editor-summary and edit-history data.]

Enable reviewers to more easily understand an edit’s context. When vandalism is detected, help reviewers trace and address a bad actor’s past activity.

Proposed features

As shown in the mockup, we propose to enhance the User Info window with both a richer summary of notable actions drawn from the user’s Talk page and better data about past edits.

  • Enhanced summary data: The experimental edit-review program Snuggle showed how a revealing summary of an editor’s recent interactions with other wiki users can be constructed by pulling tags and other data from the user’s Talk page. Such data would include instances of warnings, welcomes, Teahouse invitations, blocks, deletion proposals, etc. (To avoid permanently stigmatizing editors, such a summary will probably look back over only a limited period.)
  • Enhanced edit-history data: At present, the User Info window provides only minimal data about an editor’s past edits (the relevant page name, the time and date of the edit, and the edit number). Providing Quality and Intent rankings of past edits and indicating which edits were reverted will enable reviewers to get a fuller picture of an editor’s past activities (see the sketch after this list). In the event the editor under review is a vandal, reviewers will be able to click on and review any past suspicious edits that weren’t reverted.
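
How Huggle would fetch this data is an implementation question. As a rough illustration, the Python sketch below pulls a user’s recent edits from the MediaWiki API and scores them in one batch against the ORES v3 endpoint; both services exist, but the glue code and field handling are only an assumption about how the User Info window might be populated (revert detection is not sketched here).

    import requests

    API_URL = "https://en.wikipedia.org/w/api.php"
    ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/"

    def recent_edits_with_scores(username, limit=25):
        """Fetch a user's recent edits and attach ORES damaging/goodfaith scores."""
        contribs = requests.get(API_URL, params={
            "action": "query",
            "list": "usercontribs",
            "ucuser": username,
            "uclimit": limit,
            "format": "json",
        }).json()["query"]["usercontribs"]
        if not contribs:
            return []

        # Score all revisions in a single batched ORES request.
        revids = "|".join(str(c["revid"]) for c in contribs)
        scores = requests.get(ORES_URL, params={
            "models": "damaging|goodfaith",
            "revids": revids,
        }).json()["enwiki"]["scores"]

        for c in contribs:
            s = scores.get(str(c["revid"]), {})
            c["p_damaging"] = (s.get("damaging", {}).get("score", {})
                                .get("probability", {}).get("true"))
            c["p_goodfaith"] = (s.get("goodfaith", {}).get("score", {})
                                 .get("probability", {}).get("true"))
        return contribs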

Add ERI data to Queue Filters

Goal

Enable reviewers to focus their efforts more narrowly on the types of edits that interest them most. Encourage reviewers to use the new ERI data either to find and assist good-faith new users or to exclude such users from edit queues, leaving them to reviewers who specialize in new-user support.

Proposed features

The user-friendly display of Quality, Intent and Experience data described above will give reviewers new ways to evaluate and screen edits. We can automate that process and improve efficiency even further by letting reviewers filter the edit queue by these new data types.

  • New standard filter offerings: By adding a suite of new standard offerings to the queue filter menu, we can encourage use of the new data types. Such predefined filters could, for example, target “Likely vandalism” or “Problem edits by good-faith newcomers” (see the sketch after this list).
  • New customization options: Huggle includes a tool (in Preferences) that lets reviewers define custom edit-queue filters. Adding Quality, Intent and Experience data as choices in that tool will give reviewers new options for creating even more powerful and nuanced filters.
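
The filter internals are not specified in this proposal; conceptually, though, each predefined or custom filter reduces to a predicate over the new per-edit data. A minimal sketch follows, reusing the category labels from earlier; the field names ("quality", "intent", "experience") are hypothetical and assume each queue entry already carries those classifications.

    # Field names are hypothetical: they assume each queue entry carries the
    # Quality, Intent, and Experience classifications described above.

    def problem_edits_by_goodfaith_newcomers(edit):
        return (edit["quality"] in ("Likely has problems", "Very likely has problems")
                and edit["intent"] == "Very likely good faith"
                and edit["experience"] == "Newcomer")

    def likely_vandalism(edit):
        return (edit["quality"] == "Very likely has problems"
                and edit["intent"] == "Likely bad faith")

    # A custom queue is then just the incoming feed run through a predicate:
    # queue = [e for e in incoming_edits if problem_edits_by_goodfaith_newcomers(e)]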

Note

  1. While ORES predictions about edit quality are already available in Huggle, they are presented in a way that does not let reviewers easily use them to prioritize which edits to examine. In the current interface, reviewers must hover over an edit in the queue to see its score, and no interpretation is offered beyond a bare number on an unexplained scale.