Automated Code Reviews¶
New in version 3.0.
Review Board provides some tools for doing automated review when code changes are published.
By integrating static analysis tools, style checkers, and Continuous Integration (CI) builds into your review process, you can free up your developers to concentrate on the larger, more important issues.
Choosing Your Tools¶
Several tools can help with automated code review. Some are bundled with Review Board, and others are available through external or third-party integrations.
Review Bot
Our free Review Bot microservice connects your Review Board server to a wide ecosystem of code analysis tools and style checkers.
It can be installed on Linux within your network, or by using our Review Bot Docker images.
Jenkins
Jenkins is a full-featured CI service that can run in-house, allowing you to build and test your code automatically and post results back to Review Board.
Jenkins can work with any type of source code repository, making it a good choice for use with Review Board.
Support for Jenkins is included with Review Board, and requires a Jenkins plugin to activate.
CircleCI
CircleCI is a CI service that supports code hosted on GitHub and Bitbucket. It allows you to build and test your code automatically, posting results back to Review Board.
Support for CircleCI is included with Review Board.
Travis CI
Travis CI is a CI service that works with GitHub. It can build and test your code and perform other checks, posting results back to Review Board.
Support for Travis CI is included with Review Board.
Custom automated review tools can also be developed entirely in-house.
Configuring Automated Reviews¶
Configuration depends on whether you’re using a built-in automated review solution (such as Jenkins or CircleCI), or using an extension (such as Review Bot).
Automated reviews can be enabled for all review requests, or subsets based on conditions (such as review requests on specific repositories or assigned to specific review groups).
See the configuration guides for each integration to learn more.
Performing Automated Reviews¶
Most automated review tools will default to running automatically when a new review request has been published, or a new diff has been published to a review request.
Some tools can be configured to run manually instead. This is useful for tools that may take a long time to run (such as CI tools that need to build products and run test suites).
If a tool is configured to be run manually, it’ll be up to the owner of a review request to trigger the automated review through a Run button.
Automated Review Status and Results¶
When automated reviews are active for a review request, they’ll be presented as a list of Checks run (also known as status updates). These will appear under the review request overview and under any Review request changed overview.
Each status update lists:
The name of the automated tool being run
The status of the check (such as running, failed, or succeeded)
A link to additional build output (if provided by the tool)
A button to run the tool (if configured to run manually instead of automatically)
These will update as results come in.
Failing automated code reviews will come with a list of open issues, helping you track what needs to be fixed. These can be discussed and resolved like issues on any other review.
Developing Automated Review Tools¶
If you need a custom solution for automated code review (to integrate with in-house tools or compliance systems), you can build it on the Review Board Platform in one of three ways:
On top of Review Bot’s automated review platform.
All tools are built on top of Review Bot’s reviewbot.tools.base classes. See the existing source code for Review Bot’s built-in tools for examples, and the first sketch following this list.
On top of Review Board’s extension framework.
Extensions can listen to the review_request_published signal, run any necessary checks, and manage results with StatusUpdates entries (see the second sketch following this list).
Through a combination of WebHooks and RBTools.
By listening for new review requests via WebHooks, your own internal tools can perform checks and report back using rbt status-update (see the third sketch following this list).
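The first sketch shows what a custom Review Bot tool can look like. It is a minimal, hedged example: the import path, the handle_file() signature, get_patched_file_path(), and File.comment() are assumptions modeled on Review Bot’s built-in tools, and TodoChecker is a hypothetical tool name. Verify the details against the reviewbot.tools.base source for your Review Bot version.

    # Assumed import path; confirm it against your Review Bot version's source.
    from reviewbot.tools.base import BaseTool


    class TodoChecker(BaseTool):
        """Hypothetical tool that flags TODO markers left in changed files."""

        # These attributes mirror Review Bot's built-in tools.
        name = 'TODO Checker'
        version = '1.0'
        description = 'Flags TODO comments so they are resolved before shipping.'

        def handle_file(self, f, path=None, **kwargs):
            """Open a comment on each changed file line containing a TODO.

            The handle_file() signature, get_patched_file_path(), and
            File.comment() are assumptions based on Review Bot's built-in
            tools; verify them before relying on this sketch.
            """
            path = path or f.get_patched_file_path()

            if not path:
                return

            with open(path, encoding='utf-8', errors='replace') as fp:
                for line_num, line in enumerate(fp, start=1):
                    if 'TODO' in line:
                        f.comment('Unresolved TODO found.',
                                  first_line=line_num,
                                  num_lines=1)

Custom tools are typically packaged as their own Python module and registered with the Review Bot worker through a Python entry point; see Review Bot’s built-in tools and packaging for the exact steps.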
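The second sketch outlines the extension-based approach. It assumes the review_request_published signal, the SignalHook extension hook, and the StatusUpdate model available in Review Board 3.0+; import paths, field names, and state constants should be double-checked against your version, and run_in_house_checks() is a placeholder for your own analysis code.

    # Import paths follow Review Board 3.0+; double-check them for your version.
    from reviewboard.extensions.base import Extension
    from reviewboard.extensions.hooks import SignalHook
    from reviewboard.reviews.models import StatusUpdate
    from reviewboard.reviews.signals import review_request_published


    def run_in_house_checks(review_request):
        """Placeholder for your own analysis logic (hypothetical)."""
        return True


    class InHouseChecksExtension(Extension):
        """Runs in-house checks whenever a review request is published."""

        def initialize(self):
            # SignalHook disconnects automatically when the extension is
            # disabled.
            SignalHook(self, review_request_published, self._on_published)

        def _on_published(self, review_request=None, **kwargs):
            # Record a pending status update so reviewers can see that a
            # check is running. Field names and state constants mirror the
            # StatusUpdate model; confirm them for your Review Board version.
            status_update = StatusUpdate.objects.create(
                review_request=review_request,
                service_id='in-house-checks',
                summary='In-house checks',
                state=StatusUpdate.PENDING)

            passed = run_in_house_checks(review_request)

            status_update.state = (StatusUpdate.DONE_SUCCESS
                                   if passed
                                   else StatusUpdate.DONE_FAILURE)
            status_update.save()

In practice, you would hand the actual checks off to a background task rather than running them inside the signal handler, so publishing isn’t blocked while they run.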
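The third sketch outlines the WebHook and RBTools approach, using only the Python standard library. The payload field names and the rbt status-update arguments shown are assumptions: inspect a real WebHook delivery and run rbt status-update --help to confirm them, and add payload signature verification before using anything like this in production.

    import json
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer


    def run_checks_and_report(review_request_id):
        """Run in-house checks, then report back through RBTools.

        The rbt status-update arguments below are assumptions; see
        `rbt status-update --help` for the options your RBTools supports.
        """
        # ... run your analysis here, then publish the result, for example:
        subprocess.run(
            ['rbt', 'status-update', 'set',
             '--review-request-id', str(review_request_id)],
            check=True)


    class PublishedWebHookHandler(BaseHTTPRequestHandler):
        """Receives review_request_published WebHook payloads."""

        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            payload = json.loads(self.rfile.read(length) or b'{}')

            # The payload layout is an assumption; inspect a real delivery
            # from your server (and verify its signature) before relying
            # on it.
            review_request_id = payload.get('review_request', {}).get('id')

            if review_request_id is not None:
                run_checks_and_report(review_request_id)

            self.send_response(200)
            self.end_headers()


    if __name__ == '__main__':
        HTTPServer(('0.0.0.0', 8080), PublishedWebHookHandler).serve_forever()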