Since the beginning of the year (2013), Red Hat (RH) / JBoss has been working on the most robust, scalable and easy-to-use version of OptaPlanner, formerly called Drools Planner. This new release marks a major milestone in both organizational and technical terms.
This article is intended for those considering migrating from earlier versions of Drools Planner to OptaPlanner and for those interested in the development of this framework. If you are unfamiliar with OptaPlanner, you can read about it here.
OptaPlanner to become a top-level JBoss Community project
In mid-April OptaPlanner graduated from the Drools project to become a top-level JBoss Community project, as described here. This has a considerable impact on the amount of attention and resources RH makes available to the project and improves the offerings for professional and support services.
Major technical improvements compared to the previous version 5.x
I would characterize the technical improvements compared to the previous version as huge! Indeed, the list of major architectural, performance, API and usability improvements is rather long. In this article I list those that impressed me the most:
Performance improvements
Subjectively: much faster. Objectively: hard to tell. During the migration from version 5.x to 6.0 I made a number of improvements to both my production rules and my moves. I don't know whether OptaPlanner's metaheuristic algorithms underwent any improvements. What I do know is that Drools Expert in version 6.0 uses the PHREAK algorithm (there are some performance comparisons on the internet), which is supposed to be much faster than the "old" RETE algorithm. To compare versions 5.x and 6.0 objectively, I would have to use exactly the same production rules and moves (factories). Alas, so far I haven't found the spare hours to implement that.
Adding up all the changes and improvements I just mentioned, my project's performance went from about 1500-3000 average score calculation counts per second (ACS) in version 5.x to a stunning 6000-22000 ACS in version 6.0! That surely is an impressive performance gain.
Reduction of framework boilerplate
A number of methods that the framework previously required are no longer needed in version 6.0. For instance:
- Solution.cloneSolution() is not required;
- Solution.equals() is not required;
- Solution.hashCode() is not required;
These methods were subject to frequent but boilerplate-like changes whenever one's domain model changed or evolved, and they were often prone to errors and side effects (at least in my case).
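In version 6.0 a minimal solution class can therefore boil down to something like the following rough sketch (the class, field and score type choices are my own placeholders; the framework now clones the working solution itself by default, so no clone/equals/hashCode plumbing is needed):

public class MySolution implements Solution<HardSoftScore> {

    private List<MyFact> factList;
    private List<MyPlanningEntity> entityList;
    private HardSoftScore score;

    @PlanningEntityCollectionProperty
    public List<MyPlanningEntity> getEntityList() {
        return entityList;
    }

    // problem facts are inserted into the working memory for rule evaluation
    public Collection<? extends Object> getProblemFacts() {
        return factList;
    }

    public HardSoftScore getScore() {
        return score;
    }

    public void setScore(HardSoftScore score) {
        this.score = score;
    }
}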
ScoreHolder replaces *ConstraintOccurrence
This one is on the production rule side (Drools Expert) and is my favorite. In previous versions, one had to logically insert constraint occurrences, accumulate them in a separate rule and pass the totals to the score calculator; OptaPlanner then uses that score during the search to determine the overall goodness of the current working solution. The code for this looked, for example, like this:
/**
* Before with Drools Planner 5.x
*/
rule "no assignments before kick-off"
when
$project : Project(kickOff != null)
$assignment : Assignment(project == $project, interval != null, $project.kickOff.isAfter(interval.start))
then
insertLogical( new IntConstraintOccurrence("assignment before kick off", ConstraintType.NEGATIVE_HARD, 1, $assignment) );
end
/**
* and at the end of rule-set
*/
//Accumulate hard constraints
rule "hardConstraintsBroken"
salience -1 //Do the other rules first (optional, for performance)
when
$hardTotal : Number() from accumulate(
IntConstraintOccurrence(constraintType == ConstraintType.NEGATIVE_HARD, $weight : weight),
sum($weight) //Vote for http://jira.jboss.com/jira/browse/JBRULES-1075
)
then
scoreCalculator.setHardConstraintsBroken($hardTotal.intValue());
end
On the one hand this is partly bothersome boilerplate code, and on the other hand the parametrization of *ConstraintOccurrence() has caused me headaches a couple of times.
Looking at the JavaDoc, one can see that the optional varargs parameter (Object…) enumerates the domain facts that caused the constraint occurrence. If one looks at the rule example above, one sees that only $assignment was passed as a cause for the IntConstraintOccurrence. One could now ask: "Why wasn't $project passed as a cause as well?" My answer to that question is: "Heck, I have no idea!" (and yes, I wrote this rule myself).
In version 6.0 everything looks nice and simple, like this:
/**
* Now with OptaPlanner 6.0
*/
rule "no assignments before kick-off"
when
$project : Project(kickOff != null)
$assignment : Assignment(project == $project, interval != null, $project.kickOff > interval)
then
scoreHolder.addHardConstraintMatch(kcontext, -1);
end
/* AND THERE IS NO CONSTRAINT ACCUMULATION RULE NEEDED ANYMORE! */
More transparent planning result reporting
This is a very important step towards user-friendly and transparent planning result reporting. Most of my customers ask me: "What does a solution score of -2hard/-3soft mean? Which constraints were violated, and which constraints could be met?" With OptaPlanner v6.0 the possibilities to report this improve somewhat. With the code example below one can generate a fairly detailed constraint match report:
@SuppressWarnings("unchecked")
private ArrayList generateSchedulingConstraintList(MySolution mySolution) {
ArrayList myConstraintList = new ArrayList();
KieBase kieBase = ((DroolsScoreDirectorFactory) solver.getScoreDirectorFactory()).getKieBase();
ScoreDirector scoreDirector = solver.getScoreDirectorFactory().buildScoreDirector();
if (!(scoreDirector instanceof DroolsScoreDirector)) {
return;
}
scoreDirector.setWorkingSolution(mySolution);
scoreDirector.calculateScore();
for (ConstraintMatchTotal cmt : scoreDirector.getConstraintMatchTotals()) {
Rule rule = kieBase.getRule(cmt.getConstraintPackage(), cmt.getConstraintName());
for (ConstraintMatch cm : cmt.getConstraintMatchSet()) {
List justificationList = new ArrayList();
for (Object justification : cm.getJustificationList()) {
// workaround for nested collections of justifications
if (justification instanceof Collection) {
for (MyFact singleJustification : ((Collection) justification)) {
justificationList.add((MyFact) singleJustification);
}
} else {
justificationList.add((MyFact) justification);
}
}
if (justificationList.size() > 0) {
MyConstraint myConstraint = new MyConstraint(cmt.getConstraintPackage(), cmt.getConstraintName(), cmt.getScoreLevel(), justificationList);
myConstraintList.add(myConstraint);
myConstraint.setMetaData(new ArrayList());
for (Entry meta : rule.getMetaData().entrySet()) {
myConstraint.getMetaData().add(new MyConstraintMetaData(myConstraint, meta.getKey(), meta.getValue().toString()));
}
}
}
}
return myConstraintList;
}
This code, although it "does the job", is still far from a sufficient solution. I have several arguments why:
- The code is too long and contains too much boilerplate
- Too many framework internals have to be used to get the result
- There is no direct support for rule metadata (it has to be extracted manually with the mechanics provided by the Drools Expert API)
- I personally dislike how the justifications are made available (copying back and forth, nested collections; all this adds complexity and reduces intuitiveness)
- One has to provide one's own data structures for storing constraint information that, in my opinion, should be natively supported by the framework
Configurable Selectors
One of the difficult tasks to accomplish during OptaPlanner-based application development is to ensure an appropriate selection. I'll define an appropriate selection as a selection of planning facts, planning entities and planning values that allows OptaPlanner to search towards the best possible solution, and that at the same time is small enough to fit into (working) memory. In version 6.0 a great effort was made to move selection generation from application Java code into configuration. At the moment I solely use the features that allow the generation of different types of selections to be distributed probabilistically, like this:
...
<moveListFactory>
    <fixedProbabilityWeight>4.0</fixedProbabilityWeight>
    <moveListFactoryClass>MySwapMoveFactory</moveListFactoryClass>
</moveListFactory>
<moveListFactory>
    <fixedProbabilityWeight>6.0</fixedProbabilityWeight>
    <moveListFactoryClass>MySimpleMoveFactory</moveListFactoryClass>
</moveListFactory>
...
The new selection mechanism offers many other powerful features to improve selection performance and scalability, e.g.:
- just-in-time planning value generation helps to save RAM
- entity and value selectors/filters allow constraining the selection to particular entities, particular variables, or even filtering for particular values (see the small filter sketch after this list)
- caching allows controlling when moves are generated, thus reducing the RAM and CPU time required for the generation (mimic selection has a similar effect)
- selection order allows controlling the selection distribution of moves, planning entities, planning values, …
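As a rough illustration of such a filter (the entity class and its "locked" flag are my own placeholders), a selection filter that is referenced from the selector configuration by its class name can look roughly like this:

// a sketch of an entity filter that keeps already fixed ("locked") entities out of the selection
public class MyLockedEntityFilter implements SelectionFilter<MyPlanningEntity> {

    @Override
    public boolean accept(ScoreDirector scoreDirector, MyPlanningEntity entity) {
        // only entities that are not locked may be selected for moves
        return !entity.isLocked();
    }
}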
Multi-level (aka bendable) scoring
This one is important to actually find a solution to a number of real-life planning problems. In older versions of OptaPlanner only a limited number of score types was available (such as either simple or hard and soft). Of course, one could always implement one's own (Java) scoring mechanism, but it is quite hard to do right, and in the case of Java scoring one cannot take advantage of the production rule system (Drools Expert). With version 6.0 a new configurable scoring mechanism was added: the bendable score, allowing for multiple levels of hard and soft score (a special case of that is the hard, medium and soft score, medium and soft being two levels of soft score).
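As a rough sketch (assuming, say, a configuration with two hard levels and one soft level; the concrete numbers are made up), such a score can be constructed and handled like any other score type:

// a bendable score sketch: the first hard level is satisfied (0),
// the second hard level is violated (-2), the single soft level is at -3
BendableScore score = BendableScore.valueOf(new int[] {0, -2}, new int[] {-3});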
First step towards the separation of API and Implementation
This one, in my opinion, is also a very important step towards establishing a robust and elegant API design, a challenge the OptaPlanner team has apparently taken up. The application programming interface and the implementation of the OptaPlanner framework are being separated! In version 6.0, org.optaplanner.core now has the following package structure:
org.optaplanner.core
+- api
+- config
+- impl
This may not seem like such a big deal, but it is! A good framework means an architecture and a design that are intuitive and easy to use by hundreds if not thousands of application developers. And designing this is far more than just moving classes into the *.api or the *.impl package respectively.
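To illustrate, typical application code can now stay entirely on the api/config side; bootstrapping a solver looks roughly like this (the config resource name and my solution class are of course my own):

// build and run a solver using only the public api; no impl classes involved
SolverFactory solverFactory = SolverFactory.createFromXmlResource("org/example/mySolverConfig.xml");
Solver solver = solverFactory.buildSolver();
solver.solve(myInitialSolution);
MySolution bestSolution = (MySolution) solver.getBestSolution();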
@ValueRangeProvider
Another nice improvement on the way to a truly intuitive and elegant framework design are the ValueRangeProviders. Although the previous solution was not bad, the new one offers looser coupling and a more explicit demarcation of the elements involved in interactions with the OptaPlanner API. Here's what I mean:
/**
* Before with Drools Planner 5.x
*/
@PlanningEntity
public class MyPlanningEntity {

    @PlanningVariable(strengthWeightFactoryClass = IntervalStrengthWeightFactory.class)
    // the annotation below means: values come from the Solution and are returned by getIntervalList()
    @ValueRangeFromSolutionProperty(propertyName = "intervalList")
    public Interval getInterval() {
        return interval;
    }
    ...

public class MySolution implements Solution {

    // no annotation here!
    public List<Interval> getIntervalList() {
        return intervalList;
    }
...
/**
* Now with OptaPlanner 6.0
*/
@PlanningEntity
public class MyPlanningEntity {

    // the annotation below means: values come from the ValueRangeProvider with the id "intervalList"
    @PlanningVariable(valueRangeProviderRefs = { "intervalList" }, strengthWeightFactoryClass = IntervalStrengthWeightFactory.class)
    public Interval getInterval() {
        return interval;
    }
    ...

public class MySolution implements Solution {

    @ValueRangeProvider(id = "intervalList")
    public List<Interval> getIntervalList() {
        return intervalList;
    }
...
As one can see, the declaration of the planning variable in version 6.0 is consolidated: the specific annotations such as @ValueRangeFromSolutionProperty are dropped and the more general @ValueRangeProvider is introduced (this annotation also improves code readability, since it is now apparent which properties provide planning values).
Introduction of Generics
I had been wondering for quite some time when the OptaPlanner team would introduce support for generics. That time is now, featuring two elegant improvements:
Generic *WeightFactory
Especially while learning the concepts of OptaPlanner, and in addition to the rather detailed and well-structured documentation, the usage of generics helps an inexperienced user a lot. God knows how many hours I initially invested to understand and implement my first WeightFactory. Below are two code fragments that demonstrate this:
/**
* Before with Drools Planner 5.x
*/
public class MyWeightFactory implements PlanningValueStrengthWeightFactory {

    @Override
    public Comparable createStrengthWeight(Solution solution, Object planningEntity) {
        MySolution mySolution = (MySolution) solution;
        MyPlanningEntity myEntity = (MyPlanningEntity) planningEntity;
        return new MyEntityStrengthWeight(mySolution.getWeight(myEntity), myEntity.getId());
    }
/**
* Now with OptaPlanner 6.0
*/
public class MyWeightFactory implements SelectionSorterWeightFactory<MySolution, MyPlanningEntity> {

    @Override
    public Comparable createSorterWeight(MySolution mySolution, MyPlanningEntity myEntity) {
        return new MyWeight(myEntity, mySolution.getWeight(myEntity));
    }
Generic Move*Factory
The MoveFactories got generics-pimped as well; here is a short comparison of the code:
/**
* Before with Drools Planner 5.x
*/
public class MyMoveFactory implements MoveListFactory {

    @SuppressWarnings("unchecked")
    @Override
    public List createMoveList(@SuppressWarnings("rawtypes") Solution solution) {
        MySolution mySolution = (MySolution) solution;
/**
* Now with OptaPlanner 6.0
*/
public class MyMoveFactory implements MoveListFactory<MySolution> {

    @Override
    public List<Move> createMoveList(MySolution mySolution) {
What’s missing
I personally miss two things in OptaPlanner:
1. The constraint reporting is, for me personally, the most lacking feature in OptaPlanner. I can only speculate why it has been "neglected" up to now; my guess is that OptaPlanner started as a research & development or proof-of-concept project, and as such it was important to prove that it performs well. I draw this speculation from the reporting/benchmarking mechanisms available in OptaPlanner. The scalar scores are good for comparing different planning runs or for quantitatively comparing OptaPlanner's solutions to two different problems. For a qualitative analysis, a detailed constraint report is a must, imho! Also, the benchmarking support in OptaPlanner is quite good (you can even visualize it graphically and generate an HTML-based report that is just beautiful); this too, I guess, comes from the intention to prove that the framework performs well.
2. The age of big data and machine learning is almost upon us, and since OptaPlanner or, for that matter, the whole Drools ecosystem is actually predestined to solve voluminous problems with highly complex correlations and rules, I wonder when that will come to OptaPlanner and consorts. I personally think that multi-threading or another form of parallelism, along with the ability for OptaPlanner to learn (some ideas such as hyper-heuristics are already in the air), will arrive quickly.
Summary
By all means use OptaPlanner v6.0! It is better, more robust and better-performing than all its predecessors. And even if you don't care about the predecessors, try it anyway, since it is a beautiful and easy-to-use framework for building solutions to planning problems.
reinis.
One response to “OptaPlanner (a.k.a. Drools Planner) – the brand new version 6”
Great article, Reinis!
To answer the question about the 2 big missing things in OptaPlanner:
1) The constraint reporting has not been detailed further yet, because it's still unclear to me (despite your welcome jiras [1]) what exactly is missing and how we could provide it. If you could provide some more practical examples (preferably in the jira PLANNER-215), I am sure we'll work it out.
2) We're working on that, both in OptaPlanner and Drools separately. The big challenge is to go multi-threaded without sacrificing important scalability algorithms (most notably incremental score calculation). Otherwise, we'd just get a slower algorithm which uses more cores (and we're not going to do that just for the sake of being able to tick the "multi-threaded" checkbox on our feature list).
[1] https://issues.jboss.org/browse/PLANNER-216?jql=project%20%3D%20PLANNER%20AND%20reporter%20in%20%28reinis%29