Building Applications with Deep Learning: Expectations vs. Reality

2 Comments · Posted in Software Development, Software Engineering


Nowadays, building applications involves many technologies: technologies to render user interfaces, to retrieve and store data, to serve many users, to distribute computing, and so on. Increasingly, certain requirements imply the use of neural networks. So what is the reality of building enterprise applications with the available state-of-the-art neural network technology?


I am not a scientist. I am curious, like to understand how things work, have analytical aptitude and love math, but… this is not what I am being paid for. During working hours, my task is to develop real-world applications in a timely and cost-efficient manner. And thankfully there is plenty of available technology, aka tools, aka frameworks, that allows me to do exactly that. Without understanding magnetism, I am still able to store information. Without understanding query optimization principles, I am still able to write efficient data queries, and without knowing how to fill the memory of a graphics card, I am still able to render a user interface on the monitor. There is even this funny quotation[1] from the definition of technology on Wikipedia:

“(Technology)… can be embedded in machines which can be operated without detailed knowledge of their workings.”

I expected that using neural network technology would be no different; that I could, merely by obeying the constraints of a framework's design, applying patterns, avoiding anti-patterns and gluing it together with all the other relevant technologies, develop a real-world application without detailed knowledge of every technology I use. If I still haven't convinced you, read on here…


The reality is very different. Someone willing to employ neural network technologies at the moment[2] is forced to do scientific work or at least have an in-depth understanding of neural network methods, despite a number of publicly available technologies created by the brightest and most resourceful minds of our age.


Sure, there is a plethora of reasons for this: the technology is not mature enough, some fundamental issues are still unsolved, there is too little pressure from industry[3], etc. However, some reasons are focus-related. I will address those which became obvious to me during work on a real-world application. In particular:

  • Kicking-off with the Technology
  • Tools to move from experiments to real applications
  • Development Language
  • The Design of the Deep Learning Framework

Coming to Terms with Deep Learning

Background Story

Our project started in 2014 with the development of a recommendation engine for finding solutions in a text corpus based on documented customer contacts with an after-sales support organisation. After successfully implementing a number of use cases based on statistical approaches with token frequency and collaborative filtering, plus a number of data quality improvements involving, among others, advanced natural language processing techniques, we rolled out a number of productive systems.


In 2016 we turned our attention to neural networks and deep learning. Having had great success with Java (actually, Scala), Spark and the available Java-based machine learning frameworks, our first choice was deeplearning4j[4] version dl4j-0.4-rc3.9 (aka dl4j).


It was spring 2016, and we got annoyed with dl4j. In retrospect, I see that the main drivers of our annoyance were less the framework itself and more our expectations. What we expected was yet another robust enterprise framework. OK, the “0.4” and “rc” in the version number should have given us a hint about the maturity of the framework, but we were ignorant. At that time, getting dl4j to work for us was complicated: we did not manage to make it run on the GPU backend, and even to make the CPU backend work, we had to compile the framework ourselves, which felt like too much additional work that kept us from fulfilling our main task – implementing a neural network that would learn stuff for our use case. After two months of trial & error and parallel experiments with a well-known Python-based framework, we decided to switch to that Python framework and suspend work on the dl4j-based network implementation. Oh, the configuration of the Python framework was as complicated as that of dl4j; we just got luckier with it, that's all.


By the end of November 2016, seven months later, we still hadn't managed to build a network configuration that would converge with data from our domain. After the initial success with building toy models (MNIST, seq2seq and some others), we had decided that the Python framework was the most promising, but boy did we err. There were plenty of assumptions about what we could have gotten wrong, but no visible direction that would enable us to succeed.


At that time, a colleague of mine, Wolfgang Buchner[5], mentioned that he had recently seen that dl4j had undergone a major revamp. We immediately attempted to build an experimental model with dl4j version 0.7.2, and within two weeks we actually succeeded. Within the next two weeks, our model converged to a satisfactory level with our actual data. Four weeks.


Of course, no one was very optimistic at the beginning, so the result surprised us. Reflecting on this surprise, I attempted to analyze the main factors that, in my opinion, helped us succeed, and I came to the conclusion that there were four.

Kicking-off with the Technology

There are times when it's OK to skip the documentation and move straight to the code. I personally don't often need to read documentation to understand a framework implementing the MVC pattern or an ORM framework, because these are well-established patterns provided by well-established frameworks.


In the case of neural networks, I do have to read the documentation – if there is any at all – to kick off a project. Sure, there are plenty of papers on arXiv and great lectures on YouTube from renowned MIT professors on KLD, entropy, regressions, backprop and whatnot. But theoretical explanations of a principle, and the capability to write code that implements that principle, are two very different animals.


Dl4j has two strengths when it comes to helping someone at the start:

  • Documentation. Framework documentation is not my first choice for understanding principles, but definitely the first choice if I want to be able to start writing code really fast. The reason: it focuses on making things work instead of explaining algorithms' working principles in depth, and it focuses on end-to-end use cases, including pre-processing data and giving advice on the “dark art” of hyper-parameter tuning. This I have never seen in the documentation of other deep learning frameworks;
  • Community. I have been hanging around in every deep learning community I could find. Dl4j has the most vibrant, active, patient and open community I have experienced. Yes, most of the answers still come from Skymind people, but there is always someone on the dl4j gitter channel[6] who has a couple of good hints up their sleeve.


In general, I have the feeling that the intention of the dl4j community is to build real applications. Based on my experience with other deep learning communities, I feel that their intention is to discuss the topics of their Ph.D. theses or prove this or that theorem.

Tools to move from experiments to real applications

Dl4j is an ecosystem. And as an ecosystem, it provides a number of tools to pre-process data or read it from different formats, to integrate the framework with other (e.g. consumer) technologies, and to semi-automatically tune the hyperparameters of a model.


There is one tool provided by dl4j, above all others, that has had a massive impact on the success of our project so far: the so-called dl4j User Interface, or UI. It is a web page automatically created by the framework (with minimal configuration, literally five lines of code) that shows graphs of some parameters during network training:
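For reference, those five lines look roughly like this in dl4j 0.7.x. This is a sketch from memory: the package names shift between versions, the `deeplearning4j-ui` dependency must be on the classpath, and `model` is assumed to be an already-configured `MultiLayerNetwork`.

```java
import org.deeplearning4j.api.storage.StatsStorage;
import org.deeplearning4j.ui.api.UIServer;
import org.deeplearning4j.ui.stats.StatsListener;
import org.deeplearning4j.ui.storage.InMemoryStatsStorage;

// Start the training UI and wire the model's training stats into it.
UIServer uiServer = UIServer.getInstance();
StatsStorage statsStorage = new InMemoryStatsStorage();
uiServer.attach(statsStorage);
model.setListeners(new StatsListener(statsStorage));
// The UI is then reachable in the browser (by default on localhost) during training.
```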


By itself, that would merely be nice – if you can “read” this analysis data (which, by the way, does not happen by default). So dl4j goes a step further and provides extensive and very concrete documentation[7] on how to interpret and analyse the readings, even giving very particular advice on tuning the network configuration. That really made a difference for our project. I am posting the picture of the UI below, but seriously, just navigate to the visualization documentation page of dl4j, where you can read about it in much more detail.

Development Language

To my astonishment, most deep learning frameworks are implemented in dynamically typed languages. Dynamic typing is a good thing in many cases, but I believe it is the worst possible choice when developing deep learning software.


If you have already worked with some deep learning framework(s), haven't you wondered why each and every framework provides a number of classes that download, pre-process and feed the data into the neural network? I haven't seen such a thing in any other class of frameworks, but I have a guess as to why it is so: namely, because the data has to be quantified and formatted in a rather complex structure, and the transformation of data into a form readable (and learnable) by the network is damn difficult.
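To make “quantified and formatted” concrete: even something as simple as feeding words to a network means mapping each token to an index and then to a numeric vector. A minimal, framework-free sketch (the vocabulary and the one-hot scheme are made up for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class OneHotDemo {
    // A tiny, hypothetical vocabulary from a support-ticket domain.
    static final List<String> VOCAB = Arrays.asList("printer", "error", "paper", "jam");

    // Encode a token as a one-hot vector: all zeros except the token's index.
    static double[] oneHot(String token) {
        double[] vec = new double[VOCAB.size()];
        int idx = VOCAB.indexOf(token);
        if (idx < 0) throw new IllegalArgumentException("unknown token: " + token);
        vec[idx] = 1.0;
        return vec;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(oneHot("paper"))); // [0.0, 0.0, 1.0, 0.0]
    }
}
```

Real pipelines add padding, masking, batching and normalization on top of this, which is exactly why every framework ships its own data-feeding classes.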


And then we have this dynamic language that is so implicit that I literally NEVER know what the hell method A or method B is returning. And when I look at method A, I see it calling, at some point, method A', which in turn calls A'', and so on and so forth, until I reach the bottom of the stack where the data array is instantiated. By the time I get there, I have already forgotten what I wanted to accomplish and am instead trying to figure out the implementation of some utility method of the framework.


In a domain that is so data-centric, where data structure is so important and a model's ability to learn is so dependent on the correctness of the data, how, for heaven's sake, can someone select a dynamic language as the development language?


Fun fact: when you create matrix(10, 255, 64) for training a recurrent neural network in a well-known framework, you get 10 sequences of 255 elements of size 64; in dl4j, instead, you get 10 sequences of 64 elements of size 255. How can it not be important to know in advance which data structure which method will return?
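The difference is easy to demonstrate without any framework. Holding a batch as a plain 3-D array, the same numbers mean entirely different things depending on which axis is “time” and which is “features”. The helper below, purely illustrative, translates between the two conventions by swapping the last two axes:

```java
public class ShapeDemo {
    // Convert [batch][timeSteps][features] to [batch][features][timeSteps],
    // i.e. translate between the two RNN input layouts described above.
    static double[][][] swapLastAxes(double[][][] in) {
        int batch = in.length, time = in[0].length, features = in[0][0].length;
        double[][][] out = new double[batch][features][time];
        for (int b = 0; b < batch; b++)
            for (int t = 0; t < time; t++)
                for (int f = 0; f < features; f++)
                    out[b][f][t] = in[b][t][f];
        return out;
    }

    public static void main(String[] args) {
        double[][][] pythonStyle = new double[10][255][64]; // 10 sequences, 255 steps, 64 features
        double[][][] dl4jStyle = swapLastAxes(pythonStyle); // 10 sequences, 64 rows, 255 columns
        System.out.println(dl4jStyle[0].length + "x" + dl4jStyle[0][0].length); // 64x255
    }
}
```

Feed a network data in the wrong one of these two layouts and it will happily train on garbage, which is why I want the type system and the documentation to pin this down.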


Dl4j is developed in Java. And although Java itself is not the most innovative language out there, it offers two things extremely important to me and my teammates: type safety, and its youngest “cousin” Scala – one of the languages best adapted for machine learning out there.

The Design of the Deep Learning Framework

What is available out-of-the-box versus what has to be built ourselves is an important issue. My observation is that many frameworks are built with only a limited number of use cases in mind, and all the deep learning frameworks I have encountered have mainly research in mind.


One major design advantage of dl4j from version 0.7.2 onward is its ability to switch backends without re-compiling the code. The classpath is scanned for backend libraries and the available backend is loaded automatically. The obvious advantage is being able to run the code on the CPU while testing locally, and to run the same code on the GPU when deploying on a GPU rig. Another advantage is being able to do backend-specific stuff. Consider this code:


With this simple trait you are able to, e.g., configure your model differently based on the available backend. We set the batch size based on the available BackendType because, e.g., a GPU is able to process larger batches more efficiently:
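The original snippets were a Scala trait; the same idea can be sketched in plain Java. The backend class names below are assumptions for illustration (the real ND4J backend classes vary by version) – the point is the classpath probe and the backend-dependent batch size:

```java
public class BackendDetector {
    enum BackendType { CPU, GPU, NONE }

    // Probe the classpath for a backend implementation, mirroring what dl4j
    // does automatically at startup. The class names here are hypothetical.
    static BackendType detect() {
        if (isPresent("org.nd4j.linalg.jcublas.JCublasBackend")) return BackendType.GPU;
        if (isPresent("org.nd4j.linalg.cpu.nativecpu.CpuBackend")) return BackendType.CPU;
        return BackendType.NONE;
    }

    static boolean isPresent(String className) {
        try {
            Class.forName(className, false, BackendDetector.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // e.g. allow larger batches when a GPU backend is on the classpath
        int batchSize = (detect() == BackendType.GPU) ? 128 : 32;
        System.out.println(detect() + ", batchSize=" + batchSize);
    }
}
```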


The well-known Python framework broke our backs when, in an attempt to improve the convergence of our network, we tried to implement a number of custom layers (e.g. Stochastic Depth). Because of the design of that framework, there is literally no possibility to debug a layer; the usage of backends (the ones that do the heavy lifting) was so unintuitive that we were literally guessing what the design should be; and since everything we wrote compiled without problems and failed only at runtime, this attempt turned into a nightmare.


Up until now, we haven’t written own layers in dl4j. That work is commencing now and in a short time I will be able to evaluate this aspect more objectively.


Currently I see a lot of discussion, experiments and effort being invested in networks that analyze image information. I believe there is huge potential in the manufacturing industry, and yet until now I have heard very little about efforts to build solutions for manufacturing.


I think manufacturers or producers of manufacturing equipment will require networks of choreographed and aligned networks. Often, subsystems or aggregates have to work in an intelligent and semi-autonomous mode while contributing to the coordination or analysis of the whole manufacturing process.


So the ability to train networks for specific tasks and then join them into larger networks will be important. Also, the ability to train a network and then replicate it with slight modifications (such as those required because the next production line has a slightly different configuration) will be extremely important for building products using deep learning efficiently. Last but not least, even with state-of-the-art technology, the hyperparameter tuning of a neural network is a painstaking and elaborate process which, in my opinion, could be one of the main hindrances to bringing deep learning applications to market in a timely manner.


With respect to dl4j, I strongly feel that this framework will overtake the current top dogs of deep learning simply by providing the industry with tools to build actual products/applications using deep learning. This feeling is motivated by the current condition and focus of the framework developers. For instance:

  • The dl4j team is working on a solution for hyper-parameter tuning, called Arbiter[8];
  • the community is very active – just check out the liveliness of the dl4j gitter channel[6];
  • github statistics[9] look very healthy to me; and last but not least,
  • from the involvement of Skymind employees both in supporting the community and in evolving the dl4j code-base, it seems that dl4j is very central to the business model of this company. And, in my experience, when a commercial enterprise backs an open source project, it gives that project a huge boost.


Work on the described project will continue throughout 2017 and, likely, 2018. Our current plan is to stick with dl4j and use it in production. I would love to hear about your experience with deep learning and the currently available deep learning frameworks, so comment away!


by Reinis Vicups


I am a freelance software developer specializing in machine learning. I am not affiliated with deeplearning4j or Skymind.

The described project is being developed for Samhammer AG[10] and continues as of 2017.

Special thanks to Wolfgang Buchner[5] and the other guys for your excellent criticism and corrections.

Angry rantings


Curse of Technology on achieving Mastery

When I was younger, I sought mastery in every technology I used. If I recall correctly, I achieved mastery in Borland Delphi Pascal shortly after it died[11] (at least in the part of the world visible to me). I have attempted to gain mastery in several other technologies ever since, and concluded this: if I work for a successful commercial software development company, there will be an ongoing change of employed technologies because of

  • constantly evolving software development technologies,
  • constantly changing market demands leading to the employment of different technologies,
  • the constantly evolving business model of the software company itself, again leading to changes in employed technology.

So basically, unless your business is to develop frameworks, it is next to impossible to achieve mastery in most technologies. At 41, I have used more than a thousand frameworks and several dozen development languages for at least one project (~6 months to ~3 years) each. Except for a couple of languages, I haven't kept much; I sometimes don't even recall the names anymore.


Working for commercial enterprises makes it very hard to achieve mastery in the technologies used, due to the fast evolution of the technologies themselves and the evolving markets and business models of the enterprises.

[2]as of January 2017
[3]these are my assumptions

OptaPlanner (a.k.a. Drools Planner) – the brand new version 6

1 Comment · Posted in Software Development

Since the beginning of the year (2013), RedHat (RH) / JBoss has been working on the most robust, scalable and easy-to-use version yet of OptaPlanner, formerly called Drools Planner. This new release marks a major milestone in both organizational and technical terms.

This article is intended for those considering migrating from earlier versions of Drools Planner to OptaPlanner, and for those interested in the development of this framework. Those of you unfamiliar with OptaPlanner can read about it here (click).

OptaPlanner to become a top-level JBoss Community project

In mid-April, OptaPlanner graduated from the Drools project to become a top-level JBoss Community project, as described here. This has a considerable impact on the amount of attention and resources RH is making available to this project, and it improves the offerings for professional and support services.

Major technical improvements compared to the previous version 5.x

I would characterize the technical improvements in comparison with the previous version as huge! Indeed, the list of major architectural, performance, API and usability improvements is rather long. In this article I list those that impressed me the most:

Performance improvements

Subjectively – much faster. Objectively – hard to tell. During the migration from version 5.x to 6.0, I made a number of improvements in both my production rules and my Moves. I don't know if the metaheuristic algorithms of OptaPlanner underwent any improvements. What I do know is that Drools Expert in version 6.0 uses the PHREAK algorithm (there are some performance comparisons on the internet), which is supposed to be much faster than the “old” RETE algorithm. To compare versions 5.x and 6.0 objectively, I would have to use exactly the same production rules and Move(Factorie)s. Alas, up to now I haven't found the spare hours to do that.
Adding up all the changes and improvements just mentioned, my project's performance went from about 1500–3000 average counts per second (ACS) in version 5.x to a stunning 6000–22000 ACS in version 6.0! That surely is an impressive performance gain.

Reduction of framework boilerplate

A number of methods that the framework previously required are no longer needed in version 6.0. For instance:

  • Solution.cloneSolution() is not required;
  • Solution.equals() is not required;
  • Solution.hashCode() is not required;

These methods were subject to frequent but boilerplate-heavy change whenever one's domain model changed or evolved, and were often prone to errors and side effects (at least in my case).

ScoreHolder replaces *ConstraintOccurrence

This one is on the production side (Drools Expert) and is my favorite. In previous versions of OptaPlanner, one had to logically insert constraint occurrences, count them and pass them to the ScoreHolder. The ScoreHolder is then used by OptaPlanner during the search to determine the overall goodness of the current working solution. The code for this looks e.g. like this:

On the one hand this is partially bothersome boilerplate code, and on the other hand the parametrization of *ConstraintOccurrence() has caused me headaches a couple of times.

Looking at the JavaDoc, one can see that the optional Object… parameter enumerates the domain information that caused the constraint occurrence. If one looks at the rule example above, one sees that only $assignment was passed as a cause for the IntConstraintOccurrence. One could now ask: “Why wasn't, e.g., $project passed as a cause as well?” My answer to that question is: “Heck, I have no idea!” (and yes, I wrote this rule myself).
In version 6.0, everything looks nice, like this:
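The v6 consequence boils down to a single ScoreHolder call. A sketch (the DRL rule skeleton is shown as comments and the rule and condition are hypothetical, but `addHardConstraintMatch(kcontext, weight)` is the actual ScoreHolder API in 6.0):

```java
// rule "projectCapacityExceeded"
// when
//     $assignment : Assignment( overbooked == true )
// then  (the consequence is plain Java)
scoreHolder.addHardConstraintMatch(kcontext, -1);
// end
```

No more manual inserting, counting and retracting of *ConstraintOccurrence facts; the holder keeps track of the matches itself.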

More transparent planning result reporting

This is a very important step towards user-friendly and transparent planning-result reporting. Most of my customers ask me: “What does solution score -2hard/-3soft mean? What constraints were violated, and what constraints could be met?” With OptaPlanner v6.0, the possibilities for reporting this improve somewhat. With the code example below, one can generate a pretty detailed constraint match report:
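A sketch of what such report code looks like, with API names as I remember them from OptaPlanner 6.0 – treat the details as assumptions and check the current JavaDoc:

```java
// Build a score director, apply the solution, and walk the constraint matches.
// (Sketch: imports, generics and domain classes omitted.)
ScoreDirector scoreDirector = solver.getScoreDirectorFactory().buildScoreDirector();
scoreDirector.setWorkingSolution(solution);
scoreDirector.calculateScore();
for (ConstraintMatchTotal cmt : scoreDirector.getConstraintMatchTotals()) {
    System.out.println(cmt.getConstraintName() + ": "
            + cmt.getConstraintMatchSet().size() + " matches");
}
```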

This code, although it “does the job”, is still far from a sufficient solution, for several reasons:

  • The code is too long, with too much boilerplate
  • Too many framework internals have to be used to get the result
  • There is no direct support for rule metadata (it has to be extracted manually with the mechanics provided by the Drools Expert API)
  • I personally dislike how the justifications are made available (copying back and forth, nested collections – all this adds complexity and reduces intuitiveness)
  • One's own data structures have to be provided for storing constraint information that, in my opinion, should be natively supported by the framework

Configurable Selectors

One of the difficult tasks to be accomplished during OptaPlanner-based application development is ensuring an appropriate selection. I'll define an appropriate selection as a selection of planning facts, planning entities and planning values that allows OptaPlanner to search towards the best possible solution, while at the same time being small enough to fit into (working) memory. In version 6.0, a great effort was made to move selection generation from application Java code into configuration. At the moment I use solely the features that allow probabilistically distributing the generation of different types of selections, like this:
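A minimal sketch of such a probabilistic selector configuration (element names as of OptaPlanner 6.0; the weights are illustrative):

```xml
<localSearch>
  <unionMoveSelector>
    <changeMoveSelector>
      <fixedProbabilityWeight>0.7</fixedProbabilityWeight>
    </changeMoveSelector>
    <swapMoveSelector>
      <fixedProbabilityWeight>0.3</fixedProbabilityWeight>
    </swapMoveSelector>
  </unionMoveSelector>
</localSearch>
```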

The new selection mechanism offers many other powerful features to improve selection performance and scalability, e.g.

  • just-in-time planning value generation helps save RAM
  • entity and value selectors/filters allow constraining the selection to particular entities or variables, or even filtering for particular values
  • caching allows controlling when Moves are generated, thus reducing the RAM and CPU time necessary for the generation (mimic selection has a similar effect)
  • selection order allows controlling the selection distribution of moves, planning entities, planning values, …

Multi-level (aka bendable) scoring

This one is important for actually finding a solution in a number of real-life planning problems. In older versions of OptaPlanner, a limited number of score types were available (such as simple, or hard and soft). Of course, one could always implement one's own (Java) scoring mechanism, but it is quite hard to get right, and with Java scoring one cannot take advantage of the production system (Drools Expert). Version 6.0 adds a new configurable scoring mechanism – the bendable score, allowing for multiple levels of hard and soft score (a special case of which is the hard, medium and soft score – medium and soft being two levels of soft score).

First step towards the separation of API and Implementation

This one, in my opinion, is also a very important step towards establishing a robust and elegant API design – a challenge the OptaPlanner team has apparently taken up. The application programming interface and the implementation of the OptaPlanner framework are being separated! In version 6.0, org.optaplanner.core now has the following package structure:

This may not seem like such a big deal, but it is! A good framework means an architecture and a design that are intuitive and easy to use by hundreds if not thousands of application developers. And designing this is far more than just moving classes to an *.api or *.impl package respectively.


Another nice improvement on the way to a truly intuitive and elegant framework design is the ValueRangeProvider. Although the previous solution was not bad, the new solution offers looser coupling and a more explicit demarcation of the elements involved in interactions with the OptaPlanner API. Here's what I mean:

As one can see, the declaration of the PlanningVariable in version 6.0 is consolidated: specific annotations such as @ValueRangeFromSolutionProperty are dropped, and the more general @ValueRangeProvider is introduced (this annotation also improves code readability, since it is now apparent which properties provide planning values).
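For illustration, a sketch of the consolidated declaration (annotation names as in OptaPlanner 6.0; the domain classes Assignment and Employee are made up):

```java
// On the planning entity: the variable references its value range by id.
@PlanningEntity
public class Assignment {
    private Employee employee;

    @PlanningVariable(valueRangeProviderRefs = {"employeeRange"})
    public Employee getEmployee() { return employee; }
    public void setEmployee(Employee employee) { this.employee = employee; }
}

// On the solution class: the property that supplies the planning values.
@ValueRangeProvider(id = "employeeRange")
public List<Employee> getEmployeeList() { return employeeList; }
```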

Introduction of Generics

For quite some time I wondered when the OptaPlanner team would introduce support for generics. That time is now, featuring two elegant improvements:

Generic *WeightFactory

Especially while learning the concepts of OptaPlanner – and in addition to the rather detailed and well-structured documentation – the usage of generics helps an inexperienced user a lot. God knows how many hours I initially invested to understand and implement my first WeightFactory. Below are two code fragments that demonstrate this:

Generic Move*Factory

MoveFactories got generics-pimped as well; here is a short comparison of the code:

What’s missing

I personally miss two things in OptaPlanner:

1. Constraint reporting is, for me personally, the most lacking feature in OptaPlanner. I can only speculate why it was “neglected” up to now; my guess is that OptaPlanner started as a research & development or proof-of-concept project, and as such it was important to prove that it performs well. I draw this speculation from the available reporting/benchmarking mechanisms in OptaPlanner. The scalar scores are good for comparing different planning runs, or for quantitatively comparing OptaPlanner's solutions to two different problems. For a qualitative analysis, a detailed constraint report is a must, imho! Also, the benchmarking support is quite good in OptaPlanner (you can even visualize it graphically and generate an HTML-based report that is just beautiful) – this too, I guess, comes from the intention to prove that the framework performs well.

2. The age of big data and machine learning is almost here, and since OptaPlanner – or, for that matter, the whole Drools ecosystem – is actually predestined to solve voluminous problems with highly complex correlations and rules, I wonder when that will come to OptaPlanner and consorts. I personally think that multi-threading, or another form of parallelism, along with the ability for OptaPlanner to learn (some ideas, such as hyper-heuristics, are already in the air), will come quickly.


By all means use OptaPlanner v6.0! It is better, more robust and performs better than all its predecessors. And even if you don't care about its predecessors, still try it, since it is a beautiful and easy-to-use framework for building solutions to planning problems.


End-to-end setting up TomEE on a linux server

Posted in Software Development

In this post I describe an end-to-end setup for TomEE+ and my application on a vanilla linux (Debian) server.

This is just one of many possible configurations. Be advised that changes made to system(s) or configuration(s) might be useful in some cases and not in others. Although I have put effort into explaining why I perform this or that change, errors and omissions are likely. That's why I cannot take any responsibility for loss of data or damage to your systems. Always use your own brain and question everything you read here.

One more disclaimer – the information in this post is mainly credited to other people, from numerous publications on the internet. I just aggregated, structured and adapted it to my needs. If some of you recognize your own material (or that of a friend), please let me know and I will gladly add the credit. I am not doing this right away because this article is the result of days and days of research, and I just can't remember the sources this information came from.



Prerequisites:

  • 64-bit Debian server
  • LAMP (Linux, Apache, MySQL, PHP)

1. Install Java JDK 1.7

I am installing the Oracle JDK, and yes, it has to be the JDK (as in: no, the JRE is NOT sufficient). OpenJDK had some issue (with either TomEE or, more likely, my own web application) that I unfortunately cannot remember.

  • get Java 1.7 here:
  • move Java to the “right” location
  • set symlink (so that later java updates get propagated)
  • activate java
  • open profile file, set JAVA_HOME, save it and exit

  • refresh environment

2. Configure apache2

So why do we need apache2 at all? I had two reasons: the first being this article on Stack Overflow, and the second being that apache2 was already pre-installed by my server provider as part of LAMP.

  • get mod_jk
  • change two lines in /etc/libapache2-mod-jk/
  • create and fill /etc/apache2/conf.d/mod-jk.conf
  • create your virtual Host in /etc/apache2/sites-available/
  • activate vHost and restart apache2

3. Install TomEE

If at this point you still don't know what TomEE is, please leave a comment explaining why the heck you read this article up to this point! 🙂 Seriously though, here's a good starting point.

  • get TomEE Plus here:
  • move TomEE to the “right” location
  • add tomee user
  • create init file for tomee
  • add following text to tomee’s init file and save it
  • set rights for the init file
  • set autostart

4. Configure TomEE

Now, TomEE runs out-of-the-box, so this part is required only if you have explicit configuration needs related to your specific application. I have an application that uses MySQL, plus a couple of special needs regarding logging and application deployment.

  • configure lib dir of TomEE
    • get mysql connector here : and copy it to tomee’s lib dir
    • get log4j-1.2.17.jar and copy it to tomee’s lib dir
    • get slf4j-log4j12-1.7.1.jar and copy it to tomee's lib dir
    • remove slf4j-jdk14-1.7.2.jar from lib dir to avoid slf4j init conflicts

  • add log4j config directly in lib folder and add configuration

  • remove standard log

  • adjust Engine and Host tag in /usr/local/tomee/conf/server.xml to

  • replace all Resource tags in /usr/local/tomee/conf/tomee.xml with this

  • change welcome-file-list in /usr/local/tomee/conf/web.xml to

  • adjust properties in /usr/local/tomee/conf/

  • remove the default ROOT webapp

5. Install iC

  • get iC *.war files and move them to TomEEs webapps dir

6. Configure MySql

  • create iC database

  • add ic tables

  • add data

7. Run

  • set tomee as owner of tomee dir

  • execute as super user

8. Test

Finally, call your application (I did it by calling

You have set up a TomEE instance on a Linux system – congratulations!

maven-ear-plugin and application.xml

2 Comments · Posted in Software Development

If you are building a modern JavaEE 6 application, you might need to package it in an EAR.

Supposedly, the specification says that if your EAR contains a META-INF/application.xml file, you must provide the configuration of your modules so that the application server knows what to load.

Now, the specification also supposedly says that if you want your application server to AUTO-DISCOVER your modules (EJBs, CDI beans and so on), you MUST omit the application.xml file altogether.

Well I didn’t know that and wasted serious amount of time making the EAR the “right way”.

So how do I create an EAR easily if I live in a Maven world?

Luckily, there is the maven-ear-plugin, which supposedly allows for easy EAR creation. Modern versions of this plugin have a configuration option named generateApplicationXml that makes maven-ear-plugin stop generating application.xml… supposedly.

If you simply add the generateApplicationXml to your plugin configuration, your build will fail with the message:

Yeah, okay – another couple of hours searching the internet until I came to a solution. maven-ear-plugin has another configuration option, called version, which indicates the JavaEE version the descriptors are to be generated for. Only the combination of generateApplicationXml and version makes the plugin stop generating application.xml without failing the build.

Stupid me or stupid plugin?

Here’s a complete example:
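Something along these lines worked for me (the plugin version number is just an example; the point is the combination of version and generateApplicationXml):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-ear-plugin</artifactId>
  <version>2.8</version>
  <configuration>
    <!-- the Java EE version the suppressed descriptor would target -->
    <version>6</version>
    <!-- only in combination with <version> does this skip application.xml -->
    <generateApplicationXml>false</generateApplicationXml>
  </configuration>
</plugin>
```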

Quirks and twists of testing CODI @Bundle with OpenEJB ApplicationComposer

Posted in Software Development

Using JavaEE also means testing JavaEE, with all the implications!

I personally use the ApplicationComposer from OpenEJB when writing unit tests that do not require all containers (web, CDI, EJB, …) up and running, but just enough to have injection and deploy EJBs. I am not entirely happy with ApplicationComposer because I think it has a number of limitations, but that’s another discussion. If you care, you can read up on ApplicationComposer in this nice post by Romain Manni-Bucau.

Just recently I encountered an issue when ApplicationComposer-testing an application that uses MyFaces CODI extensions. The error shows up as an exception (I marked the interesting parts red):

Now what we see here is that

  • CDI container does not find some bean class (UnsatisfiedResolutionException)
  • in particular, it cannot find the ResourceBundle class
  • injected as a variable excelTemplate
  • into ScheduleWorkbookController

Ok, so according to the ApplicationComposer configuration rules, “all” we have to do is add all the CDI-relevant classes to the Class array returned by the @Module method:

That’s it, right? Wrong: the exception will still occur!

The reason is that org.apache.myfaces.extensions.cdi.core.api.resource.bundle.ResourceBundle is just an interface. You still need an implementing class, so that the real instances can be injected during CDI container initialization.

So let us find out whether the CODI implementation provides classes that implement ResourceBundle. In fact, there is exactly one default class: org.apache.myfaces.extensions.cdi.core.impl.resource.bundle.DefaultResourceBundle.

Alas, adding it to the @Module class array will NOT work, since DefaultResourceBundle has package visibility!

Now, the solution is logical in the end, but believe me, based on the error messages and semi-chaotic attempts to somehow make it work, it did not come to me the easy way.

Unless you have already guessed – the solution is to add org.apache.myfaces.extensions.cdi.core.impl.resource.bundle.ResourceBundleProducer to the @Module class list.

That’s right – the class producing the actual instances of ResourceBundle!
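Putting it together, the @Module method might look like this sketch (it assumes the OpenEJB and CODI jars on the class path and reuses the class names from this post; the @Module annotation and method name are as used in my tests, yours may differ):

```java
// @Module comes from OpenEJB (org.apache.openejb.testing in recent versions)
@Module
public Class<?>[] cdiClasses() {
    return new Class<?>[] {
        ScheduleWorkbookController.class,
        // the producer, NOT the package-private DefaultResourceBundle:
        org.apache.myfaces.extensions.cdi.core.impl.resource.bundle.ResourceBundleProducer.class
    };
}
```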

eclipse juno – the worst release of all time

Posted in Miscellanea

Enough is enough: I have been working with the eclipse juno release for two months already, and it still keeps disappointing me on a daily basis.

Now some of you will say: “Oh great, another flame post on a great piece of software that hundreds of thousands of people invested large amounts of energy and time in. Dude, just get real and help fix those issues instead of moaning around…”

To those of you I say… yeah, you’re right… partially. Don’t get me wrong, I do realize the immense effort put into the eclipse project and its surroundings by the countless heroes of open source. I am just sad and angry (sangry) that all this effort has led to this… really disappointing result.

Now to the facts (read: my subjective experiences). I strongly encourage you, the seeking, NOT to migrate to eclipse juno, period! Just don’t! Here are my reasons:

– The new interface face-lift is a fail. The toolbar is horrible and static, button positions changed for no apparent reason, and the ability to shrink the perspective part of the toolbar is just gone. Advantages? None; even if I think hard and positively, I can’t imagine any;

– Window positions. What the hell? I am used to placing a window somewhere, closing eclipse, reopening it, and finding the window layout just as I left it. Not in eclipse juno, not for me. Some windows keep their position, some just don’t, no matter what I try. So basically each restart of eclipse is accompanied by a re-arrangement of windows;

– XML editor, huh? I hope it’s me, I really do, but every time I open an xml file in the in-place editor, eclipse juno just goes slo-mo (slow motion): every navigation, every cursor move, every typed character literally keeps eclipse busy with itself for minutes. Nowadays I edit the whole bunch of my xml-based config files with gEdit, and every time I double-click on an xml file by reflex, I have a minute-long swearing attack; after that I get up and make myself a cup of tea, and when I come back, maybe, just maybe, eclipse has finished rendering (or whatever it does during those minutes and minutes of non-responsiveness);

– Stability. Worse, worst: many times a day it just dies. Just like that: no message, no exception pop-up window, nothing; at some point all the windows just disappear and the process is no longer listed… magical stuff indeed;


Need more arguments? Let me think (and work a little while) and I will add them to this post.


Mežs, mežiņš or nature preservation Latvian-style

Posted in Ich

Sometimes I just cannot believe what corrupt and sometimes plain pathetic people are responsible for managing my country. In theory I knew that all along, but now I have the honor of feeling it on my own skin. This is the story I want to tell everyone, and I encourage everyone to re-tell it to everyone else who might care (or tell them even if they don’t care… just for the hell of it):

My parents live in a small village on the outskirts of Kuldiga, a beautiful place surrounded by the forest and nature so typical for Latvia, my fatherland. Additionally, the area is a nature reserve… well, at least we thought so. Then a “swedish company” came and deforested everything down to the naked ground. Well, I am exaggerating, of course: in their generosity they left the tree with the swing my father put up for the kids (thank you, “swedish company”!)

Sure, formally everything is probably according to law (check out this nature reserve site of my government). That is not even my point. My point is this:

For the past years my friends were asking me: “Reinis, why should we go to Latvia, what is going on there?” and my answer was always this: “Well, not much, but the nature is rich, beautiful and mainly untouched…”

Today I am afraid that soon enough I will have to change my answer to: “Well, not much…”

I am putting up a screenshot from Google Maps (thank you, Google, for making it possible) which still shows the forest. As soon as the map is updated, I will post a follow-up to show everyone who cares what is left of that once idyllic piece of nature.

Thunderbird not properly opening URLs contained in e-mails

Posted in Miscellanea

At some point during the frequent update orgy of my beloved Ubuntu oneiric, I over-updated…

The result was that every time I attempted to open a URL from within Thunderbird, my web browser (firefox) displayed the following:

thunderbird urls not working

God knows I tried everything (forums, irc, blogs, twitters)… to no avail.

Help came unexpectedly from this thread.

Some dude explained how to change the url-handler from firefox to chrome, and that’s when it struck me.



I noticed the line “firefox %s” in his post and wondered why, in my case, the url was converted to “%u”. I checked my url-handler configuration with:

$ gconftool-2 -g /desktop/gnome/url-handlers/http/command
firefox %u

and then changed it to:

$ gconftool-2 --type string -s /desktop/gnome/url-handlers/http/command "firefox %s"

Voilà, the problem solved!

JMX and JPA with Hibernate

Posted in Software Development

Firstly, I cheated. Actually, this post should be named “JMX and application-specific resources”. But since I found this architectural property of JMX while attempting to use JPA within a JMX managed bean, the title is what it is.

Secondly, this post is neither about what JMX is nor about what JPA is. If you are unfamiliar with those, read the basics someplace else. Be advised, though, that while there is a plethora of excellent material on JPA (just google for it and you’ll find everything you could have dreamed of), there is next to nothing on JMX good enough for ME to understand it! Well, yeah, there are a number of docs and articles from Sun and independent authors out there. Me, I am dissatisfied with all of them. For instance, how the heck do I work with the composite or tabular data types of open MBeans, or, in fact, how do I work with application-specific resources, huh?

So, as I said, there was this use case I was working on: “Show me, in a JMX managed bean, some application configuration data out of persistence”. My application is a web-app running on tomcat. I initialized the MBean in its constructor and registered it with the default MBeanServer of the JVM with something like this:

MBeanServer server = ManagementFactory.getPlatformMBeanServer();
// registerMBean expects an ObjectName, not a plain String
server.registerMBean(this, new ObjectName(name));

Within the MBean I was using a DAO which loaded some config data for me over JPA with Hibernate. To my great pleasure, the MBean worked on the first try, until… I changed something. I did some minor optimization and my MBean broke down. For two whole days, my colleague (wink, Christian) and I were sweating hard to fix the bug.

The symptom was that JPA could not instantiate the EntityManager anymore, failing with an exception along the lines of “There are no providers for the persistence unit MyPersistenceUnit”.

After two days of trial and error, endless reading, and attempts to understand the scarce JMX docs, the issue turned out to be the initialization of that DAO I mentioned earlier. Inside the DAO I was creating an entity manager like this:

emf = Persistence.createEntityManagerFactory("MyPersistenceUnit");
em = emf.createEntityManager();

In a web-app (an EJB app, to be precise), JPA (or, in fact, Hibernate) expects a persistence.xml to be located at /META-INF/persistence.xml on the class path of the web-app. The persistence.xml is then loaded by Hibernate as a resource using the context class loader of the current thread.
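For reference, a minimal /META-INF/persistence.xml for the unit named in the exception might look like this (the provider class name depends on your Hibernate version; newer versions use org.hibernate.jpa.HibernatePersistenceProvider):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="MyPersistenceUnit">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
  </persistence-unit>
</persistence>
```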

This is where things get ugly when using JMX. As I understand it, registering an MBean with the platform MBean server causes the MBean methods to be called from the system class loader of the JVM when they are invoked by call-backs from the MBeanServer. Yes, the system class loader of the JVM (as in Java Virtual Machine). Not the class loader of the web-app, not even the class loader of tomcat, but the class loader of the JVM itself. Why? Because the MBeanServer runs directly in the JVM. This in turn means that any attempt to load resources located on the class path of the web-app from MBean methods called back by the MBeanServer will fail, since the class loader of the MBeanServer (the system class loader of the JVM) knows nothing about the web-app or, in fact, any class path below that of the JVM.


Yeah, I can’t call it a solution, since it isn’t one. But the workaround that worked for me was to initialize the DAO and store it as a class attribute of the MBean implementation before handing control over to the MBeanServer. This way, the methods of the MBean can work with the already initialized instance of the DAO. This workaround sucks for a number of reasons, but I can’t seem to find a better one.
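A minimal, self-contained sketch of that workaround (the MBean, its name, and the config value are hypothetical stand-ins for the real DAO/JPA lookup):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxEagerInit {

    // Standard MBean: the interface name must be the class name + "MBean".
    public interface ConfigMBean {
        String getDbUrl();
    }

    public static class Config implements ConfigMBean {
        private final String dbUrl;

        public Config() {
            // Hypothetical stand-in for the DAO/JPA lookup: the value is
            // loaded eagerly, on the web-app thread, whose context class
            // loader can still see the web-app's resources.
            this.dbUrl = "jdbc:mysql://localhost/ic";
        }

        @Override
        public String getDbUrl() {
            return dbUrl; // call-backs just return the pre-loaded value
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example:type=Config");
        server.registerMBean(new Config(), name);
        // The MBeanServer call-back no longer triggers any resource loading.
        System.out.println(server.getAttribute(name, "DbUrl"));
    }
}
```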

The final point is: any resource loading done in MBean methods invoked via call-backs from the MBeanServer will fail, since those invocations are done within a thread of the JVM itself.

Flowchart vs flowchart

Posted · 2 Comments · Posted in Software Engineering

Just recently I was about to hold a release planning workshop with the domain experts of a customer. And to my huge surprise, this customer was prepared! They had made a diagram that displayed their current process. And I must admit I liked it.

I am using the word “flowchart” to denote the way my customer prepared their diagram, because it displayed a mixed flow of control and objects plus some additional supporting information. Sure, there is a similar diagram in UML: the activity diagram. I find activity diagrams quite difficult to explain to someone who is not familiar with UML, though. Yes, you can teach your customer UML, but what if they are not interested or willing? What if there is actually an alternative, more or less equal, way to display the same information without having to learn UML?

Check out my customer’s flowchart (I beautified and modified it, of course):


Well, the original diagram was even simpler, since it didn’t have an object flow. I added the object flow by reflex and am now too lazy to delete it. I think you will still get the point. And the point is: besides the simplified display of activities, there are two additional areas on both sides of the diagram that

1. let you understand much better who has the ball at the moment, and

2. add relevant information without cluttering the diagram.

Yes, it is possible to display the same information using an activity diagram, e.g. like this:


My opinion is: the UML-compliant version of the diagram is cluttered, and the responsibilities are not as clear as in the first version. If an activity is shared by two actors (like “agree on delivery conditions based on availability”), a question pops up for me whether placing the activity on the border between two swimlanes is a formatting error or intended. Additionally, I think the extra information in the notes is not as prominent as in the first, non-compliant diagram. Finally, I should have added a third swimlane for the final node, since it is executed by someone else and not the purchasing agent.


Sure, the differences between the two diagrams above are not that huge. But I think that, when it comes to the acceptance of one modeling methodology or another, the devil is in the detail. So I say: for the sake of your customer and a productive joint effort, feel free to give up formalism and concentrate on enhancing understanding and readability versus trying to hold on to the standard.