Open Source Technologies: This blog is about the open-source technologies I work with daily. Currently they are Karaf, Camel, OSGi, and OpenAgile. No wacky musings, just helpful code with explanations and working examples. (Mike Van)

2012-08-13: Improving Code Quality - Scheduling Technical Debt and the Bucket Parable<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
Overview</h2>
Please read the previous post on <a href="http://os-tech.blogspot.com/2012/08/improving-code-quality-lcom4-and.html" target="_blank">improving code-quality using LCOM4 and Cyclomatic Complexity</a> before reading this entry. Below I present two ways of thinking about technical debt in your organization. The first is the old-school approach; the second is the approach that took hold once the scale of cyber-fraud became apparent.<br />
<h2 style="text-align: left;">
Old way of thinking</h2>
<div>
Before cyber-security became a large concern, software development companies thought about their applications differently than they do now. In those days, if you ran Sonar against your codebase and identified areas where your source code was weak, you had a problem if you wanted to fix it. See, the viewpoint was that acknowledging flaws in your code was also acknowledging legal liability. So, companies would fix bugs in "maintenance releases" but would not acknowledge the specific security failures they were fixing. This culture of secrecy safeguarded application developers, but exposed end-users to security holes caused by those bugs.<br />
<h2 style="text-align: left;">
The Reality</h2>
</div>
<div>
Today, massive cyber attacks against companies around the world have made organizations more appreciative of an increased level of due diligence with their applications. The mark of a successful company is not that it pretends problems don't exist in its code; it is that it acknowledges them, fixes them, and communicates about them. To understand how to plan for technical-debt resolution, you must first understand the Bucket Parable.</div>
<div>
<h2 style="text-align: left;">
The Bucket Parable... a digression (bear with me, it pertains)</h2>
</div>
<div>
Back in the olden-days, there was a small village that had a unique annual contest: the rocks-in-a-bucket contest. The goal was to see who among the competitors could get the most rocks in their bucket. Now, their buckets were all the same size; so, the only thing that differed was the size of the rocks. In the first year's contest, a strong man named "Hugo" won the prize because he had managed to stuff two massive rocks into his bucket. The next year, a strapping gentleman on a horse won with three rocks weighing five pounds each. And the third year? Well, the next legendary player was Mike Van himself, and he was able to stuff over 400 rocks into his bucket. But how, you say? Well, while all the other folks were worrying about the big rocks, Mike Van paid attention to the little rocks too. So, whenever there was a gap between the big rocks, Mike Van filled it with little rocks. As he filled up his bucket, he noticed that the massive number of little rocks also gave the bucket extra weight, making it more formidable. Well, Mike Van was proclaimed the winner, after which he went into retirement from rocks-in-a-bucket, moving into a cave to knit "delicates" for poodles. He apparently feels it is a growth investment zone.</div>
<h2 style="text-align: left;">
The Bucket Parable and Scheduling Technical Debt</h2>
<div>
If you've ever been on a good development team, you know about feast and famine. There are times when you have so much work to do that you have to work stupid-long hours, and there are other times when you play Call of Duty with your co-workers because there ain't squat to do. It is in those times that the Bucket Parable comes into play.</div>
<div>
<br /></div>
<div>
If you are delivering product, meaning software, on a schedule, that means you have a specific period of time to deliver a specific set of functionality. Think of your timeline as the bucket in the Bucket Parable. Now, each of your major tasks is going to take up some of the time of one or more of your developers. These major tasks are your big rocks. But what will your developers do when those tasks are complete?</div>
<div>
<br /></div>
<div>
After you've completed your Sonar analysis and have noted the weaker areas in your codebase, you should create a task for each item to fix. Then, in your tracking tool, allow your developers to choose the "little rocks" they can do while they work on their big rocks. At the end of the development cycle, not only have you completed all of the intended features, you've also completed a large set of smaller tasks, paying down your technical debt. Customers like this. It's good. Do it, then drink beer with friends.</div>
<div>
<h2 style="text-align: left;">
Summary</h2>
By using Sonar to fill the conceptual "bucket of rocks", we are able to identify the most significant areas of risk in our technical debt and manage them with negligible impact on our customers' bottom line.</div>
<div>
<br /></div>
</div>
2012-08-13: Improving Code Quality - LCOM4 and Cyclomatic Complexity<div dir="ltr" style="text-align: left;" trbidi="on">
<h2 style="text-align: left;">
Overview</h2>
<div>
One of the reasons that open-source software is so solid is that we use some of the most cutting-edge open-source code-analysis tools available to ensure our software does what we intend, in a bug-free manner. In this post, I will talk about one tool we use, Sonar, and two specific metrics I've found useful in focusing the resolution of our technical debt.</div>
<div>
<br /></div>
<div>
Technical debt is defined as all the stuff you should have done that you didn't have time to do. For example, you may have left out a couple of unit tests. Or perhaps you decided not to refactor that 6,000-line Java class because it worked as a prototype and "if it ain't broke...". As your applications grow, the amount of technical debt will also grow. In many commercial and consulting settings, it may not be realistic to take a few months off from implementing new features to resolve the technical debt. In that same spirit of realism, the only time a team will realistically focus on resolving technical debt is when there is absolutely nothing else to do. You know, after all tasks are completed, the Kanban board's WIP column is clear, and your team is tired of playing Call of Duty.</div>
<div>
<br /></div>
<div>
This lack of priority and time to resolve technical debt creates a problem. Too much technical debt, and your codebase becomes unmaintainable. Your team is literally one production outage away from working 18-hour days, seven days a week, until the bug is found. You need a way to focus the resolution of your technical debt in order to reduce the risk of a production outage.</div>
<div>
<br /></div>
<div>
Sonar is a source-code analysis tool that has proven very useful for doing this. Specifically, there are two metrics that are very useful for targeting the work: <a href="http://en.wikipedia.org/wiki/Cyclomatic_complexity" target="_blank">Cyclomatic Complexity</a> and <a href="http://docs.codehaus.org/display/SONAR/LCOM4+-+Lack+of+Cohesion+of+Methods" target="_blank">LCOM4</a>. Together, these two metrics will provide you with a very easy and inexpensive way to target your technical-debt resolution.</div>
<h2 style="text-align: left;">
Cyclomatic Complexity</h2>
<div>
The measure of the number of unique pathways through a class, method, or application is called "cyclomatic complexity". This metric was originally proposed by Thomas McCabe in 1976. It already has a write-up on <a href="http://en.wikipedia.org/wiki/Cyclomatic_complexity" target="_blank">Wikipedia</a> that goes into great technical detail about what it is and how it is calculated, and even has some pretty pictures. Instead of completely describing it here, I'll hope you clicked the link and skimmed the wiki before reading further.</div>
<div>
<br /></div>
<div>
Another way to think about cyclomatic complexity is as a measure of the difficulty a new developer will have understanding source code. Usually, a cyclomatic complexity of 5 or less is considered good. Anything between 6 and 10 is considered moderately risky. And any source code with a complexity over 10 is considered poor. </div>
<div>
<br /></div>
<div>
In my experience, you should also take into consideration the difference between classes encapsulating your business algorithms and those containing a large number of utility methods. A utility class may have 100 methods, each doing something very small. Your complexity for the utility class may be over 100, but when you look at the average complexity per method, it will be close to 1 because your methods will be very discrete. Now, compare that with a class containing methods which implement business logic. In the prototype phase of development, these will likely consist of multiple if statements, each with for loops and switches. This kind of class will have a complexity which grows with each "if" statement in your methods. While you can usually ignore large utility classes, you should absolutely refactor classes containing business logic with high complexity.</div>
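To make the contrast concrete, here is a sketch in Java. All class and method names are invented for illustration; the point is simply that each branch adds one to a method's complexity, while flat utility methods stay at 1:

```java
// Illustrative sketch only: the names are invented, not from any real codebase.
public class ComplexityDemo {

    // One base path plus three decision points = cyclomatic complexity of 4.
    static double shippingCost(double weightKg, boolean express, boolean international) {
        double cost = 5.0;                    // base rate
        if (weightKg > 10.0) cost += 8.0;     // +1: heavy-parcel surcharge
        if (express)         cost *= 2.0;     // +1: express doubles the total
        if (international)   cost += 15.0;    // +1: customs handling fee
        return cost;
    }

    // Typical utility methods: a single path each, so complexity of 1.
    static double kmToMiles(double km)  { return km * 0.621371; }
    static double poundsToKg(double lb) { return lb * 0.453592; }

    public static void main(String[] args) {
        System.out.println(shippingCost(12.0, true, false)); // (5 + 8) * 2 = 26.0
        System.out.println(shippingCost(1.0, false, false)); // just the base rate: 5.0
    }
}
```

A class full of methods like `kmToMiles` can rack up a large total complexity while remaining trivially readable, which is why the average per method matters more than the class total for utility classes.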
<div>
<br /></div>
<div>
When using Cyclomatic Complexity to target technical-debt resolution, identify the classes with the highest complexity that are not utility classes. Sonar presents this information in an easy format, and is fairly easy to set up and use!</div>
<h2 style="text-align: left;">
LCOM4</h2>
<div>
<a href="http://docs.codehaus.org/display/SONAR/LCOM4+-+Lack+of+Cohesion+of+Methods" target="_blank">LCOM</a> stands for "Lack of Cohesion of Methods" and generically is a set of metrics that measure the how methods in a class interact with each other. This metric was updated a number of times until LCOM4 was introduced by Hitz & Montazeri. LCOM4 measures the connected components within a class. The term "connected components" refers to related methods and class-scope attributes. LCOM4 suggests that only methods and attributes that rely on each other should be in a class.</div>
<div>
<br /></div>
<div>
If you think about it, from a maintainability standpoint, it is a lot easier to understand a class if all of the components of the class refer to each other. Think of this as a single unit of algorithmic activity. Consider if your hello-world class contained methods and attributes that convert between Celsius and Fahrenheit in addition to methods that print out the words "Hello World". How much easier would it be for a new developer to maintain that code if the Celsius-to-Fahrenheit conversion code were in a different class than the hello-world code?</div>
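Sketched in code (the class names here are mine, invented for illustration), the split the paragraph describes looks like this; each resulting class is a single connected component, so each has LCOM4 = 1:

```java
// Hypothetical example: splitting the low-cohesion "hello world plus temperature
// conversion" class from the text into two cohesive classes (LCOM4 = 1 each).
class Greeter {
    String greet() { return "Hello World"; }
}

class TemperatureConverter {
    static double celsiusToFahrenheit(double c) { return c * 9.0 / 5.0 + 32.0; }
    static double fahrenheitToCelsius(double f) { return (f - 32.0) * 5.0 / 9.0; }
}

public class CohesionDemo {
    public static void main(String[] args) {
        // Each class now does exactly one thing, and a new developer can read
        // either one without wading through the other's concerns.
        System.out.println(new Greeter().greet());
        System.out.println(TemperatureConverter.celsiusToFahrenheit(100.0)); // 212.0
    }
}
```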
<div>
<br /></div>
<div>
This may seem like a pretty simple example, but imagine a prototype composed of 500 classes, each with 1,000 lines or more and an average LCOM4 score over 5. Can you imagine being handed this codebase to maintain? Better yet, can you imagine being asked, 4 years after you wrote the code, to come back and "upgrade" it to use the latest-and-greatest architecture?</div>
<div>
<br /></div>
<div>
Just as with complexity, you should also consider the difference between utility classes and classes containing business logic. The LCOM4 score of a utility class may be in the 100s. While that would be completely unacceptable for classes containing the implementation of business algorithms, it is completely acceptable for utility classes. When you use LCOM4 to identify classes to refactor, make sure you don't focus on your utility classes.</div>
<h2 style="text-align: left;">
Using Them Together</h2>
<div>
LCOM4 and Cyclomatic Complexity are related to each other, and by taking them both into account, you will be able to determine where to focus your technical-debt resolution. The list below should help when you compare a given class' LCOM4 against its Cyclomatic Complexity. I rate the refactoring priority on a scale of one to four, where one is the first priority for refactoring and four is the lowest:</div>
<div>
<ul style="text-align: left;">
<li>Complexity is high and LCOM4 is high: This class has a refactor priority of one. This class implements a number of business algorithms and the methods are very complex. Refactor each algorithm into its own class, then simplify the methods.</li>
<li>Complexity is high and LCOM4 is low: This class has a refactor priority of two. This class contains a low number of distinct business algorithms, but its methods are very complex. First simplify the methods. Then, refactor the algorithms into their own classes. This order works here because the problem isn't the mixture of the algorithms but rather the complexity of the methods. By simplifying the methods, you will see that some of the methods were written to apply to both algorithms. Refactoring the methods first will result in an easier refactoring of algorithms.</li>
<li>Complexity is low and LCOM4 is high: This class has a refactor priority of three. This is a utility class. There are a large number of small, unrelated methods. There is no need to bother with this class.</li>
<li>Complexity is low and LCOM4 is low: This class has a refactor priority of four. This is a well-written class. Don't touch it. Consider giving the developer accolades, bonuses, or perhaps not stealing their lunch from the fridge on brown-bag Fridays, Marvin!!</li>
</ul>
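The four quadrants above boil down to a simple decision table. Here is a sketch of it in Java; the thresholds (10 for complexity, 5 for LCOM4) are my own illustrative assumptions, not official Sonar defaults:

```java
// A sketch of the four-quadrant rule above. The thresholds are illustrative
// assumptions; tune them to your own codebase.
public class RefactorPriority {
    static int priority(int complexity, int lcom4) {
        boolean complexMethods = complexity > 10;
        boolean manyConcerns   = lcom4 > 5;
        if (complexMethods && manyConcerns) return 1; // mixed algorithms, complex methods
        if (complexMethods)                 return 2; // one concern, but complex methods
        if (manyConcerns)                   return 3; // likely a utility class; leave it
        return 4;                                     // well-written; don't touch it
    }

    public static void main(String[] args) {
        System.out.println(priority(25, 8)); // 1: refactor this class first
        System.out.println(priority(3, 1));  // 4: give the author accolades
    }
}
```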
<h2 style="text-align: left;">
Summary</h2>
</div>
<div>
Technical debt represents the skeletons in a software development project's closet. Tools like Sonar give you access to metrics like LCOM4 and cyclomatic complexity that let you see those skeletons. Don't be afraid, though: the good thing about being able to see your skeletons is that you can fix them, addressing potential problems as they arise.</div>
</div>
2012-02-03: A History: Integration solutions, from ESBs to Camel + Karaf!<div dir="ltr" style="text-align: left;" trbidi="on">This is based largely on a response I made to an OSGi question on stackoverflow.com. I thought it would be good to share it with you folks.<br />
<br />
First and foremost, ESBs were a really good idea 8 years ago when they were proposed. And they solved an important problem: how do you define a business problem in a way that those pesky coders will understand? The goal was to develop a system that would let a business person create a software solution with little or none of that pesky developer interaction that sucks up money better spent on management bonuses. <br />
<br />
To answer this, the good folks at many organizations came up with JBI, BPMN, and a host of other solutions that let business folks model the business processes they wanted to "digitize". But really, they were all flawed at a very critical level: they addressed business issues, but not integration issues. As such, many of these implementations were unsuccessful unless done by some high-priced consultant, and even then your prospects were sketchy.<br />
<br />
Meanwhile, in 2003, some really smart folks (Gregor Hohpe and Bobby Woolf) published a book called "Enterprise Integration Patterns" which identified over 60 design patterns used to solve common integration problems. Many of the folks doing ESB work realized that their problem wasn't one of business modelling. Rather, the problem was how to integrate their existing applications. To help solve this, James Strachan and some other really smart guys started the Apache Software Foundation project "Camel". <br />
<br />
Camel is a good implementation of the core Enterprise Integration Patterns, plus a huge number of components designed to allow folks like you and me to hook stuff together.<br />
<br />
So, if you think of your business process as simply a need to send data from one application to another to another, with the appropriate data transformations in between, then Camel (riding in a container such as Karaf) is your answer. In fact, there is a large movement towards replacing ESBs with Karaf + Camel.<br />
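That pipes-and-filters idea can be reduced to a few lines of plain Java. This is a conceptual sketch only, not Camel's API: real Camel routes are written with the RouteBuilder DSL (e.g. from("file:in").to("jms:orders")), and every name below is invented for illustration:

```java
import java.util.List;
import java.util.function.Function;

// Conceptual sketch of a "route": a payload flows through an ordered chain of
// transformation steps, which is the heart of what a Camel route does.
public class RouteSketch {
    static String route(String payload, List<Function<String, String>> steps) {
        for (Function<String, String> step : steps) {
            payload = step.apply(payload); // each step acts like one processor
        }
        return payload;
    }

    public static void main(String[] args) {
        String delivered = route("order:42", List.of(
                s -> s.toUpperCase(),          // a data transformation...
                s -> "<msg>" + s + "</msg>")); // ...then a format adaptation
        System.out.println(delivered); // <msg>ORDER:42</msg>
    }
}
```

Camel's value is that the "steps" and "endpoints" are prebuilt components (file, JMS, HTTP, and hundreds more), so you compose integrations instead of hand-writing them.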
<br />
Now, what if you want to base the "route" (a specified series of application endpoints you want to send data through) off of a set of configurable rules in a database? Well, Camel can do that too! There's an endpoint for that!</div>

2012-02-02: Creating a Virtual Appliance with Karaf - Part 2<div dir="ltr" style="text-align: left;" trbidi="on">The last few months have been rife with decisions and hard work, and ultimately led to a number of good things, including a new Virtual Appliance containing a fully pre-configured software development environment consisting of applications that are fully consistent with the Apache Software License 2.0! Completely open-source, not gimped, fully functional and, best of all, fully configured.<br />
First, a couple of decisions had to be made. Instead of working inside a full-blown cloud as I originally proposed, I decided it would save time to target a specific virtualization technology: VMware's VMPlayer. This technology was chosen because it is free to use, lowering cost barriers for new developers. Second, for an IDE, I spoke with a number of my open-source colleagues and chose IntelliJ's Community Edition. Next, I had to decide on an operating system, and I chose CentOS. How should I distribute this new VM? That's the sticky part. To help manage the creation and distribution of this VM, I created a small open-source company called <a href="http://www.atraxia.net/">Atraxia Technologies</a>. Unfortunately, this really doesn't solve the problem of how to distribute it. For reasons that I'll explain later, I still haven't gotten an answer for that.<br />
<br />
Lastly, I needed to decide what the purpose of the VM should be. Sure, creating a virtual appliance is a fun thing to do. But if it doesn't have a clear purpose, nobody is going to use it. So, after talking to my fellow open-source developers, I decided that my first Virtual Appliance would be a fully-configured software development environment. Many of my friends and I have a number of new software developers we mentor. Unfortunately, a large amount of time is needed to get these developers' environments configured, reducing the amount of time we can spend helping improve their software development skills. Having a VM they can install by themselves will greatly reduce configuration time.<br />
<br />
The virtual appliance was pretty easy to set up. <br />
<br />
First, I downloaded and installed VMware Workstation. This is a $200.00 product, but it made creating the VM pretty easy, so it was worth the cost. Once Workstation was installed, I downloaded and installed CentOS 6 into it. Again, this was pretty easy. The login and password are both "blue". Next I installed the IDE, the 1.6.0_22-compatible version of OpenJDK, git, subversion, maven 2.2.1 and 3.0, and Nexus.<br />
<br />
Why install Nexus? Well, from my experience there are cases when a build is halted prematurely, which results in certain Maven metadata files becoming corrupted. In those cases most developers will simply delete their /home/blue/.m2/repository directory instead of attempting to find the corrupted file. However, in a wireless environment, blowing away your repository can result in a very long build time because Maven will have to wirelessly re-download all of the libraries and then rebuild the repository directory. To fix this, each VM comes with its own preconfigured Nexus repository, and the .m2/settings.xml file is written to only pull files from the local Nexus. The local Nexus, in turn, points to the global Maven repository, Codehaus, and a couple of other public repos. <br />
<br />
The Nexus repository will cache all of the files that Maven needs to build. The first time you build an application, it may take some time to populate the Nexus repository. However, after that, even if you have to blow away your Maven repo, it will take very little time to rebuild your application and your repo.<br />
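The mirror setup described above is a small settings.xml fragment. This is a sketch under assumptions: the URL below is the default location of a local Nexus 2 install from that era, and may differ in your environment:

```xml
<!-- Sketch of the ~/.m2/settings.xml mirror entry described above.
     The URL is an assumed local-Nexus default; adjust to your install. -->
<settings>
  <mirrors>
    <mirror>
      <id>local-nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://localhost:8081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
</settings>
```

With `mirrorOf` set to `*`, every Maven request goes through the local Nexus, which is exactly what makes rebuilding a blown-away local repo fast.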
<br />
Now on to the JDK. This is where things got sticky. Despite all of the hard work of the OpenJDK team, there are still some applications that won't build with it. And, due to licensing restrictions from Oracle, I can't bundle the Oracle JDK with each VM. So, to test the VM, I had to install the Oracle JDK, compile my test application (Apache Karaf), and then uninstall the JDK and point the PATH and JAVA_HOME environment variables back to OpenJDK. OpenJDK is a perfectly fine JVM for most developers. But for folks developing applications like Hadoop, Accumulo, etc., the Oracle JDK is really necessary. If you need it, it is free to download and use, but not free to distribute.<br />
<br />
So, back to distributing the VM. Currently, the VM is completed. However, I'd have to pay $$ in order to get the 3-gigabyte file hosted. As such, it is sitting on my hard-drive awaiting some generous donor of bandwidth to host it. The VM is called "Atraxia Blue" and it is the first of three planned virtual machine offerings. This one's intent is to be a desktop development environment. The next one will take the place of the software hub used by most development teams to host their ticketing system, Sonar, and central Nexus repository. I'm still researching whether I should include a central Git repository as well. This offering will be called "Atraxia Sienna". The last VM I will produce will include open-source office automation software and some open-source back-end business tools that are being developed by the Apache Software Foundation. This one will be called "Atraxia Pointy-Hair". I'm open to a different name though, if someone wants to suggest one.<br />
<br />
The final goal in all of this is to have a suite of completely open-source virtual appliances a small team or company can use to stand up a complete business, fully configured out-of-the-box. <br />
<br />
Oh and thanks to my daughter for coming up with a new motto for Atraxia Technologies: "The Cloud, only Fluffier". <br />
<br />
Until next time!</div>

2011-09-21: Creating a Virtual Appliance with Karaf: Part 1 (Finding the Right Cloud)

Something I like to do to keep my understanding of technologies up-to-date is to identify a technology on the cusp of widespread acceptance and figure it out. This is why I decided to start working on Karaf and OSGi a year ago, and it has served me very well.<br />
<br />
The current wave is the cloud and virtualization. Like OSGi, it's been around for a while and is now being widely adopted. Many organizations are familiar with "virtual machines": not Java VMs, but the ability to start up an instance of an operating system within another operating system.<br />
<br />
Anyhow, as a newcomer to cloud computing, my first small chore was to create a virtual appliance containing Karaf and Cellar. Karaf is a small, lightweight OSGi container that allows you to do enterprise stuff. :-) To accomplish this, I used an older Toshiba Satellite as my server and a newer Dell laptop as my client. The client will be used for development of the virtual appliance.<br />
<br />
VMWare: VMware has a great web-centric application called "Go" that small-to-medium businesses can use to download their Hypervisor version of the software. Using this, businesses can get access to thousands of virtual appliances and leverage them (for a cost) to handle their business needs. However, there are some drawbacks. Specifically, my server needed a gigabit controller. Because my slow computer only had a little ol' NIC, the driver couldn't be found and I was unable to install the hypervisor to test it. Also, remember that successfully installing this app on your system will completely wipe out the previously installed operating system.<br />
<br />
SuseStudio: SuseStudio has an awesome web-based interface for creating virtual appliances. The only drawback I saw was the fact that .rpm's needed to be created for any applications I wanted to install into my VApp. I'm still looking to see if this is the standard. In any case, this was a show-stopper for me.<br />
<br />
XenServer: This, like VMWare, was a download that wiped out my hard-drive. They were nice enough to let me know that during installation. The good thing is that this is the first cloud system I've been able to install on my test system, so this is the server I'll use to build and install my virtual appliance!<br />
<br />
Next time: How to find the XenServer UI!!

2011-09-19: Why OSGi?

Recently I answered a question on stackoverflow.com about the difference between component-based and modular architectures. However, what the answer really does is show one of the larger benefits of OSGi: fixing the classpath.<br />
<br />
Here is the question and my answer:<br />
<br />
Q: OSGi is a modular architecture, JavaBeans is a component architecture. What's the diff?<br />
<br />
A: The primary difference between OSGi and Java Beans is in how the classloader works. In a standard .jar file or EJB, the rt.jar file or EJB equivalent maintains the classpath. Additionally, if you are using a container to deploy your application into, you may have multiple classpath-maintenance mechanisms, which causes problems. As a result, when you make a .war file, for example, you usually create a lib directory with all of your .war's .jar dependencies inside the .war. If you only have one .war or .jar in your application, this isn't so bad. But imagine a large enterprise deployment with 100 EJBs, all containing apache-commons! You end up with 100 instances of apache-commons all running inside the same container, sucking up resources.<br />
<br />
In OSGi, you deploy each .jar file (we'll call them bundles cuz this is OSGi now) into the OSGi container. Each .jar file exposes (exports) the packages it wants other packages to use, and also identifies the version of the bundle. Additionally, each bundle also expressly states (imports) the packages it needs from other bundles to work. The OSGi container will then manage all of these exports and match them up to the appropriate imports. Now you have apache-commons available to each of the EJB's you want to make available. You've done away with your /lib directory and now your application takes up less resources.<br />
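The exports and imports described above are just headers in the bundle's MANIFEST.MF. Here is a sketch of what they look like; the bundle name, packages, and version ranges below are invented for illustration:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.orders
Bundle-Version: 1.0.0
Export-Package: com.example.orders.api;version="1.0.0"
Import-Package: org.apache.commons.lang;version="[2.4,3)"
```

The container resolves each `Import-Package` against some other bundle's `Export-Package`, so a single apache-commons bundle can satisfy every consumer instead of each one shipping its own copy.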
<br />
In your question you asked the difference between a component architecture and a modular architecture. Modularity refers to this process of making each bundle its own deployment unit and allowing it to talk to other bundles instead of balling them all up into one massive .jar file.

2011-07-20: Hibernate Antlr and OSGi Fragments

From time to time, you'll run across instances where a bundle of yours in OSGi needs to have access to a class, but can't. The error you'll get is "ClassNotFoundException: foo.Bar", where foo.Bar is the package and class your bundle is trying to access. Then, after reviewing your OSGi environment, you see that the package of that class is available, but for some reason your bundle can't see it. What the heck? Why can't it see it? <br /><br />This usually happens when you use a pre-bundled .jar file that calls Class.forName("foo.Bar"). Calling Class.forName() isn't a good practice inside of OSGi, because it requires your bundle to be able to perform "wiring" after it has been activated and started. But, seriously, you can't change over a decade of programming practices overnight, so folks still do this. Seeing that this would be an issue, the good folks at OSGi created a neat way of fixing it: the bundle fragment! (applause, Woo HOO!)<br /><br />What a bundle fragment does is add functionality to an existing bundle. In the OSGi reference, the purpose is usually listed as providing localization support. However, it is also a very powerful mechanism for adding to and changing the contents of a bundle's MANIFEST.MF file. 
For those of you unaware of what this file does, it includes a set of directions for the OSGi environment on how to treat a bundle.<br /><br />In the above example, you would create a fragment adding the foo package to the Import-Package section of the MANIFEST.MF file.<br /><br />Most bloggers would stop there. I identified an issue and then told you how to fix it. But not me, nope, I want to show you a real-life example of this using Hibernate and Antlr.<br /><br />Hibernate uses a parsing service called AST. This, in turn, uses Antlr to help with its parsing. Unfortunately, Antlr needs to use a Hibernate class called "org.hibernate.hql.ast.HqlToken". And, of course, Antlr does a Class.forName() at runtime to get an instance of it. This kind of makes sense; Antlr wasn't written just for use with Hibernate. As such, it needs to be told at runtime what token it should use for parsing.<br /><br />To fix this issue, you create a fragment that adds an Import-Package entry for the package HqlToken is in, "org.hibernate.hql.ast". Doing this is pretty simple. First, you create a normal java project. It doesn't matter what tool you use, as long as you have the following basic structure.<br /><br />- antlr-hibernate-fragment<br />   pom.xml<br />   - src<br />     - main<br />       - java<br />       - resources<br />         - META-INF<br /><br />Because we're only adding something to the MANIFEST.MF file, the only file in this project that will have anything in it will be the Maven pom.xml. 
There are a ton of places folks can go to get smart on Maven, so I'm not going to review how the pom.xml file should look other than the maven-bundle-plugin.<br /><br />In your build section of your pom.xml, add the following:<br /><br /><plugins><br />   <plugin><br />     <groupId>org.apache.felix</groupId><br />     <artifactId>maven-bundle-plugin</artifactId><br />     <extensions>true</extensions><br />     <configuration><br />       <instructions><br />         <Fragment-Host>com.springsource.antlr</Fragment-Host><br />         <Import-Package>org.hibernate.hql.ast</Import-Package><br />       </instructions><br />     </configuration><br />   </plugin><br /></plugins><br /><br />Then, run the following from your console, or compile it with your IDE.<br /><code><br /> mvn clean install<br /></code><br />This will create a bundle fragment ready for use with OSGi. In Karaf, after you deploy your original antlr bundle and the fragment, you can run the following console command to see the new import-package directive added to the rest of antlr's Import-Package section. The bundleId referred to in this command is the bundleId of the original antlr bundle, not the new fragment.<br /><code><br /> headers (bundleId)<br /></code><br /><br />I added the new antlr fragment to my Hibernate features.xml document (from a previous blog entry) right after my antlr bundle and then deployed it as part of my Hibernate feature.<br /><br />Please let me know if that helps!

2011-07-18: Hibernate and OSGi

This is a copy of a blog post I did about a year ago on Java.net. Of my posts on that blog, this was the most viewed and linked. 
So, I've improved it and included it in my new blog to help folks looking for help with Hibernate inside of OSGi.<br /><br />One of the major contributors to the OSGi movement is Peter Kriens, who created an excellent tool called BND. The good folks at apache-felix then created a maven-plugin that makes quick work of bundling apps, but I'll go over that in a later blog. The reason I bring this up here is that a while ago Peter wrote <a href="http://www.aqute.biz/Code/BndHibernate">www.aqute.biz/Code/BndHibernate</a>, which discusses how to create a mega-bundle with all of Hibernate 3's core .jar files and dependent .jar files wrapped up inside of one huge .jar.<br /><br />This is an excellent approach, but it has a few drawbacks:<br /><ul><br /><li>It requires the creation of a new .jar file, which may confuse new developers,</li><br /><li>It is pretty complex, and</li><br /><li>The BND tool, while most excellent and wonderful, takes some time to figure out.</li></ul><br />What I'm going to do is show you how to leverage Karaf provisioning and SpringSource bundles to do something similar, but without having to create a massive Hibernate mega-bundle. As Peter says, "<a href="http://www.hibernate.org/">Hibernate</a> is one of the more complicated open source projects to wrap". To do this, I'll:<br /><ul><br /><li>Talk about an excellent source for bundles, Spring Source,</li><br /><li>Describe what a features.xml file is,</li><br /><li>Show you a working features.xml file for Hibernate 3, and finally</li><br /><li>Show you how to modify one of Karaf's configuration files to automagically start up Hibernate when you start Karaf.</li></ul><br />First, I'd like to talk about <a href="http://www.springsource.com/repository/app/">Spring Source</a>. These folks are doing an outstanding job of creating bundles out of commonly used Java apps, like Hibernate. 
I would be remiss if I didn't mention the fact that, without their hard work, simply deploying Hibernate using a features.xml file would not be possible. The link points to their bundle repository; you should bookmark it, it's a great resource.<br /><br />Karaf has a few core concepts that this blog will illustrate to you: <br /><ul><br /><li>OSGi bundles,</li><br /><li>Karaf Features,</li><br /><li>Features.xml documents, and</li><br /><li>the org.apache.felix.karaf.features.cfg file.</li></ul><br />When you have an application like Hibernate that has dependencies on other libraries, one way to deploy them is to turn them into OSGi bundles and then create a features.xml file. An OSGi bundle is simply a .jar file with a MANIFEST.MF file containing OSGi goodness. There are a lot of great resources already available on the internet that go into detail about what an OSGi bundle is composed of, so I won't go further into it here.<br /><br />Deploying a large application composed of bundles using a features.xml file is much less time-intensive than deploying them manually. Most of the large bundled open-source programs have created features.xml files for you, Camel being the first one that comes to mind. Unfortunately, at the time of this writing, I was unable to find one for Hibernate, so I made my own using a list of dependencies created by the good folks at <a href="http://www.springsource.com/repository/app/">Spring Source</a>.<br /><br />First, start Karaf. Then, to deploy a single OSGi bundle into Karaf, you simply need to invoke the following command (for the uninitiated, "karaf@root> " is the prompt; don't type that.) <br /><code style="font-size: 10pt" ><br />karaf@root> osgi:install -s (uri of your bundle, eg mvn:myproject/myapp/version)<br /></code><br /><br /><p>In practice, the uri can be anything that can be resolved: a Maven URI, a URL, or even a file on your local file-system! 
Once you've typed that, Karaf will print out the name of the bundle you installed on the command-line. Alternatively, you could find out the bundle number of the bundle you installed by typing:<br /><br /><code><br />karaf@root> osgi:list | grep (bundle symbolic name)<br /></code><br /><br /></p><br /><p>That will give you the number of the bundle as it is installed in Karaf. Finally, you start the bundle by typing: </p><br /><br /><code style="font-size: 10pt"><br />karaf@root> osgi:start (bundle Id number)<br /></code><br /><br />A bundle can be in a number of states: Installed, Resolved, or Active. What you want is for your bundles to be Active. Usually, anything else is a problem. This is what you'd type to install and start an OSGi bundle from a maven repository:<br /><code style="font-size: 10pt"><br />karaf@root> osgi:install mvn:org.dom4j/com.springsource.org.dom4j/1.6.1<br />89<br />karaf@root> osgi:start 89<br /></code><br /><br />Now, imagine if you have 90 bundles in your application, including all of the dependencies. That's a lot of typing! A features.xml file does all that heavy lifting for you. The features.xml file is simply an XML file where you identify a given feature, and then define the OSGi bundles and other features necessary for that feature to work inside of Karaf.<br /><br />Karaf will read in the features.xml file, and when you install a feature, it will automatically download each bundle listed from its associated URI, install it as an OSGi bundle, and start it. It will do this for each bundle unless it finds a problem. If it finds a problem, it will stop and uninstall each bundle it started. If a bundle was already installed, Karaf will "refresh" it instead, and if there's a problem, it will leave it active. 
This is the features.xml file I wrote for Hibernate; the file is called hibernate-features-3.3.2-GA.xml.<p style="font-size: 8pt"><br /><?xml version="1.0" encoding="UTF-8" ?><br /><features><br /><feature name="hibernate" version="3.3.2.GA"><br /><bundle>mvn:org.dom4j/com.springsource.org.dom4j/1.6.1</bundle><br /><bundle>mvn:org.apache.commons/com.springsource.org.apache.commons.collections/3.2.1</bundle><br /><bundle>mvn:org.jboss.javassist/com.springsource.javassist/3.9.0.GA</bundle><br /><bundle>mvn:javax.persistence/com.springsource.javax.persistence/1.0.0</bundle><br /><bundle>mvn:org.antlr/com.springsource.antlr/2.7.7</bundle><br /><bundle>mvn:net.sourceforge.cglib/com.springsource.net.sf.cglib/2.2.0</bundle><br /><bundle>mvn:org.apache.commons/com.springsource.org.apache.commons.logging/1.1.1</bundle><br /><bundle>mvn:javax.xml.stream/com.springsource.javax.xml.stream/1.0.1</bundle><br /><bundle>mvn:org.objectweb.asm/com.springsource.org.objectweb.asm/1.5.3</bundle><br /><bundle>mvn:org.objectweb.asm/com.springsource.org.objectweb.asm.attrs/1.5.3</bundle><br /><bundle>mvn:org.hibernate/com.springsource.org.hibernate/3.3.2.GA</bundle><br /><bundle>mvn:org.hibernate/com.springsource.org.hibernate.annotations/3.3.1.ga</bundle><br /><bundle>mvn:org.hibernate/com.springsource.org.hibernate.annotations.common/3.3.0.ga</bundle><br /><bundle>mvn:org.hibernate/com.springsource.org.hibernate.ejb/3.3.2.GA</bundle><br /></feature><br /></features></p><br />Now, copy that bad-boy into an editor, and save it in your ${karaf.home}/etc directory. 
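Before handing a features.xml like the one above to Karaf, it can be handy to sanity-check that it parses and to see what each feature pulls in. This little script is purely illustrative — it is not part of Karaf, and the two bundle lines are a trimmed-down excerpt of the file above:

```python
import xml.etree.ElementTree as ET

# Trimmed-down excerpt of the hibernate features.xml shown above
FEATURES_XML = """\
<features>
  <feature name="hibernate" version="3.3.2.GA">
    <bundle>mvn:org.dom4j/com.springsource.org.dom4j/1.6.1</bundle>
    <bundle>mvn:org.antlr/com.springsource.antlr/2.7.7</bundle>
  </feature>
</features>
"""

def bundles_by_feature(xml_text):
    """Parse a features.xml and map each feature name to its bundle URIs."""
    root = ET.fromstring(xml_text)  # raises ParseError if the XML is malformed
    return {
        feature.get("name"): [bundle.text for bundle in feature.findall("bundle")]
        for feature in root.findall("feature")
    }

if __name__ == "__main__":
    for name, bundles in bundles_by_feature(FEATURES_XML).items():
        print(f"feature {name}: {len(bundles)} bundle(s)")
        for uri in bundles:
            print(f"  {uri}")
```

This only checks well-formedness and structure, of course — Karaf still resolves and downloads the actual bundles itself.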
Once that's done, you'll be ready for the next step.<br /><br />We can make Karaf aware of this features.xml file manually by typing:<br /><br /><code style="font-size: 10pt"><br />karaf@root> features:installUrl file:///home/myname/apache-felix-karaf-2.2.1/etc/hibernate-features-3.3.2-GA.xml<br /></code><br /><br />To verify your new features.xml file is installed, type:<br /><code style="font-size: 10pt"><br />karaf@root> features:listUrl<br /></code><br /><br />If you see your feature listed there, you can start your feature by typing:<br /><code style="font-size: 10pt"><br />karaf@root> features:install hibernate<br /></code><br /><br />Give it a few seconds, and when you get the prompt back, if you haven't gotten any errors, you can verify that hibernate is installed by typing:<br /><br /><code style="font-size: 10pt"><br />karaf@root> features:list | grep hibernate<br /></code><br /><br />Installing your features.xml and starting your feature this way is fun, but what if you've got a ton of features to install? Doing it manually is fun; but let's face it, it's not very sexy... If only there were a way to make Karaf aware of the features.xml on start-up. Well, there is! We just add it to the ${karaf.home}/etc/org.apache.felix.karaf.features.cfg file. This is what it will look like when you download and install Karaf:<br /><code style="font-size: 8pt"><br />################################################################################<br />#<br /># Licensed to the Apache Software Foundation (ASF) under one or more<br /># contributor license agreements. See the NOTICE file distributed with<br /># this work for additional information regarding copyright ownership.<br /># The ASF licenses this file to You under the Apache License, Version 2.0<br /># (the "License"); you may not use this file except in compliance with<br /># the License. 
You may obtain a copy of the License at<br />#<br /># http://www.apache.org/licenses/LICENSE-2.0<br />#<br /># Unless required by applicable law or agreed to in writing, software<br /># distributed under the License is distributed on an "AS IS" BASIS,<br /># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br /># See the License for the specific language governing permissions and<br /># limitations under the License.<br />#<br />################################################################################<br /># Comma separated list of features repositories to register by default<br />#<br />featuresRepositories=mvn:org.apache.felix.karaf/apache-felix-karaf/1.6.0/xml/features<br />#<br /># Comma separated list of features to install at startup<br />#<br />featuresBoot=ssh,management<br /></code><br /><br />This file contains two very important items for us: a list of features repositories, and a list of the features to start up when we start Karaf. Including our features.xml file is pretty straight-forward; we simply add the URI to the end of the featuresRepositories line so that it looks like this:<br /><code style="font-size: 10pt"><br />featuresRepositories=mvn:org.apache.felix.karaf/apache-felix-karaf/1.6.0/xml/features,file:///home/myname/apache-felix-karaf-1.6.0/etc/hibernate-features-3.3.2-GA.xml<br /></code><br /><br />When I first did this, I tried to put a \ character after the first comma and start each features file on its own line, but that created issues, so now I put all of my features files on the same line.<br /><br />When we start up Karaf, we can verify that the features file is installed by typing:<br /><br /><code style="font-size: 10pt"><br />karaf@root> features:listUrl<br /></code><br /><br />This produces a list of each features.xml file Karaf was able to successfully read. 
If you don't see your features.xml file there, go look at it; there's probably an issue. Also, check out the ${karaf.home}/data/log/karaf.log file and see if any errors or exceptions were reported.<br /><br />Ok, what if Hibernate is part of a larger set of applications, and you want them all to start up when you start up Karaf? Well, that's not too difficult. Simply add your new feature to the featuresBoot line so it looks like this:<br /><br /><code style="font-size: 10pt"><br />featuresBoot=ssh,management,hibernate<br /></code><br /><br />If everything works properly, when you start up Karaf, your happy new hibernate feature should be up and running and ready for some abuse!<br /><br />Please let me know if this helps!<br /><br />Mike Van<br /><br /><br />Karaf Exceptions: "Missing Constraint: Import Package" (2011-07-18)<br /><br />One of the toughest things about working with Karaf is that the error messages created by Karaf are not clear to folks new to OSGi. My "Karaf Exceptions" set of blogs is here to provide help in navigating these tough waters. Our first exception is "Missing Constraint: Import Package".<br /><br />(Blatantly plagiarized and improved from my original <a href="http://karaf.922171.n3.nabble.com/How-to-resolve-Missing-Constraint-Import-Package-org-springframework-web-util-version-quot-3-0-0-quo-td3105497.html#a3134841">post</a> on the <a href="http://karaf.922171.n3.nabble.com/Karaf-User-f930749.html">Karaf User Mailing List</a>)<br /><br />Let's set up the scenario. A user asked the devs this:<br /><code><br />I'm unable to activate a bundle and am getting - "could not be resolved. 
Reason: Missing Constraint: Import-Package: org.springframework.web.util; version="3.0.0"".<br /><br />What configuration setting am I missing?<br /></code><br /><br />This error usually arises when the bundle in question is attempting to wire to a package that is not present in the OSGi environment. To verify this, do the following:<br />1) Check whether the package is already available within Karaf:<br /><code><br />karaf@root> exports | grep org.springframework.web.util<br /></code><br />2) If that command doesn't return anything, then you should find the bundle that contains the package you need, and install that into your container. Doing a simple search for that package on a <a href="https://repository.sonatype.org/index.html#welcome">sonatype repository</a> should let you see the proper bundle.<br /><br />Ok, this will work for most cases. But there are corner cases where you'll get this error if a third-party bundle can't access a given package. For example, suppose you're using <a href="http://www.hibernate.org/">Hibernate</a> and <a href="http://www.mchange.com/projects/c3p0/index.html">C3P0</a> to access a <a href="http://www.mysql.com/">MySQL</a> database. You'll get the above error when C3P0 attempts to get access to the MySQL driver. Unfortunately, the C3P0 bundle doesn't wire to the driver, so you need to create a "fragment" that adds an import statement to the C3P0 bundle. This is in itself a sticky issue deserving its own blog, so I'll handle it there.<br /><br />Please try this and let me know if it helps.<br /><br />Mike Van<br /><br /><br />Karaf Logging Overview (2011-07-18)<br /><br /><div><div><div><br />A recent post on the Karaf User's mailing list was focused on the topic of Karaf logging. 
This topic is easily misunderstood and can take some time to figure out. As is usual with many open-source items, this topic is only dryly documented. Below, hopefully, I can help to alleviate this.<br /><br />First, it's good to remember that in log4j, logging can be set at the package level. This can be done in Karaf using the following console command:<br /><code><br />karaf@root> log:set LEVEL mypackage.subpackage<br /></code><br />This will set the logging for that specific package to whatever level you provide.<br /><br />In Karaf 3.x, you can also check the level for a specific package using the following command:<br /><code><br />karaf@root> log:get mypackage.subpackage<br /></code><br />There are some caveats with the <a href="https://cwiki.apache.org/KARAF/41-console-and-commands.html#4.1.ConsoleandCommands-Logshell">log console commands</a>. With the exception of <a href="https://cwiki.apache.org/KARAF/43-logging-system.html#4.3.Loggingsystem-Commands">log:set</a>, your commands won't affect the logs placed on the file system. For example, if you type <a href="https://cwiki.apache.org/KARAF/43-logging-system.html#4.3.Loggingsystem-Commands">log:clear</a>, you won't have access to any log messages written out prior to executing the log:clear command. However, <a href="https://cwiki.apache.org/KARAF/43-logging-system.html#4.3.Loggingsystem-Commands">log:clear</a> won't remove those older log messages from the log directory.<br /><br />This is because the logging commands don't actually work against your log files. Instead, the logging commands look for <a href="http://www.ops4j.org/projects/pax/logging/xref/org/ops4j/pax/logging/spi/PaxLoggingEvent.html">PaxLoggingEvent</a>s inside of Karaf. Any time a log message is generated in Karaf, a <a href="http://www.ops4j.org/projects/pax/logging/xref/org/ops4j/pax/logging/spi/PaxLoggingEvent.html">PaxLoggingEvent</a> is created. The logging commands look for these and then act on them. 
So, if you accidentally run <a href="https://cwiki.apache.org/KARAF/43-logging-system.html#4.3.Loggingsystem-Commands">log:clear</a>, don't worry, you won't lose all of your log messages. However, if you accidentally set your logging level to DEBUG or TRACE, this change will result in all of your logging messages being set to the new level and your logs filling up very quickly.<br /><br />Now, the logging console commands are great, but what if you want to break your logging messages out into different log files? For the most part, the logging commands won't help you. If what you want is a file containing only the messages from a given package, you simply create a logger for your package, and then create a RollingFileAppender for that logger. Here's an example of what to place in your <a href="http://felix.apache.org/site/43-logging-system.html">org.ops4j.pax.logging.cfg</a> file.<br /><code style="font-size: 10pt"><br /># mypackage.subpackage appender<br />log4j.logger.mypackage.subpackage=LEVEL, subpackage<br />log4j.appender.subpackage=org.apache.log4j.RollingFileAppender<br />log4j.appender.subpackage.layout=org.apache.log4j.PatternLayout<br />log4j.appender.subpackage.layout.ConversionPattern=PATTERN<br />log4j.appender.subpackage.file=${karaf.data}/log/subpackage.log<br />log4j.appender.subpackage.append=true<br />log4j.appender.subpackage.maxFileSize=10MB<br />log4j.appender.subpackage.maxBackupIndex=10<br /></code><br />Now, some caveats. If your "subpackage" has Camel routes that you'd like logged into your subpackage log file, they won't appear there. This is because the messages from your Camel routes are generated from the <a href="http://camel.apache.org/">org.apache.camel</a> package, and not by your subpackage.<br /><br />Also, all messages are still going to be written into your karaf.log file. 
So, if you are seeing some strangeness and can't diagnose it in your subpackage.log file, check out your karaf.log.<br /><br />As always, please let me know if you have any questions.</div></div></div><br /><br />Mike Van<br /><br /><br />Intro - Open Source Technologies (2011-07-18)<br /><br /><div>Over the years I've been involved with a number of emerging open-source technologies including J2EE, OSGi, and a number of Apache Software Foundation projects. The purpose of this blog is to focus on the nitty-gritty details of these new open-source technologies as I start working with them. Currently, I'm working with the Apache Software Foundation's <a href="http://karaf.apache.org/">Karaf</a>, <a href="http://camel.apache.org/">Camel</a>, and <a href="http://karaf.apache.org/index/subprojects/cellar.html">Cellar</a> projects, along with helping to guide the emerging open-source management framework, <a href="http://www.openagile.com/">OpenAgile</a>.<br /><br />The majority of this blog will be composed of posts made to various user groups, intended to help out the communities using the technologies I help develop and implement.<br /></div><br /><br />Mike Van