Showing posts with label Karaf. Show all posts

Friday, February 3, 2012

A History: Integration solutions, from ESBs to Camel + Karaf!

This post is based largely on a response I made to an OSGi question on stackoverflow.com. I thought it would be good to share it with you folks.

First and foremost, ESBs were a really good idea when they were proposed eight years ago, and they set out to solve an important problem: how do you define a business problem in a way that those pesky coders will understand? The goal was to develop a system that would allow a business person to create a software solution with little or no pesky developer interaction, the kind of interaction that sucks up money better spent on management bonuses.

To answer this, the good folks at many organizations came up with JBI, BPMN, and a host of other solutions that let business folks model the business processes they wanted to "digitize". But really, they were all flawed at a very critical level: they addressed business issues, but not integration issues. As such, many of these implementations were unsuccessful unless done by some high-priced consultant, and even then your prospects were sketchy.

Around the same time, some really smart folks published a book called "Enterprise Integration Patterns" (2003), which identified over 60 design patterns used to solve common integration problems. Many of the folks doing ESB work realized that their problem wasn't one of business modelling; rather, the problem was how to integrate their existing applications. To help solve this, James Strachan and some really smart guys started the Apache Software Foundation project "Camel".

Camel is a good implementation of the basic Enterprise Integration Patterns, and it ships with a huge number of components designed to allow folks like you and me to hook stuff together.

So, if you think of your business process as simply a need to send data from one application to another to another, with the appropriate data transformations in between, then Camel (riding in a container such as Karaf) is your answer. In fact, there is a large movement towards replacing ESBs with Karaf + Camel.
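To make that concrete, here is a minimal sketch of such a route using Camel's Spring XML DSL. The inbox directory, stylesheet, and queue name are hypothetical placeholders; the point is the shape: data moves from endpoint to endpoint with a transformation in between.

```xml
<!-- Hypothetical route: pick up files from an inbox directory,
     transform them with an XSLT stylesheet, and hand the result
     to a JMS queue. All three endpoint URIs are placeholders. -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="file:/var/data/inbox"/>
    <to uri="xslt:transform-order.xsl"/>
    <to uri="jms:queue:orders"/>
  </route>
</camelContext>
```

Swap any of those URIs for another of Camel's components and the route itself doesn't change shape; that's the appeal.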

Now, what if you want to base the "route" (a specified series of application endpoints you want to send data through) off of a set of configurable rules in a database? Well, Camel can do that too! There's an endpoint for that!

Thursday, February 2, 2012

Creating a Virtual Appliance with Karaf - Part 2

The last few months have been rife with decisions and hard work, and they ultimately led to a number of good things, including a new Virtual Appliance containing a fully pre-configured software development environment consisting of applications that are fully consistent with the Apache License 2.0! Completely open-source, not gimped, fully functional and, best yet, fully configured.
First, a couple of decisions had to be made. Instead of working inside of a full-blown cloud as I originally proposed, I decided that it would save time to target a specific virtualization technology: VMware's VMware Player. This technology was chosen because it is free to use, lowering cost barriers for new developers. Second, for an IDE, I spoke with a number of my open-source colleagues and chose IntelliJ IDEA Community Edition. Next, I had to decide what operating system to use, and I chose CentOS. How should I distribute this new VM? That's the sticky part. To help manage the creation and distribution of this VM, I created a small open-source company called Atraxia Technologies. Unfortunately, this really doesn't solve the problem of how to distribute it. For reasons that I'll explain later, I still don't have an answer for that.

Lastly, I needed to decide what the purpose of the VM should be. Sure, creating a virtual appliance is a fun thing to do. But, if it doesn't have a clear purpose, nobody is going to use it. So, after talking to my fellow open-source developers, I decided that my first Virtual Appliance would be a fully-configured software development environment. Many of my friends and I have a number of new software developers we mentor. Unfortunately, a large amount of time is needed to get these developers' environments configured, reducing the amount of time we can spend helping improve their software development skills. Having a VM they can install by themselves will greatly reduce configuration time.

The virtual appliance was pretty easy to set up.

First, I downloaded and installed VMware Workstation. This is a $200.00 product, but it made creating the VM pretty easy, so it was worth the cost. Once Workstation was installed, I downloaded and installed CentOS 6 into it. Again, this was pretty easy. The login and password are both "blue". Next I installed the IDE, the 1.6_22-compatible version of OpenJDK, Git, Subversion, Maven 2.2.1 and 3.0, and Nexus.

Why install Nexus? Well, from my experience there are cases when a build is halted prematurely, which results in certain Maven metadata files becoming corrupted. In those cases most developers will simply delete their /home/blue/.m2/repository directory instead of attempting to find the corrupted file. However, in a wireless environment, blowing away your repository can result in a very long build time, because Maven will have to re-download all of the libraries over the wireless link and then rebuild the repository directory. To fix this, each VM comes with its own preconfigured Nexus repository, and the .m2/settings.xml file is written to only pull files from the local Nexus. The local Nexus, in turn, points to the global Maven repository, Codehaus, and a couple of other public repos.
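For reference, here is a sketch of what that settings.xml mirror section might look like. The host, port, and repository path are assumptions based on a default Nexus install; adjust them to match the VM's actual configuration.

```xml
<!-- ~/.m2/settings.xml: send every repository request through the
     local Nexus instance instead of the public internet.
     The URL below assumes a default Nexus install on localhost. -->
<settings>
  <mirrors>
    <mirror>
      <id>local-nexus</id>
      <name>Local Nexus mirror</name>
      <mirrorOf>*</mirrorOf>
      <url>http://localhost:8081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
</settings>
```

The mirrorOf value of * tells Maven to use the mirror for every repository, which is exactly the "only pull from the local Nexus" behavior described above.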

The Nexus repository will cache all of the files that Maven needs to build. The first time you build an application, it may take some time to populate the Nexus repository. After that, though, even if you have to blow away your Maven repo, rebuilding your application and your local repository will take very little time.

Now on to the JDK. This is where things got sticky. Despite all of the hard work of the OpenJDK team, there are still some applications that won't build with it. And, due to licensing restrictions from Oracle, I can't bundle the Oracle JDK with each VM. So, to test the VM, I had to install the Oracle JDK, compile my test application (Apache Karaf), and then uninstall the JDK and point the PATH and JAVA_HOME environment variables back to OpenJDK. OpenJDK is a perfectly fine JVM for most developers. But for folks developing applications like Hadoop, Accumulo, etc., the Oracle JDK is really necessary. If you need it, it is free to download and use, but not free to distribute.
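The environment switch itself is just two variables. Here is a sketch of the relevant shell lines; both JDK install paths are assumptions, so substitute the locations on your own system.

```shell
# Point the environment at the Oracle JDK (example path; adjust as needed).
JAVA_HOME=/usr/java/jdk1.6.0_22
# To switch back to OpenJDK, use its install path instead, e.g.:
# JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk
PATH="$JAVA_HOME/bin:$PATH"
export JAVA_HOME PATH
echo "JAVA_HOME is now $JAVA_HOME"
```

Putting these lines in the blue user's ~/.bashrc makes the choice stick across logins.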

So, back to distributing the VM. Currently, the VM is completed. However, I'd have to pay $$ in order to get the 3-gigabyte file hosted. As such, it is sitting on my hard drive awaiting some generous donor of bandwidth to host it. The VM is called "Atraxia Blue" and it is the first of three planned virtual machine offerings. This one is intended to be a desktop development environment. The next one will take the place of the software hub used by most development teams to host their ticketing system, Sonar, and central Nexus repository. I'm still researching whether I should also include a central Git repository. This offering will be called "Atraxia Sienna". The last VM I will produce will include open-source office automation software and some open-source back-end business tools that are being developed by the Apache Software Foundation. This one will be called "Atraxia Pointy-Hair". I'm open to a different name though, if someone wants to suggest one.

The final goal in all of this is to have a suite of completely open-source virtual appliances a small team or company can use to stand up a complete business, fully configured out-of-the-box. 

Oh and thanks to my daughter for coming up with a new motto for Atraxia Technologies: "The Cloud, only Fluffier". 

Until next time!

Wednesday, September 21, 2011

Creating a Virtual Appliance with Karaf: Part 1 (Finding the Right Cloud)

Something I like to do to keep my understanding of technologies up-to-date is to identify a technology on the cusp of widespread acceptance, and to figure it out. This is why I decided to start working on Karaf and OSGi a year ago, and it has served me very well.

The current wave is the cloud and virtualization. Like OSGi, it's been around for a while and is now being widely accepted. Many organizations are familiar with "virtual machines", not Java VMs, but the ability to start up an instance of an operating system within another operating system.

Anyhow, as a newcomer to cloud computing, my first small chore was to create a virtual appliance containing Karaf and Cellar. Karaf is a small, lightweight OSGi container that allows you to do enterprise stuff. :-) To accomplish this, I used an older Toshiba Satellite computer as my server, and a newer Dell laptop as my client. The plan is to use the client for development of the virtual appliance.

VMWare: VMware has a great web-centric application called "Go" that small-to-medium businesses can use to download their hypervisor software. Using this, businesses can get access to thousands of virtual appliances and leverage them (for a cost) to handle their business needs. However, there are some drawbacks. Specifically, the installer needed a gigabit network controller; because my slow old machine only had a little ol' NIC, the driver couldn't be found and I was unable to install the hypervisor to test it. Also, remember that successfully installing this app on your system will completely wipe out the previously installed operating system.

SuseStudio: SuseStudio has an awesome web-based interface for creating virtual appliances. The only drawback I saw was the fact that .rpm packages needed to be created for any applications I wanted to install into my VApp. I'm still looking to see if this is the standard. In any case, this was a show-stopper for me.

XenServer: This, like VMWare's offering, was a download that wiped out my hard drive. They were nice enough to let me know that during installation. The good thing is that this is the first cloud system I've been able to install on my test system, so this is the server I'll use to build and install my virtual appliance!

Next time: How to find the XenServer UI!!

Monday, July 18, 2011

Hibernate and OSGi

This is a copy of a blog post I did about a year ago on Java.net. Of my messages on that blog, this was the most viewed and linked. So, I've improved it, and included it in my new blog to help folks looking for help with Hibernate inside of OSGi.

One of the major contributors to the OSGi movement is Peter Kriens, who created an excellent tool called BND. The good folks at Apache Felix then created a Maven plugin that makes quick work of bundling apps, but I'll go over that in a later blog. The reason I bring this up here is that a while ago Peter wrote www.aqute.biz/Code/BndHibernate, which discusses how to create a mega-bundle with all of Hibernate 3's core .jar files and dependent .jar files wrapped up inside of one huge .jar.

This is an excellent approach, but it has a few drawbacks:

  • It requires the creation of a new .jar file which may confuse new developers,

  • It is pretty complex, and

  • The BND tool, while most excellent and wonderful, takes some time to figure out.

What I'm going to do is show you how to leverage Karaf provisioning and SpringSource bundles to do something similar, but without having to create a massive Hibernate mega-bundle. As Peter says, "Hibernate is one of the more complicated open source projects to wrap". To do this, I'll:

  • Talk about an excellent source for bundles, Spring Source,

  • Describe what a features.xml file is,

  • Show you a working features.xml file for Hibernate 3, and finally

  • Show you how to modify one of Karaf's configuration files to automagically start up Hibernate when you start Karaf.

First, I'd like to talk about SpringSource. These folks are doing an outstanding job of creating bundles out of commonly used Java apps, like Hibernate. I would be remiss if I didn't mention the fact that, without their hard work, simply deploying Hibernate using a features.xml file would not be possible. The link points to their bundle repository; you should bookmark it, it's a great resource.

Karaf has a few core concepts that this blog will illustrate to you:

  • OSGi bundles,

  • Karaf Features,

  • Features.xml documents, and

  • the org.apache.felix.karaf.features.cfg file.

When you have an application like Hibernate that has dependencies on other libraries, one way to deploy them is to turn them into OSGi bundles and then create a features.xml file. An OSGi bundle is simply a .jar file with a MANIFEST.MF file containing OSGi goodness. There are a lot of great resources already available on the internet that go into detail about what an OSGi bundle is composed of, so I won't go further into it here.

Deploying a large application composed of bundles using a features.xml file is much less time intensive than deploying them manually. Most of the large bundled open-source programs have created features.xml files for you, Camel being the first one that comes to mind. Unfortunately, at the time of this writing, I was unable to find one for Hibernate, so I made my own using a list of dependencies created by the good folks at Spring Source.

First, start Karaf. Then, to deploy a single OSGi bundle into Karaf, you simply need to invoke the following command (for the uninitiated, "karaf@root> " is the prompt; don't type that):

karaf@root> osgi:install -s (uri of your bundle, eg mvn:myproject/myapp/version)


In practice, the URI can be anything that can be resolved: a Maven URI, a URL, or even some file on your local file system! Once you've typed that, Karaf will print out the ID of the bundle you installed on the command line. Alternatively, you can find out the bundle number of the bundle you installed by typing:


karaf@root> osgi:list | grep (bundle symbolic name)



That will give you the number of the bundle as it is installed in Karaf. Finally, you start the bundle by typing:




karaf@root> osgi:start (bundle Id number)


A bundle can have a number of states: it can be Installed, Active, or Active and Failed. What you want is for your bundles to be Active; usually, anything else is a problem. This is what you'd type to install and start an OSGi bundle from a Maven repository:

karaf@root> osgi:install mvn:org.dom4j/com.springsource.org.dom4j/1.6.1
89
karaf@root> osgi:start 89
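As an aside, the same install command accepts any resolvable URI, not just Maven coordinates. For example (the URL and local file path here are made up for illustration):

```
karaf@root> osgi:install -s mvn:org.dom4j/com.springsource.org.dom4j/1.6.1
karaf@root> osgi:install -s http://repository.example.org/bundles/mybundle-1.0.jar
karaf@root> osgi:install -s file:///home/myname/bundles/mybundle-1.0.jar
```

The -s flag starts the bundle immediately after installing it, saving you the separate osgi:start step.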


Now, imagine if you have 90 bundles in your application, including all of the dependencies. That's a lot of typing! A features.xml file does all that heavy lifting for you. The features.xml file is simply an XML file in which you identify a given feature, and then define the OSGi bundles and other features necessary for that feature to work inside of Karaf.

Karaf will read in the features.xml file, and when you install a feature, it will automatically download each bundle listed from its associated URI, install it as an OSGi bundle, and start it. It will do this for each bundle unless it finds a problem. If it finds a problem, it will stop and uninstall each bundle it started. If a bundle was already installed, Karaf will "refresh" it, and if there's a problem, it will leave it active. This is the features.xml file I wrote for Hibernate; the file is called hibernate-features-3.3.2-GA.xml.


<?xml version="1.0" encoding="UTF-8"?>
<features>
<feature name="hibernate" version="3.3.2.GA">
<bundle>mvn:org.dom4j/com.springsource.org.dom4j/1.6.1</bundle>
<bundle>mvn:org.apache.commons/com.springsource.org.apache.commons.collections/3.2.1</bundle>
<bundle>mvn:org.jboss.javassist/com.springsource.javassist/3.9.0.GA</bundle>
<bundle>mvn:javax.persistence/com.springsource.javax.persistence/1.0.0</bundle>
<bundle>mvn:org.antlr/com.springsource.antlr/2.7.7</bundle>
<bundle>mvn:net.sourceforge.cglib/com.springsource.net.sf.cglib/2.2.0</bundle>
<bundle>mvn:org.apache.commons/com.springsource.org.apache.commons.logging/1.1.1</bundle>
<bundle>mvn:javax.xml.stream/com.springsource.javax.xml.stream/1.0.1</bundle>
<bundle>mvn:org.objectweb.asm/com.springsource.org.objectweb.asm/1.5.3</bundle>
<bundle>mvn:org.objectweb.asm/com.springsource.org.objectweb.asm.attrs/1.5.3</bundle>
<bundle>mvn:org.hibernate/com.springsource.org.hibernate/3.3.2.GA</bundle>
<bundle>mvn:org.hibernate/com.springsource.org.hibernate.annotations/3.3.1.ga</bundle>
<bundle>mvn:org.hibernate/com.springsource.org.hibernate.annotations.common/3.3.0.ga</bundle>
<bundle>mvn:org.hibernate/com.springsource.org.hibernate.ejb/3.3.2.GA</bundle>
</feature>
</features>


Now, copy that bad boy into an editor, and save it in your ${karaf.home}/etc directory. Once that's done, you'll be ready for the next step.

We can make Karaf aware of this features.xml file manually by typing:


karaf@root> features:addUrl file:///home/myname/apache-felix-karaf-2.2.1/etc/hibernate-features-3.3.2-GA.xml


To verify your new features.xml file is installed, type

karaf@root> features:listUrl


If you see your feature listed there, you can start your feature by typing:

karaf@root> features:install hibernate


Give it a few seconds, and when you get the prompt, if you haven't gotten any errors, you can verify that hibernate is installed by typing:


karaf@root> features:list | grep hibernate


Installing your features.xml and starting your feature this way is fun, but what if you've got a ton of features to install? Doing it manually is fine; but let's face it, it's not very sexy... If only there were a way to make Karaf aware of the features.xml on start-up. Well, there is! We just add it to the ${karaf.home}/etc/org.apache.felix.karaf.features.cfg file! This is what it looks like when you download and install Karaf:

################################################################################
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#################################################################################
# Comma separated list of features repositories to register by default
#
featuresRepositories=mvn:org.apache.felix.karaf/apache-felix-karaf/1.6.0/xml/features
#
# Comma separated list of features to install at startup
#
featuresBoot=ssh,management


This file contains two items that are very important for us: a list of features repositories, and a list of the features to install at startup. Including our features.xml file is pretty straightforward; we simply add its URI to the end of the featuresRepositories line so that it looks like this:

featuresRepositories=mvn:org.apache.felix.karaf/apache-felix-karaf/1.6.0/xml/features,file:///home/myname/apache-felix-karaf-1.6.0/etc/hibernate-features-3.3.2-GA.xml


When I first did this, I tried to put a \ character after the first comma and start each features file on its own line, but that created issues, so now I put all of my features repositories on the same line.

When we start up Karaf, we can verify that the features file is installed by typing:


karaf@root> features:listUrl


This produces a list of each features.xml file Karaf was able to successfully read. If you don't see your features.xml file there, go look at it; there's probably an issue. Also, check the ${karaf.home}/data/log/karaf.log file to see if any errors or exceptions were reported.

OK, what if Hibernate is part of a larger set of applications, and you want them all to start up when you start Karaf? Well, that's not too difficult. Simply add your new feature to the featuresBoot line so it looks like this:


featuresBoot=ssh,management,hibernate


If everything works properly, when you start up Karaf, your happy new hibernate feature should be up and running and ready for some abuse!

Please let me know if this helps!

Karaf Logging Overview


A recent post on the Karaf users mailing list focused on the topic of Karaf logging. This topic is easily misunderstood and can take some time to figure out. As is usual with many open-source projects, it is only dryly documented. Hopefully, the notes below help to alleviate this.

First, it's good to remember that in log4j, logging levels can be set at the package level. This can be done in Karaf using the following console command:

karaf@root> log:set LEVEL mypackage.subpackage

This will set the logging for that specific package to whatever level you provide.

In Karaf 3.x, you can also filter for a specific package using the following command:

karaf@root> log:get mypackage.subpackage

There are some caveats with the log console commands. With the exception of log:set, the commands won't affect the logs on the file system. For example, if you type log:clear, you won't have access from the console to any log messages written out prior to executing the log:clear command; however, log:clear won't remove those older log messages from the log directory.

This is because the logging commands don't actually work against your log files. Instead, they look for PaxLoggingEvents inside of Karaf. Any time a log message is generated in Karaf, a PaxLoggingEvent is created; the logging commands look for these and then act on them. So, if you accidentally run log:clear, don't worry, you won't lose any of your log messages. However, if you accidentally set your logging level to DEBUG or TRACE, that change does take effect on what is written, and your logs will fill up very quickly.

Now, the logging console commands are great, but what if you want to break your logging messages out into different log files? For the most part, the logging commands won't help you. If what you want is a file containing only the messages from a given package, you simply create a logger for your package, and then create a RollingFileAppender for that logger. Here's an example of what to place in your ${karaf.home}/etc/org.ops4j.pax.logging.cfg file (LEVEL and PATTERN are placeholders for a log level and a conversion pattern):

# mypackage.subpackage appender
log4j.logger.mypackage.subpackage=LEVEL, subpackage
log4j.appender.subpackage=org.apache.log4j.RollingFileAppender
log4j.appender.subpackage.layout=org.apache.log4j.PatternLayout
log4j.appender.subpackage.layout.ConversionPattern=PATTERN
log4j.appender.subpackage.file=${karaf.data}/log/subpackage.log
log4j.appender.subpackage.append=true
log4j.appender.subpackage.maxFileSize=10MB
log4j.appender.subpackage.maxBackupIndex=10

Now, some caveats. If your "subpackage" has Camel routes that you'd like logged into your subpackage log file, their messages won't appear there. This is because the messages from your Camel routes are generated from the org.apache.camel package, and not by your subpackage.
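If you do want those Camel messages in the same file, one option (a sketch, reusing the "subpackage" appender defined above) is to point the org.apache.camel logger at that appender as well:

```
# Route Camel's log messages into the subpackage appender too
log4j.logger.org.apache.camel=INFO, subpackage
```

Note that this captures everything Camel logs at INFO and above, not just your own routes, so expect some extra chatter in the file.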

Also, all messages are still going to be written into your karaf.log file. So, if you are seeing some strangeness and can't diagnose it in your subpackage.log file, check out your karaf.log.

As always, please let me know if you have any questions.

Intro - Open Source Technologies

Over the years I've been involved with a number of emerging open-source technologies including J2EE, OSGi, and a number of Apache Software Foundation projects. The purpose of this blog is to focus on the nitty-gritty details of these new open-source technologies as I start working with them. Currently, I'm working with the Apache Software Foundation's Karaf, Camel, and Cellar projects, along with helping to guide the emerging open-source management framework, OpenAgile.

The majority of this blog will be composed of posts made to various user groups, intended to help out the communities using the technologies I help develop and implement.