Java Debugging in Stackato with Harbor
by John Wetherill, April 2, 2013

One thing that continually amazes me about Stackato is how it makes difficult things easy. A good example is centralized log aggregation, which you would expect to be hard in the cloud, but isn't, thanks to Stackato's easy-to-use yet sophisticated logging capabilities.

Another example is Java debugging. Like logging, enabling debugging in Stackato is trivially easy: just add a single flag to the “push” command:

stackato push -d 

Debugging a server-based Java application requires communicating with it over a separate debugging port, and Stackato's recently added Harbor service is what makes this possible.


Harbor is essentially Stackato's network traffic cop, directing TCP and UDP traffic to the multitude of listening applications running in the Stackato cloud. Harbor sits on the edge of the Stackato network, maintains the port mappings, and forwards traffic to the appropriate application. In function it is similar to the Stackato Router, which also sits on the edge of the network and directs web traffic to application instances. But it differs in a number of ways.

First, Harbor is implemented as a service (like RabbitMQ and MongoDB) while the Router is a core Stackato role. Practically, this means Harbor is managed with its own set of commands, and it must be accounted for separately when building a highly available (HA) deployment.

Another difference is that the Router understands only TCP traffic speaking the HTTP, HTTPS, or WebSocket protocols. Harbor, in contrast, can route any network traffic, whether TCP or UDP, using any protocol, or no protocol at all. Harbor is happy to route raw binary data, for example.

A third difference is how each decides which traffic goes to which application. The Router examines the incoming URL and uses it to look up the receiving application, whereas Harbor has no URL to inspect (non-HTTP/S traffic typically doesn't include one).

Instead, Harbor routes traffic by port number alone. When a Harbor service is created, two ports are allocated: an external port that outside clients use to communicate with the Stackato-hosted app, and an internal port that the app itself binds to. When incoming traffic arrives on an external port Harbor is listening on, Harbor directs it to the corresponding internal port.

Requesting a Harbor Service

A Harbor service can be requested at deploy-time in stackato.yml:

   services:
       port-of-call: harbor

This creates a Harbor service named “port-of-call”, which has the external and internal ports associated with it as described above.
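A Harbor service can also be provisioned directly from the client, without redeploying. As a sketch (the exact argument order may vary between client versions, so check "stackato help create-service"):

    stackato create-service harbor port-of-call

Either way, the service can then be bound to an application like any other Stackato service.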

The relevant port numbers are available to the app via environment variables, and externally through the REST API. They can also be queried from the command-line client and via the web console.
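From inside the app, picking up the internal port is a one-liner. A minimal sketch in Java — note that the environment variable name below is an assumption for illustration; run "stackato env" on your deployed app to see the actual keys Harbor sets:

```java
public class HarborPort {
    // STACKATO_HARBOR_INTERNAL_PORT is a hypothetical key --
    // check "stackato env" output for the real variable name.
    static int internalPort() {
        String p = System.getenv("STACKATO_HARBOR_INTERNAL_PORT");
        return (p != null) ? Integer.parseInt(p) : 8081; // fallback for local runs
    }

    public static void main(String[] args) {
        System.out.println("Binding to internal port " + internalPort());
    }
}
```

The app should always bind to the internal port it is told about rather than a hard-coded one, since Harbor allocates the ports at service-creation time.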

Java Debugging with Harbor

Java developers who use an IDE (which, ideally, should be all of them) have immediate access to sophisticated built-in debuggers which communicate with the target application through an additional port. IntelliJ IDEA is no exception, and is the IDE I will be using here. The steps for Eclipse and NetBeans are similar.

Enabling all of this with Stackato is as simple as adding "-d" to the "stackato push" command. This creates a new "debug" Harbor service and tells Stackato to enable debugging. For Java apps, this reconfigures Tomcat to start with JPDA (Java Platform Debugger Architecture) enabled, and binds the JPDA listener to the internal Harbor debug port.
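Under the hood, enabling JPDA amounts to starting the JVM with the standard debug agent. Stackato's exact mechanism may differ in detail, but a rough sketch of the equivalent Tomcat startup looks like this (the port number is just a placeholder for the internal Harbor debug port):

    # Tomcat's catalina.sh reads these when started in jpda mode
    JPDA_TRANSPORT=dt_socket
    JPDA_ADDRESS=8081
    catalina.sh jpda start
    # ...which is equivalent to launching the JVM with:
    #   -agentlib:jdwp=transport=dt_socket,address=8081,server=y,suspend=n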

Try this at Home

To see this in action, clone the hello-java application from the Stackato-Apps repo. Build it with Maven and push it with the -d flag as follows:

mvn package
stackato push -n -d 

In the output of this command, you will find a message indicating the port your IDE's debugger should attach to:

Debugging now enabled on port 30579

Take note of the debug port for the next step.
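If you prefer the command line to an IDE, the same port also works with jdb, the debugger that ships with the JDK. A sketch, assuming a microcloud at the hypothetical hostname api.stackato.local and the port reported above:

    jdb -connect com.sun.jdi.SocketAttach:hostname=api.stackato.local,port=30579

From there you can set breakpoints with "stop at", inspect variables, and step through code, just as in the IDE.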

In IntelliJ IDEA, open the project and click Edit Configurations… in the debug pulldown.

Click the +, choose Remote, then specify the port number from above and the hostname for your microcloud. If you are deploying to a cluster, see the note below.

Next, set a breakpoint in the code, click the green debug beetle, and visit your app in a browser.

Voilà! The breakpoint is hit, and the IDE pops forward with the breakpoint line highlighted. Could it possibly be any simpler?

Debugging in a Cluster

Harbor load-balances network traffic between the application instances running in your cloud. This means that if you set a breakpoint and then visit the app in a browser, the breakpoint will not trigger in your debugger if a different app instance (not the one your debugger is attached to) handles the request.

This is where the complexity of the cloud environment becomes apparent. But with Stackato, it doesn't matter. Just scale the application down to a single instance, or reload the browser a few times until a request lands on the app instance the debugger is attached to.

Another point to consider is that Harbor can run on any node in the cluster, provided it is accessible from the outside. In this case, the host given to the Java debugger must be the host Harbor is running on, which is not necessarily the same as the Stackato core node. To determine the debug port and host in a cluster, use the "stackato service appname-debug" command.


You might expect Java debugging to be difficult in a cloud environment, with multiple apps and app instances coming and going across servers deployed in different data centers. It is actually quite the opposite: debugging remote applications with Stackato and Harbor is about as simple as can be.

For more information, see the JPDA debugging section of the Stackato Java docs.

For more on Harbor, see the Harbor Port Service documentation.

About the Author

John, ActiveState's Technology/PaaS Evangelist, spent much of his career designing and building software at a handful of startups, at Sun Microsystems, NeXT Inc., and in the smart grid and energy space. His biggest passion is for tools, languages, processes, or systems that improve developer productivity and quality of life. He now spends his time immersed in cloud technologies, focusing on PaaS, microservices, and containerization.