Non object-oriented objects

I always feel uncomfortable when I encounter instances of certain classes in the codebase I’m working with. Perhaps uncomfortable is not the right word here; the whole situation is a huge paradox: objects that fail to behave in an object-oriented way. Of course I’m talking about classes like Tuple and Triplet (or whatever name they may have in different libraries). Continue reading “Non object-oriented objects”

The service provider interface pattern

Today I’d like to talk about a not so well known design pattern. It is actually so little known that people sometimes use it without even realizing that they are implementing a pattern. (I was no exception: until my friend Andrea drew my attention to the fact that this thing had a name, I had used it several times without even noticing it – I thought it was simply some flexible design.)

What

Suppose you have a group of objects that do similar pieces of work – they validate data, convert between data structures, transform objects, etc. – but with slightly different input data. The question is: how do you select one of these objects at runtime (as a response to some external event) in a way that requires as little extra work as possible when a new validator/converter/translator gets added to the system?

Let’s consider a very simple example: given a JMS client that can accept different types of messages from a queue, create a method that is capable of dispatching the messages to different message handlers. A very naive solution would be a bunch of if-else or switch-case statements. Such an implementation would look something like this:

    @Override
    public void onMessage(Message message) {
        Result result = null;
        if (message instanceof TextMessage) {
            result = textMessageHandler.handle((TextMessage)message);
        } else if (message instanceof MapMessage) {
            result = mapMessageHandler.handle((MapMessage)message);
        } else if (message instanceof ObjectMessage) {
           result = objectMessageHandler.handle((ObjectMessage)message);
        }
        ...
    }

This would certainly work, but:

  1. it’s ugly
  2. every message type needs a new else-if statement
  3. every message type needs to be referenced by the class that contains the onMessage method
  4. it’s procedural rather than object-oriented
  5. and ugly, again

How

Well, if it’s ugly, then it’s time to refactor. It’s easy to see that the message handlers all carry out a very similar task: they interpret one incoming message. Let’s apply object-oriented best practices, then, and have all the message interpreters implement a common interface; say, Handler.

public interface Handler {
    Result handle(Message message);
}

public class TextMessageHandler implements Handler {
    @Override
    public Result handle(Message message) {
        //...
    }
    //...
}

public class MapMessageHandler implements Handler {
    @Override
    public Result handle(Message message) {
        //...
    }
    //...
}

Now in the message listener class we can have a cleaner implementation for the onMessage method:

@Override
public void onMessage(Message message) {
    Result result = correctHandler.handle(message); //bear with me
    //...
}

It looks somewhat better.  Two questions are, however, still open. One: how do we get a hold of the correctHandler, and two: how do we know if a handler is correct for a specific message type?

As we said earlier, we like object-oriented concepts. One such OO idiom is encapsulation: data and the methods working on that data are put together in a class. Why, then, couldn’t a specific handler tell whether it can handle a message or not? With a tiny change to the interface, we can achieve just that. Consider this:

public interface Handler {
    Result handle(Message message);
    boolean canHandle(Message message);
}

public class TextMessageHandler implements Handler {
    @Override
    public Result handle(Message message) {
        //...
    }
    //...
    @Override
    public boolean canHandle(Message message) {
        return message instanceof TextMessage;
    }
}

And now, with the smarter handlers, it’s easy to answer the other open question, about how to obtain the correct message handler: have an entity, a supplier, that can select the correct handler. Let it be a kind of repository for all the available handlers and let it pick the correct implementation (based on the canHandle() method):

public class Handlers {
    List<Handler> handlers;

    public Handlers(List<Handler> handlers) {
        this.handlers = handlers;
    }

    public Result handle(Message message) {
        for (Handler handler : handlers) {
            if (handler.canHandle(message)) {
                return handler.handle(message);
            }
        }
        return new EmptyResult();
    }
}

So we can change the onMessage() method to this:

@Override
public void onMessage(Message message) {
    Result result = handlerRepository.handle(message);
    //...
}

And that’s pretty much it. Now the onMessage method does not have to know about any specific message handler – it only knows about the repository. Should a new message handler type come into play, there is only one place you have to change – the initialization logic for the list of Handlers – and you are all set. If you happen to use Spring, you don’t even have to change that; you just create the new handler class and let Spring take care of the rest (remember that Spring is able to provide you with a list of all the beans that implement a particular interface). Write one single class and you are done.
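
If you go the Spring route, the wiring could look roughly like the sketch below. This is only an illustration, assuming the classes defined above are plain Spring beans; the configuration class and the bean names (MessagingConfig, handlerRepository) are mine and not part of the original example.

import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MessagingConfig {

    @Bean
    public Handler textMessageHandler() {
        return new TextMessageHandler();
    }

    @Bean
    public Handler mapMessageHandler() {
        return new MapMessageHandler();
    }

    @Bean
    public Handlers handlerRepository(List<Handler> allHandlers) {
        // Spring injects every bean implementing Handler into this list,
        // so a brand new handler only needs one new @Bean (or @Component) declaration
        return new Handlers(allHandlers);
    }
}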

Why

In the example above, we’ve used the SPI pattern to decouple business objects from event handling logic. The onMessage() method can now deal with accepting and acknowledging messages, while it knows nothing about the object that will eventually process the message. The bean only sees the repository object and has no knowledge of the Handler interface or any of its implementers.

Things that can change (the concrete handlers) have been separated from the dispatching logic, so any change in message handling (the removal of a message handler, for instance) will not affect message reception or dispatching. It is now also easier to test this component, as you don’t have to engage in extensive mocking.

The single responsibility principle is satisfied: the message listener deals with listening for messages, the dispatcher routes the messages to the correct handler, while a handler’s only responsibility is to correctly interpret the incoming message.

Aspects for good, decoupled design

Basics

We’ve all learnt that good, object-oriented design is based on some kind of layering of concepts; each layer has a very well defined scope, like database access or presentation. A layer is not allowed to use any other layer except the one directly below it. A typical such architecture would look something like the one in the following diagram (drawn with creately.com):

[Figure: Typical layered architecture]

Every class to be written is perfectly contained in one and only one layer. For example, the class UserDataConverter would probably belong to the “Data Conversion” layer; it has nothing to do with data access, nor with displaying information.

The situation, however, gets completely different when dealing with requirements that cut across the whole system – transactions or logging, for example. These concerns are spread throughout an application. They are not tied to a specific layer, but rather to all of the layers. If you want transaction management, it will possibly impact all the layers from top to bottom; you can’t do it at the DAO level alone. A transaction may start with a click on the GUI and propagate all the way down to the database – should anything go wrong at any level, the transaction rolls back.

Let’s take an example: we want to perform some profiling, to measure the running time of some of our methods. These methods are not specific to any layer; they can be anywhere in our system, from the data access layer up to presentation. If we represented this feature on the diagram above, it could look something like this:

[Figure: Profiling feature breaks the boundaries of layers]

We cannot say that profiling should apply to business logic alone, as we may just as well want to know how much time it takes to convert data between different formats. Now, a very bad solution to this problem would be getting the current time at the beginning and at the end of every method we want profiled, and doing something with the result. This solution would create some awful spaghetti code with a lot of duplication, not to mention the tedious work one would face for a large, legacy system with hundreds of methods to be profiled.

A nicer option would be creating some kind of utility class that you could notify when a method starts executing and when it eventually finishes. The utility would take care of computing the elapsed time and handling the resulting numbers. However, this would still be pretty tightly coupled, with two invocations per method:

public void longRunningMethod() {
  profiler.methodStarted();
  // ...
  profiler.methodFinished();
}

It may seem that we’ve now gotten rid of most of the code duplication, but those start/stop calls are still spread all around our code base. How do we clean up our code, then?

Aspects to the rescue

Thank God, there is a way to handle this situation (and of course not only this one; it’s also suitable for error handling, self-* operations, customized access management, and almost any feature that spans layers): using aspects (as we will see, aspects are also called interceptors in some frameworks).

In order to understand how aspects actually work, let’s begin with examining method calls. A usual method call looks like this (obvious diagram to follow):

[Figure: Ordinary method call]

With the usage of aspects, this changes slightly. The object on which the method is called will be wrapped in a proxy object, to which all the method calls will be routed. Whenever a method call is made on the proxy object, the call is ‘hijacked’, and the proxy is given a chance to perform different operations before, after or around (before+after) the method call. For this to work, you will need some kind of container (either Spring or JavaEE can be a good choice) or a third party library, as you cannot do this transparently with Java right out of the box.

As you can see on the diagram above, the container directs the call of otherMethod to the proxy around object2, and performs some operations before and/or after the original method call. This way object2.otherMethod can always stay the same; the only things that change are the before and after methods, but as you are going to see, they are present in only one place within the whole system. The class that implements the logic for before and after is called the Aspect.
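
To make the ‘hijacking’ part more tangible, here is a minimal, hand-rolled sketch using JDK dynamic proxies – roughly the mechanism that containers automate and hide from you. The Greeter interface and the class names are made up purely for illustration:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    // Made-up business interface and implementation, just for the demo
    interface Greeter {
        String greet(String name);
    }

    static class SimpleGreeter implements Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    public static void main(String[] args) {
        final Greeter target = new SimpleGreeter();

        // The proxy 'hijacks' every call: the before/after logic wraps the real invocation
        Greeter proxy = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                new InvocationHandler() {
                    public Object invoke(Object p, Method method, Object[] arguments) throws Throwable {
                        System.out.println("before " + method.getName()); // 'before' logic
                        Object result = method.invoke(target, arguments); // the original method call
                        System.out.println("after " + method.getName());  // 'after' logic
                        return result;
                    }
                });

        proxy.greet("world"); // prints the before/after lines around the real call
    }
}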

With such an aspect, the code for measuring elapsed times for methods can be totally decoupled from the rest of the application. We can come up with an aspect to hold logic that will be run around (remember, around = before+after) the method call:

public class LatencyTracerAspect {
    //...
    public void interceptAround() {
      //code for getting start time
      // run original, business method
      //code for measuring elapsed time
    }
}

All that remains is to make the container notice our aspect and tell it which methods to intercept. This is, nevertheless, different for each and every container – see below for examples.

Simple, elegant, 100% reusable and totally independent of the business logic (single responsibility principle for the win).

 

Ups and downs of aspects

As mentioned earlier, aspects are very useful when some application requirement would break the boundaries of the different layers. In these cases it’s best to organize these features in such a way that they are easily accessible everywhere, without code duplication – through aspects. However, it’s a very bad practice to extract pieces of business logic into aspects. A poorly implemented aspect with surprising side effects can be the best place for bugs to hide. It is perfectly OK to implement an aspect that checks whether a certain XML file is well formed before starting to work with it, but do not send a reject message back to the client from the aspect if the validation fails. Throw an unchecked exception instead, and let the business logic handle functional requirements like sending reply messages.
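
To illustrate the advice above, such a validating aspect could be sketched roughly as follows (written with the AspectJ-style annotations that appear in the Spring example below; the pointcut expression, the package name and the MalformedXmlException class are made up for the sake of the example):

import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilderFactory;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class XmlWellFormednessAspect {

    // Hypothetical pointcut: intercept handler methods that take the raw XML payload as their single String argument
    @Around("execution(* blog.handlers.*.process(..)) && args(xml)")
    public Object checkWellFormedness(ProceedingJoinPoint context, String xml) throws Throwable {
        try {
            DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes()));
        } catch (Exception e) {
            // Signal the problem with an unchecked exception;
            // the business logic decides whether and how to reply to the client
            throw new MalformedXmlException("Payload is not well-formed XML", e);
        }
        return context.proceed();
    }
}

class MalformedXmlException extends RuntimeException {
    MalformedXmlException(String message, Throwable cause) {
        super(message, cause);
    }
}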

Usually, with before or around aspects you can manipulate the parameters passed in to a method call; with after and around aspects you can modify the return values of the intercepted methods. Even though it’s technically possible, most of the time it is not a good practice to do so. Again, figuring out that the system is throwing exceptions just because some aspect keeps changing the input parameters or the return values is, well, at least tedious and frustrating.

Don’t overuse aspects. Although they often come in handy, they can make debugging a nightmare. Not to mention that normally aspects will not run with the unit test suite (as there is no container present to interpret them), so watch out for logic hiding in them. Test them as well as your regular business objects.

 

Examples

As we know the theory by now, let’s put this knowledge into practice. The code examples to follow are not meant to make you an aspects/interceptors expert, just to illustrate how the things mentioned above can be achieved with different containers. In the following, we are going to implement the profiling aspect in three different ways: with Spring, with EJB interceptors and with CDI interceptors.

Spring

In Spring, each method to be intercepted must be contained in a Spring bean (note that only managed beans will be intercepted, not objects created with the new keyword), and the same is true for the caller. You cannot intercept a method call on a non-Spring-bean.

We are going to create an annotation, through which we will tell the container that we want our method intercepted. For this, we create a usual, nothing-fancy annotation:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface Profiled {

}

That’s it, the marker annotation is done. Now we shall create an aspect and bind our annotation to it. No rocket science here, all we have to do is type in the following code:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class ProfilingAspect {

	@Around("@annotation(Profiled)")
	public Object intercept(ProceedingJoinPoint context) {
		Object retval = null;

		long start = System.nanoTime();

		try {
			retval = context.proceed();
		} catch (Throwable t) {
			// Some clever exception handling
		}

		long end = System.nanoTime();
		System.out.println("------------------------------");
		System.out.println("Profiling result: " + (end - start) + " nanos.");
		System.out.println("------------------------------");
		return retval;
	}
}

Notice the @Aspect annotation at the top of the class. Don’t fall into a common trap here: in your config you still have to define this aspect as a bean; the @Aspect annotation alone is not enough. Also, in the config (I am assuming Java class-based configuration) you have to enable AspectJ-style proxying, using the @EnableAspectJAutoProxy annotation. Finally, don’t forget to call proceed() on the context object, so the program flow can advance.
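
A minimal Java-based configuration could look something like the sketch below (the class name ProfilingConfig is mine; it simply registers the aspect as a bean and turns on AspectJ-style auto-proxying):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

@Configuration
@EnableAspectJAutoProxy
public class ProfilingConfig {

	// The @Aspect annotation alone is not enough; the aspect has to be a bean as well
	@Bean
	public ProfilingAspect profilingAspect() {
		return new ProfilingAspect();
	}
}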

Also note the @Around("@annotation(Profiled)") annotation. The expression (a.k.a. the pointcut) tells Spring to intercept all methods annotated with @Profiled. So, let’s create a method that can be intercepted (probably the easiest part):

public class Intercepted {
	@Profiled
	public void sayIt(String name) {
		System.out.println("We're happy, " + name);
	}
}

Pretty simple, huh? From now on, every method annotated with @Profiled will be intercepted by the container, provided that the annotation is placed on a managed bean’s public method.
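
Putting it all together, a bootstrap sketch might look like this (it assumes the ProfilingConfig class from the configuration sketch above and registers Intercepted as a bean; the ProfilingDemo class name is made up):

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class ProfilingDemo {

	public static void main(String[] args) {
		// Register the configuration (aspect + auto-proxying) and the bean to be intercepted
		AnnotationConfigApplicationContext context =
				new AnnotationConfigApplicationContext(ProfilingConfig.class, Intercepted.class);

		// The call goes through the generated proxy, so the @Around advice runs around it
		context.getBean(Intercepted.class).sayIt("reader");

		context.close();
	}
}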

Note:

  • The container will not intercept private method calls.
  • Don’t forget to put aspectjweaver.jar on your classpath.
  • You can play around with the context object; you can even change what gets passed to and returned from a method.
  • Method calls from within the same bean are not intercepted.
  • You can read further on this topic here

JavaEE

EJBs

(In the enterprise world, aspects are called interceptors.) In this case, the implementation of the profiler is somewhat easier. As this is JavaEE, you don’t have to place any extra jars on the classpath; your container has everything needed – right out of the box.

We don’t have to create an annotation here, as the binding is solved differently with EJBs. Our interceptor class is somewhat similar to the previous one, and looks like this:

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class Interceptor {

	@AroundInvoke
	public Object intercept(InvocationContext context) {
		//Logic is the same as before...
	}
}

The binding in the EJB code is done using the @Interceptors annotation, applied either at the EJB class level (in which case all public methods will be intercepted) or at method level (in which case only the annotated methods will be intercepted). Our intercepted bean/method will be nothing more than:

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

@Stateless
// @Interceptors(Interceptor.class) could have been declared here too
public class Intercepted {

	@Interceptors(Interceptor.class)
	public void sayIt(String name) {
		System.out.println("We're happy, " + name);
	}
}

Note:

  • Internal method calls (within the same bean) are not intercepted, no matter how the methods are annotated.
  • Private methods (as a consequence of the bullet above) are not intercepted.
  • You are free to have several interceptors for the same bean/method. These interceptors have to be enumerated like this: @Interceptors({Interceptor1.class, Interceptor2.class})

CDI interceptors

In the case of CDI interceptors we need an intercepted bean, an interceptor, and an interceptor binding – which, just like in our Spring example, is a custom annotation.

With the help of interceptor bindings, we can tell the container which methods should be intercepted. The interceptor binding is not much different from our first @Profiled annotation; we only need to add @InterceptorBinding to it:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.interceptor.InterceptorBinding;

@InterceptorBinding
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
public @interface Profiled {
}

An interceptor will be annotated with @Interceptor as well as with the interceptor binding annotation – which plays a role somewhat similar to a Spring pointcut. This will bind our @Profiled annotation to the interceptor logic:

import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@Interceptor
@Profiled
public class CdiInterceptor {

	@AroundInvoke
	public Object around(InvocationContext context) throws Exception {
		//... Logic same as before ...
	}
}

The intercepted method is, again, just like in the Spring example – all it needs is the @Profiled annotation:

	@Profiled
	public void sayIt() {
		System.out.println("CDI bean for the win");
	}

Finally, we have to enable CDI and the interceptors too. By default, CDI is turned off; you can switch it on by placing an empty beans.xml file in your META-INF directory. This is not enough in our case, though: CDI interceptors are also disabled by default. In order to enable them, we make our new interceptor class visible to the CDI container using the same beans.xml:

<beans xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="
                         http://java.sun.com/xml/ns/javaee
                         http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">

	<interceptors>
		<class>blog.intercept.CdiInterceptor</class>
	</interceptors>
</beans>

To read more in depth about CDI interceptors, go to: http://docs.jboss.org/weld/reference/1.0.0/en-US/html/interceptors.html.

Note:

  • Internal method calls (within the same bean) are not intercepted, no matter how the methods are annotated.
  • Private methods (as a consequence of the bullet above) are not intercepted.

 

Conclusions

Aspects can be a very efficient and clean means of decoupling business logic from omnipresent features. This way the business methods can deal with business requirements, while aspects add decorator-like functionality for handling common, non-layer-specific tasks.

Aspects are easy to get started with, as chances are high that larger systems are already based on either Spring or JavaEE. Both handle aspects (interceptors) very well, without requiring complex code to enable them.

Overusing aspects can have a bad impact on the system, as it can become hard to track which feature is handled by which aspect. Also, take care not to implement features carrying business value as aspects.

Thoughts on clean code

Back in high school I was taught Pascal. It was a really nice way of learning concepts, basics, programming structures in general. As time went by, I discovered Delphi. Now, Delphi (for those who don’t know) is an object oriented language, based on a Pascal-like syntax. And I was like “wow, I can write applications for Windows, but I’m still doing Pascal”. Continue reading “Thoughts on clean code”

Converting business objects into value objects

In my current project there are two types of data holder objects defined: business objects, used by the server side of the application, and value objects (aka data transfer objects), which are very very very (etc.) similar to business objects in terms of properties. These value objects are used to pass data to the client. Nothing new so far; this actually happens in pretty many projects.

Continue reading “Converting business objects into value objects”

Builder vs. Large arg-list constructor

Today I was browsing the code of one of our modules. I wasn’t looking for anything in particular, I just wanted to familiarize myself with that part of the system. However, I found something strange; the code was full of object constructions like:

private MyObject myObject = new MyObject(null, null, "", "", importantData, importantData2);

You might say: yes, this class does way too many things if it takes this many arguments. Believe it or not, it actually made sense to keep those pieces of data together. The real problem was passing around null and dummy values across the system. And the situation was even worse in the case of unit tests.

Let’s see a toy example – a.k.a. code that’s not confidential. We consider the class:

import java.util.Date;

public class User {

	private String userName;
	private String password;
	private String nickName;
	private Date lastOnline;
	private Address address;
	private Balance balance;

	public User(String userName, String password, String nickName,
			Date lastOnline, Address address, Balance balance) {
		this.userName = userName;
		this.password = password;
		this.nickName = nickName;
		this.lastOnline = lastOnline;
		this.address = address;
		this.balance = balance;
	}
}

For the sake of the example, let’s suppose we have object constructions like:

User user = new User("tamasgyorfi", "lotsOfAsterisks", "", null, null, null);

User anotherUser = new User("", "", "chaster", new Date(), null, null);

Well, this is 100% functional, but not that clean. Null values and empty strings make these lines sort of “noisy”. How do we make this cleaner? I think in such situations it is worth giving the Builder design pattern a try, as it makes object construction more straightforward.

This is how I usually do it, in six easy steps: I

  1. create a new class and name it UserBuilder (I like to add the word Builder so others know what they’re facing)
  2. copy all the fields of the object under construction into the builder
  3. (as far as I know) Eclipse is not able to generate the methods I need, so I have it generate all the setters for the fields
  4. with find/replace I replace all the words “set” with “with”. Also replace the return types from void to UserBuilder
  5. have all the methods return this
  6. create a method named “build”. It does the actual construction and returns the object constructed.

After step 6, we have something like this:

import java.util.Date;

public class UserBuilder {

	private String userName;
	private String password;
	private String nickName;
	private Date lastOnline;
	private Address address;
	private Balance balance;

	public UserBuilder withUserName(String userName) {
		this.userName = userName;
		return this;
	}

	public UserBuilder withPassword(String password) {
		this.password = password;
		return this;
	}

	public UserBuilder withNickName(String nickName) {
		this.nickName = nickName;
		return this;
	}

	public UserBuilder withLastOnline(Date lastOnline) {
		this.lastOnline = lastOnline;
		return this;
	}

	public UserBuilder withAddress(Address address) {
		this.address = address;
		return this;
	}

	public UserBuilder withBalance(Balance balance) {
		this.balance = balance;
		return this;
	}

	public User build() {
		return new User(userName, password, nickName, lastOnline, address,
				balance);
	}

}

So we can replace the object constructions mentioned above with these calls:

UserBuilder builder = new UserBuilder();
User user = builder.withUserName("tamasgyorfi")
				.withPassword("lotsOfSterisks")
				.build();

and similarly:

UserBuilder builder = new UserBuilder();
User anotherUser = builder.withLastOnline(new Date())
				.withNickName("chaster")
				.build();

Cleaner, more straightforward: the don’t-care values are left out and only the relevant arguments are mentioned. And last but not least, object construction is moved to one place and is not spread across multiple classes.

Note: the ideas above are only valid when your objects are not required to be immutable.

JUnit’s parameterized test cases

I like JUnit. I really do. It has really cool features that make my life easier when it comes to testing. I like how I can document my production code with a set of tests. I also like the clear separation of the test cases; that each of them has a unique, descriptive name; that they all represent a well understandable use case.

However, I hate, yes it’s true, I HATE this particular feature JUnit offers, namely parameterized test cases. Continue reading “JUnit’s parameterized test cases”