Tag Archives: Code & Technology

How to manage software compatibility

For most software companies, the ability to ship new versions of a product while preserving clients’ data and customizations is a matter of market share. Still, it is often an afterthought, and there seems to be little documentation available on the subject.

This article is the first of a series about managing backward compatibility in enterprise applications. It will not be a definitive guide, but I will try to spot the common areas where incompatibilities can appear and give guidelines for managing them.

This first post is about the project management side of backward compatibility.

One of the most important things to remember about backward compatibility is that it is mostly a matter of process and project management.

In order to find the most appropriate way of solving a compatibility issue you need to talk about it, because the solution can be driven by technical, business or project considerations. Once a solution is accepted, the reason why it has been done that way must be properly advertised (this is of utmost importance when only documentation is provided) and rolled out.

As backward compatibility is a project concern it must be:

  1. Listed in the project risks list
  2. Considered at the project level
  3. Optionally considered at the product level (mostly when it has business impacts)

There are four ways to solve backward incompatibilities, listed from the most desirable to the one that requires the least development work:

  1. Ensure binary compatibility – work is done at the development level.
  2. Provide migration tools – work is split between development and services, but the emphasis is on development.
  3. Provide thorough documentation of the incompatibilities and ways to overcome them – work is split between development and services, but the emphasis is on services.
  4. Reject or postpone the change – work is then at the product management level.
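
At the code level, option 1 often comes down to never removing or changing a published signature: when a method evolves, keep the old signature and make it delegate to the new one. Here is a minimal, hypothetical Java sketch (the class and method names are invented for illustration):

```java
public class ReportService {

    /**
     * Old signature, kept so that already-compiled client code still links.
     * @deprecated use {@link #export(String, boolean)}
     */
    @Deprecated
    public String export(String path) {
        // Delegate to the new method with the old default behaviour
        return export(path, false);
    }

    /** New signature introduced in the current version. */
    public String export(String path, boolean compress) {
        return path + (compress ? ".zip" : ".txt");
    }
}
```

Clients compiled against the old version keep working unchanged, while new clients can opt into the new behaviour.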

As with bugs, backward compatibility cannot be guaranteed at 100%; the best thing a project manager can provide is a good measure of the risk for a given version.

When a new version is released, incompatibilities that have not been foreseen or at least documented must be treated like any other bug and become part of the maintenance process.

In the following posts I will focus on what can make an application backward incompatible and give some guidelines in order to limit those issues and ensure binary compatibility.

See also Backward Compatibility on Wikipedia.

Fun with Java files encoding

Have you ever tried to write Java code with non-ASCII characters? Like having French method names?

The other day I stumbled upon Java classes written in French. Class names like “Opération”, methods names like “getRéalisateur” and embedded log messages and comments all the same.

At first you say “not common but cool” (and you start thinking about writing code in Chinese, because your boss always wondered how we could prevent clients from decompiling our classes without using an obfuscator).

But cool it is not!

Why? Because of encoding!

Here is a quiz: what encoding were those Java files saved in?

  1. UTF-8 (after all this is how strings are encoded in the JVM)
  2. ASCII (come on, everybody writes code in English)
  3. MacRoman (why not?)

Think about it for a while.

The answer is #3, because the Java IDE (Eclipse in this case) saves files in the platform encoding by default, and those classes were created on a Mac.

I actually had no problem reading and compiling them, because I also use Eclipse on a Mac and because the Java compiler also assumes that source files are in the platform encoding.

So what, nothing wrong then? Yeah, except that the integration server runs on Ubuntu and I sometimes work on Windows as well. And on those platforms the default encoding is not MacRoman…

The interesting thing is that it is always like that! Even when you code in plain English, chances are your IDE writes files in the platform encoding. But nobody notices, because as long as you only use characters in the 7-bit ASCII range, they are encoded the same way in almost all encodings.

So what is the solution? Well, it depends on whether you really want to code in French (or in Chinese). My advice is “don’t do that” and externalize localized strings. However, if you really insist, you have two options:

  1. Make the whole production chain encoding-explicit: configure your IDE to use UTF-8 and specify in your build that the Java compiler will deal with UTF-8-encoded files (UTF-8 is the best choice in most cases).
  2. Make sure you only use 7-bit ASCII characters in your files and replace every non-ASCII character with its \uXXXX equivalent (even in comments).

Be aware, however, that #1 is not always possible: you might be using processing tools that do not offer the option to use anything other than the platform encoding.
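
The following small demo illustrates both points: the `\u00e9` escape from option #2 stands for “é”, and the byte representation of the same string differs per encoding (with `javac` you would pass `-encoding UTF-8` to make option #1 explicit):

```java
import java.nio.charset.Charset;

public class EncodingDemo {
    public static void main(String[] args) {
        // The method name from the story, written with its \uXXXX escape
        String name = "getR\u00e9alisateur";

        // Unless told otherwise, most tools silently pick this up
        System.out.println("Platform encoding: " + Charset.defaultCharset());

        // The same string does not have the same byte length in every encoding
        for (String cs : new String[] { "UTF-8", "ISO-8859-1", "US-ASCII" }) {
            byte[] bytes = name.getBytes(Charset.forName(cs));
            System.out.println(cs + ": " + bytes.length + " bytes");
        }
    }
}
```

As long as a file contains only 7-bit ASCII characters, all three encodings produce identical bytes, which is why the problem goes unnoticed in English-only code.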

Have fun with encoding :)


What is a lightweight application server?

A colleague of mine just sent me a link to this article from Jeff Hanson on JavaWorld: Is Tomcat an application server?.

That’s funny, because yesterday during lunch another colleague asked us whether, instead of developing for a JEE container, it would not be better to adopt a lightweight container like Tomcat, using frameworks like Spring.

My answer was, as often, another question: what is a lightweight container?

When frameworks like Spring and Hibernate started, their purpose was to add functionality that did not exist or was badly designed: flows, inversion of control and injection, or entity management. People were complaining about JEE, and some switched to Tomcat plus Spring and Hibernate. Some of them did so because, at that time, they did not need the other JEE services.

Hanson concludes his article with the following:

When attempting to determine the server environment best suited to a particular application or system, it is helpful to break down the requirements of the system and determine which Java EE components will need to be supported.

I could not agree more. However, requirements evolve and people switch to new projects, yet they usually keep using the same frameworks.

The result is that when the need for new services grows (transactions, security, messaging, administration), the pressure on the frameworks grows too, and they add those services to their stack because their clients ask for them and because it is fun to code.

So what is the difference between a JEE server and Tomcat+Spring? I mean, at which point is a lightweight container not lightweight anymore? When you add transactions? And in that case, why not use JEE? Because it is JEE and is said to be heavyweight?

My answer is to always use JEE when it offers the services you need; if it does not, use something else. Today, if I were creating a new application, I would not use Hibernate for entity persistence, I would use EJB3.

Windows Integrated Security and Java Web Applications

In my previous post I explained how to use an Active Directory server to authenticate a user. What I was actually trying to do was make the system authenticate the user with the Windows credentials she already entered when logging onto her workstation.

Some years ago I was working with IIS, and enabling this was only a matter of server configuration for browsers that supported the appropriate protocol (others would fall back to HTTP Basic).
One of the advantages of that protocol is that the user’s password is never sent over the wire. I found out that this protocol is named SPNEGO and is carried by the HTTP Negotiate authentication scheme.

Since the negotiation must occur between the browser and the server, if the server does not natively implement that protocol you cannot use the standard security APIs such as custom registries or JAAS.
The solution is then to disable the server’s standard authentication mechanism and implement a filter that negotiates with the browser using SPNEGO.

In principle it looks easy, but one still needs to implement SPNEGO and bridge with Windows, because it is Windows that ultimately authenticates the user.
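
Schematically, the HTTP side of what such a filter must do boils down to the following decision (a simplified sketch, not the jCIFS code; in reality the Base64 token is decoded and handed to the Windows/GSS layer, possibly over several round-trips):

```java
// Simplified sketch of the SPNEGO handshake decision, not the actual jCIFS filter.
public class NegotiateSketch {

    /** Given the request's Authorization header, decide what to send back. */
    public static String respondTo(String authorization) {
        if (authorization == null) {
            // No credentials yet: challenge the browser with the Negotiate scheme
            return "401 WWW-Authenticate: Negotiate";
        }
        if (authorization.startsWith("Negotiate ")) {
            // The browser sent a Base64 SPNEGO token; the real filter decodes it
            // and passes it to Windows, which performs the authentication
            String token = authorization.substring("Negotiate ".length());
            return "validate:" + token;
        }
        // Unknown scheme: fall back to HTTP Basic (only acceptable over HTTPS)
        return "401 WWW-Authenticate: Basic realm=\"mydomain\"";
    }
}
```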

After some googling I found that the jCIFS library and its extension jCIFS-Ext have the necessary support to do the job. In fact, everything is already there, even the filter: jcifs.http.AuthenticationFilter.

So first, let’s configure the security constraints for our web app. In the web.xml we must have the following:

        <security-constraint>
            <web-resource-collection>
                <web-resource-name>Any resource</web-resource-name>
                <description>Any resource</description>
                <url-pattern>/*</url-pattern>
            </web-resource-collection>
            <user-data-constraint>
                <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
        </security-constraint>

I do not define any role or authentication method because I do not actually want the server to do the authentication by itself. Nevertheless, I do require confidentiality on those URLs.
I do that because I will configure my filter to fall back to HTTP Basic if the browser supports neither SPNEGO nor HTTP Negotiate, and I do not want the password to travel unencrypted over the network.

I suppose this implies that there will be a problem if the application is not served over HTTPS, but since I had correctly configured my server to serve the application over HTTPS, I did not test this behaviour.

The second step is to configure the filter itself. The jCIFS-Ext filter has undocumented parameters, so I had to go through the code to find them. In the snippet below, the parameter names follow the usual jcifs.* property conventions but should be treated as illustrative; check the filter source for the exact names:

        <filter>
            <description>SPNEGO Authentication Filter</description>
            <filter-name>AuthenticationFilter</filter-name>
            <filter-class>jcifs.http.AuthenticationFilter</filter-class>
            <init-param>
                <description>The name of the Windows domain.</description>
                <param-name>jcifs.smb.client.domain</param-name>
                <param-value>MYDOMAIN</param-value>
            </init-param>
            <init-param>
                <description>The address of the Windows
                    domain controller.</description>
                <param-name>jcifs.http.domainController</param-name>
                <param-value>dc.mydomain.com</param-value>
            </init-param>
            <init-param>
                <description>If the browser does not support SPNEGO,
                    fallback to HTTP Negotiate.</description>
                <param-name>jcifs.http.enableNegotiate</param-name>
                <param-value>true</param-value>
            </init-param>
            <init-param>
                <description>If the browser does not support SPNEGO
                    nor HTTP Negotiate, fallback to HTTP Basic
                    but only if the connection is secure.</description>
                <param-name>jcifs.http.enableBasic</param-name>
                <param-value>true</param-value>
            </init-param>
            <init-param>
                <description>Never fallback to HTTP Basic when the
                    connection is insecure.</description>
                <param-name>jcifs.http.insecureBasic</param-name>
                <param-value>false</param-value>
            </init-param>
            <init-param>
                <description>The name of the domain in case of
                    HTTP Basic authentication.
                    Used only for display to the user.</description>
                <param-name>jcifs.http.basicRealm</param-name>
                <param-value>MYDOMAIN</param-value>
            </init-param>
        </filter>

“Et voilà”, now your application should automatically authenticate the user based on her Windows credentials. I said should, because there are some prerequisites:

  • On the browser side, Windows integrated security must be enabled.
  • On the server side, your platform must actually support Kerberos for the filter to work properly.

However, the former is a matter of configuration and the latter is a matter of slightly changing the code of the filter.

Configuring an Internet Explorer Browser

To configure an Internet Explorer browser to use Windows authentication, follow these procedures:

  1. Configure Local Intranet Domains
    1. In Internet Explorer, select Tools > Internet Options.
    2. Select the Security tab.
    3. Select Local intranet and click Sites.
    4. In the Local intranet popup, ensure that the “Include all sites that bypass the proxy server” and “Include all local (intranet) sites not listed in other zones” options are checked.
    5. Click Advanced.
    6. In the Local intranet (Advanced) dialog box, add all relative domain names that will be used for Integrator server instances participating in the SSO configuration (for example, myhost.example.com) and click OK.
  2. Configure Intranet Authentication
    1. Select Tools > Internet Options.
    2. Select the Security tab.
    3. Select Local intranet and click Custom Level…
    4. In the Security Settings dialog box, scroll to the User Authentication section.
    5. Select Automatic logon only in Intranet zone. This option prevents users from having to re-enter logon credentials, which is a key piece to this solution.
    6. Click OK.
  3. Verify the Proxy Settings (If you have a proxy server enabled)
    1. Select Tools > Internet Options.
    2. Select the Connections tab and click LAN Settings.
    3. Verify that the proxy server address and port number are correct.
    4. Click Advanced.
    5. In the Proxy Settings dialog box, ensure that all desired domain names are entered in the Exceptions field.
    6. Click OK to close the Proxy Settings dialog box.
  4. Set Integrated Authentication for Internet Explorer 6.0 (In addition to the previous settings, one additional setting is required if you are running Internet Explorer 6.0)
    1. In Internet Explorer, select Tools > Internet Options.
    2. Select the Advanced tab.
    3. Scroll to the Security section.
    4. Make sure that Enable Integrated Windows Authentication option is checked and click OK.
    5. If this option was not checked, restart the computer.

Despite all of this configuration, I encountered some cases where this did not work at all in IE, and I was unable to spot the problem, so you might fall into this category too. The symptoms are that the negotiation process takes place, but the browser does not answer the last challenge and no error message is displayed at all.

Configuring a Mozilla Firefox Browser

To configure a Mozilla Firefox browser to use Windows authentication, follow these procedures in Mozilla Firefox:

  1. Type about:config in the address bar of the browser and press return (a big list of properties should be displayed in the browser window).
  2. Type “network” in the filter box.
  3. Double-click the network.automatic-ntlm-auth.trusted-uris property and enter “mydomain.com” (if there is already a value, add a comma to separate the entries).

The value for this preference is a comma-separated list of URI fragments. This sample string shows the three legal kinds of fragments: https://, http://www.example.com, test.com

The first fragment says, “Trust all URLs with an https scheme.” The second fragment (a full URL) says, “Trust this particular web site.” The third fragment is interpreted to mean http://anything.test.com, so any web site that is a subdomain of test.com, including test.com itself, will also be trusted.

I did not encounter any problem with Firefox which is what I call a paradox…

Changing the filter to use NTLM instead of Kerberos

Actually, the change must occur not in the filter but in the class jcifs.spnego.Authentication, which comes with jCIFS-Ext. This class tries to determine whether the system supports Kerberos, but it does so through introspection, looking for the Java classes that enable Kerberos support.
However, those classes can be present without the underlying system actually supporting Kerberos (which is the case where I work).

Fortunately, modifying this behaviour is not too complicated: just change line 57 of this class from

private static final boolean KERBEROS_SUPPORTED = getKerberosSupport();

to the following:

private static final boolean KERBEROS_SUPPORTED = false;

And then the filter will use NTLM instead of Kerberos.

I hope the next posts will be shorter :-P

ActiveDirectory authentication in Java

Recently I needed to authenticate the users of an intranet web application against the Active Directory server that authenticates them on their Windows desktops. Here is some code I used to achieve this.

I went through several steps, the first of which was creating a custom user registry to interface my web server with the AD server.

I was using Jetty as the web container, so I had to develop an implementation of Jetty’s UserRealm, but things should be much the same in any other web container or application.
Mostly you need to do two things:

  1. Authenticate the user’s credentials
  2. Retrieve the user’s roles

1. Authenticating the user

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

Hashtable<String, String> env = new Hashtable<String, String>();
env.put(Context.PROVIDER_URL, "ldap://mydomain.com:389/");
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.SECURITY_PRINCIPAL, "mydomain\\" + username);
env.put(Context.SECURITY_CREDENTIALS, password);

// Creating the context performs the bind against the AD server
DirContext context = new InitialDirContext(env);

Once you have created the initial context, the user has been authenticated by the AD server and everything is fine (otherwise, creating the initial context throws a NamingException).

However, since you are going to send the user’s credentials over the network, you may want some confidence in the protocol used to negotiate the connection. The javax.security.sasl.qop and related properties can be set to ensure that the protocol is safe.

This code prepends the domain name to the username, so the user does not have to enter domain\username as her credentials but only her username.
You may want to force her to enter the domain, or do some autodetection… as you like.

2. Retrieving the user’s roles

import java.text.MessageFormat;
import java.util.HashSet;
import java.util.Set;
import javax.naming.NamingEnumeration;
import javax.naming.directory.*;

Set<String> memberOf = new HashSet<String>();

SearchControls searchCtls = new SearchControls();
searchCtls.setReturningAttributes(new String[] { "memberOf" });

String searchFilter = MessageFormat.format("(sAMAccountName={0})", new Object[] { username });

// Search for the user's entry
NamingEnumeration<SearchResult> answer = context.search("ou=Managed Objects,dc=mydomain,dc=com", searchFilter, searchCtls);

// Loop through the search results
if (answer.hasMoreElements()) {
    SearchResult sr = answer.next();

    Attributes attrs = sr.getAttributes();
    if (attrs != null) {
        Attribute memberOfAttr = attrs.get("memberOf");

        if (memberOfAttr != null) {
            NamingEnumeration<?> rolesEnum = memberOfAttr.getAll();

            while (rolesEnum.hasMoreElements()) {
                // Save the group's distinguished name into the role set
                memberOf.add(rolesEnum.nextElement().toString());
            }
        }
    }
}

The roles that are returned are distinguished names, like cn=Joe Smith,ou=Sales,dc=mydomain,dc=com, so mapping them to simpler names may be another issue. Fortunately, I did not need these roles for my application.
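
If you do need simpler names, the javax.naming.ldap.LdapName class can parse such a distinguished name and give you its leftmost RDN (the class and method names below are just illustrative):

```java
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class RoleNames {

    /** Extract the leftmost RDN value (e.g. the CN) from a distinguished name. */
    public static String simpleName(String dn) throws Exception {
        LdapName name = new LdapName(dn);
        // LdapName indexes RDNs right to left, so the CN is the last one
        Rdn leftmost = name.getRdn(name.size() - 1);
        return leftmost.getValue().toString();
    }
}
```

For example, simpleName("cn=Joe Smith,ou=Sales,dc=mydomain,dc=com") yields "Joe Smith".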

The second step for me was to actually enable single sign-on (authentication without asking for credentials).

I quickly discovered that the previous code was totally useless for that purpose, but I will keep that for a later post ;-)