Tuesday, December 14, 2010

Dehydration store purging

An interesting post on purging the dehydration store:

http://www.oracle.com/technetwork/middleware/bpel/learnmore/bpeldehydrationstorepurgestrategies-192217.pdf

The same document also gives a simple explanation of the high watermark:
http://www.oracle.com/technetwork/middleware/bpel/learnmore/bpeldehydrationstorepurgestrategies-192217.pdf

Wednesday, November 24, 2010

Validating Date in java

I recently ran into a peculiar problem while working with dates in Java. I needed to validate an input date provided in the format dd-MM-yyyy. This seems like a very simple requirement.

Input: a date String
Output: a boolean indicating whether the provided date is in the given format

// requires java.text.DateFormat, java.text.SimpleDateFormat and java.text.ParseException
private static final String DATE_FORMAT = "dd-MM-yyyy";

public boolean validateDate(String date) {
    try {
        DateFormat df = new SimpleDateFormat(DATE_FORMAT);
        df.parse(date);
        return true;
    } catch (ParseException e) {
        e.printStackTrace();
        return false;
    }
}

The following results were observed.

Input          Actual Output   Expected Output
01-01-9999     true            true
333-01-9999    true            false

As the API documentation of DateFormat states:
By default, parsing is lenient: If the input is not in the form used by this object's format method but can still be parsed as a date, then the parse succeeds. Clients may insist on strict adherence to the format by calling setLenient(false).

This explains the unexpected behaviour when parsing the input '333-01-9999'. We now change the code to call setLenient(false):
private static final String DATE_FORMAT = "dd-MM-yyyy";

public boolean validateDate(String date) {
    try {
        DateFormat df = new SimpleDateFormat(DATE_FORMAT);
        df.setLenient(false);
        df.parse(date);
        return true;
    } catch (ParseException e) {
        e.printStackTrace();
        return false;
    }
}

The following results were observed.

Input          Actual Output   Expected Output
01-01-9999     true            true
333-01-9999    false           false
3a-01-9999     false           false
02-03-1@@@     true            false
02-03-18       true            false

And boom. The parsing of '02-03-1@@@' and '02-03-18' again provides us with an unexpected result. This seems to be a bug in the validation.

So I searched the Sun bug database. But strangely enough the bug has been rejected. The rationale provided is completely and utterly bull*&(#.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5055568

The comments by the Bug submitter have been ignored.

Suppose the input to the method is supposed to be '01-01-2010' and the user provides '01-01-2o10'; the input is then parsed into the date 01-01-0002, even with lenient parsing disabled. This is after reading the setLenient documentation, which states:
Specify whether or not date/time parsing is to be lenient. With lenient parsing, the parser may use heuristics to interpret inputs that do not precisely match this object's format. With strict parsing, inputs must match this object's format.
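
For example, the following minimal sketch (the class name is mine, just for illustration) reproduces this behaviour:

import java.text.DateFormat;
import java.text.SimpleDateFormat;

public class StrictParseDemo {
    public static void main(String[] args) throws Exception {
        DateFormat df = new SimpleDateFormat("dd-MM-yyyy");
        df.setLenient(false);
        // Parsing stops at the first character that does not fit the pattern ('o'),
        // so only "2" is consumed for the year and the parse still succeeds,
        // yielding the date 01-01-0002.
        System.out.println(df.parse("01-01-2o10"));
    }
}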
So the solution to this problem is to match the string against a regular expression before validating it with the parse method:
// additionally requires java.util.regex.Pattern
private static final String DATE_FORMAT = "dd-MM-yyyy";
private static final String REGEX_DATE_FORMAT =
        "^(0[1-9]|[12][0-9]|3[01])[- ](0[1-9]|1[012])[- ]\\d\\d\\d\\d$";

public boolean validateDate(String date) {
    try {
        if (!Pattern.matches(REGEX_DATE_FORMAT, date)) {
            return false;
        }

        DateFormat df = new SimpleDateFormat(DATE_FORMAT);
        System.out.println(df.parse(date));
        return true;
    } catch (ParseException e) {
        e.printStackTrace();
        return false;
    }
}

Input          Actual Output   Expected Output
01-01-9999     true            true
333-01-9999    false           false
3a-01-9999     false           false
02-03-1@@@     false           false
02-03-18       false           false
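
These results can be reproduced with a small driver along the following lines (assuming the method above lives in a class called DateValidator; the class names here are mine, not from the original code):

public class DateValidatorDemo {
    public static void main(String[] args) {
        DateValidator validator = new DateValidator(); // hypothetical class holding validateDate()
        String[] inputs = {"01-01-9999", "333-01-9999", "3a-01-9999", "02-03-1@@@", "02-03-18"};
        for (String input : inputs) {
            System.out.println(input + " -> " + validator.validateDate(input));
        }
    }
}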

So we solve the problem.
Hopefully some day Java will include a simpler way to validate dates.

Thursday, July 8, 2010

Extending schema with redefine and Java

Requirement

Consider a system with an external interface (say, a web service exposed to external clients); for this example we only look at the XSD and assume it is imported into a WSDL and used there. Internally, the system needs to maintain an enhanced information model that adds several attributes to the provided complex types. The internal information model is not a new one, it is just an enhanced model, so any additions or changes to the external XSD also require changes to the internal XSD.





Possible solutions

  1. Use a new XSD for the internal interface, copying all the attributes from the external one. The disadvantage of this approach is the maintenance of the XSDs: any change requires changes in both places. The advantage is that the internal and external XSDs become disconnected and can be maintained separately. We do not need that advantage in our case, so it would only be a maintenance overhead.

  2. Import the external XSD and reuse the elements wherever possible by extending them or using them as is. This is not a very clean approach, since a lot of repeated code would result, but less than with option 1.

  3. Redefine the types that need to be enhanced and use them for further processing. A much cleaner approach is expected.


The detailed difference between the XML Schema extension and redefine constructs is explained below:


The external XSD and a sample XML are shown below:

Fig. external xsd



Fig. external sample xml

The external interface contains an element Request with two elements, name and child. The child element is of a complex type with an element t1. The internal interface needs to enhance the child complex type to add two more elements, val1 and val2.
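
As a reference, a minimal sketch of an external schema along these lines could look as follows (only the names Request, name, child and t1 come from the description above; the namespace, file name and type names are assumptions):

<!-- external.xsd (file name, namespace and type name are assumptions) -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/external"
           xmlns:tns="http://example.com/external"
           elementFormDefault="qualified">

  <xs:complexType name="ChildType">
    <xs:sequence>
      <xs:element name="t1" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>

  <xs:element name="Request">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="name" type="xs:string"/>
        <xs:element name="child" type="tns:ChildType"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

</xs:schema>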


When the extension construct is used, the schema and the example XML look like this:

Fig. internal xsd using extends construct



Fig. internal sample xml

As seen above, when the child type is extended the parent element also has to be re-declared to use the new type. This leads to a very complicated XSD, and the reusability of the defined types becomes very limited. This problem can be overcome if the redefine construct is used, as shown below.



Fig. internal xsd using redefine and extends construct


Fig. sample xml for the above xsd
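
As a rough sketch (the schemaLocation, namespace and type names are assumptions, matching the external sketch above), the redefine-based internal schema could look like this:

<!-- internal.xsd, redefining ChildType from the external schema -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/external"
           xmlns:tns="http://example.com/external"
           elementFormDefault="qualified">

  <xs:redefine schemaLocation="external.xsd">
    <xs:complexType name="ChildType">
      <xs:complexContent>
        <xs:extension base="tns:ChildType">
          <xs:sequence>
            <xs:element name="val1" type="xs:string"/>
            <xs:element name="val2" type="xs:string"/>
          </xs:sequence>
        </xs:extension>
      </xs:complexContent>
    </xs:complexType>
  </xs:redefine>

</xs:schema>

Because the existing Request element already refers to the child type, it automatically picks up val1 and val2, which is what makes this approach cleaner than plain extension.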



Tool and API support:

JAX-RPC and redefine

According to the JAX-RPC 1.1 specifications (http://test.javaranch.com/ulf/jaxrpc-1_1-fr-spec.pdf):-

The following XML Schema features are not required to be supported and WSDL to Java mapping tools are allowed to reject documents that use them: xsd:redefine, xsd:notation, substitution groups.

JDeveloper does not generate a proxy for a web service whose WSDL contains the redefine element.

JAXB 1 and Redefine

The XML Schema redefine construct is not supported by JAXB 1; if such an unsupported construct is included in a schema, an error is generated when you try to generate Java classes from it with xjc. (ref: http://onjava.com/pub/a/onjava/2004/12/15/jaxb.html)

JAXB 2 and Redefine

XJC for JAXB 2 successfully generates the classes for a schema containing the redefine construct.

SOAP-UI and redefine

SOAP UI does not support the use of redefine elements. I have raised the following bug
http://sourceforge.net/tracker/index.php?func=detail&aid=3019440&group_id=136013&atid=737763

XML SPY and redefine

I am using XML Spy 2008. This successfully generates a sample SOAP message from a WSDL containing the redefine element.

WS-Interoperability Basic Profile 1.1

The redefine element is compliant with this profile and does not cause any errors.



Conclusion

The redefine construct provides a flexible way of extending schema definitions. Tool support for the construct is limited but improving. The redefine schema construct cannot be used with JAX-RPC, but it works with JAXB 2, so any web services programming model that uses JAXB 2, such as JAX-WS or Spring Web Services, can be used.

Wednesday, July 7, 2010

Anatomy of a signed SOAP message

I will explain a WS-Security signed SOAP message, signed using an X.509 certificate.

The sample signed message is:


The following illustrates the anatomy of the message
1. SignedInfo

The SignedInfo element describes the signed content of the message.



1.1. CanonicalizationMethod
The element CanonicalizationMethod is used to describe the canonicalization algorithm used on the xml for the generation of the digest.
1.2. SignatureMethod
The element SignatureMethod is used to describe the algorithm used for the generation of the SignatureValue from the output of the canonicalization algorithm.
1.3. Reference
The optional URI attribute for Reference element identifies the data object that was signed.

In the above case the body is being signed, thus the URI attribute refers to the SOAP body.
The Transform Algorithm indicates the transformation algorithm. I still need to understand why we need a duplicate of the canonicalization algorithm.
The DigestMethod Algorithm indicates the algorithm used to generate the digest value, and DigestValue contains the computed digest value.
2. SignatureValue
SignatureValue contains the signature value, which is essentially the encrypted digest value. This value is the output of the SignatureMethod algorithm indicated above.

3. BinarySecurityToken
The signed data contains a core bare name reference (as defined by the XPointer specification [XPointer]) to the element that contains the referenced security token, or a core reference to the external data source containing the security token.
In this example the BinarySecurityToken contains the Base64-encoded X.509 certificate (and thus the public key) that can be used for verification.

The signed content was created using a Microsoft .pfx file containing X.509 certificates. The certificate can be regenerated from the BinarySecurityToken element.
Sample code to generate a .cer file from the BinarySecurityToken:

import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;

// from tag BinarySecurityToken
private static final String b64Str = "MIIECTCCAvGgAwIBAgICLy4wDQYJKoZIhvcNAQEFBQAwdjELMAkGA1UEBhMCQUUxETAPBgNVBAoTCEV0aXNhbGF0MSQwIgYDVQQLExtFdGlzYWxhdCBlQnVzaW5lc3MgU2VydmljZXMxLjAsBgNVBAMTJUNvbXRydXN0IFVzZXIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDkwMjEyMDQ0ODMxWhcNMTEwMjEyMDQ0ODMxWjCBqjELMAkGA1UEBhMCQUUxDjAMBgNVBAcTBUR1YmFpMQ8wDQYDVQQKEwZFVENEQzIxHjAcBgNVBAsTFURlbHV4ZSBJbnRsIENhcmdvIExMQzENMAsGA1UELhMENTgwODEnMCUGCSqGSIb3DQEJARYYZGVsdXhlcGFAZW1pcmF0ZXMubmV0LmFlMSIwIAYDVQQDExlBYmR1bCBTYXR0YXIgQWJkdWwgUmF3b29mMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDfge/HNFnO/SEVdvNPTQ0ziLEMDz/EKwSWWCBK94yU58y75AsTArK/QG4+wHSALd9HDW+wxaBLd7ZLz4mjMgAJgGEEWIP9XYMvSTO2li7SI9fKANQ3/uoXTgJU0N/CLyLBZkW2Z7Vb6bsdhN6HGzsFcd5SoDmxYDh+z26RenbtUQIDAQABo4HvMIHsMAkGA1UdEwQCMAAwIwYDVR0RBBwwGoEYZGVsdXhlcGFAZW1pcmF0ZXMubmV0LmFlMEwGA1UdIARFMEMwQQYLKwYBBAGyXQIBAQAwMjAwBggrBgEFBQcCARYkaHR0cDovL2NvbXRydXN0LmV0aXNhbGF0LmFlL2Nwcy5odG1sMA4GA1UdDwEB/wQEAwIE8DAfBgNVHSMEGDAWgBTOP/R2v2Tj4qbCev148AwSjFT9fjA7BgNVHR8ENDAyMDCgLqAshipodHRwOi8vY29tdHJ1c3QuZXRpc2FsYXQuYWUvY3JsL3VzZXJjYS5jcmwwDQYJKoZIhvcNAQEFBQADggEBAKO44b564tmzCLCZhlE5gQkGzQF1tgW954nJMcfthO89C9X3QuLbBoNLrrKeQoqumKYDMiODF5Rkn1pRlgJlGSWKOkjPwF+wB4PlHjd/BijNDnyv2VJUWw7gqE6uffu2E0c4kEfun2leNY03Qtcvu9FmUL7JDj0seibEhOXzy63r+o5rf5x5/vER8vUz1MBypHea3EWbCSJ2yAEw2fJ3Syq/vuihr4yP3VOb7KBeVXL353J5pdpql4UjAwlGAdmiihAAQMCKicE6qDZ2i4jC4bS+lSDv2wE/CiTCj1DN1eEyQnajuTWvFYq88ZAHtru7q5CrsMcHMa8WXENMrUzlKdM=";

public static int decode(char c) {
    if (c >= 'A' && c <= 'Z')
        return c - 65;            // 'A'..'Z' -> 0..25
    else if (c >= 'a' && c <= 'z')
        return c - 97 + 26;       // 'a'..'z' -> 26..51
    else if (c >= '0' && c <= '9')
        return c - 48 + 26 + 26;  // '0'..'9' -> 52..61
    else
        switch (c) {
        case '+':
            return 62;
        case '/':
            return 63;
        case '=':
            return 0;             // padding
        default:
            throw new RuntimeException(
                    new StringBuffer("unexpected code: ").append(c).toString());
        }
}

public static byte[] decode(String s) {

    int i = 0;
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    int len = s.length();

    while (true) {
        // skip whitespace between the Base64 characters
        while (i < len && s.charAt(i) <= ' ')
            i++;

        if (i == len)
            break;

        // decode four characters into a 24-bit group
        int tri = (decode(s.charAt(i)) << 18)
                + (decode(s.charAt(i + 1)) << 12)
                + (decode(s.charAt(i + 2)) << 6)
                + (decode(s.charAt(i + 3)));

        bos.write((tri >> 16) & 255);
        if (s.charAt(i + 2) == '=')
            break;
        bos.write((tri >> 8) & 255);
        if (s.charAt(i + 3) == '=')
            break;
        bos.write(tri & 255);

        i += 4;
    }
    return bos.toByteArray();
}

public static void main(String[] args) throws Exception {
    byte[] back = decode(b64Str);
    OutputStream out = new FileOutputStream("aa.cer");
    out.write(back);
    // perform your exception handling
    out.close();
}
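
Once aa.cer has been written out, the standard JDK certificate APIs can be used to check that the token decoded correctly; a minimal sketch (the file name is the one used above) is:

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class CertCheck {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        FileInputStream in = new FileInputStream("aa.cer");
        // generateCertificate() parses the DER-encoded certificate written out above
        X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
        in.close();
        System.out.println(cert.getSubjectX500Principal());
        System.out.println(cert.getPublicKey());
    }
}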


4. KeyInfo


In order to ensure a consistent processing model across all the token types supported by WSS: SOAP Message Security, the SecurityTokenReference element is used for all references to X.509 token types in signature or encryption elements that comply with this profile.
Here the KeyInfo element contains a SecurityTokenReference element that specifies the token data by means of an X.509 SubjectKeyIdentifier reference.
Reference:-
http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0.pdf

Normalization and Canonicalization of XML

Normalized XML is XML with its insignificant white space stripped or collapsed.
Multiple normalization behaviours can be applied by using the following schema types:
  • xsd:normalizedString (http://www.w3.org/TR/xmlschema11-2/#normalizedString)
  • xsd:token (http://www.w3.org/TR/xmlschema11-2/#token)
These types do not restrict the use of white space; rather, they instruct the processor to normalize the spaces (according to their respective rules).
e.g. xsd:token collapses runs of white space into a single space, so for an element defined in the XSD as
<xs:element name="tkn" type="xs:token"/>

the value can be provided as:-
<tkn>toks        en     </tkn>

This will not result in a schema validation error, but the parser should treat it like a string with the following value:
<tkn>toks en</tkn>


Canonical form of an XML
The canonical form of an XML document is the physical representation of the document produced by the following method:
  • The document is encoded in UTF-8
  • Line breaks normalized to #xA on input, before parsing
  • Attribute values are normalized, as if by a validating processor
  • Character and parsed entity references are replaced
  • CDATA sections are replaced with their character content
  • The XML declaration and document type declaration (DTD) are removed
  • Empty elements are converted to start-end tag pairs
  • Whitespace outside of the document element and within start and end tags is normalized
  • All whitespace in character content is retained (excluding characters removed during line feed normalization)
  • Attribute value delimiters are set to quotation marks (double quotes)
  • Special characters in attribute values and character content are replaced by character references
  • Superfluous namespace declarations are removed from each element
  • Default attributes are added to each element
  • Lexicographic order is imposed on the namespace declarations and attributes of each element
The rules for the canonical form of XML are very detailed but do not cover the type-based normalization described above; the two forms supplement each other.

The canonical form is very useful when generating a hash over XML and is used when generating the digest and signature values in the WS-Security message shown above.
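
For example, with the Apache Santuario (xmlsec) library on the classpath, canonicalization can be sketched roughly as follows (this uses the older 1.x API of that library; treat it as an illustration rather than a tested recipe):

import org.apache.xml.security.c14n.Canonicalizer;

public class C14nDemo {
    public static void main(String[] args) throws Exception {
        org.apache.xml.security.Init.init();
        String xml = "<a  z=\"2\" b='1'><b/></a>";
        Canonicalizer c14n = Canonicalizer.getInstance(Canonicalizer.ALGO_ID_C14N_OMIT_COMMENTS);
        // Canonical form: attributes sorted, quotes normalized, empty element expanded
        byte[] canonical = c14n.canonicalize(xml.getBytes("UTF-8"));
        System.out.println(new String(canonical, "UTF-8"));
    }
}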

Sunday, May 16, 2010

Executable jar file from command line

Executing an executable jar (a jar whose manifest contains the name of the main class) from the command line.

Simple?

Say the jar name is exec.jar and is located in c:\java.
Go to the directory and run:
c:\
cd c:\java
java -jar exec.jar

This executes fine.
But why do I need to go into the directory where the jar exists? Say I am at the root of drive c:, I can always do:
java -classpath c:\java\exec.jar -jar exec.jar
Unable to access jarfile exec.jar
No, it does not work.

Let's try set classpath=.;c:\java\exec.jar;
and then execute
java -jar exec.jar
Still the same error.

Now let's try
java -jar c:\java\exec.jar
It works fine...

But why is the classpath not working?
The reason is provided in the java tool documentation from the Sun site.

"-jar Execute a program encapsulated in a JAR file.
.....
......
When you use this option, the JAR file is the source of all user classes, and other user class path settings are ignored. "


The classpath option simply does not work with the java -jar option.

ref: java tool documentation

Thus the options, if you want to execute a jar file, are:
  • Go to the directory of the jar and execute java -jar exec.jar
  • From any other directory, execute java -jar c:\java\exec.jar
  • If you want to read it from the classpath, use the workaround of retrieving the name of the main class from the manifest and executing it directly (a rough code sketch follows below): java -classpath c:\java\exec.jar Main
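
A rough sketch of that last workaround in code (the jar path is the one from the examples above; the class name is mine):

import java.util.jar.JarFile;

public class MainClassFinder {
    public static void main(String[] args) throws Exception {
        JarFile jar = new JarFile("c:\\java\\exec.jar");
        // Read the Main-Class attribute from META-INF/MANIFEST.MF
        String mainClass = jar.getManifest().getMainAttributes().getValue("Main-Class");
        jar.close();
        System.out.println("java -classpath c:\\java\\exec.jar " + mainClass);
    }
}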

Tuesday, April 20, 2010

Generating documentation with APT and Maven

Purpose:
Demonstrate the use of APT (the Annotation Processing Tool) to generate a traceability matrix for requirements and test cases.

This is not a fully implemented feature; rather, it demonstrates the concepts and the setup.
Basic understanding of APT and Maven is assumed.

Steps:
1. Set up the project 'TestAnnotation' for the annotation processor: change the pom, write the annotation and processor classes
2. Update the project 'Test' for annotation processing
3. Execute APT

Step 1:

1.1 Create a new Maven project 'TestAnnotation' and add a pom dependency on tools.jar.

<dependency>
  <groupId>com.sun</groupId>
  <artifactId>tools</artifactId>
  <version>1.4.2</version>
  <scope>system</scope>
  <systemPath>${java.home}/../lib/tools.jar</systemPath>
</dependency>


1.2 Write the annotation class. This class needs to be shared with the Test project. This can be done by copying the source into the project or by having a common dependency between the projects.

package com.sash;

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Documented
@Retention(RetentionPolicy.SOURCE)
@Target({ElementType.TYPE,
         ElementType.METHOD,
         ElementType.CONSTRUCTOR,
         ElementType.PACKAGE,
         ElementType.FIELD})
public @interface TestCaseDetails {
    String code();
    String description();
    String useCase();
}
1.3 Create Annotation Processing Factory

The factory is responsible for creating processors for one or more annotation types. The factory is said to support these types.
package com.sash;

import java.util.Collection;
import java.util.Collections;
import java.util.Set;

import com.sun.mirror.apt.AnnotationProcessor;
import com.sun.mirror.apt.AnnotationProcessorEnvironment;
import com.sun.mirror.apt.AnnotationProcessorFactory;
import com.sun.mirror.apt.AnnotationProcessors;
import com.sun.mirror.declaration.AnnotationTypeDeclaration;

public class TestCaseDetailsProcessorFactory implements
        AnnotationProcessorFactory {

    public AnnotationProcessor getProcessorFor(
            Set<AnnotationTypeDeclaration> declarations,
            AnnotationProcessorEnvironment env) {
        AnnotationProcessor result;
        if (declarations.isEmpty()) {
            result = AnnotationProcessors.NO_OP;
        } else {
            result = new TestCaseDetailsProcessor(env);
        }
        return result;
    }

    public Collection<String> supportedAnnotationTypes() {
        return Collections.singletonList(TestCaseDetails.class.getName());
    }

    public Collection<String> supportedOptions() {
        return Collections.emptyList();
    }
}

1.4 Create the annotation processor

package com.sash;

import java.util.Collection;
import java.util.Map;

import com.sun.mirror.apt.AnnotationProcessor;
import com.sun.mirror.apt.AnnotationProcessorEnvironment;
import com.sun.mirror.declaration.AnnotationMirror;
import com.sun.mirror.declaration.AnnotationTypeDeclaration;
import com.sun.mirror.declaration.AnnotationTypeElementDeclaration;
import com.sun.mirror.declaration.AnnotationValue;
import com.sun.mirror.declaration.Declaration;
import com.sun.mirror.util.SourcePosition;

public class TestCaseDetailsProcessor implements AnnotationProcessor {

    private AnnotationProcessorEnvironment environment;

    private AnnotationTypeDeclaration annotationTypeDeclaration;

    public TestCaseDetailsProcessor(AnnotationProcessorEnvironment env) {
        System.out.println("TestCaseDetailsProcessor.TestCaseDetailsProcessor()");
        environment = env;
        annotationTypeDeclaration = (AnnotationTypeDeclaration) environment
                .getTypeDeclaration(TestCaseDetails.class.getName());
    }

    public void process() {
        System.out.println("TestCaseDetailsProcessor.process()");
        Collection<Declaration> declarations = environment
                .getDeclarationsAnnotatedWith(annotationTypeDeclaration);
        for (Declaration declaration : declarations) {
            processNoteAnnotations(declaration);
        }
    }

    private void processNoteAnnotations(Declaration declaration) {
        System.out.println("TestCaseDetailsProcessor.processNoteAnnotations()");
        Collection<AnnotationMirror> annotations = declaration
                .getAnnotationMirrors();
        for (AnnotationMirror mirror : annotations) {
            if (mirror.getAnnotationType().getDeclaration().equals(
                    annotationTypeDeclaration)) {

                SourcePosition position = mirror.getPosition();
                Map<AnnotationTypeElementDeclaration, AnnotationValue> values = mirror
                        .getElementValues();

                System.out.println("Declaration: " + declaration.toString());
                System.out.println("Position: " + position);
                System.out.println("Values:");
                for (Map.Entry<AnnotationTypeElementDeclaration, AnnotationValue> entry : values
                        .entrySet()) {
                    AnnotationTypeElementDeclaration elemDecl = entry.getKey();
                    AnnotationValue value = entry.getValue();
                    System.out.println(" " + elemDecl + "=" + value);
                }
            }
        }
    }
}


2. Update the pom of project 'Test' to add the apt-maven-plugin for annotation processing.


<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>apt-maven-plugin</artifactId>
  <version>1.0-alpha-3</version>
</plugin>
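
The plugin also needs to be told which annotation processor factory to use; based on the configuring-a-factory example referenced at the end of this post, this is presumably done with a configuration block along these lines (the <factory> parameter name is my assumption, so verify it against the plugin documentation):

<configuration>
  <factory>com.sash.TestCaseDetailsProcessorFactory</factory>
</configuration>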


Update the test case to use the annotations.

package com.test; import com.sash.TestCaseDetails;
public class Testing1 {
@Test
@TestCaseDetails(code="1", description="2", useCase="3")
public void testing() {
//actual code goes here
}
}


3. Execute the Maven command 'mvn apt:test-process'. The following output can be seen:


Values:
code()="1"
description()="2"
useCase()="3"


The example only prints to the console; in a real setup this information would be written to a file in the expected format, for example along the lines sketched below.
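
As an illustration of that last step, the processor could append one row per annotated test method to a simple CSV file (the file name and column layout are my assumptions):

import java.io.FileWriter;

public class TraceabilityWriter {
    // Appends one row of the traceability matrix: test case code, description, use case, test method.
    public static void append(String code, String description, String useCase,
                              String testMethod) throws Exception {
        FileWriter out = new FileWriter("traceability-matrix.csv", true);
        out.write(code + "," + description + "," + useCase + "," + testMethod + "\n");
        out.close();
    }
}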

The apt command can also be executed without Maven, using the following command.


apt \TestAnnotation-1.jar -factory com.sash.TestCaseDetailsProcessorFactory \Testing1.java


Note: The code provided has been written by using articles and documentation available. This is only an aggregation of information available online.

Reference and further reading:
http://www.oracle.com/technology/pub/articles/marx-jse6.html
http://www.javalobby.org/java/forums/t17876.html
http://mojo.codehaus.org/apt-maven-plugin/examples/configuring-a-factory.html