
Spring Modules - Modules, add-ons and integration tools for Spring
Reference Documentation
Rob Harrop
Steven Devijver
Costin Leau
Jan Machacek
Thierry Templier
Thomas Risberg
Alex Ruiz
Uri Boness
Gurwinder Singh
Sergio Bossa
Omar Irbouh
Juergen Hoeller
Dave Syer
Version 0.8
Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.
Table of Contents
Preface
1. Introduction
2. Ant Integration
   2.1. Introduction
   2.2. Setting up Spring Configuration
      2.2.1. Overriding the BeanFactory locations
      2.2.2. Example BeanFactory Configuration
   2.3. Exposing a Spring Bean to Ant
   2.4. Evaluating an Expression on a Spring Bean in Ant
   2.5. Dependency Injection into a Custom Ant Task
   2.6. Configuring Ant
      2.6.1. Definitions
      2.6.2. Classpath
      2.6.3. Example
3. Caching
   3.1. Introduction
   3.2. Uses
   3.3. Configuration
   3.4. Cache Provider
      3.4.1. EHCache
      3.4.2. JBoss Cache
      3.4.3. Java Caching System (JCS)
      3.4.4. OSCache
   3.5. Declarative Caching Services
      3.5.1. Caching Advice
      3.5.2. Caching Models
      3.5.3. Caching Listeners
      3.5.4. Key Generator
      3.5.5. Flushing Advice
      3.5.6. Flushing Models
   3.6. Strategies for Declarative Caching Services
      3.6.1. CacheProxyFactoryBean
      3.6.2. Source-level Metadata-driven Autoproxy
         3.6.2.1. Jakarta Commons-Attributes
         3.6.2.2. JDK 1.5+ Annotations
      3.6.3. BeanNameAutoProxyCreator
   3.7. Programmatic Use
4. Commons Support
   4.1. Introduction
   4.2. Commons Configuration integration
5. db4o
   5.1. Introduction
   5.2. Configuration
      5.2.1. Configuring an ObjectContainer
      5.2.2. Configuring an ObjectServer
      5.2.3. Using db4o's Configuration object
   5.3. Inversion of Control: Template and Callback
   5.4. Transaction Management
   5.5. Outside the Spring container
6. Flux
   6.1. Introduction
   6.2. Exposing Flux as a Spring Bean
   6.3. Getting Help
7. Hivemind Integration
   7.1. Introduction
   7.2. Configure an Hivemind Registry
   7.3. Exposing HiveMind Services as Spring Beans
8. JavaSpaces
   8.1. Introduction
   8.2. JavaSpaces configuration
      8.2.1. Using specialized classes
         8.2.1.1. Blitz
         8.2.1.2. GigaSpaces
      8.2.2. Using a generic Jini service
   8.3. Inversion of Control: JavaSpaceTemplate and JavaSpaceCallback
   8.4. Transaction Management
   8.5. Remoting: JavaSpaceInterceptor
   8.6. GigaSpaces Spring Integration
      8.6.1. Simplifying Business Logic Abstraction
      8.6.2. Online Wiki Documentation
GigaSpaces Spring Integration 1.2.2
   2.1. Introduction – Give Spring Some Space
      2.1.1. Simplify business logic abstraction using Spring/POJO support
   2.2. Integration Components
      2.2.1. Common Services
      2.2.2. Data-Grid
      2.2.3. Messaging Grid
      2.2.4. Parallel Processing – Business logic Remote invocation
      2.2.5. Service Grid
   2.3. Integration Implementation Classes
      2.3.1. org.springmodules.javaspaces.gigaspaces.GigaSpacesFactoryBean
      2.3.2. org.springmodules.javaspaces.gigaspaces.GigaSpacesDaoSupport
      2.3.3. org.springmodules.javaspaces.JavaSpaceTemplate
      2.3.4. org.springmodules.javaspaces.gigaspaces.GigaSpacesTemplate
      2.3.5. org.springmodules.javaspaces.gigaspaces.GigaSpacesLocalTransactionManagerFactoryBean
   2.4. Spring Configuration Files
      2.4.1. Application Context xml
      2.4.2. The Dao xml
      2.4.3. transaction.xml
      2.4.4. Pojo Primary Key setting
   2.5. 3rd party packages
   2.6. References
9. jBPM 3.1.x
   9.1. Introduction
   9.2. Configuration
      9.2.1. LocalJbpmConfigurationFactoryBean
      9.2.2. Inversion of Control: JbpmTemplate and JbpmCallback
      9.2.3. ProcessDefinitionFactoryBean
      9.2.4. Outside Spring container
   9.3. Accessing Spring beans from jBPM actions
10. Java Content Repository (JSR-170)
   10.1. Introduction
   10.2. JSR standard support
      10.2.1. SessionFactory
         10.2.1.1. Namespace registration
         10.2.1.2. Event Listeners
         10.2.1.3. NodeTypeDefinition registration
      10.2.2. Inversion of Control: JcrTemplate and JcrCallback
         10.2.2.1. Implementing Spring-based DAOs without callbacks
      10.2.3. RepositoryFactoryBean
         10.2.3.1. Jackrabbit
         10.2.3.2. Jackrabbit RMI support
         10.2.3.3. Jeceira
   10.3. Extensions support
      10.3.1. Transaction Manager
         10.3.1.1. LocalTransactionManager
         10.3.1.2. JTA transactions
         10.3.1.3. SessionHolderProviderManager and SessionHolderProvider
   10.4. Mapping support
   10.5. Working with JSR-170 products
      10.5.1. Alfresco
      10.5.2. Magnolia
11. JSR94
   11.1. Introduction
   11.2. JSR94 support
      11.2.1. Provider
      11.2.2. Administration
      11.2.3. Execution
      11.2.4. Definition of a ruleset
      11.2.5. Configure the JSR94 template
      11.2.6. Using the JSR94 template
   11.3. Configuration with different engines
      11.3.1. JRules
      11.3.2. Jess
      11.3.3. Drools
12. Lucene
   12.1. Introduction
   12.2. Indexing
      12.2.1. Root entities
      12.2.2. Configuration
         12.2.2.1. Configuring directories
         12.2.2.2. Configuring a SimpleIndexFactory
         12.2.2.3. Dedicated namespace
      12.2.3. Document type handling
      12.2.4. Template approach
         12.2.4.1. Template configuration and getting
         12.2.4.2. Basic operations
         12.2.4.3. Usage of InputStreams with templates
         12.2.4.4. Usage of the DocumentHandler support with templates
         12.2.4.5. Work with root entities
         12.2.4.6. Template and used resources
      12.2.5. Mass indexing approach
         12.2.5.1. Indexing directories
         12.2.5.2. Indexing databases
   12.3. Search
      12.3.1. Root entities
      12.3.2. Configuration
         12.3.2.1. Configuring a SimpleSearcherFactory
         12.3.2.2. Configuring a MultipleSearcherFactory
         12.3.2.3. Configuring a ParallelMultipleSearcherFactory
      12.3.3. Template approach
      12.3.4. Object approach
13. Apache OJB
   13.1. OJB setup in a Spring environment
   13.2. PersistenceBrokerTemplate and PersistenceBrokerDaoSupport
   13.3. Transaction management
14. O/R Broker
   14.1. Introduction
   14.2. Setting up the Broker
   14.3. BrokerTemplate and BrokerDaoSupport
   14.4. Implementing DAOs based on plain O/R Broker API
15. OSWorkflow
   15.1. Introduction
   15.2. Configuration
   15.3. Inversion of Control: OsWorkflowTemplate and OsWorkflowCallback
   15.4. Working with workflow instances
   15.5. Acegi integration
   15.6. OSWorkflow 2.8+ support
16. Spring MVC extra
   16.1. About
   16.2. Usage guide
      16.2.1. Using the ReflectivePropertyEditor
      16.2.2. Using the ReflectiveCollectionEditor
      16.2.3. Using EnhancedSimpleFormController and EnhancedAbstractWizardFormController
      16.2.4. Using the FullPathUrlFilenameViewController
      16.2.5. Using the AbstractRssView
17. Validation
   17.1. Commons Validator
      17.1.1. Configure an Validator Factory
      17.1.2. Use a dedicated validation-rules.xml
      17.1.3. Configure a Commons Validator
      17.1.4. Server side validation
      17.1.5. Partial Bean Validation Support
      17.1.6. Client side validation
   17.2. Valang
      17.2.1. Valang Syntax
         17.2.1.1. Rule Configuration
         17.2.1.2. Expression Language
      17.2.2. Valang Validator Support
         17.2.2.1. ValangValidator
      17.2.3. Client Side Validation
         17.2.3.1. Getting Started
         17.2.3.2. Customization
         17.2.3.3. Localization
         17.2.3.4. Troubleshooting
   17.3. Bean Validation Framework
      17.3.1. Introduction
      17.3.2. Using the Framework
         17.3.2.1. The Validation Rule
         17.3.2.2. Validation Configuration & Configuration Loader
         17.3.2.3. XML Configuration
         17.3.2.4. Java 5 Annotation Configuration
         17.3.2.5. Condition Expression Language (CEL) & Function Expression Language (FEL)
         17.3.2.6. The BeanValidator
         17.3.2.7. Application Context Configuration
      17.3.3. Future Directions
18. XT Framework
   18.1. About XT Framework
      18.1.1. XT Modeling Framework
      18.1.2. XT Ajax Framework
   18.2. XT Modeling Framework
      18.2.1. Introduction
      18.2.2. Base Concepts
         18.2.2.1. Introductor
         18.2.2.2. Generator
         18.2.2.3. Notifications
         18.2.2.4. Specifications
      18.2.3. Advanced Concepts
         18.2.3.1. Other annotations
         18.2.3.2. Apache Commons Predicates integration
   18.3. XT Ajax Framework
      18.3.1. Introduction
      18.3.2. Base Concepts
         18.3.2.1. Ajax Events
         18.3.2.2. Ajax Handlers
         18.3.2.3. Associating events with handlers: the AjaxInterceptor
         18.3.2.4. Ajax Actions
         18.3.2.5. Identifying page parts: exact and wildcard matching
         18.3.2.6. Components
         18.3.2.7. Ajax Response
         18.3.2.8. More about the Ajax request processing flow
         18.3.2.9. Handling exceptions
      18.3.3. Advanced Concepts
         18.3.3.1. Core Javascript libraries
         18.3.3.2. Optional Javascript libraries
      18.3.4. Tutorials
         18.3.4.1. Working with Ajax action events
         18.3.4.2. Working with Ajax submit events
         18.3.4.3. Working with Ajax validation
Preface
This document provides a reference guide to Spring Modules' features. Spring Modules contains various add-ons for the core Spring Framework, and as such this document assumes that you are already familiar with Spring itself. Since this document is still a work in progress, if you have any requests or comments, please post them on the user mailing list or on the Spring Modules forum. If you want to report a bug, please use the Spring Modules issue tracking instance.
As a general rule, Spring Modules projects are tested against the latest stable Spring branch (2.0.x at the time of writing). The projects might be compatible with older Spring versions (such as 1.2.x), but this is not guaranteed. Please see the documentation of each project; if compatibility is important to you and can easily be achieved, contact the module owner through the forums or JIRA.
Before we go on, a few words of gratitude: Chris Bauer (of the Hibernate team) prepared and adapted the DocBook-XSL software in order to be able to create Hibernate‘s reference guide, also allowing us to create this one.
Chapter 1. Introduction
Spring Modules is a collection of tools, add-ons and modules to extend the Spring Framework. The core goal of Spring Modules is to facilitate integration between Spring and other projects without cluttering or expanding the Spring core.
Chapter 2. Ant Integration
2.1. Introduction
This module provides custom Ant artifacts that expose Spring beans into an Ant project in various ways. This is a very powerful idiom for adding rich behaviour to Ant, for example in a code generation step during a build. It can also be used to provide a convenient framework for scripting and automating tasks that require Spring services. More information about Ant can be found at http://ant.apache.org.
The source code for the examples here is in CVS under src/etc/test-resources. They can be run from the ant subdirectory of Springmodules projects using
$ ant examples
Springmodules Ant is shipped with explicit dependencies on Spring 2.0, but it all works just as well with 1.2.8.
2.2. Setting up Spring Configuration
The first step before using any of the features of the Ant integration is to configure a Spring BeanFactory with the beans (e.g. services) that you need.
The basic mechanism is provided by the Spring SingletonBeanFactoryLocator. This involves setting up a master BeanFactory which contains beans that are themselves BeanFactory instances. The default search path for the master BeanFactory is classpath*:beanRefContext.xml, which means that all files on the classpath called beanRefContext.xml will be included.
Inside the master BeanFactory are one or more BeanFactory instances. The active BeanFactory for the custom Ant elements in this package can be chosen by specifying the bean id with the factoryKey attribute.
2.2.1. Overriding the BeanFactory locations
The location of the master BeanFactory can be overridden with the contextRef attribute of the custom Ant elements provided by this package.
2.2.2. Example BeanFactory Configuration
An example beanRefContext.xml:
classpath:bootstrapContext.xml
classpath:childContext.xml
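The XML of this example was stripped in extraction; only the two locations survive. A minimal sketch of what it likely contained, following the usual SingletonBeanFactoryLocator pattern (the bean id businessBeanFactory is an assumption, and would be used as the factoryKey):

```xml
<!-- beanRefContext.xml: found on the classpath by SingletonBeanFactoryLocator -->
<beans>
  <!-- "businessBeanFactory" is a hypothetical id -->
  <bean id="businessBeanFactory"
        class="org.springframework.context.support.ClassPathXmlApplicationContext">
    <constructor-arg>
      <list>
        <value>classpath:bootstrapContext.xml</value>
        <value>classpath:childContext.xml</value>
      </list>
    </constructor-arg>
  </bean>
</beans>
```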
2.3. Exposing a Spring Bean to Ant
The most flexible way to use Spring in an Ant project is to expose a Spring bean as a project reference, and then use it in a normal Ant script target. For this we use one of the custom Ant types provided by this module. The bean is referred to by name and copied to an Ant project reference with the given id. Example:

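The example (and the name of the custom type) did not survive in this copy. A sketch of how such a type is typically used; the element name springbean and its attributes are hypothetical:

```xml
<target name="use-bean">
  <!-- "springbean" is a hypothetical element name; the bean named "myService"
       is copied to the Ant project reference "service.ref" -->
  <springbean bean="myService" id="service.ref" factoryKey="businessBeanFactory"/>
  <script language="javascript">
    // the Spring bean is now available as an ordinary project reference
    var service = project.getReference("service.ref");
  </script>
</target>
```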
2.4. Evaluating an Expression on a Spring Bean in Ant
As a simple alternative to writing a script, when the desired operation on the Spring bean is something simple like a method call, we can simply evaluate an expression on the bean using one of the custom tasks provided by this module. The language used is OGNL and the bean is the root of the expression. The expression context also contains a reference to the Ant project (which can be referred to in the expression as #project). Again the bean is referred to by name. Example:

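The example was stripped from this copy. A sketch of what such an invocation might look like; the element name springbeaneval is hypothetical (the task class mentioned later, SpringBeanTask, suggests a similar name):

```xml
<target name="ping-service">
  <!-- "springbeaneval" is a hypothetical element name; "myService" is the
       OGNL expression root, and #project refers to the Ant project -->
  <springbeaneval bean="myService" expression="ping(#project.name)"/>
</target>
```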
2.5. Dependency Injection into a Custom Ant Task
This task is useful if you want to take advantage of Ant features (e.g. file globbing), or prefer for other reasons to write an Ant Task, but need it to be injected with services that Ant does not know about. You can autowire a task by name (the default) or by type by changing the autowire attribute (legal values are "byName" and "byType"). Example:

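The example was stripped from this copy. A hedged sketch of the idea; the wrapping element name injected and the inner mytask task are hypothetical names for illustration only:

```xml
<target name="run-custom-task">
  <!-- "injected" and "mytask" are hypothetical names; with autowire="byName",
       setters on the inner task are matched against bean names in the BeanFactory -->
  <injected autowire="byName">
    <mytask srcdir="src"/>
  </injected>
</target>
```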
2.6. Configuring Ant
2.6.1. Definitions
The Ant elements provided by this project are defined in the jar file for this project in a file called org/springmodules/ant/antlib.xml.
2.6.2. Classpath
All the custom elements require Spring to be on the classpath (spring-core?). The SpringBeanTask also requires OGNL. The relevant jar files can be added to your .ant/lib directory (the standard way of extending the Ant classpath), or they can be added using an additional custom task (springextend) provided as part of this package.
2.6.3. Example

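The example was lost in extraction. Loading the custom definitions typically looks like this sketch; the resource path comes from Section 2.6.1, while the antlib namespace URI is an assumption:

```xml
<!-- load the custom definitions shipped inside the Spring Modules Ant jar -->
<typedef resource="org/springmodules/ant/antlib.xml"/>

<!-- or, namespaced (URI is an assumption): -->
<typedef resource="org/springmodules/ant/antlib.xml"
         uri="antlib:org.springmodules.ant"/>
```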
Chapter 3. Caching
3.1. Introduction
The Caching Module provides a consistent abstraction for performing caching, delivering the following benefits.
Provides a consistent programming model across different caching APIs such as EHCache, JBoss Cache, Java Caching System (JCS) and OSCache.
Provides a unified, simpler, easier-to-use API for programmatic use of caching services than most of the previously mentioned APIs.
Supports different strategies for declarative caching services.
The Caching Module may be easily extended to support additional cache providers.
3.2. Uses
Caching is frequently used to improve application performance. A good example is the caching of data retrieved from a database. Even though ORM frameworks such as iBATIS and Hibernate already provide built-in caching, the Caching Module can be useful when executing methods that perform heavy calculations, are time consuming, and/or are resource hungry.
Caching can be added to frameworks without inherent caching support, such as JDBC or Spring JDBC.
The Caching Module may be used to have more control over your caching provider.
3.3. Configuration
Caching and cache-flushing can be easily configured by following these steps.
Set up the cache provider. Instead of imposing the use of a particular cache implementation, the Caching Module lets you choose the cache provider that best suits the needs of your project.
Enable the caching services. The Caching Module provides two ways to enable caching services.
Declarative caching services.
CacheProxyFactoryBean.
Source-level metadata attributes using Commons-Attributes or JDK 1.5+ Annotations.
AutoProxy with MethodMapCachingInterceptor and MethodMapFlushingInterceptor.
 
Programmatic use (via a single interface, org.springmodules.cache.provider.CacheProviderFacade).
3.4. Cache Provider
The Caching Module provides a common interface that centralizes the interactions with the underlying cache provider. Each facade must implement the interface org.springmodules.cache.provider.CacheProviderFacade or subclass the template org.springmodules.cache.provider.AbstractCacheProviderFacade.
Each strategy has the following properties.
cacheManager (required)
A cache manager administrates the cache. In general, a cache manager should be able to:
 
Store objects in the cache.
Retrieve objects from the cache.
Remove objects from the cache.
Flush or invalidate one or more regions of the cache, or the whole cache (depending on the cache provider.)
 
The Caching Module provides factories that allow setting up cache managers and ensure that the created cache managers are properly released and destroyed before the Spring application context is closed.
org.springmodules.cache.provider.jboss.JbossCacheManagerFactoryBean
 
org.springmodules.cache.provider.jcs.JcsManagerFactoryBean
 
org.springmodules.cache.provider.oscache.OsCacheManagerFactoryBean
These factories have a common, optional property, configLocation, which can be any resource used for configuration of the cache manager, such as a file or class path resource.
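As an illustration of the configLocation property, a JCS cache manager with an explicit configuration file might be declared as follows (a sketch; cache.ccf is simply JCS's conventional configuration file name):

```xml
<bean id="cacheManager"
      class="org.springmodules.cache.provider.jcs.JcsManagerFactoryBean">
  <!-- optional: any Spring resource pointing at the provider's config file -->
  <property name="configLocation" value="classpath:cache.ccf"/>
</bean>
```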
failQuietlyEnabled (optional)
If true, any exception thrown at runtime by the cache manager will not be rethrown, allowing applications to continue running even if the caching services fail. The default value is false: any exception thrown by the cache manager will be propagated and eventually will stop the execution of the application.
 
serializableFactory (optional)
Some cache providers, like EHCache and JCS, can only store objects that implement the java.io.Serializable interface, which may be necessary when storing objects in the file system or replicating changes in the cache to different nodes in a cluster.
 
Such a requirement poses a problem when we need to cache objects that are not Serializable and that we do not control, for example objects generated by JAXB.
 
A possible solution could be to "force" serialization on such objects. This can be achieved with a org.springmodules.cache.serializable.SerializableFactory. The Caching Module currently provides one strategy, org.springmodules.cache.serializable.XStreamSerializableFactory, which uses XStream to
Serialize objects to XML before they are stored in the cache.
Create objects back from XML after being retrieved from the cache.
This feature is disabled by default (the value of serializableFactory is null.)
 
3.4.1. EHCache
EHCache can be used as cache provider through the facade org.springmodules.cache.provider.ehcache.EhCacheFacade. It must have a net.sf.ehcache.CacheManager as the underlying cache manager.
 

 
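The configuration example was stripped from this copy. A sketch of a typical setup; the bean names are assumptions, and EhCacheManagerFactoryBean is the cache-manager factory shipped with Spring itself:

```xml
<bean id="cacheManager"
      class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
  <!-- points at a standard EHCache configuration file -->
  <property name="configLocation" value="classpath:ehcache.xml"/>
</bean>

<bean id="cacheProviderFacade"
      class="org.springmodules.cache.provider.ehcache.EhCacheFacade">
  <property name="cacheManager" ref="cacheManager"/>
</bean>
```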
For more details about using EHCache with Spring, please refer to this excellent article by Omar Irbouh.
3.4.2. JBoss Cache
JBoss Cache can be used as cache provider through the facade org.springmodules.cache.provider.jboss.JbossCacheFacade. It must have a org.jboss.cache.TreeCache as the underlying cache manager.
 

 
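The example here was also stripped. An analogous sketch for JBoss Cache, using the factory listed in Section 3.4 (bean names and the configuration file name are assumptions):

```xml
<bean id="cacheManager"
      class="org.springmodules.cache.provider.jboss.JbossCacheManagerFactoryBean">
  <!-- optional TreeCache configuration resource -->
  <property name="configLocation" value="classpath:jboss-cache.xml"/>
</bean>

<bean id="cacheProviderFacade"
      class="org.springmodules.cache.provider.jboss.JbossCacheFacade">
  <property name="cacheManager" ref="cacheManager"/>
</bean>
```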
3.4.3. Java Caching System (JCS)
JCS can be used as cache provider through the facade org.springmodules.cache.provider.jcs.JcsFacade. It must have a org.apache.jcs.engine.control.CompositeCacheManager as the underlying cache manager.
 

 
3.4.4. OSCache
OSCache can be used as cache provider through the facade org.springmodules.cache.provider.oscache.OsCacheFacade. It must have a com.opensymphony.oscache.general.GeneralCacheAdministrator as the underlying cache manager.
 

 
Rob Harrop posted an article explaining how to set up OSCache in Spring without using any factory.
3.5. Declarative Caching Services
The Caching Module offers declarative caching services powered by Spring AOP.
Declarative caching services offer a non-invasive solution, eliminating dependencies on any cache implementation from your Java code.
The following sections describe the internal components common to the different strategies for declarative caching services.
3.5.1. Caching Advice
A caching advice applies caching to the return value of advised methods. It first checks whether a value returned from a previous call to the method, with the same arguments, is already stored in the cache. If a value is found, the advice skips the method call and returns the cached value. On the other hand, if the advice cannot find a cached value, it proceeds with the method call, stores the return value of the call in the cache, and finally returns the newly cached value.
Methods that do not have a return value (return value is void) are ignored, even if they were registered for aspect weaving.
3.5.2. Caching Models
Caching models encapsulate the rules to be followed by the caching advice when accessing the cache for object storage or retrieval. The Caching Module provides caching models for each of the supported cache providers:
org.springmodules.cache.provider.ehcache.EhCacheCachingModel specifies the name of the cache to use.
org.springmodules.cache.provider.jboss.JbossCacheCachingModel specifies the fully qualified name (FQN) of the node of the TreeCache to use.
org.springmodules.cache.provider.jcs.JcsCachingModel specifies the name of the cache and (optionally) the group to use.
org.springmodules.cache.provider.oscache.OsCacheCachingModel specifies the names of the groups to use, the cron expression to use to invalidate cache entries and the number of seconds that the object can stay in cache. All these properties are optional.
 
Caching advices can be configured to have caching models in a java.util.Map having each entry defined using standard Spring configuration:
<!-- property of some caching advice -->
The type of caching model must match the chosen cache implementation: the example above must use Java Caching System (JCS) as the cache provider.
Caching advices can also have caching models as java.util.Properties resulting in a less verbose configuration:
<!-- property of some caching advice -->
cacheName=someCache;group=someGroup
 
The caching model has been defined as a String in the format propertyName1=propertyValue1;propertyName2=propertyValue2 which the Caching Module will automatically convert into a caching model using a PropertyEditor provided by the CacheProviderFacade.
The key of each entry is different for each declarative caching service strategy. More details will be provided in further sections.
3.5.3. Caching Listeners
An implementation of the interface org.springmodules.cache.interceptor.caching.CachingListener. A listener is notified when an object is stored in the cache. The Caching Module does not provide any implementation of this interface.
3.5.4. Key Generator
An implementation of org.springmodules.cache.key.CacheKeyGenerator. Generates the keys under which objects are stored in the cache. Only one implementation is provided, org.springmodules.cache.key.HashCodeCacheKeyGenerator, which creates keys based on the hash code of the object to store and a unique identifier.
3.5.5. Flushing Advice
A flushing advice flushes one or more groups of the cache, or the whole cache (depending on the cache provider) before or after an advised method is executed.
3.5.6. Flushing Models
Similar to caching models, flushing models encapsulate the rules to be followed by the flushing advice when accessing the cache for invalidation or flushing. The Caching Module provides flushing models for each of the supported cache providers:
org.springmodules.cache.provider.ehcache.EhCacheFlushingModel specifies which caches should be flushed.
org.springmodules.cache.provider.jboss.JbossCacheFlushingModel specifies the FQN of the nodes to be removed from the TreeCache.
org.springmodules.cache.provider.jcs.JcsFlushingModel specifies which groups in which caches should be flushed. If a cache is specified without groups, the whole cache is flushed.
org.springmodules.cache.provider.oscache.OsCacheFlushingModel specifies which groups should be flushed. If none is specified, the whole cache is flushed.
 
Like caching advices, flushing advices can be configured to have flushing models in a java.util.Map:
<!-- property of some flushing advice -->
The type of flushing model must match the chosen cache implementation: the example above must use Java Caching System (JCS) as the cache provider.
Flushing advices can also have flushing models as java.util.Properties resulting in a less verbose configuration:
<!-- property of some flushing advice -->
cacheName=someCache;group=group1,group2
 
The flushing model has been defined as a String in the format propertyName1=propertyValue1;propertyName2=propertyValue2 which the Caching Module will automatically convert into a flushing model using a PropertyEditor provided by the CacheProviderFacade.
The key of each entry is different for each declarative caching service strategy. More details will be provided in further sections.
3.6. Strategies for Declarative Caching Services
The following sections describe the different strategies for declarative caching services provided by the Caching Module.
3.6.1. CacheProxyFactoryBean
A CacheProxyFactoryBean applies caching services to a single bean definition, performing aspect weaving using a NameMatchCachingInterceptor as caching advice and a NameMatchFlushingInterceptor as flushing advice.
 
Luke Skywalker
Leia Organa
cacheName=testCache
cacheNames=testCache
 
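Only fragments of the original example survive (the model strings and the values "Luke Skywalker" and "Leia Organa"). A hedged reconstruction along those lines; the names property of the target bean and its class are assumptions:

```xml
<bean id="cacheableService"
      class="org.springmodules.cache.interceptor.proxy.CacheProxyFactoryBean">
  <property name="cacheProviderFacade" ref="cacheProviderFacade"/>
  <property name="cachingModels">
    <props>
      <!-- cache return values of methods starting with "get" -->
      <prop key="get*">cacheName=testCache</prop>
    </props>
  </property>
  <property name="flushingModels">
    <props>
      <!-- flush "testCache" after methods starting with "update" -->
      <prop key="update*">cacheNames=testCache</prop>
    </props>
  </property>
  <property name="target" ref="cacheableServiceTarget"/>
</bean>

<!-- class and "names" property are assumptions based on surviving fragments -->
<bean id="cacheableServiceTarget" class="CacheableServiceImpl">
  <property name="names">
    <list>
      <value>Luke Skywalker</value>
      <value>Leia Organa</value>
    </list>
  </property>
</bean>
```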
In the above example, cacheableServiceTarget is the advised or proxied object, i.e. the bean to apply caching services to.
The caching interceptor will use a NameMatchCachingModelSource to get the caching models defining the caching rules to be applied to specific methods of the proxied class. In our example, it will apply caching to the methods starting with the text "get."
In a similar way, the flushing interceptor will use a NameMatchFlushingModelSource to get the flushing models defining the flushing rules to be applied to specific methods of the proxied class. In our example, it will flush the cache "testCache" after executing the methods starting with the text "update."
3.6.2. Source-level Metadata-driven Autoproxy
Autoproxying is driven by metadata. This produces a programming model similar to Microsoft's .Net ServicedComponents. AOP proxies for caching services are created automatically for the beans containing source-level caching metadata attributes. The Caching Module supports metadata provided by Commons-Attributes and JDK 1.5+ Annotations. Both approaches are very flexible, because metadata attributes are restricted to describing whether caching services should be applied, not how caching should occur. The how is described in the Spring configuration file.
Setting up autoproxy is quite simple:

 
3.6.2.1. Jakarta Commons-Attributes
The attributes org.springmodules.cache.interceptor.caching.Cached and org.springmodules.cache.interceptor.flush.FlushCache are used to indicate that an interface, interface method, class, or class method should be a target for caching services.
public class CacheableServiceImpl implements CacheableService {

  /**
   * @@org.springmodules.cache.interceptor.caching.Cached(modelId="testCaching")
   */
  public final String getName(int index) {
    // some implementation
  }

  /**
   * @@org.springmodules.cache.interceptor.flush.FlushCache(modelId="testFlushing")
   */
  public final void updateName(int index, String name) {
    // some implementation
  }
}
 
Now we need to tell Spring to apply caching services to the beans having Commons-Attributes metadata:
cacheName=testCache
cacheNames=testCache
Luke Skywalker
Leia Organa
 
The modelId property of the Cached metadata attribute should match the id of a caching model configured in the caching advice (in our example, the bean with id cachingInterceptor). This way the caching advice knows which caching model to use and how caching should be applied. In the above example, the caching advice will store in the EHCache testCache the return value of the method getName.
The same matching mechanism is applied to flushing models. The modelId property of the FlushCache metadata attribute should match the id of a flushing model configured in the flushing advice (the bean with id flushingInterceptor). The flushing advice will know which flushing model to use. In the above example, the EHCache testCache will be flushed after executing the method updateName.
Usage of Commons-Attributes requires an extra compilation step which generates the code necessary to access metadata attributes. Please refer to its documentation for more details.
3.6.2.2. JDK 1.5+ Annotations
Source-level metadata attributes can be declared using JDK 1.5+ Annotations:
public class TigerCacheableService implements CacheableService {

  @Cacheable(modelId = "testCaching")
  public final String getName(int index) {
    // some implementation.
  }

  @CacheFlush(modelId = "testFlushing")
  public final void updateName(int index, String name) {
    // some implementation.
  }
}
 
The annotations org.springmodules.cache.annotations.Cacheable and org.springmodules.cache.annotations.CacheFlush work exactly the same as their Commons-Attributes counterparts. Configuration in the Spring context is also very similar:
cacheName=testCache
cacheNames=testCache
Luke Skywalker
Leia Organa
 
By using JDK 1.5+ Annotations, we don't need the extra compilation step (required by Commons-Attributes). The only downside is that we cannot use Annotations with JDK 1.4.
3.6.3. BeanNameAutoProxyCreator
 
cacheName=testCache
cacheNames=testCache
cachingInterceptor
flushingInterceptor
Luke Skywalker
Leia Organa
 
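The example survives only as fragments. A sketch of what such a setup usually looks like; the method-map keys and bean names are assumptions, while BeanNameAutoProxyCreator is a standard Spring class:

```xml
<bean id="cachingInterceptor"
      class="org.springmodules.cache.interceptor.caching.MethodMapCachingInterceptor">
  <property name="cacheProviderFacade" ref="cacheProviderFacade"/>
  <property name="cachingModels">
    <props>
      <!-- fully qualified class and method name; wildcards accepted -->
      <prop key="example.CacheableService.get*">cacheName=testCache</prop>
    </props>
  </property>
</bean>

<bean id="flushingInterceptor"
      class="org.springmodules.cache.interceptor.flush.MethodMapFlushingInterceptor">
  <property name="cacheProviderFacade" ref="cacheProviderFacade"/>
  <property name="flushingModels">
    <props>
      <prop key="example.CacheableService.update*">cacheNames=testCache</prop>
    </props>
  </property>
</bean>

<bean class="org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator">
  <!-- beans in the ApplicationContext to apply caching services to -->
  <property name="beanNames" value="cacheableService"/>
  <property name="interceptorNames">
    <list>
      <value>cachingInterceptor</value>
      <value>flushingInterceptor</value>
    </list>
  </property>
</bean>
```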
Assuming that we already have a CacheProviderFacade instance in our ApplicationContext, the first thing we need to do is create the caching advice MethodMapCachingInterceptor and the flushing advice MethodMapFlushingInterceptor to use. AOP proxies are created for the objects which match the given fully qualified class name and method name (which accepts wildcards.)
Once we have the advices, we feed them to a BeanNameAutoProxyCreator along with the names of the beans in the ApplicationContext we want to apply caching services to.
3.7. Programmatic Use
First, we need to configure a org.springmodules.cache.provider.CacheProviderFacade in the Spring ApplicationContext (please refer to Section 3.4, “Cache Provider” for more details). Then we need to obtain a reference to it and call any of these methods from our Java code:
void cancelCacheUpdate(Serializable key) throws CacheException;
void flushCache(FlushingModel model) throws CacheException;
Object getFromCache(Serializable key, CachingModel model) throws CacheException;
boolean isFailQuietlyEnabled();
void putInCache(Serializable key, CachingModel model, Object obj) throws CacheException;
void removeFromCache(Serializable key, CachingModel model) throws CacheException;
 
Chapter 4. Commons Support
4.1. Introduction
The Commons module provides integration between Spring and various Jakarta Commons libraries.
4.2. Commons Configuration integration
Commons Configuration provides a generic configuration abstraction which enables an application to read configuration data from a variety of sources.
The Spring Modules integration package allows easy Spring configuration of Commons Configuration, returning the Configuration instance as a Properties object, which makes it an excellent candidate for Spring's PropertyPlaceholderConfigurer. The core class is org.springmodules.commons.configuration.CompositeConfigurationFactoryBean, which merges various Configuration beans into a composite configuration. Each configuration bean is a wrapper around a configuration source, according to the features supported by commons-configuration (system properties, properties files, XML files, JDBC tables...)
There are two ways to set the configuration sources:
Through the locations property, which uses the Spring resource abstraction to get content from various locations
Through the configurations property, which relies on custom Configuration beans and any ReloadingStrategy defined for them
The snippet below creates a composite configuration from three sources:

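The snippet was stripped from this copy. A sketch with three illustrative sources (all file names and bean ids are assumptions):

```xml
<bean id="configurationFactory"
      class="org.springmodules.commons.configuration.CompositeConfigurationFactoryBean">
  <!-- three sources merged into one composite configuration -->
  <property name="locations">
    <list>
      <value>classpath:application.properties</value>
      <value>classpath:settings.xml</value>
      <value>file:${user.home}/overrides.properties</value>
    </list>
  </property>
</bean>

<!-- feed the resulting Properties to a placeholder configurer -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="properties" ref="configurationFactory"/>
</bean>
```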
The factory bean produces a Properties object. To get hold of the factory's internal CompositeConfiguration object, call the getConfiguration method:

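The stripped snippet presumably dereferenced the FactoryBean itself; a sketch (bean names are assumptions):

```xml
<!-- "&configurationFactory" (escaped as &amp;) addresses the FactoryBean itself,
     so getConfiguration() can be invoked on it rather than on its product -->
<bean id="compositeConfiguration"
      factory-bean="&amp;configurationFactory"
      factory-method="getConfiguration"/>
```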
Please note the use of the & prefix, encoded as an XML entity, to access the factory bean.
Chapter 5. db4o
5.1. Introduction
The db4o module facilitates integration between the Spring framework and db4o, allowing easier resource management, DAO implementation support and transaction strategies. In many respects, this module is similar in structure, naming and functionality to the Spring core modules for Hibernate, JPA or JDO - users familiar with the Spring data access packages should feel right at home when using the db4o Spring integration.
As samples, a web application named Recipe Manager and some examples ‘converted‘ from db4o distribution (mainly chapter 1) are available.
5.2. Configuration
Before being used, db4o has to be configured. The db4o module makes it easy to externalize the db4o configuration (be it client or server) from the application into Spring application context XML files, reducing the code base and decoupling the application from the environment it runs in. The core class for creating db4o's ObjectContainer is the ObjectContainerFactoryBean. Based on the various parameters passed to it, the ObjectContainer can be created from a db4o database file, from an ObjectServer or based on a Configuration object.
5.2.1. Configuring an ObjectContainer
The FactoryBean will create ObjectContainers based on its properties, using the algorithm below:
if the databaseFile property is set, a local file-based client will be created
if memoryFile is set, a local memory-based client will be instantiated
if the server property is set, a client ObjectContainer will be created within the VM using the given server object
if all the above fail, a connection to a (possibly remote) machine will be opened using the hostName, port, user and password properties.
For example, in order to create a memory-based ObjectContainer, the following configuration can be used:
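A sketch of such a bean definition, assuming the module's org.springmodules.db4o package and a memoryFile property of db4o's MemoryFile type (both assumptions, not confirmed by the text above):

```xml
<!-- local, memory-based ObjectContainer -->
<bean id="memoryContainer" class="org.springmodules.db4o.ObjectContainerFactoryBean">
    <property name="memoryFile">
        <bean class="com.db4o.ext.MemoryFile"/>
    </property>
</bean>
```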

For an ObjectContainer connected to a (remote) server:
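A sketch using the hostName, port, user and password properties listed above (package name, host, port and credentials are illustrative assumptions):

```xml
<!-- client ObjectContainer connected to a (possibly remote) server -->
<bean id="remoteContainer" class="org.springmodules.db4o.ObjectContainerFactoryBean">
    <property name="hostName" value="localhost"/>
    <property name="port" value="8732"/>
    <property name="user" value="db4oUser"/>
    <property name="password" value="db4oPassword"/>
</bean>
```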

Creating a database-file-based, local ObjectContainer can be achieved using a bean definition such as:
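A sketch using the databaseFile property (the package name and file name are assumptions):

```xml
<!-- local ObjectContainer backed by a db4o database file -->
<bean id="localContainer" class="org.springmodules.db4o.ObjectContainerFactoryBean">
    <property name="databaseFile" value="recipes.db4o"/>
</bean>
```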

For local configurations, it is possible to pass a db4o Configuration object (if no configuration is given, as in the examples above, the JVM-global configuration is used):
...
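A sketch of passing a Configuration object, assuming a configuration property on the factory bean and a db4oConfiguration bean defined as described in the next section (both names are assumptions):

```xml
<bean id="localContainer" class="org.springmodules.db4o.ObjectContainerFactoryBean">
    <property name="databaseFile" value="recipes.db4o"/>
    <!-- custom db4o Configuration instead of the JVM-global one -->
    <property name="configuration" ref="db4oConfiguration"/>
</bean>
```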
See the db4o configuration section for more information on defining and using a Configuration object.
5.2.2. Configuring an ObjectServer
The ObjectServerFactoryBean can be used to create and configure an ObjectServer:
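A sketch using the userAccessLocation property described below (package name, file name and port are illustrative assumptions):

```xml
<bean id="objectServer" class="org.springmodules.db4o.ObjectServerFactoryBean">
    <property name="databaseFile" value="recipes.db4o"/>
    <property name="port" value="8732"/>
    <!-- properties file: keys = user names, values = passwords -->
    <property name="userAccessLocation" value="classpath:users.properties"/>
</bean>
```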

Note the userAccessLocation property, which specifies the location of a Properties file used for user access - the property keys are considered the user names, and the values their passwords.
5.2.3. Using db4o‘s Configuration object
When a complex configuration is required, ConfigurationFactoryBean offers an extensive list of db4o parameters which can be used to customize db4o ObjectContainers. The FactoryBean can work with the global JVM db4o configuration, a configuration cloned from the global one, or a newly created one (which ignores the settings of the global configuration), based on the configurationCreationMode parameter:
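A minimal sketch; the package name and the "NEW" mode value are assumptions (the text above only names the configurationCreationMode parameter and describes a newly-created mode):

```xml
<bean id="db4oConfiguration" class="org.springmodules.db4o.ConfigurationFactoryBean">
    <!-- create a fresh configuration, ignoring the JVM-global settings -->
    <property name="configurationCreationMode" value="NEW"/>
</bean>
```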

5.3. Inversion of Control: Template and Callback
The core classes of the db4o module used in practice are Db4oTemplate and Db4oCallback. The template translates db4o exceptions into Spring's data access exception hierarchy (making it easy to integrate db4o with other persistence frameworks supported by Spring) and maps most methods of the db4o ObjectContainer and ExtObjectContainer interfaces, allowing one-liners:
db4oTemplate.activate(personObject, 4);
// or
db4oTemplate.releaseSemaphore("myLock");
 
5.4. Transaction Management
The db4o module provides integration with Spring's excellent transaction support through the Db4oTransactionManager class. Since db4o statements are always executed inside a transaction, Spring transaction demarcation can be used for committing or rolling back the running transaction at certain points during the execution flow.
Consider the following example (using Spring 2.0 transactional namespace):
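A sketch of such a setup, using the Spring 2.0 tx namespace mentioned above; the objectContainer property name and bean references are assumptions:

```xml
<bean id="transactionManager" class="org.springmodules.db4o.Db4oTransactionManager">
    <property name="objectContainer" ref="objectContainer"/>
</bean>

<!-- demarcate transactions declaratively; apply via a matching <aop:config> pointcut -->
<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <tx:method name="*"/>
    </tx:attributes>
</tx:advice>
```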
<!-- more bean definitions follow -->
5.5. Outside the Spring container
It is important to note that the db4o-spring classes rely as much as possible on db4o alone, and they work with objects configured either by the developer or by the Spring framework. The template as well as the FactoryBeans can be instantiated either by Spring or created programmatically through Java code.
Chapter 6. Flux
6.1. Introduction
Flux is a job scheduler, workflow engine, and business process management (BPM) engine. More information about Flux can be found at: http://www.fluxcorp.com.
6.2. Exposing Flux as a Spring Bean
A Flux Spring bean can be created using one of the following methods:
Use the following configuration to create a Flux Spring bean with the default configuration options:
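A sketch of such a definition; the EngineBean class name is an assumption, by analogy with the XmlEngineBean and ConfigurationBean classes named later in this chapter:

```xml
<!-- Flux engine with default configuration options -->
<bean id="fluxEngine" class="org.springmodules.scheduling.flux.EngineBean"/>
```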

Use the following configuration to create a Flux Spring bean from the configuration properties defined in the "fluxconfig.properties" file:
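A sketch; both the EngineBean class name and the configurationFile property are assumptions, not confirmed by the surrounding text:

```xml
<!-- Flux engine configured from a properties file -->
<bean id="fluxEngine" class="org.springmodules.scheduling.flux.EngineBean">
    <property name="configurationFile" value="fluxconfig.properties"/>
</bean>
```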
fluxconfig.properties
An XML Engine bean and a Configuration bean can also be created in similar ways. To create these beans, use the "org.springmodules.scheduling.flux.XmlEngineBean" and "org.springmodules.scheduling.flux.ConfigurationBean" classes.
6.3. Getting Help
If you have any questions, feel free to contact our support team.
Email
support@fluxcorp.com
Telephone
+1 (406) 656-7398
Chapter 7. Hivemind Integration
7.1. Introduction
HiveMind is a lightweight container providing IoC capabilities similar to Spring. More information about HiveMind can be found at: http://jakarta.apache.org/hivemind.
7.2. Configuring a HiveMind Registry
In HiveMind, the Registry is the central location from which your application can gain access to services and configuration data. The RegistryFactoryBean allows for a HiveMind Registry to be configured and started within the Spring ApplicationContext:
There are two ways to configure this Registry with the RegistryFactoryBean class:
No configuration location is specified. In this case, HiveMind looks for an XML file named hivemodule.xml in the META-INF directory.
One or more configuration file locations are specified. In this case, Spring Modules will use these configuration files to configure the Registry instance.
The code below shows how to configure a RegistryFactoryBean that loads Registry configuration from a file called configuration.xml:
<bean id="registry" class="org.springmodules.hivemind.RegistryFactoryBean">
<property name="configLocations" value="configuration.xml"/>
</bean>
The RegistryFactoryBean uses Spring's resource abstraction layer, allowing you to specify any valid Spring Resource path for the configLocations property.
7.3. Exposing HiveMind Services as Spring Beans
Using the ServiceFactoryBean it is possible to expose any service defined in a HiveMind Registry to your application as a Spring bean. This can be desirable if you want to make use of features found in both products while coding your application against only one of them.
The ServiceFactoryBean class requires access to a HiveMind Registry, and as such, you generally need to configure both a RegistryFactoryBean and a ServiceFactoryBean as shown below:
<bean id="registry" class="org.springmodules.hivemind.RegistryFactoryBean">
<property name="configLocations" value="configuration.xml"/>
</bean>
<bean id="sampleService" class="org.springmodules.hivemind.ServiceFactoryBean">
<property name="registry" ref="registry"/>
<property name="serviceInterface" value="org.springmodules.samples.hivemind.service.ISampleService"/>
<property name="serviceName" value="interfaces.SampleService"/>
</bean>
Whether you define both serviceInterface and serviceName or just serviceInterface depends on how your HiveMind Registry is configured. Consult the HiveMind documentation for more details on how HiveMind services are identified and accessed.
Chapter 8. JavaSpaces
8.1. Introduction
The JavaSpaces module offers Spring-style services such as transaction management, a template, callbacks and interceptors, as well as remoting services for JavaSpaces-based environments.
8.2. JavaSpaces configuration
One challenge when dealing with Jini-based environments (like JavaSpaces) is retrieving the appropriate services. The JavaSpaces module addresses this problem by providing generic as well as customized classes to work with various JavaSpaces implementations and Jini services in a simple and concise manner.
8.2.1. Using specialized classes
JavaSpaces modules offers configuration support out of the box for:
8.2.1.1. Blitz
Blitz is an open-source implementation of JavaSpaces. The JavaSpaces module provides two Blitz-based factory beans:
...
8.2.1.2. GigaSpaces
GigaSpaces is a commercial JavaSpaces implementation that provides a free Community Edition. See the dedicated documentation for more support information and the online Wiki documentation, which is available at http://gigaspaces.com/wiki/display/GS/Spring
/./myCache?properties=gs&
8.2.2. Using a generic Jini service
For generic Jini services (including other JavaSpaces implementations), JiniServiceFactoryBean can be used for retrieval:
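A sketch of looking up a JavaSpace through the JiniServiceFactoryBean; the package name and the serviceClass property are assumptions:

```xml
<!-- locate a JavaSpace service on the Jini network -->
<bean id="javaSpace" class="org.springmodules.javaspaces.JiniServiceFactoryBean">
    <property name="serviceClass" value="net.jini.space.JavaSpace"/>
</bean>
```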

 
8.3. Inversion of Control: JavaSpaceTemplate and JavaSpaceCallback
JavaSpaceTemplate is one of the core classes of the JavaSpaces module. It allows the user to work directly against the native JavaSpace API in a consistent manner, handling any exceptions that might occur, taking care of the ongoing transaction (if any) and converting the exceptions into Spring's data access and remoting exception hierarchies. The template can be constructed either programmatically or declaratively (through Spring's XML) and requires a JavaSpace implementation instance:
...
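A declarative sketch; the package name, the space property name and the javaSpace bean reference are assumptions:

```xml
<bean id="javaSpaceTemplate" class="org.springmodules.javaspaces.JavaSpaceTemplate">
    <!-- the required JavaSpace implementation instance -->
    <property name="space" ref="javaSpace"/>
</bean>
```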
Once constructed, the JavaSpaceTemplate offers shortcut methods for the JavaSpace interface as well as native access:
spaceTemplate.execute(new JavaSpaceCallback() {
public Object doInSpace(JavaSpace js, Transaction transaction)
throws RemoteException, TransactionException, UnusableEntryException, InterruptedException {
...
Entry myEntry = ...;
js.write(myEntry, transaction, Lease.FOREVER);
Entry anotherEntry = ...;
js.read(anotherEntry, transaction, Lease.ANY);
return null;
}
});
The advantage of JavaSpaceCallback is that it allows several operations on the JavaSpace API to be grouped and used inside the same transaction, or with other Jini transactions (for example when using multiple nested JavaSpaceCallbacks).
8.4. Transaction Management
One important feature of the JavaSpaces module is the JiniTransactionManager, which integrates the Jini transaction API with the Spring transaction infrastructure. This allows users, for example, to use Jini transactions declaratively or programmatically in the same manner as JDBC-based transactions - without any code change or API coupling; changing the transaction infrastructure is as easy as changing some configuration lines (see the Spring reference documentation for more information). Using JiniTransactionManager is straightforward:
...
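A sketch wiring the two parameters described below; the package name and the referenced bean names (a Mahalo-retrieved Jini TransactionManager and a JavaSpace) are assumptions:

```xml
<bean id="transactionManager" class="org.springmodules.javaspaces.JiniTransactionManager">
    <!-- net.jini.core.transaction.server.TransactionManager, e.g. retrieved via JiniServiceFactoryBean -->
    <property name="transactionManager" ref="jiniTransactionManager"/>
    <!-- object used to detect the transaction context -->
    <property name="transactionalContext" ref="javaSpace"/>
</bean>
```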
JiniTransactionManager requires two parameters:
transactionManager - an instance of net.jini.core.transaction.server.TransactionManager. In most cases this is provided by Mahalo and can be retrieved using the generic JiniServiceFactoryBean, as we have discussed.
transactionalContext - a simple object used for detecting the transaction context, as Jini transactions can span several contexts.
To some extent, the JiniTransactionManager is similar to Spring's JtaTransactionManager, providing integration with a custom transactional API.
Note that JiniTransactionManager is not JavaSpace specific - it can be used on any Jini resource.
8.5. Remoting: JavaSpaceInterceptor
JavaSpaceInterceptor represents an important feature that allows method calls to be 'published' to and retrieved transparently from the space (in a manner similar to the delegating-worker pattern). The calls can be blocking (synchronous) or non-blocking (asynchronous). Consider the following context:
[The original XML context here defined a JavaSpaceInterceptor bean ("spaceInterceptor", timeout 500), a proxy bean exposing org.springmodules.beans.ITestBean and org.springframework.core.Ordered through that interceptor, a testBeanWorker bean, and a testBean target with name "rod" and age 34.]
There are several important elements inside the context:
proxy - represents the client side - all calls made to it will be delegated to the JavaSpace. The JavaSpaceInterceptor transforms each method invocation into a JavaSpace entry and publishes it into the space. Interested parties (which can execute the call) pick up the entry and write back the result, which is returned to the caller.
testBeanWorker - represents the server side. The JavaSpaces module already provides an implementation through DelegatingWorker, which watches the JavaSpace and picks up any method calls it can compute. The call entries are transformed into method invocations which are delegated to the appropriate implementation - in our case testBean.
 
8.6. GigaSpaces Spring Integration
8.6.1. Simplifying Business Logic Abstraction
The GigaSpaces Spring integration plays a major part in the GigaSpaces "Write Once, Scale Anywhere" roadmap. It allows you to write your POJO once using Spring and scale it anywhere using the GigaSpaces middleware. Spring provides a framework for implementing the application business logic, while GigaSpaces implements the middleware and service framework for executing this business logic efficiently, in a scalable fashion.
8.6.2. Online Wiki Documentation
Please refer to the online Wiki documentation, which is available at http://gigaspaces.com/wiki/display/GS/Spring
GigaSpaces Spring Integration
Shay Hassidim
Gershon Diner
Lior Ben Yizhak
 
 
2.1.  Introduction – Give Spring Some Space
This chapter describes the integration between GigaSpaces and the Spring Framework (www.springframework.org).
2.1.1.  Simplify business logic abstraction using Spring/POJO support
GigaSpaces Spring integration plays a major part in the GigaSpaces "Write Once Scale Anywhere" roadmap. It allows you to write your POJO once using Spring and scale it anywhere using the GigaSpaces middleware - Spring provides a framework for implementing the application business logic, and GigaSpaces implements the middleware and service framework for executing this business logic efficiently in a scalable fashion.
GigaSpaces Spring integration contains two main parts:
Middleware abstraction – DAO, JavaSpace, Transaction, JDBC, Remoting, Parallel processing, JMS – enabling a relatively non-intrusive approach for implementing the business logic on top of GigaSpaces. With this approach GigaSpaces users can leverage the rich functionality and simplification of the Spring framework and the scalability of GigaSpaces.
Service abstraction – enabling dynamic deployment of Spring beans into the Grid.
The goal of this architecture is to enable end-to-end dynamic scalability of stateful applications across the grid.
The following diagram illustrates the different components the integration includes.
 

Figure 1.
 
2.1.1.1.  Middleware Abstraction
The middleware abstraction maps specific Spring interfaces onto the relevant GigaSpaces middleware components, i.e. the Data Grid, Messaging Grid and Parallel Processing. This allows Spring-based applications to benefit from the performance, dynamic scalability and clustering capabilities of the GigaSpaces middleware without going through any complex development phase.
The middleware abstraction includes the following common components – these are shared across the different GigaSpaces components:
POJO2Entry Converter – The POJO-to-entry model is common to all middleware components and is used to map an existing POJO into the data grid. The approach taken here is very similar to the O/R mapping approach. Class metadata such as indexes, update mode, serialization mode and persistence mode can be added at the class or attribute level using Java annotations, or using the gs.xml files and the Spring XML configuration file.
The POJO-Space support is an enhancement of the existing JavaSpaces interface. This enhancement adds capabilities to write and read POJOs directly through the Space API. It adds additional behavior required to address specific requirements in the messaging or data-grid world, such as one-way operations (a.k.a. send-and-forget), update semantics, etc.
Transaction support – Spring provides a transaction abstraction layer that can be used to plug in different transaction implementations without changing application code. The GigaSpaces transaction support plugs the Jini and local transaction managers into that interface.
2.1.1.1.1.  Data Grid Abstraction
2.1.1.1.1.1.
JavaSpace and GigaSpace Templates
The JavaSpaces™ technology is designed to help you solve two related problems: distributed persistence and the design of distributed algorithms. JavaSpaces services use RMI and the serialization feature of the Java programming language to accomplish these goals.
See:
http://www.jini.org/nonav/standards/davis/doc/specs/html/js-spec.html
The Spring JavaSpace template is used to map existing objects into the space and allows JavaSpace operations to use the Spring transaction abstraction behavior.
The GigaSpace template extends the JavaSpace template and supports batch operations, enhanced notification options, POJO support, optimistic locking, update semantics, count, clean, FIFO, security and more.
The advantages of using this approach are:
Performance – Objects can be written into the local space memory and synchronized in the background with a backend database.
Built-in clustering – Data written into the space becomes immediately available to all instances holding a reference to this cluster.
Advanced data distribution – Data written into the space can leverage the existing data distribution topologies, i.e. partitioning, replication, master/local, without code changes, and the appropriate model can be chosen at deployment time.
OO support – Since the space provides built-in POJO support, objects can be written directly into the space without going through any O/R mapping. The same objects can be queried using SQL syntax, since the space implements a built-in indexing mechanism. Through the Hibernate CacheStore plug-in those objects can be stored in any database with user-defined custom O/R mapping capabilities. With this approach users can benefit from the performance and simplicity of the space model and still use Hibernate O/R mapping support to map those objects into an existing database.
2.1.1.1.1.2.
JDBC Template
Since GigaSpaces provides JDBC support, users can write their code using standard SQL syntax, and that code will work with other JDBC-compliant implementations (note that the opposite direction, i.e. bringing an existing JDBC implementation into this model, is not fully supported yet and will require additional manual migration effort).
2.1.1.1.2.  Messaging Abstraction
GigaSpaces Spring integration provides messaging abstraction in two forms:
2.1.1.1.2.1.
JMS template
In this case GigaSpaces behaves just like a standard JMS provider through its JMS implementation. Users that are already using JMS in their implementation can benefit from the data-virtualization capabilities GigaSpaces provides and the ability to scale JMS-based applications using the partitioned GigaSpaces cluster.
2.1.1.1.2.2.
Remoting
The remoting interface is used to invoke a bean using a variety of pluggable transport implementations. Spring supports Remote Method Invocation (RMI), Spring's HTTP invoker, Hessian, Burlap and JAX-RPC as transport implementations, in addition to space-based remoting. A space-based remoting implementation takes advantage of the space's high availability and implicit content-based routing semantics to enable scalable communication between different services.
The benefits of this approach are:
Transparency - A call through space-based remoting looks exactly the same as any other remoting call. Moving from another remoting implementation to a space-based approach can be done in a completely seamless manner.
Reliability - The space can ensure the execution of a method in several ways:
Retries
Durability – the request can be sent even if the service is not available.
Transactions – ensure consistency and recoverability in case the service fails during the execution of a certain operation.
Fail-over – a request is replicated to a backup space, which takes over if the space fails and ensures continuous high availability of the system.
Transparent collocation optimization - Through the embedded space topology the service can be collocated or run as a remote process. In the case of local communication the request goes through local references; when the service is distributed it goes through the network. Since the space is a shared entity, both models can co-exist without changing the configuration, i.e. some service instances can be collocated while others are remote. All this is transparent to the client application.
Scalability - The same request can be targeted at multiple services that compete to serve it, thereby sharing the load amongst themselves.
The services can scale across the network dynamically by monitoring the backlog (the amount of pending requests).
Partitioning – Requests can be partitioned based on class name or method arguments, ensuring that requests which depend on each other in terms of execution order are routed to the same space instance, where the order of execution can be guaranteed. In this way parallelism can be achieved for stateful operations, not just stateless ones.
2.1.1.1.3.  Parallel Processing Abstraction
A special case of using the remoting interface mentioned above is parallel processing, in a way similar to the master/worker pattern used with the space.
 

Figure 2.
 

Figure 3.
 
In this case each method call is a task and each return value is the result of that task. Tasks can be executed by multiple service instances, each running on a different machine, thus leveraging its CPU power to increase the processing capacity available for serving that service. From the end-user perspective it looks like interacting with a single service. Execution balancing is achieved through the space pull model: the services block waiting for requests; if a worker is under load it simply pulls fewer requests, otherwise it pulls more. The same holds when the worker runs on a more powerful machine.
2.1.1.2.  Service Abstraction - Turning POJOs into distributed services using the Service Grid.
You can select a bean from a Spring bean descriptor file and deploy it onto the grid, scale it dynamically by adding more instances of that service, and manage fail-over scenarios: if one instance fails, the Service Grid will automatically detect that and re-deploy it on another container running on a different machine. It also automates the deployment procedure, selecting from the pool of available machines an instance that has the appropriate Spring support built into it. If no such machine is available it postpones the deployment and re-deploys as soon as one becomes available.
 

Figure 4.
 
2.2.  Integration Components
The GigaSpaces Spring integration includes the following Components:
2.2.1.  Common Services
2.2.1.1.  Automatic POJO to Entry Translation
Currently, the Jini/JavaSpaces specification dictates that all space operations be conducted using Java classes that implement the marker interface net.jini.core.entry.Entry.
So that users can use GigaSpaces capabilities without modifying existing POJOs, and to ease migration from existing object stores or caching facilities (Hibernate, OJB, etc.) to GigaSpaces, the API exposed to clients allows writing and reading ordinary POJO objects which do not implement the Entry interface. All relevant conversions are done internally, in a transparent manner.
To support the conversion, additional metadata should be supplied via configuration files named *.gs.xml (similar to Hibernate's *.hbm.xml descriptors) or via Java annotations. These files describe the POJO's properties related to GigaSpaces' behavioral aspects of storing and looking up objects in the space, for example indexing, fifo-enabled, time-to-live, replicatable, persistent, etc.
Client developers are given the option to use a base/support class which is used for writing applicative DAO or service objects which need to access a Space. The DAO support class maintains a 1:1 relationship with the injected template object, which, in turn, accesses the space to which it is holding a reference.
2.2.1.2.  Transaction Support
GigaSpaces supports three types of transactions: Jini transactions (using the Jini "Mahalo" transaction manager), local transactions and JTA/XA transactions. The GigaSpaces Spring integration provides support for local transactions as well as the Jini distributed transactions. Configuration of the transaction management is done via Spring's configuration file (declaratively), or via coding/annotations (programmatically).
That primarily means that when switching from one transactional model to another, no code changes are needed, only configuration modifications via the standard Spring beans configuration file.
The Transaction manager is responsible for creating, starting, suspending, resuming, committing and rolling back the transactions which encompass Space resource(s).
The transaction manager is injected into Spring's generic TransactionInterceptor, which intercepts calls to services available in the application context using a proxy and maintains transactional contexts for these calls, based on configuration details including propagation, isolation, etc. These configuration details may be defined as configuration data in the bean descriptor XML file, using Java 5 annotations in the code, or via any other valid implementation of Spring's TransactionAttributeSource interface.
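The declarative wiring described above can be sketched with Spring's standard TransactionProxyFactoryBean (a convenience subclass combining the proxy and the TransactionInterceptor); the transactionManager, simpleDao and simpleDaoTarget bean names are illustrative assumptions:

```xml
<bean id="simpleDao"
      class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
    <property name="transactionManager" ref="transactionManager"/>
    <property name="target" ref="simpleDaoTarget"/>
    <!-- transaction attributes: propagation, isolation, etc. -->
    <property name="transactionAttributes">
        <props>
            <prop key="*">PROPAGATION_REQUIRED</prop>
        </props>
    </property>
</bean>
```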
2.2.2.  Data-Grid
In order to utilize the GigaSpaces Data Grid you can use either the JavaSpace Spring template or the JDBC Spring template. See the examples below:
2.2.2.1.  JavaSpaces Template Example
Below is a POJO-based JavaSpaces application code example.
2.2.2.1.1.  The BaseSimpleBean POJO
This is a base class we will use as part of the example.
public class BaseSimpleBean {
private String firstName = null;
public BaseSimpleBean() {}
public BaseSimpleBean(String test) {
this.firstName = test;
}
public boolean equals(Object other) {
if (other == null || !(other instanceof BaseSimpleBean))
return false;
else {
BaseSimpleBean otherBean = (BaseSimpleBean) other;
return (otherBean.getFirstName().equals(firstName));
}
}
public String getFirstName(){return firstName;}
public void setFirstName(String test) {this.firstName = test; }
public String toString(){return "firstName: "+firstName;}
}
2.2.2.1.2.  The SimpleBean POJO
This POJO extends BaseSimpleBean.
public class SimpleBean extends BaseSimpleBean{
private String secondName;
private Integer age;
public SimpleBean() {}
public SimpleBean(String name, Integer age) {
this.secondName = name;
this.age = age;
}
public Integer getAge() { return age; }
public void setAge(Integer age) { this.age = age; }
public String getSecondName() { return secondName; }
public void setSecondName(String name) { this.secondName = name; }
public boolean equals(Object other) {
if (other == null || !(other instanceof SimpleBean))
return false;
else {
SimpleBean otherBean = (SimpleBean) other;
return (otherBean.secondName != null ? otherBean.secondName.equals(secondName) : secondName == null)
&& (otherBean.age != null ? otherBean.age.equals(age) : age == null)
&& (otherBean.getFirstName() != null ? otherBean.getFirstName().equals(getFirstName()) : getFirstName() == null);
}
}
public String toString()
{
return super.toString()+ ", secondName: "+secondName+", age: "+age;
}
}
2.2.2.1.3.  Simple DAO Object used by the application
The following code example demonstrates JavaSpace write and read operations using Spring:
public class myMain
{
public static void main(String[] args) {
ApplicationContext context = new ClassPathXmlApplicationContext("gigaspaces.xml");
GigaSpacesTemplate template = (GigaSpacesTemplate)context.getBean("gigaspacesTemplate");
template.clear(null);
SimpleBean pojo = new SimpleBean("second name", new Integer(32));
pojo.setFirstName("first name");
template.write(pojo, Lease.FOREVER);
System.out.println("Writing pojo to space...Done!");
SimpleBean templatePojo = new SimpleBean();
SimpleBean pojoResult = (SimpleBean)template.read(templatePojo, Long.MAX_VALUE);
}
}
2.2.2.2.  JDBC Template Example
The following application code uses the standard Spring JdbcTemplate to create a table and to insert, delete and query data from the GigaSpaces Data Grid.
public class HelloJdbc
{
public static void main(String[] args) {
{
System.out.println("\nWelcome to GigaSpaces Spring JDBC HelloWorld example.");
System.out.println("This example uses Spring JDBCTemplate to write, read and "+
"delete entries to space...\n" );
ApplicationContext context = new ClassPathXmlApplicationContext("jdbc_gigaspaces.xml");
JdbcTemplate template = (JdbcTemplate) context.getBean("jdbcTemplate");
/* SQL CREATE TABLE statement
*
*/
String createSQL = "CREATE TABLE Person(FirstName varchar2 INDEX, " +
"LastName varchar2)";
System.out.println("Create table...");
try {
template.execute( createSQL );
System.out.println("Create table... Done!");
} catch (Exception e) {
System.out.println("\nTable may exist already... ");
System.out.println("Restart or clean (space-browser) space !");
}
/* SQL INSERT statement
*
*/
int maxRows = 10;
String insertSQL = "INSERT INTO Person VALUES(?,?)";
System.out.println("Insert into table...");
for (int i = 1; i < maxRows; i++) {
Object[] params = new Object[] {"FirstName" + i, "LastName" + i};
template.update(insertSQL, params);
}
System.out.println("Insert into table... Done!");
/* SQL DELETE statement
*
*/
String deleteSQL = "DELETE FROM Person WHERE FirstName='FirstName3'";
System.out.println("Delete from table...");
template.execute( deleteSQL );
System.out.print("Delete from table...Done!");
/* SQL SELECT statement
*
*/
String selectSQL="SELECT * FROM Person ORDER BY Person.FirstName";
System.out.println("Select from table...");
template.query( selectSQL, new RowCallbackHandler() {
public void processRow(ResultSet rs) throws SQLException
{
System.out.println("FirstName : " + rs.getString("FirstName"));
System.out.println("LastName : "+rs.getString("LastName"));
}
});
System.out.println("Select from table... Done!");
}
}
}
2.2.2.2.1.  Application Context xml - jdbc_gigaspaces.xml
The following file includes the properties to inject into org.springframework.jdbc.datasource.SingleConnectionDataSource and org.springframework.jdbc.core.JdbcTemplate:
 




<beans>
<bean id="dataSource"
class="org.springframework.jdbc.datasource.SingleConnectionDataSource" destroy-method="destroy"
singleton="false">
<property name="driverClassName"
value="com.j_spaces.jdbc.driver.GDriver" />
<property name="url"
value="jdbc:gigaspaces:url:rmi://localhost:10098/./helloJDBCTemplate" />
</bean>
<bean id="jdbcTemplate"
class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource"/>
</bean>
</beans>
2.2.3.  Messaging Grid
GigaSpaces JMS Spring Integration allows users to use GigaSpaces middleware with existing JMS based applications.
2.2.3.1.  The JMS Spring application Example
Below are standard JMS Spring-based applications - a sender and a receiver using GigaSpaces:
public class SenderToQueue
{
public static void main(String[] args) {
final int              NUM_MSGS;
final String           MSG_TEXT = new String("This is a simple message");
if ( (args.length < 1)) {
System.out.println("Usage: java SenderToQueue []");
System.exit(1);
}
ApplicationContext context = new ClassPathXmlApplicationContext("jms_gigaspaces.xml");
//get the Spring JmsTemplate (here we use the JMS 1.0.2 template)
JmsTemplate102 jmsTemplate102 = (JmsTemplate102) context.getBean("jmsQueueTemplate");
if (args.length == 1){
NUM_MSGS = (new Integer(args[0])).intValue();
} else {
NUM_MSGS = 1;
}
for (int i = 0; i < NUM_MSGS; i++)
{
final String theMessage = MSG_TEXT + " " + (i + 1);
System.out.println("Sending message: " + theMessage);
jmsTemplate102.send(new MessageCreator() {
public Message createMessage(Session session)
throws JMSException {
return session.createTextMessage(theMessage);
}
});
}
}
}
public class SynchQueueReceiver
{
public static void main(String[] args) {
ApplicationContext context = new ClassPathXmlApplicationContext("jms_gigaspaces.xml");
//get the Spring JmsTemplate (here we use the JMS 1.0.2 template)
JmsTemplate102 jmsTemplate102 = (JmsTemplate102) context.getBean("jmsQueueTemplate");
while (true)
{
try{
Message msg = jmsTemplate102.receive();
if (msg instanceof TextMessage) {
TextMessage textMessage = (TextMessage) msg;
System.out.println("Reading message: " + textMessage.getText() );
} else {
// Non-text control message indicates end of messages.
break;
}
}catch(Exception e){
e.printStackTrace();
}
}
}
}
2.2.3.1.1.  Application Context xml - jms_gigaspaces.xml
This file includes the GigaSpaces JMS properties to inject into org.springframework.jndi.JndiTemplate, org.springframework.jms.core.JmsTemplate102 and org.springframework.jndi.JndiObjectFactoryBean:



<beans>
<bean id="jndiTemplate"
class="org.springframework.jndi.JndiTemplate">
<property name="environment">
<props>
<prop key="java.naming.factory.initial">com.sun.jndi.rmi.registry.RegistryContextFactory</prop>
<prop key="java.naming.provider.url">rmi://localhost:10098</prop>
</props>
</property>
</bean>
<bean id="jmsQueueTemplate"
class="org.springframework.jms.core.JmsTemplate102">
<property name="connectionFactory" ref="queueConnectionFactory"/>
<property name="defaultDestination" ref="queue"/>
<property name="pubSubDomain">
<value>false</value>
</property>
<property name="receiveTimeout">
<value>20000</value>
</property>
</bean>
<bean id="queueConnectionFactory"
class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate"/>
<property name="jndiName">
<value>GigaSpaces;helloJMSTemplate_container;helloJMSTemplate;GSQueueConnectionFactoryImpl</value>
</property>
</bean>
<bean id="queue"
class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate"/>
<property name="jndiName">
<value>GigaSpaces;helloJMSTemplate_container;helloJMSTemplate;jms;destinations;MyQueue</value>
</property>
</bean>
</beans>



2.2.4.  Parallel Processing – Business logic Remote invocation
To allow users to utilize the GigaSpaces Grid, you can invoke your business logic in remote processes. Proxies to the remote objects are generated automatically. Remoting is implemented using JavaSpaces as the transport layer, similar to existing Spring remoting technologies such as RMI or Web Services.
The remoting support is composed of the following three logical units: taker, worker and delegate. The taker is responsible for accessing the delegate, which, in turn, executes code located on the worker. These three units may be co-located within the same VM, or deployed separately on three different nodes/JVMs. The different parties communicate via Task and Result objects.
2.2.4.1.  Remoting Example
This section illustrates a remoting example. It describes the Master, Worker, Task and Result implementations and their related classes.
2.2.4.1.1.  The Master
The Master invokes remote business logic that runs at the Worker. The ITask implementation is the actual business logic executed at the worker.
The remote worker returns a Result object.
ITask proxy = (ITask)applicationContext.getBean("proxy");
Result res = proxy.execute("data");
2.2.4.1.2.  The Worker
The Worker implementation uses the generic DelegatingWorker:
public class Worker
{
//member for gigaspaces template
private GigaSpacesTemplate template;
//The delegator worker
private DelegatingWorker iTestBeanWorker;
private ApplicationContext               applicationContext;
private Thread itbThread;
protected void init() throws Exception {
applicationContext = new ClassPathXmlApplicationContext("gigaspaces_master_remoting.xml");
template = (GigaSpacesTemplate)applicationContext.getBean("gigaspacesTemplate");
iTestBeanWorker = (DelegatingWorker)applicationContext.getBean("testBeanWorker");
}
protected void start() {
itbThread = new Thread(iTestBeanWorker);
itbThread.start();
}
public static void main(String[] args) {
try {
System.out.println("\nWelcome to Spring GigaSpaces Worker remote Example!\n");
Worker worker = new Worker();
worker.init();
worker.start();
} catch (Exception ux) {
ux.printStackTrace();
System.err.println("transError problem..." + ux.getMessage());
}
}
}
2.2.4.1.3.  The ITask
The task interface.
public interface ITask extends Serializable{
public Result execute(String data);
}
2.2.4.1.4.  The Task
This is the ITask interface implementation used by the worker:
public class Task implements ITask{
private long counter = 0;
public Task() {
}
/**
* Execute the task
*/
public Result execute(String data)
{
counter++;
System.out.println("I am doing the task id = "+counter+" with data : "+data);
Result result = new Result();
result.setTaskID(counter);
// do the calc
result.setAnswer(data);
return result ;
}
}
2.2.4.1.5.  The Result
The Result object used to transport the Answer back to the client caller:
public class Result implements Serializable
{
private long taskID; // task id
private String answer = null; // result
public Result() {}
public String getAnswer() {return answer;   }
public void setAnswer(String answer){this.answer = answer;}
public long getTaskID(){   return taskID;}
public void setTaskID(long taskID){this.taskID = taskID;}
}
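The three classes above can be exercised locally without a space. The sketch below is a self-contained simplification (the interface and classes are inlined, and the space transport is omitted); it shows the contract the remoting proxy relies on: the caller sees only ITask, and the answer travels back in a Result.

```java
import java.io.Serializable;

public class LocalTaskDemo {
    // Inlined copies of the example's remoting contract.
    interface ITask extends Serializable {
        Result execute(String data);
    }

    static class Result implements Serializable {
        private long taskID;
        private String answer;
        public String getAnswer() { return answer; }
        public void setAnswer(String answer) { this.answer = answer; }
        public long getTaskID() { return taskID; }
        public void setTaskID(long taskID) { this.taskID = taskID; }
    }

    static class Task implements ITask {
        private long counter = 0;
        public Result execute(String data) {
            counter++;                  // task id for this invocation
            Result result = new Result();
            result.setTaskID(counter);
            result.setAnswer(data);     // the "calculation" simply echoes the input
            return result;
        }
    }

    public static void main(String[] args) {
        // In the real setup the proxy bean routes this call through the space;
        // here the task runs in-process to show the request/response shape.
        ITask task = new Task();
        Result res = task.execute("data");
        System.out.println(res.getTaskID() + ":" + res.getAnswer());
    }
}
```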
2.2.4.1.6.  gigaspaces_master_remoting.xml
The gigaspaces_master_remoting.xml includes properties injected into the following classes:
Table 1.
org.springmodules.javaspaces.gigaspaces.GigaSpacesUidFactory (bean name: gigaSpacesUidFactory) - Used to generate unique UIDs for tasks. When using a partitioned space, the UID hashcode determines the target space of the entry.
org.springmodules.javaspaces.DelegatingWorker (bean name: testBeanWorker) - The generic worker invoking the Task business logic.
com.gigaspaces.spring.GigaSpacesInterceptor (bean name: javaSpaceInterceptor) - The interceptor that passes the task from the client to the worker via the space.
org.springframework.aop.framework.ProxyFactoryBean (bean name: proxy) - FactoryBean implementation used to source AOP proxies from a Spring BeanFactory.
com.gigaspaces.spring.examples.remote.Task (bean name: taskBean) - The remote Task implementation.
2.2.4.1.7.  GigaSpacesUidFactory
To ensure that each client gets the relevant result object back from the worker, the task and result are injected with a unique UID generated on the client side.
Table 2.
space (Reference) - The space template.
2.2.4.1.7.1.  DelegatingWorker
The org.springmodules.javaspaces.DelegatingWorker configures the worker. This is not a singleton class. The DelegatingWorker includes the following properties:
Table 3.
javaSpaceTemplate - The GigaSpaces Spring template.
delegate - The "Task" bean to be injected into the worker.
businessInterface - The "Task" class interface.
2.2.4.1.7.2.  GigaSpacesInterceptor
The com.gigaspaces.spring.GigaSpacesInterceptor is the client-side interceptor that submits the task into the space and retrieves the result. Getting the result back can be done in a synchronous or asynchronous manner, allowing the client to wait, or to continue with its activity before the actual result has been sent back from the worker. The GigaSpacesInterceptor extends the JavaSpaceInterceptor, which supports UID injection into the task and result objects, allowing the client to retrieve the result related to a specific task.
Table 4.
javaSpaceTemplate (Reference) - The JavaSpace template.
uidFactory (Reference) - The task includes a unique identifier. This ensures that each client gets the correct result object in return.
synchronous (boolean, false/true) - Whether the client should wait until the master returns the result before continuing. When running in asynchronous mode and the result has not yet been sent back from the worker, the client waits the time defined by the timeoutMillis parameter in case no matching result exists within the space.
timeoutMillis (long) - Time in milliseconds to wait for a matching result to be found within the space (take timeout).
serializableTarget (Reference) - Causes this target to be passed to the space in a RunnableMethodEntry.
2.2.4.1.7.3.  ProxyFactoryBean
org.springframework.aop.framework.ProxyFactoryBean - the client-side proxy.
Table 5.
interceptorNames (list; values: javaSpaceInterceptor, PerformanceMonitorInterceptor) - Definition of the Spring AOP interceptor chain. The space interceptor must be the last interceptor, as there is no local target to invoke. Any number of other interceptors can be added, e.g. to monitor performance, add security or other functionality.
proxyInterfaces (list; value: com.gigaspaces.spring.examples.remote.ITask)
2.2.4.1.7.4.  Application context file
<beans>
    <bean id="gigaspace" class="org.springmodules.javaspaces.gigaspaces.GigaSpacesFactoryBean">
        <property name="urls">
            <list>
                <value>jini://*/*/remotingSpace</value>
            </list>
        </property>
    </bean>

    <bean id="gigaSpacesUidFactory" class="org.springmodules.javaspaces.gigaspaces.GigaSpacesUidFactory">
        <property name="space" ref="gigaspace"/>
    </bean>

    <bean id="gigaspacesTemplate" class="org.springmodules.javaspaces.gigaspaces.GigaSpacesTemplate">
        <property name="space" ref="gigaspace"/>
    </bean>

    <bean id="testBeanWorker" class="org.springmodules.javaspaces.DelegatingWorker" singleton="false">
        <property name="javaSpaceTemplate" ref="gigaspacesTemplate"/>
        <property name="delegate" ref="taskBean"/>
        <property name="businessInterface" value="com.gigaspaces.spring.examples.remote.ITask"/>
    </bean>

    <bean id="javaSpaceInterceptor" class="org.springmodules.javaspaces.gigaspaces.GigaSpacesInterceptor">
        <property name="javaSpaceTemplate" ref="gigaspacesTemplate"/>
        <property name="uidFactory" ref="gigaSpacesUidFactory"/>
        <property name="synchronous" value="true"/>
        <property name="timeoutMillis" value="3000"/>
    </bean>

    <bean id="proxy" class="org.springframework.aop.framework.ProxyFactoryBean">
        <property name="interceptorNames">
            <list>
                <value>javaSpaceInterceptor</value>
            </list>
        </property>
        <property name="proxyInterfaces" value="com.gigaspaces.spring.examples.remote.ITask"/>
    </bean>

    <bean id="taskBean" class="com.gigaspaces.spring.examples.remote.Task"/>
</beans>
2.2.5.  Service Grid
The Service Grid allows users to build POJO- or Spring-based applications as usual and deploy them into the grid as services. The Service Grid manages the life cycle of a deployed service by provisioning, starting and managing it while it runs in a Service Grid container.
Below is a simple Hello class implementation and the steps required to deploy it into the Service Grid.
2.2.5.1.  Hello Interface
Your Pojo should implement an interface:
package example;
import java.rmi.RemoteException;
public interface Hello {
/**
* Say hello!
*/
String sayHello(String greetings) throws RemoteException;
}
2.2.5.2.  Hello implementation
Here is the Hello class implementation:
package example;
public class HelloImpl implements Hello {
public String sayHello(String greetings) {
System.out.println("**** Greeter says : "+greetings);
return("Hello!");
}
public HelloImpl()
{
System.out.println("**** Hello Service Started! ****");
}
}
2.2.5.3.  The Deployment File
This is the Service Grid deployment file. It should include the example.Hello interface, the implementation class and the relevant library information:
[The deployment descriptor listing was lost in extraction. It declared XML entities CodeServerURL (resolved from java://java.net.InetAddress.getLocalHost().getHostAddress()) and Group (resolved from the com.gs.jini_lus.groups system property); an interfaces section exporting example.Hello with the download jars hello-dl.jar and JSpaces.jar; an implementation section with example.HelloImpl packaged in hello.jar plus JSpaces.jar; a serviceBeanFactory parameter with the value new com.gigaspaces.grid.bean.BeanFactory(); and a maintain count of 1 service instance.]
2.2.5.4.  The build file
Here is the Ant build.xml you need to build the hello library:
[The build file listing was lost in extraction. It compiled the example sources and produced two archives from ${example.classes}: hello.jar containing all classes, and hello-dl.jar excluding **/HelloImpl.class.]
2.2.5.5.  Deploying the Pojo
Start the \GigaSpacesEE5.0\ServiceGrid\bin\gsc.cmd - this starts a Service Grid container.
Start the \GigaSpacesEE5.0\ServiceGrid\bin\gsm.cmd - this starts the Service Grid Manager.
Start \GigaSpacesEE5.0\ServiceGrid\bin\gs.cmd - this starts the interactive Service Grid command line shell.
gs> deploy hello.xml
total 1
Deploying [Hello World Example], total services [1] ...
[1] Hello provisioned to       10.0.0.13
Deployment notification time 1062 millis, Command completed
The Service Grid container should display:
Jun 8, 2006 1:35:21 PM com.gigaspaces.grid.gsc.GSCImpl$InitialServicesLoadTask loadInitialServices
CONFIG: Loading [0] initialServices
**** Hello Service Started! ****
The Service Grid manager should display:
Jun 8, 2006 1:36:19 PM org.jini.rio.monitor.ServiceElementManager verify
FINE: ServiceElementManager.verify(): [Hello] actual [0], pending [0], maintain [1]
Jun 8, 2006 1:36:19 PM org.jini.rio.monitor.ServiceResourceSelector selectServiceResource
FINER: Grid Service Container at [10.0.0.13] has [0] instance(s), planned [1] of [Hello]
Jun 8, 2006 1:36:19 PM org.jini.rio.monitor.InstantiatorResource canProvision
FINER: Grid Service Container at [10.0.0.13] meets qualitative requirements for [Hello]
Jun 8, 2006 1:36:19 PM org.jini.rio.monitor.ServiceProvisioner$ProvisionTask doProvision
FINER: Allocating [Hello] ...
Jun 8, 2006 1:36:20 PM org.jini.rio.monitor.ServiceProvisioner$ProvisionTask doProvision
FINER: Allocated [Hello]
Jun 8, 2006 1:36:20 PM org.jini.rio.monitor.ServiceElementManager$JSBProvisionListener serviceProvisioned
FINE: [Hello] service provisioned, instance=Instance=[1] Proxy=[$Proxy15] ID=[863469f1-8974-4dbc-80d9-307148547b65] Host
Address=[10.0.0.13]
2.3.  Integration Implementation Classes
The architecture of the GigaSpaces integration with Spring is very similar to the Hibernate integration in Spring. The implementation is based on the Spring standards, including dependency injection, transaction attribute sources, configurable proxies/exporters for remote services, etc.
Basic support for accessing a space is provided via a GigaSpacesFactoryBean which is configured in Spring's XML definition file. Configuration primarily includes the String array of space URLs. The factory creates a singleton space proxy, or runs an embedded space when an embedded space URL is used.
The factory extends AbstractJavaSpaceFactoryBean, which has a createSpace() template method and adds listener implementations if specified.
2.3.1.  org.springmodules.javaspaces.gigaspaces.GigaSpacesFactoryBean
An entry point for the GigaSpaces Spring support. This is a standard Spring factory bean.
The following properties are injected:
Table 6.
urls - List of GigaSpaces space URLs. Remote URLs are tried one by one until a connection is established. An embedded space URL starts a new space instance in the running application's address space.
listeners - Notify templates allowing notifications when matching entries are written to the space.
The standard GigaSpacesFactoryBean.getObject() method creates or accesses an IJSpace object according to the provided URLs; this object is then used by the GigaSpacesTemplate, GigaSpacesDaoSupport, or GigaSpacesLocalTransactionManagerFactoryBean.
2.3.2.  org.springmodules.javaspaces.gigaspaces.GigaSpacesDaoSupport
The GigaSpacesDaoSupport extends the org.springframework.dao.support.DaoSupport.
This is a support class, intended for extension by the application developer for writing Data Access Objects which perform domain-level operations on the supplied space. In order to operate, the DAO object should be injected either with a pre-instantiated GigaSpacesTemplate, or with an IJSpace. Extending classes will typically use the getGigaSpaceTemplate() method for performing space operations, but direct access to the space via the IJSpace is also possible.
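As an illustrative sketch (the SimpleBeanDao class and bean ids are hypothetical, and the gigaSpaceTemplate property name is inferred from the getGigaSpaceTemplate() accessor mentioned above), such a DAO might be wired like this:

```xml
<!-- hypothetical DAO extending GigaSpacesDaoSupport -->
<bean id="simpleBeanDao" class="com.example.SimpleBeanDao">
    <!-- inject either a pre-instantiated template ... -->
    <property name="gigaSpaceTemplate" ref="gigaspacesTemplate"/>
    <!-- ... or, alternatively, an IJSpace directly -->
</bean>
```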
2.3.3.  org.springmodules.javaspaces.JavaSpaceTemplate
Implementation of the Spring "template" concept for JavaSpaces. Translates exceptions into Spring's exception hierarchy and simplifies performing several operations within a single method.
The JavaSpaceTemplate provides the following methods:
Table 7.
void afterPropertiesSet()
java.lang.Object execute(JavaSpaceCallback jsc) - Perform multiple JavaSpaces tasks in one transaction.
net.jini.space.JavaSpace getSpace() - Return the JavaSpace this template operates on.
boolean isUseTransaction() - Return true if a transaction is used.
net.jini.core.event.EventRegistration notify(net.jini.core.entry.Entry template, net.jini.core.event.RemoteEventListener listener, long millis, java.rmi.MarshalledObject handback) - When entries are written that match this template, notify the given listener with a RemoteEvent that includes the handback object.
net.jini.core.entry.Entry read(net.jini.core.entry.Entry template, long millis) - Read, using the current transaction, any matching entry from the space, blocking until one exists.
net.jini.core.entry.Entry readIfExists(net.jini.core.entry.Entry template, long millis) - Read, using the current transaction, any matching entry from the space, returning null if there is currently none.
void setSpace(net.jini.space.JavaSpace space)
void setUseTransaction(boolean useTransaction) - Set to true to use transactions with space operations.
net.jini.core.entry.Entry snapshot(net.jini.core.entry.Entry entry) - Return a formatted entry. The snapshot method gives the JavaSpaces service implementor a way to reduce the impact of repeated use of the same entry.
net.jini.core.entry.Entry take(net.jini.core.entry.Entry template, long millis) - Take, using the current transaction, a matching entry from the space, waiting until one exists.
net.jini.core.entry.Entry takeIfExists(net.jini.core.entry.Entry template, long millis) - Take, using the current transaction, a matching entry from the space, returning null if there is currently none.
net.jini.core.lease.Lease write(net.jini.core.entry.Entry entry, long millis) - Write, using the current transaction, a new entry into the space.
2.3.4.  org.springmodules.javaspaces.gigaspaces.GigaSpacesTemplate
The GigaSpacesTemplate extends the JavaSpaceTemplate and provides GigaSpaces enhanced JavaSpaces operations.
Responsible for supplying application developers with a collection of helper methods for accessing the space, while wrapping specific checked exceptions thrown due to Space operations with Spring‘s generic runtime exceptions. The template also exposes one general-purpose method, which accepts a JavaSpaceCallback object from the client application. This callback is where application logic code may be implemented, directly working with the space. The callback mechanism allows exception conversion to take place even when writing low-level code.
The GigaSpacesTemplate methods accept not only objects implementing the Entry interface (as defined by the JavaSpaces specification) but any type of object which has a no-argument constructor and exposes its meaningful data members via accessor/mutator methods - in other words, a POJO.
The template object exposes a general purpose method, execute(), that accepts a JavaSpaceCallback object, where application logic is implemented. The method invokes the callback object, wrapping the applicative logic with exception conversion mechanism.
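The template-and-callback idiom described above can be illustrated in plain Java. The sketch below is a simplification with no GigaSpaces dependency (MiniTemplate and SpaceCallback are invented names; the real GigaSpacesTemplate additionally manages the space, transactions, and translation into Spring's DataAccessException hierarchy):

```java
// Minimal illustration of the Spring "template" idiom: the template owns the
// resource and the exception translation, the callback carries the logic.
public class TemplateIdiomDemo {

    interface SpaceCallback {
        Object doInSpace(StringBuilder space) throws Exception; // checked, like JavaSpaceCallback
    }

    static class MiniTemplate {
        private final StringBuilder space = new StringBuilder(); // stands in for IJSpace

        Object execute(SpaceCallback callback) {
            try {
                return callback.doInSpace(space);
            } catch (Exception e) {
                // Translate checked exceptions into a runtime hierarchy,
                // as the real template translates into Spring's exceptions.
                throw new RuntimeException("space operation failed", e);
            }
        }
    }

    public static void main(String[] args) {
        MiniTemplate template = new MiniTemplate();
        Object result = template.execute(new SpaceCallback() {
            public Object doInSpace(StringBuilder space) {
                space.append("entry");   // several operations share one "context"
                return space.toString();
            }
        });
        System.out.println(result);
    }
}
```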
The GigaSpacesTemplate includes the following methods:
Table 8.
com.j_spaces.core.client.NotifyDelegator addNotifyDelegatorListener(org.springmodules.javaspaces.JavaSpaceListener javaSpaceListener, boolean fifoEnabled, int notifyMask) - When entries are written that match this template, notify the given listener with a RemoteEvent that includes the handback object.
com.j_spaces.core.client.NotifyDelegator addNotifyDelegatorListener(net.jini.core.event.RemoteEventListener listener, java.lang.Object templatePojo, java.rmi.MarshalledObject handback, boolean fifoEnabled, long lease, int notifyMask) - When POJOs are written that match this template, notify the given listener with a RemoteEvent that includes the handback object.
void afterPropertiesSet() - Overrides the method in JavaSpaceTemplate; does not throw an exception if the space is null.
void clean() - Cleans this space.
void clear(net.jini.core.entry.Entry entry) - Removes the entries that match the specified template and the specified transaction from this space.
void clear(java.lang.Object pojo) - Removes the entries that match the specified template and the specified transaction from this space.
int count(net.jini.core.entry.Entry entry) - Counts the number of entries that match the specified template and the specified transaction.
int count(java.lang.Object pojo) - Counts the number of entries that match the specified template and the specified transaction.
void dropClass(java.lang.String className) - Drops all the class's entries and all its templates from the space.
java.lang.Object execute(org.springmodules.javaspaces.JavaSpaceCallback jsc) - Checks if the space is null before executing.
java.lang.Object getAdmin() - Returns the admin object of the remote part of this space.
java.lang.String getName() - Returns the name of this space.
int getReadTakeModifiers() - Gets the proxyReadTakeModifiers.
int getUpdateModifiers() - Gets the proxyUpdateModifiers.
boolean isEmbedded() - Checks whether the proxy is connected to an embedded or a remote space.
boolean isFifo() - Returns true if this proxy is FIFO-enabled, otherwise false.
boolean isNOWriteLeaseMode() - Checks the write mode.
boolean isOptimisticLockingEnabled() - Returns the status of the optimistic locking protocol.
boolean isSecured() - Returns an indication of whether this space is secured.
net.jini.core.event.EventRegistration notify(net.jini.core.entry.Entry template, net.jini.core.event.RemoteEventListener listener, long millis, java.rmi.MarshalledObject handback, net.jini.core.transaction.Transaction tx) - When entries are written that match this template, notify the given listener with a RemoteEvent that includes the handback object.
net.jini.core.event.EventRegistration notify(java.lang.Object templatePojo, net.jini.core.event.RemoteEventListener listener, long millis, java.rmi.MarshalledObject handback, net.jini.core.transaction.Transaction tx) - When POJOs are written that match this template, notify the given listener with a RemoteEvent that includes the handback object.
void ping() - Checks whether the space is alive and accessible.
java.lang.Object read(java.lang.Object pojo, long lease) - Reads the POJO from the space.
java.lang.Object readIfExists(java.lang.Object pojo, long lease) - Reads the POJO from the space if it exists.
net.jini.core.entry.Entry[] readMultiple(net.jini.core.entry.Entry entry, int maxEntries) - Reads all the entries matching the specified template from this space.
java.lang.Object[] readMultiple(java.lang.Object pojo, int maxEntries) - Reads all the entries matching the specified template from this space.
void setFifo(boolean enable) - Sets FIFO mode for the proxy.
void setNOWriteLeaseMode(boolean enable) - Enables/disables noWriteLease mode.
void setOptimisticLocking(boolean enable) - Enables/disables the optimistic locking protocol.
int setReadTakeModifiers(int newModifiers) - Sets the read/take mode modifiers at the proxy level.
int setUpdateModifiers(int newModifiers) - Sets the update mode modifiers at the proxy level.
java.lang.Object snapshot(java.lang.Object obj) - Snapshots the POJO.
java.lang.Object take(java.lang.Object template, long millis) - Takes the POJO from the space.
java.lang.Object takeIfExists(java.lang.Object template, long millis) - Takes the POJO from the space if it exists.
java.lang.Object[] takeMultiple(net.jini.core.entry.Entry entry, int maxEntries) - Takes all the entries matching the specified template from this space.
java.lang.Object[] takeMultiple(java.lang.Object pojo, int maxEntries) - Takes all the entries matching the specified template from this space.
net.jini.core.entry.Entry update(net.jini.core.entry.Entry newEntry, long lease, long timeout) - Updates the first entry matching the specified template, if found and there is no transaction conflict.
net.jini.core.entry.Entry update(net.jini.core.entry.Entry newEntry, long lease, long timeout, int updateModifiers) - Updates the first entry matching the specified template, if found and there is no transaction conflict.
java.lang.Object update(java.lang.Object newPojo, long lease, long timeout) - Updates the first entry matching the specified template, if found and there is no transaction conflict.
java.lang.Object update(java.lang.Object newPojo, long lease, long timeout, int updateModifiers) - Updates the first entry matching the specified template, if found and there is no transaction conflict.
java.lang.Object[] updateMultiple(net.jini.core.entry.Entry[] entries, long[] leases) - Updates a group of entries.
java.lang.Object[] updateMultiple(net.jini.core.entry.Entry[] entries, long[] leases, int updateModifiers) - Updates a group of entries.
java.lang.Object[] updateMultiple(java.lang.Object[] pojos, long[] leases) - Updates a group of POJOs.
java.lang.Object[] updateMultiple(java.lang.Object[] pojos, long[] leases, int updateModifiers) - Updates a group of POJOs.
net.jini.core.lease.Lease write(java.lang.Object pojo) - Writes the POJO to the space with lease Long.MAX_VALUE.
net.jini.core.lease.Lease write(java.lang.Object pojo, long lease) - Writes the POJO to the space.
net.jini.core.lease.Lease[] writeMultiple(net.jini.core.entry.Entry[] entries, long lease) - Writes the specified entries to this space.
net.jini.core.lease.Lease[] writeMultiple(java.lang.Object[] pojos, long lease) - Writes the specified entries to this space.
2.3.5.  org.springmodules.javaspaces.gigaspaces.GigaSpacesLocalTransactionManagerFactoryBean
Extends the org.springframework.transaction.jini.AbstractTransactionManagerFactoryBean class defined in Spring, which integrates with Spring's existing transaction management mechanism.
The class implements the template method createTransactionManager(), which creates the local transaction manager using the GigaSpaces LocalTransactionManager.
The GigaSpaces Spring transaction support is responsible for creating, starting, suspending, resuming, committing and rolling back the transactions which encompass space resource(s). The transaction manager is injected into Spring's generic TransactionInterceptor, which intercepts calls to services available on the application context using a proxy, and maintains transactional contexts for these calls based on configuration details including propagation, isolation, etc. These configuration details may be defined as configuration data in the bean descriptor XML file, using Java 5 annotations in the code, or via any other valid implementation of Spring's TransactionAttributeSource interface.
The following transaction propagation behaviors are supported:
- RequiresNew
- Never
- Required
- Mandatory
- Supports
- NotSupported
2.4.  Spring Configuration Files
2.4.1.  Application Context xml
Includes the GigaSpacesFactoryBean:
<beans>
    <bean id="gigaspace" class="org.springmodules.javaspaces.gigaspaces.GigaSpacesFactoryBean">
        <property name="urls">
            <list>
                <value>jini://*/*/myCache</value>
            </list>
        </property>
    </bean>
</beans>
2.4.2.  The Dao xml
Defines the client's POJO DAO mapping. For each field there is an indicator of whether it is a primary key and whether it needs to be generated.
[The gs.xml listing was lost in extraction. It contained a class-descriptor for the example POJO with persistent="true" replicatable="false" fifo="true" timetolive="Long.MAX_VALUE" and field-descriptor entries with primary-key="false" auto-generate-pk="false", a reference-descriptor with class-ref="com.gigaspaces.spring.examples.BaseSimpleBean", and a class-descriptor for com.gigaspaces.spring.examples.BaseSimpleBean with the same attributes.]
2.4.2.1.  class-descriptor
A class-descriptor and the associated Java class ClassDescriptor encapsulate the metadata of a concrete class.
Table 9.
name - Contains the fully qualified name of the specified class. As this attribute is of the XML type ID, there can be only one class-descriptor per class.
persistent - Indicates whether the field is transient in the ExternalEntry.
fifo - Indicates if the POJO will be saved in FIFO order in the space.
timetolive - Time (in milliseconds) left for this entry to live. This value is correct at operation time.
2.4.2.2.  field-descriptor
A field-descriptor contains mapping info for a primitive-typed attribute of a persistent class.
Table 10.
name - Holds the name of the persistent class's attribute.
index - Indicates which fields are indexed in the space. The first indexed member is used for hashing.
primary-key - Specifies if the field is marked as a primary key; the default value is false. It is possible to auto-assign primary key fields (see more details below). The field must have a toString() method whose return value cannot change at runtime.
auto-generate-pk - Specifies if the values for the persistent attribute should be automatically generated by the space. The field must be of type java.lang.String.
2.4.2.3.  reference-descriptor
A reference-descriptor contains mapping info for an attribute of a class that is not primitive but references another entity object.
2.4.2.4.  class-ref
The class-ref attribute contains the fully qualified name of the specified class.
2.4.3.  transaction.xml
This file includes the gigaspacesTransactionAttributeSource and TransactionInterceptor settings:
[The transaction.xml listing was lost in extraction. It defined: an org.springmodules.javaspaces.gigaspaces.GigaSpacesFactoryBean with the space URL rmi://localhost:10098/./myCache; an org.springmodules.javaspaces.gigaspaces.transaction.GigaSpacesLocalTransactionManagerFactoryBean; an org.springmodules.javaspaces.transaction.jini.JiniTransactionManager; an org.springframework.transaction.interceptor.NameMatchTransactionAttributeSource mapping method names to PROPAGATION_MANDATORY, PROPAGATION_NEVER, PROPAGATION_REQUIRED, PROPAGATION_REQUIRES_NEW, PROPAGATION_SUPPORTS and PROPAGATION_NOT_SUPPORTED (several combined with +java.lang.RuntimeException); an org.springframework.transaction.interceptor.TransactionInterceptor; the com.gigaspaces.spring.examples.transaction.TransactedDao and a GigaSpacesTemplate; an org.springframework.aop.framework.ProxyFactoryBean exposing com.gigaspaces.spring.examples.transaction.ITransactedDao through the txInterceptor; and a com.gigaspaces.spring.examples.transaction.SimpleBean bean.]
2.4.4.  Pojo Primary Key setting
A POJO can be declared with or without a primary key. The primary key type can be java.lang.String or any other type, as long as it implements toString() and the toString() return value does not change during the object's lifetime. The following table describes the operations supported when using the primary key field.
Table 11.
Without primary key: Write - Supported; Take/Read - Supported; Update - Not Supported.
With primary key, auto-generated: Write - Supported; Take/Read - Supported when sending the pk field as not null, or when calling toEntry() with the parameter isIgnoreGenerateAutoPk = true; Update - Supported.
With primary key, no auto-generation: Write - Supported; Take/Read - Supported; Update - Supported.
Note:
If there is more than one primary key with auto-generation, the converter will generate a UID for each primary key. The UIDs will also be set on the POJO primary key fields.
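For illustration only (the class and field names here are hypothetical), a mapping that combines an auto-generated primary key with a plain field would carry attributes like these:

```xml
<class-descriptor name="com.example.Order"
    persistent="true" replicatable="false" fifo="false" timetolive="Long.MAX_VALUE">
    <!-- auto-generated PK: the field must be of type java.lang.String -->
    <field-descriptor name="id" primary-key="true" auto-generate-pk="true"/>
    <field-descriptor name="amount" primary-key="false" auto-generate-pk="false"/>
</class-descriptor>
```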
2.4.4.1.  Example
[The example gs.xml listing was lost in extraction. It mirrored the mapping in section 2.4.2, with class-descriptor entries (persistent="true" replicatable="false" fifo="true" timetolive="100"), field-descriptor entries with primary-key="false" auto-generate-pk="false", a reference-descriptor with class-ref="com.gigaspaces.spring.tests.app.BaseSimpleBean", and a class-descriptor for com.gigaspaces.spring.tests.app.BaseSimpleBean.]
2.5.  3rd party packages
The following libraries are used as part of the GigaSpaces Spring integration.
Apache Digester - commons-digester-gs-1.7.jar.
Apache Commons - commons-beanutils-gs.jar
Apache Velocity - velocity-1.4.jar
Remoting - cglib-nodep-2.1_3.jar
Transaction support - jta.jar
2.6.  References
Spring Framework - http://www.springframework.org, http://www.interface21.com/
Beanutils - using http://jakarta.apache.org/commons/beanutils for reflection, inspecting class metadata in order to build an ExternalEntry from a POJO.
Digester - using http://jakarta.apache.org/commons/digester for parsing the gs.xml which describes the POJO-to-ExternalEntry mapping.
Hibernate - http://hibernate.org
Velocity - using http://jakarta.apache.org/velocity
Chapter 9. jBPM 3.1.x
Note
The following documentation can be used as reference documentation for Spring Modules jBPM 3.0.x support as well.
9.1. Introduction
The jBPM module offers integration between Spring and jBPM, allowing reuse of Spring's Hibernate support along with the IoC container. The module allows jBPM's underlying Hibernate SessionFactory to be configured through Spring, and jBPM actions to access Spring's context.
9.2. Configuration
Users familiar with Spring will see that the jBPM module structure resembles the orm package from the main Spring distribution. The module offers a central template class for working with jBPM, a callback to access the native JbpmContext, and a local factory bean for configuring and creating a jBPM instance.
[The configuration listing was lost in extraction. It wired a LocalJbpmConfigurationFactoryBean with an existing Hibernate SessionFactory and transaction manager, and loaded process definitions such as classpath:/org/springmodules/workflow/jbpm31/someOtherWorkflow.xml.]
The example above shows how (existing) Spring-managed Hibernate SessionFactories and transaction management can be reused with jBPM.
9.2.1. LocalJbpmConfigurationFactoryBean
The main element is LocalJbpmConfigurationFactoryBean, which should be familiar to users accustomed to Spring. Based on the jBPM configuration file and the given SessionFactory, it will create a jBPM configuration which can be used for working with the given process definitions. It is possible to replace the jBPM XML configuration with the ObjectFactory newly added in jBPM 3.1.x - note that if both are present, the XML configuration is preferred. LocalJbpmConfigurationFactoryBean also allows the creation of the underlying schema, based on the process definitions loaded automatically at startup.
Note that the sessionFactory property is not mandatory - a Hibernate SessionFactory can be reused with jBPM, or jBPM can work by itself without any integration with the existing infrastructure. However, in most scenarios, using LocalJbpmConfigurationFactoryBean allows one to take advantage of Spring's transaction management infrastructure, so it is possible, without any code change, to use jBPM, Hibernate and JDBC-based code inside the same transactional context, be it managed locally or globally (JTA). Moreover, it is possible to use thread-bound session or OpenSessionInView patterns with jBPM.
LocalJbpmConfigurationFactoryBean is also aware of the enclosing applicationContext lifecycle - jBPM will be initialized once the context is started (usually application startup) and will be closed properly when the context is destroyed (application is shutdown).
Note that LocalJbpmConfigurationFactoryBean can be configured programmatically and can be used standalone, purely to build a jBPM context which can be used independently of the Spring Modules jBPM support.
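A typical configuration might look like the sketch below. Bean ids are illustrative, the sessionFactory bean is assumed to be defined elsewhere, and the property names follow this section's description (sessionFactory, an XML configuration resource, and processDefinitionsResources) - consult the module's javadoc for the exact signatures:

```xml
<bean id="jbpmConfiguration"
      class="org.springmodules.workflow.jbpm31.LocalJbpmConfigurationFactoryBean">
    <!-- optional: reuse an existing Spring-managed Hibernate SessionFactory -->
    <property name="sessionFactory" ref="sessionFactory"/>
    <!-- the jBPM configuration file; an ObjectFactory may be used instead -->
    <property name="configuration" value="classpath:jbpm.cfg.xml"/>
    <!-- process definitions deployed automatically at startup -->
    <property name="processDefinitionsResources">
        <list>
            <value>classpath:/org/springmodules/workflow/jbpm31/someOtherWorkflow.xml</value>
        </list>
    </property>
</bean>
```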
9.2.2. Inversion of Control: JbpmTemplate and JbpmCallback
Another important feature of the Spring Modules jBPM support is JbpmTemplate. The template offers very convenient ways of working directly with process definitions as well as the jBPM API, taking care of handling exceptions (be they jBPM- or Hibernate-based) with respect to the ongoing transaction (if present), the underlying Hibernate session (if persistent services are used) and the jBPM context. jBPM exceptions (and the underlying Hibernate information) are translated into Spring's DAO exception hierarchy. Everything happens in a transparent and consistent manner. This is possible, as with every Spring-style template, even when direct access to the native JbpmContext is desired, through the JbpmCallback:
public ProcessInstance findProcessInstance(final Long processInstanceId) {
    return (ProcessInstance) execute(new JbpmCallback() {
        public Object doInJbpm(JbpmContext context) {
            // do something...
            return context.getGraphSession().loadProcessInstance(processInstanceId.longValue());
        }
    });
}
Like LocalJbpmConfigurationFactoryBean, the JbpmTemplate can be configured programmatically, used standalone on a pre-existing jbpmContext (configured through LocalJbpmConfigurationFactoryBean or not), and used independently of the Spring Modules jBPM support.
9.2.3. ProcessDefinitionFactoryBean
ProcessDefinitionFactoryBean is a simple reader that loads a jBPM process definition using Spring's ResourceLoader. Thus, the XML files can be loaded from the classpath, from a relative or absolute file path, or even from the Servlet Context. See the official documentation for more information.
Note
As reported on the forums, using ProcessDefinitionFactoryBean with jBPM 3.1.1 will trigger a new process definition to be persisted (through deployProcessDefinition) at each startup. While this is useful in development, when the database is created on application startup and destroyed on closing, for cases where the definition doesn't change the process should not be declared inside Spring XML files.
Note
As reported here, due to the static nature of jBPM, process definitions which include sub-processes are not loaded properly if a JbpmContext does not exist at the time of loading (no exception is thrown whatsoever). As a workaround, consider using LocalJbpmConfigurationFactoryBean's processDefinitionsResources property.
9.2.4. Outside Spring container
It is important to note that while our example showed LocalJbpmConfigurationFactoryBean and JbpmTemplate inside a Spring XML file, these classes do not depend on each other or on a Spring application context. They can just as well be configured programmatically and used outside the Spring container.
9.3. Accessing Spring beans from jBPM actions
Another important feature of the Spring Modules jBPM integration is allowing Spring-configured beans to be reused inside jBPM actions. This allows one to leverage Spring container capabilities (bean lifecycles, scoping, injection and proxying, just to name a few) in a transparent way with jBPM. Consider the following Spring application context:
.....
and jBPM process definition:
jbpmActionjbpmConfiguration
JbpmHandlerProxy transparently locates the Spring applicationContext, searches for the bean identified by the targetBean parameter (in this case jbpmAction) and delegates all calls to the jBPM action. This way, one is not limited to the injection offered by the jBPM container and can integrate and communicate in a very easy manner with other Spring-managed beans. Moreover, the action lifecycle can be singleton (one shared instance), prototype (every call gets a new instance) or, in Spring 2.0, scoped to a certain application component (like one instance per HTTP session).
The optional factoryKey parameter specified in this example should be used when dealing with more than one jBPM configuration inside the same classloader (not common in practice). The factoryKey should be the same as the bean name of the LocalJbpmConfigurationFactoryBean to be used (in our case jbpmConfiguration).
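A sketch of the corresponding process definition fragment may clarify the wiring (only the targetBean and factoryKey parameters come from the text above; the process structure and element names around them are illustrative assumptions):

```xml
<!-- hypothetical process definition sketch -->
<process-definition name="sample">
  <state name="waiting">
    <event type="node-enter">
      <!-- delegate the action to the Spring bean named 'jbpmAction' -->
      <action class="org.springmodules.workflow.jbpm31.JbpmHandlerProxy">
        <targetBean>jbpmAction</targetBean>
        <!-- only needed with multiple jBPM configurations in one classloader -->
        <factoryKey>jbpmConfiguration</factoryKey>
      </action>
    </event>
  </state>
</process-definition>
```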
Chapter 10. Java Content Repository (JSR-170)
10.1. Introduction
JSR-170 defines "a standard, implementation independent, way to access content bi-directionally on a granular level within a content repository. A Content Repository is a high-level information management system that is a superset of traditional data repositories. A content repository implements "content services" such as: author based versioning, full textual searching, fine grained access control, content categorization and content event monitoring. It is these "content services" that differentiate a Content Repository from a Data Repository." (taken from the JSR-170 description page).
More information about the Java Content Repository (from here on referred to as JCR) can be found here.
The package has been designed to resemble as much as possible the ORM packages from the main Spring distribution. Users familiar with these can start using the JCR support right away without much hassle; the documentation resembles the main documentation structure also. For those who haven't used them, please refer to the main Spring documentation, mainly chapter 12 (Data Access using O/R Mappers), as the current documentation focuses on the JCR-specific details, the Spring infrastructure being outside the scope of this document. As with the ORM packages, the main reasons for the JCR support are to ease development through Spring's unchecked DAO exception hierarchy, integrated transaction management and ease of testing.
Before going any further I would like to thank Guillaume Bort and Brian Moseley, who worked on implementations of their own and were kind enough to provide their code and ideas when I started working on this package.
10.2. JSR standard support
The standard support works only with the JSR-170 API (represented by the javax.jcr package) without making any use of implementation-specific features (which will be discussed later).
10.2.1. SessionFactory
JSR-170 doesn't provide a notion of a SessionFactory but rather a repository which, based on the credentials and workspace provided, returns a session. The SessionFactory interface describes a basic contract for retrieving sessions without any knowledge of credentials, its implementations acting as wrappers around javax.jcr.Repository:

The only requirement for creating a SessionFactory is the repository (which will be discussed later). There are cases where credentials have to be submitted. One problem new users face is that javax.jcr.SimpleCredentials requires a char array (char[]) as a constructor parameter and not a String, and the current Spring distribution (1.2.5) does not contain a PropertyEditor for char arrays. The following example (taken from the sample) shows how String's toCharArray method can be used to obtain a char array:

Using toCharArray (from java.lang.String), we transformed the String supplied as password (with value 'pass') into SimpleCredentials for user 'bogus'. Note that JcrSessionFactory can also register namespaces and add listeners, and has utility methods for determining the underlying repository's properties - see the javadoc and the samples for more information.
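The configuration described above can be sketched as follows (the JcrSessionFactory property names are assumptions based on the description; the user 'bogus' and password 'pass' come from the text):

```xml
<bean id="sessionFactory" class="org.springmodules.jcr.JcrSessionFactory">
  <property name="repository" ref="repository"/>
  <property name="credentials">
    <bean class="javax.jcr.SimpleCredentials">
      <constructor-arg index="0" value="bogus"/>
      <!-- obtain the required char[] by calling toCharArray on a String bean -->
      <constructor-arg index="1">
        <bean factory-bean="password" factory-method="toCharArray"/>
      </constructor-arg>
    </bean>
  </property>
</bean>

<bean id="password" class="java.lang.String">
  <constructor-arg value="pass"/>
</bean>
```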
10.2.1.1. Namespace registration
The JcrSessionFactory allows namespace registration based on the standard JSR-170 API. It is possible to override the existing namespaces (if any) and to register namespaces just for the lifetime of the JcrSessionFactory. By default, the given namespaces are registered only if they occupy free prefixes, and they are kept in the repository even after the SessionFactory shuts down.
To register the namespaces, simply pass them as a Properties object, with the key representing the prefix and the value representing the namespace:
...http://bar.com/jcrhttp://pocus.com/jcr
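A sketch of such a configuration (the namespace URIs come from the sample above; the prefixes and the namespaces property name are assumptions):

```xml
<bean id="sessionFactory" class="org.springmodules.jcr.JcrSessionFactory">
  <property name="repository" ref="repository"/>
  <property name="namespaces">
    <props>
      <!-- key = prefix, value = namespace URI -->
      <prop key="foo">http://bar.com/jcr</prop>
      <prop key="hocus">http://pocus.com/jcr</prop>
    </props>
  </property>
</bean>
```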
One can customize the behavior of the JcrSessionFactory using three flags:
forceNamespacesRegistration - indicates whether namespaces already registered under the given prefixes will be overridden (if true) or not (the default). If true, the existing namespaces will be unregistered before registering the new ones. Note however that most (if not all) JCR implementations do not support namespace un-registration.
keepNewNamespaces - indicates whether the given namespaces are kept after being registered (the default) or unregistered on SessionFactory destruction. If not kept, the namespaces unregistered during the registration process will be registered back on the repository. Again, as noted above, this requires the JCR implementation to support namespace un-registration.
skipExistingNamespaces - indicates whether, during the registration process, existing namespaces are skipped (the default) or not. This flag is used as a workaround for repositories that don't support namespace un-registration (which renders forceNamespacesRegistration and keepNewNamespaces useless). If true, new namespaces are registered only if they use a free prefix; if the prefix is taken, the namespace registration is skipped.
10.2.1.2. Event Listeners
JSR-170 repositories which support Observation allow the developer to monitor various event types inside a workspace. However, any potential listener has to be registered on a per-session basis, which makes session creation difficult. JcrSessionFactory eases the process by supporting global (across all sessions) listeners through EventListenerDefinition, a simple wrapper class which associates a JCR EventListener with event types, node paths and UUIDs (allowing, if desired, the same EventListener instance to be reused across sessions and event types).
Configuring the listener is straightforward:
...
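A sketch of such a listener definition (the EventListenerDefinition property names and the list-valued eventListeners property on JcrSessionFactory are assumptions based on the description above; eventTypes 3 combines the standard javax.jcr.observation.Event constants NODE_ADDED (1) and NODE_REMOVED (2)):

```xml
<bean id="sessionFactory" class="org.springmodules.jcr.JcrSessionFactory">
  <property name="repository" ref="repository"/>
  <property name="eventListeners">
    <list>
      <bean class="org.springmodules.jcr.EventListenerDefinition">
        <!-- the actual javax.jcr.observation.EventListener implementation -->
        <property name="listener" ref="myListener"/>
        <!-- Event.NODE_ADDED | Event.NODE_REMOVED -->
        <property name="eventTypes" value="3"/>
        <property name="absPath" value="/"/>
      </bean>
    </list>
  </property>
</bean>
```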
10.2.1.3. NodeTypeDefinition registration
The JCR 1.0 specification allows custom node types to be registered in a repository but doesn't standardise the process, so each JCR implementation comes with its own approach. For Jackrabbit, the JCR module provides a dedicated SessionFactory, JackrabbitSessionFactory, which allows node type definitions in the CND format to be added to the repository:
...classpath:/nodeTypes/wikiTypes.cndclasspath:/nodeTypes/clientATypes.cnd
If there is no need to register custom node types, it is recommended to use JcrSessionFactory, since it works on all JCR repositories.
10.2.2. Inversion of Control: JcrTemplate and JcrCallback
Most of the work with the JCR will be done through the JcrTemplate itself or through a JcrCallback. The template requires a SessionFactory and can be configured to create sessions on demand or to reuse them (thread-bound) - the default behavior.

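A minimal template definition might look like the following sketch (the allowCreate property name is taken from the transaction note later in this chapter; treat it as an assumption to verify against the javadoc):

```xml
<bean id="jcrTemplate" class="org.springmodules.jcr.JcrTemplate">
  <property name="sessionFactory" ref="sessionFactory"/>
  <!-- false (the default) reuses the thread-bound session -->
  <property name="allowCreate" value="false"/>
</bean>
```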
JcrTemplate contains many of the operations defined on the javax.jcr.Session and javax.jcr.query.Query classes, plus some convenient ones; however, there are cases when they are not enough. With JcrCallback, one can work directly with the Session, while the template, which is thread-safe, opens/closes sessions and deals with exceptions:
public void saveSmth() {
    template.execute(new JcrCallback() {
        public Object doInJcr(Session session) throws RepositoryException {
            Node root = session.getRootNode();
            log.info("starting from root node " + root);
            Node sample = root.addNode("sample node");
            sample.setProperty("sample property", "bla bla");
            log.info("saved property " + sample);
            session.save();
            return null;
        }
    });
}

10.2.2.1. Implementing Spring-based DAOs without callbacks
The developer can access the repository in a more 'traditional' way, without using JcrTemplate (and JcrCallback), but still use the Spring DAO exception hierarchy. Spring Modules' JcrDaoSupport offers base methods for retrieving a Session from the SessionFactory (in a transaction-aware manner if transactions are supported) and for converting exceptions (using the SessionFactoryUtils static methods). Note that such code will usually pass "false" into getSession's "allowCreate" flag, to enforce running within a transaction (which avoids the need to close the returned Session, as its lifecycle is managed by the transaction):
public class ProductDaoImpl extends JcrDaoSupport {
    public void saveSmth() throws DataAccessException, MyException {
        Session session = getSession();
        try {
            Node root = session.getRootNode();
            log.info("starting from root node " + root);
            Node sample = root.addNode("sample node");
            sample.setProperty("sample property", "bla bla");
            log.info("saved property " + sample);
            session.save();
        }
        catch (RepositoryException ex) {
            throw convertJcrAccessException(ex);
        }
    }
}
The major advantage of such direct JCR access code is that it allows any checked application exception to be thrown within the data access code, while JcrTemplate is restricted to unchecked exceptions within the callback. Note that one can often defer the corresponding checks and the throwing of application exceptions to after the callback, which still allows working with JcrTemplate. In general, JcrTemplate‘s convenience methods are simpler and more convenient for many scenarios.
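The template/callback contract that both approaches revolve around can be sketched in plain Java, with no JCR dependency (all type names here are hypothetical stand-ins, not Spring Modules classes): the template opens the resource, runs the callback, translates checked exceptions into a runtime hierarchy, and always cleans up.

```java
// Hypothetical stand-ins illustrating the template/callback idiom.
interface Resource {
    void save();
    void close();
}

interface Callback {
    Object doWork(Resource resource) throws Exception;
}

class ResourceTemplate {
    public Object execute(Callback callback) {
        Resource resource = open();
        try {
            return callback.doWork(resource);
        } catch (Exception ex) {
            // checked exceptions are translated into a runtime hierarchy,
            // mirroring how JcrTemplate maps them to Spring's DataAccessException
            throw new RuntimeException("data access failure", ex);
        } finally {
            resource.close(); // cleanup happens whatever the outcome
        }
    }

    protected Resource open() {
        // a trivial no-op resource, standing in for a JCR Session
        return new Resource() {
            public void save() { }
            public void close() { }
        };
    }
}
```

The finally block is why callback code never needs to close the Session itself, and the catch block is why only unchecked exceptions can cross the callback boundary.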
10.2.3. RepositoryFactoryBean
Repository configuration has not been addressed by JSR-170, and every implementation has a different approach. The JCR support provides an abstract repository factory bean which defines the main functionality, leaving subclasses to deal only with configuration issues. The current version supports Jackrabbit and Jeceira as repository implementations, but adding new ones is very easy. Note that through Spring, one can configure a repository without the mentioned RepositoryFactoryBean.
10.2.3.1. Jackrabbit
Jackrabbit is the reference implementation of JSR-170 and is part of the Apache Software Foundation. The project has graduated from the incubator and had an initial 1.0 release in early 2006. Jackrabbit supports both levels and all the optional features described in the specification.

-- or --

Note that RepositoryFactoryBean makes use of Spring Resource to find the configuration file.
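A sketch of such a definition (the configuration and homeDir property names are assumptions based on the description; note how both values are Spring Resource locations):

```xml
<!-- hypothetical configuration sketch; property names are assumptions -->
<bean id="repository"
      class="org.springmodules.jcr.jackrabbit.RepositoryFactoryBean">
  <!-- Jackrabbit repository.xml, located through a Spring Resource -->
  <property name="configuration" value="classpath:repository.xml"/>
  <property name="homeDir" value="file:./repo"/>
</bean>
```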
10.2.3.2. Jackrabbit RMI support
Jackrabbit's RMI server/client setup is provided through org.springmodules.jcr.jackrabbit.RmiServerRepositoryFactoryBean, though Spring itself can handle most of the configuration without any special support:

10.2.3.3. Jeceira
Jeceira is another open-source JSR-170 implementation, though not as complete as Jackrabbit. Support for it can be found under the org.springmodules.jcr.jeceira package:

10.3. Extensions support
JSR-170 defines two levels of compliance and a number of optional features which can be provided by implementations, transactions being one of them.
10.3.1. Transaction Manager
One of the nicest features of the JCR support in Spring Modules is transaction management (find out more about Spring transaction management in Chapter 8 of the Spring official reference documentation). At the moment, only Jackrabbit is known to have dedicated transactional capabilities. One can use LocalTransactionManager for local transactions, or Jackrabbit's JCA connector to enlist the repository in an XA transaction through a JTA transaction manager. As a side note, the JCA scenario can be used within an application server along with a specific descriptor, or using a portable JCA connector (like Jencks) which can work outside or inside an application server.
10.3.1.1. LocalTransactionManager
For local transactions the LocalTransactionManager should be used:
PROPAGATION_REQUIREDPROPAGATION_REQUIRED, readOnly
for which only the sessionFactory property is required.
Note that when using transactions, in most cases you want to reuse the session, which means the allowCreate property on jcrTemplate should be false (the default).
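The transaction manager definition itself is small; a sketch (the class package is an assumption based on the Jackrabbit-specific packages mentioned elsewhere in this chapter):

```xml
<bean id="transactionManager"
      class="org.springmodules.jcr.jackrabbit.LocalTransactionManager">
  <!-- the only required property -->
  <property name="sessionFactory" ref="sessionFactory"/>
</bean>
```

It can then be used with Spring's usual declarative transaction strategies (TransactionProxyFactoryBean, for instance) exactly like any other PlatformTransactionManager.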
10.3.1.2. JTA transactions
For distributed transactions, using JCA is recommended in Jackrabbit's case. An example can be found inside the sample. You are free to use your application server's JCA support; Jencks is used only for demonstration purposes, the code inside the Jackrabbit support having no dependency on it.
10.3.1.3. SessionHolderProviderManager and SessionHolderProvider
Because JSR-170 doesn't directly address transactions, details vary from repository to repository; the JCR module contains (quite a lot of) classes to make this issue as painless as possible. Normally users should not be concerned with these classes; however, they are the main extension point for adding support for custom implementations.
In order to plug in extra capabilities, one must supply a SessionHolderProvider implementation which can take advantage of the underlying JCR session features. SessionHolderProviderManager acts as a registry of SessionHolderProviders for different repositories and has several implementations that return user-defined providers or discover them automatically.
By default, ServiceSessionHolderProviderManager is used, which is suitable for most cases. It uses the JDK 1.3+ Service Provider specification (also known as META-INF/services) for determining the holder provider. The class looks on the classpath under META-INF/services for a file named "org.springmodules.jcr.SessionHolderProvider" (which contains the fully qualified name of a SessionHolderProvider implementation). The providers found are instantiated, registered and later on used for the repositories they support. The distribution, for example, contains such a file to leverage Jackrabbit's transactional capabilities.
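Concretely, the discovery file is just a one-line text file on the classpath; a sketch (the implementation class name shown is a hypothetical example, not necessarily the one shipped in the distribution):

```
# file: META-INF/services/org.springmodules.jcr.SessionHolderProvider
org.springmodules.jcr.jackrabbit.support.JackrabbitSessionHolderProvider
```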
Besides ServiceSessionHolderProviderManager, one can use ListSessionHolderProviderManager to manually associate a SessionHolderProvider with a certain repository:
...
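A sketch of such a manual configuration (both the providers property name and the provider implementation class are assumptions for illustration):

```xml
<!-- hypothetical configuration sketch -->
<bean id="providerManager"
      class="org.springmodules.jcr.support.ListSessionHolderProviderManager">
  <property name="providers">
    <list>
      <bean class="org.springmodules.jcr.jackrabbit.support.JackrabbitSessionHolderProvider"/>
    </list>
  </property>
</bean>
```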
10.4. Mapping support
Working with the JCR resembles, to some degree, working with JDBC. Mapping support for the JCR seems to be the next logical step, but the software market doesn't seem to offer any mature solution. The current package offers support for jcr-mapping, which is part of the Graffito project at the Apache foundation. However, Graffito itself is still in the incubator and jcr-mapping is described as a prototype. The current support provides some base functionality taken from a snapshot (included inside the distribution) which is most probably old.
Moreover, as jcr-mapping is clearly in an alpha stage and a work in progress, users should not invest too much in this area but are encouraged to experiment and provide feedback. At the moment, the support contains a JcrMappingCallback and Template, plus a FactoryBean for creating MappingDescriptors (which allows using more than one mapping file, something not possible at the moment in the jcr-mapping project).
10.5. Working with JSR-170 products
Even though the documentation uses stand-alone JSR-170 implementations as examples, the JCR module can work against any library/product which supports the JCR API. The only difference is the JCR repository retrieval, which has to be adapted to the settings of the product used. Usually, reading the product documentation suffices; however, to ease integration, this documentation includes hints for major JCR-compatible products (if you are working with a major product which is not mentioned below, please contribute the instructions through the project issue tracker).
10.5.1. Alfresco
Alfresco is an open-source enterprise content management system which uses the Spring framework at its core. To get hold of the JCR connector, one can:
bootstrap Alfresco application context and do a dependency lookup:
ApplicationContext context =
    new ClassPathXmlApplicationContext("classpath:alfresco/application-context.xml");
Repository repository = (Repository) context.getBean("JCR.Repository");
- or -
let the container inject the dependency:
...
For more information, see the Alfresco JCR documentation.
10.5.2. Magnolia
Magnolia aims to make Enterprise Content Management simple by being user-friendly, battle-tested, enterprise-ready and open-source. Magnolia itself is not a repository and relies on a JCR implementation (Jackrabbit in particular) for its backend. Thus, connecting through JSR-170 to Magnolia is identical to connecting to a Jackrabbit repository. See the Magnolia FAQ for more information.
Chapter 11. JSR94
11.1. Introduction
As described in the scope section of the specification document, JSR94 defines a lightweight programming interface. Its aim is to constitute a standard API for acquiring and using a rule engine.
"The scope of the specification specifically excludes defining a standard rule description language to describe the rules within a rule execution set. The specification targets both the J2SE and J2EE (managed) environments.
The following items are in the scope of the specification:
The restrictions and limits imposed by a compliant implementation.
The mechanisms to acquire interfaces to a compliant implementation.
The interfaces through which rule execution sets are invoked by runtime clients of a compliant implementation.
The interfaces through which rule execution sets are loaded from external resources and registered for use by runtime clients of a compliant implementation.
The following items are outside the scope of the specification:
The binary representation of rules and rule execution sets.
The syntax and file-formats of rules and rule execution sets.
The semantics of interpreting rules and rule execution sets.
The mechanism by which rules and rule execution sets are transformed for use by a rule engine.
All minimal system requirements required to support a compliant implementation."
Spring Modules provides support for this specification in order to simplify the use of its APIs, according to the philosophy of the Spring framework.
11.2. JSR94 support
This section describes the different abstractions to configure in order to administer and use rule engines with the JSR94 support.
11.2.1. Provider
The first step in using JSR94 in a local scenario is to configure the rule engine provider. You must specify its name with the provider property and its implementation class with the providerClass property.
These properties are specific to the rule engine used. For more information about the configuration of different rule engines, see the configuration section below.
Here is a sample configuration of a rule provider:
org.jcp.jsr94.jessorg.jcp.jsr94.jess.RuleServiceProviderImpl
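The flattened sample above carries two values, the provider name and the provider class; a sketch of the corresponding bean definition (the factory bean class name is an assumption; the two values come from the sample):

```xml
<!-- hypothetical configuration sketch; the factory class name is an assumption -->
<bean id="ruleServiceProvider"
      class="org.springmodules.jsr94.factory.RuleServiceProviderFactoryBean">
  <property name="provider" value="org.jcp.jsr94.jess"/>
  <property name="providerClass"
            value="org.jcp.jsr94.jess.RuleServiceProviderImpl"/>
</bean>
```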
Important note: When you get the JSR94 RuleAdministrator and RuleRuntime from JNDI, you don‘t need to configure this bean in Spring.
11.2.2. Administration
There are two possibilities to configure the RuleAdministrator abstraction:
Local configuration as a bean.
Remote access from JNDI.
Both scenarios are supported. The local configuration uses the RuleAdministratorFactoryBean, which needs a reference to the JSR94 provider configured in the previous section through its serviceProvider property.

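A sketch of the local configuration (the factory bean class name is an assumption; the serviceProvider property comes from the text above):

```xml
<!-- hypothetical configuration sketch -->
<bean id="ruleAdministrator"
      class="org.springmodules.jsr94.factory.RuleAdministratorFactoryBean">
  <property name="serviceProvider" ref="ruleServiceProvider"/>
</bean>
```

A RuleRuntime bean (next section) can be defined analogously against the same provider.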
Version 0.1 doesn't support the configuration of a RuleAdministrator from JNDI with a Spring FactoryBean.
11.2.3. Execution
As with the RuleAdministrator, there are two possibilities to configure the RuleRuntime abstraction (locally and from JNDI).
Here is a sample of local configuration as a bean:

Version 0.1 doesn't support the configuration of a RuleRuntime from JNDI with a Spring FactoryBean.
11.2.4. Definition of a ruleset
To administer and execute rules, the JSR94 support introduces the RuleSource abstraction. It provides two different features:
Automatic configuration of the rule or ruleset for a rule engine.
Wrapper of JSR94 APIs for execution.
Important note: a RuleSource is a representation of a single rule or ruleset.
These two features work respectively upon the JSR94 RuleAdministrator and RuleRuntime abstractions. That's why, to configure the RuleSource, you have two possibilities:
Firstly, you can inject these two beans previously configured (see the two previous sections).
Secondly, you can inject the JSR94 provider, in which case the rule source will create these two beans automatically.
You also need to specify some rule-specific properties:
The bind URI of the rule. The value of the bindUri property will be used when invoking the corresponding rule.
The implementation of the rule. The JSR94 support is based on the Spring resource concept, and the source property is managed in this way. So, by default, the ruleset source file is looked up in the classpath.
Here is a sample rule set configuration using the DefaultRuleSource class with a RuleRuntime and a RuleAdministrator:
/testagent.drltestagent
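The flattened sample above carries the source location and the bind URI; a sketch of the full bean definition (the DefaultRuleSource package is an assumption; the property names come from the text):

```xml
<!-- hypothetical configuration sketch -->
<bean id="ruleSource"
      class="org.springmodules.jsr94.rulesource.DefaultRuleSource">
  <property name="ruleRuntime" ref="ruleRuntime"/>
  <property name="ruleAdministrator" ref="ruleAdministrator"/>
  <!-- looked up in the classpath by default -->
  <property name="source" value="/testagent.drl"/>
  <property name="bindUri" value="testagent"/>
</bean>
```

The RuleServiceProvider variant simply replaces the ruleRuntime/ruleAdministrator pair with a single serviceProvider reference.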
Here is another sample ruleset configuration, using the DefaultRuleSource with a RuleServiceProvider:
/testagent.drltestagent
Important note: If you don‘t specify the bindUri property, the JSR94 support will use the string returned by the getName method of the underlying RuleExecutionSet created for the RuleSource.
On the other hand, JSR94 provides some ways to specify additional configuration properties for specific rule engines.
Firstly, some rule engines need custom properties to configure rules. These properties can be specified with the ruleSetProperties property, of type Map. This property is passed to the createRuleExecutionSet method (as the last argument) of the JSR94 LocalRuleExecutionSetProvider interface. For example, JRules needs the rulesetProperties property to be specified (see the configuration section).
Then, some parameters need to be specified in order to get an implementation of the JSR94 LocalRuleExecutionSetProvider abstraction. These can be specified with the providerProperties property, as a map.
Finally, some parameters need to be specified in order to register a JSR94 RuleExecutionSet implementation (the registrationProperties property).
Here is the code of the registerRuleExecutionSets method of the DefaultRuleSource class, showing how the previous maps are used. Note that DefaultRuleSource is the default implementation of the RuleSource interface of the JSR94 support.
RuleExecutionSet ruleExecutionSet = ruleAdministrator
    .getLocalRuleExecutionSetProvider(providerProperties)
    .createRuleExecutionSet(source.getInputStream(), rulesetProperties);
ruleAdministrator.registerRuleExecutionSet(bindUri, ruleExecutionSet, registrationProperties);
11.2.5. Configure the JSR94 template
In order to execute rules, you need to use the dedicated JSR94Template class. This class must be configured with a RuleSource instance.
There are two ways to configure this class.
Firstly, you can define the template directly in Spring as a bean. In this case, you can make your service extend the Jsr94Support abstract class. This class defines get/set methods for the JSR94Template and provides the associated template to the service through the getJSR94Template method.
...
Secondly, you can directly inject the configured RuleSource into your service. Here too you can make your service extend the Jsr94Support abstract class. In this case, the class defines get/set methods for the RuleSource, creates the template automatically and provides it to the service through the getJSR94Template method.
...
The MyService class can then directly use the template (injected or created from the RuleSource) with the help of the getJSR94Template method.
public class MyServiceImpl extends JSR94Support implements MyService {
    public void serviceMethod() {
        getJSR94Template().execute(...);
    }
}
Important note: because Java doesn't support multiple inheritance, you can't always extend the Jsr94Support class, since your service classes may already have a superclass. In this case, you need to define the get/set methods or instantiate the template yourself.
11.2.6. Using the JSR94 template
In order to execute rules, you need to use the JSR94Template class configured in the previous section.
JSR94 defines two session modes to execute rules. A session is a runtime connection between the client and the rule engine.
Stateless mode. "A stateless rule session provides a high-performance and simple API that executes a rule execution set with a List of input objects." (quotation of the JSR94 specification)
Stateful mode. "A stateful rule session allows a client to have a prolonged interaction with a rule execution set. Input objects can be progressively added to the session and output objects can be queried repeatedly." (quotation of the JSR94 specification)
So the template defines two corresponding execution methods: executeStateless for stateless sessions and executeStateful for stateful ones.
To execute rules in a stateless mode, you need to use the following execute method of the template.
public Object executeStateless(final String uri, final Map properties,
        final StatelessRuleSessionCallback callback) {
    //...
}
This method needs an implementation of the callback interface StatelessRuleSessionCallback. This interface defines a method to which an instance of StatelessRuleSession is provided. The developer doesn't need to deal with releasing the resources or managing technical exceptions.
Moreover, if you need to specify additional parameters to create the session, you can use the second parameter of the method (named properties, a map).
public interface StatelessRuleSessionCallback {
    Object execute(StatelessRuleSession session)
        throws InvalidRuleSessionException, RemoteException;
}
Here is a sample of use:
List inputObjects = ...;
List outputObjects = (List) getTemplate().executeStateless("ruleBindUri", null,
    new StatelessRuleSessionCallback() {
        public Object execute(StatelessRuleSession session)
                throws InvalidRuleSessionException, RemoteException {
            return session.executeRules(inputObjects);
        }
    });
The JSR94 support uses the same features to execute rules in a stateful mode. Here is the dedicated executing method.
public Object executeStateful(final String uri, final Map properties,
        final StatefulRuleSessionCallback callback) {
    //...
}
This method needs an implementation of the callback interface StatefulRuleSessionCallback. This interface defines a method to which an instance of StatefulRuleSession is provided. As for stateless sessions, the developer doesn't need to deal with releasing the resources or managing technical exceptions.
Moreover, if you need to specify additional parameters to create the session, you can use the second parameter of the method (named properties, a map).
public interface StatefulRuleSessionCallback {
    Object execute(StatefulRuleSession session)
        throws InvalidRuleSessionException, InvalidHandleException, RemoteException;
}
Here is a sample of use:
List inputObjects = ...;
List outputObjects = (List) getTemplate().executeStateful("ruleBindUri", null,
    new StatefulRuleSessionCallback() {
        public Object execute(StatefulRuleSession statefulRuleSession)
                throws InvalidRuleSessionException, InvalidHandleException, RemoteException {
            statefulRuleSession.addObjects(inputObjects);
            statefulRuleSession.executeRules();
            return statefulRuleSession.getObjects();
        }
    });
11.3. Configuration with different engines
This section describes the way to configure different rule engines in Spring using the JSR94 support, covering the following rule engines:
Ilog JRules. See http://www.ilog.com/products/jrules/.
Jess. See http://herzberg.ca.sandia.gov/jess/.
Drools. See http://drools.org/.
Although all samples inject RuleRuntime and RuleAdministrator instances, you can instead inject the JSR94 provider directly in a local scenario (as described in previous sections of the documentation).
11.3.1. JRules
With JSR94, you can only access rules configured in an embedded rule engine. At the time of writing, JRules 5.0 doesn't provide a JSR94 implementation to execute and administer rules deployed in a BRES (Business Rule Engine Server).
Important note: to use the BRES with Spring, you need to write your own integration code directly based on the JRules APIs.
Firstly, you need to configure the JSR94 provider specific to JRules. The provider class for JRules is ilog.rules.server.jsr94.IlrRuleServiceProvider. There is no need to define specific parameters for the RuleRuntime and RuleAdministrator beans.
http://www.ilog.comilog.rules.server.jsr94.IlrRuleServiceProvider
Then you need to configure the different rulesets for the embedded rule engine. The DefaultRuleSource can be used for this. You need to inject the RuleRuntime and RuleAdministrator instances, and specify the source of the ruleset (an .irl file in the case of JRules) and the binding URI for this ruleset.
Note: the language used to write JRules rulesets is IRL (Ilog Rule Language). This language is similar to Java and introduces specific keywords for rules.
Finally, you need to configure properties specific to JRules:
IlrName: This key describes the internal name of the configured ruleset.
IlrRulesInILR: This key specifies that the ruleset of the configured file is written in IRL.
/cars_rules.irlcarscars_rulestrue
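Putting the flattened values above back together, the JRules rule source might be sketched as follows (the DefaultRuleSource package is an assumption; the source, bind URI, IlrName and IlrRulesInILR values come from the sample):

```xml
<!-- hypothetical configuration sketch -->
<bean id="ruleSource"
      class="org.springmodules.jsr94.rulesource.DefaultRuleSource">
  <property name="ruleServiceProvider" ref="ruleServiceProvider"/>
  <property name="source" value="/cars_rules.irl"/>
  <property name="bindUri" value="cars"/>
  <property name="rulesetProperties">
    <map>
      <!-- internal name of the configured ruleset -->
      <entry key="IlrName" value="cars_rules"/>
      <!-- the ruleset file is written in IRL -->
      <entry key="IlrRulesInILR" value="true"/>
    </map>
  </property>
</bean>
```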
11.3.2. Jess
The reference implementation of the JSR94 specification is a wrapper for the Jess rule engine. We have used the samples provided in the specification to describe the configuration of this rule engine.
Firstly you need to configure the RuleServiceProvider, RuleAdministrator and RuleRuntime abstractions as beans in Spring.
[Configuration example: provider URI org.jcp.jsr94.jess, provider class org.jcp.jsr94.jess.RuleServiceProviderImpl]
Then you need to configure rulesets in Spring using the JSR94 support.
[Configuration example: ruleset source /org/jcp/jsr94/tck/tck_res_1.xml, bind URI tck_res_1]
Jess does not need any specific additional configuration for the rule source.
11.3.3. Drools
Another interesting rule engine is Drools. It also provides a JSR94 integration. We have used the samples provided in the Drools distribution to describe its configuration.
Firstly you need to configure the RuleServiceProvider, RuleAdministrator and RuleRuntime abstractions as beans in Spring.
[Configuration example: provider URI http://drools.org/, provider class org.drools.jsr94.rules.RuleServiceProviderImpl]
Then you need to configure rulesets in Spring using the JSR94 support.
[Configuration example: ruleset source /testagent.drl, bind URI testagent]
Like Jess, Drools does not need any specific additional configuration for the rule source.
Chapter 12. Lucene
12.1. Introduction
According to the project home page, "Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform".
The project is hosted by Apache. Lucene makes it possible to build scalable architectures based on distributed indexes, and provides several kinds of index (in-memory, file system based, database based).
Spring Modules offers Lucene support in order to provide more flexibility in the use of its APIs. It adds new abstractions to facilitate the management of IndexReader, IndexWriter and Searcher instances, of index locking and concurrent access, of query creation and of results extraction. It also provides facilities to easily index sets of files and database rows.
The support also provides a thin layer on top of the Lucene API in order to hide the underlying resources and to make unit testing of classes that use Lucene easier. As a matter of fact, Lucene does not use interfaces and is essentially based on concrete classes.
On the other hand, the support provides a generic document handling feature offering dedicated entities to create documents. This feature also manages the association between documents and handlers. A handler has the responsibility of creating a document from an object or an InputStream. The feature is particularly useful to manage the indexing of different file formats.
The open source community also provides an interesting tool which makes the use of Lucene easier: the Compass framework. The Lucene support of Spring Modules differs from this tool because the latter hides all interactions with the index behind a CompassSession, an entity similar to the Hibernate Session. The aim of the Lucene support is to leave access to the root resources of Lucene open, for more flexibility.
On the other hand, if you are looking for a tool that manages paradigm conversions (object, resource or XML to index), the Compass framework is the right tool for you. It also provides support for and integration with different tools and frameworks, such as Hibernate with its GPS feature, and allows the use of transactions on top of Lucene, a feature not natively supported by Lucene.
12.2. Indexing
In this section, we will describe how the Lucene support makes managing a Lucene index easier. We first deal with the root entities of the support and the way to configure them, then show how to interact with the index, and finally describe how to manage concurrency.
12.2.1. Root entities
Lucene provides two main root entities in order to interact with an index and index documents: the IndexReader and IndexWriter classes. These classes are concrete, which can make unit testing difficult. That's why the Lucene support introduces two interfaces, LuceneIndexReader and LuceneIndexWriter respectively, in order to define the contracts of those classes. These interfaces offer the same methods as the Lucene IndexReader and IndexWriter classes.
The following code describes the methods offered by the LuceneIndexReader interface:
public interface LuceneIndexReader {
    void close() throws IOException;
    void deleteDocument(int docNum) throws IOException;
    int deleteDocuments(Term term) throws IOException;
    Directory directory();
    int docFreq(Term t) throws IOException;
    Document document(int n) throws IOException;
    Collection getFieldNames(IndexReader.FieldOption fldOption);
    TermFreqVector getTermFreqVector(int docNumber, String field) throws IOException;
    TermFreqVector[] getTermFreqVectors(int docNumber) throws IOException;
    long getVersion();
    boolean hasDeletions();
    boolean hasNorms(String field) throws IOException;
    boolean isCurrent() throws IOException;
    boolean isDeleted(int n);
    int maxDoc();
    byte[] norms(String field) throws IOException;
    void norms(String field, byte[] bytes, int offset) throws IOException;
    int numDocs();
    void setNorm(int doc, String field, byte value) throws IOException;
    void setNorm(int doc, String field, float value) throws IOException;
    TermDocs termDocs() throws IOException;
    TermDocs termDocs(Term term) throws IOException;
    TermPositions termPositions() throws IOException;
    TermPositions termPositions(Term term) throws IOException;
    TermEnum terms() throws IOException;
    TermEnum terms(Term t) throws IOException;
    void undeleteAll() throws IOException;
    LuceneSearcher createSearcher();
    Searcher createNativeSearcher();
}
The following code describes the methods offered by the LuceneIndexWriter interface:
public interface LuceneIndexWriter {
    void addDocument(Document doc) throws IOException;
    void addDocument(Document doc, Analyzer analyzer) throws IOException;
    void addIndexes(Directory[] dirs) throws IOException;
    void addIndexes(IndexReader[] readers) throws IOException;
    void close() throws IOException;
    int docCount();
    Analyzer getAnalyzer();
    long getCommitLockTimeout();
    Directory getDirectory();
    PrintStream getInfoStream();
    int getMaxBufferedDocs();
    int getMaxFieldLength();
    int getMaxMergeDocs();
    int getMergeFactor();
    Similarity getSimilarity();
    int getTermIndexInterval();
    boolean getUseCompoundFile();
    long getWriteLockTimeout();
    void optimize() throws IOException;
    void setCommitLockTimeout(long commitLockTimeout);
    void setInfoStream(PrintStream infoStream);
    void setMaxBufferedDocs(int maxBufferedDocs);
    void setMaxFieldLength(int maxFieldLength);
    void setMaxMergeDocs(int maxMergeDocs);
    void setMergeFactor(int mergeFactor);
    void setSimilarity(Similarity similarity);
    void setTermIndexInterval(int interval);
    void setUseCompoundFile(boolean value);
    void setWriteLockTimeout(long writeLockTimeout);
}
The main advantage of this mechanism is the possibility to dissociate logical and physical resources. The physical resources are the Lucene resources which directly interact with the index, i.e. the instances of IndexReader and IndexWriter.
The logical resources are higher-level resources which allow a more flexible management of resources, in order to integrate concurrency and transaction management. The logical resources are not provided by Lucene but by interfaces of the Lucene support: LuceneIndexReader and LuceneIndexWriter.
In order to create these resources, the Lucene support implements the factory pattern, based on the IndexFactory interface. This interface hides the creation of logical resources. With this mechanism, you only need to configure an implementation of this interface in order to specify the resource management strategy.
The following code describes the methods offered by the IndexFactory interface:
public interface IndexFactory {
    LuceneIndexReader getIndexReader();
    LuceneIndexWriter getIndexWriter();
}
Because the factory handles only logical resources, it does not directly provide instances of IndexReader and IndexWriter. Instead, it returns managed implementations of LuceneIndexReader and LuceneIndexWriter.
The Lucene support introduces the following implementations of the IndexFactory interface:
Table 12.1. Different implementations of the IndexFactory interface
IndexFactory implementation Logical resource implementations Description
SimpleIndexFactory SimpleLuceneIndexReader and SimpleLuceneIndexWriter Simple wrapping of the physical resources.
12.2.2. Configuration
At this time, Spring Modules provides only one index factory, based on a directory and an analyzer. It also provides support for configuring the main directory types.
12.2.2.1. Configuring directories
The root Lucene concept is the Directory, which physically represents the index. Lucene supports different types of storage for the index. The Lucene support allows you to configure an in-memory index (a RAM directory) and a persistent index (a file system directory) with dedicated Spring FactoryBeans.
The first type of Directory, the RAM directory, can be configured using the RAMDirectoryFactoryBean class, as in the following code:
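A configuration sketch follows; the FactoryBean package (org.springmodules.lucene.index.support) and the bean id are assumptions.

```xml
<!-- Sketch: the FactoryBean package and bean id are assumptions. -->
<bean id="ramDirectory"
      class="org.springmodules.lucene.index.support.RAMDirectoryFactoryBean"/>
```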

You must be careful when you use this type of storage, because all the information in the index is held in memory and is never persisted to disk.
The second type of Directory, the file system directory, can be configured using the FSDirectoryFactoryBean class. This class is more advanced because it allows you to manage the creation of the index. The only mandatory property is location, which specifies the location of the index on disk, based on the facilities of Spring's Resource interface.
The following code describes how to configure a directory of this type:
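A configuration sketch follows; the FactoryBean package, the bean id and the location value are assumptions, while the location property itself is described in the text above.

```xml
<!-- Sketch: package, bean id and location value are assumptions;
     the location property is described in the text above. -->
<bean id="fsDirectory"
      class="org.springmodules.lucene.index.support.FSDirectoryFactoryBean">
    <property name="location" value="file:/tmp/lucene-index"/>
</bean>
```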

The sandbox of the Lucene project defines other kinds of directory (database based, etc.) which are not supported by the Lucene support at this time.
12.2.2.2. Configuring a SimpleIndexFactory
The SimpleIndexFactory class is the default factory used to manipulate an index. This entity provides logical resources which simply wrap the physical resources of the index, based on the SimpleLuceneIndexReader and SimpleLuceneIndexWriter classes.
This factory must be configured with the Lucene directory to access and, optionally, a default analyzer. In order to configure this class in a Spring application context, the support provides the SimpleIndexFactoryBean class, whose configuration is described below:
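A configuration sketch follows; the FactoryBean package, bean ids and the directory reference are assumptions, while the directory, analyzer, create and resolveLock properties are described in the text.

```xml
<!-- Sketch: package, bean ids and references are assumptions;
     directory, analyzer, create and resolveLock are described in the text. -->
<bean id="analyzer" class="org.apache.lucene.analysis.SimpleAnalyzer"/>

<bean id="indexFactory"
      class="org.springmodules.lucene.index.support.SimpleIndexFactoryBean">
    <property name="directory" ref="fsDirectory"/>
    <property name="analyzer" ref="analyzer"/>
    <property name="create" value="true"/>
    <property name="resolveLock" value="true"/>
</bean>
```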

The SimpleIndexFactory class also allows you to manage the locking and the creation of the index, using the resolveLock and create properties respectively. The default value of both properties is false. The first one specifies that the index will be automatically unlocked if it is locked during the first resource creation. The second specifies that the index structure will be created, if it does not exist, during the creation of the first IndexWriter.
This factory is based on the IndexReaderFactoryUtils and IndexWriterFactoryUtils classes to manage the creation and retrieval of LuceneIndexReader and LuceneIndexWriter instances. However, no concurrency management is provided by this entity. You must be aware that opening an index in write mode will lock it until the writer is closed. Moreover, some operations are forbidden between the reader and the writer (for example, a document deletion using the reader and a document addition using the writer).
For more information, see the following section about IndexFactory management.
12.2.2.3. Dedicated namespace
The Lucene support provides a dedicated namespace which makes the configuration of an index and its associated IndexFactory easier.
//TODO: finish implementing the namespace and describe it

12.2.3. Document type handling
The information used to populate the index can come from different sources (file types, objects, etc.). A unified support is provided in order to handle these different sources in the same way.
The central entity of this unified support is the DocumentHandler interface, which defines the contract to create a document from an object. DocumentHandler is a high-level interface since it is not tied to any particular source: it has no dependency on the Java IO API. The following code describes this interface:
public interface DocumentHandler {
    boolean supports(Class clazz);
    Document getDocument(Map description, Object object) throws Exception;
}
The interface provides two different methods. The first, the supports method, specifies which classes the implementation of the interface is able to handle. The second, the getDocument method, has the responsibility of creating a document from an object and various other pieces of information.
Different implementations of the DocumentHandler interface are provided in the support to handle file formats, as shown in the following table:
Table 12.2. Different implementations of the DocumentHandler interface
Format Tool Implementation
Text - TextDocumentHandler
PDF PdfBox PdfBoxDocumentHandler
Rtf - DefaultRtfDocumentHandler
Excel JExcel JExcelDocumentHandler
Excel POI POIExcelDocumentHandler
Word POI POIWordDocumentHandler
Other implementations are also provided in order to create Lucene documents from POJOs. The information to index is determined using metadata configured in different ways, as shown in the following table:
Table 12.3. Different implementations of the DocumentHandler interface
Metadata Implementation
Properties file PropertiesDocumentHandler
Object (with reflection) ReflectiveDocumentHandler
Annotations AnnotationDocumentHandler
The support also offers a generic entity which handles the association between entities and document handlers.
This mechanism is provided by the DocumentHandlerManager interface, which allows you to determine the document handler to use in order to create a Lucene document from a file or an object. If no document handler is available, a DocumentHandlerException is thrown. Other methods are also provided in order to register and unregister document handlers. The following code shows the DocumentHandlerManager interface:
public interface DocumentHandlerManager {
    DocumentHandler getDocumentHandler(String name);
    void registerDefaultHandlers();
    void registerDocumentHandler(DocumentMatching matching, DocumentHandler handler);
    void unregisterDocumentHandler(DocumentMatching matching);
}
In order to determine which document handler an entity type is associated with, the DocumentMatching interface is introduced. The latter defines only one method, which checks whether a String matches an internal criterion, as shown in the following code:
public interface DocumentMatching {
    boolean match(String name);
}
This mechanism makes it possible to associate several entities with one document handler, based on a criterion such as a regular expression. Different implementations of this interface are provided by the support, as shown in the following table:
Table 12.4. Different implementations of the DocumentMatching interface
Implementation Description
IdentityDocumentMatching Matches when the name is identical to the configured criterion.
According to the method signatures of the DocumentHandlerManager and DocumentMatching interfaces, the support offers the possibility to use different implementations of the DocumentMatching interface with the same DocumentHandlerManager entity.
Finally, a dedicated FactoryBean is provided by the support in order to configure DocumentHandlers programmatically and to specify when to use them. This FactoryBean is generic and corresponds to a DocumentHandlerManager. In order to configure it, you must specify the implementations of the DocumentHandlerManager and DocumentMatching interfaces to use, through the documentHandlerManagerClass and documentMatchingClass properties respectively. By default, the implementations used are DefaultDocumentHandlerManager and IdentityDocumentMatching.
The following code shows how to configure this FactoryBean:
(...)
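A configuration sketch follows; the FactoryBean class name and the package paths of the values are assumptions, while the two properties and the default implementations are described in the text above.

```xml
<!-- Sketch: the FactoryBean class name and package paths are assumptions;
     the properties and default implementations are described in the text. -->
<bean id="documentHandlerManager"
      class="org.springmodules.lucene.index.support.handler.DocumentHandlerManagerFactoryBean">
    <property name="documentHandlerManagerClass"
              value="org.springmodules.lucene.index.document.handler.DefaultDocumentHandlerManager"/>
    <property name="documentMatchingClass"
              value="org.springmodules.lucene.index.document.handler.IdentityDocumentMatching"/>
</bean>
```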
The FactoryBean automatically registers the default handlers of the DocumentHandlerManager by calling its registerDefaultHandlers method. You can also register other DocumentHandlers programmatically, as shown in the following code:
[Configuration example: registers the org.springmodules.lucene.index.document.handler.file.PdfBoxDocumentHandler class]
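For example, a programmatic registration might look like the following sketch; the constructor argument of IdentityDocumentMatching is an assumption, while registerDocumentHandler and PdfBoxDocumentHandler come from the interfaces and classes named above.

```java
// Sketch: associates a name with the PdfBoxDocumentHandler.
// The IdentityDocumentMatching constructor argument is an assumption.
documentHandlerManager.registerDocumentHandler(
        new IdentityDocumentMatching("pdf"),
        new PdfBoxDocumentHandler());
```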
We will see later that all these entities can be used directly, or internally by templates.
12.2.4. Template approach
The Lucene support provides a template approach, like Spring JDBC, to make the use and the manipulation of an index easier.
LuceneIndexTemplate is the central interface of the Lucene support core package (org.springmodules.lucene.index.core) for indexing. It simplifies the use of the corresponding Lucene APIs since it handles the creation and release of resources and allows you to configure resource management declaratively. This helps avoid common errors, like forgetting to close the index reader/writer. It executes the common operations on an index, leaving application code to specify how to create or delete a document, and to query the index (numDocs property, index optimization, deleted documents, etc.).
The Lucene support provides a default implementation of this interface, the DefaultLuceneIndexTemplate class, which is created and used by default.
12.2.4.1. Template configuration and getting
In order to configure and get an instance of a LuceneIndexTemplate, the support provides the LuceneIndexSupport class. It allows you to inject instances of the IndexFactory and DocumentHandlerManager interfaces and of the Analyzer class. These entities are used to create an instance of a template, which can be reached by using the getTemplate method.
The following code shows a class based on the LuceneIndexSupport class:
public class TestIndexImpl extends LuceneIndexSupport implements TestIndex {
    public void getElement() {
        LuceneIndexTemplate template = getLuceneIndexTemplate();
        (...)
    }
}
The following code shows the configuration of the TestIndexImpl class in a Spring application context:
(...)
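As a sketch (the fully-qualified class name and the referenced bean names are assumptions), the configuration might look like:

```xml
<!-- Sketch: class name and referenced bean names are assumptions;
     the injected properties are described in the text above. -->
<bean id="testIndex" class="TestIndexImpl">
    <property name="indexFactory" ref="indexFactory"/>
    <property name="analyzer" ref="analyzer"/>
</bean>
```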
12.2.4.2. Basic operations
The LuceneIndexTemplate class provides the basic operations in order to manipulate an index: create, update and delete documents. Different groups of methods can be distinguished, as shown in the following code:
public interface LuceneIndexTemplate {
    (...)
    /* Document(s) addition(s) */
    void addDocument(Document document);
    void addDocument(Document document, Analyzer analyzer);
    void addDocument(DocumentCreator creator);
    void addDocument(DocumentCreator documentCreator, Analyzer analyzer);
    void addDocuments(List documents);
    void addDocuments(List documents, Analyzer analyzer);
    void addDocuments(DocumentsCreator creator);
    void addDocuments(DocumentsCreator creator, Analyzer analyzer);
    /* Document(s) update(s) */
    void updateDocument(Term identifierTerm, DocumentModifier documentModifier);
    void updateDocument(Term identifierTerm, DocumentModifier documentUpdater, Analyzer analyzer);
    void updateDocuments(Term identifierTerm, DocumentsModifier documentsModifier);
    void updateDocuments(Term identifierTerm, DocumentsModifier documentsModifier, Analyzer analyzer);
    /* Document(s) deletion(s) */
    void deleteDocument(int internalDocumentId);
    void deleteDocuments(Term term);
    void undeleteDocuments();
    boolean isDeleted(int internalDocumentId);
    boolean hasDeletions();
    (...)
}
The first group of methods allows you to create and add documents to an index. Lucene Document instances can be used as parameters of the addDocument methods. The Lucene support also provides the DocumentCreator interface, which defines the way to create the document; exceptions thrown during the creation of a document are then managed by the template. The following code describes this interface:
public interface DocumentCreator {
    Document createDocument() throws Exception;
}
The same mechanism is available to create and add several documents; the interface used is DocumentsCreator, which returns a list of documents. The following code describes this interface:
public interface DocumentsCreator {
    List createDocuments() throws Exception;
}
The following code shows an example of creation of a document based on an addDocument method of the template:
getLuceneIndexTemplate().addDocument(new DocumentCreator() {
    public Document createDocument() throws Exception {
        Document newDocument = new Document();
        (...)
        return newDocument;
    }
});
Lucene does not provide support for modifying a document in the index. A deletion and an addition must be made successively, and you need to use an IndexReader instance and then an IndexWriter instance. The Lucene support provides an interface, DocumentModifier, in order to specify how to update a document, as shown in the following code:
public interface DocumentModifier {
    Document updateDocument(Document document) throws Exception;
}
You can then use the updateDocument method to actually update the document based on this interface. The first parameter of this method, of type Term, is used to identify the document to update. It must identify exactly one document.
The template also provides two other methods in order to update several documents at the same time, the updateDocuments methods. These use the same mechanism as the updateDocument method and are based on the DocumentsModifier interface, which works on a list of documents to update, as shown in the following code:
public interface DocumentsModifier {
    List updateDocuments(LuceneHits hits) throws IOException;
}
You can then use the updateDocuments method to actually update a set of documents based on this interface. The first parameter of these methods, of type Term, is used to identify the set of documents to update.
The following code shows an example of use of the updateDocument method:
getLuceneIndexTemplate().updateDocument(new Term("id", "anId"), new DocumentModifier() {
    public Document updateDocument(Document document) throws Exception {
        Document newDocument = new Document();
        (...)
        return newDocument;
    }
});
The last group of methods can be used to delete documents from the index. The deleteDocument method deletes only one document, based on its internal identifier, whereas the deleteDocuments method deletes several documents based on a Term. The following code shows an example of use of this method:
getLuceneIndexTemplate().deleteDocuments(new Term("attribute", "a value"));
12.2.4.3. Usage of InputStreams with templates
The template offers the possibility to create a document based on an InputStream, with two dedicated addDocument methods, as shown in the following code:
public interface LuceneIndexTemplate {
    (...)
    void addDocument(InputStreamDocumentCreator creator);
    void addDocument(InputStreamDocumentCreator documentCreator, Analyzer analyzer);
    (...)
}
These methods have the responsibility of managing the InputStream, i.e. getting an instance of it, managing IOExceptions and closing the InputStream.
These addDocument methods are based on the InputStreamDocumentCreator interface, which specifies how to initialize the InputStream and use it in order to create a document. The following code shows the detail of this interface and its two methods, createInputStream and createDocumentFromInputStream:
public interface InputStreamDocumentCreator {
    InputStream createInputStream() throws IOException;
    Document createDocumentFromInputStream(InputStream inputStream) throws Exception;
}
The following code shows a sample use of this interface with the addDocument method of the template:
final String fileName = "textFile.txt";
getTemplate().addDocument(new InputStreamDocumentCreator() {
    public InputStream createInputStream() throws IOException {
        return new FileInputStream(fileName);
    }
    public Document createDocumentFromInputStream(InputStream inputStream) throws Exception {
        Document document = new Document();
        String contents = IOUtils.getContents(inputStream);
        document.add(new Field("contents", contents, Field.Store.YES, Field.Index.TOKENIZED));
        document.add(new Field("fileName", fileName, Field.Store.YES, Field.Index.UN_TOKENIZED));
        return document;
    }
});
12.2.4.4. Usage of the DocumentHandler support with templates
The template offers the possibility to use the DocumentHandler support in order to create a document based on an InputStream. This feature is based on the mechanism described in the previous section. A dedicated implementation of InputStreamDocumentCreator, the InputStreamDocumentCreatorWithManager class, is provided. This class takes an instance of the DocumentHandlerManager interface and selects the right DocumentHandler to use in order to create the document from an InputStream.
The InputStreamDocumentCreatorWithManager class defines two abstract methods in order to select the name and the description of the resource associated with the InputStream, as shown in the following code:
public abstract class InputStreamDocumentCreatorWithManager implements InputStreamDocumentCreator {
    (...)
    public InputStreamDocumentCreatorWithManager(DocumentHandlerManager documentHandlerManager) {
        this.documentHandlerManager = documentHandlerManager;
    }
    protected abstract String getResourceName();
    protected abstract Map getResourceDescription();
    (...)
}
Note that the InputStreamDocumentCreatorWithManager class must be initialized with an instance of the DocumentHandlerManager interface.
The following code shows an example of use of this class with an addDocument method of the template:
DocumentHandlerManager manager = (...)
final String fileName = "textFile.txt";
getLuceneIndexTemplate().addDocument(new InputStreamDocumentCreatorWithManager(manager) {
    public InputStream createInputStream() throws IOException {
        return new FileInputStream(fileName);
    }
    protected String getResourceName() {
        return fileName;
    }
    protected Map getResourceDescription() {
        return null;
    }
});
12.2.4.5. Work with root entities
Some other methods of the template allow you to work directly on the logical resources of the index, based on callback interfaces and methods. The template uses these callbacks in order to provide instances of LuceneIndexReader and LuceneIndexWriter to the application. The following code describes the read and write methods of the template, which are based on these callback interfaces:
public interface LuceneIndexTemplate {
    (...)
    Object read(ReaderCallback callback);
    Object write(WriterCallback callback);
    (...)
}
These two methods are based on the ReaderCallback and WriterCallback interfaces, which allow the template to give the resources to the application. The following code describes these two interfaces:
public interface ReaderCallback {
    Object doWithReader(LuceneIndexReader reader) throws Exception;
}

public interface WriterCallback {
    Object doWithWriter(LuceneIndexWriter writer) throws Exception;
}
The following code shows a sample use of the WriterCallback interface with the template in order to index documents:
LuceneIndexTemplate template = (...)
template.write(new WriterCallback() {
    public Object doWithWriter(LuceneIndexWriter writer) throws IOException {
        Document document = new Document();
        (...)
        writer.addDocument(document);
        return null;
    }
});
12.2.4.6. Template and used resources
The LuceneIndexTemplate hides the resources used to execute an operation, and their management. The developer no longer needs to know which underlying Lucene resource is involved. The following table shows the underlying resources used by the template's methods:
Table 12.5. Resource used by the template methods
LuceneIndexTemplate method group Corresponding resource used
deletion methods IndexReader
addition methods IndexWriter
get methods IndexReader
optimize methods IndexWriter
In the context of the LuceneIndexTemplate, calls to different methods only make sense if the underlying resources stay open across several calls. For example, the hasDeletions method always returns false if resources are opened and closed for each single method call.
12.2.5. Mass indexing approach
The support offers facilities to index a large number of documents, or a lot of data, from a directory (or a set of directories) or from a database. It is divided into two parts:
Indexing a directory and its sub directories recursively. This approach allows you to register custom handlers to index several file types.
Indexing a database. This approach allows you to specify the SQL requests used to get the data to index. A callback is then provided to create a Lucene document from a ResultSet. This feature is based on the Spring JDBC framework.
All the classes of this approach are located in the org.springmodules.lucene.index.object package and its sub-packages.
12.2.5.1. Indexing directories
Indexing directories is implemented by the DirectoryIndexer class. To use it, you simply call its index method, which needs the base directory. This class will browse the directory and all its sub-directories, and try to index every file that has a dedicated handler.
public class DirectoryIndexer extends AbstractIndexer {
    (...)
    public void index(String dirToParse) { ... }
    public void index(String dirToParse, boolean optimizeIndex) { ... }
    (...)
}
Important note: If you set the optimizeIndex parameter to true, the index will be optimized after the indexing.
This class is based on a mechanism to handle different file types. It uses the DocumentHandlerManager interface seen in the previous section. This allows the indexer to be extended to support other file formats.
You can also add listeners to be notified of directory and file processing. In this case, you only need to implement the DocumentIndexingListener interface, whose methods will be called during the indexing. The implementation will receive the following notifications:
The indexer begins to handle all the files of a directory.
The indexer has finished handling all the files of a directory.
The indexing of a file begins.
The indexing of a file is successful.
The indexing of a file has failed. The exception is provided to the callback.
The indexer has no specific handler for the file type.
public interface DocumentIndexingListener {
    void beforeIndexingDirectory(File file);
    void afterIndexingDirectory(File file);
    void beforeIndexingFile(File file);
    void afterIndexingFile(File file);
    void onErrorIndexingFile(File file, Exception ex);
    void onNotAvailableHandler(File file);
}
To associate a listener with the indexer, you can simply use its addListener method; to remove one, the removeListener method. The following code describes these two methods:
public class DirectoryIndexer extends AbstractIndexer {
    (...)
    public void addListener(DocumentIndexingListener listener) { ... }
    public void removeListener(DocumentIndexingListener listener) { ... }
    (...)
}
The following code shows a sample use of all these entities:
public class SimpleDirectoryIndexingImpl implements DirectoryIndexing, InitializingBean {
    private IndexFactory indexFactory;
    private DocumentHandlerManager documentHandlerManager;
    private DirectoryIndexer indexer;

    public SimpleDirectoryIndexingImpl() {
    }

    public void afterPropertiesSet() throws Exception {
        if (indexFactory == null) {
            throw new IllegalArgumentException("indexFactory is required");
        }
        this.indexer = new DirectoryIndexer(indexFactory, documentHandlerManager);
    }

    public void indexDirectory(String directory) {
        indexer.index(directory, true);
    }

    public void prepareListeners() {
        DocumentIndexingListener listener = new DocumentIndexingListener() {
            public void beforeIndexingDirectory(File file) {
                System.out.println("Indexing the directory: " + file.getPath() + " ...");
            }
            public void afterIndexingDirectory(File file) {
                System.out.println(" -> Directory indexed.");
            }
            public void beforeIndexingFile(File file) {
                System.out.println("Indexing the file: " + file.getPath() + " ...");
            }
            public void afterIndexingFile(File file) {
                System.out.println(" -> File indexed.");
            }
            public void onErrorIndexingFile(File file, Exception ex) {
                System.out.println(" -> Error during the indexing: " + ex.getMessage());
            }
            public void onNotAvailableHandler(File file) {
                System.out.println("No handler registered for the file: " + file.getPath());
            }
        };
        indexer.addListener(listener);
    }

    public IndexFactory getIndexFactory() { return indexFactory; }
    public void setIndexFactory(IndexFactory factory) { indexFactory = factory; }
    public DocumentHandlerManager getDocumentHandlerManager() { return documentHandlerManager; }
    public void setDocumentHandlerManager(DocumentHandlerManager manager) { documentHandlerManager = manager; }
}
The following code describes the configuration of the latter class in a Spring application context:
(...)
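As a sketch (the fully-qualified class name and the referenced bean names are assumptions), the wiring follows the setters of the class above:

```xml
<!-- Sketch: class name and referenced bean names are assumptions;
     the injected properties match the setters of the class above. -->
<bean id="directoryIndexing" class="SimpleDirectoryIndexingImpl">
    <property name="indexFactory" ref="indexFactory"/>
    <property name="documentHandlerManager" ref="documentHandlerManager"/>
</bean>
```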
12.2.5.2. Indexing databases
The support for database indexing looks like the previous one. It is implemented by the DatabaseIndexer class. To use it, you simply call its index method, which needs the JDBC DataSource to use. This class will execute every registered SQL request, and try to index every corresponding result set with the dedicated request handler.
public class DatabaseIndexer extends AbstractIndexer {
    (...)
    void index(DataSource dataSource) { ... }
    void index(DataSource dataSource, boolean optimizeIndex) { ... }
    (...)
}
Important note: if you set the optimizeIndex parameter to true, the index will be optimized after the indexing.
This class is based on a mechanism for handling different queries, which allows the indexer to execute every registered request. To create a new handler, you only need to implement the SqlDocumentHandler interface, which specifies the way to construct a Lucene document from a result set.
public interface SqlDocumentHandler {
    Document getDocument(SqlRequest request, ResultSet rs) throws SQLException;
}
As you can see from the method signature, the SqlRequest class is used to specify the SQL request to execute and its parameters. It defines two constructors, depending on whether or not the request has parameters:
public class SqlRequest {
    (...)
    public SqlRequest(String sql) { ... }
    public SqlRequest(String sql, Object[] params, int[] types) { ... }
    (...)
}
To add and remove requests, use the registerDocumentHandler and unregisterDocumentHandler methods respectively. The following code describes these two methods:
public class DatabaseIndexer extends AbstractIndexer {
    (...)
    public void registerDocumentHandler(SqlRequest sqlRequest, SqlDocumentHandler handler) { ... }
    public void unregisterDocumentHandler(SqlRequest sqlRequest) { ... }
    (...)
}
You can also add listeners to be notified of request processing. In this case, you only need to implement the DatabaseIndexingListener interface, whose methods are called during the indexing. The implementation is notified when:
The indexing of a request begins.
The indexing of a request is successful.
The indexing of a request has failed. The exception is provided to the callback.
public interface DatabaseIndexingListener {
    void beforeIndexingRequest(SqlRequest request);
    void afterIndexingRequest(SqlRequest request);
    void onErrorIndexingRequest(SqlRequest request, Exception ex);
}
To associate a listener with the indexer, you can simply use its addListener method.
public class DatabaseIndexer extends AbstractIndexer {
    (...)
    public void addListener(DatabaseIndexingListener listener) { ... }
    public void removeListener(DatabaseIndexingListener listener) { ... }
    (...)
}
The following code shows the use of all these entities:
public class SimpleDatabaseIndexingImpl implements DatabaseIndexing, InitializingBean {

    private DataSource dataSource;
    private IndexFactory indexFactory;
    private DatabaseIndexer indexer;

    public SimpleDatabaseIndexingImpl() {
    }

    public void afterPropertiesSet() throws Exception {
        if (indexFactory == null) {
            throw new IllegalArgumentException("indexFactory is required");
        }
        this.indexer = new DatabaseIndexer(indexFactory);
    }

    public void prepareDatabaseHandlers() {
        // Register the request handler for the book_page table (no parameters)
        this.indexer.registerDocumentHandler(
            new SqlRequest("select book_page_text from book_page"),
            new SqlDocumentHandler() {
                public Document getDocument(SqlRequest request, ResultSet rs) throws SQLException {
                    Document document = new Document();
                    document.add(Field.Text("contents", rs.getString("book_page_text")));
                    document.add(Field.Keyword("request", request.getSql()));
                    return document;
                }
            });
    }

    public void indexDatabase() {
        indexer.index(dataSource, true);
    }

    public void prepareListeners() {
        DatabaseIndexingListener listener = new DatabaseIndexingListener() {
            public void beforeIndexingRequest(SqlRequest request) {
                System.out.println("Indexing the request: " + request.getSql() + " ...");
            }
            public void afterIndexingRequest(SqlRequest request) {
                System.out.println(" -> request indexed.");
            }
            public void onErrorIndexingRequest(SqlRequest request, Exception ex) {
                System.out.println(" -> Error during the indexing: " + ex.getMessage());
            }
        };
        indexer.addListener(listener);
    }

    public IndexFactory getIndexFactory() { return indexFactory; }
    public void setIndexFactory(IndexFactory factory) { indexFactory = factory; }
    public DataSource getDataSource() { return dataSource; }
    public void setDataSource(DataSource source) { dataSource = source; }
}
The following code describes the configuration of the latter class in a Spring application context:

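The configuration listing is missing from this copy of the document. A minimal sketch of what such a bean definition might look like, assuming dataSource and indexFactory beans are defined elsewhere (bean names and the implementation class package are illustrative, not taken from the source):

```xml
<!-- Hypothetical bean definition for the database indexing facade shown above -->
<bean id="databaseIndexing" class="example.SimpleDatabaseIndexingImpl">
  <property name="dataSource"><ref bean="dataSource"/></property>
  <property name="indexFactory"><ref bean="indexFactory"/></property>
</bean>
```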
12.3. Search
In this section, we describe how the Lucene support makes searching a Lucene index easier. We first deal with the root entities of the support and the way to configure them, then show how to execute a search on the index.
12.3.1. Root entities
Lucene provides two entities for searching an index, the Searcher and Hits classes. These classes are concrete, which can make unit testing difficult. That's why the Lucene support introduces two interfaces, LuceneSearcher and LuceneHits respectively, to define the contracts of those classes. These interfaces offer the same methods as the Lucene Searcher and Hits classes.
The following code describes the methods offered by the LuceneSearcher interface:
public interface LuceneSearcher {
    void close() throws IOException;
    Document doc(int i) throws IOException;
    int docFreq(Term term) throws IOException;
    int[] docFreqs(Term[] terms) throws IOException;
    Explanation explain(Query query, int doc) throws IOException;
    Explanation explain(Weight weight, int doc) throws IOException;
    Similarity getSimilarity();
    int maxDoc() throws IOException;
    Query rewrite(Query query) throws IOException;
    LuceneHits search(Query query) throws IOException;
    LuceneHits search(Query query, Filter filter) throws IOException;
    void search(Query query, Filter filter, HitCollector results) throws IOException;
    TopDocs search(Query query, Filter filter, int n) throws IOException;
    TopFieldDocs search(Query query, Filter filter, int n, Sort sort) throws IOException;
    LuceneHits search(Query query, Filter filter, Sort sort) throws IOException;
    void search(Query query, HitCollector results) throws IOException;
    LuceneHits search(Query query, Sort sort) throws IOException;
    void search(Weight weight, Filter filter, HitCollector results) throws IOException;
    TopDocs search(Weight weight, Filter filter, int n) throws IOException;
    TopFieldDocs search(Weight weight, Filter filter, int n, Sort sort) throws IOException;
    void setSimilarity(Similarity similarity);
    IndexReader getIndexReader();
}
The following code describes the methods offered by the LuceneHits interface:
public interface LuceneHits {
    int length();
    Document doc(int n) throws IOException;
    float score(int n) throws IOException;
    int id(int n) throws IOException;
    Iterator iterator();
}
The main advantage of this mechanism is the ability to dissociate logical and physical resources. The physical resources are the Lucene resources which directly search the index, i.e. the instances of Searcher.
In order to create these resources, the Lucene support implements the factory pattern based on the SearcherFactory interface. This interface hides the creation of logical resources. With this mechanism, you only need to configure an implementation of this interface in order to specify the resource management strategy.
The following code describes the methods offered by the SearcherFactory interface:
public interface SearcherFactory {
    LuceneSearcher getSearcher() throws IOException;
}
Because the factory handles only logical resources, it does not directly provide instances of Searcher. Instead, it returns managed implementations of the LuceneSearcher interface.
12.3.2. Configuration
Spring provides several factories to search a single index, several indexes (sequentially or in parallel), and one or several remote indexes.
12.3.2.1. Configuring a SimpleSearcherFactory
The SimpleSearcherFactory class is the simplest factory for obtaining instances of the LuceneSearcher interface. This factory is based on a single Directory. Its configuration is described in the following code:

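The listing is missing here; the following is a sketch under the assumption that a Directory bean is defined elsewhere in the context (the "directory" property name and the factory's package are assumptions):

```xml
<!-- Hypothetical configuration of a SimpleSearcherFactory -->
<bean id="searcherFactory" class="org.springmodules.lucene.search.factory.SimpleSearcherFactory">
  <property name="directory"><ref bean="fsDirectory"/></property>
</bean>
```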
12.3.2.2. Configuring a MultipleSearcherFactory
The MultipleSearcherFactory class allows searches across several indexes. It is based on the Lucene MultiSearcher class and can be configured with several Directory instances, as shown in the following example:

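The listing is missing here; a sketch under the assumption that two Directory beans are defined elsewhere (the "directories" property name and the factory's package are assumptions):

```xml
<!-- Hypothetical configuration of a MultipleSearcherFactory over two directories -->
<bean id="searcherFactory" class="org.springmodules.lucene.search.factory.MultipleSearcherFactory">
  <property name="directories">
    <list>
      <ref bean="directoryOne"/>
      <ref bean="directoryTwo"/>
    </list>
  </property>
</bean>
```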
12.3.2.3. Configuring a ParallelMultipleSearcherFactory
The ParallelMultipleSearcherFactory class allows searches across several indexes in parallel. It is based on the Lucene ParallelMultiSearcher class and can be configured with several Directory instances, as shown in the following example:

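The listing is missing here; the configuration is analogous to the multiple-searcher case, only the factory class changes (the "directories" property name and the package are assumptions):

```xml
<!-- Hypothetical configuration of a ParallelMultipleSearcherFactory -->
<bean id="searcherFactory" class="org.springmodules.lucene.search.factory.ParallelMultipleSearcherFactory">
  <property name="directories">
    <list>
      <ref bean="directoryOne"/>
      <ref bean="directoryTwo"/>
    </list>
  </property>
</bean>
```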
12.3.3. Template approach
The Lucene support provides a template approach to searching, as Spring does for JDBC, JMS and so on. The developer does not need to know how to interact with the Lucene API in order to perform searches.
LuceneSearchTemplate is the central class of the Lucene support core package (org.springmodules.lucene.search.core) for searching. It simplifies the use of the corresponding Lucene APIs since it handles the creation and release of resources. This helps to avoid common errors such as forgetting to close the searcher. It executes the search, leaving application code to create the search query and extract data from the results.
The template uses the QueryCreator abstraction to create a query through its createQuery method, which must contain the logic to create the query. The following code describes the definition of this interface:
public interface QueryCreator {
    Query createQuery(Analyzer analyzer) throws ParseException;
}
If you don't inject an Analyzer instance into the template, the analyzer parameter of the createQuery method will be null. As a matter of fact, an analyzer isn't always necessary to create a query.
The support provides a ParsedQueryCreator implementation to help construct a query based on a QueryParser or a MultiFieldQueryParser. It uses an inner class, QueryParams, to hold the document fields to use and the query string. This class is used at query creation time and must be created by the configureQuery method. If you need to configure the created query (for example with a call to the setBoost method), you must override the setQueryProperties method, which receives the query as a method parameter.
public abstract class ParsedQueryCreator implements QueryCreator {
    public abstract QueryParams configureQuery();
    protected void setQueryProperties(Query query) { }
    public final Query createQuery(Analyzer analyzer) throws ParseException { (...) }
}
In order to construct a collection of objects from the result of a search, the Lucene support provides the HitExtractor interface, as described in the following code:
public interface HitExtractor {
    Object mapHit(int id, Document document, float score);
}
The LuceneSearchTemplate interface provides several search methods for searching the index. These methods take as parameters different Lucene entities (Query, Filter, Sort and HitCollector) and Lucene support entities (QueryCreator, HitExtractor and SearcherCallback). The following code describes the LuceneSearchTemplate interface:
public interface LuceneSearchTemplate {
    List search(QueryCreator queryCreator, HitExtractor extractor);
    List search(Query query, HitExtractor extractor);
    List search(QueryCreator queryCreator, HitExtractor extractor, Filter filter);
    List search(Query query, HitExtractor extractor, Filter filter);
    List search(QueryCreator queryCreator, HitExtractor extractor, Sort sort);
    List search(Query query, HitExtractor extractor, Sort sort);
    List search(QueryCreator queryCreator, HitExtractor extractor, Filter filter, Sort sort);
    List search(Query query, HitExtractor extractor, Filter filter, Sort sort);
    void search(QueryCreator queryCreator, HitCollector results);
    Object search(SearcherCallback callback);
}
The following example constructs a query (based on the QueryParser class) to search for a text in the "contents" property of indexed documents. It then constructs SearchResult objects from the search results. These objects are added to a list by the support.
The following code describes an example of use of a search method of the template:
final String textToSearch = (...);
List results = getTemplate().search(new ParsedQueryCreator() {
    public QueryParams configureQuery() {
        return new QueryParams("contents", textToSearch);
    }
}, new HitExtractor() {
    public Object mapHit(int id, Document document, float score) {
        return new SearchResult(document.get("filename"), score);
    }
});
Finally, the search template provides a callback in order to work directly on a LuceneSearcher instance, the logical resource used to perform searches. The callback is based on the SearcherCallback interface, as shown in the following code:
public interface SearcherCallback {
    Object doWithSearcher(LuceneSearcher searcher) throws Exception;
}
The callback interface is used by a dedicated search method of the LuceneSearchTemplate interface, as shown in the following code:
public interface LuceneSearchTemplate {
    (...)
    Object search(SearcherCallback callback);
    (...)
}
The following code describes an example of use of this search method of the template:
final String textToSearch = (...);
List results = (List) getTemplate().search(new SearcherCallback() {
    public Object doWithSearcher(LuceneSearcher searcher) throws Exception {
        Query query = new TermQuery(new Term("attribute", textToSearch));
        Hits hits = searcher.search(query);
        (...)
    }
});
12.3.4. Object approach
The Lucene support also allows the creation of search query objects. All classes of this approach are internally based on the LuceneSearchTemplate class and its mechanisms.
LuceneSearchQuery is the base class of the queries. Its internal LuceneSearchTemplate instance is configured by injecting the SearcherFactory and Analyzer instances to use. As this class is abstract, you must implement the search method in order to specify the way to perform your search and how to handle the results.
public abstract class LuceneSearchQuery {
    private LuceneSearchTemplate template = new LuceneSearchTemplate();
    public LuceneSearchTemplate getTemplate() { (...) }
    public void setAnalyzer(Analyzer analyzer) { (...) }
    public void setSearcherFactory(SearcherFactory factory) { (...) }
    public abstract List search(String textToSearch);
}
As this class is very generic, Spring Modules provides a simple subclass to help you implement your search queries. The abstract SimpleLuceneSearchQuery class implements the search method, leaving you to construct the query and specify the way to extract the results.
public abstract class SimpleLuceneSearchQuery extends LuceneSearchQuery {
    protected abstract Query constructSearchQuery(String textToSearch) throws ParseException;
    protected abstract Object extractResultHit(int id, Document document, float score);
    public final List search(String textToSearch) { ... }
}
The following code describes an example of use based on the SimpleLuceneSearchQuery class:
String textToSearch = (...);
LuceneSearchQuery query = new SimpleLuceneSearchQuery() {
    protected Query constructSearchQuery(String textToSearch) throws ParseException {
        QueryParser parser = new QueryParser("contents", getAnalyzer());
        return parser.parse(textToSearch);
    }
    protected Object extractResultHit(int id, Document document, float score) {
        return document.get("filename");
    }
};
List results = query.search(textToSearch);
Chapter 13. Apache OJB
Note
Starting with release 0.6, Spring Modules hosts the Apache OJB support found in the main Spring distribution prior to 2.0 RC4.
Apache OJB (http://db.apache.org/ojb) offers multiple API levels, such as ODMG and JDO. Aside from supporting OJB through JDO, Spring also supports OJB's lower-level PersistenceBroker API as a data access strategy. The corresponding integration classes reside in the org.springmodules.orm.ojb package.
13.1. OJB setup in a Spring environment
In contrast to Hibernate or JDO, OJB does not follow a factory object pattern for its resources. Instead, an OJB PersistenceBroker has to be obtained from the static PersistenceBrokerFactory class. That factory initializes itself from an OJB.properties file, residing in the root of the class path.
In addition to supporting OJB's default initialization style, Spring also provides a LocalOjbConfigurer class that allows for using Spring-managed DataSource instances as OJB connection providers. The DataSource instances are referenced in the OJB repository descriptor (the mapping file), through the "jcd-alias" defined there: each such alias is matched against the Spring-managed bean of the same name.
.........
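The configuration listing is elided above; the Spring side of such a setup might look like the following sketch, where the DataSource bean name matches a "jcd-alias" declared in the OJB repository descriptor (the DataSource implementation and driver settings are illustrative assumptions):

```xml
<!-- Enables Spring-managed DataSources as OJB connection providers -->
<bean id="ojbConfigurer" class="org.springmodules.orm.ojb.support.LocalOjbConfigurer"/>

<!-- Bean name "dataSource" must match the jcd-alias in the OJB repository descriptor -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName"><value>org.hsqldb.jdbcDriver</value></property>
  <property name="url"><value>jdbc:hsqldb:mydb</value></property>
</bean>
```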
A PersistenceBroker can then be opened through standard OJB API, specifying a corresponding "PBKey", usually through the corresponding "jcd-alias" (or relying on the default connection).
13.2. PersistenceBrokerTemplate and PersistenceBrokerDaoSupport
Each OJB-based DAO will be configured with a "PBKey" through bean-style configuration, i.e. through a bean property setter. Such a DAO could be coded against the plain OJB API, working with OJB's static PersistenceBrokerFactory, but will usually rather be used with Spring's PersistenceBrokerTemplate:
... public class ProductDaoImpl implements ProductDao {

    private String jcdAlias;

    public void setJcdAlias(String jcdAlias) {
        this.jcdAlias = jcdAlias;
    }

    public Collection loadProductsByCategory(final String category) throws DataAccessException {
        PersistenceBrokerTemplate pbTemplate =
            new PersistenceBrokerTemplate(new PBKey(this.jcdAlias));
        return (Collection) pbTemplate.execute(new PersistenceBrokerCallback() {
            public Object doInPersistenceBroker(PersistenceBroker pb)
                    throws PersistenceBrokerException {
                Criteria criteria = new Criteria();
                criteria.addLike("category", category + "%");
                Query query = new QueryByCriteria(Product.class, criteria);
                List result = pb.getCollectionByQuery(query);
                // do some further stuff with the result list
                return result;
            }
        });
    }
}
A callback implementation can effectively be used for any OJB data access. PersistenceBrokerTemplate will ensure that PersistenceBrokers are properly opened and closed, and automatically participate in transactions. The template instances are thread-safe and reusable; they can thus be kept as instance variables of the surrounding class. For simple single-step actions such as a single getObjectById, getObjectByQuery, store, or delete call, PersistenceBrokerTemplate offers alternative convenience methods that can replace such one-line callback implementations. Furthermore, Spring provides a convenient PersistenceBrokerDaoSupport base class that provides a setJcdAlias method for receiving an OJB JCD alias, and getPersistenceBrokerTemplate for use by subclasses. In combination, this allows for very simple DAO implementations for typical requirements:
public class ProductDaoImpl extends PersistenceBrokerDaoSupport implements ProductDao {

    public Collection loadProductsByCategory(String category) throws DataAccessException {
        Criteria criteria = new Criteria();
        criteria.addLike("category", category + "%");
        Query query = new QueryByCriteria(Product.class, criteria);
        return getPersistenceBrokerTemplate().getCollectionByQuery(query);
    }
}
As an alternative to working with Spring's PersistenceBrokerTemplate, you can also code your OJB data access against the plain OJB API, explicitly opening and closing a PersistenceBroker. As elaborated in the corresponding Hibernate section, the main advantage of this approach is that your data access code is able to throw checked exceptions. PersistenceBrokerDaoSupport offers a variety of support methods for this scenario, for fetching and releasing a transactional PersistenceBroker as well as for converting exceptions.
13.3. Transaction management
To execute service operations within transactions, you can use Spring's common declarative transaction facilities. For example:
...
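The example listing is elided above; a sketch of such a declarative setup, using the PersistenceBrokerTransactionManager mentioned below together with Spring's TransactionProxyFactoryBean (bean names and the target service are illustrative):

```xml
<bean id="transactionManager"
      class="org.springmodules.orm.ojb.PersistenceBrokerTransactionManager"/>

<!-- Wraps a hypothetical productServiceTarget bean with transactional behavior -->
<bean id="productService"
      class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
  <property name="transactionManager"><ref bean="transactionManager"/></property>
  <property name="target"><ref bean="productServiceTarget"/></property>
  <property name="transactionAttributes">
    <props>
      <prop key="*">PROPAGATION_REQUIRED</prop>
    </props>
  </property>
</bean>
```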
Note that OJB's PersistenceBroker level does not track changes of loaded objects. Therefore, a PersistenceBroker transaction is essentially a database transaction at the PersistenceBroker level, just with an additional first-level cache for persistent objects. Lazy loading will work both with and without the PersistenceBroker being open, in contrast to Hibernate and JDO (where the original Session or PersistenceManager, respectively, needs to remain open).
PersistenceBrokerTransactionManager is capable of exposing an OJB transaction to JDBC access code that accesses the same JDBC DataSource. The DataSource to expose the transactions for needs to be specified explicitly; it won't be autodetected.
Chapter 14. O/R Broker
14.1. Introduction
O/R Broker is a convenience framework for applications that use JDBC. It allows you to externalize your SQL statements into individual files, for readability and easy manipulation, and it allows declarative mapping from tables to Java objects, not just JavaBeans.
Spring Modules Integration for O/R Broker aims at simplifying the use of O/R Broker from within Spring applications. This module supports the same template style programming provided for JDBC, Hibernate, iBATIS, JPA...
Transaction management can be handled through Spring's standard facilities. As with iBATIS, there are no special transaction strategies for O/R Broker, as there is no special transactional resource involved other than a JDBC Connection. Hence, Spring's standard JDBC DataSourceTransactionManager or JtaTransactionManager are perfectly sufficient.
14.2. Setting up the Broker
To use O/R Broker you need to create the Java classes and configure the mappings. Spring Modules Integration for O/R Broker provides a factory called BrokerFactoryBean that loads the resources and creates the Broker.
public class Account {

    private Integer id;
    private String name;
    private String email;

    public void setId(Integer id) { this.id = id; }
    public Integer getId() { return id; }
    public String getName() { return this.name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return this.email; }
    public void setEmail(String email) { this.email = email; }
}
To map this class, we need to create the following account-broker.xml file. The SQL statement "getAccountById" is used to retrieve accounts by their ids; "insertAccount" is used to create new accounts.

Using Spring, we can now configure a Broker through the BrokerFactoryBean:
...
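The configuration listing is elided above; a sketch of what the BrokerFactoryBean definition might look like, consistent with the classpath location mentioned below (the property names are assumptions):

```xml
<bean id="broker" class="org.springmodules.orm.orbroker.BrokerFactoryBean">
  <property name="dataSource"><ref bean="dataSource"/></property>
  <property name="configLocation"><value>classpath:META-INF/account-broker.xml</value></property>
</bean>
```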
As you can see from the previous config, account-broker.xml is saved under the META-INF folder and loaded using a classpath resource.
14.3. BrokerTemplate and BrokerDaoSupport
The BrokerDaoSupport class is a convenience base class similar to the HibernateDaoSupport and JdoDaoSupport classes. Let's implement a DAO:
public class BrokerAccountDao extends BrokerDaoSupport implements AccountDao {

    public Account getAccount(Integer id) throws DataAccessException {
        return (Account) getBrokerTemplate().selectOne("getAccountById", "id", id);
    }

    public void insertAccount(Account account) throws DataAccessException {
        getBrokerTemplate().execute("insertAccount", "account", account);
    }
}
In the DAO, we use the pre-configured BrokerTemplate to execute the queries, after setting up the BrokerAccountDao in the application context and wiring it with our Broker instance:
...
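The wiring listing is elided above; a minimal sketch, assuming the Broker factory bean is named "broker" and the DAO property follows bean-style conventions (the DAO class package is illustrative):

```xml
<bean id="accountDao" class="example.BrokerAccountDao">
  <property name="broker"><ref bean="broker"/></property>
</bean>
```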
The BrokerTemplate offers a generic execute method, taking a custom BrokerCallback implementation as argument. This can be used as follows:
public class BrokerAccountDao extends BrokerDaoSupport implements AccountDao {
    ...
    public void insertAccount(final Account account) throws DataAccessException {
        getBrokerTemplate().execute(new BrokerCallback() {
            public Object doInBroker(Executable executable) throws BrokerException {
                executable.execute("insertAccount", "account", account);
                return null;
            }
        });
    }
}
Any BrokerException thrown will automatically get converted to Spring's generic DataAccessException hierarchy.
14.4. Implementing DAOs based on plain O/R Broker API
DAOs can also be written against the plain O/R Broker API, without any Spring dependencies, directly using an injected Broker. A corresponding DAO implementation looks as follows:
public class BrokerAccountDao implements AccountDao {

    private Broker broker;

    public void setBroker(Broker broker) {
        this.broker = broker;
    }

    public Account getAccount(Integer id) {
        Query qry = this.broker.startQuery();
        qry.setParameter("id", id);
        try {
            return (Account) qry.queryForOne("getAccountById");
        }
        catch (Throwable ex) {
            throw new MyDaoException(ex);
        }
        finally {
            qry.close();
        }
    }
    ...
}
Configuring such a DAO can be done as follows:
...
Chapter 15. OSWorkflow
15.1. Introduction
The OSWorkflow module offers Spring-style support for OSWorkflow, allowing easy configuration and interaction with its API. For OSWorkflow version 2.8 and upwards, beans maintained by the Spring container can be accessed from OSWorkflow definitions as conditions, functions, etc.
15.2. Configuration
OSWorkflow module offers ConfigurationBean for configuring OSWorkflow resources:
classpath:/org/springmodules/examples/workflow/osworkflow/service/documentApproval.xml
ConfigurationBean is not a FactoryBean, as OSWorkflow already manages the creation of workflow instances. The bean extends OSWorkflow's DefaultConfiguration and allows workflows to be loaded using Spring's ResourceLoaders, and the underlying persistence store to be either injected or configured and managed by OSWorkflow. Note that by default, ConfigurationBean uses a memory-based store (MemoryWorkflowStore).
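The configuration snippet above has lost its XML markup in this copy; a sketch of what it might look like, reusing the classpath location that survives in the text (the "workflowLocations" property name and the ConfigurationBean package are assumptions):

```xml
<bean id="osworkflowConfiguration"
      class="org.springmodules.workflow.osworkflow.configuration.ConfigurationBean">
  <property name="workflowLocations">
    <props>
      <prop key="documentApproval">classpath:/org/springmodules/examples/workflow/osworkflow/service/documentApproval.xml</prop>
    </props>
  </property>
</bean>
```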
15.3. Inversion of Control: OsWorkflowTemplate and OsWorkflowCallback
One of the core classes of the OSWorkflow module is OsWorkflowTemplate, which greatly simplifies interaction with the OSWorkflow API by hiding the management of context parameters such as the caller and workflow ID, besides offering the usual advantages of Spring's template pattern such as exception translation (from OSWorkflow checked exceptions into unchecked ones). The template mirrors most of the OSWorkflow API methods; however, for lengthy interactions or cases where the native Workflow is required, OsWorkflowCallback should be used.
It is important to note that OsWorkflowTemplate manages all instances of a single workflow within an application; that is, there should be one template per workflow definition. This results in simple method calls, as the workflow name or id is not required; it will be passed in automatically by the template. Consider the following example:
public class SomeWorkflowFacade {

    private OsWorkflowTemplate template;

    public void setTemplate(OsWorkflowTemplate template) {
        this.template = template;
    }
    ...
    public void executeSomeAction(int actionNumber) {
        template.doAction(actionNumber);
    }

    public void addSomeInput(Object input) {
        template.doAction(INPUT_ACTION, "some_input", input);
    }

    public void accessNativeWorkflowObject() {
        template.execute(new OsWorkflowCallback() {
            public Object doWithWorkflow(Workflow workflow) throws WorkflowException {
                // call the OSWorkflow API directly
                workflow.changeEntryState(someInstanceId, someState);
                return null;
            }
        });
    }
}
In this case, the facade uses the injected template to execute several actions on the workflow; note that the workflow id or caller is never specified, as the template determines them internally. The template is thread safe; the same template instance can be used with different instances of the same workflow.
15.4. Working with workflow instances
As mentioned previously, the template transparently handles the workflow instance ID and caller on which the methods are executed. Both ID and caller values can be retrieved and set using OsWorkflowContextHolder and WorkflowContext. The OSWorkflow module offers several convenient classes when working with Spring MVC:
AbstractWorkflowContextHandlerInterceptor - abstract base class which can set the workflow id from HTTP parameters and store it on the HttpSession
DefaultWorkflowContextHandlerInterceptor - default implementation which retrieves the workflow caller from HttpRequest
AcegiWorkflowContextHandlerInterceptor - Acegi-specific implementation; the workflow caller will be retrieved from Acegi.
Spring Modules CVS contains an osworkflow sample which shows the Spring MVC Handler in action along with the rest of the OSWorkflow module.
15.5. Acegi integration
Besides the already mentioned Acegi web handler, the OSWorkflow module also offers out of the box an Acegi-aware OSWorkflow condition that can be used inside workflow definitions:
<condition type="class">
    <arg name="class.name">org.springmodules.workflow.osworkflow.support.AcegiRoleCondition</arg>
    <arg name="role">ROLE_CREATOR</arg>
</condition>
AcegiRoleCondition will check the current Acegi authorities against the 'role' parameter specified in the workflow definition and return true if a match is found or false otherwise.
Note
Spring Modules CVS contains a comprehensive OSWorkflow module sample which uses the classes discussed.
15.6. OSWorkflow 2.8+ support
The OSWorkflow 2.8 release added two important components:
TypeResolver - allows business components to be resolved and used inside workflow definitions. For Spring users, the most important subclass is SpringTypeResolver, which creates a bridge between the Spring application context and OSWorkflow so it's possible to reuse beans simply by using their name.
VariableResolver - adds translation capabilities for variables.
However, as OSWorkflow 2.7 is still widely deployed, the OSWorkflow module adds support for the new features under a special package: org.springmodules.workflow.osworkflow.v28. A typical configuration under OSWorkflow 2.8 might look like this:
-- Spring application context --

-- OSWorkflow workflow definition --
...
<function type="spring">
    <arg name="bean.name">whiteHorseFunction</arg>
</function>
...
In this case, the whiteHorseFunction is retrieved from the Spring application context and used inside the workflow instance; this is a powerful concept, as the business components can be configured inside Spring and take advantage of advanced IoC functionality such as transaction demarcation or custom scoping.
Chapter 16. Spring MVC extra
16.1. About
The Spring MVC extra module contains classes for improving and extending the Spring MVC Framework.
16.2. Usage guide
16.2.1. Using the ReflectivePropertyEditor
The org.springmodules.web.propertyeditors.ReflectivePropertyEditor is a property editor implementation capable of converting any object type to and from text, using Java reflection. It converts objects to text and vice versa thanks to four configurable parameters:
dataAccessObject, the object used for converting from text to the actual desired object: it could be a Factory, or a DAO.
dataAccessMethod, the method of the dataAccessObject object to call for converting from text to object.
propertyName, the name of the property which will represent the object text value.
stringConvertor, for converting the string value to be passed to the dataAccessMethod.
16.2.2. Using the ReflectiveCollectionEditor
The org.springmodules.web.propertyeditors.ReflectiveCollectionEditor is a property editor implementation for converting a collection of strings to a collection of objects and vice versa. For converting, you have to define the following:
dataAccessObject, the object used for converting from text to the actual desired object: it could be a Factory, or a DAO.
dataAccessMethod, the method of the dataAccessObject object to call for converting from text to object.
propertyName, the name of the property which will represent the object text value.
stringConvertor, for converting the string value to be passed to the dataAccessMethod.
This class is to be used for binding collections in Spring MVC: for example, if you want to bind a collection of customers starting from a collection of customer ids, obtained from some kind of selection list, you can use this class for automatically converting the collection of customer ids (strings) to a collection of actual customer objects.
16.2.3. Using EnhancedSimpleFormController and EnhancedAbstractWizardFormController
The org.springmodules.web.servlet.mvc.EnhancedSimpleFormController and org.springmodules.web.servlet.mvc.EnhancedAbstractWizardFormController are Spring MVC controllers which provide facilities for setting custom property editors in a declarative way. Using the setCustomEditor(Map) method you can set a map of custom property editors. Each key is either the class of the property, in the form class:CLASS_NAME, if you want to edit all properties of the given type, or its path, in the form property:PROPERTY_PATH, if you want to edit only the given property; each value is the name of a bean in the application context. Please note that the bean has to be a PropertyEditor and must be declared as a prototype. If the class: and property: prefixes above are omitted, the key is treated as a class name. So, if you extend the EnhancedSimpleFormController for your controllers, you can use the method above and avoid overriding and manually coding the initBinder method.
Here is the key/value mapping of an EnhancedSimpleFormController custom editor configuration in the Spring application context, using the default prefix:
org.acme.Office = officeEditor
This one uses the class prefix:
class:org.acme.Office = officeEditor
Finally, this one uses the property prefix:
property:office = officeEditor
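Putting it together, the declarative configuration might look like the following sketch (the customEditor property name is inferred from the setCustomEditor(Map) setter, and the officeController/officeEditor bean names are illustrative assumptions):

```xml
<!-- Hypothetical sketch: bean names and the exact property name are assumptions -->
<bean id="officeController" class="org.acme.web.OfficeController"> <!-- extends EnhancedSimpleFormController -->
    <property name="customEditor">
        <map>
            <!-- edit all properties of type org.acme.Office ... -->
            <entry key="class:org.acme.Office" value="officeEditor"/>
            <!-- ... or edit only the 'office' property -->
            <entry key="property:office" value="officeEditor"/>
        </map>
    </property>
</bean>

<!-- the editor must be a PropertyEditor declared as a prototype -->
<bean id="officeEditor" class="org.acme.web.OfficeEditor" singleton="false"/>
```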
The same applies to the EnhancedAbstractWizardFormController.
16.2.4. Using the FullPathUrlFilenameViewController
The org.springmodules.web.servlet.mvc.FullPathUrlFilenameViewController is an AbstractUrlViewController which, like the UrlFilenameViewController, transforms the page name at the end of a URL into a view name, but preserves the full path in the web URL. For example, the URL "/foo/index.html" will correspond to the "foo/index" view name.
16.2.5. Using the AbstractRssView
The org.springmodules.web.servlet.view.AbstractRssView is an abstract superclass for creating RSS views, with the capability of supporting many syndication formats through the use of the Rome library.
AbstractRssView uses ATOM 1.0 as its default syndication format: you can change it by setting the feed type through the following method:
public void setDefaultFeedType(String)
Moreover, you can select the syndication format on the fly, using the HTTP request parameter type; for example, http://www.example.org/example.xml?type=rss_1.0 requests an RSS 1.0 feed.
The feed type naming format is explained in the Rome documentation.
Then, for constructing your feed, you need to override the following method:
protected abstract void buildFeed(Map model, HttpServletRequest request, HttpServletResponse response, SyndFeed feed)
Here, you can construct your feed by filling the SyndFeed object.
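A subclass might look like the following sketch (the NewsRssView class name and the "items" model key are illustrative assumptions; the Rome types come from the com.sun.syndication packages):

```java
// Hypothetical sketch: class name and model key are examples only
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.sun.syndication.feed.synd.SyndEntry;
import com.sun.syndication.feed.synd.SyndEntryImpl;
import com.sun.syndication.feed.synd.SyndFeed;

import org.springmodules.web.servlet.view.AbstractRssView;

public class NewsRssView extends AbstractRssView {

    protected void buildFeed(Map model, HttpServletRequest request,
                             HttpServletResponse response, SyndFeed feed) {
        feed.setTitle("News");
        feed.setLink("http://www.example.org/news");
        feed.setDescription("Latest news items");

        List entries = new ArrayList();
        // "items" is an assumed model key holding news headlines
        List items = (List) model.get("items");
        for (Iterator it = items.iterator(); it.hasNext();) {
            String headline = (String) it.next();
            SyndEntry entry = new SyndEntryImpl();
            entry.setTitle(headline);
            entries.add(entry);
        }
        feed.setEntries(entries);
    }
}
```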
Chapter 17. Validation
17.1. Commons Validator
The Commons Validator is a library that allows you to perform validation based on rules specified in XML configuration files.
TODO: Describe the concepts of Commons Validator in more details.
17.1.1. Configure an Validator Factory
Firstly you need to configure the validator factory, which is the factory used to get Validator instances. For this, the support provides the class DefaultValidatorFactory in the package org.springmodules.validation.commons.
With the validationConfigLocations property you specify the file containing the Commons Validator rules and the file containing the validation rules specific to the application.
The following code shows how to configure this factory.
<bean id="validatorFactory" class="org.springmodules.validation.commons.DefaultValidatorFactory">
    <property name="validationConfigLocations">
        <list>
            <value>/WEB-INF/validator-rules.xml</value>
            <value>/WEB-INF/validation.xml</value>
        </list>
    </property>
</bean>
17.1.2. Use a dedicated validation-rules.xml
The validation-rules.xml file must contain Commons Validator elements based on the classes provided by this support in Spring Modules.
For example, the configuration of the "required" and "requiredif" entities must now be in the validation-rules.xml file.

The validation sample in the distribution provides a complete validation-rules.xml based on the classes of the support.
Note that validwhen is not supported at the moment. However, some code is provided in JIRA; for more information, see issues MOD-38 and MOD-49.
17.1.3. Configure a Commons Validator
Then you need to configure the validator itself, based on the validator factory configured previously. It corresponds to an adapter that hides Commons Validator behind a Spring Validator.
The following code shows how to configure this validator.

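A minimal sketch of this bean definition, assuming the DefaultBeanValidator adapter class and the validatorFactory bean name used earlier:

```xml
<!-- Sketch: bean ids are illustrative -->
<bean id="beanValidator" class="org.springmodules.validation.commons.DefaultBeanValidator">
    <property name="validatorFactory" ref="validatorFactory"/>
</bean>
```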
17.1.4. Server side validation
Spring MVC provides the SimpleFormController implementation of the Controller interface in order to process HTML forms. It allows the information processed by the controller to be validated through the controller's validator property. In the case of Commons Validator, this property must be set to the beanValidator bean configured previously.
The following code shows how to configure a controller which validates a form on the server side using the support of Commons Validator.
(...)(...)
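Such a controller definition might look like the following sketch (the controller class, views and bean id are illustrative assumptions):

```xml
<!-- Sketch: class names and view names are examples only -->
<bean id="myFormController" class="org.springmodules.sample.MyFormController">
    <property name="commandClass" value="org.springmodules.sample.MyForm"/>
    <property name="validator" ref="beanValidator"/>
    <property name="formView" value="myForm"/>
    <property name="successView" value="success"/>
</bean>
```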
The beanValidator bean uses the value of the commandClass property of the controller to select the name of the form tag in the validation.xml file. The configuration is not based on the commandName property. For example, with the class name org.springmodules.sample.MyForm, Commons Validator must contain a form tag with myForm as the value of its name attribute. The following code shows the contents of this file.
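The corresponding validation.xml contents might look like this sketch (the name field is an assumed property of MyForm):

```xml
<!-- Sketch: the validated field is an example -->
<form-validation>
    <formset>
        <form name="myForm">
            <field property="name" depends="required">
                <arg0 key="myForm.name"/>
            </field>
        </form>
    </formset>
</form-validation>
```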
Important
In version 0.6 the logic to resolve the form names has changed. In the previous versions org.springframework.util.StringUtils.uncapitalize(...) was used to transform the command class name to the form name. From version 0.6 java.beans.Introspector.decapitalize(...) is used instead. The main difference between the two approaches is that the second one complies better with the JavaBeans naming conventions; for example, URLCommand is translated to URLCommand and not uRLCommand.
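The difference is easy to verify with plain JDK code:

```java
import java.beans.Introspector;

public class DecapitalizeDemo {
    public static void main(String[] args) {
        // Standard JavaBeans decapitalization: "MyForm" -> "myForm"
        System.out.println(Introspector.decapitalize("MyForm"));     // myForm
        // Two leading capitals are left untouched: "URLCommand" stays "URLCommand"
        System.out.println(Introspector.decapitalize("URLCommand")); // URLCommand
    }
}
```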

17.1.5. Partial Bean Validation Support
Partial validation support enables partial validation of beans where not all properties are validated but only selected ones.
Commons validator enables partial validation by specifying the page attribute for each field in the form configuration:
test(*this* == password)

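For example, a wizard form split over two pages might be configured as follows (the property names are illustrative; the page attribute is what drives partial validation):

```xml
<!-- Sketch: the validated properties are examples -->
<form name="person">
    <field property="firstName" depends="required" page="0"/>
    <field property="lastName"  depends="required" page="0"/>
    <field property="city"      depends="required" page="1"/>
</form>
```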
The org.springmodules.validation.commons.ConfigurablePageBeanValidator and org.springmodules.validation.commons.DefaultPageBeanValidator classes support partial validation by setting their page property. The value of this property will be matched with the page attribute in the form configuration, and only the fields with the appropriate page configured will be validated.
The following is an example of a partial validation support usage within a wizard controller:
personPage0personPage1...
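The personPage0 and personPage1 validators referenced above might be defined like this sketch (the bean ids and the validatorFactory reference are assumptions; check the class javadoc for the exact properties):

```xml
<!-- Sketch: bean ids and property wiring are assumptions -->
<bean id="personPage0" class="org.springmodules.validation.commons.DefaultPageBeanValidator">
    <property name="validatorFactory" ref="validatorFactory"/>
    <property name="page" value="0"/>
</bean>
<bean id="personPage1" class="org.springmodules.validation.commons.DefaultPageBeanValidator">
    <property name="validatorFactory" ref="validatorFactory"/>
    <property name="page" value="1"/>
</bean>
```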
The controller will look like this:
public class PersonWizardController extends AbstractWizardFormController {
    ...
    protected void validatePage(Object command, Errors errors, int page) {
        Validator[] validators = getValidators();
        for (int i = 0; i < validators.length; i++) {
            ...
        }
    }
}
17.1.6. Client side validation
The support of Commons Validator in Spring Modules also provides the possibility of using client-side validation. It provides a dedicated taglib to generate the validation JavaScript code. To use this taglib, you first need to declare it at the beginning of your JSP files as follows.
<%@ taglib uri="http://www.springmodules.org/tags/commons-validator" prefix="validator" %>
You then need to include the generated JavaScript code in the JSP file by using the javascript tag, as follows.

Finally, you need to set the onsubmit attribute on the form tag in order to trigger the validation on submission of the form.

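Together, the two steps above might look like this sketch in the JSP (the myForm form name is illustrative; the validateMyForm function name follows the Commons Validator convention of prefixing the form name with "validate" and is an assumption to verify against the generated code):

```jsp
<%-- Sketch: form and function names are examples --%>
<validator:javascript formName="myForm"/>

<form method="post" action="myForm.htm"
      onsubmit="return validateMyForm(this);">
    ...
</form>
```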
17.2. Valang
Valang (Va-lidation Lang-uage) provides a simple and intuitive way of creating Spring validators. It was initially created with three goals in mind:
Enable writing validation rules quickly, without the need to write classes or even any Java code.
Ease the use of Spring validation tools.
Make validation rules compact, readable and easily maintainable.
 
Valang is built upon two major constructs: the Valang expression language and Valang validators. The former is a generic boolean expression language that enables expressing boolean rules in a "natural language"-like fashion. The latter is a concrete implementation of the Spring Validator interface that is built around the expression language.
Before going into details, let's first have a look at a small example, just to get an idea of what Valang is and how it can be used. For this example, we'll assume a Person class with two properties, firstName and lastName. In addition, there are two main validation rules that need to be applied:
The first name of the person must be shorter than 30 characters.
The last name of the person must be shorter than 50 characters.
 
One way of applying these validation rules (and currently the most common one) is to implement the Validator interface specifically for the Person class:
public class PersonValidator implements Validator {

    public boolean supports(Class aClass) {
        return Person.class.equals(aClass);
    }

    public void validate(Object person, Errors errors) {
        String firstName = ((Person) person).getFirstName();
        String lastName = ((Person) person).getLastName();
        if (firstName == null || firstName.length() >= 30) {
            errors.reject("first_name_length", new Object[] { new Integer(30) },
                    "First name must be shorter than 30");
        }
        if (lastName == null || lastName.length() >= 50) {
            errors.reject("last_name_length", new Object[] { new Integer(50) },
                    "Last name must be shorter than 50");
        }
    }
}
While this is a perfectly valid approach, it has its downsides. First, it is quite verbose and time consuming: quite a lot of code to write for just two very simple validation rules. Second, it requires an additional class which clutters the code (in case it is an inner class) or the design; just imagine having a validator class for each of the domain model objects in the application.
The following code snippet shows how to create a valang validator to apply the same rules as above:

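Such a definition might look like the following sketch (the bean id is illustrative):

```xml
<!-- Sketch: the bean id is an example -->
<bean id="personValidator" class="org.springmodules.validation.valang.ValangValidator">
    <property name="valang">
        <value><![CDATA[
            { firstName : length(?) < 30 : 'First name must be shorter than 30' }
            { lastName : length(?) < 50 : 'Last name must be shorter than 50' }
        ]]></value>
    </property>
</bean>
```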
There are a few things to notice here. First, no new class is created: with Valang, one can reuse a predefined validator class. Second, this validator is not part of the Java code, but is put in the application context instead: in the above case, the ValangValidator is instantiated and can be injected into other objects in the system. Last but not least, the validation rules are defined using the Valang expression language, which is very simple and quick to define.
The following two sections will elaborate on the expression language and the use of the Valang validator in greater detail.
17.2.1. Valang Syntax
The Valang syntax is based on the Valang expression language and the Valang validation rule configuration. As mentioned above, the former is a boolean expression language in which the validation rule predicates (conditions) are expressed. The latter binds the rule predicate to a key (usually a bean property), an error message, and optionally an error code and arguments.
17.2.1.1. Rule Configuration
Here is the basic structure of the valang rule configuration:
{ <key> : <expression> : <error_message> [ : <error_code> [ : <error_args> ] ] }
<key> - The key to which the validation error will be bound. (mandatory)
<expression> - A Valang expression that defines the predicate (condition) of the validation rule. (mandatory)
<error_message> - The error message of the validation rule. The message is mandatory but can be an empty string if not used. This message is also used as the default message in case the error code could not be resolved. (mandatory)
<error_code> - An error code that represents the validation error. Used to support i18n. (optional)
<error_args> - A comma separated list of arguments to associate with the error code. When error codes are resolved, these arguments may be used in the resolved message. (optional)
17.2.1.2. Expression Language
As mentioned, the Valang expression language is used to define the predicate to be associated with the validation rule. The expression is always evaluated against a context bean. The expression can be defined as follows:
<expression> ::= <predicate> ( ( "AND" | "OR" ) <predicate> )+ | <predicate>
The <predicate> is an evaluation composed of operators, literals, bean properties, functions, and mathematical expressions.
Operators
The following are the supported operators:
Binary Operators:
String, boolean, date and number operators:
= | == | IS | EQUALS
!= | <> | >< | IS NOT | NOT EQUALS
 
Number and date operators:
> | GREATER THAN | IS GREATER THAN
< | LESS THAN | IS LESS THAN
>= | => | GREATER THAN OR EQUALS | IS GREATER THAN OR EQUALS
<= | =< | LESS THAN OR EQUALS | IS LESS THAN OR EQUALS
 
Unary Operators:
Object operators:
NULL | IS NULL
NOT NULL | IS NOT NULL
 
String operators:
HAS TEXT
HAS NO TEXT
HAS LENGTH
HAS NO LENGTH
IS BLANK
IS NOT BLANK
IS UPPERCASE | IS UPPER CASE | IS UPPER
IS NOT UPPERCASE | IS NOT UPPER CASE | IS NOT UPPER
IS LOWERCASE | IS LOWER CASE | IS LOWER
IS NOT LOWERCASE | IS NOT LOWER CASE | IS NOT LOWER
IS WORD
IS NOT WORD
 
Special Operators:
BETWEEN
NOT BETWEEN
IN
NOT IN
NOT
 
These operators are case insensitive. Binary operators have a left and a right side. Unary operators only have a left side.
Value types on both sides of the binary operators must always match. The following expressions will throw an exception:
name > 0
age == 'some string'
BETWEEN / NOT BETWEEN Operators
The BETWEEN and NOT BETWEEN operators have the following special syntax:
<predicate> ::= <value> BETWEEN <value> AND <value>
<predicate> ::= <value> NOT BETWEEN <value> AND <value>
Both the left side and the values can be any valid combination of literals, bean properties, functions and mathematical operations.
Examples:
width between 10 and 90
length(name) between minLength and maxLength
IN / NOT IN Operators
The IN and NOT IN operators have the following special syntax:
<predicate> ::= <value> IN <value> ( "," <value> )*
<predicate> ::= <value> NOT IN <value> ( "," <value> )*
Both the left side and the values can be any valid combination of literals, bean properties, functions and mathematical operations.
There's another special syntax where a java.util.Collection, java.util.Enumeration, java.util.Iterator or object array instance can be retrieved from a bean property. These values are then used as the right side of the operator. This feature makes it possible to create dynamic sets of values based on other properties of the bean.
<predicate> ::= <value> IN "@" <bean-property>
<predicate> ::= <value> NOT IN "@" <bean-property>
Examples:
size in 'S', 'M', 'L', 'XL'
size in @sizes
NOT Operator
The not operator has the following special syntax:
<predicate> ::= "NOT" <predicate>
This operator inverts the result of one predicate or a set of predicates.
Literals
Four types of literals are supported by Valang: string, number, date, and boolean.
Strings are quoted with single quotes:
'Bill', 'George', 'Junior'
 
Number literals are unquoted and are parsed by java.math.BigDecimal:
0.70, 1, 2000, -3.14
 
Date literals are delimited with square brackets and are parsed upon each evaluation by a special date parser. [TODO: write documentation for date parser]
Boolean literals are not quoted and have the following form:
::= ( "TRUE" | "YES" | "FALSE" | "NO" )
 
Bean Properties
As mentioned above, Valang always evaluates expressions against a context bean. One can access this bean's properties directly within the expression. To better understand how this works, let's assume a Person class with the following properties:
name (String)
address (Address)
specialFriends (Map)
friends (Person[])
enemies (List)
 
 
The Address class has the following properties:
street (String)
city (String)
country (String)
 
The context bean properties can be accessed directly by using their names:
name, address, attributes
 
Accessing nested properties is also supported by using a dot-separated expression. For example, accessing the street of the person can be done as follows:
address.street
 
List and/or array elements can be accessed by their index number as follows:
friends[1].name
enemies[0].address.city
 
Map entries can also be accessed by their keys:
specialFriends[bestFriend].name
 
Functions
Valang expressions can contain functions. A function is basically an operation which accepts arguments and returns a result. Functions can accept one or more arguments where each may be either a literal, bean property, or a function as described in the following definition:
<function> ::= <function-name> "(" <argument> ( "," <argument> )* ")"
<argument> ::= <literal> | <bean-property> | <function>
 
Valang ships with the following predefined functions:
Table 17.1. Functions
Name Description
length Returns the size of the passed-in collection or array. If the passed-in argument is neither, the length of the string returned from the toString() call on the argument is returned.
len See length above
size See length above
count See length above
match Matches the given regular expression (first argument) to the string returned from the toString() call on the passed in value (second argument).
matches See match above.
email Returns true if the string returned from the toString() call on the passed-in argument represents a valid email address.
upper Converts the string returned from the toString() call on the argument to upper case.
lower Converts the string returned from the toString() call on the argument to lower case.
! Not operation on a boolean value.
resolve Wraps a string in org.springframework.context.support.DefaultMessageSourceResolvable.
inRole Accepts a role name as an argument and returns true if the current user has this role. This function uses Acegi to fetch the current user.
 
Examples:
length(?)
size(upper('test'))
upper(address.city)
 
One of the more powerful features in Valang expression language is that it is extensible with custom functions. To add a custom function one first needs to implement the org.springmodules.validation.valang.functions.Function interface or extend the org.springmodules.validation.valang.functions.AbstractFunction. Then, when using the ValangValidatorFactoryBean or ValangValidator, register the new function with the customFunctions property using the function name as the key. [TODO: show an example of a custom function]
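A custom function might look like the following sketch. The doGetResult template method and the (Function[], int, int) constructor reflect the AbstractFunction class as the author understands it; verify them against the class javadoc. The example trims whitespace from its single argument:

```java
// Hypothetical sketch: verify constructor and method names against the
// org.springmodules.validation.valang.functions.AbstractFunction javadoc.
import org.springmodules.validation.valang.functions.AbstractFunction;
import org.springmodules.validation.valang.functions.Function;

public class TrimFunction extends AbstractFunction {

    public TrimFunction(Function[] arguments, int line, int column) {
        super(arguments, line, column);
        definedExactNumberOfArguments(1); // this function takes one argument
    }

    protected Object doGetResult(Object target) throws Exception {
        // Evaluate the single argument against the target bean and trim it
        Object value = getArguments()[0].getResult(target);
        return value != null ? value.toString().trim() : null;
    }
}
```

It could then be registered with the customFunctions property, using for example trim as the key.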
Mathematical Expressions
The following mathematical operators are supported:
+
-
*
/ | div
% | mod
 
 
Parentheses are supported and expressions are parsed left to right, so that
2 - 3 + 5 = 4
Values in the mathematical expression can be literals, bean properties, and functions.
Examples:
(2 * (15 - 3) + (20 / 5)) * -1
(22 / 7) - (22 div 7)
10 % 3
length(?) mod 4
 
17.2.2. Valang Validator Support
As we saw in the previous section, Valang offers quite a rich and powerful expression language to represent validation rules, a language that in most cases relieves the user from creating custom Validator classes.
The only missing piece of the puzzle now is to see how this expression language and the validation rule configuration integrate with Spring validation support.
The two most important constructs of Spring validation are the org.springframework.validation.Validator and org.springframework.validation.Errors classes. The Errors class serves as a registry for validation errors that are associated with an object (a.k.a. the target object). The Validator interface provides a mechanism to validate objects and register the various validation errors with the passed-in Errors instance.
Valang ships with some support classes that leverage the power of the Valang expression language and validation rule configuration, and integrate nicely with Spring validation. The most important of these is the org.springmodules.validation.valang.ValangValidator class.
17.2.2.1. ValangValidator
The org.springmodules.validation.valang.ValangValidator class is a concrete implementation of Spring‘s Validator interface. The most important property of this validator is the valang property.
The valang property is of type java.lang.String and holds a textual representation of the validation rules that are applied by the validator. We saw in the previous section that a single validation rule is represented in valang using the following format:
{ <key> : <expression> : <error_message> [ : <error_code> [ : <error_args> ] ] }
Since a validator may apply more than just one rule, the valang property accepts a set of such rule definitions.
Example:
{ firstName : length(?) < 30 : 'First name too long' : 'first_name_length' : 30 }
{ lastName : length(?) < 50 : 'Last name too long' : 'last_name_length' : 50 }
 
There are two ways to use the Valang validator. It can be explicitly instantiated and initialized with the rule definitions by calling the setValang(String) method on it. But the recommended way is actually to let the Spring IoC container do this job for you. The Valang validator was designed as a POJO specifically for that reason: to easily define it within the Spring application context and inject it into all other dependent objects in the application.
Here is an example of how to define a simple valang validator within the application context:

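A sketch of such a definition, using the rules shown above (the bean id is illustrative):

```xml
<!-- Sketch: the bean id is an example -->
<bean id="personValidator" class="org.springmodules.validation.valang.ValangValidator">
    <property name="valang">
        <value><![CDATA[
            { firstName : length(?) < 30 : 'First name too long' : 'first_name_length' : 30 }
            { lastName : length(?) < 50 : 'Last name too long' : 'last_name_length' : 50 }
        ]]></value>
    </property>
</bean>
```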
This validator defines two validation rules - one for the maximum size of the first name of the person and the other for the maximum size of the last name of the person.
Also notice that the above validator is unaware of the object type it validates. The Valang validator is not restricted to a specific class to be validated. It will always apply the defined validation rules as long as the validated object has the validated properties (firstName and lastName in this case).
This configuration should be enough for most cases. But there are some cases in which you need to apply extra configuration. With ValangValidator it is possible to register custom functions (thus extending the Valang expression language). This can be done by registering the functions within the customFunctions property, where the function name serves as the registration key.
Here is an example of a Valang validator configuration with a custom function (the doIt registration key shown here is illustrative):
<property name="customFunctions">
    <map>
        <entry key="doIt" value="org.springmodules.validation.valang.functions.DoItFunction"/>
    </map>
</property>
It is also possible to register extra property editors and custom date parsers for valang to use. For more details about valang validator configuration options, please refer to the class javadoc.
17.2.3. Client Side Validation
The Valang to JavaScript conversion service is a simple extension to the Valang validator package that allows you to use the same Valang validation rules for client-side JavaScript validation that you are already using for your server-side controller validation.
The default JavaScript validator will be activated when the user tries to submit the form being validated, and any errors detected by the validator will be presented in an alert box; but if you wish, you may customize any aspect of the validator to suit your needs.
17.2.3.1. Getting Started
Note
This Getting Started guide assumes that you are using Valang as the Validator implementation for your Spring MVC controllers and using JSP for your views.
There are 3 simple steps that are needed to enable the Valang to JavaScript translation:
Step 1 - Add the Valang rules exporter to your DispatcherServlet configuration
Because the translation of your validation rules happens in a custom tag it is necessary for any Valang validation rules used in your controller to be exported into the JSP page context. A convenient Spring MVC interceptor is provided that will automatically export the Valang validation rules for any controllers that make use of them.
If you are using the default handler mapping provided by the DispatcherServlet you will need to add the following to your dispatcher config file:

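A sketch of this configuration (BeanNameUrlHandlerMapping is the standard Spring default handler mapping; the interceptor's fully-qualified name is an assumption to verify against the distribution):

```xml
<!-- Sketch: verify the interceptor's package in your distribution -->
<bean class="org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping">
    <property name="interceptors">
        <list>
            <bean class="org.springmodules.validation.valang.javascript.taglib.ValangRulesExportInterceptor"/>
        </list>
    </property>
</bean>
```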
or, if you have already configured an alternative handler mapping, all you need to do is include the additional ValangRulesExportInterceptor in the list of interceptors used by your custom handler mapping:
Step 2 - Import the Valang custom tag and JavaScript codebase into your JSP view
In the JSP file that is used to render the view of the form you wish to validate, you will need to import the Valang custom tag library by including the following line at the top of the file:
<%@taglib uri="/WEB-INF/tlds/valang.tld" prefix="valang" %>
then somewhere in the HTML section of your JSP template include the JavaScript codebase that you previously saved to your web application.
includeScriptTags - indicates whether the generated code should also generate the wrapping script tags.
Tip
Under js/lib/core you'll also find a springxt-min.js file: it is a minified version of springxt.js that you can include in your pages to reduce the download size.
18.3.3.2. Optional Javascript libraries
XT Ajax Framework integrates with a number of other Javascript libraries, in order to provide additional Ajax functionalities:
Prototype (version 1.5.0 rc1 or higher) : see javadocs for the org.springmodules.xt.ajax.action.prototype classes.
Script.aculo.us (version 1.6.4 or higher) : see javadocs for the org.springmodules.xt.ajax.action.prototype.scriptaculous classes.
18.3.4. Tutorials
In this section we'll show you, through practical, step-by-step tutorials, how to work with the XT Ajax Framework. All tutorials are based on the XT Ajax samples: take a look at the samples provided with the main distribution, or check them out from source control if you want to see the full source code.
18.3.4.1. Working with Ajax action events.
Ajax action events are used for updating pages without submitting data to any configured Spring MVC controllers: the execution of an Ajax action event doesn't invoke controllers.
In this tutorial we'll implement a simple Ajax sample that lets you fill a selection box with a list of office names after clicking a button, showing you how to:
Write the web page.
Write the Ajax handler.
Map the Ajax handler to the web page URL.
18.3.4.1.1. Step 1 : Writing the web page.
Writing a web page that fires an Ajax action event is not different than writing a normal JSP based web page as you‘d usually do.
First, import the core XT Ajax javascript library:

Our web page must fill the selection list after clicking a button. So, write a button input field that fires an Ajax action event with loadOffices as event id:

Then, write the select HTML element to update and give it an id:

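The page fragments above might look like the following sketch (the springxt.js path and the doAjaxAction function name are assumptions about the XT JavaScript library; verify them against the distribution):

```jsp
<%-- Sketch: script path and javascript function name are assumptions --%>
<script type="text/javascript" src="js/springxt.js"></script>

<%-- fire the Ajax action event with "loadOffices" as event id --%>
<input type="button" value="Load offices"
       onclick="doAjaxAction('loadOffices', this);"/>

<%-- the element to fill: its id identifies the page part to update --%>
<select id="offices" name="office"></select>
```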
Recall that the id attribute is used for identifying the page part to update, that is, the element to fill with new content.
That‘s all ... let‘s write our Ajax handler!
18.3.4.1.2. Step 2 : Writing the Ajax handler.
Our Ajax handler will extend the org.springmodules.xt.ajax.AbstractAjaxHandler, so it will have a method called after the Ajax event to handle, that will accept an org.springmodules.xt.ajax.AjaxActionEvent:
public AjaxResponse loadOffices(AjaxActionEvent event)
Now, let us analyze how to handle the event, by implementing the loadOffices method above.
First, we have to retrieve a list of offices from some kind of data access object:
Collection offices = store.getOffices();
Then, we have to create the components to render: a list of org.springmodules.xt.ajax.component.Option components, representing the option HTML elements and containing the office id as value and the office name as content.
// Create the options list:
List options = new LinkedList();
// The first option is just a dummy one:
Option first = new Option("-1", "Select one ...");
options.add(first);
// Create options representing offices:
for (IOffice office : offices) {
    Option option = new Option(office, "officeId", "name");
    options.add(option);
}
Now, we have to replace the HTML content of the select element shown above, so we have to create an org.springmodules.xt.ajax.action.ReplaceContentAction, adding to it the components to render (the list of options):
ReplaceContentAction action = new ReplaceContentAction("offices", options);
Note that the ReplaceContentAction updates the HTML element with offices as id.
Finally, we create an org.springmodules.xt.ajax.AjaxResponse, add the action and return it!
AjaxResponse response = new AjaxResponseImpl();
response.addAction(action);
return response;
That‘s the simple implementation of the loadOffices method!
18.3.4.1.3. Step 3 : Mapping the Ajax handler to the web page URL.
Say the web page URL is: www.example.org/xt/ajax/tutorial1.page. Mapping the Ajax handler is simply a matter of configuring the Ajax handler bean (LoadOfficesHandler in the snippet below) in the Spring application context and mapping it in the AjaxInterceptor:
ajaxLoadOfficesHandler
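The configuration might look like this sketch (bean names and the handlerMappings property layout are assumptions to verify against the AjaxInterceptor javadoc):

```xml
<!-- Sketch: bean names and mapping syntax are assumptions -->
<bean id="ajaxInterceptor" class="org.springmodules.xt.ajax.AjaxInterceptor">
    <property name="handlerMappings">
        <props>
            <prop key="/ajax/tutorial1.page">loadOfficesHandler</prop>
        </props>
    </property>
</bean>

<bean id="loadOfficesHandler" class="org.acme.web.ajax.LoadOfficesHandler"/>
```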
18.3.4.2. Working with Ajax submit events.
Ajax submit events are used for updating pages after submitting data to your Spring MVC controllers.
In this tutorial we'll implement a simple Ajax sample that lets you choose an office and list its employees in a table after submitting the form by clicking a button. We'll see how to:
Write the web page.
Write the Spring MVC controller.
Write the Ajax handler.
Map the Ajax handler to the web page URL.
18.3.4.2.1. Step 1 : Writing the web page.
Writing a web page that fires a submit event is not different than writing a normal JSP based web page as you‘d usually do.
First, import the core XT Ajax javascript library:

Employees are listed in an HTML table after clicking a button. So, you have to write a button input field that fires an Ajax submit event with listEmployees as event id:

Then, write the HTML table element to use for listing the employees:
<table>
    <thead>
        <tr><th>Firstname</th><th>Surname</th><th>Matriculation Code</th></tr>
    </thead>
    <tbody id="employees">
    </tbody>
</table>
Please note the table body, with an employees id attribute: this is the page part that will be updated with the employees list.
That‘s all ... let‘s take a look at our Spring MVC controller!
18.3.4.2.2. Step 2 : Writing the Spring MVC controller.
XT Ajax Framework requires only little changes to the way you write Spring MVC controllers.
For the purposes of our example, the most interesting part of our Spring MVC controller is the onSubmit method:
protected ModelAndView onSubmit(Object command, BindException errors)
        throws Exception {
    // Take the command object and the office contained in it:
    EmployeesListForm form = (EmployeesListForm) command;
    Office office = form.getOffice();
    // Take a list of employees by office:
    Collection employees = store.getEmployeesByOffice(office);
    // Construct and return the ModelAndView:
    Map model = new HashMap(1);
    model.put("employees", employees);
    return new XTModelAndView(this.getSuccessView(), errors, model);
    // The model map contains the employee list that will be rendered using ajax!
}
The only difference is the use of the XTModelAndView (see org.springmodules.web.servlet.XTModelAndView javadoc), carrying the BindException errors object required by the Ajax framework.
Note
The XTModelAndView object behaves exactly the same as a standard ModelAndView object.
Let‘s go with our Ajax handler!
18.3.4.2.3. Step 3 : Writing the Ajax handler.
Our Ajax handler will extend the org.springmodules.xt.ajax.AbstractAjaxHandler, so it will have a method called after the Ajax event to handle, that will accept an org.springmodules.xt.ajax.AjaxSubmitEvent:
public AjaxResponse listEmployees(AjaxSubmitEvent event)
Let‘s talk about the listEmployees method implementation.
We want to show the employees belonging to the selected office, so we have to retrieve the model map from the event object, and the employee list contained in it:
Map model = event.getModel();
Collection employees = (Collection) model.get("employees");
Then, we have to create the components to render: a list of org.springmodules.xt.ajax.component.TableRow components, containing the employees:
// Create the rows list:
List rows = new LinkedList();
for (IEmployee emp : employees) {
    // Every row is an employee:
    TableRow row = new TableRow(emp, new String[]{"firstname", "surname", "matriculationCode"}, null);
    rows.add(row);
}
Now we have to replace all the rows in the HTML table, so we have to create an org.springmodules.xt.ajax.action.ReplaceContentAction, adding to it the components to render:
ReplaceContentAction action = new ReplaceContentAction("employees", rows);
Note that the ReplaceContentAction updates the HTML element with employees as id.
Finally, we have to create an org.springmodules.xt.ajax.AjaxResponse and return it!
AjaxResponse response = new AjaxResponseImpl();
response.addAction(action);
return response;
That‘s the listEmployees method implementation!
18.3.4.2.4. Step 4 : Mapping the Ajax handler to the web page URL.
Say the web page URL is: www.example.org/xt/ajax/tutorial2.page. Mapping the Ajax handler is simply a matter of configuring the Ajax handler bean (ajaxListEmployeesHandler in the snippet below) in the Spring application context and mapping it in the AjaxInterceptor:
ajaxListEmployeesHandler
18.3.4.3. Working with Ajax validation.
Ajax validation is a common use case, so the XT Ajax Framework provides the org.springmodules.xt.ajax.validation.DefaultValidationHandler for doing Ajax based validation in a very simple way.
In this tutorial we‘ll implement a simple Ajax validation use case, validating an employee matriculation code. We‘ll see how to:
Use the DefaultValidationHandler.
Write the Spring MVC validator.
Write the web page.
18.3.4.3.1. Step 1 : Using the DefaultValidationHandler.
If you want to use the DefaultValidationHandler without any customization, you must simply configure and map it into the Spring application context as you‘d usually do with any other handler:
ajaxValidationHandler
By default, the DefaultValidationHandler displays and highlights error messages in the submitted web page, and redirects to the success page on successful validation.
Note
You can customize how error messages are rendered by providing a custom implementation of the org.springmodules.xt.ajax.validation.ErrorRenderingCallback class, and how successful validation is handled by providing a custom implementation of the org.springmodules.xt.ajax.validation.SuccessRenderingCallback class.
18.3.4.3.2. Step 2 : Writing the Spring MVC validator.
The XT Ajax Framework doesn't require you to change the validator code: it is completely independent.
So here is the validator:
public class EmployeeValidator implements Validator {

    public boolean supports(Class aClass) {
        return IEmployee.class.isAssignableFrom(aClass);
    }

    public void validate(Object object, Errors errors) {
        if (this.supports(object.getClass())) {
            IEmployee emp = (IEmployee) object;
            if (emp.getMatriculationCode() == null || emp.getMatriculationCode().equals("")) {
                errors.rejectValue("matriculationCode", "employee.null.code", "No Matriculation Code!");
            }
        }
    }
}
The employee.null.code is the error code whose message will be rendered in the web page.
18.3.4.3.3. Step 3 : Writing the web page.
First, import the core XT Ajax javascript library, plus the Prototype and Script.aculo.us libraries:

Then, you have to mark the HTML elements where the errors sent by the validator should be shown: this only requires writing HTML elements whose id is the same as the error codes you want to show.
In our sample, we have an employee.null.code error code, and we want to have a div element containing all employee related errors, and another one containing just the employee.null.code error; here is what we have to write:
......

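The two elements might look like this sketch (the employee.* wildcard id is an assumption about the handler's wildcard-matching rules):

```html
<!-- Sketch: the wildcard id form is an assumption -->
<div id="employee.*"><!-- all employee-related error messages --></div>
...
<div id="employee.null.code"><!-- only the employee.null.code error --></div>
```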
The DefaultValidationHandler will use exact and wildcard matching for filling the elements above with proper error messages.
Note
Error messages filled by the DefaultValidationHandler are internationalized.
Finally, you have to simply call the DefaultValidationHandler by firing an Ajax submit event in the following way:
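Firing the validation submit event might look like the following sketch (doAjaxSubmit is an assumed function name from springxt.js; only the validate event name is mandated by the framework):

```jsp
<%-- Sketch: the javascript function name is an assumption --%>
<input type="submit" value="Save"
       onclick="doAjaxSubmit('validate', this); return false;"/>
```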
Note
validate is the mandatory event name associated with the DefaultValidationHandler.