
Abhishek Tejpaul

http://www.IntelliGrape.com

An open-source and Agile enthusiast.

Posts by Abhishek Tejpaul:

Iterate through two distinct dates – Groovy 2.2

Hi everyone,

Ever wished for a ‘groovier’ way to iterate through two distinct dates? Well, Groovy 2.2 gives you two methods via the ‘DateGroovyMethods’ class to do exactly that: upto() and downto().

Here is the code showing the usage:

Date startDate = new Date() - 7
Date endDate = new Date()

startDate.upto(endDate) { it ->
    // 'it' is a Date object here
    // Some business logic involving dates - e.g. build up the total amount for the last 7 days' transactions
}

endDate.downto(startDate) { it ->
    // this iterates through the same dates, but in reverse order
    // Some business logic involving dates
}

These upto() and downto() methods work on Calendar instances as well.
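
For example, the same pattern with Calendar instances might look like this (a minimal sketch):

Calendar startCal = Calendar.instance
startCal.add(Calendar.DAY_OF_MONTH, -7)   // 7 days ago
Calendar endCal = Calendar.instance       // now

startCal.upto(endCal) { it ->
    // 'it' represents each day between startCal and endCal
}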

Please note that these are just convenience methods added in Groovy 2.2. Prior to the 2.2 release, we could achieve the same behavior via Groovy ranges, i.e.

Date startDate = new Date() - 7
Date endDate = new Date()

(startDate..endDate).each { it ->
    // Some business logic involving dates
}

Day One at Groovy & Grails eXchange 2013

Day one at GGX 2013 London was really awesome. The atmosphere, the sessions and the speakers made the day wonderful. We could feel the buzz and energy of the thriving and closely knit Groovy/Grails community all day long. The show was very well managed by Skills Matter and nicely driven by Peter Ledbrook.

Here are some of the highlights of the day:

The Groovy Update – by Guillaume Laforge:

In this session, Guillaume talked about some of the advanced features of the Groovy 2.2 release, such as implicit closure coercion, AST transformations (such as @Memoized), and a few GDK and Groovy shell enhancements. He then talked about the new features to be shipped with the Groovy 2.3 release, foremost being 'traits', which allow interface-like types to carry method implementations. The presentation can be downloaded from here.

DEVQA – by Alvaro Sanchez:
This session addressed the 'eternal ambiguity' over who should write the test code (especially functional tests) on large projects: developers or QA engineers. The speaker talked about the process he follows in his project so that developers and QA engineers can each have their own set of testing tools and live happily together.

Modern Groovy Enterprise Stack – by Marcin Erdmann:
In this session, the presenter talked about the new technologies that he has used in his current Groovy/Grails project. He talked about using Spring Integration, RabbitMQ, Dropwizard (for RESTful web services), Gradle, Spock, Jenkins and Grails to create, test and deploy a scalable and modular message-driven architecture.

NoSQL with Grails – by Joseph Nusairat:
The talk revolved around the emergence and embracing of NoSQL datastores in technology stacks built to handle humongous (and highly unstructured) amounts of data. The speaker talked about some of the cool features of MongoDB (the one that I liked the most is GridFS) and how you can integrate it with your Grails app.

Polyglot Programming in Grails 2 – by Jeff Brown:
An excellent session by Jeff Brown detailing the capability of the Grails framework to accommodate code written in any of the JVM scripting languages. In this talk, he demonstrated the execution of Clojure code inside a Grails app via the Grails Clojure plugin that he wrote himself. Isn't it interesting? :)

Making Java APIs Groovy – by Cédric Champeau:
In this session, Cédric Champeau talked about ways to add Groovy sugar to existing Java APIs. He walked the audience step by step through a DSL-building technique, where each step improved on the previous one. The session covered the usage of Groovy categories, extension modules, etc.

Architectural Flexibility With Groovy – by David A. Dawson:
In this session, David A. Dawson talked about the factors that can influence the architecture of an application, and how Groovy can give much-needed flexibility to applications. The speaker emphasized the importance of understanding the problem "context" in order to be decisive about choosing an approach. The session discussed the use of APIs, events, REST and user DSLs when tackling architectural problems.

Ratpack: A Toolkit For JVM Web Applications – by Luke Daley:
In this session, Luke Daley showed us how to create a simple application using Ratpack, a light-weight, high-performance framework (built in Java). Luke made use of Guice and Gradle to create a basic web application. The sample application covered request/response handling, GET/POST, request params, using (injecting) services, rendering views/JSON and, finally, testing.

Message Driven Architecture in Grails – by Dan Woods:
In this session, the speaker discussed how a message-driven architecture can improve the modularity, reusability, scalability and maintainability of an application. Dan used a hotel search-and-booking use case as an example, employed a message-driven pattern based on Spring Integration, and showed by example the usefulness of the pattern.

Most of the session podcasts can be viewed via the 'more…' links on the Skills Matter website here.

Looking forward to a new day of explorations and learning at GGX’13 tomorrow.


Grails GSP tag: grep

The other day I was reading the Grails docs and came across a useful GSP tag: grep. I have been using Grails for over 3 years now but only recently got to see this tag, which has eased my life a bit in situations where a list of objects has to be filtered and iterated at the same time in order to perform some operations on the elements. The GSP 'grep' tag (equivalent to Groovy's 'grep' method) filters a list on a given condition and iterates over the filtered list, allowing you to operate on each matching object (or do whatever else you want with the filtered objects).

I will give you a small example: Suppose we have a list of Person objects. You want to display the names starting with A, B, C and so on in different color schemes. How would you do it?

One implementation is to send different lists from the controller (this would mean sending 26 lists, i.e. one list for each character in the English alphabet) and render them in different color schemes. The other way would be to iterate over the persons list in the GSP and have a different <g:if>…<g:elseif> branch for each character.

There is another, smarter way: you could use the 'findAll' method of the list in the GSP <g:each> tag's 'in' attribute, e.g. <g:each in="${personList.findAll { it.name.startsWith('A') }}" var="person"> … </g:each>.

Similarly, we can use the 'grep' tag here. We can filter the list like this: <g:grep in="${personList.name}" filter="~/^A.*/"> … </g:grep>.

Notice the use of the 'name' field of the Person object in the 'in' attribute of the grep tag. Also, the 'filter' attribute takes many forms of filter options; here we are using a regex to pick out the persons whose names start with 'A'.
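
Putting it together, a minimal sketch of how the tag could sit in a GSP page (the surrounding markup and CSS class are just for illustration; 'it' is the current matching element, as with Groovy's grep method):

<ul class="names-starting-with-a">
  <g:grep in="${personList.name}" filter="~/^A.*/">
    <li>${it}</li>
  </g:grep>
</ul>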

I am not saying that this tag provides some new functionality that we could not have achieved by other means, but it does make it easier to filter lists on specific criteria.

You can read more about this tag here. Also you can read about the Groovy GDK’s grep method usage here to see how and in what scenarios you can use this tag.

Hope this helps !!

- Abhishek Tejpaul
abhishek@intelligrape.com


Sending email through command line with Mutt

In one of the shell scripts I was reading, I saw a usage of mutt, a text-based, command-line email client. Mutt can be used to send emails from production machines or any other servers where you do not have access to a browser or a UI-based email client. Sometimes it is essential to send a particular log file to certain people for bug fixing while you are investigating on the prod machine.

Here is how you can send emails from Linux (at least on most Debian-based distributions such as Ubuntu). First, install mutt:

  sudo apt-get install mutt

Now you can type in the following command:

 mutt -s <emailSubject> <recipientEmail> -c <ccRecipientEmail> -b <bccRecipientEmail> -a </path/to/the/attachmentFile> < </path/to/the/fileContainingTheMessage>

We can see an example here:

 mutt -s "Demo Subject Line" xyz@xyz.com -c abc@xyz.com -b blind-recipient@xyz.com -a /tmp/meetingDetails.xls < /tmp/emailBody.txt

As you can see, the -s flag is used to specify the subject line of the email, and the optional flags -c and -b are used to specify the "cc" and "bcc" recipients respectively. The -a flag is used to specify the location of the attachment file. There is one caveat regarding the -a flag: it must be placed at the end of the command-line options. You can read the mutt documentation for more information on this.
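
For instance, a minimal shell-script sketch that mails an application log (all paths and addresses here are hypothetical) could look like this:

#!/bin/bash
# mail-log.sh - send the latest application log to the dev team (illustrative only)
LOG_FILE="/var/log/myapp/myapp.log"
RECIPIENT="devteam@example.com"

echo "Please find the latest application log attached." \
  | mutt -s "Application log from $(hostname)" "$RECIPIENT" -a "$LOG_FILE"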

This utility is a nice tool to know if you are a developer working on Linux, and you can use it in shell scripts whenever there is an apt use-case. You can read more about mutt and the other cool things it can do by clicking here or by typing 'man mutt' in your terminal.

Hope this helps !!

- Abhishek Tejpaul
abhishek@intelligrape.com


Using git diff feature on Github

Hi Folks,

Recently I came across a cool way to compare the differences between two branches, two tags, or two commits on GitHub. Many times in our projects we have to see exactly what a specific change to the code base was before we push it to our production branch.

Here is how you can view the differences in commits:

On GitHub, go to the Source view of your project. You will see a link named 'Branch List'. Once the page opens, you can see a list of all the remote branches. Hit the Compare button in front of any of the available branches to see the difference between two branches.

Now note the URL in the address bar. It should end with something like '…/compare/<x>…<y>', where x and y are separated by three dots (…) and their values could be the project's branch names.

Isn’t it good?

Well, Git's (read about git diff) and GitHub's goodness does not stop here. Instead of branch names as the values of x and y, you can also put two different commit hashes or tag names to view the differences in the code base. What's more, the commit hashes do not have to belong to the same branch, so you can pretty much compare your code's current snapshot with any of its past snapshots, whether referenced by a branch, a tag or a commit hash.
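
The same kind of comparison is also available locally through git diff itself; a quick sketch (the branch names, tags and hashes below are just placeholders):

# compare two branches (the three-dot form diffs against their common ancestor,
# which is what GitHub's compare view shows)
git diff master...my-feature-branch

# two tags or two commit hashes work the same way
git diff v1.0...v2.0
git diff 4f1c2ab...9d3e7fc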

Hope this helps someone.

Abhishek Tejpaul
abhishek@intelligrape.com
[IntelliGrape Software Pvt. Ltd.]


Fetching an old deleted file of a project using Git

Recently, in one of my projects, I had to bring the content of an old, forgotten, deleted file back into the application code base using Git. The problem took a 'fancy' turn because the path to the file was no longer a valid path in Git: there had been major refactorings in our project, especially renaming of packages and the folder structure, since the file in question was deleted.

This is how I could see the deleted file (and its content) using Git commands:

1. First of all, we will try to get the revision number of the git commit which committed our deleted file. Type

   git log

on your console at the location where you have cloned/forked your Git project. This command will show all the commits made since the start of the project.

If you know roughly when the deleted file was committed, you can use another flavor of the git log command, i.e.

   git log --before="Oct 01 2010"
   git log --after="Oct 01 2010"

If you have an idea of who could have committed the deleted file, you can also use another option of git log, i.e.

   git log --author="Author Name"
   git log --author=authorId

The commands above are self-explanatory.

NOTE: The step above is helpful only if your team has followed good practices such as writing appropriate, self-explanatory commit messages. If the message of the commit that deleted the file mentions the deletion in some way, it becomes really easy to find the probable revision number of that old commit by reading the logs.

2. Now once we have the revision number, we can try using the

git show 

command to make sure that the deleted file was actually present in the commit. Type the following:

git show --name-only <revisionNumber> 

Once you see your sought-after file there, you can view its contents by typing

  git show <revisionNumber> 

This command will show you the change-set, including the content of the deleted file. Now you can copy the code out of the deleted file and place it in your new code, or create an entirely new file from it.
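
If you prefer, you can also dump the old content straight into a new file instead of copying it by hand; a small sketch (the revision number and path are placeholders, and the path must be the one the file had at that revision):

git show <revisionNumber>:old/path/to/DeletedFile.groovy > DeletedFile.groovy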

Hope this helps !!!

Reference: http://cheat.errtheblog.com/s/git

– Abhishek Tejpaul
abhishek@intelligrape.com
[IntelliGrape Software Pvt. Ltd.]


Passing parameters to sub-reports in Jasper

In one of our recent projects, we made quite good use of the Grails Jasper plugin (v0.9.7) to generate reports in the form of PDFs and Excel sheets. The reports contain sub-reports which, in many cases, contain sub-reports again. The way to pass parameters to sub-reports at deeper levels is to use the following tag in the .jrxml file: <subreportParameter>

In the Grails Jasper plugin, the Jasper controller searches for "SUBREPORT_DIR" in the params when the request is forwarded to it. If it does not find any key named "SUBREPORT_DIR" in the params map, it initializes the SUBREPORT_DIR variable with the value of "jasperFilePath", i.e. the location where the Jasper .jasper and .jrxml files can be found. Now suppose you want to give a separate file location for your sub-reports; you can easily do this using the following code in the .jrxml file inside the <subreport> tag:


<subreport>
    ...
    <subreportParameter name="SUBREPORT_DIR">
        <subreportParameterExpression><![CDATA[$P{SUBREPORT_DIR}]]></subreportParameterExpression>
    </subreportParameter>
    ...
</subreport>

Note that the value of the $P{SUBREPORT_DIR} parameter can be passed as a parameter from the controller to the master report, but afterwards you can make use of the <subreportParameter> tag to pass the information on to the subsequent sub-reports, to any level.

For more information on the Jasper .jrxml tags, please visit the following link: http://jasperforge.org/uploads/publish/jasperreportswebsite/trunk/schema.reference.html

Hope this helps !!

- Abhishek Tejpaul
abhishek@intelligrape.com
[IntelliGrape Software Pvt. Ltd]


Grails Transactions using @Transactional annotations and Propagation.REQUIRES_NEW

Hi All,

Here is how you can start a new transaction inside an already executing transaction in Grails, which uses nothing but the Spring framework's transaction mechanism as its underlying implementation. Spring provides the @Transactional annotation for declarative transactions, and we can use the same in our Grails project to achieve this transactional behavior.

Here is the scenario: you have two domain classes named SecuredAccount and AccountCreationAttempt. You try to transactionally save the SecuredAccount object, which in turn creates an AccountCreationAttempt object that writes a record to the database stating: "There is an attempt to create a new SecuredAccount at this time: <current date and time>". The point to note here is that even if the creation of the new SecuredAccount object fails, the record must still be written to the database so that the administrator can validate whether the attempt at that specific time was by a legitimate user or an attacker.

Here is the code:

import org.springframework.transaction.annotation.*

class MyService {

    static transactional = false

    def anotherService

    @Transactional
    def createSecuredAccount() {
        def securedAccount = new SecuredAccount(userId: "John")
        securedAccount.save(flush: true)
        anotherService.createAccountCreationAttempt()
        // force a rollback of this (outer) transaction
        throw new RuntimeException("Error thrown in createSecuredAccount()")
    }
}

import org.springframework.transaction.annotation.*

class AnotherService {

    static transactional = false

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    def createAccountCreationAttempt() {
        def accountCreationAttempt = new AccountCreationAttempt(
                logRemarks: "There is an attempt to create a new SecuredAccount at this time: ${new Date()}")
        accountCreationAttempt.save(flush: true)
    }
}

Now, in this scenario, the AccountCreationAttempt object always gets persisted whether or not the transaction creating the SecuredAccount object fails.

Here are a few gotchas regarding the above transactions:

1.) First of all, for Propagation.REQUIRES_NEW to work as intended, the annotated method has to live on a different object, i.e. a separate service in our example. If we had put the createAccountCreationAttempt() method in MyService, no new transaction would be spawned and no log entry would be made. This is due to Spring's proxy-based implementation of transactions, and you can read more about it here:

http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/transaction.html#transaction-declarative-annotations.

Please pay special attention to the "NOTE" sub-section. This is what it states:

“In proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation, in effect, a method within the target object calling another method of the target object, will not lead to an actual transaction at runtime even if the invoked method is marked with @Transactional.”
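
To make the note concrete, here is a minimal sketch of the self-invocation case that would not work (same method names as above, kept in a single service purely for illustration):

import org.springframework.transaction.annotation.*

class MyService {

    static transactional = false

    @Transactional
    def createSecuredAccount() {
        // ... save the SecuredAccount ...
        // Self-invocation: this call does NOT go through the Spring proxy,
        // so REQUIRES_NEW is ignored and no separate transaction is started.
        createAccountCreationAttempt()
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    def createAccountCreationAttempt() {
        // ... save the AccountCreationAttempt ...
    }
}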

2.) Secondly, all the @Transactional methods should have public visibility, i.e. the createSecuredAccount() and createAccountCreationAttempt() methods should be public, not private or protected. This again is down to Spring's @Transactional implementation, and you can read about it at the same link provided above. Note the right sidebar titled "Method visibility and @Transactional".

Well, once you keep these gotchas in mind, I guess you are all set to make good use of the @Transactional annotation and its full power.

Cheers !!!

- Abhishek Tejpaul
abhishek@intelligrape.com
[IntelliGrape Software Pvt. Ltd.]


Tomcat 6 in-memory session replication

Hi All,

Here are a few basic steps that you need to follow in order to achieve in-memory session replication between two or more Tomcat 6 instances.

This blog refers to the Apache Tomcat documentation found here: http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html.
The Tomcat documentation provides a more detailed explanation of the clustering concepts as well as the definitions of the tags, attributes, etc. used in the server.xml file.

Step 1: Include the <distributable/> tag in the web.xml file, i.e. you can simply write the following line in your deployment descriptor (web.xml):

 <distributable />

Please read the following link to know more about this tag: http://wiki.metawerx.net/wiki/Web.xml.Distributable

Step 2: Add the following lines of XML in the server.xml file inside the <Engine> element/tag:

<Engine name="Catalina" defaultHost="localhost">
  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
           channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
      <Membership className="org.apache.catalina.tribes.membership.McastService"
                  address="228.0.0.4" port="45564" frequency="500" dropTime="3000"/>
      <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                address="auto" port="4000" autoBind="100" selectorTimeout="5000" maxThreads="6"/>
      <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
        <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
      </Sender>
      <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
      <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
              tempDir="/tmp/war-temp/" deployDir="/tmp/war-deploy/" watchDir="/tmp/war-listen/"
              watchEnabled="false"/>
    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
  </Cluster>
  ...
</Engine>

Please note that there may be some other elements such as <Host>, <Realm>, etc. inside the <Engine> element. Also, for each Tomcat instance, the value defined for the 'port' attribute of the <Receiver> tag must be unique.
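
For instance, a second Tomcat instance on the same machine could use a <Receiver> entry like the following (the port value is only illustrative):

<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          address="auto" port="4001" autoBind="100" selectorTimeout="5000" maxThreads="6"/>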

And that's all. You now have basic session replication in place: your sessions are replicated amongst all the Tomcat instances that are part of your cluster.

NOTE: If you make any changes to the "context.xml" file, make sure you delete the generated XML file located at <TOMCAT_HOME>/conf/Catalina/localhost/{yourAppName}.xml. If you don't delete this file, your changes will be ignored and the settings defined in that file will take effect instead.

Cheers!!!

Abhishek Tejpaul
abhishek@intelligrape.com
[Intelligrape Software Pvt. Ltd.]


Tomcat 6 Session Persistence through JDBCStore

In one of our recent projects, we needed to save the HTTP session in the database.
This blog refers to the Apache documentation found here: http://tomcat.apache.org/tomcat-6.0-doc/config/manager.html

These are the steps that need to be followed:

Step 1: Create a database named tomcat (as shown in our example in Step 3 below) or any other name as specified in the ‘connectionURL’ attribute of the <Store> element.

Step 2: Create the following table in the newly created database:

create table sessions (
session_id     varchar(100) not null primary key,
valid_session  char(1) not null,
max_inactive   int not null,
last_access    bigint not null,
app_name       varchar(255),
session_data   mediumblob,
KEY kapp_name(app_name)
);

Step 3: Copy the context.xml file available at the global level at this location: <TOMCAT_HOME>/conf to your application’s META-INF folder and replace the <Manager> element with the following:

<Manager className="org.apache.catalina.session.PersistentManager"
         saveOnRestart="false" minIdleSwap="0" maxIdleSwap="0" maxIdleBackup="1">
  <Store className="org.apache.catalina.session.JDBCStore"
         driverName="com.mysql.jdbc.Driver"
         connectionURL="jdbc:mysql://localhost/tomcat?user=username&amp;password=password"
         sessionTable="sessions"
         sessionIdCol="session_id"
         sessionDataCol="session_data"
         sessionValidCol="valid_session"
         sessionMaxInactiveCol="max_inactive"
         sessionLastAccessedCol="last_access"
         sessionAppCol="app_name"/>
</Manager>

If these settings are placed in <TOMCAT_HOME>/conf/context.xml, they will have a global effect on all the applications running on the server. For an application-specific setting, you can place a newly created context.xml file in your application's META-INF sub-folder inside the webapps folder.
Please note that the attributes used in the <Store> element refer to the table columns created in Step 2.

Step 4: Set the system property 'org.apache.catalina.session.StandardSession.ACTIVITY_CHECK' to 'true'.
For further information on this, read the 'Persistent Manager Implementation' section at this link: http://tomcat.apache.org/tomcat-6.0-doc/config/manager.html
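
One way to set this property (assuming a standard Tomcat installation where a setenv script is picked up by catalina.sh) is to pass it as a JVM system property, for example:

# <TOMCAT_HOME>/bin/setenv.sh
CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.catalina.session.StandardSession.ACTIVITY_CHECK=true"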

Step 5: Make sure you have placed the MySQL JDBC driver jar in the <TOMCAT_HOME>/lib folder.

Now you can hit the application's URL and check the database table to see the newly persisted session. Please note that it can take around 60 seconds for the stored session to appear in the database, so you might have to wait a bit. You can have multiple Tomcat instances running your application and pointing to the same database, so the persisted session can be shared in case any of the instances crashes.

Cheers!!!

Abhishek & Imran

abhishek@intelligrape.com | imran@intelligrape.com

[Intelligrape Software Pvt. Ltd.]

