On a quest of reducing Jenkins build time – Part 2

Previous – On a quest of reducing Jenkins build time – Part 1.

With all our efforts we had managed to get our build pipeline time down to under 12 minutes. Our quest still continues…

pipe line 1

We were using CVS, and our build job was taking a couple of minutes just to determine the change log. The next target was to check whether migrating from CVS to SVN would reduce our build times further.

Matthew Meyers (our infrastructure guru at IDeaS) had set up a cool Jenkins infrastructure in the IDeaS Data Center for our next-generation product. During a conversation, he suggested an experiment: migrate the local Jenkins infrastructure to the data center, and migrate from CVS to SVN, to see whether it would further reduce our build times.

Matthew created a dedicated Jenkins Slave for this effort, migrated CVS to SVN, and set up this parallel experimental infrastructure. After some initial hiccups, we got all the tests running fine and were delighted to see the total build pipeline time drop below 8 minutes. The SVN migration, bigger and better machines, and the SAN infrastructure had all helped reduce the build timings.

We finalized a date to cut over to the new infrastructure, and the cutover went through fine too. Now we have a nice consolidated Jenkins infrastructure in our data center.

This migration brought its own share of new learnings…


Matthew introduced me to robocopy. Robocopy is so cool. Earlier we were using the simple copy command to copy the workspace, database, and files in general. Robocopy can mirror the files in a source folder to a destination folder, and the cool thing is that it automatically skips copying files that have not been modified. This feature saves a lot of time during file operations.

This ability let us add two more test jobs to our build pipeline – 1) REST Tests and 2) Business Driven Tests – which we could not do earlier: both jobs required deploying the application and a large pre-populated database to carry out the tests, and preparing those used to be a time-consuming activity.

Could not reserve enough space for object heap

As we added the two new test jobs running in parallel, jobs started failing with the error “Could not reserve enough space for object heap”. Until then I had only faced this issue when exceeding the -Xmx (maximum Java heap size) limit for a 32-bit process. In this case it was a 64-bit process, we had 32 GB of RAM available, and while the jobs were running I could see ample memory still free on the box.

After spending a couple of hours googling and observing Task Manager while the jobs were running, I learnt something new: Task Manager has a section called “Committed Memory”.

Task Manager

I observed that although the machine was showing a lot of free memory, I was getting the memory error whenever the committed memory crossed the physical memory threshold of 32 GB. You can see the actual memory consumed and the committed memory per process in Task Manager.

Committed memory

Using Task Manager, I found that the Java processes and some MySQL processes were each consuming 2+ GB of committed memory. After reducing the value of -Xmx, the committed memory of the Java processes went down.

After reducing the MySQL variable key_buffer_size, the committed memory of the MySQL processes went down too.
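Both fixes amount to capping how much memory each process commits up front. The exact values depend on the workload; illustratively, each JVM gets a smaller heap cap (e.g. `java -Xmx1g …` rather than a larger one), and MySQL's key buffer is shrunk in the server config:

```ini
# my.ini / my.cnf -- illustrative value, not our exact setting
[mysqld]
# MyISAM index buffer; mysqld commits this memory up front
key_buffer_size = 256M
```

With smaller per-process commitments, more JVMs and mysqld instances fit under the physical-memory ceiling at the same time.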

Finally, with the committed memory brought down, all the jobs ran in parallel without any issue.

Now, even after adding two more test jobs, our build pipeline time is down to 10 minutes.





Gamifying Agile Adoption – An Experiment – Part 2

Gamifying Agile Adoption – An Experiment – Part 1

Gamification has now been in place at IDeaS for over two and a half months, and the data is looking better.

Iteration Burn Down

The game was introduced on the second day of Iteration 8, and the iteration burn-down data has looked better since then. Stories were planned well and started getting accepted earlier in the iteration. The data for Iterations 11 and 12 does not look as good because of holidays and the 5th ShipIt Day, which was scheduled on the first two days of Iteration 12.

Iteration Burndown

Velocity Chart

The velocity chart also shows better data starting from Iteration 8. In-iteration story acceptance has increased. Due to holidays, the velocity data for Iterations 11 and 12 is not up to the mark.



Behavioral Changes

We saw some interesting behavioral changes:

  • The team members started following processes well.
  • There was more interaction and collaboration between SD, QA, Product Owners and Product Managers.
  • The negative points started bothering the team members and they became a bit risk averse.
  • Some members started pushing product owners and product managers for early story acceptance to get additional bonus stars at the cost of quality.

Enhancements to the game

  • Negative stars no longer impact the total stars already credited; instead, negative stars give you Devil Badges.
  • What was earlier called “Badges” is now called “Levels”.
  • Introduced “Badges” –
    • Badges give specific feedback on the contributions done in a specific area.
    • Different departments can have different badges.
      • Badges have three levels
        • Bronze
        • Silver
        • Gold
      • Different badges can have different star count to reach a level within the badge.
    • There are Angel and Devil badges.
      • The goal is to earn Angel badges.
      • Devil badges represent activities that one should not be performing.
    • new-badge-list
  • Leader Board now has a left panel that shows Badge Leader board.
    • new-leader-board
  • Self User Profile
    • Clicking on the user name in the top right corner takes the user to the User Profile page. The user profile page also has two sections:
      • Left Panel shows
        • User section
          • Total number of stars earned so far.
          • All the badges earned by the user. The color represents the badge level.
          • Appreciate someone by clicking on the thumbs-up icon.
          • Create a new mission by clicking on the bull's-eye icon.
        • Department Section
          • Lists all the users in the department in descending order of the number of stars received.
          • Appreciate an individual using the thumbs-up icon.
      • Right Panel
        • Accordion shows all the badges that the user has received.
        • Progress bar shows the number of stars received for a given badge and the number of stars needed to reach the next level of the badge.
        • Photos
          • The middle photo is of the self user.
          • The left hand side photo represents the user who is just ahead of the self user.
          • The right hand side photo represents the user who is just behind the self user.
    • new-user-profile
  • User Details
    • Click on any image on any screen and you land on the User Details screen, which shows all the badges earned by that user.
    • Hover over a badge to see why the user received stars for it.
    • new-user-details

IDeaS Rock Stars is now open source!

You can now download the IDeaS Rock Stars application WAR and use it in your organization. The source code and configuration help are available on GitHub.

Download Bots



Gamifying Agile Adoption – An Experiment

While chatting with Naresh Jain, he suggested I watch the TED Talk “Gaming can make a better world” by Jane McGonigal. I found the title very weird and wondered how that could be possible. After watching the talk, though, I was amazed. I started wondering whether I could use gamification techniques in Agile adoption, in our products, in performance management systems, and in employee engagement programs.

Dhaval Dalal introduced me to Prof. Kevin Werbach’s definition of Gamification – “The use of game elements and game design techniques in non-game contexts.”

For our 4th ShipIt Day, organized on 25th/26th Sept 2014 at IDeaS, I decided to explore the idea of using game elements and game design techniques in the context of Agile adoption. The idea was to create a gaming system that would automatically collect data – i.e. without explicit user intervention – from multiple sources like Jenkins and Rally, as well as manually from individuals, and offer Stars for positive behavior and deduct Stars otherwise.

The aim was to help the team get continuous visual feedback on how they are doing, adopt agile practices, visualize a sense of accountability and a sense of achievement, drive positive behavior, create healthy competition, create a culture of appreciation, help performance tracking, and create transparency.

We are moving from the traditional waterfall model of development to the Agile way of development. We use Jenkins for continuous integration and Rally to track our stories.

Jenkins already had some gamification available via The Continuous Integration Game plugin. Below are the rules of the Jenkins CI game:

  • -10 points for breaking a build
  • 0 points for breaking a build that was already broken
  • +1 point for a build with no failures (unstable builds give no points)
  • -1 point for each new test failure
  • +1 point for each new test that passes
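As a rough illustration, the rules above amount to a small scoring function like this – a simplified model of the rules, not the plugin's actual code (it treats "build OK" as a stable build with no failures):

```python
def ci_game_points(build_ok, previous_ok, new_failures, new_passes):
    """Score one build per the CI-game rules above (simplified sketch)."""
    points = 0
    if build_ok:
        points += 1        # +1 for a build with no failures
    elif previous_ok:
        points -= 10       # -10 for breaking a green build
    # breaking an already-broken build adds nothing (0 points)
    points += new_passes - new_failures  # +1 / -1 per new passing/failing test
    return points

print(ci_game_points(True, True, 0, 2))    # clean build + 2 new tests -> 3
print(ci_game_points(False, True, 1, 0))   # broke the build, 1 new failure -> -11
print(ci_game_points(False, False, 0, 0))  # build was already broken -> 0
```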

In Rally, we do story mapping. We follow certain processes, such as:

  • Every story in the iteration should have story level planned estimates.
  • Every story in the iteration should have tasks.
  • One should update the Actual and TODO for the Rally task they are working on every day.
  • The stories should be completed and accepted within the iteration.
  • Story spill overs into next iteration should be low.

It was not possible to work on this idea alone, and I am glad and thankful that I got support from Umesh Kale, who helped with the UX; Vishal Gokhale, who helped with the Jenkins plugin; Paresh Dahiwal, who helped with the Rally plugin; and Sameer Shah, who helped with QA.

I am glad to present the game “IDeaS Rock Stars”.

The web app has two sections:

  • A departmental leader board – that shows
    1. Star of the Day (top three individuals who earned the most stars in a day)
    2. Star of the Week
    3. Star of the Month
    4. Most Appreciated (top three individuals who have earned the most stars so far)
    5. Most Appreciative (individuals who appreciate others’ contributions; encourages individuals to recognize others’ contributions)
    6. Open Missions (open missions/quests are visible across departments; quests help create a culture of taking initiative and being proactive, and help in earning more Stars; encourages cross-department collaboration in addressing issues)

Leader Board

  • Self-Board – that shows
    1. Stars that I have got (show off your stars with pride)
    2. My Department (where do I stand with respect to others in my department? Instant feedback on how hard I have to work to earn stars)
    3. Who is getting Stars? (why are others getting stars? How can I earn more stars?)
    4. Why my stars are getting added or deducted (instant and specific feedback; encourages positive behavior)

My Screen

  • Self-Board – Appreciate Someone
    • Encourages the culture of appreciation.
    • Individuals can give +1 star per appreciation.
    • Managers can give or deduct more than 1 star.

Appreciate Someone

  • Self-Board – Create Mission
    • Managers can create additional missions.
    • Missions/quests are a nice way of earning more Stars.
    • Missions/quests are visible across departments. Another department may be in a better position to solve your problems! Encourages cross-department collaboration.

Create New Mission

  • Badges
    • Show off your badges as you earn Stars.


Jenkins Plugin – an extension of The Continuous Integration Game plugin that sends stars to the IDeaS Rock Stars game. It also sends stars for downstream builds.

Rally Plugin – a custom standalone Java program that scans Rally and sends stars to the IDeaS Rock Stars game based on the rules we have defined.

Based on our team’s values and processes, the table below shows how to earn Stars. Other teams/departments can choose their own structure for earning Stars.

| Reason | Stars | Behavior encouraged |
|---|---|---|
| Breaking a build that was already broken. | 0 | Discourages checking in code when the build is already broken; encourages fixing a broken build. |
| A build with no failures (unstable builds give no stars). | +1 | Encourages developers to produce good builds and to check in more frequently to collect more stars; more frequent check-ins lead to faster code integration. |
| Each new test that passes. | +1 | Encourages adding new test scenarios with every check-in. More quality automated test scenarios mean less effort on manual testing and quicker feedback. |
| Updating Actual and TODO in Rally every day. | +1 | Encourages following the standard process, which helps in getting correct velocity. |
| Setting planned estimates at story/defect level (stars go to story owners and story task owners). | +1 | Encourages following the standard process, which helps in getting correct velocity/burn-down. |
| Completing the story within the iteration (stars go to story owners and story task owners). | +10 | Encourages teamwork; encourages developers/QA/product owners to slice the story better and finish it in one iteration. |
| Getting the story accepted in the same iteration (stars go to story owners and story task owners). | +10 | Encourages teamwork and slicing stories so they are finished and accepted within the iteration. |
| Giving internal training or presentations; knowledge sharing (claim these stars from your manager). | +30 | Encourages and enhances presentation skills and knowledge sharing; encourages continuous dialogue with managers. |
| Participating and presenting your idea at ShipIt (claim from your manager). | +50 | Encourages innovation, initiative, and proactiveness. |
| Writing technical blogs or articles, answering Stack Overflow questions, etc. (claim from your manager). | +50 | Encourages knowledge sharing, community contribution, and expression of thoughts. |
| Open source contribution (claim from your manager). | +75 | Encourages community contribution. |
| Becoming the ShipIt Innovator (claim from your manager). | +100 | Encourages innovation, initiative, and proactiveness. |
| Giving presentations at technology conferences (claim from your manager). | +200 | Encourages developers/QA to participate in conferences; knowledge sharing; branding. |
| Wild card – other tangible/intangible value addition (claim from your manager; discretionary, based on perceived value addition). | * | Encourages going above and beyond your responsibilities; encourages continuous dialogue with managers. |
| Each new test failure. | -1 | Discourages bugs; encourages fixing flaky tests. |
| Story/defect without planned estimates (per day; stars removed from story owners and task owners; associate leads and above get -2 per such story). | -2 | Encourages dialogue between leads and team members. |
| Not tasking a story (per day; stars removed from story owners; associate leads and above get -2 per such story). | -5 | Encourages planning. |
| Not updating Actual and TODO in Rally every day. | -5 | Encourages developers/QA to follow the process; historical actuals help determine how much effort went into a particular module/feature/release. |
| Breaking a build. | -10 | Encourages building on the local machine before check-in and fixing flaky tests; encourages team members to respect each other’s valuable time. |
| Story spill-over (stars removed from story owners and task owners; associate leads and above get -5 per spilled story). | -20 | Encourages planning, story mapping, slicing, completing the story in the given iteration, and quicker feedback. |
| Leaving stories in prior iterations without getting them accepted (stars removed from story task owners). | -20 | Encourages dialogue between product owners, managers, and members, and making sure the work is complete. |
| Production bug (stars deducted from everyone involved in the story – product owners, developers, QA – at the manager’s discretion, as not every bug has the same impact). | -200 | Encourages code quality. |

A simple way of gaining big stars: if I work on a story for a complete iteration (8 working days), check in code 3 times a day, add two new test scenarios per check-in, and make sure the story is completed and accepted within the iteration, I earn 100 stars. Better-sliced stories and following agile practices give me an even better reward!

| Reason | Stars per check-in | Check-ins per day | Total |
|---|---|---|---|
| Weekly Stars | | | |
| Completing the story | | | 10 |
| Getting the story accepted in the same iteration | | | 10 |
| Total weekly Stars | | | 20 |
| Daily Stars | | | |
| Giving a stable build | 1 | 3 | 3 |
| Adding two new test scenarios per check-in | 2 | 3 | 6 |
| Updating Rally | | | 1 |
| Total daily Stars | | | 10 |
| Total daily Stars for 8 days, i.e. 10 × 8 | | | 80 |
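The arithmetic can be checked in a couple of lines (values taken from the table above):

```python
stable_build = 1 * 3        # 1 star per clean build, 3 check-ins a day
new_tests    = 2 * 3        # 2 new test scenarios per check-in, +1 star each
rally_update = 1            # daily Actual/TODO update in Rally
daily_stars  = stable_build + new_tests + rally_update   # 10 per day

weekly_stars = 10 + 10      # story completed + accepted within the iteration
total = daily_stars * 8 + weekly_stars                   # 8 working days
print(total)  # 100
```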

Once you are done with your stories, you can earn additional stars by helping other team members complete their stories within the iteration.

The game has been running for the last two weeks and my team has gladly agreed to run a trial period of about a month to see how it works.

The next step is to put up an LCD to show off the departmental leader board.

I have seen a lot of excitement around the game… Only time will tell how it works… Statistics to follow…

Gamifying Agile Adoption – An Experiment – Part 2

The problem with duplication

Prev – On a quest of reducing Jenkins build time.

While digging into the issue, I found that most of the test jobs first deleted their workspace and then copied the base job’s workspace – roughly 1.8 GB – into their own workspace before running tests. This activity took roughly 10 minutes. The time was easily reduced using the mklink utility, which allows creating directory links on Windows. With the workspaces of the rest of the jobs linked to the workspace of the head job, there was no need to delete or copy workspaces at all. Girish, my colleague, later highlighted that I could also have used the “Use custom workspace” option available under “Advanced Project Options” in Jenkins. lol!
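On Windows the link is created with `mklink /D <link> <target>`; the cross-platform equivalent is `os.symlink`. A minimal sketch with hypothetical paths (the job names here are illustrative, not our actual job names):

```python
import os
import tempfile

# Hypothetical layout: the head job owns the real workspace,
# each test job's "workspace" is just a directory link to it.
root = tempfile.mkdtemp()
head_ws = os.path.join(root, "head-job", "workspace")
os.makedirs(head_ws)

# Windows equivalent:
#   mklink /D C:\jenkins\jobs\rest-test\workspace C:\jenkins\jobs\head-job\workspace
link = os.path.join(root, "rest-test-workspace")
os.symlink(head_ws, link, target_is_directory=True)

# Files written to the head workspace are visible through the link -- no copy needed.
with open(os.path.join(head_ws, "build.xml"), "w") as f:
    f.write("<project/>")
print(os.path.exists(os.path.join(link, "build.xml")))  # True
```

Linking instead of copying turns a ~10-minute, 1.8 GB copy into an instant operation.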

Well, the problem with duplication is not limited to just code…

Next – The discovery of Ant JUnit task options

On a quest of reducing Jenkins CI build time

In my organization we use Jenkins as our CI tool. The core build is followed by multiple jobs – unit tests, integration tests, SAS integration tests, PMD – all running in parallel, covering 3000+ tests, and the entire build pipeline took over 1 hour 30 minutes to produce the build artifacts. That was far too long, and it was very frustrating, especially when tests failed: multiple developers would check in files while the earlier build was in progress, the next job would start, and by the time the issue was identified and fixed, it would take more than 4–5 hours to get a stable build. QA would not get build artifacts on time. Valuable development time was being lost. There was frustration all around.

This persistent issue pushed me on a quest to reduce the CI build time.

The problem with duplication

The discovery of Ant JUnit task options

The assumptions around IO and SSD

The alternative for SSD – in-memory/in-process db

The “eureka” moment – discovery of RAM Disk Drives

The excitement and the disappointment

Test on smaller data set

CPU profiling for rescue

Today Ajay moved our Jenkins VM to a box with a hybrid disk, and the build pipeline time has come down from 25 minutes to 15 minutes, with all the test jobs running in less than 10 minutes! I feel very happy and satisfied with my quest to reduce the build time. The journey took more than two months, during which I learnt a lot.

Next – On a quest of reducing Jenkins build time – Part 2.