Most Viewed Pages


JBehave Results Report format for BDD Story

What the JBehave results report displays, and how to update its format

  • All the scenarios marked with the 'Scenario:' keyword in the story file are counted towards the 'Total' number of scenarios per story file - it does not matter whether their GWT (Given/When/Then) steps are defined/implemented or not; if there is a 'Scenario:' keyword, it is counted.
  • Scenarios without GWT steps defined in the story file are counted as 'Pending' and are also marked as 'Successful'. This makes sense: such scenarios have technically neither failed nor been excluded, because their steps are not even defined, let alone implemented. Adding a 'skip' tag to such scenarios is therefore redundant.
  • The same holds for scenarios whose GWT steps are defined but not yet fully implemented; these are counted as 'Pending' as well. In short, if the GWT steps are not fully defined and implemented, the scenario is marked as pending, and in this case Meta filtering to skip the test is not necessary.
  • Only those scenarios are counted as 'Excluded' which 1. have their GWT steps defined/implemented, and 2. have the skip filter applied to them. Scenarios without GWT steps, even when tagged to be skipped, are still counted as 'Pending' and not 'Excluded', which is correct.
  • The following sections are not needed and should be removed - exactly how to do this still needs to be added here:
    • AfterStories
    • BeforeStories
    • GivenStory Scenarios
  • Run Config for IDE/Maven command-line:
-Dbrowser=FIREFOX -DrunEnv=test1 -DmetaFilters=+Regression-skip-Manual
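For reference, a full Maven invocation with these properties could look like the line below. This is only a sketch - the goal to use (verify, integration-test, or test) depends on whether the stories run through the jbehave-maven-plugin or through JUnit/Surefire, and in many JBehave versions individual meta filters are whitespace-separated, so the value may need spaces and shell quoting:

    mvn clean verify -Dbrowser=FIREFOX -DrunEnv=test1 -DmetaFilters="+Regression -skip -Manual"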






How the JBehave tests should be run -


It's a good idea to test the final RunConfig only via the Maven command line and not via any IDE; likewise, when we want to run the tests, we should fire the mvn command and not use the IDE.
This ensures that we are testing with the same command that we want to put on CI, and it also surfaces any issues that appear only when running from the command line, which is how all the tests are supposed to be triggered.


Bug with JBehave results reports -


Issue - Old results are not deleted from the /target/jbehave/ dir, due to which results that are not part of the current run also get included in the latest results.
When we do a selective run of only the scenarios we want [only those scenarios as per the filter provided], only they do get run; but if we have run any other stories previously [which were not part of the current selective run], their results keep lingering around and are present in the final result report. We should have a clean run every time, but without having to download the dependencies or resources every time.


Solution - For now, this has been taken care of by manually deleting the contents of the /target/jbehave/ dir [the 'view' folder also gets deleted, but it gets recreated when the tests are run] as part of the BeforeStories method.

Also, we should be deleting only the files in the target and view folders, and not the other folders and files inside these two folders.
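A minimal sketch of such a cleanup step is given below. It assumes the default /target/jbehave output directory and a class annotated with JBehave's @BeforeStories that is registered like any other steps class; the class name, the paths, and the choice to delete only plain files are assumptions to be adapted to the actual project.

import java.io.File;

import org.jbehave.core.annotations.BeforeStories;

public class ReportCleanupSteps {

    // Assumed default JBehave output directory; adjust if the project overrides it
    private static final File REPORT_DIR = new File("target/jbehave");

    @BeforeStories
    public void deleteOldReports() {
        deleteFilesIn(REPORT_DIR);                   // old story result files
        deleteFilesIn(new File(REPORT_DIR, "view")); // old generated view pages
    }

    // Deletes only the plain files directly inside the given folder,
    // leaving sub-folders (and the resources they contain) untouched.
    private void deleteFilesIn(File dir) {
        File[] files = dir.listFiles();
        if (files == null) {
            return; // folder does not exist yet - nothing to clean
        }
        for (File file : files) {
            if (file.isFile()) {
                file.delete();
            }
        }
    }
}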

A better solution would be to figure out the jbehave config that can help delete older results before each run.




  • The default report format that comes with the simple archetype is not good enough; it is very bland, has no colours, and contains a lot of redundant info.
  • To update the report format, the pom from the simple archetype needs to be updated with the one from CodeCentric; in particular, the element containing **/*Stories.java has to be changed to **/*Scenarios.java.
  • This has to be replaced with the one from CodeCentric [LT also used one similar to CodeCentric's]. It is much better than the default one, but still has two issues - 1. The 'GivenStory Scenarios' section still appears in the report and is always blank; this is not the BeforeStories section, as BeforeStories appears along with the regular stories. 2. The 'Totals' count of stories shown on the RHS is incorrect. Neither is a deal-breaker, though, and it is a good enough version to start with.
  • There is a third report format that I have seen which has neither the 'GivenStory Scenarios' section nor the incorrect Totals section. So the goal is to have the report in this format.
  • But do not try to build this format by editing the FTL files yourself - it's much more time-consuming and tricky than you think. Instead, try to find the config that will generate the report in the format you need; a sketch of where that config typically lives follows this list.
  • Also, there are a few extra options that can be included in the reports [in the order of priority] -
  • Attach the screenshots
  • Have the 'Jenkins Build Console Log' link to display the log file info
  • Attach the test data files - input & output - as a link for each scenario result
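As a starting point for that search, the report formats and the generated view are usually wired up through JBehave's StoryReporterBuilder in the runner's Configuration (whether that runner lives in Stories.java or is referenced from the jbehave-maven-plugin). The sketch below is only illustrative of where such config sits and assumes a JUnitStories-based runner with stories on the classpath:

import static org.jbehave.core.reporters.Format.CONSOLE;
import static org.jbehave.core.reporters.Format.HTML;

import java.util.List;

import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStories;
import org.jbehave.core.reporters.StoryReporterBuilder;

public class Stories extends JUnitStories {

    @Override
    public Configuration configuration() {
        return new MostUsefulConfiguration()
            .useStoryReporterBuilder(new StoryReporterBuilder()
                .withCodeLocation(CodeLocations.codeLocationFromClass(this.getClass()))
                .withRelativeDirectory("jbehave")   // reports land under target/jbehave
                .withDefaultFormats()               // stats format needed for the view
                .withFormats(CONSOLE, HTML));       // report formats to generate
    }

    @Override
    public List<String> storyPaths() {
        // Assumed story location/pattern - adjust to the project's layout
        return new StoryFinder().findPaths(
                CodeLocations.codeLocationFromClass(this.getClass()), "**/*.story", "");
    }
}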


Info about FTL files -

  • FreeMarker (by the FreeMarker Project) is a 'template engine': a generic tool to generate text output (anything from HTML to autogenerated source code) based on templates. It's a Java package, a class library for Java programmers. It's not an application for end-users in itself, but something that programmers can embed into their products.
  • JBehave uses the .FTL files that we have under the view/ftl folder; a sketch of pointing JBehave at custom templates follows below.
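If a custom layout does turn out to be necessary, the templates can usually be swapped without editing the bundled FTL files, by pointing the view resources at your own templates. This is a rough sketch only; the 'reports' resource key and the custom template path are assumptions, and the exact keys and default template names differ between JBehave versions:

import java.util.Properties;

import org.jbehave.core.reporters.StoryReporterBuilder;

public class CustomViewReporting {

    // Builds a StoryReporterBuilder whose 'reports' view is rendered from a
    // custom FreeMarker template on the classpath (hypothetical path below).
    public static StoryReporterBuilder withCustomReportsTemplate() {
        Properties viewResources = new Properties();
        viewResources.put("reports", "my-ftl/jbehave-reports.ftl");
        return new StoryReporterBuilder().withViewResources(viewResources);
    }
}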

Conventions to write BDD Story files in JBehave

Writing BDD stories in Gherkin is easy, but conventions help us scale up and manage them much more effectively.


  • Do not create a separate feature file for every Jira story/task that you have; instead, create feature files based on the functional aspects or features of your application. This will prevent a proliferation of scenarios spread across too many story files.
  • Define a unique parent tag for each feature file that you create. This will be used to run all the scenarios in this story file. Define the parent Meta Tag as per the following format - @PROJECTNAME_FEATURENAME (see the example story snippet after this list).
  • Define a unique scenario-level Meta Tag as per the following format - @COMPONENT_FEATURE_TC<scenario-num>. This will be used to run only the selected test scenario from that story.
  • Along with the feature tag, we may also want to add a tag for the Jira story, if needed, like @JIRA_<jiraID>. This can come in handy when we want to run tests only for a particular Jira story or epic.
  • Define a common tag for different Category of tests, like @REGRESSION, @SMOKE, @API, etc. This will help run all the regression or smoke tests across different story files.
  • Meta tags can be defined in upper or lower case, but upper case is preferred
  • Do NOT use '-' in any of the meta tags. Using '-' in a tag name will actually prevent that test scenario from being executed
  • Do NOT have spaces in any tag's name; use '_' instead
  • Now, not all of my tests are automated, and they won't run, but they sure should be in the story files. So have additional tags for tests that are not automated, are still work in progress, or that you just don't want to run, like the ones below. [All these tags would be used to skip the tests]
  • @SKIP - denotes that these tests should be skipped and not run at all when running the JBehave Stories.
  • @MANUAL - denotes that the tests will not be automated, and have to be executed manually.
  • @WIP - denotes that these tests are not ready to be run in an automated manner
  • One of the reasons to include the manual tests in the stories as well is to ensure that we have just one place to maintain all our tests, and it's easier to report status.
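For instance, a story file following these conventions might start like this (all project, feature, and Jira names below are made up for illustration):

Meta:
@SHOP_CHECKOUT

Scenario: Guest user completes checkout with a credit card
Meta:
@SHOP_CHECKOUT_TC1
@JIRA_SHOP101
@REGRESSION

Given the guest user has an item in the cart
When the user pays with a valid credit card
Then the order confirmation page is displayed

Scenario: Checkout with an expired card shows an error
Meta:
@SHOP_CHECKOUT_TC2
@MANUAL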

Refer to this post to put these conventions to use - How to use Meta Filters in JBehave

Add Meta Filters for JBehave Story

Meta filters in JBehave are very useful when we want to run only a small subset of test scenarios across multiple Story files.

For instance, when there are 100 tests spread across 5 different Story files and we want to run just 20 of them (4 tests from each of the 5 stories), then, if we have Meta tags defined, we just have to specify the tags for the tests we want to run or skip.

To enable filtering via Meta tags in JBehave, just add the three metaFilters lines marked below to the Stories() constructor in your Stories.java class (the surrounding embedder controls are shown for context).

public Stories() {

    configuredEmbedder().embedderControls()
        .doGenerateViewAfterStories(true)
        .doIgnoreFailureInStories(true)
        .doIgnoreFailureInView(true)
        .useThreads(2)
        .useStoryTimeoutInSecs(300);

    // Custom Config >> the following 3 lines enable Meta tag based filtering
    // (needs java.util.List and java.util.ArrayList imports)
    List<String> metaFilters = new ArrayList<String>();
    metaFilters.add(System.getProperty("metaFilters", "-skip"));
    configuredEmbedder().useMetaFilters(metaFilters);
}

@SKIP - this is the meta tag that can now be added to any test to skip its execution.
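At runtime the same property then carries the filters; for example (the command and tag names are illustrative, and the value needs quoting because of the spaces):

    mvn clean test -DmetaFilters="+SMOKE -SKIP"

Assuming the filter string is passed to JBehave unchanged, this runs only the scenarios tagged @SMOKE while leaving out anything tagged @SKIP.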

Refer to this post to run JBehave stories by specifying Meta filters via Maven

Configure QTP to work with Firefox

Steps to configure QTP to work with Firefox 10.0.3, so that QTP is able to identify objects on Firefox.

  • Install QTP and uninstall any previous instances of Firefox
  • Install Firefox 10.0.3 by opening the file "WX7-FireFoxESR-10-0-3-R1.EXE" in "Run as Admin" mode
  • Restart the system
  • Install the QTP Patches in the following sequence:
    • a. QTPWEB_00090.EXE
    • b. QTPWEB_00092.EXE
  • Info about installed Patches would be available at: C:\Program Files (x86)\HP\QuickTest Professional\HotfixReadmes
  • Restart the system
  • Open: C:\Program Files (x86)\HP\QuickTest Professional\bin\Mozilla\Common\install.rdf
  • Copy the em:id for QTP, at the top of the file - not the one for Firefox itself.
  • Create a new empty file with this ID as the file name, do not give any extension to this file. Eg: {9F17B1A2-7317-49ef-BCB7-7BB47BDE10F8}
  • Enter this line in the file and click Save: C:\Program Files (x86)\HP\QuickTest Professional\bin\Mozilla\Common
  • Paste this file at: C:\Program Files (x86)\Mozilla Firefox\extensions
  • Admin privileges are needed to drop this file!
  • Right-click the desktop shortcut for QTP, select "Run as Admin", and let QTP open completely
  • Open Firefox, and install the QTP Plug-in Add-on when prompted

After this, QTP should be able to identify the objects on Firefox.
[This post needs to be updated for UFT and latest firefox versions]