Most Viewed Pages

20.11.15

JBehave Results Report format for BDD Story

How and what the JBehave result report displays, and how to update the format

  • All the scenarios marked with the 'Scenario:' tag in the story file are counted towards the 'Total' number of scenarios per story file - it does not matter whether they have GWT steps defined/implemented or not; if there is a 'Scenario:' tag, it is counted.
  • Scenarios without GWT steps defined in the story file are counted as 'Pending' and are also marked as 'Successful'. This makes sense because those scenarios have not technically failed or been excluded - their steps are not even defined, let alone implemented. Adding a 'skip' tag to such scenarios is redundant.
  • The same is true for scenarios with GWT steps defined but not yet fully implemented; these are counted as 'Pending' as well. In short, if the GWT steps are not fully defined/implemented, the scenario is marked as pending, and in this case Meta filtering to skip the test is not necessary.
  • Only those scenarios are counted as 'Excluded' which 1. have the GWT steps defined/implemented and 2. have the skip filter added against them. Scenarios without GWT steps, even though marked as 'skip', are still counted as 'Pending' and not 'Excluded', which is correct.
  • The following sections are not needed and should be removed - the exact steps for removing them still need to be documented here.
    • AfterStories
    • BeforeStories
    • GivenStory Scenarios
  • Run Config for IDE/Maven command-line:
-Dbrowser=FIREFOX -DrunEnv=test1 -DmetaFilters=+Regression-skip-Manual
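The counting rules in the bullets above can be illustrated with a story sketch (scenario names and steps are invented; '!--' is JBehave's comment prefix):

```
Scenario: search returns matching results
!-- GWT steps defined and implemented: counted in Total, gets executed

Given the user is on the home page
When the user searches for books
Then matching results are shown

Scenario: placeholder for a future test
!-- no GWT steps at all: counted in Total, reported as Pending (and Successful)

Scenario: large file upload
Meta: @skip
!-- steps implemented AND skip filter applied: counted as Excluded

Given a large file is selected
When the user uploads it
Then the upload completes
```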






How the JBehave tests should be run -


It's a good idea to test the final RunConfig only via the Maven command line and not via any IDE. Also, when we want to run the tests, we should fire the mvn command and not use the IDE.
This ensures that we are testing with the same command that we want to put on CI, and it also surfaces any issues with running via just the command line, which is how all the tests are supposed to be triggered.


Bug with JBehave results reports -


Issue - Old results are not deleted from the /target/jbehave/ dir, due to which results that are not part of the current run also get included in the latest results.
When we do a selective run of only the scenarios we want [only those scenarios matching the provided filter], only those scenarios do get run; but if any other stories were run previously [which are not part of the current selective run], their results keep lingering around and are present in the final result report. We should have a clean run every time, but without having to download the dependencies or resources every time.


Solution - For now, this has been taken care of by manually deleting the contents of the /target/jbehave/ dir [the 'view' folder also gets deleted, but it gets recreated when the tests are run] as part of the BeforeStories method.

Also, we should delete only the files in the target and view folders, not the other folders and files inside them.

A better solution would be to figure out the jbehave config that can help delete older results before each run.
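A minimal sketch of that cleanup, assuming a plain recursive delete is acceptable (the class and method names are made up, as is the hook it would be called from; only the target/jbehave path comes from the notes above):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class ReportCleaner {

    // Recursively deletes everything under the given reports dir,
    // deepest entries first, then the dir itself. JBehave recreates
    // the 'view' folder when the next run generates its reports.
    public static void cleanReports(Path reportsDir) throws IOException {
        if (!Files.exists(reportsDir)) {
            return; // nothing to clean on a fresh checkout
        }
        try (Stream<Path> walk = Files.walk(reportsDir)) {
            walk.sorted(Comparator.reverseOrder())
                .map(Path::toFile)
                .forEach(File::delete);
        }
    }

    public static void main(String[] args) throws IOException {
        // default JBehave output location relative to the project root
        cleanReports(Paths.get("target", "jbehave"));
    }
}
```

Something like this could be invoked from the BeforeStories method mentioned above, so every run starts from an empty report directory.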




  • The default format that comes with the simple archetype is not good enough - it is very bland, sans colours, and has a lot of redundant info.
  • To update the report format, the POM from the simple archetype needs to be updated with the one from CodeCentric; especially, the element with **/*Stories.java has to be changed to **/*Scenarios.java.
  • This has to be replaced with the one from CodeCentric [LT also used one similar to CodeCentric's]. It is much better than the default one, but still has two issues - 1. The 'GivenStory Scenarios' section still appears in the report and is always blank - this is not the BeforeStories section, as BeforeStories appears along with the regular stories. 2. The 'Totals' section on the RHS shows an incorrect number of stories - though this is not a deal-breaker, and it is a good enough version to start with.
  • There is a third report format that I have seen which has neither the 'GivenStory Scenarios' section nor the incorrect Totals section. So the goal is to have the report in this format.
  • But do not try to build this format by editing the FTL files yourself - it is much more time consuming and tricky than you think. Instead, try to find the config that will generate the report in the format you need.
  • Also, there are a few extra options that can be included in the reports [in the order of priority] -
  • Attach the screenshots
  • Have the 'Jenkins Build Console Log' link to display the log file info
  • Attach the test data files - input & output - as a link for each scenario result


Info about FTL files -

  • FreeMarker, by the FreeMarker Project, is a 'template engine': a generic tool to generate text output (anything from HTML to autogenerated source code) based on templates. It's a Java package, a class library for Java programmers. It's not an application for end-users in itself, but something that programmers can embed into their products.
  • This uses the .FTL files that we have under the view/ftl folder for jbehave.

Conventions to write BDD Story files in JBehave

Writing BDD Stories in Gherkin is easy, but conventions will help us scale up and manage them much more effectively.


  • Do not create too many feature files for each Jira story/task that you have, instead create feature files based on the functional aspects or features of your application. This will prevent proliferation of too many scenarios spread across too many story files.
  • Define a unique parent tag for each feature file that you create. This will be used to run all the scenarios in this story file. Define the parent Meta Tag as per the following format - @PROJECTNAME_FEATURENAME
  • Define a unique scenario-level Meta Tag as per the following format - @COMPONENT_FEATURE_TC<scenario-num> This will be used to run only the selected test-scenario from that story.
  • Along with the Feature, we may also want to add a tag, for the Jira Story, if needed, like @JIRA_<jiraID>. This can come handy when we only want to run tests based on each Jira Story or Epic
  • Define a common tag for different Category of tests, like @REGRESSION, @SMOKE, @API, etc. This will help run all the regression or smoke tests across different story files.
  • Meta tags can be defined in upper or lower case, but upper case is preferred
  • Do NOT use '-' in any of the meta tags. Using '-' in a tag name will actually not execute that test scenario
  • Do NOT have spaces in between in any tag's name, use '_' instead
  • Now, not all of our tests are automated; such tests won't run, but they should still be in the story files. So have additional tags for tests that are not automated, are still work in progress, or that you just don't want to run, like below. [All these tags would be used to skip the tests]
  • @SKIP - denotes that these tests should be skipped and not run at all when running the JBehave Stories.
  • @MANUAL - denotes that the tests will not be automated, and have to be executed manually.
  • @WIP - denotes that these tests are not ready to be run in an automated manner
  • One of the reasons to include manual tests in the stories as well is to ensure that we have just one place to maintain all our tests, and it's easier to report status.
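Putting the conventions above together, the top of a story file might look like this (the project, feature, and Jira IDs are invented for illustration):

```
Meta: @BOOKSTORE_SEARCH @REGRESSION @JIRA_BKS101

Scenario: search by author returns matching titles
Meta: @BOOKSTORE_SEARCH_TC1 @SMOKE

Given the user is on the search page
When the user searches by author
Then matching titles are listed
```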

Refer this post to put these conventions to use - How to use Meta Filters in JBehave

Add Meta Filters for JBehave Story

Meta filters in JBehave are very useful when we want to run only a small subset of test scenarios across multiple Story files.

For instance, when there are 100 tests spread across 5 different Story files and we want to run just 20 tests (4 tests from each of the 5 stories), if we have Meta tags defined we just have to specify the tags for the tests we want to run or skip.

To enable filtering via Meta tags in JBehave, just add the following 3 lines in your Stories.java class, inside the Stories() constructor.

public Stories() {

    configuredEmbedder().embedderControls()
        .doGenerateViewAfterStories(true)
        .doIgnoreFailureInStories(true)
        .doIgnoreFailureInView(true)
        .useThreads(2)
        .useStoryTimeoutInSecs(300);

    // Custom Config >> Added to enable Meta tag based filtering
    // (requires the java.util.List and java.util.ArrayList imports)
    List<String> metaFilters = new ArrayList<String>();
    metaFilters.add(System.getProperty("metaFilters", "-skip"));
    configuredEmbedder().useMetaFilters(metaFilters);
}

@SKIP - this is the meta tag that can now be added to any test to skip its execution.

Refer this post to run JBehave stories by specifying Meta filters via Maven

Configure QTP to work with Firefox

Steps to configure QTP to work with Firefox 10.0.3, so that QTP is able to identify objects on Firefox.

  • Install QTP and Un-install any previous instances of Firefox
  • Install Firefox 10.0.3 by opening the file "WX7-FireFoxESR-10-0-3-R1.EXE" in "Run as Admin" mode
  • Restart the system
  • Install the QTP Patches in the following sequence:
    • a. QTPWEB_00090.EXE
    • b. QTPWEB_00092.EXE
  • Info about installed Patches would be available at: C:\Program Files (x86)\HP\QuickTest Professional\HotfixReadmes
  • Restart the system
  • Open: C:\Program Files (x86)\HP\QuickTest Professional\bin\Mozilla\Common\install.rdf
  • Copy the em:ID for QTP from the top of the file - not the one for Firefox itself.
  • Create a new empty file with this ID as the file name, do not give any extension to this file. Eg: {9F17B1A2-7317-49ef-BCB7-7BB47BDE10F8}
  • Enter this line in the file and click Save: C:\Program Files (x86)\HP\QuickTest Professional\bin\Mozilla\Common
  • Paste this file at: C:\Program Files (x86)\Mozilla Firefox\extensions
  • Admin privileges are needed to drop this file!
  • On the desktop shortcut for QTP, right click and select "Run as Admin"; let QTP open completely
  • Open Firefox, and install the QTP Plug-in Add-on when prompted

After this, QTP should be able to identify the objects on Firefox.
[This post needs to be updated for UFT and latest firefox versions]     

21.8.15

Log4J Configuration


Get the coordinates from maven central repo and add the dependency to your pom

    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>
Create a file 'log4j.properties' and place it under the 'src/main/resources/' folder in your project; placed anywhere else, it will give an error like 'No appenders could be found...'.

This basic configuration would suffice in most cases, add it to the log4j.properties file.
#---------------
# Root logger option
log4j.rootLogger=INFO, stdout, file

# Redirect log messages to console
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %C{1}.%M.%L: %m%n

# Redirect log messages to a log file, support file rolling.
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=E:\\ToDelete\\logger.log
log4j.appender.file.MaxFileSize=5MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %C{1}.%M.%L: %m%n
#---------------

NB:

  • To use the logger, just declare the class as follows.
    private final static Logger LOGGER = Logger.getLogger(BaseStep.class);

  • And you can start logging as follows.
LOGGER.info("Anything you want");

  • This would create logs in the format Class.Method.LineNo
  • For most part, we can set the logging level to INFO. DEBUG mode will cause log-blindness!
  • Also, if the appender file location [folder] does not exist, it will create it during run-time, provided it has the rights.
  • Although it is recommended to declare this in each new class that you create, if you declare it as public in your base abstract class, then you will not have to declare it everywhere and can use it directly.
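The last point can be sketched like this; java.util.logging is used here only so the snippet compiles without the log4j jar on the classpath - with log4j 1.x the declaration would be org.apache.log4j.Logger.getLogger(BaseStep.class):

```java
import java.util.logging.Logger;

// Base-class logger pattern: declared once, inherited by every step class.
// (java.util.logging stands in for log4j purely so this sketch is
// self-contained; the pattern is identical with either library.)
abstract class BaseStep {
    protected static final Logger LOGGER = Logger.getLogger(BaseStep.class.getName());
}

class LoginStep extends BaseStep {
    String run() {
        // no logger declaration needed in the subclass
        LOGGER.info("logging from a subclass without re-declaring the logger");
        return "ok";
    }
}
```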


Maven - core ideas

Maven is a project management tool which encompasses a project object model, a set of standards, a project lifecycle, a dependency management system, and logic for executing plugin goals at the defined phases in a lifecycle.  Understanding how maven works is very critical for any developer.     


Maven reads the POM, then downloads the dependencies into the local repo, then executes the lifecycle, build phases and goals via the plugins, as per the specified build profile.

Philosophy
  • Maven suggests we follow (default) Convention over (custom) Configuration
  • Maven incorporates this concept by providing commonly followed default behavior for projects, without customization. The idea behind maven is about encouraging a set of standards, a common interface, a life-cycle, a standard repository format, a standard directory layout, etc.
  • The community needs to evolve beyond seeing technology as a zero-sum game between unfriendly competitors in a competition for users and developers.


Maven defaults –
  • Source code is assumed to be in ${basedir}/src/main/java
  • Resources are assumed to be in ${basedir}/src/main/resources
  • Tests are assumed to be in ${basedir}/src/test/java
  • A project is assumed to produce a JAR file. Maven assumes that you want the compiled bytecode to go to ${basedir}/target/classes and then to create a distributable JAR file in ${basedir}/target.
  • ~/.m2/settings.xml - A file containing user-specific configuration for authentication, repositories, and other information to customize the behavior of Maven.
  • ~/.m2/repository/ - This directory contains your local Maven repository. When you download a dependency from a remote Maven repository, Maven stores a copy of the dependency in your local repository.


POM –
  • POM is an XML representation of the project resources like dependencies, plugins, etc.
  • It is located in the root dir of the project.
  • The POM specifies what needs to be built, not how to build it. How a project gets built is governed by the build phases and goals.
  • The modelVersion element sets which version of the POM model is being used, and it corresponds to the Maven version being used. Version 4.0.0 corresponds to Maven versions 2 and 3.
  • The groupId element conventionally corresponds to the name of the root Java package of the project, but it is not mandatory that the two be the same.
  • The artifactId element contains the name of the Java project. The JAR that eventually gets generated uses the artifactId for its name.


Maven Build Process –
  • Maven Build Process consists of build lifecycle, phases and goals.
  • A build lifecycle consists of a predefined sequence of phases, and each phase consists of a sequence of goals.
  • If we run a command to execute a lifecycle, then all the build phases in that lifecycle are executed.
  • If we run a command to execute a build phase, then all the build phases before it in the predefined sequence are executed too.
  • Sequence of Phases of Maven Build Lifecycle: validate >> compile >> test >> package >> verify >> install >> deploy
    • validate will check if the pom is correct
    • test will run the unit tests for classes
    • package will generate Jar/war as per POM file and will add the packaged jar or war to your target folder.
    • verify will run the integration tests
    • install will do all the things that package does, and then it will also add the packaged jar or war in your local .m2 repo
    • deploy will publish the jar to the remote/network repo

Maven Commands – 
  • To run the mvn command you have to pass the name of a build life cycle, phase or goal to it, which Maven then executes.
mvn install
This command executes the build phase called install. It executes all build phases before install in the build phase sequence, then executes the install phase, which builds the project and creates the packaged JAR file into the local Maven repository.
  • We can execute multiple build life cycles or phases by passing more than one argument to the mvn command.
mvn clean install
This command first executes the clean build life cycle, which removes compiled classes from the Maven output directory, and then it executes the install build phase. The target directory, under the project root dir, is created by Maven. It contains all the compiled classes, JAR files etc. produced by Maven. When executing the clean build phase, it is this target directory which gets cleaned.
  • You can also execute a Maven goal (a subpart of a build phase) by passing the build phase and goal name concatenated with a : in between, as parameter to the Maven command.
mvn dependency:resolve
     This would download all the jars for dependencies in the POM

mvn dependency:resolve -U
      -U forces an update of the dependencies.

mvn help:effective-pom
This displays the entire pom for the project, containing even the inherited/hidden dependencies.

mvn integration-test --log-file log.log
This will run the integration test and output the result text in the log.log file in the root dir.

mvn archetype:generate
Here, 'mvn' is the Maven command and 'archetype:generate' is called a Maven goal. Also, 'archetype' is the identifier [ID] of a plugin and 'generate' is the identifier [ID] of the goal.


Plugin –
  • A Maven Plugin is a collection of one or more goals. A Plugin Contains Goals.
  • Examples of Maven plugins can be simple core plugins like the Jar plugin, which contains goals for creating JAR files, Compiler plugin, which contains goals for compiling source code and unit tests, Surefire plugin, which contains goals for executing unit tests and generating reports. 

Goal –
  • A Goal is a specific task that may be executed as a standalone goal or along with other goals as part of a larger build. A goal is a ‘unit of work’ in Maven.
  • Examples of goals include the compile goal in the Compiler plugin, which compiles all of the source code for a project, or the test goal of the Surefire plugin, which can execute unit tests.
  • The basic syntax for calling/executing a goal via a plugin is:
mvn pluginID:goalID
  • So basically, the idea is that there are different plugins and each plugin can do specialized set of tasks, and when we run the mvn command, we are basically telling mvn, to use a particular plugin, and with that plugin, accomplish a particular task [or 'goal'].
And you should know the various common plugins that would be used for automation - some plugins come bundled with Maven core, and some have to be added. Along with the Maven plugins, you need to know the various commands [goals] that each plugin can execute.


Lifecycle –
  • A Lifecycle is a default set of phases which get executed in the sequence defined for that life-cycle keyword, and in each phase various applicable goals are executed [various goals are 'bound' to phases].
Also, what is achieved via the life-cycle command can also be achieved by specifying the plugin goals in the same sequence as they would be executed via the life-cycle. But it is much easier to execute lifecycle phases than it is to specify explicit goals on the command line, and the common lifecycle allows every project that uses Maven to adhere to a well-defined set of standards.
  • Maven coordinates are often written using a colon as a delimiter in the following format: groupId:artifactId:packaging:version. In the pom.xml file for a project, its coordinates are represented as mavenbook:myapp:jar:1.0-SNAPSHOT
A project’s groupId:artifactId:version combination makes that project unique.
  • The packaging format of a project is also an important component in the Maven coordinates, but it isn’t a part of a project’s unique identifier. A project with packaging 'jar' produces a JAR archive; a project with packaging 'war' produces a web application. Also, when you create a JAR for a project, dependencies are not bundled with the generated artifact; they are used only for compilation. When you use Maven to create a WAR or an EAR file, you can configure Maven to bundle dependencies with the generated artifact.
  • Maven has 3 built in lifecycles - default, clean and site. The default life cycle handles everything related to compiling and packaging your project. The clean life cycle handles everything related to removing temporary files from the output directory, including generated source files, compiled classes, previous JAR files etc. The site life cycle handles everything related to generating documentation for your project. You cannot execute the default life cycle directly. You have to specify a build phase or goal inside the default life cycle.

GroupId –

    • The group, company, team, organization, project, or other group. The convention for group identifiers is that they begin with the reverse domain name of the organization that creates the project.

ArtifactId –

    • A unique identifier under groupId that represents a single project.

Version –

    • A specific release of a project. Projects undergoing active development can use a special identifier that marks a version as a SNAPSHOT.


Dependency –
  • Dependency is an external JAR used in the project.
  • One of the most basic uses of maven is to manage all the external JARs used in the project, by adding them as dependencies in the POM.
If dependencies are not found in the local repo, then maven downloads them from the Central Repo and puts them in the local repo. [Local repo is just a folder containing all the jars, generally located in the .m2 folder under each user's profile folder]
  • Even if a maven dependency is not available in the central repo, we can still put the dependency [jar] in the local repo ourselves. To add the dependency manually, we need to adhere to the groupId, artifactId and version format when creating the directory structure.
            <dependency>
                        <groupId>org.seleniumhq.selenium</groupId>
                        <artifactId>selenium-java</artifactId>
                        <version>2.46.0</version>
            </dependency>
For the above dependency, the selenium jar needs to be located under MAVEN_REPOSITORY_ROOT/org/seleniumhq/selenium/selenium-java/2.46.0/. This can be done for jars which have a simple structure, but for jars with multiple sub-directories it can be tricky.


External Dependencies -


An external dependency in Maven is a dependency (JAR file) which is not located in a Maven repository (neither the local, central, nor a remote repository). The word "external" thus means external to the Maven repository system - not just external to the project. Most dependencies are external to the project, but a few are external to the repository system (not located in any repository) - these are usually proprietary jars which are not free/open-source. There are 3 options to use such jars in the project.

Option 1: We can directly add such jars in the IDE and run our tests via TestNG. We can keep such jars in the src/main/resources/lib folder and add their references in the project build path. These jars can then be committed in the VCS and would be present along with the source code of the project, so we can run our tests via TestNG without any issues. But we cannot run like this on CI because these jars are not present in the POM; hence, we have to use option 2.

Option 2: Keep these jars in the src/main/resources/lib folder in the project. Then add their dependencies in the POM with dummy groupId, artifactID and version. But the path to these jars should always be relative to the src dir. We configure an external dependency like below:
<dependency>
    <groupId>mydependency</groupId>
    <artifactId>mydependency</artifactId>
    <version>1.0</version>
    <scope>system</scope>
    <systemPath>${project.basedir}/src/main/resources/lib/mydependency.jar</systemPath>
</dependency>

Option 3: Use the maven-install-plugin or the maven-external-dependency-plugin.
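With the maven-install-plugin's install-file goal, a jar can be pushed into the local repo without hand-crafting the directory structure; the goal and its parameters are standard, though the file path and coordinates here reuse the dummy 'mydependency' values from Option 2:

```
mvn install:install-file -Dfile=src/main/resources/lib/mydependency.jar -DgroupId=mydependency -DartifactId=mydependency -Dversion=1.0 -Dpackaging=jar
```

After this, the jar can be referenced as a normal dependency, without the system scope and systemPath workaround.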
  • If a project A depends on project B, which in turn depends on project C, then we only have to include the dependency of project B in the pom of project A -  Maven takes care of child dependencies for project B [project C] implicitly.
Also, there is an idea of dependency 'scope' for a particular 'goal'. When a dependency has a scope of test, it will not be available to the compile goal of the Compiler plugin, it would be available only for the 'test'-esque goals. A test-scoped dependency is a dependency that is available on the classpath only during test compilation and test execution. If your project has war or ear packaging, a test-scoped dependency would not be included in the project’s output archive. To add a test-scoped dependency, add the dependency element to your project’s dependencies section, as shown in the following example:

 <dependency>
   <groupId>org.apache.commons</groupId>
   <artifactId>commons-io</artifactId>
   <version>1.3.2</version>
   <scope>test</scope>
 </dependency>


SureFire plugin to run TestNG suite-
  • Maven SureFire Plugin is a very widely used plugin to execute tests
  • You don’t have to do anything special to run a unit test; the test phase is a normal part of the Maven lifecycle. You run Maven tests whenever you run 'mvn package' or 'mvn install' commands. If you would like to run all the lifecycle phases up to and including the test phase, run 'mvn test' command.
  • When Maven encounters a build failure, its default behavior is to stop the current build. To continue building a project even when the Surefire plugin encounters failed test cases, you’ll need to set the testFailureIgnore configuration property of the Surefire plugin to true.
<build>
 <plugins>
  <plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
     <testFailureIgnore>true</testFailureIgnore>
    </configuration>
  </plugin>
 </plugins>
</build>

This property can also be set from the command line using the -D parameter:
mvn test -Dmaven.test.failure.ignore=true

Maven also provides for the ability to skip unit tests using the skip parameter of the Surefire plugin. To skip tests from the command line, simply add the maven.test.skip property to any goal:
mvn install -Dmaven.test.skip=true

Another way to configure Maven to skip unit tests is to add this configuration to your project’s pom.xml. To do this, you would add a plugin element to your build configuration.
<build>
 <plugins>
  <plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
     <skip>true</skip>
    </configuration>
  </plugin>
 </plugins>
</build>


  • To have Maven run our tests in parallel, we need to run the tests as 'methods' and not 'classes', with the fork count equal to the number of runner methods, and without re-using the threads/forks.
  • If we run tests as 'classes', then they will not run in parallel. Basically, the way you define the execution config for TestNG, the same should be followed for SureFire also.
  • Maven and its plugins use the TestNG Annotations to invoke the methods/classes as tests via SureFire plugin.
  • Just add the below build tag in the pom before dependency tags to control how you build and run tests


<build>
    <!-- Source directory configuration -->
    <sourceDirectory>src</sourceDirectory>
    <plugins>

        <!-- Following plugin executes the TestNG tests -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>3.0.0-M4</version>
            <configuration>
                <parallel>methods</parallel>
                <forkCount>3</forkCount>
                <!-- TestNG Suite Config file to use for test execution -->
                <suiteXmlFiles>
                    <suiteXmlFile>./Config/testng.xml</suiteXmlFile>
                </suiteXmlFiles>
            </configuration>
        </plugin>

        <!-- Compiler plugin configures java version for compiling the code -->
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.8.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
    </plugins>
</build>


Passing TestNG XML at run-time via mvn -

  • Running TestNG via the command line is very cumbersome [it involves knowing and passing various parameters and even updating the ClassPath env var]; hence, it is much better to run the tests via the Maven Surefire plugin. We should use the TestNG XML just to specify the tests to be run.
  • We can even pass the TestNG XML at run time via the mvn command and then read it in the POM to run that suite, like below.
<suiteXmlFiles>
    <suiteXmlFile>${suiteXmlFile}</suiteXmlFile>
    <suiteXmlFile>./Config/testng.xml</suiteXmlFile>
</suiteXmlFiles>
  • And call as below
mvn test -DsuiteXmlFile=/dirPath/fileName.xml
  • Or as
mvn test -Dsurefire.suiteXmlFile=/dirPath/fileName.xml
  • In a way, this also shows how we can pass any 'value' via the mvn command and then have it read as a '${parameter}' anywhere in the pom.xml
  • We can even specify multiple TestNG XMLs; when we run the tests, they get executed in the sequence they are added in the POM. The only issue is that none of the tests will run, and the execution will fail, if even one of the XMLs is unspecified/empty when we just use the 'mvn test' command.
  • Just remember that mvn commands are case sensitive, so ensure you use the exact parameter names in '-D' coordinates.



Maven Settings File –
  • Maven has two settings files. In the settings files you can configure settings for Maven across all Maven POM files. The two settings files are located at:
The Maven installation directory: $M2_HOME/conf/settings.xml
The user's home directory: ${user.home}/.m2/settings.xml
  • Both files are optional. If both files are present, the values in the user home settings file override the values in the Maven installation settings file.
  • We can change the location of the local repository by setting the directory inside our Maven settings file. Maven settings file is also located in our user-home/.m2 directory and is called settings.xml. Here is how we can specify another location for the local repository:
<settings>
    <localRepository>
        d:\data\java\products\maven\repository
    </localRepository>
</settings>



Remote Repository –
  • Sometimes, for some stupid reason or just to frustrate everyone, companies decide to change the internal source repo and make the new repo nearly impossible to access, or give it limited connectivity via some type of servers. Hence, it's a good idea to mention the exact repo you are using directly in the POM, because the repo in the POM overrides the repo mentioned in the local settings.xml.
  • A remote repository is a repository on a web server from which Maven can download dependencies, just like the central repository. A remote repository can be located anywhere on the internet, or inside a local network.
  • You can configure a remote repository in the POM file. Put the following XML elements right after the <dependencies> element:
<repositories>
   <repository>
       <id>automation.code</id>
       <url>http://test.automation.com/maven2/lib</url>
   </repository>
</repositories>


Build Profiles –
  • Maven build profiles enable you to build your project using different configurations. Instead of creating two separate POM files, you can just specify a profile with the different build configuration, and build your project with this build profile when needed.
  • Maven build profiles are specified inside the POM file, inside the profiles element. Each build profile is nested inside a profile element.
  • Build Profiles help build the same maven project in different ways, according to the end use. 
  • For example, the build profile for test and production environments would be different, again, the profile to be used for test automation would be different, and can have its own specifications as per execution environments [on local desktop or on CI box]. These different build profiles can be added to the pom and specified in the maven command.
<profiles>
    <profile>
        <id>test</id>
        <activation>...</activation>
        <build>...</build>
        <modules>...</modules>
        <repositories>...</repositories>
        <pluginRepositories>...</pluginRepositories>
        <dependencies>...</dependencies>
        <reporting>...</reporting>
        <dependencyManagement>...</dependencyManagement>
        <distributionManagement>...</distributionManagement>
    </profile>
</profiles>
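To build with a specific profile from the command line, its id is passed with the -P flag; using the 'test' id from the snippet above:

```
mvn install -P test
```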


<resources> tag in the POM - 
  • This feature allows you, for example, to use the variables defined in the POM, inside your .properties files. Just declare the directory, where your .properties are, in <resources> section, and you'll be able to use ${project.name}.
  • Refer - http://maven.apache.org/plugins/maven-resources-plugin/examples/filter.html
<resources>
    <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
    </resource>
</resources>

  • This is not essential, though it could be useful in some cases for passing env variables via Maven coordinates.
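For example, with filtering turned on for src/main/resources, a properties file there could reference POM values (the file name and key below are illustrative):

```
# src/main/resources/app.properties
# At build time Maven replaces the placeholder with the POM's <name>.
app.name=${project.name}
```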

<pluginManagement> tag in the POM -
  • Difference between the plugins and pluginManagement tags –
    • <pluginManagement/> defines the settings for plugins that will be inherited by modules in your build. This is great for cases where you have a parent POM file.
    • <plugins/> is an actual invocation of the plugin. It may or may not be inherited from a <pluginManagement/>.
  • pluginManagement: is an element that is seen along side plugins. Plugin Management contains plugin elements in much the same way, except that rather than configuring plugin information for this particular project build, it is intended to configure project builds that inherit from this one. However, this only configures plugins that are actually referenced within the plugins element in the children. The children have every right to override pluginManagement definitions.
  • You don't need to have a <pluginManagement/> in your project, if it's not a parent POM.
  • This is not essential for the JB project; the CC POM does not have it.
  • Refer - http://stackoverflow.com/questions/10483180/maven-what-is-pluginmanagement/
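A hedged sketch of the parent/child split described above (the plugin and version are reused from the Surefire example earlier in these notes):

```
<!-- Parent POM: pins the Surefire version for all child modules -->
<pluginManagement>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>3.0.0-M4</version>
        </plugin>
    </plugins>
</pluginManagement>

<!-- Child POM: actually invokes the plugin; the version is inherited -->
<plugins>
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
    </plugin>
</plugins>
```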

Maven Folder Structure -

  • Resources folder - 
    • Usually the 'resources' folder in all maven projects is located under - ./src/main/resources
    • If for some reason you decide to keep your data files (like .xls) in this folder, and you have any of them open while running 'mvn test', the step will fail with an error that ab.xls is already open and being used by another process - even if that data file is not used by the current command at all. So avoid keeping data files in the resources folder if you need to have many of them open while running tests.


ANT -

  • The Maven Ant Tasks are deprecated.
  • Gradle = Maven + Ant != good for QA
  • Ant is just a build tool and is very procedural - everything has to be defined explicitly, as there are no defaults.