18.3.21

Build a Real-time Automation Dashboard

[Figure: test results flow from all the different test runs into a central database, which is then displayed on a dashboard.]
The picture above conveys how you can feed test results from all your different tests into a database and then display them on a dashboard.

As your automation matures and grows in size and coverage, everyone expects more from it, and expects it to be faster, simpler, and capable of all kinds of things.
If you have thousands of tests running on a regular basis, keeping track of them all and conveying the results to every stakeholder becomes a big task in itself. And with thousands of tests, it's difficult to condense everything into an overall picture and status of the testing.

This is where having a Dashboard helps, as it radiates the status and progress of each test/feature in real-time.
And not just automated test results: it can also pull data from Jira for manual tests, keeping all this information current and easy to understand. And all your Directors and MDs would love a browser-based real-time dashboard!

Also, the dashboard should not add too much overhead in terms of implementation and day-to-day use, because otherwise it increases development and testing time.
Hence, ideally, the Dashboard calls should be wrapped inside the Reporting framework itself, so that its use becomes seamless.
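One way to make this seamless, sketched below with hypothetical names (Reporter, ResultPublisher - these are not from any real framework), is to have the reporting layer fan each result out to any number of registered publishers - the HTML report writer, a dashboard/database publisher, and so on - so the test code itself never talks to the dashboard directly:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the tests talk only to Reporter; the dashboard
// publisher is an implementation detail plugged in behind it.
interface ResultPublisher {
    void publish(String testName, String status, long durationMs);
}

class Reporter {
    private final List<ResultPublisher> publishers = new ArrayList<>();

    void addPublisher(ResultPublisher p) {
        publishers.add(p);
    }

    // Called once per test by the framework; fans the result out to every
    // registered publisher (HTML report, dashboard DB, etc.)
    void logResult(String testName, String status, long durationMs) {
        for (ResultPublisher p : publishers) {
            p.publish(testName, status, durationMs);
        }
    }
}

public class ReporterSketch {
    public static void main(String[] args) {
        Reporter reporter = new Reporter();
        // Stand-in publisher that just prints; a real one would POST to the dashboard's DB
        reporter.addPublisher((name, status, ms) ->
                System.out.println(name + "," + status + "," + ms));
        reporter.logResult("LoginTest", "PASS", 1250);
    }
}
```

With this shape, swapping the dashboard backend later (ELK, InfluxDB, anything else) only means writing a new ResultPublisher; none of the tests change.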

Installing a dashboard solution on a Linux box seems to be the only way to get our own custom dashboard for reports.
Since installing software on Linux in enterprise setups is very time-consuming, I wanted the shortest/fastest possible solution with minimal dependencies.

Also, I needed a more real-time solution that takes data via an API/JSON rather than from a log/file, because setting up file-based collection on the CI agent would also be difficult.

Below is my experience of evaluating different solutions for implementing a real-time Dashboard for automated test results, where I compared the options and tried to choose what best suits our needs.

I have evaluated the following:
  • ELK
  • ReportPortal
  • Allure
  • Klov
  • Grafana + InfluxDB

ReportPortal
  • ReportPortal seems to be open source but has certain prerequisites that need to be installed before we can use it - RabbitMQ, PostgreSQL and Elasticsearch; after this we still need to install ReportPortal itself and all the plugins that we need.
  • If we have to end up installing ELK anyway, then I don't think there is any need to bother with ReportPortal; we can get the dashboard done with just ELK - no need for one more layer.
  • Also, since this could only be installed on Linux, and I did not have a Linux box of my own to play with, it was difficult to evaluate and use.

Allure
  • The problem I faced with Allure was that even though it generates report files for each run, those files can only be read after loading and processing in Allure itself, not directly like libraries that generate an easy-to-use HTML result.
  • This defeats the purpose of having a real-time report and a dashboard. It feels like a veiled attempt to create stickiness with the product, and I do not want to be hamstrung and covertly dependent on any product.
  • So clearly, Allure was not the product to use.

Klov
  • This comes from the makers of ExtentReports, and I really like ExtentReports: it's open source, built from a tester's point of view, and easy to implement and use - not to mention how beautiful the reports look.
  • But I did not have the option to pay for Klov as it's not open-source, and at some point I felt I was getting too dependent on a single solution, which could cause difficulties in the future if the product changes in ways we can't handle.
  • So I thought of using other products, though I still continue to use ExtentReports.
  • Also, paying for a Dashboard solution (e.g., Klov) may not be an option for many teams, especially just for reporting purposes, when your entire tool-set is open source.

In the end, I was left with just ELK and Grafana.

ELK
  • This is a great solution and is being increasingly adopted by many teams. It is highly scalable, has a ton of features and implementation options, and is becoming something of a standard now.
  • Though ELK might already be set up in many organizations, if it's not, getting it set up can be difficult - at least it was for me.
  • Installing and configuring all the different components can also prove time-consuming, which many QA teams cannot afford, because you do need to configure different listeners and appenders to capture and transmit all the info.
  • It's like installing many different pieces, each with their own config, and then trying to make them all work together.
  • Needless to say, it has a long learning curve too.

Grafana
  • Grafana is a multi-platform open source analytics dashboard and InfluxDB is an open-source time series database.
  • Grafana + InfluxDB does not need any middleware messaging hub/node as they communicate directly via API, so this is a simpler solution.
  • Grafana fires up fast and has many options to customize how you present your data on the dashboard. You install InfluxDB and then forget about it; there is no overhead. It's super simple to use and read.
  • Also, the installation for both Grafana and InfluxDB is pretty simple, without any major dependencies.
  • And both can be installed directly on Windows too, which really helped in evaluating the different dashboards we were planning to build.

A Jira-based dashboard is also possible and could be great too, as Jira has its own well-defined API, but I did not evaluate it this time - maybe later.

So, at this point, InfluxDB + Grafana looks like the simplest solution and the one best suited to our needs - though not trivial, as it does have some learning curve.

Refer to the sections below to get started with Grafana and InfluxDB.

How to use Grafana Dashboard

Grafana is a multi-platform open source analytics dashboard that provides charts, graphs, and alerts for the web when connected to supported data sources. We can create custom monitoring dashboards using its interactive query builders.


How to install -
Download the v7.0.0 installer (grafana-7.0.0.windows-amd64.msi) and run it with default options.

How to run -
Open cmd, go to the Grafana install dir, and run grafana-server.exe under bin:
C:\Program Files\GrafanaLabs\grafana\bin>.\grafana-server.exe

By default, Grafana runs on port 3000. The default credentials are admin/admin.
Launch grafana via URL - http://localhost:3000/


Adding Influx data source in Grafana -
To use InfluxDB in Grafana, we need to establish a Data Source Connection.
Go to Configuration > Data Sources > Add Data Source > InfluxDB.

Once configured, check whether the Data Source works by clicking the Save & Test button. You should get a message like 'Data source is working'.

You can now create custom Dashboards by using its UI interface.

How to use InfluxDB

InfluxDB is an open-source time series database optimized for fast, highly available storage and retrieval of time series data. It has no external dependencies and provides an SQL-like language for CRUD operations.

It is a great option for storing huge sets of time-series data and supports lightning-fast retrieval.


How to install -

download v1.8.0

https://dl.influxdata.com/influxdb/releases/influxdb-1.8.0_windows_amd64.zip

Just unzip it to any dir - no other installation step involved.

dir - C:\influxdb

You may change the conf file to disable usage reporting to the InfluxData website.


How to run -

Open cmd and go to install dir of influxdb and run the influxd.exe like below

C:\influxdb>.\influxd.exe

By default, an InfluxDB instance runs on port 8086

This starts the InfluxDB server instance, not a database session.

To interact with a database, run influx.exe, the CLI tool for working with InfluxDB databases.
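Once influxd is running, tests (or the reporting framework) can push results straight to InfluxDB 1.x over its HTTP /write endpoint using the line protocol. Below is a minimal sketch: the database name (automation), the measurement (test_result) and the tag/field names are my own choices, and the database must already exist (CREATE DATABASE automation via the influx CLI):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class InfluxWriter {

    // Builds a point in InfluxDB line protocol: measurement,tags fields.
    // "test_result", "suite", "test" etc. are our own naming choices;
    // the trailing "i" marks duration_ms as an integer field.
    static String toLineProtocol(String suite, String testName, String status, long durationMs) {
        return "test_result,suite=" + suite + ",test=" + testName
                + " status=\"" + status + "\",duration_ms=" + durationMs + "i";
    }

    // POSTs the point to a local InfluxDB 1.x instance; assumes the
    // database "automation" was created beforehand.
    static void write(String line) throws Exception {
        URL url = new URL("http://localhost:8086/write?db=automation");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(line.getBytes(StandardCharsets.UTF_8));
        }
        // InfluxDB returns 204 No Content on a successful write
        if (conn.getResponseCode() != 204) {
            throw new RuntimeException("write failed: " + conn.getResponseCode());
        }
    }

    public static void main(String[] args) throws Exception {
        write(toLineProtocol("smoke", "LoginTest", "PASS", 1250));
    }
}
```

A Grafana panel can then query this measurement directly from the InfluxDB data source, which is what makes the dashboard real-time: every test run writes its point the moment it finishes.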


13.2.21

Working with TeamCity CI Server

TeamCity is a CI server which allows us to integrate, build and test in parallel across different platforms and environments.

It has a lot of features and offers great flexibility in terms of defining multiple custom jobs to run different kinds of builds, but it's not open-source.

Jenkins is open-source and is much more prevalent in the industry and has a very rich plugin-ecosystem which helps extend its capabilities.

Below are some points that are usually learnt the hard way and are not defined in any manual...

  • Parameter values specified in the build settings anywhere are case sensitive. JDK!=jdk
  • You can run multiple builds on TeamCity Agents for both Windows and Linux platforms, but each agent runs only one build at a time; if you trigger multiple builds on the same agent, they get queued and run sequentially rather than in parallel.
    • Though you can always run builds in parallel (simultaneously) on different agents.
  • All paths that need to be specified in the build settings should be relative to the 'Build Checkout Directory', which can be found via %teamcity.build.checkoutDir%. 
    • All the project code-files get checked out from VCS/Git in the Build Checkout Dir, before actually starting the build execution.
    • So if the checkout dir on a windows teamcity agent is actually C:\users\lokiagt\BuildAgent\work\989sds9898h989s\ then the complete path to the source of the project would be 'C:\users\lokiagt\BuildAgent\work\989sds9898h989s\cloud-bdd\'
    • This is also the reason we should give all paths for reports/data/config relative to the Project Dir so that it can be easily accessed from CI boxes regardless of where it gets checked out to.
    • Be sure to use \ for paths on Windows agents and / on Linux agents.
  • Use the publish artifacts feature in general settings for the build to publish any file/result/csv to be available after the build is complete. For example if the HTML results get generated in the folder cloud-bdd/reports/<build-name><build-no>, then we can direct teamcity to publish all artifacts in that directory by specifying its path like:
    • cloud-bdd\reports\%system.teamcity.buildConfName%%system.build.number% => AutoResult
    • AutoResult is the folder name which would be visible under the 'Artifacts' section of the Build and Global Build homepage under the 'blue-box-icon'.
  • Properties used to filter agents for a run:
    • teamcity.agent.jvm.os.name > Windows/Linux
    • teamcity.agent.name > hostname of the agent
    • env.JAVA_HOME > jdk/jre
    • There are a lot of properties that can be used for example env.TEAMCITY_BUILDCONF_NAME returns the name of the build and env.BUILD_NUMBER returns the dynamic build number associated with that build.
    • These can be read via System.getenv("TEAMCITY_BUILDCONF_NAME") and System.getenv("BUILD_NUMBER"). We use the 'getenv' method as these are Environment Parameters for the Agent/Build; the env. prefix is TeamCity's own notation, and the variable actually exported to the build process drops it.

  • Installing the TeamCity Windows Agent -
    • The server URL should not end with /; it should be just hostname.domain.net. Also, giving just the hostname is enough; no need to add the domain if it's on the same network.
    • We can use \ [for windows] while specifying the absolute path in the agent.bat file. We may use / [for linux] for relative paths.
    • The teamcity agent comes bundled with its own JRE so we should use that and not the system JDK/JRE because the local ones are not used in the agent.bat files. Hence, no need to add TEAMCITY_JRE as an Environment Parameter.
    • The parameter names given in the wrapper.conf file of the agent are nothing but the -D flags, similar to jvm/mvn coordinates.
    • Flags for the jvm command should be specified like: jvm -DpathName=<path>
    • Certificate errors are thrown when the JRE versions differ between the agent client and the agent server (e.g., jre8.121 vs jre8.202). Online help suggests that the path to the cacerts.jks file should be on your Path variable and that you should update the cacerts file there, but that is actually not needed: when you add JAVA_HOME to your path, the cacerts file is effectively covered too, as it lives under the security folder at %JAVA_HOME%/jre/lib/security.
    • To check if the certificate is present and installed in the correct location use this command:
      • keytool -list -keystore %JAVA_HOME%/jre/lib/security/cacerts
    • Do not install the teamcity agent on a client having QTP installed because the Environment Parameters of QTP would wreak havoc and not let it run at all. This would happen even if you removed all Env Parameters related to QTP, which is quite strange.
    • Even if you are not able to run the agent as a windows service, you can run the agent.bat file which does the same thing.
    • You can start and stop the agent service via agent.bat file via these option flags, given from the C:\BuildAgent\bin folder
        .\agent.bat start and .\agent.bat stop
    • Even though the default installation is supposed to be error-free, there can be a lot of issues and errors in the agent.bat file that have to be fixed before you can actually get it to run - like providing the absolute path of the Log4j XML file and the path of the cert file. Most of the time relative paths will not work, so we have to give absolute paths in the jvm coordinates.
    • Sample command to be run from the C:\BuildAgent\ dir with all paths relative to the BuildAgent\ dir:
      • C:\BuildAgent\jre\bin\java.exe -ea -Xmx384m -Djava.security.debug=all -Dlog4j.configuration=file:..\conf\teamcity-agent-conf.xml -Dteamcity_logs=..\logs -Djavax.net.ssl.truststore=..\..\BuildAgentConf\cacerts.jks -Djavax.net.ssl.truststorePassword=changeit -classpath C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Python37-32\Scripts\;<all other classpaths> -file ..\conf\buildAgent.properties -launcher.version=61544
    • Always use the classic CMD window and never the PowerShell window, as PowerShell does not work well with these relative paths.
    • You can view the logs for the agent starter, wrapper and agent connection, under the different log files under C:\BuildAgent\logs dir
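The environment parameter lookups mentioned above (TEAMCITY_BUILDCONF_NAME, BUILD_NUMBER) can be sketched in plain Java, with a fallback so the same code also runs outside CI - the fallback values below are arbitrary:

```java
public class BuildInfo {

    // Reads a TeamCity-provided environment variable, falling back to a
    // default so the same code also works on a dev box outside CI.
    static String envOrDefault(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String buildName = envOrDefault("TEAMCITY_BUILDCONF_NAME", "local-run");
        String buildNumber = envOrDefault("BUILD_NUMBER", "0");
        // Handy for stamping report folder names like <build-name><build-no>
        System.out.println(buildName + " #" + buildNumber);
    }
}
```

This is also how the report path convention mentioned earlier (reports/&lt;build-name&gt;&lt;build-no&gt;) can be derived inside the framework rather than hard-coded per job.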

10.8.19

Selenium WebDriver Type Hierarchy

Ever wondered how WebDriver is actually implemented, why we use ChromeDriver but call it WebDriver, or what RemoteWebDriver is? Let's find out.

  • WebDriver is an Interface.
  • JavascriptExecutor is an Interface.
  • RemoteWebDriver is the parent Class that implements the WebDriver and JavascriptExecutor interfaces. 
  • ChromeDriver, FirefoxDriver and the other browser drivers are the child Classes that extend the parent RemoteWebDriver class.

An Interface by definition does not contain implementation details of its methods, just the empty method declarations. It's the responsibility of the implementing class to 'implement' those methods by adding the details of what each method does.
Since WebDriver and JavascriptExecutor are Interfaces, they only have abstract (empty) methods; the 'fully-implemented' class RemoteWebDriver actually provides the definitions of the methods in these 2 interfaces - all the abstract methods in the WebDriver and JavascriptExecutor interfaces are implemented in the RemoteWebDriver class.
Browser specific drivers like ChromeDriver and FirefoxDriver then go and extend this RemoteWebDriver class to add more methods of their own; or have their own implementations of the same methods.

But why this hierarchy?  The actual developers of Selenium don't know how all the different browsers work internally. So they just declared the methods that they thought were important to work with Selenium and left the actual implementation part of these methods to the developers of these browsers.
The real problem is that browsers are complicated software, and not everything is open-source/visible to external developers, so external developers cannot customize them.
For instance, the actual implementation of the 'Click' method for WebDriver could be different for each of Chrome and Firefox, hence, they have their own driver versions for the same (which is why we don't use Firefox driver on Chrome).
Also, in a way this puts the onus on the browser-companies to provide the implementation of their drivers to stay relevant and be widely adopted.

So can we do this?  WebDriver driver = new WebDriver();
We get a compile time error: Cannot instantiate the type WebDriver - why? Because we cannot instantiate an interface, i.e., we cannot create an object of an interface (WebDriver) and invoke its methods.
Since WebDriver is an Interface and not a Class, and all its methods are just empty shells (abstract), we really could not do anything anyway with such an object - hence Java does not allow instantiating an interface at all.
Thus, if we want to perform any action we have to invoke the implementing class of that interface.


So should we do this?  WebDriver driver = new RemoteWebDriver();
We get a compile time error: The constructor RemoteWebDriver() is not visible - which means the no-argument constructor cannot be called directly from our code (a constructor is also a method).
Though technically we can have this code -
WebDriver driver = new RemoteWebDriver(capabilities);
Or
WebDriver driver = new RemoteWebDriver(URL, DesiredCapabilities.chrome());
Or
WebDriver driver = new RemoteWebDriver(commandExecutor, capabilities);
We don't normally use the above because RemoteWebDriver is intended for working with Selenium Grid and needs the Selenium server, whereas if we use ChromeDriver() we invoke the local installation of the Chrome browser on our machine.

What about this?  ChromeDriver driver = new ChromeDriver();
Since ChromeDriver is a class, it implements all the methods of the WebDriver interface. But the 'driver' instance that gets created will only be able to use the methods implemented by ChromeDriver and supported only by the chrome browser; and as such we would be restricted to run our scripts only using the chrome browser.
To work with other browsers we will have to create individual objects via - FirefoxDriver driver = new FirefoxDriver();
And we will have to keep switching at runtime.

This is the reason we use this:  WebDriver driver = new ChromeDriver();
So that we can work with different browsers without having to update our code for every browser specific driver. And this would make our code more extensible by providing us the flexibility to work with any number of browsers (drivers).
Also, this is better design, as a change in driver initialization for one browser will not affect the others, and we can have different configurations for different browsers.
Here, WebDriver is the interface, ChromeDriver() is the Constructor, new is the keyword and [new ChromeDriver()] is the object referenced by the 'driver' variable.
'Java' specific reason - WebDriver is the super interface for all browser classes like FirefoxDriver, ChromeDriver etc. So a WebDriver reference can hold an object of any driver class. This is also called Upcasting - when a reference of a super-class [parent] holds the object of its sub-class [child].

But can we do vice versa? - ChromeDriver driver = new WebDriver();
We get a compile time error: Cannot convert from WebDriver to ChromeDriver.

But then why do we have to do this? - JavascriptExecutor js = (JavascriptExecutor) driver;
WebDriver and JavascriptExecutor are two different interfaces, and they do not have any methods in common. The 2 methods of the JSE (executeScript and executeAsyncScript) are not present in the WebDriver interface.
But all the methods of the WebDriver and JSE interfaces have been implemented by the browser drivers.
Because we had up-cast the 'driver' object to WebDriver and WebDriver does not have the methods of JSE interface, we have to down-cast.
We wouldn't have had to down-cast had we just used [ChromeDriver driver = new ChromeDriver();]. In that case there is no need to downcast to JavascriptExecutor, because 'driver' has visibility of all JSE methods: the browser driver class ChromeDriver extends RemoteWebDriver, and so has indirect access to all JSE methods via RemoteWebDriver.

In fact, we can even cast it to ChromeDriver and not have to use JavascriptExecutor, like below -
JavascriptExecutor js = (ChromeDriver) driver; // This works too!!
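The casting rules above can be reproduced with a toy hierarchy that mirrors the Selenium one - all type names below are made up, chosen to map one-to-one onto the real types:

```java
// Toy versions of the Selenium types, just to demonstrate the casting rules
interface WebDriverLike {          // stands in for WebDriver
    String get(String url);
}

interface JsExecutorLike {         // stands in for JavascriptExecutor
    String executeScript(String script);
}

// Stands in for RemoteWebDriver: the fully implemented class for both interfaces
class RemoteDriverLike implements WebDriverLike, JsExecutorLike {
    public String get(String url) { return "opened " + url; }
    public String executeScript(String script) { return "ran " + script; }
}

// Stands in for ChromeDriver: extends the fully implemented class
class ChromeDriverLike extends RemoteDriverLike { }

public class HierarchyDemo {
    public static void main(String[] args) {
        // Upcast: a parent-interface reference holds the child object
        WebDriverLike driver = new ChromeDriverLike();
        System.out.println(driver.get("http://example.com"));

        // driver.executeScript(...) would NOT compile here, because the
        // WebDriverLike reference only exposes WebDriverLike's methods...
        JsExecutorLike js = (JsExecutorLike) driver;   // ...so we downcast
        System.out.println(js.executeScript("return 1;"));

        // Casting to the concrete child type works too, as in the Selenium example
        JsExecutorLike js2 = (ChromeDriverLike) driver;
        System.out.println(js2.executeScript("return 2;"));
    }
}
```

Note that both casts succeed at runtime for the same reason they do in Selenium: the object really is a ChromeDriverLike, which inherits both interfaces via its fully implemented parent.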


Additional notes -

  • SearchContext is the top-most interface, which has only two methods, named findElement() and findElements(). These methods are abstract, as SearchContext is an interface. This is the reason we do not up-cast to SearchContext: there is no point in having just 2 methods to work with and having to downcast every time we want to use any other method.
  • WebDriver is also an interface which extends SearchContext, but since WebDriver has the maximum number of methods, it is the key interface against which tests should be written. There are many implementing classes for the WebDriver interface, as listed below:
    • AndroidDriver
    • AndroidWebDriver
    • ChromeDriver
    • FirefoxDriver
    • HtmlUnitDriver
    • InternetExplorerDriver
    • IPhoneDriver
    • IPhoneSimulatorDriver
    • SafariDriver
  • WebDriver has many abstract methods like get(String url), close(), quit(), getWindowHandle() etc. WebDriver also has nested interfaces named Window, Navigation, Timeouts etc. that are used to perform specific actions like getPosition(), back(), forward() etc.
  • RemoteWebDriver is the fully implemented class for WebDriver, JavascriptExecutor and TakesScreenshot interfaces. (Fully implemented class means it defines the body for all inherited abstract methods.)
  • Then we have browser specific driver classes like ChromeDriver(), EdgeDriver(), FirefoxDriver() etc which extend RemoteWebDriver.
  • RemoteWebDriver implements JavascriptExecutor and provides definitions for both methods of the JSE. Since all browser-specific driver classes like ChromeDriver etc. extend RemoteWebDriver, we can execute JavaScript commands via the JSE methods on all these browsers.


3.8.19

Difference between WebDriver and JavaScript Clicks

We can click on a webelement in 2 ways:
Using WebDriver click – element.click()
Using JavaScript click – ((JavascriptExecutor)driver).executeScript("arguments[0].click()", element); 

When we click on a webelement using WebDriver, it checks for 2 conditions before clicking - The element must be visible; and it must have a height and width greater than 0.
If preconditions are not satisfied, the click is not performed and we get an exception.

But the JavaScript click method does not check these preconditions before clicking. The HTMLElement.click() method simulates a mouse click on an element. When click() is used with supported elements (such as an <input>), it fires the element’s click event.
So, JavaScript can click on a webelement which may not be actually visible to the user or be clickable by WebDriver API. 

But we know - "Selenium-WebDriver makes direct calls to the browser using each browser's native support for automation." Meaning...WebDriver tries to mimic actual user behavior when working on browsers.

From Selenium 3 onwards, the WebDriver APIs are designed to simulate actions the way a real user would perform them on the GUI via the browser, and not to use wrapped JS calls to execute commands on the browser, as happened with Selenium RC.

All browsers now have their own drivers which implement the WebDriver API and Selenium communicates with these drivers via HTTP and these drivers communicate natively with the browser. So we can say that the ChromeDriver performs actions similar to a user using the chrome browser.

JavaScript bypasses this and goes to interact with the DOM of the page directly. This is not how a real user would use a browser.
This is also similar to the problem we had with v1 of Selenium where it used JavaScript to directly communicate with the browser [because SeleniumRC was just a form of wrapped JavaScript calls]

Also, sometimes the use of JS methods may not trigger events which would otherwise have been triggered had we used WebDriver. For example, the mousedown/mouseup/hover events that normally accompany a real click are not fired when a button is clicked via JS's element.click().

Hence, in order to simulate actual user behavior we should go for WebDriver, and use JS sparingly - only when the direct WebDriver methods don't work.


This was precisely the problem with SeleniumRC.

SeleniumRC had 2 components - core and server. The core is basically a bunch of JavaScript code that is injected into the browser to control/automate its behavior. Using JavaScript to control browsers caused issues, especially with IE, which has its own implementation of and behavior with JavaScript. In a way, Selenium sends Selenese commands over to the Se Core via JS injection, which in turn controls the browser.

Also, there was a problem with the same-origin policy of browsers, and to overcome this, the server component was used so that all the JavaScript code injection was routed via the server, making it appear to originate from the same host. This caused problems when there were popups, file uploads, etc., and it was relatively slow to run.
[To avoid the Same Origin Policy, the proxy injection method is used: in proxy injection mode the Selenium Server acts as a client-configured HTTP proxy, which sits between the browser and the application under test and masks the AUT under a fictional URL.]

Also, there were many overlapping and confusing methods implemented which made it difficult to use.

WebDriver is a cleaner and object oriented implementation, and it controls the browsers using their native methods for browser automation, and does not rely on JavaScript injection. It works at the OS/Browser level, and does not have to use JS to control the browser.

Also SeleniumRC did not support headless testing; there was no HtmlUnitDriver.

2.8.19

NgWebDriver - Alternative to Protractor for AngularJS

AngularJS is an open-source framework for building web applications using JavaScript. Over the years it has become quite popular, and many new web apps built these days have elements of AngularJS.

What this means under the hood is that AngularJS could result in some pretty complex DOM for the web pages, with its own custom tags like ng-app, ng-model, ng-bind, ng-repeat etc.

To help with testing, a new set of purpose-built tools targeting AngularJS have cropped up, and Protractor is one of the leaders in this field, and has gained good traction in the QA community.
Protractor is an end-to-end framework built using the JavaScript bindings of Selenium and its own locator methods to work with the AngularJS tags.

But the number of teams using Java + Selenium far out-numbers those using JavaScript + Selenium, and as such it's difficult for them to incorporate Protractor into their own Java frameworks.
This is especially true for teams that already have a mature framework supporting multiple technologies, not just Web. And as is usually the case, UI-based testing is not the only goal of these teams. Hence, it makes little sense for them to adopt Protractor just for this.
AngularJS is just one of the many ways we build apps - a small feature in the bigger scope of things.
And to test it, what we really need is a library that can be incorporated easily into the existing framework - not a whole new tool altogether.

NgWebDriver is one such java library.

It has many useful methods to work with AngularJS locators, just as in Protractor (in fact, it internally uses the same client-side scripts as Protractor to work with AngularJS).

With NgWebDriver we don't need to depend on Protractor at all, because we can just merge this into our existing framework and use it as and when we need to.
This removes the need to change our framework just for AngularJS.

Maven dependency to add NgWebDriver library -

<dependency>
    <groupId>com.paulhammant</groupId>
    <artifactId>ngwebdriver</artifactId>
    <version>1.1.4</version>
</dependency>

The sample code below shows how to use it.

package main.java.com.automation.keyword.app;

import org.apache.log4j.Logger;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.ScriptTimeoutException;
import org.openqa.selenium.WebDriver;
import com.paulhammant.ngwebdriver.ByAngular;
import com.paulhammant.ngwebdriver.NgWebDriver;
import main.java.com.automation.keyword.driver.Driver;
import main.java.com.automation.keyword.driver.Utils;

public class NgWebDriverPoC {

    public Utils utils = new Utils();

    public Logger log = Logger.getLogger(NgWebDriverPoC.class.getName());

    public void angularWait(WebDriver driver) {

        /*
         * Need to wrap the driver in an NgWebDriver in order to use its wait methods
         */
        try {
            NgWebDriver ngWebDriver = new NgWebDriver((JavascriptExecutor) driver);
            ngWebDriver.waitForAngularRequestsToFinish();
            log.info("waiting for Angular Requests to finish");

        // waitForAngularRequestsToFinish throws ScriptTimeoutException at times;
        // it is better to catch it than have the script fail
        } catch (ScriptTimeoutException e) {
            log.info("ScriptTimeoutException while waiting for Angular Requests to finish");
        }
    }

    public void angularJSDemo() {

        String angularURL = "https://hello-angularjs.appspot.com/sorttablecolumn";

        utils.openBrowser(angularURL);

        WebDriver driver = Driver.getDriver();

        angularWait(driver);

        /*
         * With the NgWebDriver library on the classpath we can call ByAngular locators
         * directly; we don't need to downcast the driver object
         */
        driver.findElement(ByAngular.model("name")).sendKeys("ABC");
        driver.findElement(ByAngular.model("employees")).sendKeys("100");
        driver.findElement(ByAngular.model("headoffice")).sendKeys("Charlotte NC");
        driver.findElement(ByAngular.buttonText("Submit")).click();

        String hqCity = driver.findElement(ByAngular.repeater("company in companies").row(3).column("name")).getText();
        log.info("City - " + hqCity);
        hqCity = driver.findElement(ByAngular.repeater("company in companies").row(2).column("headoffice")).getText();
        log.info("City - " + hqCity);
    }
}



Testing APIs with RestAssured

These days there are many tools available to test REST-based APIs - some of them quite mature and feature-rich, like Citrus, SoapUI and Postman; but these are built only for API testing, and trying to use them for other general purposes is often difficult.

If you already have a mature framework and want to 'incrementally' test APIs also as part of your existing codebase for functional automation, and want to save time in building a new framework around new tools, then you can use RestAssured in your existing framework.

RestAssured is a very capable Java library for testing APIs, and it even supports a BDD-style (given/when/then) syntax.

Maven dependencies for Rest Assured -

<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>rest-assured</artifactId>
    <version>3.3.0</version>
</dependency>

<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>json-path</artifactId>
    <version>3.3.0</version>
</dependency>

<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>xml-path</artifactId>
    <version>3.3.0</version>
</dependency>

<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>json-schema-validator</artifactId>
    <version>3.3.0</version>
</dependency>


Sample code for working with RestAssured -

package main.java.com.automation.keyword.app;

import static io.restassured.RestAssured.*;
import static io.restassured.matcher.RestAssuredMatchers.*;
import static io.restassured.module.jsv.JsonSchemaValidator.*;
import static org.hamcrest.Matchers.*;
import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertTrue;
import org.apache.log4j.Logger;
import io.restassured.RestAssured;
import io.restassured.builder.RequestSpecBuilder;
import io.restassured.http.Header;
import io.restassured.http.Headers;
import io.restassured.path.json.JsonPath;
import io.restassured.response.Response;
import io.restassured.response.ValidatableResponse;
import io.restassured.specification.RequestSpecification;
import io.restassured.specification.ResponseSpecification;

public class RESTPoC {

private static Logger log = Logger.getLogger(RESTPoC.class.getName());

/*
* Variables for specification of different APIs though we may have only a single response specification
*/

private RequestSpecification raceReqSpec;

private RequestSpecification postmanReqSpec;

private ResponseSpecification respSpec;

private String url = "";

/*
* Function to set the config for all Req/Resp specifications
* But its use is not wired up correctly as of now
*/

public void restConfig() {

log.info("setting proxy...");

//  RestAssured.proxy("localhost", 8888);

log.info("configuring common requestSpecification...");

RequestSpecification requestSpecification = new RequestSpecBuilder().

//    addHeader("Content-Type", "application/json").
//    addHeader("Accept", "application/json").

build();

log.info("setting this as the specification for all REST requests...");

RestAssured.requestSpecification = requestSpecification;

}

public String getURL(String apiName) {

switch (apiName.toLowerCase()) {

case "google-books":
url = "https://www.googleapis.com/books/v1/volumes?q=isbn:0747532699";
break;
case "google-books-java":
url = "https://www.googleapis.com/books/v1/volumes?q=intitle:java";
break;
case "f1-api":
url = "http://ergast.com/api/f1/2018/circuits.json";
break;
case "postman-get":
url = "https://postman-echo.com/get";
break;
case "400":
url = "http://ergast.com/api/f1/2018/circuits1.json";
break;
case "404":
url = "http://ergast.com/api/f1/2018/circuits.json 1";
break;
default:
break;
}

log.info("Request URL: " + url);
return url;
}

public String setURL() {

   url = "https://www.googleapis.com/books/v1/volumes?q=isbn:0747532699";
//   url = "http://ergast.com/api/f1/2018/circuits.json";
//   url = "https://postman-echo.com/GET";

log.info("Request URL: " + url);
return url;

}

/*
* This function does not take a Req Specification to get the Response from the resourceURL
* It has BDD style coding
* It also logs all requests and response steps
*/

public void getResponseBDD() {

given().log().all().

when().get(url).

then().log().all().statusCode(200);

}

/*
* This function does not take a Req Specification to get the Response from the resourceURL
* Nor does it use BDD-style coding
* @param resourceURL
*/

public Response getResponseDirectlyNoReqSpec(String resourceURL) {

Response rsp = null;

RequestSpecification rq = RestAssured.given();

rsp = rq.get(resourceURL);

log.info("------------------------------------------------------------------");

log.info("URL: " + resourceURL + " has Response: " + "\n" + rsp.asString());

log.info("------------------------------------------------------------------");

return rsp;
}

public Boolean chkInvalidResponse() {

Response rsp;

Boolean result;

//  rsp = getResponseDirectlyNoReqSpec(getURL("f1-api") + " 12312");

rsp = getResponseDirectlyNoReqSpec(getURL("404"));

if (getStatusCode(rsp) == 200) {

log.info("Valid response received");

result = true;

} else {

getStatusLine(rsp);
getAllHeaders(rsp);
result = false;
}
return result;
}

public void sampleJsonPathExp() {

Response rsp;

rsp = getResponseDirectlyNoReqSpec(getURL("google-books-java"));
JsonPath jp = rsp.jsonPath();

// Use the JsonPath object to extract a value from the response
log.info("Total items: " + jp.get("totalItems"));
}

public void chkResponseGoogleBooksAPI() {

Response rsp = getResponseDirectlyNoReqSpec(getURL("google-books"));

if (getStatusCode(rsp) == 200) {

getAllHeaders(rsp);

} else {
getAllHeaders(rsp);
}
}

/*
* Function to check response of F1 API via JsonPath
*/
public void chkResponseF1API() {

Response rsp = getResponseDirectlyNoReqSpec(getURL("f1-api"));

String contentType = "";

//   Proceed only if response is 200

if (getStatusCode(rsp) == 200) {

getStatusLine(rsp);

getAllHeaders(rsp);

contentType = getHeaderValue(rsp, "Content-type");

getHeaderValue(rsp, "Server");

// Proceed only if response type is JSON

if (contentType.toLowerCase().contains("json")) {

JsonPath jp = rsp.jsonPath();

log.info("Series Name: " + jp.get("MRData.series").toString().toUpperCase());

log.info("Year: " + jp.get("MRData.CircuitTable.season"));

log.info("Circuit Name: " + jp.get("MRData.CircuitTable.Circuits[0].circuitName"));

log.info("Circuit Country: " + jp.get("MRData.CircuitTable.Circuits[0].Location.country"));

log.info("Total Circuits: " + jp.get("MRData.total"));

log.info("Getting name and country of each circuit -------------------------------------");

for (int i = 0; i < Integer.parseInt(jp.get("MRData.total")); i++) {

log.info("Circuit Name: " + jp.get("MRData.CircuitTable.Circuits[" + i + "].circuitName"));

log.info("Circuit Country: " + jp.get("MRData.CircuitTable.Circuits[" + i + "].Location.country"));

}
}
}

// TestNG Assert library

assertTrue(contentType.toLowerCase().contains("json"));

//  assertEquals(contentType, "application/json");
}

// Status Code is of type int
public int getStatusCode(Response response) {

int statusCode;
statusCode = response.getStatusCode();
log.info("Status Code: " + statusCode);
return statusCode;
}

// Status msg is of type string
public String getStatusLine(Response response) {

String statusLine;

statusLine = response.getStatusLine() + "";

log.info("Status Msg: " + statusLine);

return statusLine;
}

public String getHeaderValue(Response response, String headerName) {

String headerValue = "";

headerValue = response.getHeader(headerName) + "";

log.info("Header name: " + headerName + " - value: " + headerValue);

return headerValue;
}

public void getAllHeaders(Response response) {

log.info("Getting value of all Headers via Headers object ---------------------------------");

Headers allHeaders = response.getHeaders();

for (Header header : allHeaders) {

log.info("Header name: " + header.getName() + " - value: " + header.getValue());

}
}

/*
* Keyword sort of methods Invoking requests without first calling the config Req/Resp is successful when common
* spec for response is not used
*/
public void invokeRestNoConfig() {
getURL("f1-api");
getResponseBDD();
}

public static void main(String[] args) {

RESTPoC rd = new RESTPoC();
rd.chkResponseF1API();
rd.chkResponseGoogleBooksAPI();
rd.chkInvalidResponse();
rd.sampleJsonPathExp();

}

}

Use JavaScript with Selenium WebDriver

WebDriver is very powerful and supports lots of methods and features. But there are some cases which are best handled via JavaScriptExecutor.

JS extends the capabilities of the WebDriver and can be helpful in the below cases.

  • Submit page instead of Click - Sometimes a button on a webpage does not have any click handler; instead it is treated as a form that has to be submitted. In such cases, even though there is a button you can theoretically click, the action is registered not as a click but as a submit. Hence, the click() method of the WebDriver may not always work, and we may have to submit the page/form via JS.
  • Handling nested web elements - Usual WebDriver commands like "Click" may not always work on toggles, as WebDriver may find that the object is not clickable.
  • The complete area of some web elements like buttons and checkboxes is not clickable; you need to click on a specific part of the element to perform the action. Selenium might sometimes fail here too, and in this case JS is very useful.
  • Handling different types of Calendars.
  • Scrolling can be a big problem in Selenium. Using JS, we can scroll by pixels or to a specific web element.
  • Handling hidden elements - JS can get text or attribute values from hidden web elements, which could be difficult via direct methods.
  • Drag and drop issues can be handled via JS.
  • Object Location - JS can also be used to locate web elements using the below methods
    • getElementById
    • getElementsByClassName
    • getElementsByName
    • getElementsByTagName

 /*
  * Function to execute Synchronous script via JS
  * The return is of the type superclass Object
  */
 public Object executeSyncJS(String jsCode) {

   /*
   * Downcasting driver to a JavascriptExecutor object because we had upcast our driver object to WebDriver
   * We would not need to downcast had we used [ChromeDriver driver = new ChromeDriver();] instead of [WebDriver driver = new ChromeDriver();]
   */

  JavascriptExecutor jsExe = (JavascriptExecutor) getDriver();
 

  /*
   * The return type of the JS Response depends on the result of the command executed
   * It could be anything - String, boolean, Map etc
   * Since Object class is the parent class of all objects, we are setting the type of response to 'Object'
   */

  Object jsResponse = jsExe.executeScript(jsCode) ;

  return jsResponse ;
 }

 

 /*
  * Function to click a WebElement via JavaScript
  */
 public Object jsClick(WebElement elementToClick) {

  JavascriptExecutor jsExe = (JavascriptExecutor) getDriver();

  Object jsResponse = jsExe.executeScript("arguments[0].click();" , elementToClick );

   return jsResponse;
 }


  public void jsDemo() {
  

  //Scroll vertically via JS 
  executeSyncJS("window.scroll(0,1000)");

  Utils.sleep(1000);

  executeSyncJS("window.scrollTo(0,2000)");

  Utils.sleep(1000);

  executeSyncJS("window.scrollBy(0,1000)");

   //Return the result of a script execution

  Object result = executeSyncJS("return 1===2");

  log.info("Result of JS: " + result);
 
 }

 

 /*
  * Function to find an element by ID via JavaScript
  * To return a WebElement we need to downcast the returned Object
  */
 public WebElement jsGetElementById() {

  WebElement webElement = null ;

//  getDriver().get("https://www.cleartrip.com/");

  JavascriptExecutor jsExe = (JavascriptExecutor) getDriver();

  // Downcast the Object returned by executeScript to a WebElement
  webElement = (WebElement) jsExe.executeScript("return document.getElementById('FromTag')") ;

  return webElement;
 }




13.4.17

Evaluation of Automation Tools

Sometimes it feels so good to just pass judgments! In fact, I wanted to title this post 'Judgement on Tools', but people get real sentimental these days.

So, with all due respect to whomsoever it may concern... let's see what some of the open-source tools prevalent in the market are good at, or, mostly, lack.

  1. External Libraries
    1. Web Scrapers - Can these help read the HTML page, and give the data you want, in key:value pairs?
      1. JSoup
        1. This is the only good option for Java, which is just a jar
      2. BeautifulSoup
        1. This is only for Python, but very capable
    2. XML Parsers
      1. Can these help read the XML, and give the tag:data you want, in key:value pair?
  2. White Framework
    1. Supposedly automates desktop apps (Win32, WinForms, WPF etc.); it is written in C#/.NET, not Java
    2. http://codoid.com/white-framework-cheat-sheet/
  3. YML Files for Test Data
  4. Protractor for AngularJS 
    1. Works with Se as well, and may have better Sync options. 
    2. Also, it is useful in building object locators for those custom tags in AngularJS
  5. Capybara 
    1. this works only with ruby
  6. Zeus
  7. Liverload
  8. Zephyr 
    1. Looks like this only provides flashy QA metrics, no other real use
  9. TestingWhiz Community edition
  10. Guice with Se
  11. VelocityDep
  12. CodedUI
    1. Comes bundled with VS from MS, needs C# and OOPs, can work on Web and Windows forms
    2. May have a better Obj ID process, but Obj Management is difficult as there is no Object Repository (OR)
  13. Freemarker library
  14. Wrappers - Discarded [DO NOT WASTE TIME ON WRAPPERS. They do not add anything new to the core Se code base, just propose to make it easier. If the wrapper goes out of business, so do you. Instead, focus on learning new generic and widely used Libraries [jars] that can add more features and make your life easier.]
    1. Robot framework
      1. This is making a lot of noise these days, may be some good, but not keen on using this
    2. SeLion
      1. Enabling Test Automation in Java. 
      2. SeLion builds on top of TestNG and Selenium to provide a set of capabilities that get you up and running with WebDriver in a short time. 
      3. It can be used for testing web and mobile applications. 
      4. Seems to be Free, not sure
      5. http://paypal.github.io/SeLion/html/documentation.html#getting-started
    3. Tellurium Discarded
      1. Its NOT Free!
    4. Fluentlenium - Discarded
      1. A wrapper framework for Se
      2. Looks like it supports only CSS not XPath
      3. http://awesome-testing.blogspot.in/2016/01/introducing-fluentlenium-1.html
    5. Selenide Discarded
      1. This is one more wrapper, and not widely used, has very different commands from my current PO framework
    6. Bromine Discarded
      1. Bromine is an open source QA tool that uses Selenium RC as its testing engine. It is a test management tool for Se, and more so only for Se RC Tests, and aims to replace HP QC!
      2. Tests have to be written in java, and then uploaded to Bromine, which provides just the execution env for all the uploaded tests
      3. Bromine is a web application, and needs to be hosted on a separate server - yeah, good luck getting a new server just for this!
      4. http://www.methodsandtools.com/tools/tools.php?bromine

Eclipse shortcuts

If not in life, at least in eclipse, there are some shortcuts!

  • Ctrl M - maximize
  • Ctrl Shift B - breakpoint
  • Ctrl Shift / - collapse all
  • Ctrl Shift * - expand all
  • Ctrl Shift F - auto-format
    • Increase the max line width to 1000 to have all the code in a single line
    • Go to Window > Preferences > Java > Code Style > Formatter > Edit
  • String formatting
    • Go to Window > Preferences > Java > Editor > Typing
      • Escape text when pasting into string
  • Ctrl / - Toggle Line Comment
  • Alt Shift R - Rename Variable
  • Ctrl 1 - Show error resolution
  • Alt Shift Up Arrow - Select entire string
  • Ctrl Shift P - Jump to Opening/Closing Brace
  • Ctrl K - jump to the next instance
  • Alt Shift Up arrow - Select entire string in quotes
  • Go to Source/declaration - F3
  • Enable auto activation of content assist:
    • Go to Window > Preferences > Java > Editor > Content Assist > Auto activation triggers for Java
    • Add the string .abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_
    • This would have to be done individually for editors of each language


Chrome shortcuts -

  • Open chrome browser without authentication/security warnings
    • chrome --ignore-certificate-errors &> /dev/null &

Troubleshooting hacks, Jugaad


1.              Guice Provision Error -
Cause - Happens when Surefire plugin is initiated on Maven 3.0.5 (which is too old now). This usually happens on Jenkins/TC when the default settings for Maven are used.
Resolution - Use latest version of Maven and specify the same in Jenkins' Maven Settings too

2.              SurefireBooterException -          
To resolve, add this config in the POM, to set useSystemClassloader to false:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <useSystemClassLoader>false</useSystemClassLoader>
    </configuration>
</plugin>

3.              Run Maven commands without changing dir –
We don't need to 'cd' to the directory containing the pom every time we want to run a mvn command; we can fire the mvn command from anywhere as long as we give the path to the pom like below:
Syntax: mvn -f <fullpath-to-pom> <goals> -D<params>
Sample: mvn -f C:/Automation/keyword/pom.xml test -Dthread1=Test1
It would be good to not have any spaces in the path so as to avoid escape chars.
Use / instead of \ in the path.

4.              Invoke CMD via VbScript -
If you want to invoke the CMD utility automatically with certain parameters then use the below snippet:
Set oShell = CreateObject("WScript.Shell")
cmndToRun = "mvn -f C:/Automation/keyword/pom.xml test -Dthread1=Test1"
oShell.Run "cmd.exe /k " & cmndToRun
To keep the CMD window open use /k after cmd.exe, or use /c to close it.

5.              StackOverflowError -
Happens due to infinite recursion, for example when a method invokes itself during its execution, or when one class object is instantiated under another class recursively. We will not get any compile-time Errors, but at runtime we will get this Error.
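A minimal, self-contained illustration of this (the class name is just for illustration) - the code compiles cleanly, and the Error only appears at runtime:

```java
// Demonstrates a StackOverflowError caused by unbounded recursion.
public class RecursionDemo {

    static int depth = 0;

    // Method that invokes itself with no terminating condition.
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // The JVM throws this Error when the call stack is exhausted
            System.out.println("StackOverflowError after " + depth + " calls");
        }
    }
}
```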

6.               Maven Compilation Error - package <name> does not exist
Resolution - Sometimes old or un-used packages are not found after updating versions of some libraries, which causes compilation Errors. Fastest solution is to delete these unwanted packages from Java files.

7.              Log4J package is not getting imported in the classes, hence, not able to initiate logging.
Steps to troubleshoot - A combination of these resolved this, after multiple iterations
·       Tried mvn dependency:resolve - It got successfully downloaded when resolving dependencies via mvn.
·       Delete local repo and re-download all dependencies from scratch - Even after deleting the old local repo, and rebuilding the same from scratch, it does not work
·       'Cleaned' the eclipse project - it resolved all Errors, but still Log4J is not getting imported.
·       Delete '.lastUpdated'  files from local repo
·       For Windows cd (change directory) to <user-directory>\.m2\repository and execute this command:
for /r %i in (*.lastUpdated) do del %i
·       Now update dependencies again.
·       You could also get Errors like: [Could not find artifact org.apache.logging.log4j:log4j:jar:2.6.1 in central (https://repo.maven.apache.org/maven2) -> [Help 1]]
·       Run mvn eclipse:eclipse - This could cause the following Error, visible only on eclipse, not in maven: [The project was not built due to "Resource already exists on disk: '/bddproject/target/classes/log4j.properties'.". Fix the problem, then try refreshing this project and building it since it may be inconsistent]. To resolve it, run mvn clean, as it will delete the target folder, where this Error was. Then go to eclipse and do Project > Clean. Now, all Errors should be resolved.

8.              Get 'failed to load jvm' Error when running eclipse
            Try restarting the machine, it gets resolved sometimes

9.              Even though the default story steps[in myStory] have been implemented, while running the MyStories class, they still come up as @Pending in the results.
Cause - The Pending annotation was already imported by default in the default MySteps class, which was marking all the steps as pending.
Resolution - Delete that import statement for Pending, and re run the test, it worked and all steps were Green/Run/Passed
Also, if now I add a Pending annotation but do not use it, it still runs the remaining steps, as it should run.
 
10.          Getting junk lines being reported in the console with the freemarker log -
Like - "Jul 05, 2016 1:17:56 AM freemarker.log._JDK14LoggerFactory$JDK14Logger info"
If Log4J works, then this is not needed

11.          Run via mvn is Erroring out -
mvn clean install - this command Errors out
Error - [Error] Failed to execute goal org.jbehave:jbehave-maven-plugin:4.0.5:run-stories-as-embeddables (embeddable-stories) on project bddproject: Execution embeddable-stories of goal org.jbehave:jbehave-maven-plugin:4.0.5:run-stories-as-embeddables failed: A required class was missing while executing org.jbehave:jbehave-maven-plugin:4.0.5:run-stories-as-embeddables: org/apache/log4j/Priority

12.          The simple-archetype comes with the default jbehave report template, which needs to be fixed

13.          Even though the M2E plugin is downloaded and installed, it does not show up in eclipse - there is nothing for maven

14.          Getting the following Error while running dependency:resolve command
Error - Failed to collect dependencies at org.jbehave:jbehave-core:jar:4.0.5 -> com.thoughtworks.xstream:xstream:jar:1.4.7:
Cause - Looks like the command to download the dependencies was getting timed out, as it worked well when the internet connection was strong
Resolution - Ran the dependency:resolve command again, and it was successful, without any Errors

15.          Getting the following Error when deleting and re-importing the bdd project
Error - Unbound classpath variable 'M2_REPO'
Cause - Eclipse is not able to locate the path of the local mvn repo
Resolution - The below steps work to solve this issue
·       Open the Eclipse Preferences [Window - Preferences]
·       Go to [Java - Build Path - Classpath Variables]
·       Click New and set its name as M2_REPO
·       Click Folder and select your Maven repository folder. For example, my repository folder is C:/Users/user/.m2/repository
·       Rebuild the Project.

16.            No need to run these commands
mvn compile
mvn clean install
mvn clean

17.          Error - archive for required library cannot be read in eclipse
·       This generally happens when you are importing projects that have external jars [added via the maven POM or via direct import]
·       The first thing to try is delete those external jars and their folders, and then re-import them
·       Then in Eclipse, go to Project > Clean Project [Ensure that Build Automatically is checked]
·       If this does not resolve, then see if the jars got corrupted during copy/import, then replace them with original/valid jars

18.          Avoid having multiple versions of the same jars in the projects - only have the required version and delete the rest.

19.          If you get Errors like 'Source not found' or 'Attach source', or a 'NoClassDefFoundError', it generally means some jar is missing from your build path; find that jar and add it to your build path

20.          Split method in Java has a bug!
·       When we use the split function, ideally, if we don't specify any limit, it should return all the tokens in the string; but it does not if the string ends with multiple delimiters and empty tokens.
·       For example, in a | delimited message ("ASDAS|ASDASD|AA||||ASS|||||"), the last empty tokens would be ignored.
·       To fix this use the limit as -1
·       String[] token = sampleMsg.split("\\|" , -1);
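The difference can be demonstrated with a small standalone class (class and method names below are just for illustration):

```java
// Demonstrates String.split's trailing-empty-token behaviour.
public class SplitDemo {

    // Without a limit, trailing empty strings are removed from the result.
    public static String[] splitDefault(String msg) {
        return msg.split("\\|");
    }

    // With limit -1, the pattern is applied as many times as possible
    // and trailing empty tokens are preserved.
    public static String[] splitKeepTrailing(String msg) {
        return msg.split("\\|", -1);
    }

    public static void main(String[] args) {
        String sampleMsg = "ASDAS|ASDASD|AA||||ASS|||||";
        System.out.println(splitDefault(sampleMsg).length);      // 7 - trailing empties dropped
        System.out.println(splitKeepTrailing(sampleMsg).length); // 12 - all tokens kept
    }
}
```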

21.          Always use the string.isEmpty() method to check whether a string is empty.
1.    Guard against null first (calling isEmpty() on a null reference throws a NullPointerException), and avoid other ad-hoc checks. Even if the variable is not a String, convert it to a String via toString() and then use isEmpty().
2.    Though a lot of people would frown upon this idea, it is simple, effective, very easy to remember, and can be implemented by a rookie in your team.
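A sketch of this advice (the helper class and method names are hypothetical) - a small null-safe wrapper that converts any value to a String and then uses isEmpty():

```java
// Hypothetical null-safe emptiness check, per the advice above.
public class StringChecks {

    // Returns true when the value is null or its String form is empty/whitespace.
    public static boolean isBlank(Object value) {
        if (value == null) {
            // toString() on a null reference would throw NPE, so guard first
            return true;
        }
        return value.toString().trim().isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isBlank(null));  // true
        System.out.println(isBlank("  ");   // see usage note
    }
}
```

Usage: `isBlank(null)` and `isBlank("  ")` both return true, while a non-String value like `isBlank(123)` returns false because its toString() form "123" is non-empty.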
22.          Ensure that you use JDK [and not JRE] in your Project Build Path

23.          Apache POI –
When adding apache poi in the dependency tree ensure to add dependencies for "poi-ooxml" and "poi-ooxml-schemas" as well, as some of the base classes for apache poi use these jars, and otherwise we would not be able to use certain classes like XSSF.
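For reference, the extra POM entries might look like the below (version 3.17 is only an example; use whichever version you need, but keep all three artifacts on the same version):

```xml
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>3.17</version>
</dependency>
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml</artifactId>
    <version>3.17</version>
</dependency>
<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi-ooxml-schemas</artifactId>
    <version>3.17</version>
</dependency>
```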

24.          If QCUtils is not working in UFT
·       Try checking the Registry values for this key.
·       'HKEY_CURRENT_USER\Software\Mercury Interactive\QuickTestProfessional\MicTest\QEEE'. 
·       This key had the parameter 'ExternalExecutionSupported', so either set it to Yes or delete it.
 
 
Other Hacks -
  • Error: Could not find PKIX Certificate Path when connecting to Artifactory.
    • Problem: When trying to run any maven commands on windows machines, sometimes we get this error where we are not able to connect to Artifactory or any Central Repo in Enterprise setups. This error will not come on your home computer but its one of the perks of working in a big Co.
    • What its not: This problem is not related to your Artifactory credentials or API keys, or git or bitbucket, or even the maven settings.xml; although that's what you might be led to think.
    • Cause: The problem is related to outdated Java security certificates or the use of incorrect ones. This happens when the java pkg gets upgraded, or the one that you currently have installed does not have the required certificates. So the solution really lies in updating the certificate store (the cacerts file in the JDK directory).
    • Sol 1: Manually find and download the latest certificate, then import it into the cacerts file via the usual keytool import command that you can easily google. The problem with this approach is that it needs Admin rights to edit the cacerts file, and you will not get that ever in a big Co - another perk. So this method is DOA.
    • Sol 2: If your jdk package has recently been upgraded, then you might be lucky enough to get the latest java cacerts file which will hopefully have the correct certificates added, and you will have to re-point your JAVA_HOME and M2_HOME and PATH variables to this new jdk pkg.
      • But that also needs Admin rights, so you will not be able to do that also. Sometimes some Cos have support teams that give you temp Admin rights, which might save the day for you, if not, read on...
      • What you can do is reset the Env Variables like JAVA_HOME and M2_HOME and PATH to new values for the current session via CMD prompt. This will work only till the time this CMD prompt is open, and all changes will be lost when you close it, and you will have to re-do these. The steps are:
        • SET JAVA_HOME=<new path>
        • SET JDK_HOME=<new path>
        • SET PATH=<new path>;%PATH%
        • Remember to append to the PATH variable otherwise it will overwrite and remove all the other values in it.
        • No need to change the variables for Maven
        • This should point your current session to the new jdk pkg folder which has the correct certs file.
    • Sol 3: Create a new folder for JDK pkg where you would have admin rights and then re-point all the variables, including Maven based, to this new folder. This approach would be helpful if you are trying to update the existing certs file with the new certs