
18.3.21

Build a Real-time Automation Dashboard

 


The picture above shows how you can feed test results from all your different tests into a database and then display them on a dashboard.

As your automation matures and grows in size and coverage, everyone expects more from it: faster runs, simpler usage, all kinds of capabilities.
If you have thousands of tests running on a regular basis, keeping track of them all and conveying the results to every stakeholder becomes a big task in itself. And with thousands of tests, it is difficult to condense everything into an overall picture and status of the testing.

This is where a Dashboard helps, as it radiates the status and progress of each test/feature in real-time.
And not just automated test results; it can also pull data from Jira for manual tests, keeping all this information current and easy to understand. Your Directors and MDs would love a browser-based real-time dashboard!

Also, the dashboard should not add too much overhead in implementation or day-to-day use, because otherwise it increases development and testing time.
Hence, ideally, the Dashboard calls should be wrapped in the Reporting framework itself, so that its use becomes seamless.
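As an illustration of that wrapping, here is a minimal Python sketch of a reporter hook that converts each test result into InfluxDB line protocol and POSTs it to the database's HTTP write endpoint. The URL, database name (automation), measurement name (test_results) and the tag/field sets are all assumptions for the example, not part of any real framework.

```python
import time
import urllib.request

# Assumed endpoint: a local InfluxDB 1.x with a database named 'automation'.
INFLUX_WRITE_URL = "http://localhost:8086/write?db=automation"

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one record as InfluxDB line protocol:
    measurement,tag=value field="value" timestamp
    (No escaping of spaces/commas here - keep names simple.)"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    ts = time.time_ns() if ts_ns is None else ts_ns
    return f"{measurement},{tag_str} {field_str} {ts}"

def report(test_name, status, duration_ms):
    """Hypothetical hook the reporting framework calls after each test."""
    line = to_line_protocol(
        "test_results",
        tags={"test": test_name},
        fields={"status": status, "duration_ms": duration_ms},
    )
    req = urllib.request.Request(INFLUX_WRITE_URL, data=line.encode(),
                                 method="POST")
    urllib.request.urlopen(req)  # InfluxDB answers 204 No Content on success
```

Because the dashboard call lives inside the reporter, the test code itself never touches InfluxDB directly.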

Installing a dashboard solution on a Linux box seemed to be the only way to get our own custom dashboard for reports.
Since installing software on Linux in enterprise setups is very time consuming, I wanted the shortest, fastest possible solution with minimal dependencies.

I also needed a more real-time solution that takes data via an API/JSON rather than from a log file, because setting up file collection on the CI agent would also be difficult.

Below is my experience of evaluating different solutions for a real-time Dashboard for Automated Test Results, where I compared the options and tried to choose what best suits our needs.

I evaluated the following:
  • ELK
  • ReportPortal
  • Allure
  • Klov
  • Grafana + InfluxDB

ReportPortal
  • ReportPortal is open source but has several prerequisites that need to be installed before we can use it - RabbitMQ, PostgreSQL and Elasticsearch; after that we still need to install ReportPortal itself and all the plugins we need.
  • If we have to install ELK anyway, I don't think there is any need to bother with ReportPortal; we can get the dashboard done with just ELK - no need for one more layer.
  • Also, since it could only be installed on Linux, and I did not have a Linux box of my own to play with, it was difficult to evaluate and use.

Allure
  • The problem I faced with Allure was that even though it generates report files for each run, those files can only be read after loading and processing in Allure, not directly like libraries that generate an easy-to-use HTML result.
  • This defeats the purpose of having a real-time report and a dashboard. It feels like a veiled attempt to create stickiness with the product, and I do not want to be hamstrung or covertly dependent on any product.
  • So clearly, Allure was not the product to use.

Klov
  • This comes from the makers of ExtentReports, and I really like ExtentReports: it is open source, built from a tester's point of view, and easy to implement and use - not to mention how beautiful the reports look.
  • But I did not have an option to pay for Klov, as it is not open source, and at some point I felt I was getting too dependent on a single solution, which could cause difficulties later if the product changes in ways we cannot handle.
  • So I thought of using other products, though I still continue to use ExtentReports.
  • Also, paying for a Dashboard solution (e.g., Klov) may not be an option for many teams, especially just for reporting purposes, when your entire tool-set is open source.

In the end, I was left with just ELK and Grafana.

ELK
  • This is a great solution and is being increasingly adopted by many teams. It is highly scalable, has a ton of features and implementation options, and is becoming a de-facto standard.
  • Though ELK may already be set up in many organizations, if it is not, getting it set up can be difficult; at least it was for me.
  • Installing and configuring all the different components can also prove time consuming, which many QA teams cannot afford, because you need to configure different listeners and appenders to capture and transmit all the info.
  • It is like installing many different pieces, each with its own config, and then trying to make them all work together.
  • Needless to say, it has a long learning curve too.

Grafana
  • Grafana is a multi-platform open source analytics dashboard and InfluxDB is an open-source time series database.
  • Grafana + InfluxDB does not need any middleware messaging hub/node as they communicate directly via API, so this is a simpler solution. 
  • Grafana fires up fast and has many options to customize how you present your data on the dashboard. You install InfluxDB and then forget about it; there is no overhead. It is super simple to use and read.
  • Also, the installation for both Grafana and InfluxDB is pretty simple, without any major dependencies.
  • And both can be installed directly on Windows too, which really helped in evaluating all the different options we were planning to build.

A Jira-based dashboard is also possible and could be great too, as Jira has its own well-defined API, but I did not evaluate it this time; maybe later.

So, at this point, InfluxDB + Grafana seems to be the simplest solution and the one best suited to our needs, though not effortless, as it does have some learning curve.

Refer to the sections below to get started with Grafana and InfluxDB.

How to use Grafana Dashboard

Grafana is a multi-platform open source analytics dashboard that provides charts, graphs, and alerts for the web when connected to supported data sources. We can create custom monitoring dashboards using its interactive query builders.


How to install -
download v7.0.0
Download the installer (grafana-7.0.0.windows-amd64.msi) and run it with default options.

How to run -
Open cmd and go to install dir of grafana and run the grafana-server.exe under bin
C:\Program Files\GrafanaLabs\grafana\bin>.\grafana-server.exe

By default, Grafana will run on port 3000. Default credentials are admin/admin
Launch grafana via URL - http://localhost:3000/


Adding Influx data source in Grafana -
To use InfluxDB in Grafana, we need to establish a Data Source Connection.
Go to Configuration > Data Sources > Add Data Source > InfluxDB.

Once configured, check whether the Data Source works by clicking the Save & Test button. You should get a message like 'Data source is working'.

You can now create custom Dashboards by using its UI interface.
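The UI is the easiest way, but dashboards can also be created over Grafana's HTTP API (POST /api/dashboards/db). Below is a hedged Python sketch, assuming the default URL and the default admin/admin credentials; the dashboard title and the empty panel list are just placeholders.

```python
import base64
import json
import urllib.request

GRAFANA_URL = "http://localhost:3000"             # default port
AUTH = base64.b64encode(b"admin:admin").decode()  # default credentials

def build_dashboard_payload(title):
    """Body for POST /api/dashboards/db: the dashboard definition
    wrapped with an 'overwrite' flag."""
    return {
        "dashboard": {
            "id": None,      # None tells Grafana to create a new dashboard
            "title": title,
            "panels": [],    # panels are simpler to add via the UI afterwards
        },
        "overwrite": True,   # replace any existing dashboard with this title
    }

def create_dashboard(title):
    body = json.dumps(build_dashboard_payload(title)).encode()
    req = urllib.request.Request(
        f"{GRAFANA_URL}/api/dashboards/db",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {AUTH}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response carries the new dashboard's uid/url
```

This is handy if you want CI to recreate dashboards from source control instead of clicking them together by hand.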

How to use InfluxDB

InfluxDB is an open-source time series database optimized for fast, highly-available storage and retrieval of time series data. It has no external dependencies and provides an SQL-like language for CRUD operations.

It is a great option for storing huge sets of time-series data and supports lightning-fast retrieval.


How to install -

download v1.8.0

https://dl.influxdata.com/influxdb/releases/influxdb-1.8.0_windows_amd64.zip

Just unzip it to any dir - no other installation step involved.

dir - C:\influxdb

You may change the conf to disable anonymous usage reporting to the InfluxDB website.
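In InfluxDB 1.x this is a single top-level key in influxdb.conf:

```toml
# influxdb.conf - opt out of anonymous usage data reporting
reporting-disabled = true
```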


How to run -

Open cmd and go to install dir of influxdb and run the influxd.exe like below

C:\influxdb>.\influxd.exe

By default, an InfluxDB instance runs on port 8086

This starts the InfluxDB server instance; it does not open a database session.

To interact with a database, run influx.exe, which is the CLI tool for working with InfluxDB databases.
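Besides the influx.exe shell, the same operations are available over the HTTP API on port 8086, which is also how a CI job or reporter can talk to the database. A Python sketch follows; the database name automation and the measurement test_results are assumptions for the example.

```python
import json
import urllib.parse
import urllib.request

INFLUX_URL = "http://localhost:8086"  # default InfluxDB 1.x port

def build_query_url(base, db, q):
    """URL for InfluxDB 1.x's GET /query endpoint."""
    return f"{base}/query?" + urllib.parse.urlencode({"db": db, "q": q})

def create_database(name):
    # Statements that modify the server go to POST /query
    data = urllib.parse.urlencode({"q": f"CREATE DATABASE {name}"}).encode()
    urllib.request.urlopen(f"{INFLUX_URL}/query", data=data)

def latest_results(db):
    """Fetch the ten most recent test results as parsed JSON."""
    url = build_query_url(
        INFLUX_URL, db,
        "SELECT * FROM test_results ORDER BY time DESC LIMIT 10",
    )
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["results"]
```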


13.2.21

Working with TeamCity CI Server

TeamCity is a CI server that allows us to integrate, run and test parallel builds simultaneously on different platforms and environments.

It has a lot of features and offers great flexibility in terms of defining multiple custom jobs to run different kinds of builds, but it is not open source.

Jenkins is open-source and is much more prevalent in the industry and has a very rich plugin-ecosystem which helps extend its capabilities.

Below are some points that are usually learnt the hard way and are not defined in any manual...

  • Parameter values specified in the build settings anywhere are case sensitive. JDK!=jdk
  • You can run builds on TeamCity Agents on both Windows and Linux platforms, but each agent runs only one build at a time; if you send multiple builds to the same agent, they get queued and run sequentially rather than in parallel.
    • Though you can always run builds in parallel (simultaneously) on different agents.
  • All paths that need to be specified in the build settings should be relative to the 'Build Checkout Directory', which can be found via %teamcity.build.checkoutDir%. 
    • All the project code-files get checked out from VCS/Git in the Build Checkout Dir, before actually starting the build execution.
    • So if the checkout dir on a windows teamcity agent is actually C:\users\lokiagt\BuildAgent\work\989sds9898h989s\ then the complete path to the source of the project would be 'C:\users\lokiagt\BuildAgent\work\989sds9898h989s\cloud-bdd\'
    • This is also the reason we should give all paths for reports/data/config relative to the Project Dir so that it can be easily accessed from CI boxes regardless of where it gets checked out to.
    • Make sure to use \ for paths on Windows agents and / on Linux agents.
  • Use the publish artifacts feature in general settings for the build to publish any file/result/csv to be available after the build is complete. For example if the HTML results get generated in the folder cloud-bdd/reports/<build-name><build-no>, then we can direct teamcity to publish all artifacts in that directory by specifying its path like:
    • cloud-bdd\reports\%system.teamcity.buildConfName%%system.build.number% => AutoResult
    • AutoResult is the folder name which would be visible under the 'Artifacts' section of the Build and Global Build homepage under the 'blue-box-icon'.
  • Properties used to filter agents for run
    • teamcity.agent.jvm.os.name > Windows/Linux
    • teamcity.agent.name > hostname of the agent
    • env.JAVA_HOME > jdk/jre
    • There are a lot of properties that can be used for example env.TEAMCITY_BUILDCONF_NAME returns the name of the build and env.BUILD_NUMBER returns the dynamic build number associated with that build.
    • These can be read in code via System.getenv("TEAMCITY_BUILDCONF_NAME") and System.getenv("BUILD_NUMBER") - TeamCity strips the env. prefix when exposing them to the build process. We use the getenv method as these are Environment Parameters for the Agent/Build.
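For illustration, those same environment parameters can be read from any process the agent launches, in any language; here is a Python sketch (the "local" and "0" fallbacks are my own defaults for running outside TeamCity):

```python
import os

def build_info():
    """Read TeamCity build metadata from the environment.
    Parameters declared with the env. prefix are exposed to the
    build process as plain environment variables (prefix stripped)."""
    return {
        "build_name": os.environ.get("TEAMCITY_BUILDCONF_NAME", "local"),
        "build_number": os.environ.get("BUILD_NUMBER", "0"),
    }
```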

  • Installing the TeamCity Windows Agent -
    • The server URL should not end with a /; it should be just hostname.domain.net. Also, giving just the hostname is enough; no need to give the domain if the agent is on the same network.
    • We can use \ [for windows] while specifying the absolute path in the agent.bat file. We may use / [for linux] for relative paths.
    • The TeamCity agent comes bundled with its own JRE, so we should use that and not the system JDK/JRE, because the local ones are not referenced in the agent.bat files. Hence, there is no need to add TEAMCITY_JRE as an Environment Parameter.
    • The parameter names given in the wrapper.conf file of the agent are nothing but -D flags, similar to jvm/mvn coordinates.
    • Flags for the jvm command should be specified like: jvm -DpathName=<path>
    • Certificate errors are thrown when the JRE versions differ between the agent client and the agent server (e.g., jre8.121 and jre8.202). Though online help suggests that the path to the cacerts.jks file should be in your Path variable and that you should update the cacerts file there, this is actually not needed: when you add JAVA_HOME to your path, the cacerts file also gets picked up, as it is located under the security folder at %JAVA_HOME%/jre/lib/security.
    • To check if the certificate is present and installed in the correct location use this command:
      • keytool -list -keystore %JAVA_HOME%/jre/lib/security/cacerts
    • Do not install the TeamCity agent on a client that has QTP installed, because QTP's Environment Parameters wreak havoc and do not let the agent run at all. This happens even if you remove all QTP-related Env Parameters, which is quite strange.
    • Even if you are not able to run the agent as a Windows service, you can run the agent.bat file, which does the same thing.
    • You can start and stop the agent service via agent.bat file via these option flags, given from the C:\BuildAgent\bin folder
        .\agent.bat start and .\agent.bat stop
    • Even though the default installation is supposed to be error free, there can be a lot of issues and errors in the agent.bat file that have to be fixed before you can actually get it to run - for example, providing the absolute path of the Log4j XML file and the path of the cert file. Most of the time relative paths will not work, so we have to give absolute paths in the jvm coordinates.
    • Sample command to be run from the C:\BuildAgent\ dir with all paths relative to the BuildAgent\ dir:
      • C:\BuildAgent\jre\bin\java.exe -ea -Xmx384m -Djava.security.debug=all -Dlog4j.configuration=file:..\conf\teamcity-agent-conf.xml -Dteamcity_logs=..\logs -Djavax.net.ssl.truststore=..\..\BuildAgentConf\cacerts.jks -Djavax.net.ssl.truststorePassword=changeit -classpath C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Python37-32\Scripts\;<all other classpaths> -file ..\conf\buildAgent.properties -launcher.version=61544
    • Always use the classic CMD window and never the PowerShell window, as PowerShell does not handle these relative paths well.
    • You can view the logs for the agent starter, wrapper and agent connection in the different log files under the C:\BuildAgent\logs dir.