In this article, we discuss why testing is necessary, how different testing methodologies work, and how we perform these tests at Skeps.
The rationale for writing tests for a piece of code is that humans are bound to make mistakes in anything they do. Rather than denying that fact, we should design for it. We can describe how a piece of software should behave and how it should not. Both of these paradigms of thinking are essential for a complete testing system.
- Positive cases: defining what the system should do for it to be considered working correctly.
- Negative cases: defining what the system should not do for it to be considered working correctly.
Ways of testing
There are different ways of testing –
Manual testing – manually testing the APIs/modules, including the smaller modules they use. In some cases, automation is not possible because of certain constraints, and manual testing is the only option.
Automated testing – automation test cases are written once the test cases have been defined through manual testing. These automated cases act as future checks that run every time a change is made to the system, letting us skip the time-consuming manual tests otherwise needed on every change.
Testing methodologies at different levels of abstraction
Testing every line of code – unit tests & test-driven development
There is a methodology called test-driven development in which unit tests are written even before the code they test. The rationale is that we should start with a failing test case and then write the code that makes it pass; that way, we know the program does what it was intended to do. To claim a system is rigorously tested, we also need to know how much of the code is covered by test cases; this is called the test coverage of the code. Most languages have frameworks that generate code coverage reports based on the unit tests written for the code.
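For illustration, here is a minimal sketch of a TDD-style unit test written with TestNG (the framework we describe later). InterestCalculator and its simpleInterest method are hypothetical names; in true TDD this test is written first, fails because the class does not yet exist, and then drives its implementation.

import org.testng.Assert;
import org.testng.annotations.Test;

public class InterestCalculatorTest {

    // Written before InterestCalculator exists: the failing test drives the implementation.
    @Test
    public void simpleInterestIsPrincipalTimesRateTimesYears() {
        // 1000 at 5% for 2 years should yield 100.
        int interest = InterestCalculator.simpleInterest(1000, 5, 2);
        Assert.assertEquals(interest, 100);
    }
}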
He who makes it shall test it – Dev testing
This type of testing sits, literally and metaphorically, between unit and integration testing: the creator of the code runs the test cases before launching a full test cycle. It ensures that only quality code reaches the later phases of testing and prevents silly mistakes from eating up testing time.
The sum of parts is not equal to the whole – Integration testing
In simple terms, if two systems work stably on their own, it does not mean that a system built from the interactions between them can be labeled stable without testing it. This is where integration testing comes into the picture: rather than running tests on each module independently, as in unit tests, the module is tested as a whole, with its positive and negative behavior defined and tested separately.
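As a rough sketch of the idea, the test below exercises two hypothetical modules together (a PricingService that uses a DiscountService) and defines one positive and one negative behavior for the combined system; the class and method names are placeholders, not real Skeps code.

import org.testng.Assert;
import org.testng.annotations.Test;

public class CheckoutIntegrationTest {

    // Wire the real modules together instead of mocking one of them.
    private final PricingService pricing = new PricingService(new DiscountService());

    // Positive behavior: the combined modules produce the expected discounted total.
    @Test
    public void appliesDiscountToCartTotal() {
        Assert.assertEquals(pricing.totalFor("GOLD_CUSTOMER", 200), 180);
    }

    // Negative behavior: the combined modules reject an unknown customer tier.
    @Test(expectedExceptions = IllegalArgumentException.class)
    public void rejectsUnknownCustomerTier() {
        pricing.totalFor("UNKNOWN_TIER", 200);
    }
}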
Automating everything – API automation testing
In web-based systems, different microservices interact with one another via HTTP APIs. Ensuring the health and consistent behavior of these APIs is critical, so API automation testing is done to cover testing from a microservice's point of view. In this type of testing, each API exposed by a microservice is given a predefined set of inputs and tested against the behavior it is expected to exhibit. That behavior can include the state of the datastores after the API succeeds as well as the response the API returns. Since this kind of testing can mutate data in the datastores, it is done in a sandboxed environment, preferably in Docker containers with their own datastores. These containers are created before the test cases run and torn down after execution, ensuring a repeatable testing environment independent of whatever datastore state previous runs left behind.
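A minimal sketch of such an API test using Rest Assured (which we cover later) might look like the following; the endpoint, payload, and response fields are placeholders, and a real test would also verify the resulting datastore state.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.testng.annotations.Test;

public class CreateLoanApiTest {

    @Test(groups = { "api" })
    public void createLoanReturnsSuccess() {
        given()
            .baseUri("http://localhost:80")   // sandboxed service under test
            .contentType("application/json")
            .body("{\"amount\": 1000}")       // predefined input
        .when()
            .post("/api/loans")               // placeholder endpoint
        .then()
            .statusCode(200)                  // predefined expected behavior
            .body("status", equalTo("SUCCESS"));
    }
}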
What shows matters – Web/app automation testing
In a web-driven world, users interact with a piece of software via some client, be it a browser or a mobile app. To ensure a consistent experience from the user's point of view, each of these clients must be tested on every build. This is achieved with a mix of manual and automated testing. First, manual testing is done to record the ideal state that should be shown to the customer. Then web/app automation frameworks such as Selenium, Appium, or Puppeteer are used to automate those UI interactions. The results are captured either by running the complete flow and recording the end state, or by taking screenshots at each stage and analyzing them programmatically or manually. This is still faster than fully manual testing, in which a dedicated person clicks and taps through every user journey on a web or mobile device. And with the proliferation of browsers and mobile phones with different screen sizes and operating systems, automated testing scales far more easily than manual testing; it still requires manual intervention, but not nearly as much.
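As a hedged example, a Selenium-based UI check in Java could look like the sketch below; the URL, element locators, and expected title are placeholders for whatever the manually recorded ideal state defines.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginUiTest {

    @Test(groups = { "ui" })
    public void loginLandsOnDashboard() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");  // placeholder URL
            driver.findElement(By.id("username")).sendKeys("test-user");
            driver.findElement(By.id("password")).sendKeys("test-pass");
            driver.findElement(By.id("sign-in")).click();
            // Verify the recorded end state; in practice this could also be a screenshot comparison.
            Assert.assertTrue(driver.getTitle().contains("Dashboard"));
        } finally {
            driver.quit();
        }
    }
}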
Humans to rule them all – manual testing
No matter how many automation frameworks we use or how many automated test cases we write, those test cases remain static. In dynamic scenarios, however, things change in ways that are incomprehensible to machines. Automated test cases are suitable for repetitive checks, but humans still need to think about new ways a system can break. Once such a case is identified as a valid test case, it can be automated for subsequent releases.
Testing at Skeps
For API testing, we use the Rest Assured Java library with the TestNG testing framework.
What is TestNG
TestNG is a testing framework inspired by JUnit and NUnit. It covers all types of tests: unit, functional, end-to-end, integration, etc.
A few of the advantages of using TestNG are as follows:
- Support for annotations.
- You can run your tests in arbitrarily big thread pools with various policies available (all methods in their own thread, one thread per test class, etc.).
- Support for testing that your code is thread-safe.
- Flexible test configuration.
- Support for data-driven testing with @DataProvider (see the sketch after this list).
- Support for parameters.
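As a small illustrative sketch (the class and data are made up), a @DataProvider feeds multiple input/expected-output rows into a single test method:

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class AmountValidationTest {

    // Each row is one input/expected-result pair for the test method below.
    @DataProvider(name = "amounts")
    public Object[][] amounts() {
        return new Object[][] {
            { 1000, true },   // positive case
            { 0, false },     // negative case
            { -50, false },   // negative case
        };
    }

    @Test(dataProvider = "amounts")
    public void flagsInvalidAmounts(int amount, boolean expectedValid) {
        Assert.assertEquals(amount > 0, expectedValid);
    }
}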
Another advantage of TestNG is that it allows grouping of test cases. At Skeps, we have more than 1,000 active test cases that run before every build. During development, running this many test cases is time-consuming and redundant, so test case grouping lets developers run only the test cases for the modules their code touches. This makes for a faster development cycle, with the complete set of test cases run once development is done.
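Illustratively, a test method joins a group through the groups attribute on @Test; the ABCgroup name in this hypothetical class is what the include directive in the sample testng.xml below refers to.

import org.testng.Assert;
import org.testng.annotations.Test;

public class ABCFlowTest {

    // Runs only when the suite definition includes the "ABCgroup" group.
    @Test(groups = { "ABCgroup" })
    public void abcFlowCompletes() {
        // A real test would exercise the ABC module end to end; this is a placeholder assertion.
        Assert.assertTrue(true);
    }
}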
A sample TestNG configuration, testng.xml, that intends to run only the ABC group of test cases would look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="ABCFlowTestsSuite">
  <test thread-count="5" name="ABCFlowTests">
    <parameter name="IP" value="1.2.3.4" />
    <parameter name="DealerNumber" value="13" />
    <parameter name="BackendServerIP" value="55.55.33.44" />
    <parameter name="param1" value="124" />
    <groups>
      <run>
        <include name="ABCgroup" />
      </run>
    </groups>
    <classes>
      <class name="com.skeps.tests.stableflow.ABC" />
    </classes>
  </test>
</suite>
What is Rest Assured
Rest Assured is a library we use at Skeps to facilitate API automation testing. It provides the boilerplate code needed to interact with HTTP APIs.
Including the Rest Assured library is as simple as adding the following dependencies to your dependency manager configuration (Maven, in this example):
<dependency>
  <groupId>io.rest-assured</groupId>
  <artifactId>rest-assured</artifactId>
  <version>LATEST</version>
</dependency>
<dependency>
  <groupId>io.rest-assured</groupId>
  <artifactId>json-path</artifactId>
  <version>LATEST</version>
</dependency>
For example, checking the response status of an API call is as simple as the following (illustrative code; the helper class and the DTO name are placeholders):
import java.io.IOException;
import org.apache.http.HttpStatus;
import org.json.simple.parser.ParseException;
import org.testng.Assert;
import org.testng.annotations.BeforeMethod;
import io.restassured.mapper.ObjectMapperType;
import io.restassured.response.Response;

@BeforeMethod(groups = { "ABC1" })
public void setup() throws IOException, ParseException {
    // SendRequestGetResponse is an in-house helper that fires RequestObj at RequestUrl
    // and returns the Rest Assured response.
    Response resp = SendRequestGetResponse.sendRequestgetResponse(RequestObj, RequestUrl);
    // Optionally deserialize the JSON body into a response POJO via GSON (ApiResponseDto is a placeholder DTO).
    ApiResponseDto body = resp.as(ApiResponseDto.class, ObjectMapperType.GSON);
    Assert.assertEquals(resp.getStatusCode(), HttpStatus.SC_OK);
}
Coding assertion checks for test cases with this library is a breeze thanks to its human-readable syntax.
E.g.,
Assert.assertTrue(StatusCodelist.contains("001"), "Status code list does not have Status code 001");
Another good reason to use this library is that it integrates readily with testing frameworks like TestNG and JUnit.
How do we run tests periodically?
Tests are only helpful if they are run periodically so that bugs are found fast and early. We have various scheduling requirements, such as daily tests, weekly tests, etc. We use Jenkins to schedule these cron jobs and to run ad hoc jobs. Once a test suite has run, its results are mailed to all dev team stakeholders.
Why we use Jenkins –
It streamlines all crons into a single place rather than leaving multiple crontabs scattered across various servers.
At Skeps, Jenkins runs inside Docker. Setting up your own Jenkins is as simple as running:
docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins/jenkins:lts
You can read more about Jenkins at https://www.jenkins.io/
How we set up our testing environment
Testing our systems requires databases like MySQL, Redis, etc., alongside the code. To provision these, we could have created a separate, isolated environment of servers, which would have served the purpose satisfactorily. Instead, we went one step further and built a Docker Compose environment for our testing infra. It lets every developer, as well as the testing team, spawn the testing setup on their own machine and tear it down in no time.
A typical docker-compose file for a simple system consisting of a REST API application and multiple databases looks like this –
docker-compose.yml
version: '2'
services:
  api_service:
    environment:
      LOG_LEVEL: debug
    image: <your application image>
    ports:
      - 80:80
    links:
      - database
      - redis
    depends_on:
      - database
      - redis
  database:
    image: mysql
    ports:
      - 3306:3306
  redis:
    image: redis
    ports:
      - 6379:6379
The above docker-compose file spawns an API service from the given image, along with a MySQL database and Redis, with the API service depending on both. Spawning our actual test infra requires many more services, but the approach is just as straightforward.
Test Reports
Tests are only as good as the action taken when they fail. So, to keep everyone apprised of the testing status, we mail the periodic test reports to our dev teams and managers in a concise and crisp format, which makes the reports easy to analyze.