A Test Automation WorkFlow

To maximise your efficiency as a test automation engineer, define a solid approach and plan out your test development workflow.

Below, I've developed a system for you to follow in building your test automation cases.

I invite you to follow the same path or adapt it to meet your team's requirements.

It's good practice to follow a system with repeatable goals defined. This will make you a stronger but also adaptable tester. Good luck!

API Testing Plan

View an example of an API contract test case.

This is an example for a /users endpoint:

Identify the API implementation and its variants:

| Verb | Endpoint | Description |
|------|----------|-------------|
| GET | /users | List all users |
| GET | /users?name={username} | Get user by username |
| GET | /users/{id} | Get user by ID |
| GET | /users/{id}/configurations | Get all configurations for user |
| POST | /users/{id}/configurations | Create a new configuration for user |
| DELETE | /users/{id}/configurations/{id} | Delete configuration for user |
| PATCH | /users/{id}/configurations/{id} | Update configuration for user |

High Level Test Scope

| Name | Verb | How | HTTP Response Code | Assertion |
|------|------|-----|--------------------|-----------|
| should return a list of X resources | GET | Call endpoint | 200 | Count property should match rows.length; count must be greater than 1 |
| should filter resources | GET | Call endpoint with filter parameters (limit, sort, start, filter) | 200 | Count property, rows.length, id of first and last resource |
| should return a specific resource | GET | Call endpoint with a resource ID | 200 | Check each property |
| should return a 404 if resource not found | GET | Call endpoint with a fake resource ID | 404 | |
| should create a resource | POST | Send full valid data | 201 | Check each property |
| should fail returning all mandatory properties | POST | Send a single non-mandatory property | 400 | Count number of errors |
| should fail if … | POST | Send data against business logic (null value, blank value, unicity, shorter than expected, bad relation, …) | 400 | Check reason/code of error |
| should update the resource | PATCH | Send full valid data (set a property id, which should be ignored) | 200 | Check each property |
| should fail if … | PATCH | Send data against business logic (null value, blank value, unicity, shorter than expected, bad relation, …) | 400 | Check reason/code of error |
| should return a 404 if resource not found | PATCH | Call endpoint with a fake resource ID and send full valid data | 404 | |
| should delete the resource | DELETE | Call endpoint with a resource ID | 204 | If hard delete, check the resource no longer exists in the DB. If soft delete, check the resource has a non-null deletedAt value |
| should delete the resource (idempotent) | DELETE | Call endpoint with a fake resource ID | 204 | |
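To make the happy-path row concrete, here's a minimal, self-contained sketch using only the JDK (11+). It stubs GET /users with the built-in HttpServer purely so the example runs anywhere; the response shape (count/rows) mirrors the assertions in the table and is an assumption, not a real API:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UsersContractTest {
    // Hypothetical stub of GET /users so the sketch is self-contained;
    // in a real suite you would point the client at the service under test.
    public static HttpResponse<String> listUsers() throws Exception {
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/users", exchange -> {
            byte[] body = "{\"count\":2,\"rows\":[{\"id\":1},{\"id\":2}]}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        stub.start();
        try {
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("http://localhost:" + stub.getAddress().getPort() + "/users"))
                .GET().build();
            // "should return a list of X resources": expect 200 and count matching rows.length
            return HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString());
        } finally {
            stub.stop(0);
        }
    }
}
```

Swap the stub for your real base URL and the same client code covers the first three GET rows of the table.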

Detailed Test Scope

Where {id} is a UUID, all GET endpoints allow the optional query parameters filter, sort, skip and limit for filtering, sorting, and pagination.

Scenario 1: Basic positive tests (happy paths)

Test action: execute the API call with valid required parameters.

Validate status code:
1. All requests should return a 2XX HTTP status code.
2. The returned status code is according to spec:
– 200 OK for GET requests
– 201 for POST or PUT requests creating a new resource
– 200, 202, or 204 for a DELETE operation, and so on

Validate payload:
1. The response is a well-formed JSON object.
2. The response structure is according to the data model (schema validation: field names and field types are as expected, including nested objects; field values are as expected; non-nullable fields are not null, etc.).

Validate state:
1. For GET requests, verify there is NO STATE CHANGE in the system (idempotence).
2. For POST, DELETE, PATCH and PUT operations, ensure the action has been performed correctly in the system by:
– performing the appropriate GET request and inspecting the response
– refreshing the UI in the web application and verifying the new state (only applicable to manual testing)

Validate headers:
Verify that HTTP headers are as expected, including content-type, connection, cache-control, expires, access-control-allow-origin, keep-alive, HSTS and other standard header fields, according to spec.

Verify that information is NOT leaked via headers (e.g. the X-Powered-By header is not sent to the user).

Performance sanity:
The response is received in a timely manner (within reasonable expected time), as defined in the test plan.
Scenario 2: Positive tests + optional parameters

Test action: execute the API call with valid required parameters AND valid optional parameters. Run the same tests as in #1, this time including the endpoint's optional parameters (e.g., filter, sort, limit, skip, etc.).

Validate status code: as in #1.

Validate payload:
Verify response structure and content as in #1. In addition, check the following parameters:
– filter: ensure the response is filtered on the specified value.
– sort: specify the field on which to sort, and test both ascending and descending options. Ensure the response is sorted according to the selected field and sort direction.
– skip: ensure the specified number of results from the start of the dataset is skipped.
– limit: ensure the dataset size is bounded by the specified limit.
– limit + skip: test pagination.

Check combinations of all optional fields (filter + sort + limit + skip) and verify the expected response.

Validate state: as in #1.

Validate headers: as in #1.

Performance sanity: as in #1.
    
Scenario 3: Negative testing – valid input

Test action: execute API calls with valid input that attempts illegal operations, i.e.:
– attempting to create a resource with a name that already exists (e.g., a user configuration with the same name)
– attempting to delete a resource that doesn't exist (e.g., a user configuration with no such ID)
– attempting to update a resource with illegal valid data (e.g., renaming a configuration to an existing name)
– attempting an illegal operation (e.g., deleting a user configuration without permission)

And so forth.

Validate status code:
1. Verify that an erroneous HTTP status code is sent (NOT 2XX).
2. Verify that the HTTP status code is in accordance with the error case as defined in the spec.

Validate payload:
1. Verify that an error response is received.
2. Verify that the error format is according to spec, e.g., the error is a valid JSON object or a plain string (as defined in the spec).
3. Verify that there is a clear, descriptive error message/description field.
4. Verify the error description is correct for this error case and in accordance with the spec.

Validate headers: as in #1.

Performance sanity: ensure the error is received in a timely manner (within reasonable expected time).
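One negative case from the list above ("delete a resource that doesn't exist") can be sketched the same way, again stubbed with the JDK's built-in HttpServer so it runs standalone; the error body format here is an assumption, not a real API's:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NegativeUsersTest {
    // Hypothetical stub standing in for the real API: unknown IDs get a 404 error object
    public static HttpResponse<String> deleteMissingConfiguration() throws Exception {
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/users", exchange -> {
            byte[] body = "{\"error\":\"resource not found\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(404, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        stub.start();
        try {
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("http://localhost:" + stub.getAddress().getPort()
                            + "/users/42/configurations/no-such-id"))
                .DELETE().build();
            // Expect a non-2XX code AND a descriptive error payload, per the spec
            return HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString());
        } finally {
            stub.stop(0);
        }
    }
}
```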
    
Scenario 4: Negative testing – invalid input

Test action: execute API calls with invalid input, e.g.:
– missing or invalid authorization token
– missing required parameters
– invalid values for endpoint parameters, e.g. an invalid UUID in path or query parameters
– payload with an invalid model (violates the schema)
– payload with an incomplete model (missing fields or required nested entities)
– invalid values in nested entity fields
– invalid values in HTTP headers
– unsupported methods for endpoints

And so on.

Validate status code: as in #3.

Validate payload: as in #3.

Validate headers: as in #3.

Performance sanity: as in #3.
Scenario 5: Destructive testing

Test action: intentionally attempt to fail the API to check its robustness:
– malformed content in the request
– wrong content-type in the payload
– content with the wrong structure
– overflow parameter values, e.g.:
  – attempt to create a user configuration with a title longer than 200 characters
  – attempt to GET a user with an invalid UUID which is 1000 characters long
  – overflow payload: a huge JSON document in the request body
– boundary value testing
– empty payloads
– empty sub-objects in the payload
– illegal characters in parameters or the payload
– incorrect HTTP headers (e.g. Content-Type)
– small concurrency tests: concurrent API calls that write to the same resources (DELETE + PATCH, etc.)
– other exploratory testing

Validate status code: as in #3. The API should fail gracefully.

Validate payload: as in #3. The API should fail gracefully.

Validate headers: as in #3. The API should fail gracefully.

Performance sanity: as in #3. The API should fail gracefully.

Browser based UX Testing

Let's set the scene

So, there you are… you have a UI and an entire suite of manual test cases. These tests are tedious, take forever and, let's be honest, can get repetitive and boring!

My friend, you are on the front line for automating your testing!

Tools

Let's not beat around the bush: Selenium has been around for ages. The product has had millions of users, has become a W3C standard and is launching Selenium 4 pretty soon.

This is the tool for us.

There are others, yes, most of which will wrap some Selenium WebDriver capability into a pretty package and sell you that at a pretty penny.

Let's not get the wool pulled over our eyes.

We can do the same, implement at the same level and, in fact, have far greater control over our test product… cheaper, faster, better.

Ok, before we get butterflies in our tummies over this tool, there are some pitfalls… urghhh, of course.

Not to worry, we have help. In our demo below, I will show you how I implemented Selenide, an open source project that fills the gaps that were obvious in Selenium 3.

You can read up on the tools here: 

Selenium

https://selenium.dev/downloads/

Selenide

A wrapper around Selenium with a few more fluent APIs for us to work with. Therefore, it is my preferred library.

https://selenide.org/

Show me the baby trees (bacon is nice but baby trees are better)

Head on over to my GitHub page at https://github.com/suveerprithipal and you will find a FEW implementations of UI testing. Don't limit yourself. There are many ways of implementing this.

Here is one I’ve taken from https://github.com/suveerprithipal/selenideJava which implements Selenide.

https://github.com/suveerprithipal/selenideJava/blob/master/README.md

import static com.codeborne.selenide.Condition.text;
import static com.codeborne.selenide.Selenide.open;

import org.junit.Test;

/**
 * @author Suveer Prithipal
 */

public class GoogleTest {
  @Test
  public void googlePageTest(){
    /**
      No need to create a WebDriver instance!
      Selenide provides easy-to-use APIs with rich functionality.
      On a normal day with Selenium, we would need to create a WebDriver instance and bind it to a browser.
      We would also need to define page elements before using them.
      Selenide removes the need to do this by wrapping all of that into a single API.
      Below, we use "open" to create the WebDriver instance and bind it to a class.
      Passing it a class provides the shape for the instance, giving it methods and defined functionality.
     */
    GooglePage googlePage = open("http://www.google.com",GooglePage.class);


    /**
      Now that we have an instance of WebDriver up and we are on our test app, Google,
      we can search for something.
      Searching for something means results will be returned.
      Therefore, we need a class to take the shape of these results.
     */
    SearchResultsPage searchResultsPage = googlePage.searchGoogle("selenide");


    /**
    Tests.
    Now that we have results, we can perform tests.
    Below, we use the searchResultsPage and query the class for the expected results.
     */
    searchResultsPage.checkResultsSize(8);
    searchResultsPage.getResults().get(0).shouldHave(text("Selenide: concise UI tests in Java"));


    /**
    Use page object models and design patterns.
    This example demos the ease of use of Selenide.
    It's important to separate out your implementation for better maintenance, ease of reading and debugging.
     */
  }

}

In Code – Get going in 3 easy steps

This project is written in Java, uses the page object model and is triggered by a BDD Cucumber feature file. Learn these terms well. 

Let's get into it:

Step 1: Define our Page Objects

The things you want to interact with on a page, like:

  • buttons
  • lists
  • labels
  • text inputs
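As a sketch, a Selenide page object for a login page might look like this (selectors, class and method names are illustrative assumptions, not code from the repo):

```java
import static com.codeborne.selenide.Selenide.$;

import com.codeborne.selenide.SelenideElement;
import org.openqa.selenium.By;

// Hypothetical page object wrapping the elements we interact with
public class LoginPage {
    private final SelenideElement username = $(By.id("username"));  // text input
    private final SelenideElement password = $(By.id("password"));  // text input
    private final SelenideElement loginButton = $(By.id("login"));  // button

    public void login(String user, String pass) {
        username.setValue(user);
        password.setValue(pass);
        loginButton.click();
    }
}
```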

Step 2: Step Definition

Step definitions are the glue that will bring a workflow to your elements and bind them to your feature file.

Here we chain our actions together and feed them input using our feature file.

E.g., the loginCuke() method:

  • we open up the site in a web page using Selenide. A one-liner which handles a lot of the grunt work of Selenium behind the scenes.
  • we then proceed to run the login method, which will:
    • take our input username and password from the feature file and process the login.
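Sketched out, the step definition might look like this (the URL, class and page object names are illustrative assumptions):

```java
import static com.codeborne.selenide.Selenide.open;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;

public class LoginSteps {
    private LoginPage loginPage; // hypothetical page object from Step 1

    @Given("I am on the login page")
    public void openLoginPage() {
        // One liner: Selenide creates the driver, opens the browser and binds the page class
        loginPage = open("https://example.com/login", LoginPage.class);
    }

    @When("I login with {string} and {string}")
    public void loginCuke(String username, String password) {
        // Input comes straight from the feature file
        loginPage.login(username, password);
    }
}
```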

Step 3: Create your feature file

Feature files are BDD-scripted tests.

We write these in plain English and translate them to code… as we did above.
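A matching feature file might read like this (the step wording is an illustrative assumption and must match your step definitions):

```gherkin
Feature: Login
  Scenario: Successful login
    Given I am on the login page
    When I login with "myUser" and "myPassword"
    Then I should see the dashboard
```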

That's it! Run your feature file and wait for the results.

You'll see the browser opening and doing things. Screenshots on failure are a default with Selenide, so you'll have those too!

Reporting:

Reporting is a must in any project. Please have a look at my other content for more details or pursue your own.

Conclusion:

UI testing has been around for a while and is getting easier, cheaper and simpler to implement.

With older versions of Selenium we had to code a lot to get a page to open, while today we can achieve this in one line with a very fluent API.

Write once, test 1000 times on all browsers.

Our tools allow us to test on all browsers, and any version… simultaneously.

A blessing in the UI testing world, as it allows us to drastically reduce our testing time and therefore our cost to service testing.

Writing our own frameworks gives us greater advantage in scale and capability.

Nothing is more fulfilling than overcoming a challenge by learning, trying and failing!

Visibility – Your tests have value, showcase them.

https://www.automatetheplanet.com/wp-content/uploads/2019/01/test-automation-reporting-allure.png

Having automated tests is great! … but not sharing the results or centralizing them is not so great. 😦

Your automated tests have undeniable business value; don't be shy about it. Excellence does not happen overnight, so even if your tests are in early development, share the results with your team.

So what can you do?

  • Set up a nightly run using some CI tool. I like Jenkins 😉
  • Promote the results using a dashboard, email or a tool like Slack.
  • Address failed tests daily. Fix them or add them to a backlog, and fix those soonest.
  • Prep a backlog of test automation scenarios that the entire team supports and can contribute towards.
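For the nightly run, a declarative Jenkinsfile along these lines is a reasonable starting point (a sketch; the cron spec, build command and report path are assumptions about your project):

```groovy
pipeline {
    agent any
    triggers { cron('H 2 * * *') }                // nightly, around 2am
    stages {
        stage('Run tests') {
            steps { sh './mvnw clean test' }      // assumes a Maven wrapper in the repo
        }
    }
    post {
        always {
            junit 'target/surefire-reports/*.xml' // publish results for the dashboard
        }
    }
}
```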

In conclusion: 

Visibility and reliability are really important. They make your tests trustworthy, needed… valuable.

Pursue it vigorously. Take small incremental steps towards implementation.

UX Testing with Docker


Introduction

Ola! Thanks for popping in and having a squiz.

Today I'd like to showcase how we can make our browser-based tests faster and more efficient by making use of

Docker

Why

Docker is a great resource to use, as we have the ability to spin up an environment for testing and then very quickly throw it away again.

OMG why would you do that?

Well, that's the power and presence of Docker and containers. We can spin up, tear down and re-use this resource multiple times, and we should do so, without attachment.

Docker support is amazing. There are heaps of predefined containers that just require us to pull and use, instead of us having to write and maintain those scripts, environments and data.

It's like magic at your fingertips.


Selenium Grid

Grid is an extension of Selenium which gives us the ability to run our tests remotely.

Let's look at some of the key benefits:

  • Run tests remotely
  • Run tests in different browsers with the change of a config
  • Run tests in parallel – our biggest win. 
  • Reduce test time and therefore the feedback cycle.

Implementation

Start up the Selenium Grid

We’ll make use of the predefined selenium-docker containers.

This setup makes use of a docker-compose file which will spin up 3 environments:

  • Hub
  • Chrome Node
  • Firefox Node

Pull the image from: https://github.com/SeleniumHQ/docker-selenium

Head on into your download dir and run docker-compose up.
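If you'd rather write the compose file yourself, a minimal hub-plus-nodes setup looks roughly like this (image tags are illustrative; check the docker-selenium repo for current ones):

```yaml
version: "3"
services:
  hub:
    image: selenium/hub:3.141.59
    ports:
      - "4444:4444"          # the grid console and WebDriver endpoint
  chrome:
    image: selenium/node-chrome:3.141.59
    depends_on: [hub]
    environment:
      - HUB_HOST=hub         # register this node against the hub
  firefox:
    image: selenium/node-firefox:3.141.59
    depends_on: [hub]
    environment:
      - HUB_HOST=hub
```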

What we want to look out for:

  • The Hub starts up and publishes a URL and port.
    • in our case: localhost:4444
  • The Chrome node starts up and registers against the hub.
  • The Firefox node starts up and registers against the hub.

Navigate to the grid:

In your browser, open up: http://localhost:4444/grid/console

You are now viewing the grid and the child worker nodes it has available.

That's it! Your grid is up and running.

How damn easy was that!


But we’re not done. We need to change our test application code to hit this new environment.

Implement the Hook into the grid

To open the browser, we need to establish some browser capabilities.

These are set via ChromeOptions().

We need to apply these settings so that we can start up a browser on a Linux terminal with no display adapter. We set the headless option in particular for this.

We then need to point our RemoteWebDriver to the new URL.

We do so by setting the urlToRemoteWD variable, which points to the URL of the grid.
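Putting that together, the hook might look like this (a sketch; the hub URL assumes the local grid above, and the option flags are typical choices rather than the project's exact code):

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridHook {
    public static WebDriver startRemoteChrome() throws Exception {
        ChromeOptions options = new ChromeOptions();
        // Headless lets the browser run on a Linux node with no display adapter
        options.addArguments("--headless", "--window-size=1920,1080");

        // Point the driver at the grid hub instead of a local browser binary
        URL urlToRemoteWD = new URL("http://localhost:4444/wd/hub");
        return new RemoteWebDriver(urlToRemoteWD, options);
    }
}
```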

Run your test

When we run our test:

  • Our test is published to the hub.
  • The hub inspects which node is free and available, and pushes the test to run there.
  • The results are fed back to the hub and then to our test application for reporting.

You are ALL DONE and ready to run a battery of tests against AWS or locally, headed or headless.


API Testing Ref Card

Mike Dean Is One Red Card Away From Giving Out 100 In The Premier ...

No. Not that type of ref card! 

Pictured above is Mike Dean, a familiar face for English PL football lovers.

Greetings

It's been a while since we lived life per normal and I jabbered on about something. Certainly for me personally, it's been an experience of all sorts.

Thankfully though, it has not been a difficult one and I hope the same for you.

With that, let’s get to it.

Introduction

Today, it can be no surprise to you that Orion is moving into the container world at speed.

When I think containers, I think Micro-Services and when I think micro-services I think APIs.

Jules Pulp Fiction - SAY API AGAIN SAY API ONE MORE TIME

Testing APIs

Compared to traditional test automation, API testing is so much:

  • cleaner to maintain
  • faster to run
  • IMO, easier to implement

A guide on REST API Testing

Identifying Test Cases 

When working with APIs, we have 4 main (but not limited to) sections that we work with.

Let's break down the API:

  • The endpoint

The endpoint is the actual URL under test. This is your gateway to accessing the information under test.

With any system under test, knowing what you're putting in is super important, as these inputs should build the foundation of your test cases.

  • The header

Most things in life come with info that isn't really for you. This can be applied to phone calls, emails, dinner chats and even our APIs.

Metadata is built into the headers of your API. Sometimes this information is handy to you or set by the developers. Working with headers is a necessity today, as most auth services will embed a token in the header of the API.

  • The auth

Of course, no touching if you're not allowed. Auth is a must-test, must-know, must-can-do.

  • The payload

And last but not least, our apple… the payload. 

The payload is the carrier of data, messages and usually all things requested by the consumer. 
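Putting the four sections together, a single (hypothetical) request breaks down like this:

```
POST /users/42/configurations HTTP/1.1      <- the endpoint (URL + method)
Host: api.example.com
Content-Type: application/json              <- a header (metadata)
Authorization: Bearer <token>               <- the auth

{"name": "dark-mode", "enabled": true}      <- the payload
```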

Response Codes

APIs communicate over HTTP, but do so using different methods, and with any form of communication some feedback is always nice.

APIs, too, respond in various manners depending on circumstance.

Below is a table of common response codes and the meaning behind each. Get familiar with these as you'll see them quite often.

| Code | Meaning |
|------|---------|
| 200 | OK – the request succeeded |
| 201 | Created – a new resource was created |
| 204 | No Content – success, with an empty response body |
| 400 | Bad Request – the input was invalid |
| 401 | Unauthorized – missing or invalid credentials |
| 403 | Forbidden – authenticated, but not allowed |
| 404 | Not Found – the resource doesn't exist |
| 500 | Internal Server Error – the server failed to process the request |

Conclusion

API testing lends itself quite easily to being structured, well documented and fast to implement.

There are heaps of tools to be used, and the benefits of testing repeatedly can be seen quickly.

Non-Official intro to Kafka, Confluent and Data Streams

Introduction

Kafka is a damn beautiful thing, I must admit…

It makes me all googly-eyed and leaves that sense of wonder in me… but yeah, I'm strange like that!

I've been working through it for a little while now and have learnt some stuff. I was also met with a few challenges and wanted to share these with you.

Let’s take a quick walk through it.

What is Kafka?

The web definition:

Kafka is a distributed streaming platform that has 3 main capabilities:

  • Publish and Subscribe to streams
  • Store messages into streams
  • Process messages before we store

That last point is what makes Kafka a beast imo. Why?

Let’s look at some pictures to understand that.

Keeping it old school

Simplified dramatically, some event-driven architectures put the application at the centre: every event flows into it and gets written straight to the database.

While this worked for many years, we found limitations in processing performance and handling data in near real time, but nothing, I guess, prepared our world for the creation of the big data concept.

Storing data is always costly and usually the bulk of your software budget. To make things faster in this model, devs had to either scale up the DB instances they had or use multiple DB instances. This is a sore, sore, SORE point for members of our tech community, big pockets or small.

This entry point also made our applications the centre of attention, and made building our applications more complex.

Multi-threading and bad-message handling were super key to ensure this was successful.

How do we manage the load?! 

However it came at a cost.

  • it was challenging for devs
  • it made dev and testing complex
  • it bloated our codebases
  • it wasn't always done right
  • issues were usually only spotted once these threads ran live and encountered the real world for the first time. We were now in the thick of things…

This gave rise to the idea of data streaming, which gave rise to Kafka!

Kafka and Data streams for the win

Yes, my eyes are still googlied <- legit

I love high performance applications. I mean, we can all build a hello world project, but to build a highly performant, fault tolerant, scalable and real-time application that can process a million messages quickly takes something special. (Quick shout out to Reece and the SDP Platforms team (wink))

KAFKA is purpose-built for this very need.

Kafka Terms and Concepts 

BROKER

A Kafka broker is just a running instance of Kafka.

It's the hub, the mother ship, the ring leader, the conductor, the centrepiece of the show, the Thrilla in Manila… ok, too far, but you get my vibe.

TOPIC 

Brokers are where we create topics for consuming.

A topic is really just a layer of business logic events. For example, if we were in the business of baking bread, a topic of interest would be:

a topic of interest would be:

  • “how long has the dough rested”
  • “is the oven hot enough to begin”
  • “how long has the bun been in the oven”

PRODUCER

A producer is one that will create or add to a topic. When an event is raised, the producer springs into action.

So in our case above, the oven would produce an event to the topics:

  • “is the oven hot enough to begin”
  • “how long has the bun been in the oven”

But what about the other topic?

The oven has no need to care about how long the dough has rested. It just cares whether it itself is ready, and whether it has completed the job.

This then means the baker would be the only producer to the other topic

  • “how long has the dough rested”
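In code, the oven-as-producer idea might be sketched like this with the Kafka Java client (topic names, broker address and values are illustrative assumptions from the bakery example):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OvenProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The oven only produces to the topics it cares about
            producer.send(new ProducerRecord<>("is-the-oven-hot-enough-to-begin", "oven-1", "true"));
            producer.send(new ProducerRecord<>("how-long-has-the-bun-been-in-the-oven", "bun-42", "18"));
        }
    }
}
```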

CONSUMER

A consumer of a topic is one that simply subscribes to a topic and is updated when something in that topic changes.

In our bread-winner example above, our baker or bakers would be the consumers of all these topics… well, we hope, for bread's sake!

The Kafka value

So far I’ve explained some differences and some terms, but not the value.

As seen in the Kafka image above, we are now able to do a couple of things BEFORE our application and databases are hit.

BOOM! VALUE! 

We now can:

  • field multiple threads
  • compute at mad speeds
  • store only when necessary
  • do all this in near real time, almost seamlessly, at lower operational costs

MORE FOR LESS -> BOOM! EVEN MORE VALUE!

The real 11 herbs and spices are in the manner in which the data is sharded, partitioned, stored, processed and read. But that's a technical walkthrough for another day.

It's like the Oprah of big data processing, eh? You feeling my love for this now?

Confluent

As with all nice things in this world, people can't wait to onboard. Thankfully so!

Working with streams and stream applications, especially from a QA point of view, can be challenging, merely due to the fact that there is a very, very small set of tools for us to use to validate and gain a peek at the data under the hood.

The team at Confluent have done a great job of this; check them out here:

https://www.confluent.io/

Accessing all things streams

I have come across 3 very handy tools that do similar things but can be used in very different applications and ways.

KSQL

A neat little library that lets us query a stream as if it were a database.

Pretty handy when you want to go above and beyond being a consumer of a topic. 

https://www.confluent.io/product/ksql/
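As a taste of KSQL, sticking with the bakery example above (stream, topic and field names are illustrative assumptions, and syntax varies between KSQL versions):

```sql
-- Register a stream over an existing topic
CREATE STREAM oven_events (bun_id VARCHAR, minutes_in INT)
  WITH (KAFKA_TOPIC='how-long-has-the-bun-been-in-the-oven', VALUE_FORMAT='JSON');

-- Then query it as if it were a database table
SELECT bun_id, minutes_in FROM oven_events WHERE minutes_in > 25;
```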

Kafka Control Centre

The Control Centre is a UI designed to hook into your broker.

It will give you all the goodness of KSQL and a streams dashboard. Pretty neat.

https://docs.confluent.io/current/control-center/index.html#

Kafka Rest Proxy

https://docs.confluent.io/current/kafka-rest/index.html

Good as gold – well, for me anyway!

I guess being in QA for so long has really trained my brain to push past what even I as a developer would deem normal behaviour.

i.e. just testing the application as a consumer is not enough.

The REST client is awesome and lets me integrate my integration-testing platform alongside my dev team and application code.

As we produce topics and messages, I get to peer into these, even before consuming them.

I can validate

  • the producer 
  • the message
  • the topic
  • the partitions
  • the successes
  • the failures
  • the speed at which we are processing, and the latency in between. STUNNING!

Conclusion 

I hope this has enlightened you a little on Kafka, streams and some tools you can use to simplify your life.

We actually live in a world of streaming data…

NETFLIX

YOUTUBE

SPOTIFY

Your favourite ONLINE RETAIL store… you know those suggestions of "OTHER ITEMS YOU MIGHT LIKE"?… well… streamed data… It's all around you and me.

The introduction of tooling over Kafka is epic.

As a QA, this was imperative. I need to follow the data from start to end, leaving no bread unbaked.

It was not a simple undertaking for me.

I researched and learnt a lot, tried loads of things and failed, leaned on people with experience to help me through, and eventually found success.

All of these tools come neatly wrapped in Docker containers. They can be run as a whole or as components. This makes working with queues a lot simpler by giving us hooks into our streams.

Don't let a subject like big data processing scare you. Get in there, get your hands dirty and fail! Fail and fail till you bake your bread.

Manage your software delivery like a boss

Introduction

Over the past ten years that I've been in the industry, I have been fortunate enough to bear witness to the beautiful way software delivery methods have evolved.

From the long-lasting design of monoliths to quick and, in some cases, daily deploys of micro-services, we have come a long, long way, but our journey is not done.

Today, we’re moving faster than ever into cloud, serverless, containers and SaaS.  

The Revolution

The key to this revolution was communication. There is indeed, strength in numbers. 

Knowing that we were not struggling alone, that our issues were not unique and that others had solved them, was comforting, motivating and enlightening.

The Bitter Truth

We all became students and masters at the same time. We read, research and learn insatiably. GIVE ME ALL THE KNOWLEDGES!

Learning how to do things better,

Learning how others found success in similar situations! YAS!

We're amped! We're pumped, we try it out and it's going to be amazing… and it fails!

It's hard… it's not working… it's taking longer than expected.

wait.. what?!

What just happened

Variables. Life just happened. It sucks, but it's a lesson that was always yours… waiting for just you.

Project Risk

As a lead, take careful consideration of your project's strengths and weaknesses.

These risks will surely impact your delivery.

Focus on quality early

With the introduction of micro-services, we have seen testing get pushed left.

Like into the bush, and we forget about it? Erhm… NO.

By pushing testing left, we attempt to push the top tiers of the test pyramid down.

An example here would be refining our system tests to pose as integration tests instead.

The goal of this concept has always been the same: faster feedback.

TIP: Don't get bogged down by the concept; rather, focus on the goal. The shift can happen in both directions.

Have a repeatable game plan

Build a vision for delivery with your team.

Understand each move your people and software make, and draw a picture.

Create flow diagrams and maintain these as you and the team grow, making them your own and not an internet copy-and-paste.

Most of the time, everything you read will be wrong for you. OMG…

I just mean that when we read something, most of the time we don't have the full picture. A new start in your team or the exit of an old soul will dramatically change how you deliver. Shift and adapt with it.

TIP: Don't forget to revisit your delivery process often!

Below is an example and a good head start for you to build and share your flow. Make it your own!

Conclusion

As you forge forward in your ambitions to deliver high-quality software sooner, build a plan that is lean but high value to your team.

  • Use tools wisely.
  • Use others' learnings and suggestions sparingly.

Working with spring properties and profiles

Today I have a very simple application for you, to demonstrate how to use profiles to control environment variables.

To skip the build and access the code, you can find it here:

https://github.com/suveerprithipal/spring_properties_and_profiles

Let's keep going.

Why do we want this?

So we can swap profiles when testing locally vs. in the cloud, or dev vs. test.

How do we build this?

Firstly, we need to create an environment variable. To do so on Windows 10:

On the Windows desktop, search for "advanced system settings".

Windows_Search_Advanced_system_settings

Select ‘Advanced system settings’

Advanced_system_settings

  • Click Environment Variable
  • Click “New” under ‘System Variables’
  • Set Variable name of ‘APP_URL’
  • Set Variable value of ‘cloud’

add_system_var

Next up, I created a Maven project and added in my Spring Boot dependencies. We then need to create some profiles. Easy peasy. To do so, we actually create properties files. Yep, that simple.

Under your java/test filepath, create a resources dir. Remember to right-click it and mark it as 'Test Resources Root'. Then create 2 properties files:

  • properties-local.properties
  • properties-cloud.properties

adding_properites_files

In each of them, add a variable called 'my.app.url', as the following demonstrates.

In properties-local.properties: my.app.url = local

local_props
In properties-cloud.properties: my.app.url = ${app_url}

cloud_props

We use "${app_url}" here as we want to point our test to use the system variable we created. When Spring Boot runs and sees "${}", it knows to go looking for a definition of that variable on the system.

We're almost done, and now in a position to add some tests to access these variables we've created above. Create a test, or use the one created by default if you have one.

In our test class, create a class member of type 'Environment' (org.springframework.core.env.Environment). I called mine "env". Don't forget to @Autowired (org.springframework.beans.factory.annotation.Autowired) this member, as we want a bean created for it at run-time.

We create 2 tests:

  • localTest
  • cloudTest

testclass

These are very similar in nature, but have 2 different asserts:

  • localTest: Assert.assertEquals("local", env.getProperty("my.app.url"));
  • cloudTest: Assert.assertEquals("cloud", env.getProperty("my.app.url"));
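Put together, the test class might look something like this (a sketch assuming JUnit 4 and Spring Boot's test support; class name and annotation choices are assumptions, not the repo's exact code):

```java
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.core.env.Environment;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class ProfilePropertiesTest {

    @Autowired
    private Environment env; // bean created at run-time

    @Test
    public void localTest() {
        // Run with -Dspring.profiles.active=local
        Assert.assertEquals("local", env.getProperty("my.app.url"));
    }

    @Test
    public void cloudTest() {
        // Run with -Dspring.profiles.active=cloud; ${app_url} resolves from the system
        Assert.assertEquals("cloud", env.getProperty("my.app.url"));
    }
}
```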

Lastly, we want to specify which profile Spring should use when running. To do this, we will add a VM option to the run configuration.

I ran my 2 tests above individually. This created a run configuration for me automatically. I then went in to edit these configs.

  • localTest run config: I set the VM option to -Dspring.profiles.active=local. Apply and save.
  • cloudTest run config: I set the VM option to -Dspring.profiles.active=cloud. Apply and save.

runconfig

Now when we run these, Spring will swap out our properties file based on the profile we selected. You will see that the local env.getProperty will return "local" and the cloud env.getProperty will return "cloud", which we set on the system.

That’s it! You’re done! Happy Testing.