Change your Mindset

Test Automation and ATDD

How ATDD changed the way I create solutions

My encounter with Automated Testing

The first time I heard the term Test-Driven Development (TDD) was when talking to Marije Brummel at the NAV Techdays in Antwerp in 2016. Not much later, Luc van Vugt, the guru on anything related to testing and BC, gave a presentation at the Dutch Dynamics Community about how to use the standard Test Automation Suite. Interesting stuff, was my first reaction. I should have a look at this. But as with so many things, I didn’t take the time to dive into it. Too busy doing things the old way. A shame. It could have changed my way of working much earlier…

Finally, when starting to develop for AppSource, I was forced to do something with Automated Testing (AT). Back then it was mandatory to have automated tests in place when submitting an app for AppSource. A good push to take Luc’s book Automated Testing in Microsoft Dynamics 365 Business Central off the shelf, dust it off and get started. Then I got hooked and a journey started. A journey with bumpy roads, side-tracks, u-turns, and dead-end roads. A journey on which it took some time to find my way, but one that I regret not having taken a long time ago. It finally led me to my 5-step approach: a method that got AT and Acceptance Test-Driven Development (ATDD) working for me.

Note: Luc is about to finish a totally revised second edition: Automated Testing in Microsoft Dynamics 365 Business Central.



Before continuing, let’s get some abbreviations and definitions clear.

  • AT = Automated Testing: the automation of application tests; scripting manual application tests that check the validity of features.
  • TDD = Test Driven Development: a development process based on test cases, or as Luc describes it: ‘No tests, no code’.
  • ATDD = Acceptance Test Driven Development: a development methodology based on communication between the business customers, the developers, and the testers.

Wikipedia: ATDD is closely related to test-driven development (TDD). It differs by the emphasis on developer-tester-business customer collaboration. ATDD encompasses acceptance testing, but highlights writing acceptance tests before developers begin coding.

ATDD is more focused on the behavior the end user is requesting. ATDD follows a test case design pattern. For me it is a way to get AT for BC in place. Nowadays, ATDD is also used by Microsoft to test their own Base and System apps. There is no sense in copying good information that is already available, so for more information about ATDD, see chapter 5 of Luc’s book.



So, after reading some theory, I started experimenting a bit with the test tools in BC and writing tests. But soon I started to struggle. As Luc writes on the first page of the first chapter:

‘Testing is not rocket science. Nor is automated testing. It’s just another learnable skill. From a developer’s perspective, however, it requires a change of mindset to write code with a totally different purpose than you are used to. And we all know that change is often not the easiest thing to achieve.’ 

I will spare you all the experiences of my journey and focus on my findings and results. But first some background about where we come from.

In the old days, before ATDD, a NAV/BC developer started programming based on some kind of requirements: a customer’s wish, like a use case or functional design. After programming, the developer manually tested their own work a bit and then handed it over to the consultant or key-user for more testing. This resulted in either approval or rework, the latter caused by bad programming, incomplete requirements or just misunderstanding. A process of redesign, rework and testing started. This often repeated and could be very time-consuming and tedious for the people involved.

TDD tackles this problem. With TDD, development only starts when test cases have been defined. ATDD goes a bit further: it provides a methodology and pattern for writing test scenarios. This is a team effort and should be done by everyone involved: customer, product owner, consultant and developer. The developer writes the test code and app code based on these test scenarios. The development is finished when all tests pass. There is no repeating rework-testing process anymore. And this is what makes ATDD different from the old way of working. It is a mind-shift, especially for the many NAV/BC dinosaur consultants, developers and managers.


So, it’s all about scenarios

In my experience, defining what a business exactly does and how the software must support this has often been one of the most neglected parts of the NAV/BC development process, besides testing of course ;). ATDD solves both issues. According to Luc, writing test scenarios can ‘kill five birds with one stone’. That is, breaking down each wish into a list of tests serves the following purposes:

  1. Break down the customer wish into a list of tests (test design) describing how the software should behave, making it our primary vehicle of communication in the next steps.
  2. Implement this behavior with application code.
  3. Execute manual tests in a structured way, to check the behavior.
  4. Code test automation to check the behavior.
  5. Keep up-to-date documentation of how your solution behaves.

But how to get these test scenarios? How to specify them?

As written before, ATDD is a development methodology to write down scenarios. A scenario is a single test case and is described by a GIVEN, WHEN and THEN: GIVEN some data setup, WHEN doing a specific action, THEN I expect a certain result.

The basis for writing scenarios can be any customer requirement in the form of a User Story. This fits seamlessly with the agile development process called Behavior-Driven Development (BDD). The Wikipedia topic on BDD gives the following example. It shows you how to get from a User Story to the scenarios in ATDD format.

Title: Returns and exchanges go to inventory.

User Story:  As a store owner, I want to add items back to inventory when they are returned or exchanged, so that I can track inventory.

Scenario 1: Items returned for refund should be added to inventory.

  • GIVEN that a customer previously bought a black sweater from me,
  • GIVEN I have three black sweaters in inventory,
  • WHEN they return the black sweater for a refund,
  • THEN I should have four black sweaters in inventory.

Scenario 2: Exchanged items should be returned to inventory.

  • GIVEN that a customer previously bought a blue garment from me,
  • GIVEN I have two blue garments in inventory
  • GIVEN three black garments in inventory,
  • WHEN they exchange the blue garment for a black garment,
  • THEN I should have three blue garments in inventory
  • THEN two black garments in inventory.
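In BC, such a scenario maps one-to-one onto a test function in a test codeunit. The sketch below is illustrative only: the codeunit number and names are made up, and CreateBlackSweaterWithInventory and PostReturnOrderFor are hypothetical helpers standing in for calls to the standard test libraries; the [Test] attribute, the Subtype = Test property, the assertion codeunit and the tag comments follow the common BC test pattern.

```al
codeunit 50140 "Returns To Inventory Tests"
{
    // Scenario 1 from the BDD example, written as a BC test function
    Subtype = Test;

    var
        Assert: Codeunit "Library Assert";

    [Test]
    procedure ItemsReturnedForRefundAreAddedToInventory()
    var
        Item: Record Item;
    begin
        // [SCENARIO] Items returned for refund should be added to inventory
        // [GIVEN] A black sweater with three pieces in inventory
        CreateBlackSweaterWithInventory(Item, 3); // hypothetical helper
        // [WHEN] The customer returns one black sweater for a refund
        PostReturnOrderFor(Item, 1); // hypothetical helper
        // [THEN] Four black sweaters are in inventory
        Item.CalcFields(Inventory);
        Assert.AreEqual(4, Item.Inventory, 'inventory after refund return');
    end;

    local procedure CreateBlackSweaterWithInventory(var Item: Record Item; Qty: Decimal)
    begin
        // implementation omitted: create the item and post a positive
        // adjustment of Qty pieces using the standard test libraries
    end;

    local procedure PostReturnOrderFor(Item: Record Item; Qty: Decimal)
    begin
        // implementation omitted: create and post a sales return order
        // for Qty pieces of the given item
    end;
}
```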


How ATDD works for me

So, after experimenting a bit with writing scenarios and test functions, I came to a 5-step method that incorporates the ATDD methodology. It works for me when developing new features for BC.

First I do an analysis and write a User Story+ (1.) for every feature to be developed, just like in the BDD example above. With the ‘+’ I indicate that a User Story alone is often not enough to define the scope. It can be complemented by a Use Case, Functional Design, Flow Chart, BPMN and/or ERD, but I try to keep them as concise as possible. Then I write down all possible test scenarios following the ATDD pattern (2.). For every feature (User Story), I create one (or more) test codeunits. For every test scenario I create a test function, and in this function I write down the test scenario with the GIVEN-WHEN-THEN tags.
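To illustrate step 2: at this stage a test function contains nothing more than the scenario written down as GIVEN-WHEN-THEN comments, without any test code. A sketch, reusing the scenario from the BDD example (codeunit number and names are made up):

```al
codeunit 50141 "Feature Tests"
{
    Subtype = Test;

    [Test]
    procedure ReturnedItemsAreAddedBackToInventory()
    begin
        // [SCENARIO] Items returned for refund should be added to inventory
        // [GIVEN] A black sweater with three pieces in inventory
        // [WHEN] The customer returns the black sweater for a refund
        // [THEN] Four black sweaters are in inventory
    end;
}
```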

When that is finished, I write the app code (3.). When I think I have finished the app code, I often first do a quick manual test. When it fails, I go back to the app code, until it succeeds. Then I start programming the test code (4.) and test the functions one by one, using the AL Test Tool or the AL Test Runner extension by James Pearson. I try to make every test function independent of the other test functions. When all test functions of the feature are finished and succeed, I run all the test codeunits for the app. It might happen that something in the new feature makes a scenario of another function fail. This shouldn’t happen when the analysis was complete ;). The last step to finish off the feature (5.) is writing the end-user documentation for the (online) help. I won’t go into details about that; it’s worth a topic of its own.

In the ideal situation, the above is a linear process. However, when testing the test code, I might discover that the app code is not correct. Then I must return to the app code and fix it. Notice that I don’t have to wait for a consultant or key-user to test. I might also discover that the app code is not the problem, but that the test scenario was not right. Ouch, then I must return to the test scenarios. As you see, specifying the scenarios is a very important step in the process.

Note that in step 2 I only write the GIVEN-WHEN-THEN tags in plain text. I do not yet program any test code, not even pseudo-code making use of helper functions. Because of this, according to Luc, my 5 steps are not exactly TDD.


Findings and Conclusions

  • ATDD is a methodology that requires a totally different mindset and way of working, not only for the developer, but for the whole team. It might frustrate you at first: it takes time to get used to. You need more time to get the scenarios written down before a developer can start programming. A challenge for many involved, not only developers.
  • Writing down the scenarios can be a challenge. Try to make the scenarios as simple and short as possible, although larger scenarios cannot always be avoided. Ideally, all possible functional scenarios are covered by a test scenario.
  • Writing the test code, and testing the test code itself, can take some time, especially when you have many and/or complicated scenarios. The developer not only has to write the app code, but is also responsible for making all tests succeed. It takes discipline.
  • It can take some time before you have the test code structured: how to specify your test scenarios, how to set up test codeunits, test functions and helper functions, which fixture to use, and how to use the standard library functions and make your own. Just to name some challenges.
  • A developer doesn’t rely on consultants or key-users for testing: after finishing the test code, the development is done. Finished. When the functionality afterwards turns out not to be as expected, most likely the scenarios were not right. The whole process is a more linear way of working.
  • A big plus of developing based on ATDD is that you can easily test all your scenarios at any time. You don’t need tedious and time-consuming manual testing anymore. In my case, I have about 250 test scenarios; they run within 2 minutes in a Docker container on a 3-year-old laptop. It takes 4 hours to test it all 400 times (I also test with different workdates), which results in 60k+ posted documents and 370k+ G/L entries. Imagine doing this manually!

It took me some time to get used to it, but now I don’t want to work without ATDD anymore. ATDD provides a structured way of working, saves time in the end, and improves the quality of the software. If you want to get started with ATDD, start by reading Luc’s book, get your team involved and find a way to implement this new paradigm bit by bit. Just get started with ATDD!


