Run the Tests Again to Make Sure They Pass Consistently

Unit testing is an essential instrument in the toolbox of any serious software developer. However, it can sometimes be quite difficult to write a good unit test for a particular piece of code. Having difficulty testing their own or someone else's code, developers often think that their struggles are caused by a lack of some fundamental testing knowledge or secret unit testing techniques.

In this unit testing tutorial, I intend to demonstrate that unit tests are quite easy; the real problems that complicate unit testing, and introduce expensive complexity, are a result of poorly designed, untestable code. We will discuss what makes code hard to test, which anti-patterns and bad practices we should avoid to improve testability, and what other benefits we can achieve by writing testable code. We will see that writing unit tests and producing testable code is not just about making testing less troublesome, but about making the code itself more robust and easier to maintain.

What Is Unit Testing?

Essentially, a unit test is a method that instantiates a small portion of our application and verifies its behavior independently from other parts. A typical unit test contains three phases: First, it initializes a small piece of the application it wants to test (also known as the system under test, or SUT), then it applies some stimulus to the system under test (usually by calling a method on it), and finally, it observes the resulting behavior. If the observed behavior is consistent with the expectations, the unit test passes; otherwise, it fails, indicating that there is a problem somewhere in the system under test. These three unit test phases are also known as Arrange, Act and Assert, or simply AAA.

A unit test can verify different behavioral aspects of the system under test, but most likely it will fall into one of the following two categories: state-based or interaction-based. Verifying that the system under test produces correct results, or that its resulting state is correct, is called state-based unit testing, while verifying that it properly invokes certain methods is called interaction-based unit testing.
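
To make the distinction concrete, here is a minimal sketch of both styles. The OrderProcessor, IOrderNotifier, and FakeOrderNotifier types are illustrative assumptions invented for this sketch; they are not part of the article's examples.

    public interface IOrderNotifier { void NotifyProcessed(int orderId); }

    // A hand-rolled fake that records whether it was called.
    public class FakeOrderNotifier : IOrderNotifier
    {
        public bool WasNotified { get; private set; }
        public void NotifyProcessed(int orderId) { WasNotified = true; }
    }

    public class OrderProcessor
    {
        private readonly IOrderNotifier _notifier;
        public OrderProcessor(IOrderNotifier notifier) { _notifier = notifier; }

        public decimal Process(int orderId, decimal price, int quantity)
        {
            _notifier.NotifyProcessed(orderId);
            return price * quantity;
        }
    }

    [TestMethod]
    public void Process_ForThreeItems_ReturnsTotalPrice()
    {
        // State-based: check the result the method returns.
        var processor = new OrderProcessor(new FakeOrderNotifier());
        decimal total = processor.Process(42, 10m, 3);
        Assert.AreEqual(30m, total);
    }

    [TestMethod]
    public void Process_Always_NotifiesCollaborator()
    {
        // Interaction-based: check that the collaborator was invoked.
        var notifier = new FakeOrderNotifier();
        var processor = new OrderProcessor(notifier);
        processor.Process(42, 10m, 3);
        Assert.IsTrue(notifier.WasNotified);
    }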

As a metaphor for proper software unit testing, imagine a mad scientist who wants to build some supernatural chimera, with frog legs, octopus tentacles, bird wings, and a dog's head. (This metaphor is pretty close to what programmers actually do at work.) How would that scientist make sure that every part (or unit) he picked actually works? Well, he can take, let's say, a single frog's leg, apply an electrical stimulus to it, and check for proper muscle contraction. What he is doing is essentially the same Arrange-Act-Assert steps of a unit test; the only difference is that, in this case, unit refers to a physical object, not to an abstract object we build our programs from.

[Illustration: what is unit testing]

I will use C# for all examples in this article, but the concepts described apply to all object-oriented programming languages.

An example of a simple unit test could look like this:

    [TestMethod]
    public void IsPalindrome_ForPalindromeString_ReturnsTrue()
    {
        // In the Arrange phase, we create and set up a system under test.
        // A system under test could be a method, a single object, or a graph of connected objects.
        // It is OK to have an empty Arrange phase, for example if we are testing a static method -
        // in this case the SUT already exists in a static form and we don't have to initialize anything explicitly.
        PalindromeDetector detector = new PalindromeDetector();

        // The Act phase is where we poke the system under test, usually by invoking a method.
        // If this method returns something back to us, we want to collect the result to ensure it was correct.
        // Or, if the method doesn't return anything, we want to check whether it produced the expected side effects.
        bool isPalindrome = detector.IsPalindrome("kayak");

        // The Assert phase makes our unit test pass or fail.
        // Here we check that the method's behavior is consistent with expectations.
        Assert.IsTrue(isPalindrome);
    }

Unit Tests vs. Integration Tests

Another important thing to consider is the difference between unit testing and integration testing.

The purpose of a unit test in software engineering is to verify the behavior of a relatively small piece of software, independently from other parts. Unit tests are narrow in scope, and allow us to cover all cases, ensuring that every single part works correctly.

On the other hand, integration tests demonstrate that different parts of a system work together in a real-life environment. They validate complex scenarios (we can think of an integration test as a user performing some high-level operation within our system), and usually require external resources, like databases or web servers, to be present.

Let's go back to our mad scientist metaphor, and suppose that he has successfully combined all the parts of the chimera. He wants to perform an integration test of the resulting creature, making sure that it can, let's say, walk on different types of terrain. First of all, the scientist must emulate an environment for the creature to walk on. Then, he throws the creature into that environment and pokes it with a stick, observing whether it walks and moves as designed. After finishing a test, the mad scientist cleans up all the dirt, sand, and rocks that are now scattered across his lovely laboratory.

[Illustration: unit testing example]

Notice the significant difference between unit and integration tests: A unit test verifies the behavior of a small part of the application, isolated from the environment and other parts, and is quite easy to implement, while an integration test covers interactions between different components, in a close-to-real-life environment, and requires more effort, including additional setup and teardown phases.

A reasonable combination of unit and integration tests ensures that every single unit works correctly, independently from the others, and that all these units play nicely when integrated, giving us a high level of confidence that the whole system works as expected.

However, we must always remember to identify which kind of test we are implementing: a unit test or an integration test. The difference can sometimes be deceiving. If we think we are writing a unit test to verify some subtle edge case in a business logic class, and then realize that it requires external resources like web services or databases to be present, something is not right; essentially, we are using a sledgehammer to crack a nut. And that means bad design.

What Makes a Good Unit Test?

Before diving into the main part of this tutorial and writing unit tests, let's quickly discuss the properties of a good unit test. Unit testing principles demand that a good test is:

  • Easy to write. Developers typically write lots of unit tests to cover different cases and aspects of the application's behavior, so it should be easy to code all of those test routines without enormous effort.

  • Readable. The intent of a unit test should be clear. A good unit test tells a story about some behavioral aspect of our application, so it should be easy to understand which scenario is being tested and, if the test fails, easy to see how to address the problem. With a good unit test, we can fix a bug without actually debugging the code!

  • Reliable. Unit tests should fail only if there's a bug in the system under test. That seems pretty obvious, but programmers often run into an issue where their tests fail even when no bugs were introduced. For example, tests may pass when run one by one, but fail when the whole test suite is run, or pass on our development machine and fail on the continuous integration server. These situations are indicative of a design flaw. Good unit tests should be reproducible and independent from external factors such as the environment or running order.

  • Fast. Developers write unit tests so they can repeatedly run them and check that no bugs have been introduced. If unit tests are slow, developers are more likely to skip running them on their own machines. One slow test won't make a significant difference; add a thousand more and we're surely stuck waiting for a while. Slow unit tests may also indicate that either the system under test, or the test itself, interacts with external systems, making it environment-dependent.

  • Truly unit, not integration. As we already discussed, unit and integration tests have different purposes. Both the unit test and the system under test should not access network resources, databases, the file system, etc., to eliminate the influence of external factors.

That's it; there are no secrets to writing unit tests. However, there are some techniques that allow us to write testable code.

Testable and Untestable Code

Some code is written in such a way that it is hard, or even impossible, to write a good unit test for it. So, what makes code hard to test? Let's review some anti-patterns, code smells, and bad practices that we should avoid when writing testable code.

Poisoning the Codebase with Non-Deterministic Factors

Let's start with a simple example. Imagine that we are writing a program for a smart home microcontroller, and one of the requirements is to automatically turn on the light in the backyard if some motion is detected there during the evening or at night. We have started from the bottom up by implementing a method that returns a string representation of the approximate time of day ("Night", "Morning", "Afternoon" or "Evening"):

    public static string GetTimeOfDay()
    {
        DateTime time = DateTime.Now;
        if (time.Hour >= 0 && time.Hour < 6)
        {
            return "Night";
        }
        if (time.Hour >= 6 && time.Hour < 12)
        {
            return "Morning";
        }
        if (time.Hour >= 12 && time.Hour < 18)
        {
            return "Afternoon";
        }
        return "Evening";
    }

Essentially, this method reads the current system time and returns a result based on that value. So, what's wrong with this code?

If we think about it from the unit testing perspective, we'll see that it is not possible to write a proper state-based unit test for this method. DateTime.Now is, essentially, a hidden input that will probably change during program execution or between test runs. Thus, subsequent calls to it will produce different results.

Such non-deterministic behavior makes it impossible to test the internal logic of the GetTimeOfDay() method without actually changing the system date and time. Let's take a look at how such a test would need to be implemented:

    [TestMethod]
    public void GetTimeOfDay_At6AM_ReturnsMorning()
    {
        try
        {
            // Setup: change system time to 6 AM
            ...

            // Arrange phase is empty: testing a static method, nothing to initialize

            // Act
            string timeOfDay = GetTimeOfDay();

            // Assert
            Assert.AreEqual("Morning", timeOfDay);
        }
        finally
        {
            // Teardown: roll system time back
            ...
        }
    }

Tests like this would violate a lot of the rules discussed earlier. They would be expensive to write (because of the non-trivial setup and teardown logic), unreliable (they may fail even if there are no bugs in the system under test, due to system permission issues, for example), and not guaranteed to run fast. And, finally, this test would not actually be a unit test; it would be something between a unit test and an integration test, because it pretends to test a simple edge case but requires an environment to be set up in a particular way. The result is not worth the effort, huh?

It turns out that all these testability problems are caused by the low-quality GetTimeOfDay() API. In its current form, this method suffers from several issues:

  • It is tightly coupled to the concrete data source. It is not possible to reuse this method for processing date and time retrieved from other sources, or passed as an argument; the method works only with the date and time of the particular machine that executes the code. Tight coupling is the primary root of most testability problems.

  • It violates the Single Responsibility Principle (SRP). The method has multiple responsibilities; it consumes the information and also processes it. Another indicator of an SRP violation is when a single class or method has more than one reason to change. From this perspective, the GetTimeOfDay() method could be changed either because of internal logic adjustments, or because the date and time source should be changed.

  • It lies about the information required to get its job done. Developers must read every line of the actual source code to understand what hidden inputs are used and where they come from. The method signature alone is not enough to understand the method's behavior.

  • It is hard to predict and maintain. The behavior of a method that depends on mutable global state cannot be predicted by merely reading the source code; it is necessary to take into account its current value, along with the whole sequence of events that could have changed it earlier. In a real-world application, trying to unravel all that stuff becomes a real headache.

After reviewing the API, let's finally fix it! Fortunately, this is much easier than discussing all of its flaws; we just need to break apart the tightly coupled concerns.

Fixing the API: Introducing a Method Argument

The most obvious and easy way of fixing the API is by introducing a method argument:

    public static string GetTimeOfDay(DateTime dateTime)
    {
        if (dateTime.Hour >= 0 && dateTime.Hour < 6)
        {
            return "Night";
        }
        if (dateTime.Hour >= 6 && dateTime.Hour < 12)
        {
            return "Morning";
        }
        if (dateTime.Hour >= 12 && dateTime.Hour < 18)
        {
            return "Afternoon";
        }
        return "Evening";
    }

Now the method requires the caller to provide a DateTime argument, instead of secretly looking for this information by itself. From the unit testing perspective, this is great; the method is now deterministic (i.e., its return value fully depends on the input), so state-based testing is as easy as passing some DateTime value and checking the result:

    [TestMethod]
    public void GetTimeOfDay_For6AM_ReturnsMorning()
    {
        // Arrange phase is empty: testing a static method, nothing to initialize

        // Act
        string timeOfDay = GetTimeOfDay(new DateTime(2015, 12, 31, 06, 00, 00));

        // Assert
        Assert.AreEqual("Morning", timeOfDay);
    }

Notice that this simple refactoring also solved all the API issues discussed before (tight coupling, SRP violation, an unclear and hard to understand API) by introducing a clear seam between what data should be processed and how it should be done.

Excellent: the method is testable, but what about its clients? Now it is the caller's responsibility to provide the date and time to the GetTimeOfDay(DateTime dateTime) method, meaning that callers could become untestable if we don't pay enough attention. Let's take a look at how we can deal with that.

Fixing the Client API: Dependency Injection

Say we keep working on the smart home system, and implement the following client of the GetTimeOfDay(DateTime dateTime) method: the smart home microcontroller code responsible for turning the light on or off, based on the time of day and the detection of motion:

    public class SmartHomeController
    {
        public DateTime LastMotionTime { get; private set; }

        public void ActuateLights(bool motionDetected)
        {
            DateTime time = DateTime.Now; // Ouch!

            // Update the time of last motion.
            if (motionDetected)
            {
                LastMotionTime = time;
            }

            // If motion was detected in the evening or at night, turn the light on.
            string timeOfDay = GetTimeOfDay(time);
            if (motionDetected && (timeOfDay == "Evening" || timeOfDay == "Night"))
            {
                BackyardLightSwitcher.Instance.TurnOn();
            }
            // If no motion is detected for one minute, or if it is morning or day, turn the light off.
            else if (time.Subtract(LastMotionTime) > TimeSpan.FromMinutes(1) || (timeOfDay == "Morning" || timeOfDay == "Afternoon"))
            {
                BackyardLightSwitcher.Instance.TurnOff();
            }
        }
    }

Ouch! We have the same kind of hidden DateTime.Now input problem; the only difference is that it is now located a little bit higher up the abstraction level. To solve this issue, we could introduce another argument, once again delegating the responsibility of providing a DateTime value to the caller of a new method with the signature ActuateLights(bool motionDetected, DateTime dateTime). But, instead of moving the problem a level higher in the call stack again, let's employ another technique that will allow us to keep both the ActuateLights(bool motionDetected) method and its clients testable: Inversion of Control, or IoC.

Inversion of Control is a simple, but extremely useful, technique for decoupling code, and for unit testing in particular. (After all, keeping things loosely coupled is essential for being able to analyze them independently from each other.) The key point of IoC is to separate controlling code (when to do something) from action code (what to do when something happens). This technique increases flexibility, makes our code more modular, and reduces coupling between components.

Inversion of Control can be implemented in a number of ways; let's take a look at one particular example, Dependency Injection using a constructor, and how it can help in building a testable SmartHomeController API.

First, let's create an IDateTimeProvider interface, containing a method signature for obtaining some date and time:

    public interface IDateTimeProvider
    {
        DateTime GetDateTime();
    }

Then, make SmartHomeController reference an IDateTimeProvider implementation, and delegate to it the responsibility of obtaining the date and time:

    public class SmartHomeController
    {
        private readonly IDateTimeProvider _dateTimeProvider; // Dependency

        public SmartHomeController(IDateTimeProvider dateTimeProvider)
        {
            // Inject the required dependency in the constructor.
            _dateTimeProvider = dateTimeProvider;
        }

        public void ActuateLights(bool motionDetected)
        {
            DateTime time = _dateTimeProvider.GetDateTime(); // Delegating the responsibility

            // Remaining light control logic goes here...
        }
    }

Now we can see why Inversion of Control is so called: the control of which mechanism to use for reading the date and time was inverted, and now belongs to the client of SmartHomeController, not to SmartHomeController itself. As a result, the execution of the ActuateLights(bool motionDetected) method fully depends on two things that can be easily managed from the outside: the motionDetected argument, and a concrete implementation of IDateTimeProvider, passed into the SmartHomeController constructor.

Why is this significant for unit testing? It means that different IDateTimeProvider implementations can be used in production code and in unit test code. In the production environment, some real-life implementation will be injected (e.g., one that reads the actual system time). In the unit test, however, we can inject a "fake" implementation that returns a constant or predefined DateTime value suitable for testing the particular scenario.
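
For example, the real-life production implementation might simply read the system clock. The article does not show this class, so treat the following as a minimal sketch with an assumed name:

    public class SystemDateTimeProvider : IDateTimeProvider
    {
        // Reads the actual system clock; suitable for production, not for deterministic unit tests.
        public DateTime GetDateTime() { return DateTime.Now; }
    }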

A fake implementation of IDateTimeProvider could look like this:

    public class FakeDateTimeProvider : IDateTimeProvider
    {
        public DateTime ReturnValue { get; set; }

        public DateTime GetDateTime() { return ReturnValue; }

        public FakeDateTimeProvider(DateTime returnValue) { ReturnValue = returnValue; }
    }

With the help of this class, it is possible to isolate SmartHomeController from non-deterministic factors and perform a state-based unit test. Let's verify that, if motion was detected, the time of that motion is recorded in the LastMotionTime property:

    [TestMethod]
    public void ActuateLights_MotionDetected_SavesTimeOfMotion()
    {
        // Arrange
        var controller = new SmartHomeController(new FakeDateTimeProvider(new DateTime(2015, 12, 31, 23, 59, 59)));

        // Act
        controller.ActuateLights(true);

        // Assert
        Assert.AreEqual(new DateTime(2015, 12, 31, 23, 59, 59), controller.LastMotionTime);
    }

Great! A test like this was not possible before the refactoring. Now that we've eliminated non-deterministic factors and verified the state-based scenario, do you think SmartHomeController is fully testable?

Poisoning the Codebase with Side Effects

Despite the fact that we solved the problems caused by the non-deterministic hidden input, and we were able to test certain functionality, the code (or, at least, some of it) is still untestable!

Let's review the following part of the ActuateLights(bool motionDetected) method, responsible for turning the light on or off:

    // If motion was detected in the evening or at night, turn the light on.
    if (motionDetected && (timeOfDay == "Evening" || timeOfDay == "Night"))
    {
        BackyardLightSwitcher.Instance.TurnOn();
    }
    // If no motion was detected for one minute, or if it is morning or day, turn the light off.
    else if (time.Subtract(LastMotionTime) > TimeSpan.FromMinutes(1) || (timeOfDay == "Morning" || timeOfDay == "Afternoon"))
    {
        BackyardLightSwitcher.Instance.TurnOff();
    }

As we can see, SmartHomeController delegates the responsibility of turning the light on or off to a BackyardLightSwitcher object, which implements the Singleton pattern. What's wrong with this design?

To fully unit test the ActuateLights(bool motionDetected) method, we should perform interaction-based testing in addition to the state-based testing; that is, we should ensure that the methods for turning the light on or off are called if, and only if, the appropriate conditions are met. Unfortunately, the current design does not allow us to do that: the TurnOn() and TurnOff() methods of BackyardLightSwitcher trigger state changes in the system, or, in other words, produce side effects. The only way to verify that these methods were called is to check whether their corresponding side effects actually happened or not, which could be painful.

Indeed, let's suppose that the motion sensor, backyard lantern, and smart home microcontroller are connected into an Internet of Things network and communicate using some wireless protocol. In that case, a unit test could attempt to receive and analyze that network traffic. Or, if the hardware components are connected with a wire, the unit test could check whether voltage was applied to the appropriate electrical circuit. Or, after all, it could check that the light actually turned on or off using an additional light sensor.

As we can see, unit testing side-effecting methods could be as difficult as unit testing non-deterministic ones, and may even be impossible. Any attempt will lead to problems similar to those we've already seen. The resulting test would be hard to implement, unreliable, potentially slow, and not-really-unit. And, after all that, the flashing of the light every time we run the test suite would eventually drive us crazy!

Once again, all these testability problems are caused by the bad API, not by the developer's ability to write unit tests. No matter how exactly the light control is implemented, the SmartHomeController API suffers from these already-familiar issues:

  • It is tightly coupled to the concrete implementation. The API relies on the hard-coded, concrete instance of BackyardLightSwitcher. It is not possible to reuse the ActuateLights(bool motionDetected) method to switch any light other than the one in the backyard.

  • It violates the Single Responsibility Principle. The API has two reasons to change: first, changes to the internal logic (such as choosing to make the light turn on only at night, but not in the evening), and second, replacement of the light-switching mechanism with some other one.

  • It lies about its dependencies. There is no way for developers to know that SmartHomeController depends on the hard-coded BackyardLightSwitcher component, other than digging into the source code.

  • It is hard to understand and maintain. What if the light refuses to turn on when the conditions are right? We could spend a lot of time trying to fix SmartHomeController to no avail, only to realize that the problem was caused by a bug in BackyardLightSwitcher (or, even funnier, a burned-out lightbulb!).

The solution to both the testability and low-quality API problems is, not surprisingly, to break the tightly coupled components apart. As with the previous example, employing Dependency Injection would solve these issues; just add an ILightSwitcher dependency to SmartHomeController, delegate to it the responsibility of flipping the light switch, and pass in a fake, test-only ILightSwitcher implementation that records whether the appropriate methods were called under the right conditions (a sketch of that route follows below). However, instead of using Dependency Injection again, let's review an interesting alternative approach for decoupling the responsibilities.
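
For reference, that Dependency Injection route could look roughly like this before we move on to the alternative. The interface and the recording fake below are illustrative assumptions; the article only describes them in words:

    public interface ILightSwitcher
    {
        void TurnOn();
        void TurnOff();
    }

    // A test-only implementation that simply records which method was called.
    public class FakeLightSwitcher : ILightSwitcher
    {
        public bool TurnOnCalled { get; private set; }
        public bool TurnOffCalled { get; private set; }

        public void TurnOn() { TurnOnCalled = true; }
        public void TurnOff() { TurnOffCalled = true; }
    }

An interaction-based test would then construct SmartHomeController with a FakeLightSwitcher and assert on its TurnOnCalled and TurnOffCalled flags after calling ActuateLights().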

Fixing the API: Higher-Order Functions

This approach is an option in any object-oriented language that supports first-class functions. Let's take advantage of C#'s functional features and make the ActuateLights(bool motionDetected) method accept two more arguments: a pair of Action delegates, pointing to the methods that should be called to turn the light on and off. This solution converts the method into a higher-order function:

    public void ActuateLights(bool motionDetected, Action turnOn, Action turnOff)
    {
        DateTime time = _dateTimeProvider.GetDateTime();

        // Update the time of last motion.
        if (motionDetected)
        {
            LastMotionTime = time;
        }

        // If motion was detected in the evening or at night, turn the light on.
        string timeOfDay = GetTimeOfDay(time);
        if (motionDetected && (timeOfDay == "Evening" || timeOfDay == "Night"))
        {
            turnOn(); // Invoking a delegate: no tight coupling anymore
        }
        // If no motion is detected for one minute, or if it is morning or day, turn the light off.
        else if (time.Subtract(LastMotionTime) > TimeSpan.FromMinutes(1) || (timeOfDay == "Morning" || timeOfDay == "Afternoon"))
        {
            turnOff(); // Invoking a delegate: no tight coupling anymore
        }
    }

This is a more functional-flavored solution than the classic object-oriented Dependency Injection approach we've seen before; it lets us achieve the same result with less code, and with more expressiveness, than Dependency Injection. It is no longer necessary to implement a class that conforms to an interface in order to supply SmartHomeController with the required functionality; instead, we can just pass a function definition. Higher-order functions can be thought of as another way of implementing Inversion of Control.

Now, to perform an interaction-based unit test of the resulting method, we can pass easily verifiable fake actions into it:

    [TestMethod]
    public void ActuateLights_MotionDetectedAtNight_TurnsOnTheLight()
    {
        // Arrange: create a pair of actions that change a boolean variable instead of actually turning the light on or off.
        bool turnedOn  = false;
        Action turnOn  = () => turnedOn = true;
        Action turnOff = () => turnedOn = false;
        var controller = new SmartHomeController(new FakeDateTimeProvider(new DateTime(2015, 12, 31, 23, 59, 59)));

        // Act
        controller.ActuateLights(true, turnOn, turnOff);

        // Assert
        Assert.IsTrue(turnedOn);
    }

Finally, we have made the SmartHomeController API fully testable, and we are able to perform both state-based and interaction-based unit tests for it. Again, notice that in addition to improved testability, introducing a seam between the decision-making and action code helped to solve the tight coupling problem, and led to a cleaner, reusable API.

Now, in order to achieve full unit test coverage, we can simply implement a bunch of similar-looking tests to validate all possible cases; not a big deal, since unit tests are now quite easy to implement.
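
As an illustration of how cheap such tests have become, here is one more sketch (not from the original article) covering the opposite case, where no motion is detected in the morning and the light should be turned off:

    [TestMethod]
    public void ActuateLights_NoMotionInTheMorning_TurnsOffTheLight()
    {
        // Arrange: record which delegate gets invoked.
        bool turnedOff = false;
        Action turnOn  = () => turnedOff = false;
        Action turnOff = () => turnedOff = true;
        var controller = new SmartHomeController(new FakeDateTimeProvider(new DateTime(2015, 12, 31, 09, 00, 00)));

        // Act
        controller.ActuateLights(false, turnOn, turnOff);

        // Assert
        Assert.IsTrue(turnedOff);
    }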

Impurity and Testability

Uncontrolled non-determinism and side effects are similar in their destructive effects on the codebase. When used carelessly, they lead to deceptive, hard to understand and maintain, tightly coupled, non-reusable, and untestable code.

On the other hand, methods that are both deterministic and side-effect-free are much easier to test, reason about, and reuse to build larger programs. In terms of functional programming, such methods are called pure functions. We'll rarely have a problem unit testing a pure function; all we have to do is pass some arguments and check the result for correctness. What really makes code untestable is hard-coded, impure factors that cannot be replaced, overridden, or abstracted away in some other way.

Impurity is toxic: if method Foo() depends on a non-deterministic or side-effecting method Bar(), then Foo() becomes non-deterministic or side-effecting as well. Eventually, we may end up poisoning the entire codebase. Multiply all these problems by the size of a complex real-life application, and we'll find ourselves burdened with a hard to maintain codebase full of smells, anti-patterns, hidden dependencies, and all sorts of ugly and unpleasant things.

[Illustration: unit testing example]

However, impurity is inevitable; any real-life application must, at some point, read and manipulate state by interacting with the environment, databases, configuration files, web services, or other external systems. So instead of aiming to eliminate impurity altogether, it's a good idea to limit these factors, avoid letting them poison your codebase, and break hard-coded dependencies as much as possible, in order to be able to analyze and unit test things independently.

Common Warning Signs of Hard to Test Code

Trouble writing tests? The problem's not in your test suite. It's in your code.

Finally, let's review some common warning signs indicating that our code might be difficult to test.

Static Properties and Fields

Static properties and fields or, simply put, global state, can complicate code comprehension and testability by hiding the information required for a method to get its job done, by introducing non-determinism, or by promoting extensive usage of side effects. Functions that read or modify mutable global state are inherently impure.

For example, it is hard to reason about the following code, which depends on a globally accessible property:

          if (!SmartHomeSettings.CostSavingEnabled) { _swimmingPoolController.HeatWater(); }                  

What if the HeatWater() method doesn't get called when we are certain it should have been? Since any part of the application might have changed the CostSavingEnabled value, we must find and analyze all the places modifying that value in order to find out what's wrong. Also, as we've already seen, it is not possible to set some static properties for testing purposes (e.g., DateTime.Now, or Environment.MachineName; they are read-only, but still non-deterministic).
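
In the spirit of the earlier fixes, one possible way to break this dependency is to pass the flag in explicitly instead of reading global state. The method below is an illustrative sketch, not code from the article:

    // Instead of reaching out to a global flag, take the value as an argument...
    public void RegulatePoolTemperature(bool costSavingEnabled)
    {
        if (!costSavingEnabled) { _swimmingPoolController.HeatWater(); }
    }

    // ...so a test can simply exercise both branches:
    // RegulatePoolTemperature(costSavingEnabled: false) should heat the water,
    // RegulatePoolTemperature(costSavingEnabled: true) should not.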

On the other hand, immutable and deterministic global state is totally OK. In fact, there's a more familiar name for this: a constant. Constant values like Math.PI do not introduce any non-determinism, and, since their values cannot be changed, do not allow any side effects:

    double Circumference(double radius) { return 2 * Math.PI * radius; } // Still a pure function!

Singletons

Essentially, the Singleton pattern is just another form of global state. Singletons promote obscure APIs that lie about real dependencies and introduce unnecessarily tight coupling between components. They also violate the Single Responsibility Principle because, in addition to their primary duties, they control their own initialization and lifecycle.

Singletons can easily make unit tests order-dependent because they carry state around for the lifetime of the whole application or unit test suite. Take a look at the following example:

    User GetUser(int userId)
    {
        User user;
        if (UserCache.Instance.ContainsKey(userId))
        {
            user = UserCache.Instance[userId];
        }
        else
        {
            user = _userService.LoadUser(userId);
            UserCache.Instance[userId] = user;
        }
        return user;
    }

In the example above, if a test for the cache-hit scenario runs first, it will add a new user to the cache, so a subsequent test of the cache-miss scenario may fail because it assumes that the cache is empty. To overcome this, we'll have to write additional teardown code to clean the UserCache after each unit test run.
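
Such teardown code might look like the following sketch, assuming the cache exposes some way to reset itself; the Clear() method here is a hypothetical addition, not part of the example above:

    [TestCleanup]
    public void CleanUpUserCache()
    {
        // Reset the shared Singleton state so the next test starts from an empty cache.
        UserCache.Instance.Clear(); // hypothetical reset method
    }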

Using Singletons is a bad practice that can (and should) be avoided in most cases; however, it is important to distinguish between Singleton as a design pattern and a single instance of an object. In the latter case, the responsibility for creating and maintaining a single instance lies with the application itself. Typically, this is handled with a factory or Dependency Injection container, which creates a single instance somewhere near the "top" of the application (i.e., closer to the application entry point) and then passes it to every object that needs it. This approach is absolutely correct, from both testability and API quality perspectives.

The new Operator

Newing up an instance of an object in order to get some job done introduces the same problem as the Singleton anti-pattern: unclear APIs with hidden dependencies, tight coupling, and poor testability.

For example, in order to test whether the following loop stops when a 404 status code is returned, the developer would have to set up a test web server:

    using (var client = new HttpClient())
    {
        HttpResponseMessage response;
        do
        {
            response = await client.GetAsync(uri);
            // Process the response and update the uri...
        } while (response.StatusCode != HttpStatusCode.NotFound);
    }
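
One common way around this, not covered in the article, is to pass a test-only HttpMessageHandler into the HttpClient constructor so that no real network access happens. A minimal sketch might look like this (requires the System.Net, System.Net.Http, System.Threading, and System.Threading.Tasks namespaces):

    // A handler that always returns 404, so the loop above would stop after one iteration.
    public class NotFoundHandler : HttpMessageHandler
    {
        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.NotFound));
        }
    }

    // In a test: var client = new HttpClient(new NotFoundHandler());

Of course, for this to help, the surrounding code would also have to accept the HttpClient (or the handler) from the outside instead of newing it up internally, which is exactly the point being made here.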

However, sometimes new is absolutely harmless: for example, it is OK to create simple entity objects:

          var person = new Person("John", "Doe", new DateTime(1970, 12, 31));                  

It is also OK to create a small, temporary object that does not produce any side effects, except to modify its own state, and then return a result based on that state. In the following example, we don't care whether the Stack methods were called or not; we just check whether the end result is correct:

    string ReverseString(string input)
    {
        // No need to do interaction-based testing and check whether Stack methods were called or not;
        // the unit test just needs to ensure that the return value is correct (state-based testing).
        var stack = new Stack<char>();
        foreach (var s in input)
        {
            stack.Push(s);
        }
        string result = string.Empty;
        while (stack.Count != 0)
        {
            result += stack.Pop();
        }
        return result;
    }
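
A corresponding state-based test (a sketch not present in the article) simply feeds in a string and checks the reversed result:

    [TestMethod]
    public void ReverseString_ForAbc_ReturnsCba()
    {
        // Act
        string result = ReverseString("abc");

        // Assert
        Assert.AreEqual("cba", result);
    }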

Static Methods

Static methods are another potential source of non-deterministic or side-effecting behavior. They can easily introduce tight coupling and make our code untestable.

For example, to verify the behavior of the following method, unit tests must manipulate environment variables and read the console output stream to ensure that the appropriate text was printed:

    void CheckPathEnvironmentVariable()
    {
        if (Environment.GetEnvironmentVariable("PATH") != null)
        {
            Console.WriteLine("PATH environment variable exists.");
        }
        else
        {
            Console.WriteLine("PATH environment variable is not defined.");
        }
    }
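
One way to make this logic trivially testable, in line with the earlier refactorings, is to separate the pure decision from the impure input and output. This is an illustrative sketch, not code from the article:

    // Pure: easy to unit test by passing a value and checking the returned string.
    static string DescribePathVariable(string pathValue)
    {
        return pathValue != null
            ? "PATH environment variable exists."
            : "PATH environment variable is not defined.";
    }

    // Impure shell: reads the environment and writes to the console.
    static void CheckPathEnvironmentVariable()
    {
        Console.WriteLine(DescribePathVariable(Environment.GetEnvironmentVariable("PATH")));
    }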

However, pure static functions are OK: any combination of them will still be a pure function. For example:

    double Hypotenuse(double side1, double side2) { return Math.Sqrt(Math.Pow(side1, 2) + Math.Pow(side2, 2)); }
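
A test for such a pure static function is a near one-liner; the following sketch is not from the article but shows how little effort is involved:

    [TestMethod]
    public void Hypotenuse_For3And4_Returns5()
    {
        Assert.AreEqual(5.0, Hypotenuse(3, 4), 0.000001);
    }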

Benefits of Unit Testing

Obviously, writing testable code requires some discipline, concentration, and extra effort. But software development is a complex mental activity anyway, and we should always be careful, and avoid recklessly throwing together new code off the top of our heads.

As a reward for this act of proper software quality assurance, we'll end up with clean, easy-to-maintain, loosely coupled, and reusable APIs that won't hurt developers' brains when they try to understand them. After all, the ultimate advantage of testable code is not just the testability itself, but the ability to easily understand, maintain, and extend that code as well.


Source: https://www.toptal.com/qa/how-to-write-testable-code-and-why-it-matters
