January 16, 2013

Process of Deriving Test Cases

A test case is nothing but a set of inputs given to the program or software to test its capabilities and operation. The intention here is to find out whether the software meets the specifications given by the customer, so test cases should be derived effectively in order to expose defects in the system.
Let’s discuss the process of deriving test cases for a given program, for both functional and structural testing.
Before identifying the test cases, as a tester you should be thorough with the functionality and the specifications of the software under test, and the test cases you derive should exercise the system with the aim of identifying defects. Test cases should also be derived for the ways in which the system should not behave, i.e. negative test cases.

Read A,B
If A>B then
Print “A is greater”
Else
Print “B is greater”
End
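If you want to actually run this example, a minimal Python sketch of the same program might look like the following (the input prompts are an addition for illustration, not part of the original pseudocode):

# A direct Python sketch of the pseudocode above: read two numbers and
# report which one is greater. Just like the pseudocode, it does not
# handle the case where A equals B, or inputs that are not numbers.
A = int(input("Enter A: "))
B = int(input("Enter B: "))
if A > B:
    print("A is greater")
else:
    print("B is greater")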

Now from the program we can identify that its function is to find the greater of two numbers. Let’s try to work out the test cases required for the program.
With these details known, the program will identify the greater of the two numbers and produce one of the two results, “A is greater” or “B is greater”. So the inputs that will make the program give these results are A=10 and B=100 for “B is greater”, and A=100 and B=10 for “A is greater”. We have now identified test cases for the actual or expected output.
Now let’s think this way: what if both inputs have the same value? What should the program do in that state? This kind of thinking is exactly what a tester needs in order to derive effective test cases. Identifying test cases that go against the intended function is called negative testing, and such cases are negative test cases. In this case, if both inputs are the same, the program is not coded to produce any error message or warning, which means the application may crash or fail to produce a meaningful result. Another possibility is that the inputs given are not numbers at all; this is an extreme case, but it can also happen.
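The program above is not coded to handle either of these situations. Purely as an illustrative sketch (the warning messages are assumptions, not taken from the text), a version that did handle the negative cases might look like this:

# A more defensive sketch covering the negative cases discussed above:
# equal inputs and non-numeric inputs both produce a clear warning
# instead of a misleading answer or a crash.
def compare(raw_a, raw_b):
    try:
        a, b = int(raw_a), int(raw_b)
    except ValueError:
        return "Warning: both inputs must be numbers"
    if a == b:
        return "Warning: A and B are equal"
    return "A is greater" if a > b else "B is greater"

print(compare("100", "10"))  # A is greater
print(compare("10", "10"))   # Warning: A and B are equal
print(compare("ten", "10"))  # Warning: both inputs must be numbers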

Summarizing the various test cases in a template as follows:

Test Case ID | Input | Expected Result | Actual Result | Pass/Fail
TC01 | A=100, B=10 | “A is greater” | |
TC02 | A=10, B=100 | “B is greater” | |
TC03 | A=10, B=10 | Error/warning message | |
TC04 | A=“ten”, B=10 | Error/warning message | |

The Actual Result and Pass/Fail columns have to be filled in after completing the test process. For static test techniques, test cases cannot be derived because the program is not executed; test cases are derived only for the black-box and white-box techniques. The number of test cases can vary based on the requirements and the amount of testing required.
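If the program under test were available as a callable function, the rows of this template could also be executed automatically. Below is a minimal sketch using pytest; the compare() function and its warning messages are assumed for illustration (mirroring the defensive sketch above), not prescribed by the template:

# Each parametrize row corresponds to one row of the test-case template;
# pytest fills in the Pass/Fail column by comparing actual and expected results.
import pytest

def compare(raw_a, raw_b):
    try:
        a, b = int(raw_a), int(raw_b)
    except ValueError:
        return "Warning: both inputs must be numbers"
    if a == b:
        return "Warning: A and B are equal"
    return "A is greater" if a > b else "B is greater"

@pytest.mark.parametrize("raw_a, raw_b, expected", [
    ("100", "10", "A is greater"),                           # TC01
    ("10", "100", "B is greater"),                           # TC02
    ("10", "10", "Warning: A and B are equal"),              # TC03
    ("ten", "10", "Warning: both inputs must be numbers"),   # TC04
])
def test_compare(raw_a, raw_b, expected):
    assert compare(raw_a, raw_b) == expected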


Ad-hoc testing

We have a lot of other testing formats and techniques, but in addition to those there is another technique called ad-hoc testing. Most of us have heard the word ad-hoc, but how is it related to testing?
Ad-hoc testing is not like other formal testing; it is a totally informal technique in which the sequence of activities followed in other testing methods is not followed strictly. It also does not use any formal technique to design test cases, such as boundary value analysis (BVA) or equivalence partitioning.
After a number of planned test cases have been executed, a rough round of testing is done to check whether the system or software works as per its functions and whether all specifications are met. Here the system is operated without any SRS document or functional specification; it is more like a new person’s perspective on using the software. This process of testing a system without any formal procedure can be termed ad-hoc testing.
This method is applicable to all phases of testing and is very useful for finding defects that were missed at various levels of testing.
Various methods are followed to do ad-hoc testing, such as:
  1. Buddy Testing
  2. Pair Testing
  3. Defect Seeding
  4. Exploratory Testing
Buddy Testing
A developer and a tester are paired as buddies so that they can help each other, working towards the single aim of finding defects.
Here the developer helps the tester understand how the program or unit works so that tests can be performed effectively; in the meantime, defects can be rectified then and there without any delay. Buddy testing is normally done in the unit test phase, where the coding and testing activities go hand in hand.

Pair Testing
In this approach two testers work as a team and help each other with their own views and ideas. A lot of knowledge and experience sharing happens here, which helps to test the application effectively. By helping each other out with their own ideas, the pair is likely to uncover defects that were not identified earlier by other test techniques.

Defect Seeding
This method is like playing hide and seek with defects. One group in the project injects defects into the program and another team tries to uncover and rectify them. The idea behind this is that hunting for the injected (seeded) defects also helps to uncover defects that were previously unidentified.
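The chapter does not put numbers on this, but the seeded-defect counts are often used to estimate how many real defects remain (the classic defect-seeding estimate attributed to Mills). A minimal sketch, with purely illustrative figures:

# Defect-seeding estimate: if the team finds a given fraction of the
# seeded defects, assume the same detection rate applies to real defects.
def estimate_remaining_defects(seeded_total, seeded_found, real_found):
    if seeded_found == 0:
        raise ValueError("no seeded defects found yet; estimate is undefined")
    detection_rate = seeded_found / seeded_total
    estimated_total_real = real_found / detection_rate
    return estimated_total_real - real_found

# Example: 20 defects seeded, 15 of them found, 30 real defects found so far
# -> roughly 40 real defects estimated in total, i.e. about 10 still hiding.
print(estimate_remaining_defects(20, 15, 30))  # 10.0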

Exploratory testing
This is covered in detail in the next section.

Exploratory testing

Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used. The test design and test execution activities are performed in parallel typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used. For example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.
Test logging is undertaken as test execution is performed, documenting the key aspects of what is tested, any defects found and any thoughts about possible further testing. A key aspect of exploratory testing is learning: learning by the tester about the software, its use, its strengths and its weaknesses. As its name implies, exploratory testing is about exploring, finding out about the software, what it does, what it doesn't do, what works and what doesn't work. The tester is constantly making decisions about what to test next and where to spend the (limited) time. This approach is most useful when there are poor or missing specifications and when time is severely limited. It can also serve to complement other, more formal testing, helping to establish greater confidence in the software. In this way, exploratory testing can be used as a check on the formal test process by helping to ensure that the most serious defects have been found.

Error Guessing

Error guessing is a technique that should always be used as a complement to other more formal techniques. The success of error guessing is very much dependent on the skill of the tester, as good testers know where the defects are most likely to lurk. Some people seem to be naturally good at testing and others are good testers because they have a lot of experience either as a tester or working with a particular system and so are able to pin-point its weaknesses. This is why an error-guessing approach, used after more formal techniques have been applied to some extent, can be very effective. In using more formal techniques, the tester is likely to gain a better understanding of the system, what it does and how it works. With this better understanding, he or she is likely to be better at guessing ways in which the system may not work properly.

There are no rules for error guessing. The tester is encouraged to think of situations in which the software may not be able to cope. Typical conditions to try include division by zero, blank input, empty files and the wrong kind of data (e.g. alphabetic characters where numeric are required). If anyone ever says of a system or the environment in which it is to operate 'That could never happen', it might be a good idea to test that condition, as such assumptions about what will and will not happen in the live environment are often the cause of failures. A structured approach to the error-guessing technique is to list possible defects or failures and to design tests that attempt to produce them. Such defect and failure lists can be built from the tester's own experience or that of other people, from available defect and failure data, and from common knowledge about why software fails.
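As a purely illustrative sketch (the parse_amount() routine and its rules are assumptions, not taken from the text), here is how a few of these typical error-guessing conditions could be thrown at a function that is supposed to read a positive numeric amount from a text field:

# Error-guessing checks: blank input, the wrong kind of data, and values
# the developer may have assumed "could never happen".
def parse_amount(text):
    value = float(text)          # wrong kind of data raises ValueError
    if value <= 0:
        raise ValueError("amount must be positive")
    return value

guessed_inputs = ["", "abc", "0", "-1", "1e309", "   "]

for text in guessed_inputs:
    try:
        result = parse_amount(text)
        print(f"{text!r} -> accepted as {result}")   # "1e309" slips through as infinity
    except ValueError as error:
        print(f"{text!r} -> rejected ({error})")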