A test
type is focused on a particular test objective, which could be the testing
of a function to be performed by the component or system; a nonfunctional
quality characteristic, such as reliability or usability; the structure or
architecture of the component or system; or related to changes, i.e. confirming
that defects have been fixed (confirmation testing, or re-testing) and looking
for unintended changes (regression testing). Depending on its objectives,
testing will be organized differently. For example, component testing aimed at
performance would be quite different to component testing aimed at achieving
decision coverage.
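As a rough illustration of that difference, here is a minimal sketch in Python (the function, its threshold and the test values are invented for the example): decision coverage only requires that each decision outcome is exercised, and says nothing about how fast the code runs.

    # Hypothetical component under test; the rule is illustrative only.
    def classify_order(total):
        if total >= 100:          # the decision under test
            return "bulk"
        else:
            return "standard"

    # Two test cases are enough for 100% decision coverage:
    # one drives the decision to True, the other to False.
    assert classify_order(150) == "bulk"       # decision outcome: True
    assert classify_order(20) == "standard"    # decision outcome: False

A performance-oriented component test of the same function would instead time many calls against a response-time target, which tells you nothing about whether both decision outcomes were exercised.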
With these test objectives in mind, techniques have to be selected wisely, depending on the requirements and on the software being tested. Within the testing process, the techniques can be divided into two broad categories, static and dynamic, each of which contains various techniques. Both are meant to find defects in the software in their own way, but they are not alternatives to each other because their characteristics vary widely. In static testing the process is carried out without executing the code, whereas in dynamic testing the code is actually executed and the faults are identified. The two are complementary, as they tend to find different types of defects effectively and efficiently.
Structure-based
(white-box) testing techniques
Structure-based testing
techniques (which are also dynamic rather than static) use the internal
structure of the software to derive test cases. They are commonly called 'white-box'
or 'glass-box' techniques (implying you can see into the system) since they
require knowledge of how the software is implemented, that is, how it works.
For example, a structural technique may be concerned with exercising loops in
the software. Different test cases may be derived to exercise the loop once,
twice, and many times. This may be done regardless of the functionality of the
software.
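A minimal sketch of this idea, assuming an invented summing function as the component under test: each test case forces the loop body to execute a different number of times, irrespective of what the function is for.

    # Illustrative component containing a single loop.
    def total(values):
        result = 0
        for v in values:      # the loop to be exercised
            result += v
        return result

    # Structure-based test cases exercising the loop once, twice and many times.
    assert total([5]) == 5              # loop body executes once
    assert total([2, 3]) == 5           # loop body executes twice
    assert total([1, 2, 3, 4]) == 10    # loop body executes many times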
Specification-based
(black-box) testing techniques
Specification-based testing techniques are another set of dynamic testing techniques.
These are also known as 'black-box' or
input/output driven testing techniques because they view the software as a
black-box with inputs and outputs, but they have no knowledge of how the system
or component is structured inside the box. In essence, the tester is
concentrating on what the software does, not how it does it.
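As a hedged sketch, suppose a specification states that orders of 100 or more units receive a 10% discount (the rule, the function name and the values are invented here). A black-box test is derived only from that statement of inputs and expected outputs, and would look the same whatever the implementation does internally.

    # Specification (assumed for this example): orders of 100 or more
    # units receive a 10% discount; smaller orders receive none.
    def test_price(price_function):
        # Black-box test cases: chosen from the specification alone,
        # with no knowledge of how price_function works inside.
        assert price_function(units=99, unit_price=2.0) == 198.0    # no discount
        assert price_function(units=100, unit_price=2.0) == 180.0   # discount applies
        assert price_function(units=250, unit_price=2.0) == 450.0   # discount applies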
Experience-based
testing techniques
In experience-based techniques, people's knowledge, skills and
background are a prime contributor to the test conditions and test cases. The
experience of both technical and business people is important, as they bring
different perspectives to the test analysis and design process. Due to previous
experience with similar systems, they may have insights into what could go
wrong, which is very useful for testing.
Where to apply the
different categories of techniques
Specification-based techniques
are appropriate at all levels of testing (component testing through to
acceptance testing) where a specification exists. When performing system or
acceptance testing, the requirements specification or functional specification
may form the basis of the tests. When performing component or integration
testing, a design document or low-level specification forms the basis of the
tests.
Structure-based techniques can
also be used at all levels of testing. Developers use structure-based
techniques in component testing and component integration testing, especially
where there is good tool support for code coverage. Structure-based techniques
are also used in system and acceptance testing, but the structures are
different. For example, the coverage of menu options or major business
transactions could be the structural element in system or acceptance testing.
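For instance, here is a small sketch of what 'coverage of menu options' might look like at system level, with invented menu names: the structural elements are the menu entries, and the measure is the fraction of them a test run has visited.

    # Invented menu structure used as the set of structural elements.
    MENU_OPTIONS = {"New Order", "Amend Order", "Cancel Order", "Reports", "Help"}

    def menu_coverage(visited):
        """Return the proportion of menu options exercised by a test run."""
        return len(MENU_OPTIONS & set(visited)) / len(MENU_OPTIONS)

    # A test run that opened three of the five options gives 60% coverage.
    print(menu_coverage({"New Order", "Reports", "Help"}))  # 0.6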
Functional Testing
The
function of a system (or component) is 'what it does'. This is typically described
in a requirements specification, a functional specification, or in use cases.
There may be some functions that are 'assumed' to be provided but are not documented; these are also part of the requirements for a system, though it is difficult to test against undocumented and implicit requirements. Functional tests are based on these functions, described in SRS documents or understood by the testers, and may be performed at all test levels (e.g. tests for components may be based on a component specification).
Functional testing considers the specified behavior and is often also referred
to as black-box testing. This
is not entirely true, since black-box testing also includes non-functional
testing. Functional testing can
be done focusing on suitability, interoperability,
security, accuracy and compliance. Security testing, for example,
investigates the functions (e.g. a firewall) relating to detection of threats,
such as viruses, from malicious outsiders.
Testing
functionality can be done from two perspectives:
- Requirements-based
- Business-process-based.
Requirements-based
testing uses a specification of the functional requirements for the system as
the basis for designing tests. A good way to start is to use the table of
contents of the requirements specification as an initial test inventory or list
of items to test (or not to test). We should also prioritize the requirements
based on risk criteria (if this is not already done in the specification) and
use this to prioritize the tests. This will ensure that the most important and
most critical tests are included in the testing effort.
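A minimal sketch of such an inventory, with invented requirement names and risk scores: the specification's table of contents becomes a list of items to test (or not to test), and sorting by risk gives the order in which to design and run the tests.

    # Invented inventory derived from a requirements specification's table
    # of contents; the risk scores are assumed for the example.
    test_inventory = [
        {"requirement": "3.1 User login",       "risk": 9,  "test": True},
        {"requirement": "3.2 Password reset",   "risk": 6,  "test": True},
        {"requirement": "3.3 Colour themes",    "risk": 2,  "test": False},  # out of scope
        {"requirement": "3.4 Payment handling", "risk": 10, "test": True},
    ]

    # Highest-risk items are designed and executed first.
    prioritized = sorted((i for i in test_inventory if i["test"]),
                         key=lambda i: i["risk"], reverse=True)
    for item in prioritized:
        print(item["risk"], item["requirement"])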
Business-process-based
testing uses knowledge of the business processes. Business processes describe
the scenarios involved in the day-to-day business use of the system. Use cases
originate from object-oriented development, but are nowadays popular in many
development life cycles. They also take the business processes as a starting
point, although they start from tasks to be performed by users. Use cases are a
very useful basis for test cases from a business perspective. The techniques
used for functional testing are often specification-based, but experience-based techniques can also be used. Test conditions and test cases are derived from the functionality of the component or system. As part of test design, a model may be developed, such as a process model, state transition model or a plain-language specification.
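As an illustration, here is a hedged sketch of a state transition model used for test design, based on an invented order-handling workflow: test cases are derived as paths through the allowed transitions, and disallowed transitions become negative tests.

    # Invented state transition model: (current state, event) -> next state.
    TRANSITIONS = {
        ("created", "pay"):     "paid",
        ("paid",    "ship"):    "shipped",
        ("shipped", "deliver"): "delivered",
        ("created", "cancel"):  "cancelled",
        ("paid",    "cancel"):  "cancelled",
    }

    def run(events, state="created"):
        """Replay a sequence of events; raise if a transition is not allowed."""
        for event in events:
            state = TRANSITIONS[(state, event)]  # KeyError => invalid transition
        return state

    # A test case derived from the model: a valid path through the states.
    assert run(["pay", "ship", "deliver"]) == "delivered"
    # A negative test case: shipping a cancelled order must be rejected.
    try:
        run(["cancel", "ship"])
        assert False, "invalid transition was accepted"
    except KeyError:
        pass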
Non-Functional
Testing
A second
target for testing is the testing of the quality characteristics, or
nonfunctional attributes of the system (or component or integration group).
Here we are interested in how well or how fast something is done. We are
testing something that we need to measure on a scale of measurement, for
example time to respond.
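For example, here is a minimal sketch of measuring 'time to respond' in Python (the operation and the 200 ms target are invented for the illustration): the point is that the result is a value on a scale, checked against an agreed threshold, rather than a functional right-or-wrong answer.

    import time

    def operation_under_test():
        # Stand-in for the real operation whose response time matters.
        sum(range(100_000))

    # Measure the response time and compare it with the agreed target.
    start = time.perf_counter()
    operation_under_test()
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < 200, f"responded in {elapsed_ms:.1f} ms, target is 200 ms"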
Non-functional testing, like functional testing, is performed at all test levels. Non-functional
testing includes, but is not limited to, performance testing, load testing,
stress testing, usability testing, maintainability testing, reliability
testing and portability testing. It is the testing of 'how well' the system
works. Many have tried to capture software quality in a collection of
characteristics and related sub-characteristics. In these models some
elementary characteristics keep on reappearing, although their place in the
hierarchy can differ.
The characteristics and their sub-characteristics are:
- Functionality: consists of five sub-characteristics: suitability, accuracy, security, interoperability and compliance.
- Efficiency: is divided into time behavior (performance), resource utilization and compliance.
- Portability: consists of five sub-characteristics: adaptability, installability, co-existence, replaceability and compliance.
- Reliability: is divided into the sub-characteristics maturity (robustness), fault-tolerance, recoverability and compliance.
- Usability: is classified into the sub-characteristics understandability, learnability, operability, attractiveness and compliance.
- Maintainability: consists of five sub-characteristics: analyzability, changeability, stability, testability and compliance.
Structural
Testing
The third
target of testing is the structure of the system or component. If we are
talking about the structure of a system, we may call it the system
architecture. Structural testing is often referred to as 'white-box' or 'glass-box' because we are interested
in what is happening 'inside the box'.
Structural testing is most often used as a way of measuring the thoroughness of testing through the coverage of a set of structural elements or coverage items. It can occur at any test level, although it is true to say that it tends to be applied mostly at the component and integration levels, and is generally less likely at higher test levels, except for business-process testing.
At the component integration level it may be based on the architecture of the system, such as a calling hierarchy. At the system, system integration or acceptance testing levels, the test basis could be a business model or menu structure. At component level, and to a lesser extent at component integration testing, there is good tool support to measure code coverage.
Coverage
measurement tools assess the percentage of executable elements (e.g. statements
or decision outcomes) that have been exercised by a test suite. If
coverage is not 100%, then additional tests may need to be written and run to
cover those parts that have not yet been exercised. This of course depends on
the exit criteria. The techniques used for structural testing are
structure-based techniques, also referred to as white-box techniques. Control
flow models are often used to support structural testing.
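A small sketch of that decision, with invented numbers: the coverage figure reported by a tool is compared with the exit criterion, and the shortfall tells you whether further structure-based tests are needed.

    # Invented figures, as a coverage tool might report them.
    decision_outcomes_total = 40
    decision_outcomes_exercised = 30
    exit_criterion = 0.80   # e.g. "at least 80% decision coverage"

    coverage = decision_outcomes_exercised / decision_outcomes_total
    print(f"decision coverage: {coverage:.0%}")   # 75%
    if coverage < exit_criterion:
        print("exit criterion not met: add tests for the unexercised outcomes")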
Confirmation
and regression testing
The final
target of testing is the testing of changes. This category is slightly
different to the others because if you have made a change to the software, you
will have changed the way it functions, the way it performs (or both) and its
structure. However, we are looking here at the specific types of tests relating
to changes, even though they may include all of the other test types.
Confirmation
testing or re-testing
When a
test fails and we determine that the cause of the failure is a software defect,
the defect is reported, and we can expect a new version of the software that
has had the defect fixed. In this case we will need to execute the test again
to confirm that the defect has indeed been fixed. This is known as confirmation
testing.
When doing
confirmation testing, it is important to ensure that the test is executed in
exactly the same way as it was the first time, using the same inputs, data and
environment. If the test now passes, does this mean that the software is now
correct? Well, we now know that at least one part of the software is correct -
where the defect was. But this is not enough. The fix may have introduced or
uncovered a different defect elsewhere in the software. The way to detect these
'unexpected side-effects' of fixes is to do regression testing.
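A hedged sketch of that idea in Python, with an invented defect and an invented function under test: the defect-revealing test case is stored with its exact inputs and expected output, and after the fix the very same case is executed again.

    # The test case that originally failed, kept exactly as first run:
    # same input data, same expected result.
    defect_1234_case = {"input": {"units": 100, "unit_price": 2.0},
                        "expected": 180.0}   # spec: 10% discount from 100 units

    def confirm_fix(price_function, case):
        """Re-execute the original failing test case against the fixed build."""
        actual = price_function(**case["input"])
        return actual == case["expected"]

    # After the new software version arrives:
    # assert confirm_fix(price, defect_1234_case), "defect 1234 is not fixed"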
Regression
testing
Like
confirmation testing, regression testing involves executing test cases that
have been executed before. The difference is that, for regression testing, the
test cases probably passed the last time they were executed (compare this with
the test cases executed in confirmation testing - they failed the last time).
The term 'regression testing' is something of a misnomer. More
specifically, the purpose of regression testing is to verify that modifications
in the software or the environment have not caused unintended adverse side
effects and that the system still meets its requirements.
It is
common for organizations to have what is usually called a regression test suite
or regression test pack. This is a set of test cases that is specifically used
for regression testing. They are designed to collectively exercise most
functions (certainly the most important ones) in a system but not test any one
in detail. It is appropriate to have a regression test suite at every level of
testing (component testing, integration testing, system testing, etc.). All of
the test cases in a regression test suite would be executed every time a new
version of software is produced and this makes them ideal candidates for automation. If the regression
test suite is very large it may be more appropriate to select a subset for
execution. Regression tests are executed whenever the software changes, either
as a result of fixes or new or changed functionality. It is also a good idea to
execute them when some aspect of the environment changes, for example when a
new version of a database management system is introduced or a new version of a
source code compiler is used.
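As a rough sketch (the test names and contents are invented), a regression pack can be kept as a collection of small automated checks that touch the most important functions broadly rather than deeply; the whole pack is run on every new software version, or a subset is selected when time is short.

    # Invented regression pack: broad, shallow checks of the main functions.
    def check_login():      return True   # placeholder checks standing in
    def check_new_order():  return True   # for real automated test cases
    def check_reports():    return True

    REGRESSION_PACK = {
        "login":     (check_login,     "high"),
        "new order": (check_new_order, "high"),
        "reports":   (check_reports,   "low"),
    }

    def run_regression(minimum_priority="low"):
        order = {"high": 2, "low": 1}
        for name, (test, priority) in REGRESSION_PACK.items():
            if order[priority] >= order[minimum_priority]:
                print(name, "PASS" if test() else "FAIL")

    run_regression()                          # full pack on each new version
    run_regression(minimum_priority="high")   # reduced subset when time is short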
Maintenance Testing
After the completion of testing and the other development activities, the system is released for live use in its operational environment; the testing carried out on the system during this phase of its life is called maintenance testing. Note that 'maintenance testing' is quite different from 'maintainability testing', which evaluates how easy the system is to maintain.
A maintenance test begins on receipt of the change request for the application. The test manager then formulates the test plan for the maintenance of, or change to, the particular software. The test cases are specified or adapted based on the changes requested. Upon receipt of the test objects, the new and modified tests and the regression tests are executed, and the testware is preserved once maintenance is complete.
One aspect which, in many cases, differs somewhat from the development situation is the test organization. New development and its associated test activities are usually carried out as part of a project, whereas maintenance tests are normally executed as an activity within the regular organization. As a result, there is often some lack of resources and flexibility, and the test process may experience more competition from other activities.
Impact analysis
A major
and important activity within maintenance testing is impact analysis. During impact
analysis, together with stakeholders, a decision is made on what parts of
the system may be unintentionally affected and therefore need careful
regression testing. Risk analysis will help to decide where to focus regression
testing - it is unlikely that the team will have time to repeat all the
existing tests.
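One rough way to record the outcome of impact analysis (the module names and the mapping are invented here) is a simple map from changed parts of the system to the regression tests that exercise them, so that the subset to re-run falls straight out of the change list.

    # Invented traceability from system parts to the regression tests
    # that exercise them, agreed with stakeholders during impact analysis.
    AFFECTED_TESTS = {
        "billing":   {"test_invoice_totals", "test_discounts"},
        "reporting": {"test_monthly_report"},
        "login":     {"test_password_rules", "test_lockout"},
    }

    def tests_to_rerun(changed_parts):
        """Select the regression tests implied by the changed parts of the system."""
        selected = set()
        for part in changed_parts:
            selected |= AFFECTED_TESTS.get(part, set())
        return selected

    print(tests_to_rerun({"billing"}))  # regression subset for a billing change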
The two levels of maintenance testing are:
1. Testing the changes
2. Regression tests to show that the rest of the system has not been affected by the maintenance work.
Reasons
for carrying out maintenance testing
Most often, maintenance tests are carried out on existing operational systems when modifications or changes are to be made based on the needs of the users or stakeholders. Maintenance testing typically comes into the picture when the modifications fall into one of the two categories below.
- Modifications as per requirements
- Ad-hoc modifications
Modifications
as per requirements:
The modifications that are pre-planned can be classified as follows:
- Enhancing the software as per customer needs, by adding new features and functions.
- Changes that are required because of the operational environment and the level of ease of access for users.
- Corrective measures for fixing defects identified during the testing phase.
Ad-hoc corrective modifications
These modifications are usually carried out when an emergency fix is required for the software. Even though the software has been tested at different levels, under some unavoidable circumstances it might still fail or crash in live use. In order to recover from such a failure, emergency fixes are required: the crash is fixed and corrective measures are taken to avoid it in the future. The risk involved is also considered in this case, to avoid any future failures.