Software Testing Principles and Practices
Original Title and Copyright: Software Testing, 4/e. © 2014 by S.K. Kataria & Sons.
This publication, portions of it, or any accompanying software may not be reproduced in any
way, stored in a retrieval system of any type, or transmitted by any means, media, electronic display or mechanical display, including, but not limited to, photocopy, recording, Internet postings,
or scanning, without prior permission in writing from the publisher.
Publisher: David Pallai
Mercury Learning and Information
22841 Quicksilver Drive
Dulles, VA 20166
info@merclearning.com
www.merclearning.com
(800) 232-0223
The publisher recognizes and respects all marks used by companies, manufacturers, and developers as a means to distinguish their products. All brand names and product names mentioned in
this book are trademarks or service marks of their respective companies. Any omission or misuse
(of any kind) of service marks or trademarks, etc. is not an attempt to infringe on the property
of others.
Our titles are available for adoption, license, or bulk purchase by institutions, corporations, etc.
For additional information, please contact the Customer Service Dept. at (800) 232-0223 (toll free).
All of our titles are available in digital format at authorcloudware.com and other digital vendors.
The sole obligation of Mercury Learning and Information to the purchaser is to replace the
book, based on defective materials or faulty workmanship, but not based on the operation or
functionality of the product.
1
Introduction to Software Testing
Inside this Chapter:
1.0. Introduction
1.1. The Testing Process
1.2. What is Software Testing?
1.3. Why Should We Test? What is the Purpose?
1.4. Who Should Do Testing?
1.5. What Should We Test?
1.6. Selection of Good Test Cases
1.7. Measurement of the Progress of Testing
1.8. Incremental Testing Approach
1.9. Basic Terminology Related to Software Testing
1.10. Testing Life Cycle
1.11. When to Stop Testing?
1.12. Principles of Testing
1.13. Limitations of Testing
1.14. Available Testing Tools, Techniques, and Metrics
1.0. INTRODUCTION
Testing is the process of executing the program with the intent of finding
faults. Who should do this testing and when it should start are very important
questions that are answered in this text. As we know, software testing is the
fourth phase of the software development life cycle (SDLC). About 70% of
development time is spent on testing. We explore this and many other
interesting concepts in this chapter.
OR
“Software testing is the process of executing a program or system
with the intent of finding errors.”
[Myers]
OR
“It involves any activity aimed at evaluating an attribute or capabil-
ity of a program or system and determining that it meets its required
results.”
[Hetzel]
Testing is NOT:
a. The process of demonstrating that errors are not present.
b. The process of showing that a program performs its intended func-
tions correctly.
c. The process of establishing confidence that a program does what it is
supposed to do.
So, all of these definitions are incorrect, because with these as guidelines
one would tend to operate the system in a normal manner to see if it works.
One would unconsciously choose normal/correct test data that would
prevent the system from failing. Besides, it is not possible to certify that a
system has no errors, simply because it is almost impossible to detect all
errors.
So, simply stated: “Testing is basically a task of locating errors.”
It may be:
a. Positive testing: Operate the application as it should be operated.
Does it behave normally? Use a proper variety of legal test data,
including data values at the boundaries, to test if it fails. Compare
actual test results with the expected results. Are the results correct?
Does the application function correctly?
In this database table given above, there are 15 test cases. But these are
not sufficient, as we have not tried all possible inputs. We have not
considered trouble spots like:
i. Removing the statement (@ai_year % 400 = 0), which would result in a
Y2K problem.
ii. Entering year in float format like 2010.11.
iii. Entering year as a character or as a string.
iv. Entering year as NULL or zero (0).
This list can also grow further. These are our trouble spots or critical areas.
We wish to locate these areas and fix these problems before our customer
does.
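To see why the (% 400) check is a genuine trouble spot, consider a minimal leap-year sketch (illustrative code, not the program under test): removing the first clause below would misclassify years such as 2000.

#include <stdio.h>

/* Minimal sketch (not the program under test): century years such as 1900
   are common years, but years divisible by 400 (e.g., 2000) are leap years.
   Removing the first clause below would misclassify the year 2000.        */
int is_leap(int year)
{
    if (year % 400 == 0)   /* the trouble-spot check */
        return 1;
    if (year % 100 == 0)
        return 0;
    return (year % 4 == 0);
}

int main(void)
{
    printf("1900 -> %d\n", is_leap(1900));  /* 0: common year */
    printf("2000 -> %d\n", is_leap(2000));  /* 1: leap year   */
    printf("2010 -> %d\n", is_leap(2010));  /* 0: common year */
    return 0;
}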
Why did it happen?
As we know software testing constitutes about 40% of overall effort and
25% of the overall software budget. Software defects are introduced during
SDLC due to poor quality requirements, design, and code. Sometimes due
to the lack of time and inadequate testing, some of the defects are left behind,
only to be found later by users. Software is a ubiquitous product; 90% of
people use software in their everyday life. Software has high failure rates due
to the poor quality of the software.
Smaller companies that don’t have deep pockets can get wiped out
because they did not pay enough attention to software quality and conduct
the right amount of testing.
Cem Kaner said, "The best test cases are the ones that find bugs." Our
effort should be focused on test cases that find issues. Do broad or deep
coverage testing on the trouble spots.
A test case is a question that you ask of the program. The point of run-
ning the test is to gain information like whether the program will pass or fail
the test.
Test Case ID
Purpose
Preconditions
Inputs
Expected Outputs
Postconditions
Execution History (Date, Result, Version, Run By)
7. Test suite: A collection of test scripts or test cases that is used for validat-
ing bug fixes (or finding new bugs) within a logical or physical area of a
product. For example, an acceptance test suite contains all of the test
cases that were used to verify that the software has met certain prede-
fined acceptance criteria.
8. Test script: The step-by-step instructions that describe how a test case is
to be executed. It may contain one or more test cases.
9. Testware: It includes all of the testing documentation created during the
testing process. For example, test specifications, test scripts, test cases,
test data, and the environment specification.
10. Test oracle: Any means used to predict the outcome of a test (a small sketch illustrating several of these terms follows this list).
11. Test log: A chronological record of all relevant details about the execu-
tion of a test.
12. Test report: A document describing the conduct and results of testing
carried out for a system.
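As a concrete illustration of several of these terms, the following sketch (hypothetical names, not from the text) stores test cases as records, uses a simple reference function as the test oracle, and prints one test-log line per execution.

#include <stdio.h>

/* Hypothetical illustration: a test case record, an oracle that predicts the
   expected output, and a one-line test log entry per execution.            */
struct TestCase {
    const char *id;        /* Test Case ID     */
    const char *purpose;   /* Purpose          */
    int         input;     /* Inputs           */
    int         expected;  /* Expected Outputs */
};

static int absolute(int x) { return (x < 0) ? -x : x; }   /* unit under test */
static int oracle(int x)   { return (x < 0) ? -x : x; }   /* test oracle     */

int main(void)
{
    struct TestCase suite[] = {                            /* a tiny test suite */
        { "TC-1", "negative input", -5, 0 },
        { "TC-2", "zero input",      0, 0 },
        { "TC-3", "positive input",  7, 0 },
    };
    for (int i = 0; i < 3; i++) {
        suite[i].expected = oracle(suite[i].input);
        int actual = absolute(suite[i].input);
        /* test log: a chronological record of each execution */
        printf("%s | %-14s | input=%2d | expected=%d | actual=%d | %s\n",
               suite[i].id, suite[i].purpose, suite[i].input,
               suite[i].expected, actual,
               (actual == suite[i].expected) ? "PASS" : "FAIL");
    }
    return 0;
}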
6. Testing should begin "in small" and progress toward testing "in
large": The smallest programming units (or modules) should be
tested first and then expanded to other parts of the system.
7. Testing should be conducted by an independent third party.
8. All tests should be traceable to customer requirements.
9. Assign best people for testing. Avoid programmers.
10. Tests should be planned to show software defects and not their
absence.
11. Prepare test reports including test cases and test results to summarize
the results of testing.
12. Advance test planning is a must and should be updated in a timely
manner.
NOTE: To see some of the most popular testing tools of 2017, visit the
following site: https://www.guru99.com/testing-tools.html
SUMMARY
8. Verification is
a. Checking product with respect to customer’s expectations
b. Checking product with respect to SRS
c. Checking product with respect to the constraints of the project
d. All of the above.
9. Validation is
a. Checking the product with respect to customer’s expectations
b. Checking the product with respect to specification
c. Checking the product with respect to constraints of the project
d. All of the above.
10. Which one of the following is not a testing tool?
a. Deja Gnu
b. TestLink
c. TestRail
d. SOLARIS
ANSWERS
1. b. 2. c. 3. c. 4. a.
5. d. 6. b. 7. d. 8. b.
9. a. 10. d.
FIGURE 1.4
while the test execution is done in the end. This early design of tests
reduces overall delay by increasing parallelism between develop-
ment and testing. It enables better and more timely validation of
individual phases. The V-model is shown in Figure 1.5.
FIGURE 1.5
REVIEW QUESTIONS
2
Software Verification and Validation
Inside this Chapter:
2.0. Introduction
2.1. Differences Between Verification and Validation
2.2. Differences Between QA and QC
2.3. Evolving Nature of Area
2.4. V&V Limitations
2.5. Categorizing V&V Techniques
2.6. Role of V&V in SDLC—Tabular Form
2.7. Proof of Correctness (Formal Verification)
2.8. Simulation and Prototyping
2.9. Requirements Tracing
2.10. Software V&V Planning (SVVP)
2.11. Software Technical Reviews (STRs)
2.12. Independent V&V (IV&V) Contractor
2.13. Positive and Negative Effects of Software V&V on Projects
2.14. Standard for Software Test Documentation (IEEE829)
2.0. INTRODUCTION
Software that satisfies its user expectations is a necessary goal of a success-
ful software development organization. To achieve this goal, software engi-
neering practices must be applied throughout the evolution of the software
Verification versus Validation:
1. Verification is a static process of verifying documents, design, and code; validation is a dynamic process of validating/testing the actual product.
2. Verification does not involve executing the code; validation involves executing the code.
3. Verification is human-based checking of documents/files; validation is computer-based execution of the program.
4. The targets of verification are the requirements specification, application architecture, high-level and detailed design, and database design; the target of validation is the actual product: a unit, a module, a set of integrated modules, and the final product.
5. Verification uses methods like inspections, walkthroughs, desk-checking, etc.; validation uses methods like black-box, gray-box, and white-box testing.
6. Verification generally comes first, before validation; validation generally follows verification.
7. Verification answers the question "Are we building the product right?"; validation answers the question "Are we building the right product?"
8. Verification can catch errors that validation cannot catch; validation can catch errors that verification cannot catch.
Both of these are essential and complementary. Each provides its own
sets of error filters.
Each has its own way of finding the errors in the software.
Error Guessing
Interface Analysis
It is the detailed examination of the interface requirements specifications.
The evaluation criteria are the same as those for the requirements specification.
The main focus is on the interfaces between software, hardware, user, and
external software.
Criticality Analysis
Criticality is assigned to each software requirement. When requirements are
combined into functions, the combined criticality of the requirements forms
the criticality of the aggregate function. Criticality analysis is updated
periodically as requirement changes are introduced, because such changes
can cause an increase or decrease in a function's criticality, depending on
how the revised requirement impacts system criticality.
Criticality analysis is a method used to locate and reduce high-risk
problems and is performed at the beginning of the project. It identifies
the functions and modules that are required to implement critical program
functions or quality requirements like safety, security, etc.
Criticality analysis involves the following steps:
Step 1: Construct a block diagram or control flow diagram (CFD) of the
system and its elements. Each block will represent one software
function (or module) only.
Step 2: Trace each critical function or quality requirement through the
CFD.
Step 3: Classify all traced software functions as critical to:
a. Proper execution of critical software functions.
b. Proper execution of critical quality requirements.
Step 4: Focus additional analysis on these traced critical software
functions.
Step 5: Repeat criticality analysis for each life cycle process to determine
whether the implementation details shift the emphasis of the
criticality.
By program variable we broadly include input and output data, e.g., data
entered via a keyboard, displayed on a screen, or printed on paper. Any
externally observable aspect of the program’s execution may be covered by
the precondition and postcondition.
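A small sketch (illustrative, not from the text) of how a precondition and a postcondition on observable data can be written down and checked at run time:

#include <assert.h>
#include <stdio.h>

/* Illustrative sketch: integer average with an explicit precondition and
   postcondition (the postcondition holds for non-negative totals).        */
static int average_marks(int total, int count)
{
    assert(count > 0);                  /* precondition: count is positive  */
    int avg = total / count;
    assert(avg * count <= total);       /* postcondition: avg is the floor  */
    assert(total < (avg + 1) * count);  /* of total / count                 */
    return avg;
}

int main(void)
{
    printf("%d\n", average_marks(230, 5));   /* prints 46 */
    return 0;
}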
integrated V&V approach is very dependent upon the nature of the product
and the process used to develop it. Earlier, the waterfall approach was used
for testing; now an incremental approach is used. Regardless of the approach
selected, V&V progress must be tracked. Requirements/evaluation matrices
play a key role in this tracking by providing a means of ensuring that each
requirement of the product is addressed.
Step 7: Assessment
It is important that the software V&V plan provide for the ability to collect
data that can be used to assess both the product and the techniques used to
develop it. This involves careful collection of error and failure data, as well
as analysis and classification of these data.
Summary:
i. Complexity of software development and maintenance processes.
ii. Error frequencies for software work products.
iii. Error distribution throughout development phases.
iv. Increasing costs for error removal throughout the life cycle.
also does not exist for testing a specification or high level design. The
idea of testing a software test plan is also bewildering. Testing also does
not address quality issues or adherence to standards which are possible
with review processes.
Summary:
i. Exhaustive testing is impossible.
ii. Intermediate software products are largely untestable.
c. Reviews are a Form of Testing: The degree of formalism, scheduling,
and generally positive attitude afforded to testing must exist for software
technical reviews if quality products are to be produced.
Summary:
i. Objectives
ii. Human based versus machine based
iii. Attitudes and norms
d. Reviews are a Way of Tracking a Project: Through identification
of deliverables with well defined entry and exit criteria and successful
review of these deliverables, progress on a project can be followed and
managed more easily [Fagan]. In essence, review processes provide
milestones with teeth. This tracking is very beneficial for both project
management and customers.
Summary:
i. Individual developer tracking
ii. Management tracking
iii. Customer tracking
e. Reviews Provide Feedback: The instructor should discuss and
provide examples about the value of review processes for providing
feedback about software and its development process.
Summary:
i. Product ii. Process
Summary:
i. Project understanding ii. Technical skills
Inspections versus Walkthroughs:
1. Inspection is a five-step process that is well formalized; a walkthrough has fewer steps than an inspection and is a less formal process.
2. Inspections use checklists for locating errors; walkthroughs do not use a checklist.
3. Inspection is used to analyze the quality of the process; a walkthrough is used to improve the quality of the product.
4. The inspection process takes a longer time; a walkthrough is a shorter process.
5. Inspections focus on the training of junior staff; walkthroughs focus on finding defects.
Test Plan
1. Test-plan Identifier: Specifies the unique identifier assigned to the test
plan.
2. Introduction: Summarizes the software items and features to be tested,
provides references to the documents relevant for testing (for example,
overall project plan, quality assurance plan, configuration management
plan, applicable standards, etc.).
3. Test Items: Identifies the items to be tested including their version/
revision level, provides references to the relevant item documentation
(for example, requirements specification, design specification, user’s
guide, operations guide, installation guide, etc.), and identifies items
which are specifically excluded from testing.
4. Features to be Tested: Identifies all software features and their
combinations to be tested, identifies the test-design specification
associated with each feature and each combination of features.
5. Features not to be Tested: Identifies all features and significant
combinations of features which will not be tested, and the reasons for
this.
6. Approach: Describes the overall approach to testing (the testing activities
and techniques applied, the testing of non-functional requirements
such as performance and security, the tools used in testing); specifies
completion criteria (for example, error frequency or code coverage);
identifies significant constraints such as testing-resource availability and
strict deadlines; serves for estimating the testing efforts.
7. Item Pass/Fail Criteria: Specifies the criteria to be used to determine
whether each test item has passed or failed testing.
8. Suspension Criteria and Resumption: Specifies the criteria used to
suspend all or a portion of the testing activity on the test items (for
example, at the end of working day, due to hardware failure or other
external exception, etc.), specifies the testing activities which must be
repeated when testing is resumed.
Test-Case Specification
1. Test-case Specification Identifier: Specifies the unique identifier
assigned to this test-case specification.
2. Test Items: Identifies and briefly describes the items and features to
be exercised by this test case, supplies references to the relevant
item documentation (for example, requirements specification, design
specification, user’s guide, operations guide, installation guide, etc.).
3. Input Specifications: Specifies each input required to execute the test
case (by value with tolerance or by name); identifies all appropriate
databases, files, terminal messages, memory resident areas, and external
values passed by the operating system; specifies all required relationships
between inputs (for example, timing).
4. Output Specifications: Specifies all of the outputs and features (for
example, response time) required of the test items, provides the exact
value (with tolerances where appropriate) for each required output or
feature.
5. Environmental Needs: Specifies the hardware and software configuration
needed to execute this test case, as well as other requirements (such as
specially trained operators or testers).
6. Special Procedural Requirements: Describes any special constraints on
the test procedures which execute this test case (for example, special set-
up, operator intervention, etc.).
7. Intercase Dependencies: Lists the identifiers of test cases which must be
executed prior to this test case, describes the nature of the dependencies.
Inputs
Expected results
Actual results
Date and time
Test-procedure step
Environment
Repeatability (whether repeated; whether occurring always, occa-
sionally, or just once).
Testers
Other observers
Additional information that may help to isolate and correct the cause
of the incident; for example, the sequence of operational steps or his-
tory of user-interface commands that lead to the (bug) incident.
4. Impact: Priority of solving the incident/correcting the bug (urgent, high,
medium, low).
Test-Summary Report
1. Test-Summary-Report Identifier: Specifies the unique identifier assigned
to this report.
2. Summary: Summarizes the evaluation of the test items, identifies
the items tested (including their version/revision level), indicates
the environment in which the testing activities took place, supplies
references to the documentation over the testing process (for example,
test plan, test-design specifications, test-procedure specifications, test-
item transmittal reports, test logs, test-incident reports, etc.).
3. Variances: Reports any variances/deviations of the test items from
their design specifications, indicates any variances of the actual testing
process from the test plan or test procedures, specifies the reason for
each variance.
4. Comprehensiveness Assessment: Evaluates the comprehensiveness of
the actual testing process against the criteria specified in the test plan,
identifies features or feature combinations which were not sufficiently
tested and explains the reasons for omission.
5. Summary of Results: Summarizes the success of testing (such as
coverage), identifies all resolved and unresolved incidents.
SUMMARY
1. Software engineering technology has matured sufficiently to be addressed
in approved and draft software engineering standards and guidelines.
2. Business, industries, and government agencies spend billions annually
on computer software for many of their functions:
To manufacture their products.
To provide their services.
To administer their daily activities.
To perform their short- and long-term management functions.
3. As with other products, industries and businesses are discovering that
their increasing dependence on computer technology to perform these
functions emphasizes the need for safe, secure, reliable computer
systems. They are recognizing that software quality and reliability
are vital to their ability to maintain their competitiveness and high
technology posture in the market place. Software V&V is one of several
methodologies that can be used for building vital quality software.
ANSWERS
1. a. 2. c. 3. d. 4. a.
5. b. 6. b. 7. c. 8. d.
9. a. 10. d.
REVIEW QUESTIONS
1. a. Discuss briefly the V&V activities during the design phase of the
software development process.
b. Discuss the different forms of IV&V.
2. a. What is the importance of technical reviews in software development
and maintenance life cycle?
b. Briefly discuss how walkthroughs help in technical reviews.
3. a. Explain why validation is more difficult than verification.
b. Explain validation testing.
3
Black-Box (or Functional) Testing Techniques
Inside this Chapter:
3.0. Introduction to Black-Box (or Functional) Testing
3.1. Boundary Value Analysis (BVA)
3.2. Equivalence Class Testing
3.3. Decision Table Based Testing
3.4. Cause-Effect Graphing Technique
3.5. Comparison on Black-Box (or Functional) Testing Techniques
3.6. Kiviat Charts
of two (or more) faults. So we derive test cases by holding the values of all
but one variable at their nominal values and letting that variable assume its
extreme values.
If we have a function of n-variables, we hold all but one at the nominal
values and let the remaining variable assume the min, min+, nom, max–,
and max values, repeating this for each variable. Thus, for a function of n
variables, BVA yields (4n + 1) test cases.
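The sketch below (illustrative day/month/year ranges, not from the text) mechanically enumerates the 4n + 1 BVA test cases for n = 3 variables, holding all but one variable at its nominal value:

#include <stdio.h>

/* Sketch only: enumerate the 4n + 1 boundary value analysis test cases for
   n = 3 variables. Each variable in turn takes min, min+, max-, and max
   while the other variables stay at their nominal values.                  */
#define N 3

int main(void)
{
    int min[N] = { 1, 1, 1900 };
    int max[N] = { 31, 12, 2025 };
    int nom[N], tc = 0;

    for (int i = 0; i < N; i++)
        nom[i] = (min[i] + max[i]) / 2;          /* nominal values */

    printf("TC%-2d:", ++tc);                     /* the all-nominal test case */
    for (int i = 0; i < N; i++) printf(" %5d", nom[i]);
    printf("\n");

    for (int i = 0; i < N; i++) {                /* vary one variable at a time */
        int values[4] = { min[i], min[i] + 1, max[i] - 1, max[i] };
        for (int v = 0; v < 4; v++) {
            printf("TC%-2d:", ++tc);
            for (int j = 0; j < N; j++)
                printf(" %5d", (j == i) ? values[v] : nom[j]);
            printf("\n");
        }
    }
    return 0;                                    /* prints 4 * 3 + 1 = 13 cases */
}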
Please note that we explained above that we can have 13 test cases
(4n + 1) for this problem. But instead of 13, we now have 15 test cases. Also,
test case ID numbers 8 and 13 are redundant, so we ignore them. However,
we do not ignore test case ID number 3, as we must consider at least one test
case out of these three. Obviously, it is mechanical work!
We can say that these 13 test cases are sufficient to test this program
using BVA technique.
The commission program produced a monthly sales report that gave the total
number of locks, stocks, and barrels sold, the salesperson’s total dollar sales
and finally, the commission.
Out of these 15 test cases, 2 are redundant. So, 13 test cases are sufficient
to test this program.
In this technique, the input and the output domains are divided into a
finite number of equivalence classes. Then, we select one representative of
each class and test our program against it. The tester assumes that if one
representative from a class is able to detect an error, there is no need to
consider the other cases of that class. Furthermore, if this single representative
test case does not detect any error, we assume that no other test case of this
class can detect an error. In this method, we consider both valid and invalid
input domains. The system is still treated as a black-box, meaning that we are
not bothered about its internal logic.
The idea of equivalence class testing is to identify test cases by using one
element from each equivalence class. If the equivalence classes are chosen
wisely, the potential redundancy among test cases can be reduced.
In fact, we will always have the same number of weak equivalence class test
cases as there are classes in the partition.
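A small sketch (hypothetical grading example, not from the text) of selecting one representative per class; the number of weak normal equivalence class test cases equals the number of classes, and the invalid classes give the robust variants:

#include <stdio.h>

/* Hypothetical example: the valid domain 0..100 is split into three
   equivalence classes (fail, pass, distinction); one representative is
   picked from each class, plus one from each invalid class.            */
static const char *classify(int marks)
{
    if (marks < 0 || marks > 100) return "invalid";
    if (marks < 40)  return "fail";
    if (marks < 75)  return "pass";
    return "distinction";
}

int main(void)
{
    int valid_reps[]   = { 20, 60, 90 };   /* one per valid class   */
    int invalid_reps[] = { -5, 150 };      /* one per invalid class */

    for (int i = 0; i < 3; i++)
        printf("valid class %d: %4d -> %s\n",
               i + 1, valid_reps[i], classify(valid_reps[i]));
    for (int i = 0; i < 2; i++)
        printf("invalid class %d: %4d -> %s\n",
               i + 1, invalid_reps[i], classify(invalid_reps[i]));
    return 0;
}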
Just like we have truth tables in digital logic, we have similarities between
these truth tables and our pattern of test cases. The Cartesian product guar-
antees that we have a notion of “completeness” in two ways:
a. We cover all equivalence classes.
b. We have one of each possible combination of inputs.
2. Also, strongly typed languages like Pascal and Ada, eliminate the need
for the consideration of invalid inputs. Traditional equivalence testing
is a product of the time when languages such as FORTRAN, C, and
COBOL were dominant. Thus, this type of error was common.
3.2.5. Solved Examples
Please note that the expected outputs describe the invalid input values
thoroughly.
So, we get this test case on the basis of valid classes – M1, D1, and Y1
above.
c. Weak robust test cases are given below:
So, we get 7 test cases based on the valid and invalid classes of the input
domain.
d. Strong robust equivalence class test cases are given below:
As done earlier, the inputs are mechanically selected from the approximate
middle of the corresponding class:
So, three month classes, four day classes, and three year classes result
in 3 × 4 × 3 = 36 strong normal equivalence class test cases. Furthermore,
adding two invalid classes for each variable will result in (3 + 2) × (4 + 2) ×
(3 + 2) = 150 strong robust equivalence class test cases.
It is difficult to show these 150 classes here.
d. And finally, the strong robust equivalence class test cases are as follows:
Step 3. Transform this cause-effect graph, so obtained in step 2, to a
decision table.
Step 4. Convert decision table rules to test cases. Each column of the
decision table represents a test case. That is,
R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11
C1: a < b + c? F T T T T T T T T T T
C2: b < a + c? — F T T T T T T T T T
C3: c < a + b? — — F T T T T T T T T
C4: a = b? — — — T T T T F F F F
C5: a = c? — — — T T F F T T F F
C6: b = c? — — — T F T F T F T F
a1: Not a triangle → R1, R2, R3
a2: Scalene → R11
a3: Isosceles → R7, R9, R10
a4: Equilateral → R4
a5: Impossible → R5, R6, R8
Each “-” (hyphen) in the decision table represents a “don’t care” entry.
Use of such entries has a subtle effect on the way in which complete decision
tables are recognized. For limited entry decision tables, if n conditions exist,
there must be 2^n rules. When don't care entries indicate that the condition is
irrelevant, we can develop a rule count as follows:
Rule 1. Rules in which no “don’t care” entries occur count as one rule.
Note that each column of a decision table represents a rule and the
number of rules is equal to the number of test cases.
Rule 2. Each “don’t care” entry in a rule doubles the count of that rule.
Note that in this decision table we have 6 conditions (C1–C6). Therefore,
n = 6.
Also, we can have 2^n entries, i.e., 2^6 = 64 entries. Now we establish the
rule and the rule count for the above decision table.
R1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11
C1: a < b + c? F T T T T T T T T T T
C2: b < a + c? — F T T T T T T T T T
C3: c < a + b? — — F T T T T T T T T
C4: a = b? — — — T T T T F F F F
C5: a = c? — — — T T F F T T F F
C6: b = c? — — — T F T F T F T F
Rule Count 32 16 8 1 1 1 1 1 1 1 1 = 64
a1: Not a triangle → R1, R2, R3
a2: Scalene → R11
a3: Isosceles → R7, R9, R10
a4: Equilateral → R4
a5: Impossible → R5, R6, R8
From the previous table we find that the total rule count is 64. And we have
already said that 2^n = 2^6 = 64. So, both are 64.
The question, however, is to find out why the rule count is 32 for
Rule-1 (or column-1)?
We find that there are 5 don't cares in Rule-1 (or column-1) and hence
2^5 = 32. Hence, the rule count for Rule-1 is 32. Similarly, for Rule-2, it
is 2^4 = 16 and 2^3 = 8 for Rule-3. However, from Rule-4 through Rule-11, the
number of don't care entries is 0 (zero). So the rule count is 2^0 = 1 for all these
columns. Summing the rule counts of all columns (R1–R11) we get a total
of 64.
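This bookkeeping is easy to check mechanically; the sketch below (illustrative, using the don't care counts of the table above) verifies that the rule counts add up to 2^n:

#include <stdio.h>

/* Sketch: a rule (column) with d "don't care" entries stands for 2^d
   elementary rules; the counts over all 11 columns must add up to 2^6.  */
int main(void)
{
    int n = 6;                                              /* conditions C1..C6 */
    int dont_cares[11] = { 5, 4, 3, 0, 0, 0, 0, 0, 0, 0, 0 };
    int total = 0;

    for (int r = 0; r < 11; r++) {
        int count = 1 << dont_cares[r];                     /* 2^d */
        printf("Rule %2d: rule count = %2d\n", r + 1, count);
        total += count;
    }
    printf("Total = %d (should equal 2^%d = %d)\n", total, n, 1 << n);
    return 0;
}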
Many times some problems arise with these decision tables. Let us
see how.
Consider the following example of a redundant decision table:
Conditions 1–4 5 6 7 8 9
C1 T F F F F T
C2 — T T F F F
C3 — T F T F F
a1 × × × — — —
a2 — × × × — ×
a3 × — × × × ×
Please note that the action entries in Rule-9 and Rules 1–4 are NOT
identical. It means that if the decision table were to process a transaction in
which C1 is true and both C2 and C3 are false, both rules 4 and 9 apply. We
observe two things:
1. Rules 4 and 9 are inconsistent because the action sets are different.
2. The whole table is non-deterministic because there is no way to decide
whether to apply Rule-4 or Rule-9.
The bottom line for testers is this: they should take care when don't care
entries are being used in a decision table.
3.3.3. Examples
3.3.3.1. Test Cases for the Triangle Problem Using Decision Table Based
Testing Technique
We have already studied the problem domain for the famous triangle prob-
lem in previous chapters. Next we apply the decision table based technique
on the triangle problem. The following are the test cases:
So, we get a total of 11 functional test cases, out of which three are
impossible cases, three fail to satisfy the triangle property, one satisfies the
equilateral triangle property, one satisfies the scalene triangle property, and
three give an isosceles triangle.
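A sketch of the classification logic that this decision table models (names are illustrative; the impossible rules correspond to condition combinations that can never occur for real inputs, so they do not appear in code):

#include <stdio.h>

/* Sketch of the triangle classification behind the decision table:
   C1..C3 are the triangle property checks, C4..C6 the equality checks. */
static const char *triangle_type(int a, int b, int c)
{
    if (!(a < b + c) || !(b < a + c) || !(c < a + b))
        return "Not a triangle";
    if (a == b && b == c)           return "Equilateral";
    if (a == b || a == c || b == c) return "Isosceles";
    return "Scalene";
}

int main(void)
{
    /* one test case per reachable action of the decision table */
    printf("%s\n", triangle_type(1, 2, 5));   /* Not a triangle */
    printf("%s\n", triangle_type(5, 5, 5));   /* Equilateral    */
    printf("%s\n", triangle_type(5, 5, 3));   /* Isosceles      */
    printf("%s\n", triangle_type(3, 4, 5));   /* Scalene        */
    return 0;
}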
Conditions:
C1: month in M1
C2: month in M2
C3: month in M3
C4: day in D1
C5: day in D2
C6: day in D3
C7: day in D4
C8: year in Y1
Actions: a1: Impossible; a2: Next date
(The rule columns of this decision table are not reproduced here.)
Because we know that we have serious problems with the last day of
last month, i.e., December. We have to change month from 12 to 1. So, we
modify our classes as follows:
M1 = {month: month has 30 days}
M2 = {month: month has 31 days except December}
M3 = {month: month is December}
D1 = {day: 1 ≤ day ≤ 27}
D2 = {day: day = 28}
D3 = {day: day = 29}
D4 = {day: day = 30}
D5 = {day: day = 31}
Y1 = {year: year is a leap year}
Y2 = {year: year is a common year}
The Cartesian product of these contains 40 elements. Here, we have a
22-rule decision table. This table gives a clearer picture of the Next Date
function than does the 36-rule decision table and is given below:
In this table, the first five rules deal with 30-day months. Notice that the
leap year considerations are irrelevant. Rules 6–10 and 11–15 deal with
31-day months, where the first five deal with months other than December
and the second five deal with December. No impossible rules are listed in this
portion of the decision table.
Still there is some redundancy in this table. Eight of the ten rules simply
increment the day. Do we really require eight separate test cases for this
sub-function? No, but note the type of observation we get from the decision
table.
Finally, the last seven rules focus on February and leap year. This deci-
sion table analysis could have been done during the detailed design of the
Next Date function.
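The actions of that table (increment day, reset day, increment month, reset month, increment year) map directly onto code; the sketch below is illustrative only and omits input validation:

#include <stdio.h>

/* Sketch of the Next Date actions modeled by the decision table. */
static int is_leap(int y) { return (y % 400 == 0) || (y % 4 == 0 && y % 100 != 0); }

static int days_in(int month, int year)
{
    static const int d[12] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
    return (month == 2 && is_leap(year)) ? 29 : d[month - 1];
}

static void next_date(int *day, int *month, int *year)
{
    if (*day < days_in(*month, *year)) {   /* a2: increment day            */
        (*day)++;
        return;
    }
    *day = 1;                              /* a3: reset day                */
    if (*month < 12) {
        (*month)++;                        /* a4: increment month          */
    } else {
        *month = 1;                        /* a5: reset month              */
        (*year)++;                         /* a6: increment year           */
    }
}

int main(void)
{
    int d = 31, m = 12, y = 2018;          /* the December trouble spot    */
    next_date(&d, &m, &y);
    printf("%02d-%02d-%04d\n", d, m, y);   /* prints 01-01-2019            */
    return 0;
}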
Further simplification of this decision table can also be done. If the
action sets of two rules in a decision table are identical, there must be at least
one condition that allows two rules to be combined with a don’t care entry.
In a sense, we are identifying equivalence classes of these rules. For exam-
ple, rules 1, 2, and 3 involve day classes as D1, D2, and D3 (30 day classes).
These can be combined together as the action taken by them is the same.
Similarly, for other rules other combinations can be done. The correspond-
ing test cases are shown in the table as in Figure 3.17.
Action rows of the table (the × marks showing which rules trigger each
action are not reproduced here): a2: increment day; a3: reset day;
a4: increment month; a5: reset month; a6: increment year.
FIGURE 3.16 Decision Table for the Next Date Function.
Step 4. Because there are 11 rules, we get 11 test cases and they are:
1 2 3 4
Conditions C1 1 0 0 0
(or Causes) C2 0 1 0 0
C3 0 0 1 1
C4 1 0 0 0
C5 0 1 1 0
C6 0 0 0 1
Actions E1 × — — —
(or Effects) E2 — × — —
E3 — — × —
E4 — — — ×
That is, if C1 and C4 are 1 (or true), then the effect (or action) is E1.
Similarly, if C2 and C5 are 1 (or true), the action to be taken is E2, and so on.
Step 4. Because there are 4 rules in our decision table above, we must have
at least 4 test cases to test this system using this technique.
These test cases can be:
1. Salary = 20,000, Expenses = 2000
2. Salary = 100,000, Expenses = 10,000
3. Salary = 300,000, Expenses = 20,000
4. Salary = 300,000, Expenses = 50,000
So we can say that a decision table is used to derive the test cases which
can also take into account the boundary values.
3.5.1. Testing Effort
The functional methods that we have studied so far vary both in terms of the
number of test cases generated and the effort to develop these test cases.
To compare the three techniques, namely, boundary value analysis (BVA),
equivalence class partitioning, and decision table based technique, we con-
sider the following curve shown in Figure 3.21.
We can say that the effort required to identify test cases is the lowest in
BVA and the highest in decision tables. The end result is a trade-off between
the test case effort identification and test case execution effort. If we shift
our effort toward more sophisticated testing methods, we reduce our test
execution time. This is very important as tests are usually executed several
times. Also note that, judging testing quality in terms of the sheer number
of test cases has drawbacks similar to judging programming productivity in
terms of lines of code.
The examples that we have discussed so far show these trends.
3.5.2. Testing Efficiency
What we found in all of these functional testing strategies is that either the
functionality is untested or the test cases are redundant. So, gaps do occur in
functional test cases and these gaps are reduced by using more sophisticated
techniques.
We can develop various ratios of the total number of test cases generated
by method-A to those generated by method-B or even ratios on a test case
basis. This is more difficult but sometimes management demands numbers
even when they have little meaning. When we see several test cases with the
same purpose, we sense redundancy; detecting the gaps, however, is quite difficult. If we
use only functional testing, the best we can do is compare the test cases that
result from two methods. In general, the more sophisticated method will
help us recognize gaps but nothing is guaranteed.
3.5.3. Testing Effectiveness
How can we find out the effectiveness of the testing techniques?
a. By being dogmatic, we can select a method, use it to generate test
cases, and then run the test cases. We can improve on this by not
being dogmatic and allowing the tester to choose the most appropriate
method. We can gain another incremental improvement by devising
appropriate hybrid methods.
b. The second choice can be the structural testing techniques for the test
effectiveness. This will be discussed in subsequent chapters.
Note, however, that the best interpretation for testing effectiveness is
most difficult. We would like to know how effective a set of test cases is for
finding faults present in a program. This is problematic for two reasons.
1. It presumes we know all the faults in a program.
2. Proving that a program is fault free is equivalent to the famous halting
problem of computer science, which is known to be impossible.
The chart on the left shows that all metrics are well within the acceptable
range. The chart on the right shows an example where all metrics are above
maximum limits.
FIGURE 3.24
FIGURE 3.25
FIGURE 3.26
FIGURE 3.27
FIGURE 3.28
dimensional data?
The solution: express everything in terms of a common measure -
cost.
There are then two dimensions - utilization and cost - which when
Cost/Utilization—The Method
The following steps are shown below:
1. Choose factors to be measured.
2. Determine the cost of each factor as a percent of total system cost.
3. Determine the utilization of each factor.
4. Prepare a chart showing the cost and utilization of each factor.
5. Compute the measure of cost/utilization, F.
6. Compute the measure of balance, B.
7. Evaluate the resulting chart and measures.
Cost/Utilization—The Measures
Cost/Utilization: F = Σi ui pi
where: ui = percent utilization of factor i
       pi = cost contribution of factor i
Balance: B = 1 − 2 √( Σi (F − ui)² × pi )
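A small sketch of the computation, assuming the utilizations ui and cost contributions pi are given as fractions of 1 (factor names and values are illustrative only):

#include <stdio.h>
#include <math.h>

/* Sketch: F = sum(u[i] * p[i]) and B = 1 - 2 * sqrt(sum((F - u[i])^2 * p[i]))
   for illustrative utilization and cost-contribution values.                */
int main(void)
{
    double u[] = { 0.60, 0.45, 0.80 };   /* utilization of each factor        */
    double p[] = { 0.50, 0.30, 0.20 };   /* cost contribution of each factor  */
    int n = 3;

    double F = 0.0;
    for (int i = 0; i < n; i++)
        F += u[i] * p[i];

    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += (F - u[i]) * (F - u[i]) * p[i];
    double B = 1.0 - 2.0 * sqrt(s);

    printf("F = %.3f, B = %.3f\n", F, B);
    return 0;
}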
Conclusions
It is essential to maintain balance between system components in order to:
reduce costs.
SUMMARY
We summarize the scenarios under which each of these techniques will be
useful:
ANSWERS
1. c. 2. b. 3. b. 4. a.
5. b. 6. b. 7. a. 8. c.
9. a. 10. a.
Q. 11. Consider the above use case diagram for a coffee maker. Find at least
ten acceptance test cases and black-box test cases and document them.
Ans. Test cases for coffee maker.
Preconditions: Run coffee maker by switching on power supply.
Acceptance test cases (the "Actual result" and "Test status (P/F)" entries are filled in during execution):

Acc01 - Waiting state
Description: When the coffee maker is not in use, it waits for user input.
Expected result: System displays the menu: 1. Add recipe, 2. Delete recipe, 3. Edit a recipe, 4. Add inventory, 5. Check inventory, 6. Purchase beverage.

Acc02 - Add a recipe
Description: Only three recipes may be added to the coffee maker.
Step: Add the recipe. A recipe consists of a name, price, units of coffee, units of dairy creamer, units of chocolate, and water.
Expected result: Each recipe name must be unique in the recipe list.

Acc03 - Delete a recipe
Description: A recipe may be deleted from the coffee maker if it exists in the list of recipes in the coffee maker.
Step: Choose the recipe to be deleted by its name.
Expected result: A status message is printed and the coffee maker is returned to the waiting state.

Acc04 - Edit a recipe
Description: The user will be prompted for the recipe name they wish to edit.
Step: Enter the recipe name along with the various units.
Expected result: Upon completion, a status message is printed and the coffee maker is returned to the waiting state.

Acc05 - Add inventory
Description: Inventory may be added to the machine at any time. (Inventory is measured in integer units.)
Step: Type the inventory: coffee, dairy creamer, water.
Expected result: Inventory is added to the machine and a status message is printed.

Acc06 - Check inventory
Description: Inventory may be checked at any time.
Step: Enter the units of each item.
Expected result: System displays the inventory.

Acc07 - Purchase beverage
Description: The user will not be able to purchase a beverage if they do not deposit enough money.
Step: Enter the units and amount.
Expected result: 1. System dispenses the change if the user paid more than the price of the beverage. 2. System returns the user's money if there is not enough inventory.

Black-box test cases (Test ID, Description/steps, Expected results; actual results and test status P/F are recorded during execution):

checkOptions
Steps: Precondition: Run CoffeeMaker. Enter: 0 / 1 / 2 / 3 / 4 / 5 / 6.
Expected results: 0 - Program exits; 1 - Add recipe functionality; 2 - Delete recipe functionality; 3 - Edit recipe functionality; 4 - Add inventory functionality; 5 - Inventory displays; 6 - Make coffee functionality.

addRecipe1
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Coffee; Price: 10; Coffee: 3; Dairy creamer: 1; Chocolate: 0.
Expected results: Coffee successfully added. Return to main menu.

addRecipe2
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Coffee; Price: 10; Coffee: 3; Dairy creamer: 1; Chocolate: 0.
Expected results: Coffee could not be added. (Recipe name must be unique in the recipe list.) Return to main menu.

addRecipe3
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Mocha; Price: -50.
Expected results: Mocha could not be added. Price cannot be negative. Return to main menu.

addRecipe4
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Mocha; Price: 60; Coffee: -3.
Expected results: Mocha could not be added. Units of coffee cannot be negative. Return to main menu.

addRecipe5
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Mocha; Price: 20; Coffee: -3; Dairy creamer: -2.
Expected results: Mocha could not be added. Units of dairy creamer cannot be negative. Return to main menu.

addRecipe6
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Mocha; Price: 20; Coffee: 3; Dairy creamer: 2; Chocolate: -3.
Expected results: Mocha could not be added. Units of chocolate cannot be negative. Return to main menu.

addRecipe7
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Mocha; Price: a.
Expected results: Please input an integer. Return to main menu.

addRecipe8
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Mocha; Price: 20; Coffee: a.
Expected results: Please input an integer. Return to main menu.

addRecipe9
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Mocha; Price: 20; Coffee: 3; Dairy creamer: 2; Chocolate: a.
Expected results: Please input an integer. Return to main menu.

addRecipe10
Steps: Precondition: Run CoffeeMaker. Enter: 1; Name: Hot chocolate; Price: 20; Coffee: 3; Dairy creamer: 2; Chocolate: 3.
Expected results: Coffee successfully added. Return to main menu.

deleteRecipe1
Steps: Precondition: addRecipe1 has run successfully. Enter: 2; Enter: 3.
Expected results: Successfully deleted. Return to main menu.

deleteRecipe2
Steps: Precondition: Run CoffeeMaker. Enter: 2.
Expected results: There are no recipes to delete. Return to main menu.
REVIEW QUESTIONS
1. Perform the following:
a. Write a program to find the largest number.
b. Design test cases for the program in part (a) using a decision table.
c. Design equivalence class test cases.
2. Explain any five symbols used in the cause-effect graphing technique.
3. How do you measure:
a. Test effectiveness?
b. Test efficiency?
4. Write a short paragraph on:
a. Equivalence testing.
5. Explain the significance of boundary value analysis. What is the purpose
of worst case testing?
6. Describe cause-effect graphing technique with the help of an example.
7. a. Discuss different types of equivalence class tests cases.
b. Consider a program to classify a triangle. Its input is a triple of
integers (say x, y, z), and the data types of the input parameters ensure
that they will be integers greater than zero and less than or equal to 200.
The program output may be any of the following words: scalene,
isosceles, equilateral, right angle triangle, not a triangle. Design the
equivalence class test cases.
8. How can we measure and evaluate test effectiveness? Explain with the
help of the 11-step software testing process.
9. What is the difference between:
Equivalence partitioning and boundary value analysis methods?
10. Consider the previous date function and design test cases using the
following techniques:
a. Boundary value analysis.
b. Equivalence class partitioning.
The function takes current date as an input and returns the previous
date of the day as the output.
Design robust test cases and identify equivalence class test cases for
output and input domains for this problem.
20. What is the difference between weak normal and strong normal
equivalence class testing?
21. Consider a program for the determination of previous date. Its input is a
triple of day, month, and year with the values in the range:
1 ≤ month ≤ 12
1 ≤ day ≤ 31
1900 ≤ year ≤ 2025
The possible outputs are “Previous date” and “Invalid date.” Design a
decision table and equivalence classes for input domain.
22. Consider a program given below for the selection of the largest of
numbers.
main ( )
{
float A, B, C;
printf ("Enter 3 values:");
scanf ("%f%f%f", &A, &B, &C);
printf ("Largest value is ");
if (A > B)
{
if (A > C)
printf ("%f\n", A);
else
printf ("%f\n", C);
}
else
{
if (C > B)
printf ("%f", C);
else
printf ("%f", B);
}
}
a. Design the set of test cases using the BVA technique and equivalence
class testing technique.
b. Select a set of test cases that will provide 100% statement coverage.
c. Develop a decision table for this program.
23. Consider the above program and show why it is practically impossible
to do exhaustive testing.
24. a. Consider the following point-based evaluation system for a trainee
salesman of an organization:
The marks of any three subjects are considered for the calculation of
average marks. Scholarships of $1000 and $500 are given to students
securing more than 90% and 85% marks, respectively. Develop a
decision table, cause effect graph, and generate test cases for the above
scenario.
4
White-Box (or Structural) Testing Techniques
Inside this Chapter:
4.0. Introduction to White-Box Testing or Structural Testing or
Clear-Box or Glass-Box or Open-Box Testing
4.1. Static Versus Dynamic White-Box Testing
4.2. Dynamic White-Box Testing Techniques
4.3. Mutation Testing Versus Error Seeding—Differences in Tabular
Form
4.4. Comparison of Black-Box and White-Box Testing in Tabular
Form
4.5. Practical Challenges in White-Box Testing
4.6. Comparison on Various White-Box Testing Techniques
4.7. Advantages of White-Box Testing
4.2.2.1. Statement Coverage
In most of the programming languages, the program construct may be a
sequential control flow, a two-way decision statement like if-then-else, a
multi-way decision statement like switch, or even loops like while, do, repeat
until and for.
Statement coverage refers to writing test cases that execute each of the
program statements. We assume that the more the code is covered, the bet-
ter the testing of the functionality.
For a set of sequential statements (i.e., with no conditional branches),
test cases can be designed to run through from top to bottom. However, this
may not always be true in two cases:
1. If there are asynchronous exceptions in the code, like divide by zero,
then even if we start a test case at the beginning of a section, the test case
may not cover all the statements in that section. Thus, even in the case of
sequential statements, coverage for all statements may not be achieved.
2. A section of code may be entered from multiple points.
In case of an if-then-else statement, if we want to cover all the state-
ments then we should also cover the “then” and “else” parts of the if state-
ment. This means that we should have, for each if-then-else, at least one test
case to test the “then” part and at least one test case to test the “else” part.
The multi-way, switch statement can be reduced to multiple two-way if
statements. Thus, to cover all possible switch cases, there would be multiple
test cases.
Statement Coverage = (Total Statements Exercised / Total Number of
Executable Statements in Program) × 100
i = 0 ;
if (code == "y")
{
    statement-1 ;
    statement-2 ;
    :
    :
    statement-n ;
}
else
    result = (marks / i) * 100 ;   /* divide by zero when i is 0 */
In this program, when we test with code = “y,” we will get 80% code
coverage. But if the data distribution in the real world is such that 90% of the
time the value of code is not = “y,” then the program will fail 90% of the time
because of the divide-by-zero exception. Thus, even with a code coverage of
80%, we are left with a defect that hits the users 90% of the time. The path
coverage technique, discussed next, overcomes this problem.
Path Coverage = (Total Paths Exercised / Total Number of Paths in
Program) × 100
Condition Coverage = (Total Decisions Exercised / Total Number of
Decisions in Program) × 100
4.2.2.4. Function Coverage
In this white-box testing technique, we try to identify how many program
functions are covered by test cases. So, while providing function coverage,
test cases can be written so as to exercise each of different functions in the
code.
The following are the advantages of this technique:
1. Functions (like functions in C) are easier to identify in a program and,
hence, it is easier to write test cases to provide function coverage.
2. Because functions are at a higher level of abstraction than code, it is
easier to achieve 100% function coverage.
3. It is easier to prioritize the functions for testing.
4. Function coverage provides a way of testing traceability, that is, tracing
requirements through design, coding, and testing phases.
5. Function coverage provides a natural transition to black-box testing.
Function coverage can help in improving the performance as well as
the quality of the product. For example, if in a networking software, we find
that the function that assembles and disassembles the data packets is being
used most often, it is appropriate to spend extra effort in improving the qual-
ity and performance of that function. Thus, function coverage can help in
improving the performance as well as the quality of the product.
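The idea is easy to sketch with per-function hit counters (illustrative code, not a real coverage tool; the function names are hypothetical):

#include <stdio.h>

/* Sketch: count calls per function and report which functions the test
   cases exercised, i.e., the function coverage achieved.               */
static int hits[2];

static void assemble_packet(void)    { hits[0]++; /* ... */ }
static void disassemble_packet(void) { hits[1]++; /* ... */ }

int main(void)
{
    assemble_packet();          /* "test cases" exercising only one function */
    assemble_packet();

    const char *names[2] = { "assemble_packet", "disassemble_packet" };
    int covered = 0;
    for (int i = 0; i < 2; i++) {
        printf("%-20s : %s (%d calls)\n", names[i],
               hits[i] ? "covered" : "NOT covered", hits[i]);
        if (hits[i]) covered++;
    }
    printf("Function coverage = %d%%\n", covered * 100 / 2);
    return 0;
}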
Better code coverage is the result of better code flow understanding and
writing effective test cases. Code coverage up to 40–50% is usually achiev-
able. Code coverage of more than 80% requires an enormous amount of
effort and understanding of the code.
The multiple code coverage techniques discussed so far are not mutu-
ally exclusive. They supplement and augment one another. While statement
coverage can provide a basic comfort factor, path, decision, and function
coverage provide more confidence by exercising various logical paths and
functions.
McCabe IQ covers about 146 different counts and measures. These
metrics are grouped according to six main "collections," each of which provides
a different level of granularity and information about the code being
analyzed. The collections are given below:
i. McCabe metrics based on cyclomatic complexity, V(G).
ii. Execution coverage metrics based on any of branch, path, or Boolean
coverage.
iii. Code grammar metrics based around line counts and code structure
counts such as nesting.
iv. OO metrics based on the work of Chidamber and Kemerer.
v. Derived metrics based on abstract concepts such as understandability,
maintainability, comprehension, and testability.
vi. Custom metrics imported from third-party software/systems, e.g.,
defect count.
McCabe IQ provides for about 100 individual metrics at the method,
procedure, function, control, and section/paragraph level. Also, there are 40
metrics at the class/file and program level.
Categories of Metrics
There are three categories of metrics:
1. McCabe metrics
2. OO metrics
3. Grammar metrics
Please remember that when collecting metrics, we rely upon subordinates
who need to “buy into” the metrics program. Hence, it is important to only
collect what you intend to use.
We should keep in mind the Hawthorne Effect, which states that when
you collect metrics on people, the people being measured will change their
behavior. Either of these practices will destroy the efficiency of any metrics
program.
The three metrics categories are explained below.
EDM = Essential Complexity / Cyclomatic Complexity
CD = Decisions Made / Lines of Executable Code
If the path coverage is < 90% for new code or 70% for code under
maintenance then the test scripts require review and enhancement.
h. Boolean coverage: A technique used to establish that each condition
within a decision is shown by execution to independently and correctly
affect the outcome of the decision.
The major application of this technique is in safety critical sys-
tems and projects.
i. Combining McCabe metrics: Cyclomatic complexity is the basic
indicator for determining the complexity of logic in a unit of code. It
can be combined with other metrics.
2. Code refactoring
If V(G) > 10 and the condition
V(G) – EV(g) ≤ V(g) is true
Then, the code is a candidate for refactoring.
4. Test coverage
If the graph between V(G) against path coverage does not show a linear
increase then the test scripts need to be reviewed.
II. OO Metrics
a. Average V(G) for a class: If average V(G) > 10 then this metric
indicates a high level of logic in the methods of the class which in turn
indicates a possible dilution of the original object model. If the average
is high, then the class should be reviewed for possible refactoring.
b. Average essential complexity for a class: If the average is greater
than one then it may indicate a dilution of the original object model.
If the average is high, then the class should be reviewed for possible
refactoring.
c. Number of parents: If the number of parents for a class is greater
than one then it indicates a potentially overly complex inheritance
tree.
d. Response for class (RFC): RFC is the count of all methods within
a class plus the number of methods accessible to an object of this
class due to implementation. Please note that the larger the number
of methods that can be invoked in response to a message, the greater
the difficulty in comprehension and testing of the class. Also, note
that low values indicate greater specialization. If the RFC is high then
making changes to this class will be increasingly difficult due to the
extended impact to other classes (or methods).
e. Weighted methods for class (WMC): WMC is the count of
methods implemented in a class. It is a strong recommendation that
WMC does not exceed the value of 14. This metric is used to show
the effort required to rewrite or modify the class. The aim is to keep
this metric low.
f. Coupling between objects (CBO): It indicates the number of non-
inherited classes this class depends on. It shows the degree to which
this class can be reused.
For dynamic link libraries (DLLs) this measure is high as the soft-
ware is deployed as a complete entity.
For executables (.exe), it is low as here reuse is to be encouraged.
Please remember this point:
What is to be done?
The percentages of methods in a class using an attribute are averaged
and subtracted from 100. This measure is expressed in percentage.
Two cases arise:
i. If % is low, it means simplicity and high reusability.
ii. If % is high, it means a class is a candidate for refactoring and
could be split into two or more subclasses with low cohesion.
j. Combined OO metrics: V(G) can also be used to evaluate OO
systems. It is used with OO metrics to find out the suitable candidates
for refactoring.
By refactoring, we mean making a small change to the code which
improves its design without changing its semantics.
any path through the program that introduces at least one new set of
processing statements or a new condition. See the following steps:
Step 1. Construction of flow graph from the source code or flow charts.
Step 2. Identification of independent paths.
Step 3. Computation of cyclomatic complexity.
Step 4. Test cases are designed.
Using the flow graph, an independent path can be defined as a path in
the flow graph that has at least one edge that has not been traversed before
in other paths. A set of independent paths that cover all the edges is a basis
set. Once the basis set is formed, test cases should be written to execute all
the paths in the basis set.
We next show the basic notations that are used to draw a flow graph:
SOLVED EXAMPLES
EXAMPLE 4.1. Consider the following code:
void foo (float y, float *a, int n)
{
    float x = sin (y) ;
    if (x > 0.01)
        z = tan (x) ;
    else
        z = cos (x) ;
    for (int i = 0 ; i < x ; ++i) {
        a[i] = a[i] * z ;
        cout << a[i] ;
    }
}
Draw its flow graph, find its cyclomatic complexity, V(G), and the independ-
ent paths.
SOLUTION. First, we try to number the nodes, as follows:
1. void foo (float y, float a *, int n)
{
float x = sin (y) ;
if (x > 0.01)
7.
cout << i ;
}
So, its flow graph is shown in Figure 4.4. Next, we try to find V(G) by
three methods:
This means that we must execute these paths at least once in order to
test the program thoroughly. So, test cases can be designed.
EXAMPLE 4.2. Consider the following program that inputs the marks of five
subjects of 40 students and outputs average marks and the pass/fail message.
#include <stdio.h>
(1)  main ( ) {
(2)      int num_student, marks, subject, total;
(3)      float average ;
(4)      num_student = 1;
(5)      while (num_student <= 40) {
(6)          total = 0 ;
(7)          subject = 1;
(8)          while (subject <= 5) {
(9)              scanf ("Enter marks: %d", &marks);
(10)             total = total + marks ;
(11)             subject ++;
(12)         }
(13)         average = total/5 ;
(14)         if (average >= 50)
(15)             printf ("Pass... Average marks = %f", average);
(16)         else
(17)             printf ("FAIL... Average marks are %f", average);
(18)         num_student ++;
(19)     }
(20)     printf ("end of program");
(21) }
Draw its flow graph and compute its V(G). Also identify the independent
paths.
SOLUTION. The process of constructing the flow graph starts with dividing
the program into parts where flow of control has a single entry and exit point.
In this program, line numbers 2 to 4 are grouped as one node (marked as
“a”) only. This is because it consists of declaration and initialization of varia-
bles. The second part comprises the outer while loop, from lines 5 to 19,
and the third part is a single printf statement at line number 20.
Note that the second part is again divided into four parts: statements
of lines 6 and 7, lines 8 to 12, line 13, and lines 14–17, i.e., the if-then-else
structure. Using the flow graph notation, we get the flow graph shown
in Figure 4.5.
Here, “∗” indicates that the node
is a predicate node, i.e., it has an
outdegree of 2.
The statements corre-
sponding to various nodes are
given below:
Node    Statement numbers
a       2–4
b       5
e       6–7
f       8
z       9–12
g       13–14
h       15
i       17
j       18
c       19
d       20
FIGURE 4.5 Flow Graph for Example 4.2.
FIGURE 4.9
NOTES
1. In test case id-2, we call the GCD function recursively with x = x – y and y as it is.
2. In test case id-3, we call the GCD function recursively with y = y – x and x as it is.
SOLUTION.
Cyclomatic complexity is a software metric that provides a quantitative
measure of the logical complexity of a program.
Cyclomatic complexity has a foundation in graph theory and is computed in one of three ways:
i. The number of regions of the flow graph corresponds to the cyclomatic complexity.
ii. V(G) = E – N + 2, where E is the number of edges and N is the number of nodes.
iii. V(G) = P + 1, where P is the number of predicate nodes.
Referring to the flow graph, the cyclomatic number is:
1. The flow graph has three regions.
2. Complexity = 8 edges – 7 nodes + 2 = 3
3. Complexity = 2 predicate nodes + 1 = 3 (Predicate nodes = C1, C2)
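As a small illustration (a sketch of the arithmetic only, not code from the text), the last two methods can be computed directly from the stated counts of 8 edges, 7 nodes, and 2 predicate nodes:

#include <stdio.h>

int main(void)
{
    int edges = 8, nodes = 7, predicate_nodes = 2;

    int vg_from_edges      = edges - nodes + 2;    /* V(G) = E - N + 2 */
    int vg_from_predicates = predicate_nodes + 1;  /* V(G) = P + 1     */

    printf("V(G) by edges and nodes = %d\n", vg_from_edges);       /* prints 3 */
    printf("V(G) by predicate nodes = %d\n", vg_from_predicates);  /* prints 3 */
    return 0;
}

Both methods agree with the region count, as expected for a well-formed flow graph.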
FIGURE 4.11
Two test cases are required for complete branch coverage and four test cases
are required for complete path coverage.
Assumptions:
c1: if(i%2==0)
f1: EVEN()
f2: ODD()
c2: if(j > 0)
f3: POSITIVE()
f4: NEGATIVE()
i.e.
if(i%2==0){
EVEN();
}else{
ODD();
}
if(j > 0){
POSITIVE();
}else{
NEGATIVE();
}
The test cases that satisfy the branch coverage criteria are <c1, f1, c2, f3> and <c1, f2, c2, f4>; for example, (i = 2, j = 1) exercises the first and (i = 3, j = –1) exercises the second.
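A hypothetical driver (not from the text) makes these two test cases concrete; EVEN, ODD, POSITIVE, and NEGATIVE are stubbed only so that the sketch is self-contained:

#include <stdio.h>

static void EVEN(void)     { printf("EVEN ");     }
static void ODD(void)      { printf("ODD ");      }
static void POSITIVE(void) { printf("POSITIVE "); }
static void NEGATIVE(void) { printf("NEGATIVE "); }

static void classify(int i, int j)
{
    if (i % 2 == 0) { EVEN(); } else { ODD(); }           /* c1 */
    if (j > 0)      { POSITIVE(); } else { NEGATIVE(); }  /* c2 */
    printf("\n");
}

int main(void)
{
    classify(2,  1);   /* covers <c1, f1, c2, f3> */
    classify(3, -1);   /* covers <c1, f2, c2, f4> */
    return 0;
}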
SOLUTION.
Nodes Lines
A 0, 1, 2
B, C, D 3, 4, 5
E 6
F 7
G, H, I 8, 9, 10
J 11
FIGURE 4.12 FIGURE 4.13
6 if (x ≤ 0) {
7 if(y ≥ 0){
8 z = y*z + 1;
9 }
10 }
11 else {
12 z = 1/x;
13 }
14 y = x * y + z
15 count = count – 1
16 while (count > 0)
17 output (z);
18 end
Draw its data-flow graph. Find out whether paths (1, 2, 5, 6) and (6, 2, 5, 6)
are def-clear or not. Find all def-use pairs?
SOLUTION. We draw its data flow graph (DD-path graph) first.
FIGURE 4.14
FIGURE 4.15 Flow Graph Example. FIGURE 4.16 Graph Matrix.
1 2 3
1 a d
2 b+e
3 c
FIGURE 4.17 Flow Graph Example. FIGURE 4.18 Graph Matrix.
Note that if there are several links between two nodes then “+” sign denotes a
parallel link.
This is, however, not very useful. So, we assign a weight to each entry
of the graph matrix. We use “1” to denote that the edge is present and “0”
to show its absence. Such a matrix is known as a connection matrix. For the
Figure above, we will have the following connection matrix:
FIGURE 4.19 Connection Matrix. FIGURE 4.20
(Figure 4.20 repeats the connection matrix with a row sum written alongside each non-empty row: row 1 contributes 2 – 1 = 1, rows 3 and 4 each contribute 1 – 1 = 0, and adding 1 to the total gives 1 + 1 = 2 = V(G).)
Now, we want to compute its V(G). For this, we draw the matrix again and sum each row. Then, we subtract 1 from each non-zero row sum, add the results, and add 1 to the total. This gives us V(G). For the above matrix, V(G) = 2.
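A minimal sketch (not from the text) of this procedure follows; the matrix entries are an assumption chosen only to reproduce the row sums described above, since the exact edges of Figure 4.19 are not repeated here:

#include <stdio.h>

#define N 4

int main(void)
{
    /* connection matrix: c[i][j] == 1 means an edge from node i+1 to node j+1 */
    int c[N][N] = {
        { 0, 1, 1, 0 },   /* node 1: two outgoing edges, row sum 2 */
        { 0, 0, 0, 0 },   /* node 2: no outgoing edges             */
        { 0, 0, 0, 1 },   /* node 3: one outgoing edge             */
        { 0, 1, 0, 0 }    /* node 4: one outgoing edge             */
    };

    int vg = 1;                           /* the final "+ 1" */
    for (int i = 0; i < N; i++) {
        int row_sum = 0;
        for (int j = 0; j < N; j++)
            row_sum += c[i][j];
        if (row_sum > 0)
            vg += row_sum - 1;            /* subtract 1 from each non-zero row sum */
    }
    printf("V(G) = %d\n", vg);            /* prints 2 for this matrix */
    return 0;
}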
EXAMPLE 4.10. Consider the following flow graph:
FIGURE 4.21
FIGURE 4.22 FIGURE 4.23
Similarly, we can find the two-link or three-link path matrices, i.e., A^2, A^3, ..., A^(n–1). These operations are easy to program and can be used as a testing tool.
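As a short, assumption-based continuation of the sketch above (again, not code from the text), squaring the connection matrix A gives A^2, whose entry (i, j) counts the two-link paths from node i to node j:

#include <stdio.h>

#define N 4

int main(void)
{
    int a[N][N] = {
        { 0, 1, 1, 0 },
        { 0, 0, 0, 0 },
        { 0, 0, 0, 1 },
        { 0, 1, 0, 0 }
    };
    int a2[N][N] = { 0 };

    /* a2 = a * a : ordinary matrix multiplication */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++)
                a2[i][j] += a[i][k] * a[k][j];

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%d ", a2[i][j]);      /* number of 2-link paths i -> j */
        printf("\n");
    }
    return 0;
}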
i. The flow graph of a given program is as follows (Figure 4.24).
ii. The def/use graph for this program is as follows (Figure 4.25).
FIGURE 4.24    FIGURE 4.25
Now, let us find its dcu and dpu. We draw a table again as follows:
30 then
31     commission = 0.10 * 1000.0
32     commission = commission + 0.15 * 800
33     commission = commission + 0.20 * (sales – 1800)
34 else if (sales > 1000.0)
35     then
36         commission = 0.10 * 1000.0
37         commission = commission + 0.15 * (sales – 1000.0)
38     else commission = 0.10 * sales
39 endif
40 endif
41 output ("Commission is $", commission)
42 end commission
Now, we draw its flow graph first:
DD-path Nodes
A 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
B 14
C 15, 16, 17, 18, 19
D 20, 21, 22, 23, 24, 25, 26, 27, 28
E 29
F 30, 31, 32, 33
G 34
H 35, 36, 37
I 38
J 39
K 40
L       41, 42
FIGURE 4.27 DD Path Graph.
The initial definition of the variable "totalstocks" occurs at node-11 and it is first used at node-17. Thus, the path (11, 17), which consists of the node sequence <11, 12, 13, 14, 15, 16, 17>, is definition clear.
The path (11, 22), which consists of the node sequence <11, 12, 13, 14, 15, 16, 17, 18, 19, 20>* and <21, 22>, is not definition clear because totalstocks is defined at node-11 and again at node-17. The asterisk, *, is used to denote zero or more repetitions.
Thus, out of 43 du-paths, 8 paths, namely, (11, 22), (17, 25), (17, 22), (17, 25), (31, 33), (31, 41), (32, 41), and (36, 41), are not definition clear. These 8 paths are the main culprits and thus the likely cause of errors.
Step 2: Next, we tabulate the data actions. The D and U actions in the xyz class can be read from the code, where D = Define and U = Use.
Paths from node-1:
1–5 DUD
1–2 DUD
Paths from node-2:
2–3 DD
2–4 DD
Paths from node-3:
3–6–7 DUD
3–6–8–10 DU–
Paths from node-4:
4–6–7 DUD
4–6–8–10 DU–
Paths from node-5:
5–6–7 DUD
5–6–8–10 DU–
Paths from node-7:
5–6–7 DUD
5–6–8–10 DU–
Similarly, the trace data flow for “Z” is as
follows: Define/Use Paths for Z are shown
in Figure 4.31. The variable Z has d actions
at nodes-1 and-7.
∴ Paths from node-1 are:
1–5–6–7          DUD
1–5–6–8–9        DU–
1–5–6–8          DU–
1–2–3–6–7        DUD
1–2–4–6–7        DUD
1–2–3–6–8–9      DU–
1–2–3–6–8        DU–
Paths from node-7:
7–6–8–9          DU–
7–6–7            DU–
7–6–8            DU–
(where D = Define and U = Use)
FIGURE 4.31 Trace Data Flow for "Z."
Step 4: Merge the paths
a. Drop sub-paths: Many of the tabulated paths are sub-paths. For
example,
{1 2 3 6 8} is a sub-path of {1 2 3 6 8 9}.
So, we drop the sub-path.
b. Connect linkable paths: Paths that end and start on the same node
can be linked. For example,
{1 5} {5 6 8 10}
becomes {1 5 6 8 10}
We cannot merge paths with mutually exclusive decisions. For example,
{1 5 6 8 9} and {1 2 3 6 7}
cannot be merged because they represent the predicate branches from
node-1.
So, merging all the traced paths provides the following set of paths:
The (7–6)* means that the path 7–6 (the loop) can be iterated. We require that a loop be iterated at least twice.
Step 5: Check testability
We need to check path feasibility.
Try path 1: {1–5–6–8–10}
Condition Comment
1. x ≤ 10 Force branch to node-5
2. y′ = z + x Calculation on node-5
3. x ≤ z + x Skip node-7, proceed to node-8
4. x ≤ z Skip node-9, proceed to node-10
The values that satisfy all these constraints can be found by trial and error,
graphing, or solving the resulting system of linear inequalities.
The set x = 0, y = 0, z = 0 works, so this path is feasible.
Try path 5: {1–2–3–6–8–10}
Condition Comment
1. x > 10 Force branch to node-2
2. x′ = x + 2 Calculation on node-2
3. y′ = y – 4 Calculation on node-2
4. x′ > z Force branch to node-3
5. x′ ≤ y′ Skip node-7, proceed to node-8
6. x′ ≤ z Skip node-9, proceed to node-10
Test Data
                                            Input test data       Output
Path  Nodes visited                         x     y     z       B9     B10
1.    1–5–6–8–10                            0     0     0        –      0
                                            9     0     9        –     18
2.    1–5–6–8–9–10                          9     0     8       17     25
                                            9     0     0        9     18
3.    1–5–6–(7–6)*–8–10                    –1     0    –1        –     –2
4.    1–5–6–(7–6)*–8–9–10                   0     0     1       –1     –1
                                            9     0    –1       –1     18
7.    1–2–3–6–(7–6)*–8–10                  10     0    62        –     24
8.    1–2–3–4–(7–6)*–8–9–10                10     0    12      –26     24
                                           10     0    61       23     24
10.   1–2–3–6–8–9–10                       10     0     0       12     24
12.   1–2–3–6–(7–6)*–8–9–10                10     0     1        1     24
Functional testing techniques always result in a set of test cases, and structural metrics are always expressed in terms of something countable, such as the number of program paths, the number of decision-to-decision paths (DD-paths), and so on.
Figures 4.32 and 4.33 show the trends for the number of test cover-
age items and the effort to identify them as functions of structural testing
methods, respectively. These graphs illustrate the importance of choosing an
appropriate structural coverage metric.
SUMMARY
1. White-box testing can cover the following issues:
a. Memory leaks
b. Uninitialized memory
c. Garbage collection issues (in JAVA)
2. We should also know about white-box testing tools. Some of them are listed below:
a. Purify by Rational Software Corporation
b. Insure++ by ParaSoft Corporation
c. Quantify by Rational Software Corporation
d. Expeditor by OneRealm Inc.
ANSWERS
1. b. 2. b. 3. a. 4. b.
5. a. 6. a. 7. b. 8. b.
9. c. 10. a.
FIGURE 4.34
REVIEW QUESTIONS
1. White-box testing is complementary to black-box testing, not alternative.
Why? Give an example to prove this statement.
2. a. What is a flow graph and what is it used for?
b. Explain the type of testing done using a flow graph.
3. Perform the following:
a. Write a program to determine whether a number is even or odd.
b. Draw the paths graph and flow graph for the above problem.
c. Determine the independent path for the above.
4. Why is exhaustive testing not possible?
5. a. Draw the flow graph of a binary search routine and find its independent
paths.
FIGURE 4.35
14. What is data flow testing? Explain du-paths. Identify du- and dc-paths of
any example of your choice. Show those du-paths that are not dc-paths.
15. Write a short paragraph on data flow testing.
16. Explain the usefulness of error guessing testing technique.
17. Discuss the pros and cons of structural testing.
18. a. What are the problems faced during path testing? How can they be minimized?
b. Given the source code below:
void foo (int a, int b, int c, int d, int e) {
    if (a == 0) {
        return;
    }
    int x = 0;
    if ((a == b) || (c == d)) {
        x = 1;
    }
    e = 1/x;
}
List the test cases for statement coverage, branch coverage, and condition
coverage.
19. Why is error seeding performed? How is it different from mutation testing?
20. a. Describe all methods to calculate the cyclomatic complexity.
b. What is the use of graph matrices?
21. Write a program to calculate the average of 10 numbers. Using data flow
testing design all du- paths and dc-paths in this program.
22. Write a short paragraph on mutation testing.
23. Write a C/C++ program to multiply two matrices. Try to take care of as many valid and invalid conditions as possible. Identify the test data. Justify your selection.
24. Discuss the negative effects of the following constructs from the white-
box testing point of view:
a. GO TO statements
b. Global variables
25. Write a C/C++ program to count the number of characters, blanks, and
tabs in a line. Perform the following:
a. Draw its flow graph.
b. Draw its DD-paths graph.
c. Find its V(G).
d. Identify du-paths.
e. Identify dc-paths.
26. Write the independent paths in the following DD-path graph (Figure 4.36). Also calculate their number mathematically and name the decision nodes shown in the figure.
27. What are the properties of cyclomatic complexity?
FIGURE 4.36
28. Explain in detail the process to ensure the correctness of data flow in a
given fragment of code.
#include <stdio.h>

int check (int m);

int main ( )
{
    int K = 35, Z;
    Z = check (K);
    printf ("\n%d", Z);
    return 0;
}

int check (int m)
{
    if (m > 40)
        return (1);
    else
        return (0);
}
29. Write a C program for finding the maximum and minimum out of three
numbers and compute its cyclomatic complexity using all possible
methods.
30. Consider the following program segment:
void sort (int a[ ], int n)
{
    int i, j, temp;
    for (i = 1; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (a[i] > a[j])
            {
                temp = a[i];
                a[i] = a[j];
                a[j] = temp;
            }
}
i. Draw the control flow graph for this program segment.
ii. Determine the cyclomatic complexity for this program (give all
intermediate steps).
iii. How is cyclomatic complexity metric useful?
31. Explain data flow testing. Consider an example and show all “du” paths.
Also identify those “du” paths that are not “dc” paths.
32. Consider a program to find the roots of a quadratic equation. Its input is
a triplet of three positive integers (say a, b, c) from the interval [1, 100].
The output may be one of the following words—real roots, equal roots,
imaginary roots. Find all du-paths and those that are dc-paths. Develop
data flow test cases.
33. If the pseudocode below were a programming language, list the test
cases (separately) required to achieve 100% statement coverage and
path coverage.
1. If x = 3 then
2. Display_message x;
3. If y = 2 then
4. Display_message y;
5. Else
6. Display_message z;
7. Else
8. Display_message z;
34. Consider a program to classify a triangle. Draw its flow graph and DD-
path graph.
5
Gray-Box Testing
Inside this Chapter:
5.0. Introduction to Gray-Box Testing
5.1. What Is Gray-Box Testing?
5.2. Various Other Definitions of Gray-Box Testing
5.3. Comparison of White-Box, Black-Box, and Gray-Box Testing
Approaches in Tabular Form
SUMMARY
1. As testers, we get ideas for test cases from a wide range of knowl-
edge areas. This is partially because testing is much more effective
when we know what types of bugs we are looking for. As testers of
complex systems, we should strive to attain a broad balance in our
knowledge, learning enough about many aspects of the software and
systems being tested to create a battery of tests that can challenge
the software as deeply as it will be challenged in the rough and tum-
ble day-to-day use.
2. Not every tester in a test team needs to be a gray-box tester. The greater the mix of different types of testers in a team, the better the chances of success.
ANSWERS
1. a. 2. b. 3. c. 4. b. 5. c.
REVIEW QUESTIONS
6
Reducing the Number of
Test Cases
Inside this Chapter:
6.0. Prioritization Guidelines
6.1. Priority Category Scheme
6.2. Risk Analysis
6.3. Regression Testing—Overview
6.4. Prioritization of Test Cases for Regression Testing
6.5. Regression Testing Technique—A Case Study
6.6. Slice Based Testing
There are four schemes that are used for prioritizing the existing set of test cases. These reduction schemes are as follows:
1. Priority category scheme
2. Risk analysis
3. Interviewing to find out problematic areas
4. Combination schemes
All of these reduction methods are independent. No one method is bet-
ter than the other. One method may be used in conjunction with another
one. It raises confidence when different prioritization schemes yield similar
conclusions.
We will discuss these techniques now.
Problem                                  Probability of     Impact of    Risk exposure
ID       Potential problem (ri)          occurrence (li)    risk (xi)    = li * xi
A        Loss of power                   1                  10           10
B        Corrupt file header             2                  1            2
C        Unauthorized access             6                  8            48
D        Databases not synchronized      3                  5            15
E        Unclear user documentation      9                  1            9
F        Lost sales                      1                  8            8
G        Slow throughput                 5                  3            15
:        :                               :                  :            :
FIGURE 6.1 Risk Analysis Table (RAT).
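A minimal sketch (not from the text) of how the risk exposure column of Figure 6.1 can be computed and used to order the problems, highest exposure first; the numbers are simply those of the table:

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char id;
    const char *problem;
    int probability;   /* li */
    int impact;        /* xi */
} Risk;

static int by_exposure_desc(const void *p, const void *q)
{
    const Risk *a = (const Risk *)p, *b = (const Risk *)q;
    return (b->probability * b->impact) - (a->probability * a->impact);
}

int main(void)
{
    Risk risks[] = {
        { 'A', "Loss of power",              1, 10 },
        { 'B', "Corrupt file header",        2,  1 },
        { 'C', "Unauthorized access",        6,  8 },
        { 'D', "Databases not synchronized", 3,  5 },
        { 'E', "Unclear user documentation", 9,  1 },
        { 'F', "Lost sales",                 1,  8 },
        { 'G', "Slow throughput",            5,  3 },
    };
    int n = (int)(sizeof risks / sizeof risks[0]);

    qsort(risks, n, sizeof risks[0], by_exposure_desc);
    for (int i = 0; i < n; i++)
        printf("%c  exposure = %2d  (%s)\n", risks[i].id,
               risks[i].probability * risks[i].impact, risks[i].problem);
    return 0;
}

Sorting by exposure puts problem C first, which matches the intuition that high-probability, high-impact problems deserve the earliest testing attention.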
We can see from the graph of Figure 6.2 that a risk with high severity is deemed more important than a problem with high probability. Thus, all risks mapped in the upper-left quadrant fall into priority 2. For example, risk E, which has a high probability of occurrence but a low severity of impact, is put under priority 3.
Method II: For an entirely different application, we may swap the defini-
tions of priorities 2 and 3, as shown in Figure 6.3.
An organization favoring Figure 6.3 seeks to minimize the total number
of defects by focusing on problems with a high probability of occurrence.
Although dividing a risk matrix into quadrants is most common, testers can determine the thresholds using different types of boundaries based on application-specific needs.
Method III: Diagonal band prioritization scheme.
If severity and probability are given equal weight, i.e., if li = xi, then a diagonal band prioritization scheme may be more appropriate. This is shown in Figure 6.4.
This threshold pattern is a compromise for those who have difficulty in selecting between priority-2 and priority-3 in the quadrant scheme.
FIGURE 6.4 Method III.
by program changes from the rest of the code. The modules enclosed in the
firewall could be those that interact with the modified modules or those that
are direct ancestors or direct descendants of the modified modules.
The firewall concept is simple and easy to use, especially when the
change to a program is small. By retesting only the modules and interfaces
inside the firewall, the cost of regression integration testing can be reduced.
FIGURE 6.6
Test setup means the process by which AUT (application under test) is
placed in its intended or simulated environment and is ready to receive data
and output the required information. Test setup becomes more challenging
when we test embedded software like in ATMs, printers, mobiles, etc.
The sequence in which tests are input to an application is an important issue. Test sequencing is very important for applications that have an internal state and run continuously, for example, online banking software.
We then execute the test cases. Each test needs verification. This can
also be done automatically with the help of CASE tools. These tools com-
pare the expected and observed outputs. Some of the tools are:
a. Test Tube (by AT&T Bell Labs.) in 1994: This tool can do
selective retesting of functions. It supports C.
b. Echelon (by Microsoft) in 2002: No selective retesting but does
test prioritization. It uses basic blocks to test. It supports C and
binary languages.
c. ATACLx Suds (by Telcordia Technologies) in 1992: It does selective retesting. It allows test prioritization and minimization. It does control/data flow coverage. It also supports C.
Static slicing may lead to an unduly large program slice. So, Korel and Laski
proposed a method for obtaining dynamic slices from program executions.
They used a method to extract executable and smaller slices and to allow
more precise handling of arrays and other structures. So, we discuss dynamic
slicing.
Let “P” be the program under test and “t” be a test case against which P
has been executed. Let “l” be a location in P where variable v is used. Now,
the dynamic slice of P with respect to “t” and “v” is the set of statements in P
that lie in trace (t) and did affect the value of “v” at “l.” So, the dynamic slice
is empty if location “l” was not traversed during this execution. Please note
that the notion of a dynamic slice grew out of that of a static slice based on
program “P” and not on its execution.
Let us solve an example now.
EXAMPLE 6.1. Consider the following program:
1. main ( ) {
2. int p, q, r, z;
3. z = 0;
4. read (p, q, r);
5. if (p < q)
6. z = 1; //modified z
7. if (r < 1)
8. x = 2;
9. output (z);
10. end
11. }
Test case (t1): <p = 1, q = 3, r = 2>. What will be
the dynamic slice of P with respect to variable “z” at line
9? What will be its static slice? What can you infer? If
t2: <p = 1, q = 0, r = 0> then what will be dynamic and FIGURE 6.7
static slices?
SOLUTION. Let us draw its flow graph first shown in Figure 6.7.
∴ Dynamic slice (P) with respect to variable z at line 9 is td = <4, 5, 7, 8>
Static slice, ts = <3, 4, 5, 6, 7, 8>
NOTE: The dynamic slice for any variable is generally smaller than the corresponding static slice.
NOTE: The dynamic slice contains all statements in trace (t) that had an effect on the program output.
Inferences made:
1. A dynamic slice can be constructed based on any program variable
that is used at some location in P, the program that is being modified.
2. Some programs may have several locations and variables of interest at which to compute the dynamic slice. In that case, we need to compute slices of all such variables at their corresponding locations and then take the union of all slices to create a combined dynamic slice. This approach is useful for regression testing of relatively small components.
3. If a program is large then a tester needs to find out the critical loca-
tions that contain one or more variables of interest. Then, we can
build dynamic slices on these variables.
SUMMARY
Regression testing is used to confirm that fixed bugs have, in fact, been fixed
and that new bugs have not been introduced in the process and that features
that were proven correctly functional are intact. Depending on the size of a
project, cycles of regression testing may be performed once per milestone
or once per build. Some bug regression testing may also be performed dur-
ing each acceptance test cycle, focusing on only the most important bugs.
Regression tests can be automated.
ANSWERS
REVIEW QUESTIONS
7
Levels of Testing
Inside this Chapter:
7.0. Introduction
7.1. Unit, Integration, System, and Acceptance Testing Relationship
7.2. Integration Testing
7.0. INTRODUCTION
When we talk of levels of testing, we are actually talking of three levels of testing:
1. Unit testing
2. Integration testing
3. System testing
The three levels of testing are shown in Figure 7.1.
FIGURE 7.2
TEST is one of the CASE tools for unit testing (from Parasoft) that automatically tests classes written in the MS .NET framework. The tester need not write a single test or a stub. There are tools which help to organize and execute test suites at the command line, API, or protocol level. Some examples of such tools are:
FIGURE 7.4
It begins with the main program, i.e., the root of the tree. Any lower-level unit that is called by the main program appears as a "stub." A stub is a piece of throw-away code that emulates a called unit. Generally, testers have to develop the stubs, and some imagination is required. So, we draw the tree shown in Figure 7.5, where "M" is the main program and "S" represents a stub. From the figure, we find that:
Number of Stubs Required = (Number of Nodes – 1)
For example, a decomposition tree with five nodes needs four stubs.
Once all of the stubs for the main program have been provided, we test the main program as if it were a standalone unit.
FIGURE 7.5 Stubs.
7.2.2.4. Big-bang integration
Instead of integrating component by component and testing, this approach
waits until all the components arrive and one round of integration testing is
done. This is known as big-bang integration. It reduces testing effort and
removes duplication in testing for the multi-step component integrations.
Big-bang integration is ideal for a product where the interfaces are stable and have relatively few defects.
7.2.3.2. Neighborhood Integration
The neighborhood of a node in a graph is the set of nodes that are one edge
away from the given node. In a directed graph, this includes all of the imme-
diate predecessor nodes and all of the immediate successor nodes. Please
note that these correspond to the set of stubs and drivers of the node.
For example, for node-16, the neighborhood consists of nodes 9, 10, and 12 as immediate successors and node-1 as the immediate predecessor.
We can always compute the number of neighborhoods for a given call graph. Each interior node will have one neighborhood, plus one extra in case leaf nodes are connected directly to the root node.
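A minimal sketch (not from the text) of computing a node's neighborhood from a call graph; the adjacency matrix below is a hypothetical call graph, not the one in the book's figure:

#include <stdio.h>

#define N 5

int main(void)
{
    /* calls[i][j] == 1 means unit i calls unit j */
    int calls[N][N] = {
        { 0, 1, 1, 0, 0 },
        { 0, 0, 0, 1, 0 },
        { 0, 0, 0, 1, 1 },
        { 0, 0, 0, 0, 0 },
        { 0, 0, 0, 0, 0 }
    };
    int node = 2;   /* compute the neighborhood of unit 2 */

    printf("Neighborhood of unit %d:", node);
    for (int i = 0; i < N; i++) {
        if (calls[i][node]) printf(" %d (predecessor, would otherwise need a driver)", i);
        if (calls[node][i]) printf(" %d (successor, would otherwise need a stub)", i);
    }
    printf("\n");
    return 0;
}

For unit 2 the neighborhood is {0, 3, 4}, i.e., exactly the units whose stubs and drivers a neighborhood integration session replaces.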
In module-A, nodes 1 and 5 are source nodes and nodes 4 and 6 are sink
nodes. Similarly, in module-B, nodes 1 and 3 are source nodes and nodes 2
and 4 are sink nodes. Module-C has a single source node 1 and a single sink
node, 5. This can be shown as follows:
7.2.5. System Testing
System testing focuses on a complete, integrated system to evaluate compli-
ance with specified requirements. Tests are made on characteristics that are
only present when the entire system is run.
The system test team is normally independent of the team that did the component and integration testing. The system test team generally reports to a manager other than the product manager to avoid conflicts and to provide freedom to individuals during system testing. Testing the product with an independent perspective and combining that with the perspective of the customer makes system testing unique, different, and effective.
The behavior of the complete product is verified during system testing.
Tests that refer to multiple modules, programs, and functionality are included
in system testing. This task is critical as it is wrong to believe that individually
tested components will work together when they are put together.
System testing is the last chance for the test team to find any leftover
defects before the product is handed over to the customer.
System testing strives to always achieve a balance between the objective
of finding defects and the objective of building confidence in the product
prior to release.
The analysis of defects and their classification into various categories (called impact analysis) also gives an idea about the kind of defects that
will be found by the customer after release. If the risk of the customers
getting exposed to the defects is high, then the defects are fixed before the
release or else the product is released as such. This information helps in
planning some activities such as providing workarounds, documentation on
alternative approaches, and so on. Hence, system testing helps in reducing
the risk of releasing a product.
System testing is highly complementary to other phases of testing. The
component and integration test phases are conducted taking inputs from
functional specification and design. The main focus during these testing
phases are technology and product implementation. On the other hand, cus-
tomer scenarios and usage patterns serve as the basis for system testing.
the test result cannot be taken as pass. Either the product or the non func-
tional testing process needs to be fixed here.
Non functional testing requires understanding the product behavior,
design, architecture, and also knowing what the competition provides. It also
requires analytical and statistical skills as the large amount of data generated
requires careful analysis. Failures in non functional testing affect the design
and architecture much more than the product code. Because non functional
testing is not repetitive in nature and requires a stable product, it is per-
formed in the system testing phase.
The differences listed in the table above are just guidelines, not dogmatic rules.
Because both functional and non functional aspects are being tested in the system testing phase, the question that arises is: what is the ratio of test cases or effort required for the mix of these two types of testing? Because functional testing is a focus area starting from the unit testing phase while non functional aspects get tested only in the system testing phase, it is a good idea for a majority of the system testing effort to be focused on the non functional aspects. A 70%–30% ratio between non functional and functional testing can be considered good, and a 50%–50% ratio is a good starting point. However, this is only a guideline, and the right ratio depends more on the context, type of release, requirements, and products.
test may be performed in the next phase. So, the guideline is—“A test case
moved from a later phase to an earlier phase is a better option than delaying
a test case from an earlier phase to a later phase, as the purpose of testing is
to find defects as early as possible.” This has to be done after completing all
tests meant for the current phase, without diluting the tests of the current
phase.
We are now in a position to discuss various functional system testing
techniques in detail. They are discussed one by one.
In this method of system testing, the test cases are developed and
checked against the design and architecture to see whether they are actual
product-level test cases. This technique helps in validating the product
features that are written based on customer scenarios and verifying them
using product implementation.
If there is a test case that is a customer scenario but failed validation
using this technique, then it is moved to the component or integration test-
ing phase. Because functional testing is performed at various test phases, it is
important to reject the test cases and move them to an earlier phase to catch
defects early and avoid any major surprise at later phases.
We now list certain guidelines that are used to reject test cases for system
functional testing. They are:
1. Is this test case focusing on code logic, data structures, and unit of
the product?
If yes, then it belongs to unit testing.
2. Is this specified in the functional specification of any component?
If yes, then it belongs to component testing.
bank service needs a prompt reply. Some mail can be given automated mail
replies also. Hence, the terminology feature of the product should call the
e-mail appropriately as a claim or a transaction and also associate the profile
and properties in a way a particular business vertical works.
Syndication: Not all the work needed for business verticals is done by product development organizations alone. Solution integrators and service providers also pay a license fee to a product organization and sell the products and solutions using their own name and image. In this case, the product name, company name, technology names, and copyrights may belong to the latter parties or associations, and the former would like to change the names in the product. A product should provide features for such syndication, and they are tested as a part of BVT.
Please note that in stage-1, the recorder intercepts the user and the
live system to record all transactions. All the recorded transactions from the
live system are then played back on the product under test under the super-
vision of the test engineer (as shown by dotted lines). In stage-2, the test
engineer records all transactions using a recorder and other methods and
plays back on the old live system (as shown again by dotted lines). So, the
overall stages are:
Sending the product too late may mean too little time for beta defect fixes, which defeats the purpose of beta testing. So, the late integration testing phase and the early system testing phase are the ideal time for starting a beta program.
We send the defect fixes to the customers as soon as problems are reported, and all necessary care has to be taken to ensure the fixes meet the requirements of the customer.
How many beta customers should be chosen?
If the number chosen is too few, then the product may not get a sufficient diversity of test scenarios and test cases.
If too many beta customers are chosen, then the engineering organiza-
tion may not be able to cope with fixing the reported defects in time. Thus,
the number of beta customers should be a delicate balance between pro-
viding a diversity of product usage scenarios and the manageability of being
able to handle their reported defects effectively.
Finally, the success of a beta program depends heavily on the willingness
of the beta customers to exercise the product in various ways.
There are many contractual and legal requirements for a product. Failing
to meet these may result in business loss and bring legal action against the
organization and its senior management.
The terms certification, standards, and compliance testing are used
interchangeably. There is nothing wrong in the usage of terms as long as
the objective of testing is met. For example, a certifying agency helping an
organization meet standards can be called both certification testing and stan-
dards testing.
criteria can be developed for a set of parameters and for various types of non functional tests.
For example, a test to find out how many client-nodes can simultaneously log into the server. Failures during a scalability test include the system not responding or the system crashing. A product not able to respond to 100 concurrent users while it is supposed to serve 200 users simultaneously is a failure. For a given configuration, the following template may be used (one scalable parameter shown in it, for example, is the number of records, varying from 10 to 100 thousand).
NOTE: These tools help identify the areas of code not yet exercised after performing functional tests.
NOTE: The reliability of a product should not be confused with reliability testing.
deliberately to simulate the resource crunch and to find out its behavior. It is
expected to gracefully degrade on increasing the load but the system is not
expected to crash at any point of time during stress testing.
It helps in understanding how the system can behave under extreme and
realistic situations like insufficient memory, inadequate hardware, etc. Sys-
tem resources upon being exhausted may cause such situations. This helps to
know the conditions under which these tests fail so that the maximum limits,
in terms of simultaneous users, search criteria, large number of transactions,
and so on can be known.
NOTE: Both spike and bounce tests determine how well the system behaves when sudden changes of load occur.
Two spikes together form a bounce test scenario. Then, the load increases
into the stress area to find the system limits. These load spikes occur sud-
denly on recovery from a system failure.
There are differences between reliability and stress testing. Reliability
testing is performed by keeping a constant load condition until the test case
is completed. The load is increased only in the next iteration to the test case.
In stress testing, the load is generally increased through various means such as increasing the number of clients, users, and transactions until and beyond the point at which resources are completely utilized. When the load keeps on
increasing, the product reaches a stress point when some of the transactions
start failing due to resources not being available. The failure rate may go up
beyond this point. To continue the stress testing, the load is slightly reduced
below this stress point to see whether the product recovers and whether the
failure rate decreases appropriately. This exercise of increasing/decreasing
the load is performed two or three times to check for consistency in behavior
and expectations (see Figure 7.15).
FIGURE 7.15
Sometimes, the product may not recover immediately when the load is
decreased. There are several reasons for this. Some of the reasons are
1. Some transactions may be in the wait queue, delaying the recovery.
2. Some rejected transactions may need to be purged, delaying the
recovery.
3. Due to failures, some clean-up operations may be needed by the
product, delaying the recovery.
4. Certain data structures may have gotten corrupted and may perma-
nently prevent recovery from stress point.
We can show stress testing with variable load in Figure 7.15.
Another factor that differentiates stress testing from reliability testing is
mixed operations/tests. Numerous tests of various types run on the system in
stress testing. However, the tests that are run on the system to create stress
points need to be closer to real-life scenarios.
3. The operations that generate the amount of load needed are planned
and executed for stress testing.
4. Tests that stress the system with random inputs (like number of
users, size of data, etc.) at random instances and random magnitude
are selected and executed as part of stress testing.
Defects that emerge from stress testing are usually not found from any
other testing. Defects like memory leaks are easy to detect but difficult
to analyze due to varying load and different types/ mix of tests executed.
Hence, stress tests are normally performed after reliability testing. To detect
stress-related errors, tests need to be repeated many times so that resource
usage is maximized and significant errors can be noticed. This testing helps
in finding out concurrency and synchronization issues like deadlocks, thread
leaks, and other synchronization problems.
TIPS Select those test cases that provide end-to-end functionality and run them.
7.2.5.6. Acceptance testing
It is a phase, after system testing, that is performed by the customers. The customer defines a set of test cases that will be executed to qualify and accept the product. These test cases, executed by the customers, are normally small in number. They are not written with the intention of finding defects. Detailed testing is already over in the component, integration, and system testing phases prior to product delivery to the customer. Acceptance test cases are developed by both the customers and the product organization. Acceptance test cases are black-box type test cases. They are written to execute near real-life scenarios. They are used to verify the functional and non functional aspects of the system as well. If a product fails the acceptance test, it may be rejected, which may mean financial loss or rework of the product involving effort and time.
A user acceptance test is:
• A chance to completely test the software.
• A chance to completely test business processes.
• A condensed version of a system.
• A comparison of actual test results against expected results.
• A discussion forum to evaluate the process.
The main objectives are as follows:
• Validate system set-up for transactions and user access.
• Confirm the use of the system in performing business processes.
• Verify performance on business critical functions.
• Confirm integrity of converted and additional data.
The project team will be responsible for coordinating the preparation
of all test cases and the acceptance test group will be responsible for the
execution of all test cases.
4. Tests that verify the basic existing behavior of the product are
included.
5. When the product undergoes modifications or changes, the accept-
ance test cases focus on verifying the new features.
6. Some non functional tests are included and executed as part of
acceptance testing.
7. Tests that are written to check if the product complies with certain
legal obligations are included in the acceptance test criteria.
8. Test cases that make use of customer real-life data are included for
acceptance testing.
FIGURE 7.16
We shall discuss each of these tests one by one.
to the command level and then apply test cases to check that each command works as intended. No attention is paid to the combination of these basic
commands, the context of the feature that is formed by these combined
commands, or the end result of the overall feature. For example, FAST for
a File/SaveAs menu command checks that the SaveAs dialog box displays.
However, it does not validate that the overall file-saving feature works nor
does it validate the integrity of saved files.
Typically, errors encountered during the execution of FAST are
reported through the standard issue-tracking process. Suspending testing
during FAST is not recommended. Note that it depends on the organization
for which you work. Each might have different rules in terms of which test
cases should belong to RAT versus FAST and when to suspend testing or to
reject a build.
7.2.5.7.1. Introduction
In this internet era, when more and more of business is transacted online,
there is a big and understandable expectation that all applications will run as
fast as possible. When applications run fast, a system can fulfill the business
requirements quickly and put it in a position to expand its business. A system
or product that is not able to service business transactions due to its slow
performance is a big loss for the product organization, its customers, and its
customer’s customer. For example, it is estimated that 40% of online mar-
keting for consumer goods in the US happens in November and December.
Slowness or lack of response during this period may result in losses of several
million dollars to organizations.
In another example, when examination results are published on the
Internet, several hundreds of thousands of people access the educational
websites within a very short period. If a given website takes a long time to
complete the request or takes more time to display the pages, it may mean a
lost business opportunity, as the people may go to other websites to find the
results. Hence, performance is a basic requirement for any product and is
fast becoming a subject of great interest in the testing community.
Performance testing involves an extensive planning effort for the defi-
nition and simulation of workload. It also involves the analysis of collected
data throughout the execution phase. Performance testing considers such
key concerns as:
� Will the system be able to handle increases in web traffic with-
out compromising system response time, security, reliability, and
accuracy?
� At what point will the performance degrade and which components
will be responsible for the degradation?
� What impact will performance degradation have on company sales
and technical support costs?
Each of these preceding concerns requires that measurements be
applied to a model of the system under test. System attributes, such as
response time, can be evaluated as various workload scenarios are applied
to the model. Conclusions can be drawn based on the collected data. For
example, when the number of concurrent users reaches X, the response time
equals Y. Therefore, the system cannot support more than X number of con-
current users. However, the complication is that even when the X number
of concurrent users does not change, the Y value may vary due to differing
user activities. For example, 1000 concurrent users requesting a 2K HTML
page will result in a limited range of response times whereas response times
may vary dramatically if the same 1000 concurrent users simultaneously
submit purchase transactions that require significant server-side processing.
Designing a valid workload model that accurately reflects such real-world
usage is no simple task.
3. Latency
4. Tuning
5. Benchmarking
6. Capacity planning
We shall discuss these factors one by one.
1. Throughput. The capability of the system or the product in handling
multiple transactions is determined by a factor called throughput.
It represents the number of requests/ business transactions pro-
cessed by the product in a specified time duration. It is very import-
ant to understand that the throughput, i.e., the number of transactions
serviced by the product per unit time varies according to the load the
product is put under. This is shown in Figure 7.17.
From this graph, it is clear that the load to the product can be
increased by increasing the number of users or by increasing the num-
ber of concurrent operations of the product. Please note that initially
the throughput keeps increasing as the user load increases. This is the
ideal situation for any product and indicates that the product is capable
of delivering more when there are more users trying to use the product.
Beyond certain user load conditions (after the bend), the throughput
comes down. This is the period when the users of the system notice a
lack of satisfactory response and the system starts taking more time to
complete business transactions. The “optimum throughput” is repre-
sented by the saturation point and is one that represents the maximum
throughput for the product.
2. Response time. It is defined as the delay between the point of request and the first response from the product. In a typical client-server environment, this delay is made up of delays on the network and delays within the product itself. Thus, from the figure above, we can compute both the latency and the response time as follows:
Network latency = (N1 + N2 + N3 + N4)
Product latency = (A1 + A2 + A3)
Actual response time = (Network latency + Product latency)
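A minimal sketch (not from the text): treating N1–N4 as the network delays and A1–A3 as the product delays, all with hypothetical millisecond values, the two latencies and the response time combine exactly as the formulas above state:

#include <stdio.h>

int main(void)
{
    double N[4] = { 12.0, 8.0, 10.0, 9.0 };   /* hypothetical network delays N1..N4 (ms) */
    double A[3] = { 25.0, 40.0, 15.0 };       /* hypothetical product delays A1..A3 (ms) */

    double network_latency = N[0] + N[1] + N[2] + N[3];
    double product_latency = A[0] + A[1] + A[2];
    double response_time   = network_latency + product_latency;

    printf("Network latency = %.1f ms\n", network_latency);   /* 39.0  */
    printf("Product latency = %.1f ms\n", product_latency);   /* 80.0  */
    printf("Response time   = %.1f ms\n", response_time);     /* 119.0 */
    return 0;
}

With these numbers the product accounts for most of the response time, so improving A1–A3 is where tuning effort would pay off.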
The discussion about latency is very important for performance, as any improvement made in the product can only reduce the response time through improvements in A1, A2, and A3. If the network latency is large relative to the product latency, and that is what is affecting the response time, then there is little point in improving the product performance alone. In such a case, it will be worthwhile to improve the network infrastructure.
In those cases where network latency is too large or cannot be improved,
the product can use intelligent approaches of caching and sending multi-
ple requests in one packet and receiving responses as a bunch.
4. Tuning. Tuning is the procedure by which the product performance is
enhanced by setting different values to the parameters (variables) of
the product, operating system, and other components. Tuning improves
the product performance without having to touch the source code of
the product. Each product may have certain parameters or variables
that can be set at run time to gain optimum performance. The default
values that are assumed by such product parameters may not always give
optimum performance for a particular deployment. This necessitates
the need for changing the values of parameters or variables to suit the
deployment or a particular configuration. During performance testing,
tuning of the parameters is an important activity that needs to be done
before collecting numbers.
5. Benchmarking. It is defined as the process of comparing the throughput
and response time of the product to those of the competitive products.
No two products are the same in features, cost, and functionality. Hence,
it is not easy to decide which parameters must be compared across two
products. A careful analysis is needed to chalk out the list of transactions
to be compared across products. This produces a meaningful analysis to
improve the performance of the product with respect to competition.
6. Capacity planning. The most important factor that affects performance
testing is the availability of resources. A right kind of hardware and
software configuration is needed to derive the best results from
FIGURE 7.19
a. A performance testing requirement should be testable. All features/functionality cannot be performance tested. For example, a feature involving manual intervention cannot be performance tested, as the results depend on how fast a user responds with inputs to the product.
b. A performance testing requirement needs to clearly state what factors need to be measured and improved.
c. A performance testing requirement needs to be associated with the actual number or percentage of improvement that is desired.
There are two types of requirements that performance testing focuses on:
1. Generic requirements.
2. Specific requirements.
1. Generic requirements are those that are common across all
products in the product domain area. All products in that area are
expected to meet those performance expectations.
Examples are time taken to load a page, initial response when a
mouse is clicked, and time taken to navigate between screens.
2. Specific requirements are those that depend on the implementation of a particular product and differ from one product to another in a given domain.
An example is the time taken to withdraw cash from an ATM.
During performance testing both generic and specific require-
ments need to be tested.
See the table on the next page for examples of performance test requirements.
testing for 10 concurrent operations may be several times less than that of testing for 10,000 operations. Hence, a methodical approach is to gradually increase the concurrent operations, say to 10, 100, 1000, 10,000, and so on, rather than attempting 10,000 concurrent operations in the first iteration itself. The test case documentation should clearly reflect this approach.
Performance testing is a tedious process involving time and effort. All
test cases of performance testing are assigned different priorities. Higher
priority test cases are to be executed first. Priority may be absolute (given by
customers) or may be relative (given by test team). While executing the test
cases, the absolute and relative priorities are looked at and the test cases are
sequenced accordingly.
The performance test case is repeated for each row in this table and
factors such as the response time and throughput are recorded and analyzed.
After the execution of performance test cases, various data points are
collected and the graphs are plotted. For example, the response time graph
is shown below:
Plotting the data helps in making an easy and quick analysis which is
difficult with only raw data.
FIGURE 7.21
SUMMARY
We can say that we start with unit or module testing. Then we go in for integration testing, which is then followed by system testing. Then we go in for acceptance testing and regression testing. Acceptance testing may involve alpha and beta testing, while regression testing is done during maintenance.
System testing can comprise "n" different tests. That is, it could mean:
1. End-to-end integration testing
2. User interface testing
3. Load testing in terms of
a. Volume/size
b. Number of simultaneous users
c. Transactions per minute/second (TPM/TPS)
4. Stress testing
5. Testing of availability (24 × 7)
Performance testing is a type of testing that is easy to understand but
difficult to perform due to the amount of information and effort needed.
ANSWERS
1. a. 2. a. 3. b. 4. b.
5. d. 6. b. 7. d. 8. b.
9. c. 10. a.
Weight Meaning
+2 Must test, mission/safety critical
+1 Essential functionality, necessary for robust operation
+0 All other scenarios
Q. 2. How will the operational profile maximize system reliability for a given testing budget?
Ans. Testing driven by an operational profile is very efficient because
it identifies failures on average, in order of how often they occur.
This approach rapidly increases reliability—reduces failure inten-
sity per unit of execution time because the failures that occur most
frequently are caused by the faulty operations used most frequently.
According to J.D. Musa, users will also detect failures in order of
their frequency, if they have not already been found in the test.
Q. 3. What are three general software performance measurements?
Ans. The three general software performance measurements are as follows:
a. Throughput: The number of tasks completed per unit time.
It indicates how much of work has been done in an interval.
It does not indicate what is happening to a single task.
Example: Transactions per second.
b. Response time: The time elapsed between input arrival and
output delivery. Here, average and worst-case values are of
interest.
Example: Click to display delay.
c. Utilization: The percentage of time a component is busy. It can
be applied to processor resources like CPU, channels, storage,
or software resources like file-server, transaction dispatcher, etc.
Example: Server utilization.
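A minimal sketch (not from the text) that turns these three definitions into arithmetic; all of the measured values are hypothetical:

#include <stdio.h>

int main(void)
{
    /* Throughput: tasks completed per unit time */
    double tasks_completed = 1200.0, elapsed_seconds = 60.0;
    printf("Throughput    = %.1f transactions/second\n",
           tasks_completed / elapsed_seconds);

    /* Response time: delay between input arrival and output delivery */
    double input_arrival_ms = 100.0, output_delivery_ms = 340.0;
    printf("Response time = %.1f ms\n", output_delivery_ms - input_arrival_ms);

    /* Utilization: percentage of time a component is busy */
    double busy_seconds = 45.0;
    printf("Utilization   = %.1f%%\n", 100.0 * busy_seconds / elapsed_seconds);
    return 0;
}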
Q. 4. What do you understand by system testing coverage?
Ans. System testing is requirements driven. The TC coverage metric
reflects the impact of requirements. [IEEE 89a]
FIGURE 7.22
Q. 8. Consider a hypothetical "online railway reservation system." Write a suitable scope statement for the system. Write all assumptions and identify two test cases for each of the following:
i. Acceptance testing
ii. GUI testing
iii. Usability and accessibility testing
iv. Ad hoc testing
Document the test cases.
REVIEW QUESTIONS
1. Differentiate between alpha and beta testing?
2. Explain the following: Unit and Integration testing?
3. a. What would be the test objective for unit testing? What are the quality
measurements to ensure that unit testing is complete?
b. Put the following in order and explain in brief:
i. System testing
ii. Acceptance testing
iii. Unit testing
iv. Integration testing
4. Explain integration and system testing.
5. Write a short paragraph on levels of software testing.
16. a. Explain how you test the integration of two code fragments with suitable examples.
b. What are the various kinds of tests we apply in system testing?
Explain.
17. Assume that you have to build a real-time multiperson computer game. What kinds of testing do you suggest or think are suitable? Give a brief outline and justification for any four kinds of tests.
18. Discuss some methods of integration testing with examples.
19. a. What is the objective of unit and integration testing? Discuss with an
example code fragment.
b. You are a tester for testing a large system. The system data model is
very large with many attributes and there are many interdependencies
within the fields. What steps would you use to test the system and
what are the effects of the steps you have taken on the test plan?
20. What is the importance of stubs? Give an example.
21. a. Explain BVT technique.
b. Define MM-path graph. Explain through an example.
c. Give the integration order of a given call graph for bottom-up testing.
d. Who performs offline deployment testing? At which level of testing is it done?
e. What is the importance of drivers? Explain through an example.
22. Which node is known as the transfer node of a graph?
23. a. Describe all methods of integration testing.
b. Explain different types of acceptance testing.
24. Differentiate between integration testing and system testing.
25. a. What are the pros and cons of decomposition-based techniques?
b. Explain call graph and path-based integration testing. Write their advantages and disadvantages.
c. Define acceptance testing.
d. Write a short paragraph on system testing.
8
Object-Oriented Testing
Inside this Chapter:
8.0. Basic Unit for Testing, Inheritance, and Testing
8.1. Basic Concepts of State Machines
8.2. Testing Object-Oriented Systems
8.3. Heuristics for Class Testing
8.4. Levels of Object-Oriented Testing
8.5. Unit Testing a Class
8.6. Integration Testing of Classes
8.7. System Testing (with Case Study)
8.8. Regression and Acceptance Testing
8.9. Managing the Test Process
8.10. Design for Testability (DFT)
8.11. GUI Testing
8.12. Comparison of Conventional and Object-Oriented Testing
8.13. Testing Using Orthogonal Arrays
8.14. Test Execution Issues
8.15. Case Study—Currency Converter Application
The techniques used for testing object-oriented systems are quite similar
to those that have been discussed in previous chapters. The goal is to
provide some test design paradigms that help us to perform Object-Oriented
Testing (OOT).
Case I: Extension
Suppose we change some methods in class A. Clearly,
We need to retest the changed methods.
We need to retest interaction among changed and unchanged methods.
We need to retest unchanged methods, if data flow exists between state-
ments, in changed and unchanged methods.
But, what about unchanged subclass B, which inherits from A? By anti-
composition:
“The changed methods from A also need to be exercised in the unique context
of the subclass.”
Here, we will not find the error unless we retest both number (DC) and
the inherited, previously tested, init balance( ) member functions.
FIGURE 8.2
Later, we specialize cert of deposit to have its own rollover ( ). That is,
SOLVED EXAMPLES
EXAMPLE 8.1. Consider the following code for the shape class hierarchy.
class Shape {
private:
    Point reference_point;
public:
    void put_reference_point (Point);
    Point get_reference_point ( );
    void move_to (Point);
    void erase ( );
    virtual void draw ( ) = 0;
    virtual float area ( );
    Shape (Point);
    Shape ( );
};

class Triangle : public Shape {
private:
    Point vertex1;
    Point vertex2;
    Point vertex3;
public:
    Point get_vertex1 ( );
    Point get_vertex2 ( );
    Point get_vertex3 ( );
    void set_vertex1 (Point);
    void set_vertex2 (Point);
    void set_vertex3 (Point);
    void draw ( );
    float area ( );
    Triangle ( );
    Triangle (Point, Point, Point);
};

class EquiTriangle : public Triangle
{
public:
    float area ( );
    EquiTriangle ( );
    EquiTriangle (Point, Point, Point);
};
What kind of testing is required for this class hierarchy?
SOLUTION. We can use method-specific retesting and test case reuse for the shape class hierarchy.
Let D = Develop and execute a new test suite
R = Reuse and execute the superclass test suite
E = Extend and execute the superclass test suite
S = Skip, the superclass's tests are adequate
N = Not testable
Then, we get the following table that tells which type of testing is to be performed.
NA = Not Applicable
State-Based Behavior
A state machine accepts only certain sequences of input and rejects all
others. Each accepted input/state pair results in a specific output. State-
based behavior means that the same input is not necessarily always accepted
and when accepted, does not necessarily produce the same output.
This simple mechanism can perform very complex control tasks.
Examples of sequentially constrained behavior include:
a. GUI interface control in MS-Windows
b. Modem controller
c. Device drivers with retry, restart, or recovery
d. Command syntax parser
e. Long-lived database transactions
f. Anything designed by a state model
The central idea is that sequence matters. Object behavior can be r eadily
modeled by a state machine.
A state machine is a model that describes behavior of the system under
test in terms of events (inputs), states, and actions (outputs).
FIGURE 8.4
A Water-Tank Example—STD
We draw STD of a water tank system with a valve. This valve can be in one
of the two states—shut or open. So, we have the following:
Mealy/Moore Machines
There are two main variants of state models (named for their developers).
Moore Machine:
Transitions do not have output.
An output action is associated with each state. States are active.
Mealy Machine:
Transitions have output.
No output action is associated with state. States are passive.
In software engineering models, the output action often represents the
activation of a process, program, method, or module.
Although the Mealy and Moore models are mathematically equivalent,
the Mealy type is preferred for software engineering.
A passive state can be precisely defined by its variables. When the same
output action is associated with several transitions, the Mealy machine pro-
vides a more compact representation.
Conditional/Guarded Transitions
Basic state models do not represent conditional transitions. This is remedied by allowing Boolean conditions on event or state variables.
Consider a state model for stack class. It has three states: empty, loaded,
and full.
We first draw the state machine model for a STACK class, without
guarded transitions. The initial state is “Empty.”
What is a guard? It is a predicate expression associated with an event.
Now, we draw another state machine model for STACK class, with guarded
transitions.
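A small sketch of what the guarded STACK model looks like in code; the capacity value and the method names are assumptions for illustration:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// States: Empty, Loaded, Full. The guards (size == cap - 1, size == 1)
// decide which transition a push/pop event fires.
class Stack {
public:
    explicit Stack(std::size_t capacity) : cap(capacity) {}

    enum class State { Empty, Loaded, Full };

    State state() const {
        if (items.empty())       return State::Empty;
        if (items.size() == cap) return State::Full;
        return State::Loaded;
    }

    bool push(int v) {
        if (state() == State::Full) return false;   // event rejected in Full
        items.push_back(v);                          // guard [size < cap] held
        return true;
    }

    bool pop(int& out) {
        if (state() == State::Empty) return false;  // event rejected in Empty
        out = items.back();
        items.pop_back();
        return true;
    }

private:
    std::size_t cap;
    std::vector<int> items;
};

int main() {
    Stack s(2);
    s.push(1);               // Empty  -> Loaded
    s.push(2);               // Loaded -> Full   (guard: size == cap - 1)
    std::cout << (s.push(3) ? "accepted" : "push rejected when Full") << "\n";
    int v;
    s.pop(v);                // Full   -> Loaded
    s.pop(v);                // Loaded -> Empty  (guard: size == 1)
    return 0;
}
```

The same push event fires Empty→Loaded, Loaded→Loaded, or Loaded→Full depending on which guard holds, which is exactly what the guarded transitions in the second model express.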
SOLUTION.
The game starts.
The player who presses the start button first gets the first serve. The
button press is modeled as the player-1 start and player-2 start events.
The current player serves and a volley follows. One of three things ends the volley:
If the server misses the ball, the server's opponent becomes the server.
If the server's opponent misses the ball, the server's score is incremented and the server gets another chance.
If the server's opponent misses the ball and the server's score is at game point, the server is declared the winner.
FIGURE 8.9
Properties of Statecharts
1. They use two types of state: group and basic.
2. Hierarchy is based on a set-theoretic formalism (hypergraphs).
3. Easy to represent concurrent or parallel states and processes.
Therefore, we can say that:
Statecharts = State diagrams + Depth + Orthogonality + Broadcast communication
A basic state model and its equivalent statechart are shown below:
(a) (b)
FIGURE 8.10
We have already discussed the STD. We now discuss its equivalent statechart.
In the Figure 8.10(b), we observe the following:
1. State D is a super state. It groups states A and C because they share
common transitions.
2. State A is the initial state.
3. Event f fires the transition AB or CB, depending on which state is active.
4. Event g fires AC but only if C is true (a conditional event).
5. Event h fires BC because C is marked as the default state.
6. The unlabelled transition inside of state D indicates that C is the default
state of D.
Statecharts can represent hierarchies of single-thread or concurrent state
machines.
Any state may contain substates.
The substate diagram may be shown on a separate sheet.
Decomposition rules are similar to those used for data flow diagrams (DFDs). Orthogonal superstates can represent several situations:
The interaction among states of separate classes (objects).
The non-interaction among states of separate processes that proceed independently, for example, "concurrent," "parallel," "multi-thread," or "asynchronous" execution.
Statecharts have been adapted for use in many OOA/OOD methodologies
including:
Booch OOD
Object modelling technique (OMT)
Object behavior analysis (OBA)
Fusion
Real-time object-oriented modeling (ROOM)
EXAMPLE 8.3. Consider a traffic light control system. The traffic light has
five states—red, yellow, green, flashing red, and OFF. The event, “power
on,” starts the system but does not turn on a light. The system does a self test
and if no faults are recognized, the no fault condition becomes true. When a
reset event occurs and the no fault condition holds, the red on event is gen-
erated. If a fault is raised in any state, the system raises a “Fault” event and
returns to the off state. Draw its
a. State transition diagram.
b. State chart.
The event reset fires the off → red on transition because red on is marked as the only default state within both superstates, on and cycling.
In a state chart, a state may be an aggregate of other states (a superstate)
or an atomic state.
In the basic state model, one transition arrow must exist for each transi-
tion having the same event, the same resultant state but different accept-
ing states. This may be represented in a statechart with a single transition
from a superstate.
Figure 8.12 shows the statechart for the traffic light system.
Concatenation
Concatenation involves the formation of a subclass that has no locally defined
features other than the minimum requirement of a class definition.
State Space
A subclass state space results from two factors:
FIGURE 8.13
TIPS Class hierarchies that do not meet these conditions are very likely to be
buggy.
Missing transition:
FIGURE 8.14
FIGURE 8.15
Missing action:
FIGURE 8.16
FIGURE 8.17
EXAMPLE 8.7. For a three player game, what conformance test suite will
you form? Derive the test cases.
SOLUTION.
TABLE 8.1 Conformance Test Suite for Three Player Game.
EXAMPLE 8.8. For a three player game, what sneak path test suite will you
form? Derive the test cases.
SOLUTION.
TABLE 8.2 Sneak Path Test Suite for Three Player Game.
3 if ((a == 2) || (x > 1))
4 x = x + 1;
5 cout << x;
}
We draw its control graph as it is our main tool for test case identification
shown in Figure 8.22.
Statement coverage for this C++ code: It simply requires that a test
suite cause every statement to be executed at least once.
We can get 100% C1 coverage for the abx method with one test case.
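A sketch of that single test case; lines 1–2 of abx( ) are not shown above, so they are reconstructed here to be consistent with the paths (1-2-3-4-5, 1-3-5) and the DC1.1 test values (a = 2, b = 0, x = 4, x′ = 3) used later in this section:

```cpp
#include <cassert>

// The abx() method, numbered as in the control graph of Figure 8.22.
void abx(int a, int b, int& x) {
    if ((a > 1) && (b == 0))      // line 1
        x = x / a;                // line 2
    if ((a == 2) || (x > 1))      // line 3
        x = x + 1;                // line 4
}                                 // line 5: output x

int main() {
    // One test case gives 100% statement (C1) coverage: it drives the
    // path 1-2-3-4-5, executing every statement at least once.
    int x = 4;
    abx(2, 0, x);                 // test case DC1.1: a = 2, b = 0, x = 4
    assert(x == 3);               // x/a = 2, then x + 1 = 3
    return 0;
}
```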
Predicate Testing
A predicate is the condition in a control statement: if, case, do while, do
until, or for. The evaluation of a predicate selects a segment of code to be
executed.
There are four levels of predicate coverage [Myers]:
Decision coverage
Condition coverage
Decision/condition coverage
Multiple condition coverage
Each of these subsumes C1 and provides greater fault detecting power.
There are some situations where predicate coverage does not subsume state-
ment coverage:
Methods with no decisions.
Methods with built-in exception handlers, for example, C++ try/throw/
catch.
Decision Coverage
We can improve on statement coverage by requiring that each decision
branch be taken at least once (at least one true and one false evaluation).
Either of the test suites below provides decision (C2) coverage for the abx
( ) method.
TABLE 8.3 Test Suite for Decision Coverage of abx ( ) Method (function).
Condition Coverage
Decision coverage does not require testing all possible outcomes of each condition. Condition coverage improves on this by requiring that each condition be evaluated as true or false at least once. There are four conditions in the abx ( ) method.
Either of the following test suites will force at least one evaluation of
every condition. They are given below:
Decision/Condition Coverage
Condition coverage does not require testing all possible branches. Decision/
condition coverage could improve on this by requiring that each condition be
evaluated as true or false at least once and each branch be taken at least once.
However, this may be infeasible due to a short-circuit Boolean evaluation.
Most programming languages evaluate compound predicates from left
to right, branching as soon as a sufficient Boolean result is obtained. This
allows statements like
if (a = 0) or (b/a > c) then .....
to handle a = 0 without causing a “divide-by-zero” exception. This can
prevent execution of compound conditions.
Test case    Path         Test values
                          a    b    x    x′
DC1.1        1-2-3-4-5    2    0    4    3
DC1.2        1-3-5        1    1    1    1
The following table shows how each condition is covered by the M test
suite (for the abx ( ) method).
a b x Test Case
>1 =0 dc M1.1
≠0 dc M1.2
≤1 =0 dc M1.3
Impossible due to short circuit
≠0 dc M1.4
Impossible due to short circuit
=2 dc >1 M1.1
dc ≤1 M1.2
≠2 dc >1 M1.3
dc ≤1 M1.4
So, there are 8 possible condition variants, and we are able to exercise all 8 with only 4 test cases.
For the same value of x, paths <A> <C> and <B> <D> are not possible.
Because the predicates could be merged, you may have found a fault or
at least a questionable piece of code.
4. Unstructured Code.
Acceptable: Exception handling, break.
Not acceptable: Anything else.
Test case derivation: We now have a graph model of the entire class.
We can identify all intra-class control paths. We can identify all intra-
class du paths for instance variables. We can apply all the preceding test
case techniques at the class level.
b. The C* metric: We know that the cyclomatic complexity V(G) of a graph G is given by e − n + 2. Similarly, for the FREE flow graph the class complexity is represented by C* or V*(G). It is the minimum number of intra-class control paths.
∴ C* = E − N + 2
where E = em + 2m and N = nm + ns, with
em = Total edges in all inserted subgraphs
m = Number of inserted subgraphs
nm = Total nodes in all inserted subgraphs
ns = Nodes in the state graph (states)
Thus, we have
C* = em + 2m − nm − ns + 2
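For instance (hypothetical numbers, not taken from Figure 8.24): if a class state model has ns = 4 states and m = 3 method subgraphs are inserted, contributing em = 10 edges and nm = 9 nodes in total, then C* = 10 + 2(3) − 9 − 4 + 2 = 5, i.e., at least five intra-class control paths must be exercised.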
FIGURE 8.24
We need to consider three main facets of a class and its methods to
develop responsibility test cases. They are:
i. Functional Analysis
What kind of function is used to transform method inputs into
outputs?
Are inputs externally or internally determined?
ii. Domain Analysis
What are valid and invalid message inputs, states, and message
outputs?
What values should we select for test cases?
iii. Behavior Analysis
Does the sequence of method activation matter or not?
When sequence matters, how can we select efficient testing
sequence?
We will discuss each of these one by one.
A Testable Function
Must be independently invocable and observable.
Is typically a single responsibility and the collaborations necessary to
carry it out.
Often corresponds to a specific user command or menu action.
Testable functions should be organized into a hierarchy
Follow the existing design.
Develop a test function hierarchy.
Testable functions should be small, “atomic” units of work.
A method is a typically testable function for a class-level test. A use case
is a typically testable function for a cluster-level or system test.
Step 2. Find values for the variable matrix such that the determinant of the entire matrix is not zero. This requires that
No row or column consists entirely of zeros.
An input domain and domain values are relatively easy to define. So, we have
a. Any input domain is the entire range of valid values for all external and
internal inputs to the method under test.
b. Private instance variables should be treated as input variables.
c. The domain must be modeled as a combinational function if there are
dependencies among several input variables.
An output domain may be more difficult to identify. So, we have
a. The output domain of an arithmetic function is the entire range of
values that can be computed by the function.
b. Output domains can be discontinuous–they may have “holes” or “cliffs.”
c. The output domain of a combination function is the entire range of
values for each action stub.
Next, we discuss a technique under domain analysis which is a type of
responsibility-based/ black-box testing method. This technique is popularly
known as equivalence class partitioning.
An equivalence class is a group of input values which require the same
response. A test case is written for members of each group and for non-
members. The test case designates the expected response.
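A minimal sketch of equivalence class partitioning in code; the method under test and its 10..500 valid range are assumptions chosen only to make the classes concrete:

```cpp
#include <cassert>
#include <string>

// Hypothetical method under test: accepts amounts in the range 10..500.
std::string withdraw(int amount) {
    if (amount < 10 || amount > 500) return "reject";
    return "dispense";
}

int main() {
    // One representative per equivalence class, plus one non-member each:
    assert(withdraw(200) == "dispense");   // valid class: 10 <= amount <= 500
    assert(withdraw(5)   == "reject");     // invalid class: amount < 10
    assert(withdraw(900) == "reject");     // invalid class: amount > 500
    return 0;
}
```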
FIGURE 8.25
Collection operation      Input size                                            Collection state   Expected result
1. Add                    Single element                                        empty              added
                          Single element                                        not empty          added
                          Single element                                        capacity − 1       added
                          Single element                                        full               reject
                          Several elements, sufficient to overflow              not empty          reject
                          Several elements                                      capacity − 1       reject
                          Null element                                          empty              no action
                          Null element                                          not empty          no action
2. Update/Replace by 1    Several elements, sufficient to overflow              not empty          reject
                          Several elements, sufficient to reach capacity        not empty          accept
                          Several elements, sufficient to reach capacity − 1    not empty          accept
                          Several elements, fewer than in updated collection    not empty          accept, check clean-up

Collection operation      Input          Element position   Collection state   Expected result
All operations            Single item    First              Not empty          Added
                          Single item    Last               Not empty          Added
Delete/Remove             Single item    dc                 Single element     Deleted
                          Single item    dc                 Empty              Reject
A normal value
The upper-bound
The upper-bound +1
Try formula verification tests with array elements initialized to unusual
data patterns.
All elements zero or null
All elements one
All elements same value
All elements maximum value, all bits on, etc.
All elements except one are zero, one, or max
We treat each sub-structure with a special role (header, pointer vector,
etc.), as a separate data structure.
The pair-wise operand test pattern may also be applied to operators with
array operands.
Relationship test patterns: Collection classes may implement relation-
ships (mapping) between two or more classes. We have various ways of
showing relationships. For example
Entity-relationship model:
FIGURE 8.26
FIGURE 8.27
FIGURE 8.28
Exception Testing
Exception handling (like Ada exceptions or C++ try/throw/catch) adds implicit paths. It is harder to write test cases to activate these paths. However, exception handling is often crucial for reliable operation and should be tested. Test cases are needed to force exceptions:
File errors (empty, overflow, missing)
I/O errors (device not ready, parity check)
Arithmetic over/under flows
Memory allocation
Task communication/creation
How can this testing be done?
1. Use patches and breakpoints: Zap the code or data to fake an error.
2. Use selective compilation: Insert exception-forcing code (e.g., divide-
by-zero) under control of conditional assembly, macro definition, etc.
3. Mistune: Cut down the available storage, disk space, etc. to 10% of
normal, for example, saturate the system with compute bound tasks.
This can force resource related exceptions.
4. Cripple: Remove, rename, disable, delete, or unplug necessary
resources.
5. Pollute: Selectively corrupt input data, files, or signals using a data zap
tool.
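A small sketch of forcing and checking an exception path in C++; the read_record( ) routine and the "empty file" condition are assumptions for illustration (std::vector::at does throw std::out_of_range on an invalid index):

```cpp
#include <iostream>
#include <stdexcept>
#include <vector>

// Hypothetical reading routine: throws if the requested index is invalid.
int read_record(const std::vector<int>& file, std::size_t index) {
    return file.at(index);            // throws std::out_of_range when out of bounds
}

int main() {
    std::vector<int> file;            // "empty file" error condition
    bool handler_reached = false;
    try {
        read_record(file, 0);         // force the exception path
    } catch (const std::out_of_range& e) {
        handler_reached = true;       // the implicit path we want to cover
        std::cout << "handled: " << e.what() << "\n";
    }
    // The test passes only if the exception handler was actually exercised.
    return handler_reached ? 0 : 1;
}
```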
Suspicion Testing
There are many situations that indicate additional testing may be valuable
[Hamlet]. Some of those situations are given below:
1. A module written by the least experienced programmer.
2. A module with a high failure rate in either the field or in development.
3. A module that failed an inspection and needed big changes at the last
minute.
4. A module that was subject to a late or large change order after most of
the coding was done.
5. A module about which a designer or programmer feels uneasy.
These situations don’t point to specific faults, but they may mean more extensive testing is warranted. For example, if n tests are needed for branch coverage, use 5n tests instead.
Error Guessing
Experience, hunches, or educated guesses can suggest good test cases.
There is no systematic procedure for guessing errors. According to Beizer,
“logic errors and fuzzy thinking are inversely proportional to the probability
of a path’s execution.”
For example, for a program that sorts a list, we could try:
An empty input list.
An input list with only one item.
A list where all entries have the same value.
A list that is already sorted.
We can try for weird paths:
Try to find the most tortuous, longest, strongest path from entry to exit.
Try “impossible” paths, and so on.
The idea is to find special cases that may have been overlooked by more
systematic techniques.
Historical Analysis
Metrics from past projects or previous releases may suggest possible trouble spots. We have already noted that the cyclomatic complexity (C) metric is a good predictor of faults.
Coupling and cohesion are also good fault predictors. Modules with high
coupling and low cohesion are 7 times more likely to have defects compared
to modules with low coupling and high cohesion.
Testing really occurs with the third view but we still have some problems.
For example, we cannot test abstract classes because they cannot be instanti-
ated. Also, if we are using fully flattened classes, we will need to “unflatten”
them to their original form when our unit testing is complete. If we do not
use fully flattened classes, in order to compile a class, we will need all of the
other classes above it in the inheritance tree. One can imagine the software
configuration management (SCM) implications of this requirement.
The class as a unit makes the most sense when little inheritance occurs
and classes have what we might call internal control complexity. The class
itself should have an “interesting” state-chart and there should be a fair
amount of internal messaging.
Users
The following questions can help to identify user categories:
Who?
Who are the users?
Can you find any dichotomies?
— Big company versus small
— Novice versus experienced
— Infrequent versus heavy user
Experience: Education, culture, language, training, work with simi-
lar systems, etc.
Why?
What are their goals in performing the task—what do they want?
What do they produce with the system?
How?
What other things are necessary to perform the task?
— Information, other systems, time, money, materials, energy, etc.
What methods or procedures do they use?
Environment
The user/task environment (as well as the OS or computer) may span a wide
range of conditions.
Consider any system embedded in a vehicle. Anywhere the vehicle can
be taken is a possible environment.
What external factors are relevant to the user? To the system’s ability to
perform? For example, buildings, weather, electromagnetic interference, etc.
What internal factors are relevant to the user? To the system’s ability to
perform? For example, platform resources like speed, memory, ports, etc.,
AC power system loading, multitasking.
With scenario categories in hand, we can focus on specific test cases. This is called an operational profile.
An activity is a specific discrete interaction with the system. Ideally, an
activity closely corresponds to an event-response pair. It could be a subjec-
tive definition but must have a start/stop cycle. We can refine each activity
into a test by specifying:
Probability of occurrence.
Data values derived by partitioning.
Equivalence classes are scenario-oriented.
Scenarios are a powerful technique but have limitations and require a concentrated effort. So, we have the following suggestions:
User/customer cooperation will probably be needed to identify realistic scenarios.
Scenarios should be validated with the user/customer or a focus group.
Test development and evaluation require people with a high level of product expertise, who are typically in short supply.
Scenarios generate a large number of test cases.
Well-defined housekeeping procedures and automated support are needed if the scenarios will be used over a long period of time by many people.
Next we consider a case study of ACME Widget Co.
We will illustrate the operational profile (or specific test cases) with the
ACME Widget Co. order system.
Users
There are 1000 users of the ACME Widget order system.
Their usage patterns differ according to how often they use the system.
Of the total group, 300 are experienced, and about 500 will use the sys-
tem on a monthly or quarterly basis. The balance will use the system less
than once every six months.
Environment
Several locations have significantly different usage patterns.
Plant, office, customer site, and hand-held access.
Some locations are only visited by certain users. For example, only
experienced users go to customer sites.
Usage
The main user-activities are order entry, order inquiry, order update,
printing a shipping ticket, and producing periodic reports.
After studying the usage patterns, we find proportions vary by user type
and location.
For example, the infrequent user will never print a shipping ticket but is
likely to request periodic reports.
Some scenarios are shown in Table 8.5.
User type     p1    Location    p2     Activity    p3     Scenario probability (p)
Infrequent 0.2 Plant 0.05 Report 0.75 0.0075
0.2 Plant 0.05 Update 0.15 0.0015
0.2 Plant 0.05 Inquiry 0.10 0.0010
0.2 Office 0.95 Inquiry 0.60 0.1140
0.2 Office 0.95 Update 0.10 0.0190
0.2 Office 0.95 Report 0.30 0.0570
1.0000
There are two main parts in an operational profile: usage scenarios and
scenario probabilities or:
User type     p1    Location    p2     Activity    p3     Scenario probability (p)
Experienced 0.3 Plant 0.80 Print Ticket 0.90 0.2160
Cyclical 0.5 Hand Held 0.40 Order Entry 0.95 0.1900
Cyclical 0.5 Office 0.50 Inquiry 0.50 0.1250
Infrequent 0.2 Office 0.95 Inquiry 0.60 0.1140
Cyclical 0.5 Office 0.50 Order Entry 0.30 0.0750
Infrequent 0.2 Office 0.95 Report 0.30 0.0570
Cyclical 0.5 Office 0.50 Update 0.20 0.0500
Cyclical 0.5 Plant 0.10 Print Ticket 0.90 0.0450
Experienced 0.3 Office 0.10 Order Entry 0.70 0.0210
Experienced 0.3 Customer Site 0.10 Inquiry 0.70 0.0210
Infrequent 0.2 Office 0.95 Update 0.10 0.0190
Experienced 0.3 Plant 0.80 Update 0.05 0.0120
Experienced 0.3 Plant 0.80 Inquiry 0.05 0.0120
Infrequent 0.2 Plant 0.05 Report 0.75 0.0075
Cyclical 0.5 Hand Held 0.40 Inquiry 0.03 0.0060
Experienced 0.3 Customer Site 0.10 Update 0.20 0.0060
Experienced 0.3 Office 0.10 Update 0.20 0.0060
Cyclical 0.5 Hand Held 0.40 Update 0.20 0.0060
Experienced 0.3 Customer Site 0.10 Order Entry 0.10 0.0030
Experienced 0.3 Office 0.10 Inquiry 0.10 0.0030
Cyclical 0.5 Plant 0.10 Inquiry 0.05 0.0025
Cyclical 0.5 Plant 0.10 Update 0.05 0.0025
Infrequent 0.2 Plant 0.05 Update 0.15 0.0015
Infrequent 0.2 Plant 0.05 Inquiry 0.10 0.0010
1.0000
The operational profile is a framework for a complete test plan. For each sce-
nario, we need to determine which functions of the system under test will be used.
An activity often involves several system functions; these are called “runs.”
Each run is a thread. It has an identifiable input and produces a distinct output.
For example, the experienced/plant/ticket scenario might be composed
of several runs.
Display pending shipments.
Display scheduled pickups.
Assign carrier to shipment.
Enter carrier landing information.
Print shipment labels.
Enter on-truck timestamp.
Some scenarios may be low probability but have high potential impact.
For example, suppose ACME Widget is promoting order entry at the cus-
tomer site as a key selling feature. So, even though this accounts for only 3
in a thousand uses, it should be tested as if it were a high-priority scenario.
This can be accomplished by adding a weight to each scenario:
Weights Scenario
+2 Must test, mission/safety critical
+1 Essential functionality, necessary for robust operation
+0 All other scenarios
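One simple way to fold such weights into the profile is sketched below; the priority formula ((weight + 1) multiplied by the scenario probability) is our own assumption, not a formula from the text, and the sample rows echo the ACME Widget tables above:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// A scenario's probability is the product of its user-type, location, and
// activity proportions (p = p1 * p2 * p3); weight is the +2/+1/+0 adjustment.
struct Scenario {
    std::string name;
    double p1, p2, p3;   // user type, location, activity proportions
    int weight;          // 2 = mission/safety critical, 1 = essential, 0 = other
    double priority() const { return (weight + 1) * p1 * p2 * p3; }
};

int main() {
    std::vector<Scenario> profile = {
        {"Experienced/Plant/Print Ticket",        0.3, 0.80, 0.90, 0},
        {"Cyclical/Hand Held/Order Entry",        0.5, 0.40, 0.95, 0},
        {"Experienced/Customer Site/Order Entry", 0.3, 0.10, 0.10, 2},  // key selling feature
    };
    // Test the highest-priority scenarios first.
    std::sort(profile.begin(), profile.end(),
              [](const Scenario& a, const Scenario& b) {
                  return a.priority() > b.priority();
              });
    for (const auto& s : profile)
        std::cout << s.name << "  p=" << s.p1 * s.p2 * s.p3
                  << "  priority=" << s.priority() << "\n";
    return 0;
}
```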
Approach used
Reveal faults in new or modified modules. This requires running new
test cases and typically reusing old test cases.
Reveal faults in unchanged modules. This requires re-running old test
cases.
Requires reusable library of test suites.
When to do?
Periodically, every three months.
After every integration of fixes and enhancements.
Frequency, volume, and impact must be considered.
What to test?
Changes result from new requirements or fixes.
Analysis of the requirements hierarchy may suggest which subset to
select.
If new modules have been added, you should redetermine the call paths
required for CI coverage.
Acceptance Testing
On completion of the developer administered system test, three additional
forms of system testing may be appropriate.
a. Alpha test: Its main features are:
1. It is generally done “in-house” by an independent test organization.
2. The focus is on simulating real-world usage.
3. Scenario-based tests are emphasized.
b. Beta test: Its main features are:
1. It is done by representative groups of users or customers with
prerelease system installed in an actual target environment.
2. Customer attempts routine usage under typical operating conditions.
3. Testing is completed when failure rate stabilizes.
c. Acceptance test: Its main features are:
1. Customer runs test to determine whether or not to accept the system.
2. Requires a meeting of the minds on the acceptance criteria and the acceptance test plan.
2. The test design: It defines the features/functions to test and the pass/fail criterion. It designates all test cases to be used for each feature/function.
3. The test cases: It defines the items to be tested and provides traceability to the SRS, SDD, and user operations or installation guides. It specifies the input, output, environment, procedures, and intercase dependencies of each test case.
4. Test procedures: It describes and defines the procedures necessary to
perform each test.
Each item, section, and sub-section should have an identifying number
and designate date prepared and revised, authors, and approvals.
How should we go about testing a module, program, or system? What
activities and deliverables are necessary? A general approach is described in
IEEE 87a, an accepted industry standard for unit testing. It recommends
four main steps:
Step 1. Prepare a testing plan: Document the approach, the necessary
resources, and the exit criterion.
Step 2. Design the test:
2.1 Develop an architecture for the test, organize by goals.
2.2 Develop a procedure for each test case.
2.3 Prepare the test cases.
2.4 Package the plan per IEEE 82a.
2.5 Develop test data.
Step 3. Test the components:
3.1 Run the test cases.
3.2 Check and classify the results of each test case:
3.2.1 Actual results meet expected results.
3.2.2 Failure observed:
Implementation fault.
Design fault.
Undetermined fault.
Choice of Standards
The planning aspects are proactive measures that can have an across-the-
board influence on all testing projects.
Standards comprise an important part of planning in any organization.
Standards are of two types:
1. External standards
2. Internal standards
Built-in test features are shown in Figure 8.31 and are summarized below:
1. Assertions automate basic checking and provide “set and forget” runtime
checking of basic conditions for correct execution.
2. Set/Reset provides controllability.
3. Reporters provide observability.
4. A test suite is a collection of test cases and a plan to use them. IEEE standard 829 defines the general contents of a test plan.
5. Test tools: Testing requires automation. Without automation, greater costs will be incurred to achieve a given reliability goal; the absence of tools inhibits testability.
6. Test process: The overall software process capability and maturity can significantly facilitate or hinder testability. This model follows the key process area of the defined level for software product engineering.
Test case 1 in the table above says to test the combination that has book
set to “in-stock,” purchase set to “cash,” and shipping set to “overnight.”
clicked, the result is displayed. The clear button will clear the screen. Click-
ing on the quit button ends the application.
Now, we will perform the following on this GUI application:
FIGURE 8.33
This is RUC-3. Based on this real-use case, we derive system-level test cases
also. They are given below.
Third Level: Derive test cases from a finite state machine description of the external appearance of the GUI. This is shown below:
A test case from this formulation is a circuit—a path in which the start node and the end node are the same, usually the idle state. Nine such test cases are shown in the table below. The numbers in the table show the sequence in which the states are traversed by the test case. The test cases, TC1 to TC9, are as follows:
State TC1 TC2 TC3 TC4 TC5 TC6 TC7 TC8 TC9
Idle 1 1 1 1 1 1 1 1 1, 3
Missing country and dollar message 2 2
Country selected 2 2, 4 2 4, 6
U.S. dollar amount entered 2 2, 4 2
Missing U.S. dollar msg 3 5
Both inputs done 3 5 3 5 3 3 7
Missing country msg 3
Equivalent amount displayed 4 6 4 6
Idle 2 5 7 5 7 1 1 1 1
Fourth Level: To derive test cases from state-based event tables. This
would have to be repeated for each state. We might call this the exhaustive
level because it exercises every possible event for each state. However, it
is not truly exhaustive because we have not tested all sequences of events
across states. The other problem is that it is an extremely detailed view of
system testing that is likely very redundant with integration and even unit-
level test cases.
Now, we will discuss statechart-based system testing.
Statecharts are a fine basis for system testing. The problem is that statecharts are prescribed to be at the class level in UML. There is no easy way to compose the statecharts of several classes to get a system-level statechart. A possible solution is to translate each class-level statechart into a set of event-driven petri nets (EDPNs) to describe the threads to be tested. Then the atomic system functions (ASFs) and the data places are identified. For our GUI application, they are as follows:
SUMMARY
Various key object-oriented concepts can be tested using some testing
methods and tools. They are summarized below:
variables.
Code coverage methods for methods of a
class.
Alpha-Omega method of exercising
methods.
State diagram to test states of a class.
higher.
7. Inter-object communication Message sequencing.
8. Object reuse and parallel Needs more frequent integration tests and
ANSWERS
1. a. 2. a. 3. c. 4. c.
5. a. 6. d. 7. a. 8. b.
9. b. 10. c.
Q. 4. The big bang is estimated to have occurred about 18 billion years ago.
Given: Paths/second = 1 × 10^3
Seconds/year = 3.154 × 10^7
If we start testing 10^3 paths/second at the instant of the big bang, how
many paths could we test? At what loop value of x would we run out
of time?
Ans. Given: 18 billion years = 1.8 × 10^10 years
and 3.154 × 10^7 seconds/year
REVIEW QUESTIONS
1. Object-oriented languages like JAVA do not support pointers. This
makes testing easier. Explain how.
2. Consider a nested class. Each class has one method. What kind of
problems will you encounter during testing of such nested classes? What
about their objects?
3. Explain the following:
a. Unit and integration testing
b. Object-oriented testing
4. Write and explain some common inheritance related bugs.
5. How is object-oriented testing different from procedural testing?
Explain with examples.
6. Describe all the methods for class testing.
7. Write a short paragraph on issues in object-oriented testing.
8. Explain briefly about object-oriented testing methods with examples.
Suggest how you test object-oriented systems by use-case approach.
9. Illustrate “how do you design interclass test cases.” What are the various
testing methods applicable at the class level?
10. a. What are the implications of inheritance and polymorphism in object-
oriented testing?
b. How does GUI testing differ from normal testing? How is GUI
testing done?
11. With the help of suitable examples, demonstrate how integration testing and system testing are done for object-oriented systems.
12. How can reusability features be exploited by an object-oriented testing approach?
13. a. Discuss the salient features of GUI testing. How is it different from
class testing?
b. Explain the testing process for object-oriented programs (systems).
14. Draw a state machine model for a two-player game and also write all
possible control faults from the diagram.
9
Automated Testing
Inside this Chapter:
9.0. Automated Testing
9.1. Consideration During Automated Testing
9.2. Types of Testing Tools-Static V/s Dynamic
9.3. Problems with Manual Testing
9.4. Benefits of Automated Testing
9.5. Disadvantages of Automated Testing
9.6. Skills Needed for Using Automated Tools
9.7. Test Automation: “No Silver Bullet”
9.8. Debugging
9.9. Criteria for Selection of Test Tools
9.10. Steps for Tool Selection
9.11. Characteristics of Modern Tools
9.12. Case Study on Automated Tools, Namely, Rational Robot,
WinRunner, Silk Test, and Load Runner
testing task. Automated testing tools vary in their underlying approach, qual-
ity, and ease of use.
Testing tools are used to document tests, produce test guides based on data queries, provide temporary structures to help run tests, and measure the results of the tests. Manual testing is considered to be costly and time-consuming; therefore, automated testing is used to cut down time and cost.
like a flow graph generator are also available in the market. Regression test-
ing tools are finally used during the testing life cycle. These tools are often
known as computer aided software testing (CAST) tools.
See Table on next page for certain other tools used during testing and
their field of applications.
(module) testing and subsequent integration testing (e.g., drivers and stubs)
as well as commercial software testing tools. Testing tools can be classified
into one of the two categories listed below:
a. Static Test Tools: These tools do not involve actual input and output. Rather, they take a symbolic approach to testing, i.e., they do not test the actual execution of the software. These tools include the following:
a. Flow analyzers: They ensure consistency in data flow from input to output.
b. Path tests: They find unused code and code with contradictions.
c. Coverage analyzers: They ensure that all logic paths are tested.
d. Interface analyzers: They examine the effects of passing variables and data between modules.
b. Dynamic Test Tools: These tools test the software system with “live” data. Dynamic test tools include the following:
a. Test driver: It inputs data into a module-under-test (MUT).
b. Test beds: They simultaneously display source code along with the program under execution.
c. Emulators: The response facilities are used to emulate parts of the system not yet developed.
d. Mutation analyzers: Errors are deliberately “fed” into the code in order to test the fault tolerance of the system.
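As a minimal sketch of the first dynamic tool, a test driver that feeds data into a module-under-test and compares actual against expected output (the mut( ) function below is a stand-in, not a real module):

```cpp
#include <iostream>

// Module-under-test (MUT): a hypothetical function, stubbed here.
int mut(int input) { return input * 2; }

// A minimal test driver: feeds data into the MUT and checks the output.
bool drive(int input, int expected) {
    int actual = mut(input);
    std::cout << "input=" << input << " expected=" << expected
              << " actual=" << actual
              << (actual == expected ? "  PASS" : "  FAIL") << "\n";
    return actual == expected;
}

int main() {
    int failures = 0;
    failures += !drive(2, 4);
    failures += !drive(0, 0);
    failures += !drive(-3, -6);
    return failures;        // non-zero exit signals failing test cases
}
```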
5. Test cases for certain types of testing such as reliability testing, stress
testing, and load and performance testing cannot be executed without
automation. For example, if we want to study the behavior of a system
with millions of users logged in, there is no way one can perform these
tests without using automated tools.
6. Manual testing requires the presence of test engineers but automated
tests can be run around the clock, in a 24 × 7 environment.
7. Tests, once automated, take comparatively far less resources to execute.
A manual test suite requiring 10 persons to execute over 31 days, i.e.,
31 × 10 = 310 man-days, may take just 10 man-days for execution, if
automated. Thus, a ratio of 1: 31 is achieved.
8. Automation produces a repository of different tests which helps us to
train test engineers to increase their knowledge.
9. Automation does not end with developing programs for the test cases.
Automation includes many other activities like selecting the right
product build, generating the right test data, analyzing results, and so on.
Automation should have scripts that produce test data to maximize cov-
erage of permutations and combinations of input and expected output for
result comparison. They are called test data generators.
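A tiny sketch of such a test data generator; the field names, value ranges, and output file are assumptions for illustration:

```cpp
#include <fstream>
#include <random>

// Writes pseudo-random (amount, account-type) pairs to a CSV file so that a
// suite can cover many input combinations for result comparison.
int main() {
    std::mt19937 gen(42);                                // fixed seed: repeatable data
    std::uniform_int_distribution<int> amount(1, 1000);
    const char* account_types[] = {"savings", "current", "loan"};

    std::ofstream out("test_data.csv");
    out << "amount,account_type\n";
    for (int i = 0; i < 100; ++i)
        out << amount(gen) << ',' << account_types[i % 3] << '\n';
    return 0;
}
```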
It is important for automation to relinquish the control back to test engi-
neers in situations where a further set of actions to be taken are not known.
As the objective of testing is to catch defects early, the automated tests
can be given to developers so that they can execute them as part of unit
testing.
9.8. DEBUGGING
Debugging occurs as a consequence of successful testing. That is, when a test
case uncovers an error, debugging is the process that results in the removal
of the error. After testing, we begin an investigation to locate the error, i.e.,
to find out which module or interface is causing it. Then that section of the
code is to be studied to determine the cause of the problem. This process is
called debugging.
Debugging is an activity of locating and correcting errors. Debugging is
not testing but it always occurs as a consequence of testing. The debugging
process begins with the execution of a test case. The debugging process will
always have one of the two outcomes.
1. The cause will be found, corrected, and removed.
2. The cause will not be found.
During debugging, we encounter errors that range from mildly annoying to
catastrophic. Some guidelines that are followed while performing debugging
are:
1. Debugging is the process of solving a problem. Hence, individuals
involved in debugging should understand all of the causes of an error
before starting with debugging.
2. No experimentation should be done while performing debugging. The
experimental changes often increase the problem by adding new errors
in it.
3. When there is an error in one segment of a program, there is a high
possibility of the presence of another error in that program. So, the rest
of the program should be properly examined.
4. It is necessary to confirm that the new code added in a program to
fix errors is correct. And to ensure this, regression testing should be
performed.
Note that in this graph, as the number of errors increases, the amount of
effort to find their causes also increases.
Once errors are identified in a software system, to debug the problem, a
number of steps are followed:
Step 1. Identify the errors.
Step 2. Design the error report.
Step 3. Analyze the errors.
Step 4. Debugging tools are used.
Step 5. Fix the errors.
Step 6. Retest the software.
After the corrections are made, the software is retested using regression tests to ensure that no new errors have been introduced during the debugging process.
Please note that debugging is an integral component of the software
testing process. Debugging occurs as a consequence of successful testing
and revealing the bugs from the software-under-test (SUT). When a test
case uncovers an error, debugging is the process that results in the removal
of the bugs. Also note that debugging is not testing, but it always occurs as a
consequence of testing. The debugging process begins with the execution of
a test case. This is shown in Figure 9.2.
Debugging Approaches
Several approaches have been discussed in literature for debugging
software-under-test (SUT). Some of them are discussed below.
1. Brute Force Method: This method is most common and least efficient
for isolating the cause of a software error. We apply this method when all
else fails. In this method, a printout of all registers and relevant memory
locations is obtained and studied. All dumps should be well documented
and retained for possible use on subsequent problems.
2. Back Tracking Method: It is a fairly common debugging approach that
can be used successfully in small programs. Beginning at the site where
a symptom has been uncovered, the source code is traced backward until
the site of the cause is found. Unfortunately, as the number of source
lines increases, the number of potential backward paths may become
unmanageably large.
3. Cause Elimination: The third approach to debugging, cause
elimination, is manifested by induction or deduction and introduces the
concept of binary partitioning. This approach is also called induction
and deduction. Data related to the error occurrence are organized to
isolate potential causes. A cause hypothesis is devised and the data
are used to prove or disprove the hypothesis. Alternatively, a list of all
possible causes is developed and tests are conducted to eliminate each.
If initial tests indicate that a particular cause hypothesis shows promise,
the data are refined in an attempt to isolate the bug.
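A sketch of binary partitioning applied to failing input data; it assumes a single offending record and a failure that reproduces whenever that record is present (the process( ) routine is a stand-in, not a real module):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical: some record in the input makes processing fail.
bool process(const std::vector<int>& records) {
    for (int r : records)
        if (r < 0) return false;       // the hidden "bug trigger"
    return true;
}

// Binary partitioning: repeatedly split the failing input in half and keep
// the half that still reproduces the failure, isolating the culprit record.
std::size_t isolate(std::vector<int> data) {
    std::size_t offset = 0;
    while (data.size() > 1) {
        std::size_t half = data.size() / 2;
        std::vector<int> left(data.begin(), data.begin() + half);
        std::vector<int> right(data.begin() + half, data.end());
        if (!process(left)) {
            data = left;               // failure reproduces with the left half
        } else {
            data = right;              // otherwise it must be in the right half
            offset += half;
        }
    }
    return offset;                     // index of the offending record
}

int main() {
    std::vector<int> input = {3, 7, 1, 9, -5, 2, 8, 4};
    std::cout << "failure isolated at record " << isolate(input) << "\n";  // prints 4
    return 0;
}
```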
Tools for Debugging: Each of the above debugging approaches can be
supplemented with debugging tools. For debugging we can apply a wide
variety of debugging tools such as debugging compilers, dynamic debugging
aids, automatic test case generators, memory dumps, and cross reference
maps. The following are the main debugging tools:
1. Turbo Debugger for Windows: The first debugger that comes to
mind when you think of a tool especially suited to debug your Delphi
code is Borland’s own Turbo Debugger for Windows.
2. Heap Trace: Heap Trace is a shareware heap debugger for Delphi 1.x and 2.x applications that enables debugging of heap memory use. It helps
you to find memory leaks, dangling pointers, and memory overruns in
your programs. It also provides optional logging of all memory allocations,
de-allocations, and errors. Heap trace is optimized for speed, so there is
SUMMARY
Testing is an expensive and laborious phase of the software process. As a result, testing tools were among the first software tools to be developed. These tools now offer a range of facilities, and their use significantly reduces the cost of the testing process. Different testing tools may be integrated into a testing workbench.
These tools are:
1. Test manager: It manages the running of program tests. It keeps track
of test data, expected results, and program facilities tested.
2. Test data generator: It generates test data for the program to be tested.
This may be accomplished by selecting data from a database.
3. Oracle: It generates predictions of expected test results.
4. File comparator: It compares the results of program tests with previous test results and reports differences between them.
5. Report generator: It provides report definition and generation
facilities for test results.
6. Dynamic analyzer: It adds code to a program to count the number
of times each statement has been executed. After the tests have been
run, an execution profile is generated showing how often each program
statement has been executed.
ANSWERS
1. a. 2. e. 3. b. 4. a.
5. d. 6. a. 7. a. 8. b.
9. a. 10. a.
REVIEW QUESTIONS
1. Answer the following:
a. What is debugging?
b. What are different approaches to debugging?
c. Why is exhaustive testing not possible?
2. Explain the following:
a. Modern testing tools.
3. a. Differentiate between static and dynamic testing tools with examples
in detail?
b. Will exhaustive testing guarantee that the program is 100% correct?
4. Compare testing with debugging.
5. Differentiate between static and dynamic testing.
6. Write a short paragraph on testing tools.
7. Compare testing with debugging.
8. Explain back tracking method for debugging.
9. Differentiate between static and dynamic testing tools.
10. What are the benefits of automated testing tools over conventional
testing tools?
11. Discuss various debugging approaches with some examples.
12. a. What is debugging? Describe various debugging approaches.
b. Differentiate between static testing tools and dynamic testing tools.
13. Briefly discuss dynamic testing tools.
14. Write a short paragraph on any two:
a. Static testing tools.
b. Dynamic testing tools.
c. Characteristics of modern tools.
15. a. Discuss in detail automated testing and tools. What are the advantages
and disadvantages?
b. Explain in brief modern tools in the context of software development
and their advantages and disadvantages.
16. List and explain the characteristics of modern testing tools.
17. Explain modern testing tools.
18. Write a short paragraph on static and dynamic testing tools.
10
Test Point Analysis (TPA)
Inside this Chapter:
10.0. Introduction
10.1. Methodology
10.2. Case Study
10.3. TPA for Case Study
10.4. Phase Wise Breakup Over Testing Life Cycle
10.5. Path Analysis
10.6. Path Analysis Process
10.0. INTRODUCTION
There are a number of accepted techniques for estimating the size of the
software. This chapter describes the test estimate preparation technique
known as test point analysis (TPA). TPA can be applied for estimating the
size of testing effort in black-box testing, i.e., system and acceptance testing.
The goal of this technique is to outline all the major factors that affect test-
ing projects and to ultimately do an accurate test effort estimation. On time
project delivery cannot be achieved without an accurate and reliable test
effort estimate.
Effective test effort estimation is one of the most challenging and important activities in software testing. There are many popular models for test effort
estimation in vogue today. One of the most popular methods is FPA. How-
ever, this technique can only be used for white-box testing. Organizations
specializing in niche areas need an estimation model that can accurately cal-
culate the testing effort of the application-under-test.
TPA is one such method that can be applied for estimating test effort in
black-box testing. It is a 6-step approach to test estimation and planning. We
believe that our approach has a good potential for providing test estimation
for various projects. Our target audience for using this approach would be
anyone who would want to have a precise test effort estimation technique for
any given application-under-test (AUT).
Ineffective test effort estimation leads to schedule and cost overruns.
This is due to a lack of understanding of the development process and con-
straints faced in the process. But we believe that our approach overcomes all
these limitations.
To this end, the problem will be approached from a mathematical
perspective. We will be implementing the following testing metrics in
C++: static test points, dynamic test points, total number of test points, and
primary test hours.
We will be illustrating TPA using the following case study.
DCM Data Systems Ltd. had a number of software products. One of the
newly developed products, vi editor, was installed locally and abroad.
Reports and surveys depicted that some of the program functionality
claimed did not adequately function. The management of the company
then handed over the project to an ISO certified CMM level 5 company,
KRV&V. KRV&V decided to use the TPA method to estimate black-box
testing effort.
10.1. METHODOLOGY
10.1.1. TPA Philosophy
The effort estimation technique TPA is based on three fundamental ele-
ments:
Size of the information system to be tested
Test strategy
Productivity
Size denotes the size of the information system to be tested.
Test strategy implies the quality characteristics that are to be tested on
each subsystem.
Productivity is the amount of time needed to perform a given volume of
testing work.
10.1.2. TPA Model
ST = (FP * Qi) / 500    (1)
where
FP = total number of function points assigned to an information system
Qi = weighting factor for statically measurable quality characteristics
500 = minimum number of FPs that can be calculated in a day

DT = FPf * Df * QD    (2)
ii. Usage intensity (Ui): It depicts how many users process a function and
how often.
Weights:
iii. Interfacing (I): It implies how much one function affects the other
parts of the system. The degree of interfacing is determined by first
ascertaining the logical data sets (LDSs) which the function in question
can modify, then the other functions which access these LDSs. An
interface rating is assigned to a function by reference to a table in
which the number of LDSs affected by the function are arranged
vertically and the number of the other functions accessing LDSs
are arranged horizontally. When working out the number of “other
functions” affected, a given function may be counted several times if it
accesses several LDSs, all of which are maintained by the function for
which the calculation is being made.
The sum of medium weights (ratings) for all factors is calculated. It comes
out to be 20.
v. Uniformity (U): It checks the reusability of the code. A uniformity
factor of 0.6 is assigned in case of the 2nd occurrence (reuse) of
unique, clone, and dummy functions. Otherwise in all cases a
uniformity factor 1 is assigned.
Method of calculation of Df: The Df factor is calculated by adding together the ratings of the first four function-dependent variables, i.e., Up, Ui, I, and C, and then dividing the sum by 20 (the sum of the median/nominal weights of these factors).
The result is then multiplied by the uniformity factor (U). A Df factor is
calculated for each function.
Mathematically, Df = [(Up + Ui + I + C) / 20] * U    (3)
where Up = User importance
Ui = Usage intensity
I = Interfacing
C = Complexity
U = Uniformity
Function FPs Up Ui I C U Df
Error message 4 6 8 4 3 1 1.05
Help screens 4 6 8 4 3 1 1.05
Menus 4 6 8 4 3 1 1.05
Weights:
QD = [ Σ (Ri / 4) * Wi ] + (0.02 * 4)    (4)
(the summation runs over i = 1 to 4; the first term covers the explicitly measured dynamic quality characteristics and the 0.02 * 4 term the implicitly measured ones)

TP = ST + Σ DT    (5)
   = (FP * Qi) / 500 + Σ (FPf * Df * QD)    (6)    [from eqs. (1) and (2)]
where
TP = Total number of test points assigned to the system as a
whole.
ST = Total number of static test points.
PF value Description
0.7 If test team is highly skilled
2.0 If test team has insufficient skills
Rating:
Rate Description
1 Testing involves the use of SQL, record, and playback tool.
These tools are used for test specification and testing.
2 Testing involves the use of SQL only. No record and
playback tool is being used. Tools are used for test
specification and testing.
4 No testing tools are available.
Rate Description
2 A development testing plan is available and the testing team
is familiar with the actual test cases and results.
4 If development testing plan is available.
8 If no development testing plan is available.
c. Test basis: The test basis variable reflects the quality of documentation
upon which the test under consideration is based. This includes
documents like SRS, DFD, etc. The more detailed and higher quality
the documentation is, the less time is necessary to prepare for testing
(preparation and specification phases).
Ratings:
Rate Description
3 Documentation standards and documentation templates
are used, inspections are carried out.
6 Documentation standards and documentation templates
are used.
12 No documentation standards and templates are used.
Rate Description
2 System was developed in 4GL programming language with
integrated DBMS.
4 System was developed using 4GL and 3GL programming
language.
8 System was developed using only 3GL programming language
such as COBOL and PASCAL.
e. Test environment: This variable depicts the extent to which the test
infrastructure in which the testing is to take place has been tried out.
Fewer problems and delays are likely during the execution phase in a
well tried and tested infrastructure.
Ratings:
Rate Description
1 Environment has been used for testing several times in the past.
2 Test environment is new but similar to earlier used environment.
4 Test environment is new and setup is experimental.
f. Testware: Testware variable reflects the extent to which the tests can
be conducted using existing testware where testware includes all of the
testing documentation created during the testing process, for example,
test specification, test scripts, test cases, test data, and environment
specification.
Ratings:
Rate Description
1 A usable, general initial data set and specified test cases are
available for test.
2 Usable, general initial data set available.
4 No usable testware is available.
Rate Description
3 The team consists of up to 4 persons.
6 The team consists of between 5 and 10 persons.
12 The team consists of more than 10 persons.
Ratings:
Rate Description
2 Both an automated time registration system and automated
defect tracking system are available.
4 Either an automated time registration system or automated
defect tracking system is available.
8 No automated systems are available.
Mathematically, THA = [PT * (T + C)] / 100    (11)
Step 8: Calculation of total test hours (TTH): The “total number of
test hours” are obtained by adding primary test hours and the planning and
control allowance.
= PT + [PT * (T + C)] / 100    (from equation (10))    (13)
Phase Estimate
I. Planning and control THA
II. Preparation 10%
III. Specification 40%
IV. Execution 45%
V. Completion 5%
I = 2 * 50% + 8 * 50% = 5
So,
Df = [(6.6 + 5.4 + 5 + 6) / 20] * U    (A)
where U = uniformity factor = 60% * 1 + 40% * 0.6 = 0.6 + 0.24 = 0.84
Putting the value of U in equation (A), we get:
Df = (23/20) * 0.84 = 1.15 * 0.84 = 0.97
and QD (dynamic quality characteristic) = weighted score on the f ollowing
4 quality characteristics:
Suitability (weight = 0.75, medium importance – rate = 4)
Security (weight = 0.05, extremely important – rate = 6)
Usability (weight = 0.10, highly important – rate = 5)
Efficiency (weight = 0.10, medium importance – rate = 4)
ST = (Total FP * Qi) / 500
Now, Total FP = Data FP + Transaction FP = 650 + 600 = 1250
So, ST = (1250 * 64) / 500 = 160
III. Total test points (TP):
TP = DT + ST = 2444.4 + 160 = 2604.4
IV. Productivity: = 1.4 test hours per test point
V. Environmental factor (EF) = (rating on the 6 environmental factors) / 21
where
Rating on test tools = 1
Rating on development testing = 4
Rating on test basis = 6
Rating on development environment = 2
Rating on test environment = 2
Rating on testware = 2
So, EF = (1 + 4 + 6 + 2 + 2 + 2) / 21 = 17/21 = 0.81
VI. Primary test hours:
= TP * PF * EF = 2604 * 1.4 * 0.81 = 2953
VII. Planning and control allowance:
= Rating on team size factor + Rating on management tools factor
= 6% + 2% = 8%
VIII. Total test hours:
= Primary test hours+ 8% of Primary test hours
= 2953 + 8% of 2953 = 3189 hours
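Since the chapter sets out to express these metrics in C++, the case-study arithmetic above can be rebuilt as a short program; the variable names are ours, and DT is taken as already computed (2444.4) rather than rederived:

```cpp
#include <iostream>

int main() {
    // Static test points: ST = (total FP * Qi) / 500
    double total_fp = 1250, Qi = 64;
    double ST = total_fp * Qi / 500.0;                 // = 160

    // Dynamic test points (value from the case study): sum of FPf * Df * QD
    double DT = 2444.4;

    double TP = ST + DT;                               // total test points = 2604.4

    // Primary test hours: PT = TP * productivity * environmental factor
    double PF = 1.4;                                   // test hours per test point
    double EF = (1 + 4 + 6 + 2 + 2 + 2) / 21.0;        // = 0.81 (approx.)
    double PT = TP * PF * EF;                          // = 2953 (approx.)

    // Planning and control allowance: team size 6% + management tools 2%
    double allowance = 0.06 + 0.02;
    double TTH = PT * (1 + allowance);                 // = 3189 hours (approx.)

    std::cout << "ST=" << ST << " TP=" << TP
              << " PT=" << PT << " TTH=" << TTH << "\n";
    return 0;
}
```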
Each use case has only one start and can have multiple end points. Using
UML terminology, the start is indicated by a plain circle and a circle with a
dot inside indicates the end.
In order to draw a diagram for the use case the following steps should
be followed:
1. Draw the basic flow.
Identify nodes
Combine sequential steps into one branch
Annotate the branches with the text summarizing the action in those branches
Connect the nodes indicating the direction of flow
2. Repeat the step above for each alternate and exception flow.
The complete step-by-step process is illustrated in the attached diagram at Figure B. The use-case example of Figure A has been used to illustrate the process. As explained above, the flow diagram is an excellent tool for identifying the use-case flow and other problems in the early stages. This feature of the process can save a lot of time and effort in the earlier stages of the software process. Figure 10.3 also covers how to identify use-case problems and then correct them early in the software process.
Criticality: This attribute describes how critical the failure of this path
could be with 1 being least and 10 being most.
Having defined these attributes, we can compute a path factor which is
Frequency + Criticality. This is an equal weight path factor. However, we can
provide different weights to these attributes to arrive at a proper path factor.
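For example (hypothetical values): a path exercised very often (frequency 9) whose failure would be severe (criticality 8) gets an equal-weight path factor of 9 + 8 = 17, while a rare, low-impact path with frequency 2 and criticality 3 scores only 5, so the first path is selected for testing ahead of the second.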
“User inserts his card in the machine, system successfully validates the
card, and prompts for 4 digit pin. User enters a valid pin. System suc-
cessfully validates the pin and prompts for amount. User enters a valid
amount. System ejects user card and correct amount for collection by
the user.”
Now this is a happy day path but as we know there may be certain min-
imum and maximum amounts that a user can withdraw from the ATM. Let
us say it is $10 and $500, respectively. In order to adequately test this path
using boundary value analysis (BVA), we need to test this withdrawal for
$10, $500, and $200 (middle). Thus, we need to create 3 test scenarios for
the same path.
The point to note here is that all three scenarios have the exact same
path through the use case. Why not test <$10 and >$500 withdrawal in the
same path? The reason we do not do it is that such tests belong to a different
path where a “user” tries to withdraw <$10 or >$500 and he gets an appro-
priate message and the user is prompted to reenter an acceptable amount.
The user reenters the correct amount and the system then lets the user with-
draw the money.
The following guidelines should be followed to create test cases:
1. Create one test case for each path that has been selected for testing. As
explained previously, the path description provides enough information
to write a proper test case.
2. Create multiple test scenarios within each test case. We recommend
using data tables that have data elements as rows and each column is a
data set and also a test scenario. See Appendix B where this is explained
with reference to the example.
3. Add GUI details in the steps, if necessary.
Correct the use case and the flow diagram by showing two more
nodes. The node is also a point where two or more flows meet.
Raise an issue:
How to exit from an endless loop?
Path Selection
Note: In a large number of cases, you will probably be testing all use-case
paths. The prioritization process helps you in selecting critical or more
important paths for testing.
Attribute name #1 #2 #3
ID VB4345680 VC245678 VA121000
V234569012 VB789134 BV463219
C4562P235 VC340909 AV453219
VB373890 VB789032 VA453219
Remarks        1 Valid and 3          1 Valid and 3          1 Valid and 3
               Invalid IDs.           Invalid IDs.           Invalid IDs.
               Correct invalid IDs    Correct invalid IDs    Correct invalid IDs
               by a valid ID.         by a valid ID.         by a valid ID.
Expected Student with Student with Student with
Results particulars as per particulars as per particulars as per
Table A1 will be Table A2 will be Table A3 will be
deleted deleted deleted
Note: Notice how we show valid and invalid data.
Attribute name #1 #2 #3
Name Victor Thomson John Smith Mary Bhokins
DOB 01/11/75 02/12/2000 10/11/1979
SS# 555 44 7777 222 11 7789 543 24 8907
Status Citizen Resident Alien Non Citizen
Graduation Date 02/20/2000 03/15/2001 09/15/2002
Expected Result System should System should System should
return a valid ID return a valid ID return a valid ID
SUMMARY
In our opinion, one of the most difficult and critical activities in IT is the estimation process. We believe this is because when we say that a project will be accomplished in a certain amount of time at a certain cost, it must happen. If it does not happen, several things may follow: from peers’ comments and senior management’s warnings to being fired, depending on the reasons for and the seriousness of the failure.
Before even thinking of moving to systems test at our organization, we
always heard from the development group members that the estimations
made by the systems test group were too long and expensive. We tried to
understand the testing estimation process.
The testing estimation process in place was quite simple. The inputs for
the process provided by the development team were the size of the develop-
ment team and the number of working days needed for building a solution
before starting systems tests.
The testing estimation process said that the number of testing engineers
would be half of the number of development engineers and one-third of the
number of development working days.
A spreadsheet was created in order to find out the estimation and calculate
the duration of tests and testing costs. They are based on the following formulas:
Testing working days = (Development working days)/3
Testing engineers = (Development engineers)/2
Testing costs = Testing working days * Testing engineers * Person
daily costs
As the process was only playing with numbers, it was not necessary to regis-
ter anywhere how the estimation was obtained.
To show how the process worked: if one development team said that to deliver a solution for systems testing it would need 4 engineers and 66 working days, then the systems test would need 2 engineers (half) and 22 working days (one-third). So the solution would be ready for delivery to the customer after 88 (66 + 22) working days.
Just to be clear, in testing time the time for developing the test cases
and preparing the testing environment was not included. Normally, it would
need an extra 10 days for the testing team.
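A small sketch of this ratio-based calculation; the formulas are those given above, while the person daily cost of 100 used in the example call is an assumed placeholder:

# Sketch of the old, purely ratio-based testing estimate described above.
def old_testing_estimate(dev_engineers, dev_working_days, person_daily_cost):
    testing_days = dev_working_days / 3       # one-third of the development working days
    testing_engineers = dev_engineers / 2     # half of the development engineers
    testing_cost = testing_days * testing_engineers * person_daily_cost
    return testing_engineers, testing_days, testing_cost

# Worked example from the text: 4 development engineers, 66 working days.
engineers, days, cost = old_testing_estimate(4, 66, person_daily_cost=100)
print(engineers, days, 66 + days)             # 2 engineers, 22 days, delivery after 88 days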
Besides being simple, that process worked fine across different projects and years. But we were not happy with this approach, and the development group was not either. Metrics, project analogies, expertise, and requirements were not being used to support the estimation process.
Metric                                                    Value
1. Number of test cases created for each requirement       4.53
2. Number of test cases developed per working day         14.47
3. Number of test cases executed per working day          10.20
4. Number of ARs per test case                              0.77
5. Number of ARs verified per working day                  24.64
Estimate                                      Value
Number of test cases (based on metric 1)      31,710
Preparation phase (based on metric 2)         11 working days
Execution phase (based on metric 3)           16 working days
Number of ARs (based on metric 4)             244 ARs
Regression phase (based on metric 5)          6 working days
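The second table can be read as a simple calculation driven by the five historical metrics in the first table. The sketch below shows one plausible reading of that calculation; the metric values are taken from the first table, but the number of requirements (70) is an assumed placeholder and not a figure from the text:

import math

# Historical metrics from the first table (one plausible interpretation of how each is used).
TESTCASES_PER_REQUIREMENT = 4.53     # metric 1
TESTCASES_DEVELOPED_PER_DAY = 14.47  # metric 2
TESTCASES_EXECUTED_PER_DAY = 10.20   # metric 3
ARS_PER_TESTCASE = 0.77              # metric 4
ARS_VERIFIED_PER_DAY = 24.64         # metric 5

def metric_based_estimate(num_requirements):
    testcases = num_requirements * TESTCASES_PER_REQUIREMENT
    ars = testcases * ARS_PER_TESTCASE
    return {
        "number_of_testcases": round(testcases),
        "preparation_days": math.ceil(testcases / TESTCASES_DEVELOPED_PER_DAY),
        "execution_days": math.ceil(testcases / TESTCASES_EXECUTED_PER_DAY),
        "anticipated_ars": round(ars),
        "regression_days": math.ceil(ars / ARS_VERIFIED_PER_DAY),
    }

print(metric_based_estimate(70))     # 70 requirements is an assumed input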
Normally, the results from the new estimation process are about 20 to 25% cheaper and faster than those of the old one. If the testing team gets a different percentage, it returns to the process in order to understand whether something was missed.
Sixth Rule: Estimation should be recorded:
All decisions should be recorded. This is very important because if requirements change for any reason, the records will help the testing team estimate again; the team will not need to go back through all of the steps and take the same decisions again. Sometimes it is also an opportunity to adjust the estimation made earlier.
Seventh Rule: Estimation should be supported by tools:
A new spreadsheet has been created containing metrics that help to reach
the estimation quickly. The spreadsheet calculates automatically the costs
and duration for each testing phase.
There is also a letter template that contains some sections such as: cost
table, risks, and free notes to be filled out. This letter is sent to the customer.
It also shows the different options for testing that can help the customer
decide which kind of test he or she needs.
Eighth Rule: Estimation should always be verified:
Finally, every estimation should be verified. We have created another spreadsheet for recording the estimations. Each new estimation is compared with the previous ones recorded there to see whether they follow a similar trend. If the estimation deviates from the recorded ones, a re-estimation should be made.
We can conclude from this chapter that effort calculation can be done even for black-box testing, although it is a challenging activity during software testing. Test point analysis (TPA) is one such technique. Other techniques, such as use case analysis, can also be used; use case analysis is also a very powerful method for generating realistic test cases.
ANSWERS
1. b. 2. c. 3. a. 4. b.
5. a. 6. d. 7. a.
REVIEW QUESTIONS
1. What are the three main inputs to a TPA process? Explain.
2. With a flowchart explain the TPA model.
3. Explain the test points of some standard functions.
4. “The bigger the team, the more effort it will take to manage the project.”
Comment.
5. Write short paragraphs on:
a. Testware
b. Planning and control tools
11
Testing Your Websites—
Functional and
Non-Functional Testing
Inside this Chapter:
11.0. Abstract
11.1. Introduction
11.2. Methodology
11.0. ABSTRACT
Today everyone depends on websites for business, education, and trading purposes, and it is often said that hardly any work is possible without the Internet. Many different types of users connect to websites and need different types of information, so websites should respond according to the users' requirements. At the same time, the correct behavior of sites has become crucial to the success of businesses and organizations and thus should be tested thoroughly and frequently. In this chapter, we present various methods (functional and non-functional) to test a website. However, testing a website is not an easy job because we have to test not only the client side but also the server side. We believe our approach will help any website engineer to test a website completely, with a minimum number of errors.
11.1 INTRODUCTION
The client end of the system is represented by a browser which connects to
the website server via the Internet. The centerpiece of all web applications
is a relational database which stores dynamic contents. A transaction server
controls the interactions between the database and other servers (often called
“application servers”). The administration function handles data updates and
database administration.
There are many possible terms for the web app development life cycle, including the spiral life cycle or some form of the iterative life cycle. A more cynical way to describe the most commonly observed approach is as unstructured development, similar to the early days of software development before software engineering techniques were introduced. The "maintenance phase" often fills the role of adding missed features and fixing problems.
• What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? How much?
• What kinds of security (firewalls, encryption, passwords, etc.) will be required and what is it expected to do? How can it be tested?
• How reliable are the Internet connections? And how does that affect the backup system or redundant connection requirements and testing?
• What processes will be required to manage updates to the website's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
• Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
• How will internal and external links be validated and updated? How often?
• How many times will the user log in, and does this require testing?
• How are CGI programs, applets, JavaScripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
The table below shows the differences between testing a software project that is not web based and testing a web application project.

2. Planning
   Non-web software project: How long will it take our available resources to build this product? How will we test this product? Typically involves experience-based estimation and planning.
   Web application project: We need to get this product out now. Purely driven by the available time window and resources.

4. Implementation
   Non-web software project: Let us decide on the sequence of building blocks that will optimize our integration of a series of builds. Sequential development of design components.
   Web application project: Let us put in the framework and hang some of the key features on it. We can then show it as a demo or pilot site to our customers. Iterative prototyping with transition of the prototype to a website.

5. Integration
   Non-web software project: How does the product begin to take shape as the constituent pieces are bolted together? Are we meeting our requirements? Are we creating what we set out to create in the first place? Assembly of components to build the specified system.
   Web application project: This phase typically does not exist. It is a point in time when prototyping stops and the site goes live.

6. Testing
   Non-web software project: Have we tested the product in a reproducible and consistent manner? Have we achieved complete test coverage? Have all serious defects been resolved in some manner? Systematic testing of functionality against specifications.
   Web application project: It's just a website; the designer will test it as (s)he develops it, right? How do you test a website? Make sure the links all work? Testing of implied features based on a general idea of desired functionality.

7. Release
   Non-web software project: Have we met our acceptance criteria? Is the product stable? Has QA authorized the product for release? Have we implemented version control methods to ensure we can always retrieve the source code for this release? Building a release candidate and burning it to CD.
   Web application project: Go live NOW! We can always add the rest of the features later! Transfer of the development site to the live server.

8. Maintenance
   Non-web software project: What features can we add for a future release? What bug fixes? How do we deal with defects reported by the end user? Periodic updates based on feature enhancements and user feedback.
   Web application project: We just publish new stuff when it's ready; we can make changes on the fly, because there's no installation required. Any changes should be transparent to our users. An integral part of the extended development life cycle for web apps.

Average timeframe for the above
   Non-web software project: One to three years.
   Web application project: Four months.
11.2. METHODOLOGY
11.2.1.2. Usability testing
For usability testing, there are standards and guidelines that have been
established throughout the industry. The end-users can blindly accept these
sites because the standards are being followed. But the designer shouldn’t
completely rely on these standards. While following these standards and
guidelines during the making of the website, he or she should also consider
the learnability, understandability, and operability features so that the user
can easily use the website.
In this testing, the designer should also consider the loading time of the web page when more transactions are taking place. A performance requirement can be as simple as requiring that a web page load in less than eight seconds, or as complex as requiring the system to handle 10,000 transactions per minute while still being able to load a web page within eight seconds.
Another variant of performance testing is load testing. Load testing for a
web application can be thought of as multi-user performance testing, where
you want to test for performance slow-downs that occur as additional users
use the application. The key difference in conducting performance testing
of a web application versus a desktop application is that the web application
has many physical points where slow-downs can occur. The bottlenecks may
be at the web server, the application server, or at the database server, and
pinpointing their root causes can be extremely difficult.
Typical steps to create performance test cases are as follows:
• Identify the software processes that directly influence the overall performance of the system.
• For each of the identified processes, identify only the essential input parameters that influence system performance.
• Create usage scenarios by determining realistic values for the parameters based on past use. Include both average and heavy workload scenarios. Determine the window of observation at this time.
• If there is no historical data on which to base the parameter values, use estimates based on requirements, an earlier version, or similar systems.
• If there is a parameter whose estimated values form a range, select values that are likely to reveal useful information about the performance of the system. Each value should be made into a separate test case.
Performance testing can be done through the "window" of the browser or directly on the server. If done on the server, some of the time that the browser itself takes is not taken into consideration.
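A minimal multi-user load-test sketch along these lines: several simulated users repeatedly request a page through an HTTP client and the response times are collected. The URL and the user/request counts are placeholders; a real load test would normally use a dedicated tool:

# Sketch of a multi-user performance (load) test with simulated concurrent users.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://www.example.com/"      # placeholder target page
USERS, REQUESTS_PER_USER = 10, 5     # assumed workload

def one_user(_):
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=30) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    all_timings = [t for user in pool.map(one_user, range(USERS)) for t in user]

print(f"max response time: {max(all_timings):.2f}s "
      f"(requirement: page loads in under eight seconds)")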
11.2.1.4. Scalability testing
The term “scalability” can be defined as a web application’s ability to sustain
its required number of simultaneous users and/or transactions while main-
taining adequate response times to its end users.
When testing scalability, configuration of the server under test is critical.
All logging levels, server timeouts, etc. need to be configured. In an ideal
situation, all of the configuration files should be simply copied from test
11.2.1.5. Security testing
Probably the most critical criterion for a web application is that of secu-
rity. The need to regulate access to information, to verify user identities,
and to encrypt confidential information is of paramount importance. Credit
card information, medical information, financial information, and corporate
information must be protected from persons ranging from the casual visitor
to the determined hacker. There are many layers of security from password-
based security to digital certificates, each of which has its pros and cons. The
test cases for security testing can be derived as follows:
• The web server should be set up so that unauthorized users cannot browse directories or the log files in which data from the website is stored.
• Early in the project, encourage developers to use the POST command wherever possible, because the POST command is used for large data.
• When testing, check URLs to ensure that there are no "information leaks" due to sensitive information being placed in the URL while using a GET command (see the sketch after this list).
• A cookie is a text file that is placed on a website visitor's system and that identifies the user's "identity." The cookie is retrieved when the user revisits the site at a later time. Cookies can be controlled by the user, who decides whether or not to allow them. If the user does not accept cookies, will the site still work?
• Is sensitive information in the cookie? If multiple people use a workstation, the second person may be able to read the sensitive information saved from the first person's visit. Information in a cookie should be encoded or encrypted.
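A small sketch of the URL "information leak" check from the list above: the URLs visited during a test session are scanned for sensitive parameters that should never travel in a GET query string. The parameter names and sample URLs are illustrative only:

# Sketch: flag sensitive data exposed in GET query strings.
from urllib.parse import urlparse, parse_qs

SENSITIVE_KEYS = {"password", "pin", "ssn", "creditcard"}   # assumed sensitive field names

def leaked_parameters(url):
    """Return the sensitive query-string keys found in the URL, if any."""
    query = parse_qs(urlparse(url).query)
    return sorted(k for k in query if k.lower() in SENSITIVE_KEYS)

visited = [
    "https://example.com/report?period=H1&year=2008",
    "https://example.com/login?user=student01&password=pass123",   # leak!
]
for url in visited:
    leaks = leaked_parameters(url)
    if leaks:
        print("information leak in URL:", url, "->", leaks)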
11.2.1.6. Recoverability testing
A website should have a backup or redundant server to which the traffic is
rerouted when the primary server fails. And the rerouting mechanism for
the data must be tested. If a user finds your service unavailable for an excessive period of time, the user will switch over to a competitor's website. If the site cannot recover quickly, then inform the user when it will be available and functional.
11.2.1.7. Reliability testing
Reliability testing is done to evaluate the product’s ability to perform its
required functions and give responses under stated conditions for a speci-
fied period of time.
For example, users trust an online banking web application (service) to complete all of their banking transactions. One would hope that the results are consistent, up to date, and in accordance with the user's requirements.
SUMMARY
It is clear from this chapter that for the failure-free operation of a website we
must follow both non-functional and functional testing methods. With these
methods one can test the performance, security, reliability, user interfaces,
etc. which are the critical issues related to the website. Web testing is a
challenging exercise and by following the methods described in this chapter,
some of those challenges may be mitigated.
ANSWERS
Inspection:
1. It is a five-step process that is well formalized.
2. It uses checklists for locating errors.
3. It is used to analyze the quality of the process.
4. This process takes a long time.
5. It focuses on the training of junior staff.

Walkthrough:
1. It has fewer steps than inspection and is a less formal process.
2. It does not use a checklist.
3. It is used to improve the quality of the product.
4. It does not take a long time.
5. It focuses on finding defects.
REVIEW QUESTIONS
1. How is website testing different from typical software testing?
2. Discuss various white-box testing techniques for websites.
3. Discuss various black-box testing techniques for websites.
4. Write short paragraphs on:
12
Regression Testing of a
Relational Database
Inside this Chapter:
12.0. Introduction
12.1. Why Test an RDBMS?
12.2. What Should We Test?
12.3. When Should We Test?
12.4. How Should We Test?
12.5. Who Should Test?
12.0. INTRODUCTION
Relational databases are tabular databases that are used to store target related
data that can be easily reorganized and queried. They are used in many appli-
cations by millions of end users. Testing databases involves three aspects:
Testing of the actual data
Database integrity
Functionality testing of database application
These users may access, update, delete, or append to the database. The
modified database should be error free. To make the database error free and
to deliver the quality product, regression testing of the database must be
done. Regression testing involves retesting of the database again and again to
ensure that it is free of all errors. It is a relatively new idea in the data com-
munity. Agile software developers take this approach to the application code.
Step 2: Running the test cases: The test cases are run. The running of the
database test cases is analogous to usual development testing.
Traditional Approach
Test cases are executed on the browser side. Inputs are entered on web
input forms and data is submitted to the back-end database via the web
browser interface. The results sent back to the browser are then validated
against expected values.
Advantages: It is simple and no programming skill is required. It not
only addresses the functionality of stored procedures, rules, triggers, and
data integrity but also the functionality of the web application as a whole.
Disadvantages: Sometimes the results sent to the browser after test
case execution do not necessarily indicate that the data itself is properly writ-
ten to a record in the table. When erroneous results are sent back to the
browser after the execution of test cases, it doesn’t necessarily mean that the
error is a database error.
A crucial danger with database testing, and with regression testing specifically, is coupling between tests. If we put the database into a known state and run several tests against that known state before resetting it, then those tests are potentially coupled to one another.
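One common way to avoid such coupling is to put the database into a known state before every test and discard that state afterwards. A minimal sketch, using an in-memory SQLite database purely for illustration; the table and values are invented for the example:

# Each test starts from a fresh, known database state, so tests cannot couple through shared data.
import sqlite3
import unittest

class StudentTableTest(unittest.TestCase):
    def setUp(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE student (id TEXT PRIMARY KEY, name TEXT)")
        self.db.execute("INSERT INTO student VALUES ('S001', 'Alice')")
        self.db.commit()

    def tearDown(self):
        self.db.close()    # the known state is discarded, never reused by the next test

    def test_delete_existing_student(self):
        self.db.execute("DELETE FROM student WHERE id = 'S001'")
        count = self.db.execute("SELECT COUNT(*) FROM student").fetchone()[0]
        self.assertEqual(count, 0)

if __name__ == "__main__":
    unittest.main()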
Advanced Approach
Preparation for Database Testing
Generate a list of database tables, stored procedures, triggers, defaults,
rules, and so on. This will help us have a good handle on the scope of testing
required for database testing. The points which we can follow are:
1. Generate data schemata for tables. Analyzing the schema will help us
determine:
Can a certain field value be Null?
What are the allowed or disallowed values?
What are the constraints?
Is the value dependent upon values in another table?
Will the values of this field be in the look-up table?
What are user-defined data types?
What are primary key and foreign key relationships among tables?
2. At a high level, analyze how the stored procedures, triggers, defaults,
and rules work. This will help us determine:
What is the primary function of each stored procedure and trigger?
Does it read data and produce outputs, write data, or both?
What are the accepted parameters?
What are the return values?
When is the stored procedure called and by whom?
When is a trigger fired?
3. Determine what the configuration management process is. That is how
the new tables, stored procedures, triggers, and such are integrated.
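Many of the schema questions in point 1 can be answered directly from the database catalog. A minimal sketch, using SQLite and its PRAGMA statements only as a stand-in for the dictionary or catalog views that a production RDBMS would provide; the tables are invented for the example:

# Inspect nullability, primary keys, and foreign keys programmatically.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE student (
        id TEXT PRIMARY KEY,
        name TEXT NOT NULL,
        dept_id INTEGER REFERENCES department(dept_id)
    );
""")

for table in ("department", "student"):
    print(f"-- {table}")
    for cid, name, ctype, notnull, default, pk in db.execute(f"PRAGMA table_info({table})"):
        print(f"   {name}: type={ctype}, nullable={not notnull}, primary_key={bool(pk)}")
    for row in db.execute(f"PRAGMA foreign_key_list({table})"):
        print(f"   foreign key: column {row[3]} -> {row[2]}({row[4]})")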
Step 3: Checking the results: Actual database test results and expected
database test results are compared in this step as shown in the following
example:
The following test cases were derived for this code snippet:
Test_id Year (year to test) Expected result Observed result Match
1 –1 –1 –1 Yes
2 –400 –1 –1 Yes
3 100 0 0 Yes
4 1000 0 0 Yes
5 1800 0 0 Yes
6 1900 0 0 Yes
7 2010 0 0 Yes
8 400 1 1 Yes
9 1600 1 1 Yes
10 2000 1 1 Yes
11 2400 1 1 Yes
12 4 1 1 Yes
13 1204 1 1 Yes
14 1996 1 1 Yes
15 2004 1 1 Yes
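The tested snippet itself appears earlier in the chapter and is not reproduced here; the sketch below is only an assumed reconstruction consistent with the expected results in the table (-1 for an invalid, non-positive year, 1 for a leap year, 0 otherwise), with the table rows re-run as assertions:

# Assumed leap-year check matching the expected results above.
def is_leap_year(year):
    if year <= 0:
        return -1
    if year % 400 == 0:
        return 1
    if year % 100 == 0:
        return 0
    return 1 if year % 4 == 0 else 0

expected = {-1: -1, -400: -1, 100: 0, 1000: 0, 1800: 0, 1900: 0, 2010: 0,
            400: 1, 1600: 1, 2000: 1, 2400: 1, 4: 1, 1204: 1, 1996: 1, 2004: 1}
for year, result in expected.items():
    assert is_leap_year(year) == result, (year, result)
print("all 15 table rows match")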
SUMMARY
In this chapter, we have studied the regression testing of relational databases. We have also done the black-box testing of a database code example.
ANSWERS
1. b. 2. c. 3. a. 4. c.
5. a. 6. c. 7. b.
REVIEW QUESTIONS
1. Why should an RDBMS be tested extensively?
2. How can you do black-box testing of a database?
3. What is refactoring? What are its three main objectives?
4. Explain with the help of a flowchart/an algorithm, the Test-First approach
used to test an RDBMS.
5. Comment on the flow of database testing.
6. Name some unit testing and load testing CASE tools and some test data
generators used during database testing.
13
A Case Study on
Testing of E-Learning
Management Systems
Abstract
Software testing is the process of executing a program or system with the
intent of finding errors. It involves any activity aimed at evaluating an attrib-
ute or capability of a program or system and determining that it meets its
required results. To deliver successful software products, quality has to be
ensured in each and every phase of a development process. Whatever the
organizational structure may be, the most important point is that the output
of each phase should be of very high quality. The SQA team is responsible for ensuring that the entire development team follows a quality-oriented process. Any modifications to the system should be thoroughly tested to ensure
that no new problems are introduced and that the operational performance
is not degraded due to the changes. The goal of testing is to determine and
ensure that the system functions properly beyond the expected maximum
workload. Additionally, testing evaluates the performance characteristics
like response times, transaction rates, and other time sensitive issues.
Chapter One
Introduction
NIIT Technologies is a global IT and business process management services
provider with a footprint that spans 14 countries across the world. It has been
working with global corporations in the USA, Europe, Japan, Asia Pacific,
and India for over two decades. NIIT Technologies provides independent
validation and verification services for your high-performance applications.
Their testing services help organizations leverage their experience in testing
to significantly reduce or eliminate functional, performance, quality, and reli-
ability issues. NIIT Technologies helps enterprises and organizations make
their software development activities more successful and finish projects in
time and on budget by providing systematic software quality assurance.
The Government of India's Tax Return Preparers (TRP) scheme, which trains unemployed and partially employed persons to assist small and medium taxpayers in preparing their returns of income, has now entered its second phase. During its launch year, on a pilot basis, close to 5,000 TRPs were trained at 100 centers in around 80 cities across the country, and 3,737 TRPs were certified by the Income Tax Department to act as Tax Return Preparers who assisted various people in filing their IT returns. The government has now decided to increase their area of operations by adding training on TDS returns and service tax returns for these TRPs. The quality assurance and testing team of NIIT, which is constantly engaged in testing and maintaining product quality, has to test such online learning content management websites as www.trpscheme.com in the following manner:
Functional and regression testing
System testing: Load/stress testing, compatibility testing
Full life cycle testing
Chapter Two
Software Requirement Specifications
2.1. INTRODUCTION
This document aims at defining the overall software requirements for “testing
of an online learning management system (www.trpscheme.com).” Efforts
have been made to define the requirements exhaustively and accurately.
2.1.1. Purpose
This document describes the functions and capabilities that will be provided by the website www.trpscheme.com. The resource center will be responsible for the day-to-day administration of the scheme. Its functions will include specifying the curriculum and all other matters relating to the training of the Tax Return Preparers, maintaining the particulars relating to the Tax Return Preparers, and carrying out any other function assigned to it by the Board for the purposes of implementing the scheme.
2.1.2. Scope
The testing of the resource center section for service tax is done manually
mainly using functional and regression testing. Other forms of testing may
also be used such as integration testing, load testing, installation testing, etc.
Sites
http://en.wikipedia.org/Software_testing
2.1.5. Overview
The rest of the SRS document describes the various system requirements,
interfaces, features, and functionalities in detail.
2.2.1. Product Perspective
The application will be self-contained.
FIGURE 2.1
2.2.1.1. System interfaces
None.
2.2.1.2. User interfaces
The application will have a user-friendly and menu-based interface. The
login page will entertain both user and admin. The following forms and
pages will be included in the menu:
Login screen
Homepage
Return filed report
Return filed form
STRP wise report
Service wise report
Zone commissionerate wise report
STRP summary report
2.2.1.3. Hardware interfaces
1. Processor: Intel Pentium (4) Processor
2. Ram: 512 MB and above
3. Storage Space: 5 GB and above
4. A LAN card for the Internet
2.2.1.4. Software interfaces
1. Language: Java, XML
2. Software: Bugzilla, Putty, Toad
3. Database: Oracle
4. Platform: Windows 2000 (Server) / Linux
2.2.1.6. Memory constraints
At least 512 MB RAM and 2 GB hard disk will be required.
2.2.2. Product Functions
According to the customer use and needs the website function shall include:
i. To specify, with prior approval of the Board,
a. The number of persons to be enrolled during a financial year for
training to act as Tax Return Preparers;
2.2.3. User Characteristics
Education level: The user should be able to understand one of the languages of the browser (English, Hindi, Telugu). The user must also have a basic knowledge of the tax return and payment rules and regulations.
Technical expertise: The user should be comfortable using general-
purpose applications on a computer.
2.2.4. Constraints
Differing monitor sizes and aspect ratios, and color versus black-and-white monitors, make it virtually impossible to design pages that look good on all device types. Font sizes and colors need to be changeable to fit the requirements of sight-impaired viewers.
2.2.6. Apportioning of Requirements
None.
When the STRPs click on the login button of resource center service tax
on TRPscheme.com, the following page will be displayed.
Homepage
When the STRP logs in by user id and password, the homepage is displayed.
The homepage has “Reported by STRP” menu on the left under which the
user will see two links, “Return Filed” and “Monthly/Quarterly Tax Paid
Form.” The user will also see two report links, “Return Filed” and “Monthly/
Quarterly Tax Paid Report” under the “Reports” menu. A message “This
is the Resource Center site. By using this site you can get information on
Resource Center Service Tax” also appears on the homepage.
Form validation: This form will require validation of the data such as
all of the mandatory fields cannot be left blank and “STC Code” must be
filled in otherwise the form will not be submitted. Fields such as “Amount
of Tax Payable,” “Amount of Tax Paid,” and “Interest Paid” will only be
numeric.
To complete a form, the user must fill out the following fields. All of the
fields are mandatory in this form.
• Name of Assessee
• STC Code
• Period
• Monthly/Quarterly
• Month
• Amount of Tax Payable
• Amount of Tax Paid
Field format/length for STC Code will be as follows: [First 5 alphabetical]
[6-9 numeric] [10 alphabetical] [11-12 ST] [13-15 numeric]
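A hedged sketch of this format check as a regular expression, assuming uppercase letters (the test cases in Chapter Five reject lowercase codes); the function name is illustrative:

# STC Code shape: 5 letters, 4 digits, 1 letter, the literal "ST", 3 digits (15 characters total).
import re

STC_CODE_PATTERN = re.compile(r"^[A-Z]{5}[0-9]{4}[A-Z]ST[0-9]{3}$")

def is_valid_stc_code(code):
    return bool(STC_CODE_PATTERN.fullmatch(code))

assert is_valid_stc_code("ASZDF2345GST878")        # 15 characters in the required shape
assert not is_valid_stc_code("ASZDF2345GST87")     # only 14 characters
assert not is_valid_stc_code("asdfg2345gst878")    # lowercase letters
assert not is_valid_stc_code("ASZDFJUILHGLOYU")    # all alphabetical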
“Month” drop-down list will be populated based on the “Period” and
“Monthly/Quarterly” selection. “Month” will be selected. If the user has
selected “Period” as April 1 though Sept 30 and 2009 and “Monthly” in
“Monthly/Quarterly” drop down then he or she will see April, May, June,
July, August, and September in the “Month” drop down. If the TRP has
selected “Quarterly” in “Monthly/Quarterly” drop down then the drop
down will show Apr_May_June and July_Aug_Sep.
The STRP can fill in the details for the same STC code, period, and month only once.
Report to view Monthly/Quarterly form data: This report will allow STRPs to view Monthly/Quarterly Tax Paid form data and to generate a report of the data. STRPs will generate reports in HTML format and will also be able to export them into Excel format.
To view the report data, the STRP is required to provide the "Period" in the given fields, which are mandatory. The STRP can also use the other field, STC code, to generate the report.
The user must click on the “Generate Report” button to view the report
in HTML format or on “Export to Excel” if he or she wants to export the
data into Excel format.
The “Cancel” button will take the user to the login page.
2.5 Service Wise Report (Admin Report)
This report will allow the admin to generate a Report Service Wise of STRPs.
This report will be generated in HTML format as well as in Excel format.
Validations:
This page should contain a “Service Type” drop down and “Date from”
and “To” textboxes.
To view the Service Wise Report data the admin can select multiple ser-
vices from the “Service Type” list box and the data for those services will
be populated. “Service Type” will be a mandatory field so the user has to
select at least one service to view the report data.
The user must click on the "Generate Report" button to view the report in HTML format or on "Export to Excel" if he or she wants to export the data into Excel format.
The TRP id, TRP name, and service type will also be provided in the
Excel sheet.
The “Cancel” button will take the user to the login page.
The user needs to fill in both the “Date from” and “To” fields. “Date
from” and “To” will extract the data based on “Date of Filling Return.”
STRPs Wise Report (Admin Report)
This report will allow the admin to search the data of the STRPs and will
be able to generate a report of the data. The admin will generate reports in
HTML format and also in Excel format.
To view the STRPs Wise Report data, users have to give a "Period" because it is a mandatory field, while the rest of the fields are non-mandatory.
The user can also provide the date range if the user wants data from a
particular date range. If no date range is provided then all the data from
all of the STRPs will be populated for the given period.
The user needs to fill in both “Date from” and “To” fields. “Date from”
and “To” will extract the data based on “Date of Filling Return.”
The user must click on the “Generate Report” button to view the report
in HTML format or on “Export to Excel” if he or she wants to export the
data into Excel format.
The “TRP id” and “TRP name” will also be provided in the Excel sheet.
The “Cancel” button will take the user to the login page.
STRP Summary Report (Admin Report)
This report will allow the admin to generate a report of the top ten STRPs based on the highest amount of tax paid for each return filed by the TRP.
This report will be generated in HTML format as well as in Excel format.
Validations:
To view this report the user will have to select a “Zone” as well as a
“Period.” These are mandatory filters.
There will be an option of “ALL” in the “Zone” drop down if the report
needs to be generated for all the zones.
The user must click on the “Generate Report” button to view the report
in HTML format or on “Export to Excel” if he or she wants to export the
data into Excel format.
The “Cancel” button will take the user to the login page.
The user needs to fill both “Date from” and “To” fields. “Date from” and
“To” will extract the data based on “Date of Filling Return.”
The user can select either the "Period" or the "Date from" and "To" fields to generate the report; both cannot be selected together.
Zone/Commissionerate Wise Report (Admin Report)
This report will allow the admin to generate the report Zone/
Commissionerate Wise of STRPs. This report will be generated in HTML
format as well as in Excel format.
Validations:
To view the Commissionerate Wise Report data, the admin can provide "Zone," "Commissionerate," and "Division" to view the data, but if no input is provided then the data will include the entire "Zone," "Commissionerate," and "Division." The user will have to select a "Zone" because it will be a mandatory field. There will be an option of "ALL" in the "Zone" drop down if the report needs to be generated for all of the zones.
"Commissionerate" will be mapped to the "Zone" and "Division" will be mapped to the "Commissionerate," i.e., if a user selects a "Zone" then all the "Commissionerates" under that "Zone" will appear in the "Commissionerate" drop down, and if a user selects a "Commissionerate" then only those "Divisions" that are under that "Commissionerate" will be populated in the "Division" drop down. If any LTU is selected in the "Zone" drop down, then no other field will be populated.
The user must click on the “Generate Report” button to view the report
in HTML format or on “Export to Excel” if he or she wants to export the
data into Excel format.
The “TRP id,” “TRP name,” “Commissionerate,” and “Division” will also
be provided in the Excel sheet.
The “Cancel” button will take the user to the login page.
2.3.2. Functions
It defines the fundamental actions that must take place in the software in
accepting and processing the inputs and generating the outputs. The system
will perform the following:
VALIDITY CHECKS
The address should be correct.
An Internet connection should be present.
RESPONSES TO ABNORMAL SITUATIONS
An error message will be generated if the date format is wrong.
An error message will be generated if the STC code is entered incorrectly.
An error message will be generated if two users are assigned the same
STC code.
2.3.3. Modules
Test Plan
We write test plans for two very different purposes. Sometimes the test plan
is a product; sometimes it’s a tool. It’s too easy but also too expensive to
confuse these goals. In software testing, a test plan gives detailed testing
information regarding an upcoming testing effort including:
Scope of testing
Schedule
Test deliverables
Release criteria risks and contingencies
How the testing will be done?
Who will do it?
What will be tested?
How long it will take?
What the test coverage will be, i.e., what quality level is required?
Test Cases
A test case is a set of conditions or variables under which a tester will deter-
mine if a requirement upon an application is partially or fully satisfied. It
may take many test cases to determine that a requirement is fully satisfied. In
order to fully test that all of the requirements of an application are met, there
must be at least one test case for each requirement unless a requirement has
sub requirements. In that situation, each sub requirement must have at least
one test case. There are different types of test cases.
Common test case
Functional test case
Invalid test case
Integration test case
Configuration test case
Compatibility test case
2.3.4. Performance Requirements
Static numerical requirements are:
HTTP should be supported
HTML should be supported
Any number of users can be supported
Dynamic numerical requirements include the number of transactions and tasks and the amount of data to be processed within certain time periods, under both normal and peak workload conditions; these depend upon the connection speed of the user.
Chapter Three
System Design
Chapter Four
Reports And Testing
4.2. TESTING
The importance of software testing and its implications with respect to soft-
ware quality cannot be overemphasized. Software testing is a critical ele-
ment of software quality assurance and represents the ultimate review of
specification, design, and code generation.
4.2.1. Types of Testing
White-Box Testing: This type of testing goes inside the program and checks all the loops, paths, and branches to verify the program's intention.
Chapter Five
Test Cases
Each test case below records the Test case ID, Objective, Test steps, Test data, Expected results, Actual results, Test status (Pass/Fail), and Bug ID (where one was raised).

STRP_RFR_301
Objective: To verify the availability of "Return Filed Report" to the student role.
Test steps: 1. Login as student; homepage of the user appears.
Test data: Loginid: student01; password: pass123.
Expected results: A quicklink "Return Filed" appears on the left-hand side of the screen under "View Reports."
Actual results: Same as expected. Status: PASS.

STRP_RFR_302
Objective: To verify the accessibility of the "Return Filed" button.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed" under the "View Reports" heading.
Test data: Loginid: student01; password: pass123.
Expected results: "Return Filed Report" page appears.
Actual results: Same as expected. Status: PASS. Bug ID: 118560.

STRP_RFR_306
Objective: To verify the report outputs in Excel spreadsheet and HTML format.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill in the "Period" field. 4. Click on the "Export to Excel" button. 5. Next select the same period and click on the "Generate Report" button. 6. Observe and verify the values under the respective column headings in HTML format against the Excel spreadsheet format.
Test data: Loginid: student01; password: pass123.
Expected results: The values under the respective columns in HTML and the Excel spreadsheet should match. The column headings are as follows: Name, STC Code, Period, Date of Filling Return, Amount of Tax Payable, Amount of Tax Paid, Interest Paid.
Actual results: Same as expected.

STRP_RFR_307
Objective: To verify the functionality of the "Generate Report" button when the "STC Code" field is blank and the "Period" field is selected.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill all the mandatory fields except the "STC Code." 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123.
Expected results: 1. Report should be generated for the selected period showing correct values under the respective column headings. 2. The message "No Record Found" should appear if no record for the selected period exists.
Actual results: Same as expected. Status: PASS.

STRP_RFR_308
Objective: To verify the functionality of the "Export to Excel" button when the "STC Code" field is blank and the "Period" field is selected.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill all the mandatory fields except the "STC Code." 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123.
Expected results: 1. The "File Download" dialog box appears with the options "Open," "Save," and "Cancel." 2. Report should be generated in Excel for the selected period showing the correct values under the respective column headings. 3. The message "No Record Found" should appear if no record for the selected period exists.
Actual results: Same as expected. Status: PASS.

STRP_RFR_309
Objective: To verify the functionality of the "Calendar" button on the "Return Filed Report."
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Pick a date" button.
Test data: Loginid: student01; password: pass123.
Expected results: A Date Time Picker window should pop up with the current date selected in the calendar.
Actual results: Same as expected. Status: PASS.

STRP_RFR_310
Objective: To verify the format of the "STC Code" textbox.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill "STC Code" using the following template: length 15 characters; 1-5 alphabetical; 6-9 numerical; 10th alphabetical; 11-12 "ST"; 13-15 numerical. 4. Fill all the other mandatory details. 5. Click on the "Generate Report" button.
Expected results: The report should be generated.
Actual results: Same as expected. Status: PASS.

STRP_RFR_311
Objective: To verify the functionality of the "Generate Report" button when the length of the "STC Code" is less than 15 characters.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Fill in the "STC Code" using the following template: length 14 characters; 1-5 alphabetical (in caps); 6-9 numerical; 10th alphabetical (in caps); 11-12 "ST"; 13-14 numerical. 4. Fill all the other mandatory details. 5. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; STC code: ASZDF2345GST87; Period: April 1st - Sept 30th; 2007-2008.
Expected results: 1. An error message should appear stating "STC Code is invalid." with a "Back" button. 2. On clicking the "Back" button, the "Return Filed Report" page appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_312
Objective: To verify the functionality of the "Export To Excel" button when the length of the "STC Code" is less than 15 characters.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Fill in the "STC Code" using the following template: length 14 characters; 1-5 alphabetical (in caps); 6-9 numeral; 10th alphabetical (in caps); 11-12 "ST"; 13-14 numeral. 4. Fill all the other mandatory details. 5. Click on the "Export To Excel" button.
Test data: Loginid: student01; password: pass123; STC Code: ASZDF2345GST87; Period: April 1st - Sept 30th; 2007-2008.
Expected results: 1. An error message should appear stating "STC Code is invalid." with a "Back" button. 2. On clicking the "Back" button, the "Return Filed Report" page appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_313
Objective: To verify the functionality of the "Generate Report" button when the letters of the "STC Code" are written in small letters.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Fill in the "STC Code" using the following template: length 15 characters; 1-5 alphabetical (in small letters); 6-9 numeral; 10th alphabetical (in small letters); 11-12 "ST"; 13-15 numeral. 4. Select a period in the "Period" field. 5. Click on the "Export To Excel" button.
Test data: Loginid: student01; password: pass123; STC Code: asdfg2345gST87; Period: April 1st - Sept 30th; 2007-2008.
Expected results: An error message should appear stating "STC Code is invalid." with a "Back" button.
Actual results: Same as expected.

STRP_RFR_317
Objective: To verify the functionality of the "Generate Report" button when all of the characters of the "STC Code" are alphabetical.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Fill in the "STC Code" using the following template: length 15 characters; 1-15 alphabetical (in caps). 4. Fill all the other mandatory details. 5. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; STC Code: ASZDFJUILHGLOYU; Period: April 1st - Sept 30th; 2007-2008.
Expected results: 1. An error message should appear stating "STC Code is invalid." with a "Back" button. 2. On clicking the "Back" button, the "Return Filed Report" page appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_321
Objective: To verify the functionality of the "Generate Report" button when the date format is "dd/mm/yyyy" in either or both of the "Date from" and "To" textboxes.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill the "Date from" and/or "To" in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; Date from: 10/01/2007.
Expected results: The report should be generated.
Actual results: Same as expected. Status: PASS.

STRP_RFR_322
Objective: To verify the functionality of the "Export To Excel" button when the date format is "dd/mm/yyyy" in either or both of the "Date from" and "To" textboxes.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill the "Date from" in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Export To Excel" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; Date from: 01/10/2007.
Expected results: The report should be generated.
Actual results: Same as expected. Status: PASS.

STRP_RFR_323
Objective: To verify the functionality of the "Generate Report" button when the "Date from" and "To" fields are filled.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill the "Date from" and "To" fields in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; "Date from": 01/10/2007; "To": 30/09/2008.
Expected results: The report should be generated if records exist in that period. Otherwise, the message "No Record Found." should display.
Actual results: Same as expected.

STRP_RFR_324
Objective: To verify the functionality of the "Export To Excel" button when the "Date from" and "To" fields are filled.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill in the "Date from" and "To" fields in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Export To Excel" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; "Date from": 01/10/2007; "To": 30/09/2008.
Expected results: The report should be generated if the records exist in that period. Otherwise, the message "No Record Found." should display.
Actual results: Same as expected.

STRP_RFR_325
Objective: To verify the functionality of the "Generate Report" button when only the "Date from" field is filled and the "To" field is left blank.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill in the "Date from" field in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; "Date from": 01/10/2007.
Expected results: The report should be generated if the records exist from the date entered in the "Date from" field. Otherwise, the message "No Record Found." should display.
Actual results: Same as expected.

STRP_RFR_326
Objective: To verify the functionality of the "Export To Excel" button when only the "Date from" field is filled and the "To" field is left blank.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill in the "Date from" field in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Export To Excel" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; "Date from": 01/10/2007.
Expected results: The report should be generated if the records exist from the date entered in the "Date from" field. Otherwise, the message "No Record Found." should display.
Actual results: Same as expected.

STRP_RFR_327
Objective: To verify the functionality of the "Generate Report" button when only the "To" field is filled in and the "Date from" field is left blank.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill in the "To" field in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; "To": 30/09/2008.
Expected results: The report should be generated if the records exist until the date entered in the "To" field. Otherwise, the message "No Record Found." should display.
Actual results: Same as expected.

STRP_RFR_328
Objective: To verify the functionality of the "Export To Excel" button when only the "To" field is filled in and the "Date from" field is left blank.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill in the "To" field in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Export To Excel" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; "To": 30/09/2008.
Expected results: The report should be generated if the records exist until the date entered in the "To" field. Otherwise, the message "No Record Found." should display.
Actual results: Same as expected.

STRP_RFR_329
Objective: To verify the functionality of the "Generate Report" button when the "Date from" is greater than the "To" date.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill in the "Date from" and "To" fields in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Generate Report" button.
Test data: Period: April 1st - Sept 30th; 2007-2008; "Date from": 01/10/2008; "To" date: 30/09/2008.
Expected results: An error message should appear saying "From Date can not be greater than To Date."
Actual results: Same as expected. Status: PASS. Bug ID: 112387.

STRP_RFR_330
Objective: To verify the functionality of the "Export To Excel" button when the "Date from" is greater than the "To" date.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Fill in the "Date from" and "To" fields in "dd/mm/yyyy" format. 4. Select a period in the "Period" field. 5. Click on the "Export To Excel" button.
Test data: Period: April 1st - Sept 30th; 2007-2008; "Date from": 01/10/2008; "To" date: 30/09/2008.
Expected results: An error message should appear saying "From Date can not be greater than To Date."
Actual results: Same as expected. Status: PASS. Bug ID: 11238.

STRP_RFR_331
Objective: To verify the maximum length of the "Date from" field.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Enter more than 10 characters in the "Date from" field. 4. Enter a valid date in the "To" field. 5. Select a period in the "Period" field. 6. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; "Date from": 01/10/2008.
Expected results: An error message saying "Date Format of Start Date is not valid." should appear with a "Back" button. On clicking the "Back" button, the "Return Filed Report" page should appear.
Actual results: Same as expected. Status: PASS.

STRP_RFR_333
Objective: To verify the functionality of the "Home" button on the "Return Filed Report" page.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Fill all of the mandatory fields with valid data. 4. Click on the "Home" quicklink.
Test data: NA.
Expected results: Homepage of the user appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_334
Objective: To verify the functionality of the "Home" button on the error message page.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Leave the "Period" field unselected. 4. Click on the "Generate Report" button. 5. Click on the "Home" quicklink.
Test data: NA.
Expected results: Homepage of the user appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_335
Objective: To verify the functionality of the "Cancel" button on the "Return Filed Report" page.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Click on the "Cancel" button.
Test data: NA.
Expected results: Homepage of the user appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_336
Objective: To verify the values of the "Period" drop down.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Click on the "Period" drop down.
Test data: NA.
Expected results: The "Period" drop down should display two values: 1. April 1st - Sept 30th; 2. Oct 1st - March 31st.
Actual results: Same as expected. Status: PASS.

STRP_RFR_337
Objective: To verify whether the fields retain their values after the error message appears.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Leave the "Period" field unselected. 4. Click on the "Generate Report" button. 5. An error message appears. 6. Click on the "Back" button.
Test data: Loginid: student01; password: pass!23; "Date from": 30/09/2008.
Expected results: When we click on the "Back" button, the user comes back to the "Return Filed Report" page and all the previously filled values remain intact.
Actual results: Same as expected. Status: PASS. Bug ID: 118564.

STRP_RFR_338
Objective: To verify the pagination on the report output section.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: If the report output section contains more than 10 records, pagination takes place and the next 10 records will be visible on the next page.
Actual results: Same as expected. Status: PASS.

STRP_RFR_339
Objective: To verify the pagination on the report output section when the number of records is less than 10.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: There will be only one page of the output section and all of the pagination links are disabled.
Actual results: Same as expected. Status: PASS.

STRP_RFR_340
Objective: To verify the pagination on the report output section when the number of records is equal to 10.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: There will be only one page of the output section and all of the pagination links are disabled.
Actual results: Same as expected. Status: PASS.

STRP_RFR_341
Objective: To verify the pagination on the report output section when the number of records is greater than 10.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: The next 10 records will be visible on the next page and the "Next" and "Last" links are clickable.
Actual results: Same as expected. Status: PASS.

STRP_RFR_342
Objective: To verify the number of records on each page in the report output section.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: Every page of the report output section should contain a maximum of 10 records.
Actual results: Same as expected. Status: PASS.

STRP_RFR_343
Objective: To verify the functionality of the "Next" button on the pagination.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: On clicking the "Next" button, the next page of the report output section appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_344
Objective: To verify the functionality of the "Last" button on the pagination.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: On clicking the "Last" button, the last page of the report output section appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_345
Objective: To verify the functionality of the "First" button on the pagination.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: On clicking the "First" button, the first page of the report output section appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_346
Objective: To verify the functionality of the "Prev" button on the pagination.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed Report"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: On clicking the "Prev" button, the previous page of the report output section appears.
Actual results: Same as expected. Status: PASS.

STRP_RFR_347
Objective: To verify the functionality of the "Go" button when the user enters a page number in the text box.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button. 5. Fill in a page number in the "Go to Page" textbox and click on the "Go" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008; Go to Page: 2.
Expected results: The entered page number will appear and the text above the "First Prev Next Last" links will show the current page.
Actual results: Same as expected. Status: PASS.

STRP_RFR_348
Objective: To verify the functionality of the "Go" button when the user enters an alphanumeric value in the text box.
Test steps: 1. Login as student; homepage of the user appears. 2. Click on the quicklink "Return Filed"; the "Return Filed Report" page appears. 3. Select a period in the "Period" field. 4. Click on the "Generate Report" button. 5. Fill in an alphabetical character in the "Go to Page" textbox and click on the "Go" button.
Test data: Loginid: student01; password: pass123; Period: April 1st - Sept 30th; 2007-2008.
Expected results: The textbox does not accept the value and remains blank.
Actual results: Same as expected. Status: PASS.
STRP_R To verify the 1. Login as student. Loginid: The page number Same as PASS
FR_349 text of the Homepage of the student01 details of the expected.
page number user appears. password: report output
details of the 2. Click on the pass123 section should
pagination. quicklink “Return Period: show the current
Filed,” “Return April 1st - page number in
Filed Report” page Sept 30th; the format “Page
appears. 2007-2008 (current page)
3. Select a period in of (Total pages)
the “Period” field. Pages.”
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
4. Click on the
“Generate Report”
button.
5. Fill in a page
number in the “Go
to Page” textbox and
click on the “Go”
button.
STRP_R To verify the 1. Login as student. Loginid: If the report output Same as PASS
FR_350 availability Homepage of the student01 section contains expected.
of the “First” user appears. password: more than 10
and “Prev” 2. Click on the pass123 records and the
links on the quicklink “Return Period: user is at last page
pagination. Filed,” “Return April 1st - of the pagination
Filed Report” page Sept 30th; then the “First” and
appears. 2007-2008 “Prev” links on the
3. Select a period in pagination should
the “Period” field. be enabled.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: If the report output Same as PASS
FR_351 availability Homepage of the student01 section contains expected.
of the “Next” user appears. password: more than 10
and “Last” 2. Click on the pass123 records and the
links on the quicklink “Return Period: user is at first page
pagination. Filed,” “Return Filed April 1st - of the pagination
Report” page appears. Sept 30th; then the “Next” and
3. Select a period in 2007-2008 “Last” links on the
the “Period” field. pagination will be
4. Click on the enabled.
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: If the report output Same as PASS
FR_352 availability of Homepage of the student01 section contains expected.
the “First,” user appears. password: more than 10
“Prev,” 2. Click on the pass123 records and the
“Next,” quicklink “Return Period: user is neither on
and “Last” Filed,” the “Return April 1st - the first page nor
links on the Filed Report” page Sept 30th; on the last page of
pagination. appears. 2007-2008 the pagination then
all four links on the
pagination page
should be enabled.
(Continued)
Test
status
Test Actual (Pass/
case ID Objective Test steps Test data Expected results results Fail) Bug ID
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify 1. Login as student. Loginid: The output section Same as PASS
FR_353 the sorting Homepage of the student01 of the report should expected.
order of the user appears. password: be sorted on the
records in 2. Click on the pass123 alphabetical order
the report quicklink “Return Period: of column “Name.”
output Filed,” “Return April 1st -
section page. Filed Report” page Sept 30th;
appears. 2007-2008
3. Select a period in
the “Period” field.
4. Click on the
“Generate Report”
button.
STRP_R To verify the 1. Login as student. Loginid: The Login page Same as PASS
FR_354 functionality Homepage of the student01 of the website is expected.
of the user appears. password: displayed.
“Logout” 2. Click on the pass123
button on quicklink “Return
the “Return Filed,” “Return
Filed Filed Report” page
Report” appears.
page. 3. Click on the
“Logout” button.
STRP_R To verify the 1. Login as student. Loginid: The quicklinks Same as PASS
FR_355 functionality Homepage of the student01 should be clickable expected.
of the user appears. password: and the respective
quicklinks on 2. Click on the pass123 page should be
left side on quicklink “Return displayed.
the “Return Filed,” “Return
Filed Filed Report” page
Report” appears.
page. 3. Click on any of
the quicklinks on
left side of the page.
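Taken together, the pagination cases above (STRP_RFR_338 through STRP_RFR_352) exercise one simple rule: the report output shows at most 10 records per page, the "First" and "Prev" links are enabled only when the user is past the first page, and the "Next" and "Last" links are enabled only before the last page. The sketch below restates that rule in code; it is only an illustration of the expected behavior, not code from the application under test, and the function names are invented for the example.

```c
/* pagination_rule.c: summarize the paging behavior the report tests expect. */
#include <stdio.h>

#define RECORDS_PER_PAGE 10

/* Total pages for a report: at least one page, 10 records per page. */
static int total_pages(int records)
{
    if (records <= RECORDS_PER_PAGE)
        return 1;               /* single page: every pagination link disabled */
    return (records + RECORDS_PER_PAGE - 1) / RECORDS_PER_PAGE;
}

/* Print the link availability for one page (1-based), per the test cases. */
static void link_state(int page, int pages)
{
    int first_prev = (pages > 1 && page > 1);     /* enabled past the first page  */
    int next_last  = (pages > 1 && page < pages); /* enabled before the last page */
    printf("Page %d of %d Pages: First/Prev %s, Next/Last %s\n",
           page, pages,
           first_prev ? "enabled" : "disabled",
           next_last  ? "enabled" : "disabled");
}

int main(void)
{
    int records = 23;                 /* sample record count for the report */
    int pages = total_pages(records);
    for (int page = 1; page <= pages; page++)
        link_state(page, pages);
    return 0;
}
```

With 23 records the sketch prints three pages, and only the middle page has all four links enabled, which matches the expectations recorded in STRP_RFR_350 through STRP_RFR_352.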
Test case ID: STRP_MQF_301
Objective: To verify the availability of the "Monthly/Quarterly Tax Paid Form" quicklink to the student role.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Observe the quicklinks appearing under the "To Be Reported By STRP" section.
Expected results: The "Monthly/Quarterly Tax Paid Form" quicklink should appear under the "To Be Reported By STRP" section.
Test status: PASS.

Test case ID: STRP_MQF_302
Objective: To verify the accessibility of the "Monthly/Quarterly Tax Paid Form."
Test steps: 1. Login as student role; the homepage of the user appears. 2. Click the "Monthly/Quarterly Tax Paid Form" quicklink appearing under the "To Be Reported By STRP" section.
Expected results: The "Monthly/Quarterly Tax Paid Form" page should appear.
Test status: PASS.

Test case ID: STRP_MQF_304
Objective: To verify the "STRP Details" at the "Monthly/Quarterly Tax Paid Form" page.
Test steps: 1. Login as student; the homepage of the user appears. 2. Click on the quicklink "Monthly/Quarterly Tax Paid Form"; the "Monthly/Quarterly Tax Paid Form" page appears.
Expected results: 1. The "STRP ID" should show the login ID of the logged in user. 2. The "STRP Name" should show the name of the logged in user. 3. The "STRP PAN Number" should show the PAN No. of the logged in user.
Test status: PASS.

Test case ID: STRP_MQF_305
Objective: To verify the functionality of the "Submit" button when no value is entered in the "Name of Assessee" field.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Click the "Monthly/Quarterly Tax Paid Form" quicklink appearing under the "To Be Reported By STRP" section on the homepage. 3. Do not enter any value in the "Name of Assessee" field. 4. Enter valid values in all of the mandatory fields. 5. Click the "Submit" button.
Expected results: 1. The "Monthly/Quarterly Tax Paid Form" should not get submitted. 2. The following error message should appear with the "Back" button: "Name of Assessee is mandatory." 3. Clicking the "Back" button should take the user to the homepage.
Test status: PASS.

Test case ID: STRP_MQF_306
Objective: To verify the functionality of the "Submit" button when no value is entered in the "STC Code" field.
Test steps: As in STRP_MQF_305, except that in step 3 no value is entered in the "STC Code" field.
Expected results: 1. The "Monthly/Quarterly Tax Paid Form" should not get submitted. 2. The following error message should appear with the "Back" button: "STC Code is mandatory for valid Return Filed." 3. Clicking the "Back" button should take the user to the homepage.
Test status: PASS.

Test case ID: STRP_MQF_308
Objective: To verify the functionality of the "Submit" button when no value is entered in the "Amount of Tax Payable" field.
Test steps: As in STRP_MQF_305, except that in step 3 no value is entered in the "Amount of Tax Payable" field.
Expected results: 1. The "Monthly/Quarterly Tax Paid Form" should not get submitted. 2. The following error message should appear with the "Back" button: "Amount of tax payable is mandatory for valid Return Filed." 3. Clicking the "Back" button should take the user to the homepage.
Test status: PASS.

Test case ID: STRP_MQF_310
Objective: To verify the functionality of the "Submit" button when the value in the "STC Code" field is entered in the following format: STC code length 15 characters; positions 1-5 alphabetical; 6-9 numeral; 10th alphabetical; 11-12 "ST"; 13-15 numeral.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Click the "Monthly/Quarterly Tax Paid Form" quicklink appearing under the "To Be Reported By STRP" section on the homepage. 3. Enter a value in the "STC Code" field in the format given above. 4. Enter valid values in all of the mandatory fields. 5. Click the "Submit" button.
Expected results: The form should get submitted successfully and the following confirmation message should appear: "Record has been saved successfully."
Test status: PASS.
(A sketch of this STC Code format check follows this table.)

Test case ID: STRP_MQF_311
Objective: To verify the max length of the "Amount of Tax Paid" textbox.
Expected results: Specification not provided.

Test case ID: STRP_MQF_312
Objective: To verify the max length of the "Name of Assessee" textbox.
Expected results: Specification not provided.
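The "STC Code" format accepted in STRP_MQF_310 (15 characters: positions 1-5 alphabetical, 6-9 numeric, the 10th alphabetical, positions 11-12 the literal "ST," and 13-15 numeric) can be restated as a small validation routine. The following is only a sketch of the rule the test case exercises, not code from the application under test.

```c
/* stc_code.c: check a string against the 15-character STC Code format
   described in test case STRP_MQF_310. Illustrative sketch only. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static int valid_stc_code(const char *code)
{
    if (strlen(code) != 15)
        return 0;
    for (int i = 0; i < 5; i++)                    /* positions 1-5: letters  */
        if (!isalpha((unsigned char)code[i])) return 0;
    for (int i = 5; i < 9; i++)                    /* positions 6-9: digits   */
        if (!isdigit((unsigned char)code[i])) return 0;
    if (!isalpha((unsigned char)code[9]))          /* position 10: letter     */
        return 0;
    if (code[10] != 'S' || code[11] != 'T')        /* positions 11-12: "ST"   */
        return 0;
    for (int i = 12; i < 15; i++)                  /* positions 13-15: digits */
        if (!isdigit((unsigned char)code[i])) return 0;
    return 1;
}

int main(void)
{
    printf("%d\n", valid_stc_code("ABCDE1234FST001"));  /* 1: matches the format */
    printf("%d\n", valid_stc_code("ABCDE1234F99001"));  /* 0: no "ST" at 11-12   */
    return 0;
}
```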
Test case ID: STRP_MQTR_301
Objective: To verify the availability of the "Monthly/Quarterly Tax Paid" report to the student role.
Test steps: 1. Login as student role; the homepage of the user appears. 2. Observe the quicklinks on the left side of the homepage.
Expected results: The "Monthly/Quarterly Tax Paid Report" quicklink should appear under the "View Reports" section.
Test status: PASS.

Test case ID: STRP_MQTR_302
Objective: To verify the accessibility of the "Monthly/Quarterly Tax Paid" report through quicklinks.
Test steps: 1. Login as student role; the homepage of the user appears. 2. On the homepage, under "View Reports," click on the "Monthly/Quarterly Tax Paid" link.
Expected results: The "Monthly/Quarterly Tax Paid Report" page should appear.
Test status: PASS.

Test case ID: STRP_MQTR_304
Objective: To verify the functionality of the "Generate Report" button when no value is entered in the "Period" field.
Test steps: 1. Login as student. 2. Go to "View Reports" and the "Monthly/Quarterly Tax Paid" quicklink; the "Monthly/Quarterly Tax Paid Report" page appears. 3. Do not enter any value in the "Period" field. 4. Select the "Date from" and "To" fields from the "Date" picker control. 5. Click the "Generate Report" button.
Expected results: The report should not get generated and the following error message should come with the "Back" button: "Select The Period." Clicking the "Back" button should take the user to the "Monthly/Quarterly Tax Paid Report" page. Note: This ensures that the "Period" field is mandatory.
Test status: PASS.

Test case ID: STRP_MQTR_305
Objective: To verify the functionality of the "Generate Report" button when no value is entered in the "To" date field.
Test steps: 1. Login as student. 2. Go to "View Reports" and the "Monthly/Quarterly Tax Paid" quicklink. 3. Select a period from the "Period" drop down. 4. Do not enter any value in the "To" date field. 5. Select a valid date in the "Date from" field from the Date picker control. 6. Click the "Generate Report" button.
Expected results: 1. The report should get generated. 2. All of the records of the user should appear in the "Reports" output section.
Test status: PASS.

Test case ID: STRP_MQTR_306
Objective: To verify the functionality of the "Generate Report" button when no value is entered in the "Date From" field.
Test steps: 1. Login as student. 2. Go to "View Reports" and the "Monthly/Quarterly Tax Paid" quicklink. 3. Select a period from the "Period" drop down. 4. Do not enter any value in the "Date from" field. 5. Select a valid date in the "To" date field from the Date picker control. 6. Click the "Generate Report" button.
Expected results: 1. The report should get generated. 2. All of the records of the user should appear in the "Reports" output section.
Test status: PASS.

Test case ID: STRP_MQTR_310
Objective: To verify the values appearing in the "Assessment Year" drop down.
Test steps: 1. Login as student. 2. Go to "View Reports" and the "Monthly/Quarterly Tax Paid" quicklink; the "Monthly/Quarterly Tax Paid Report" page appears. 3. Click the "year" drop down.
Expected results: 1. The "Assessment Year" drop down should have the following values: a. 2007-2008, b. 2008-2009, c. 2009-2010. 2. These values should be sorted in ascending order.
Test status: PASS.
Test case ID: STRP_RFR_317
Objective: To verify the functionality of the "Export to Excel" button when the "Date from" is greater than the "To" date.
Test steps: 1. Login as admin; the homepage of the user appears. 2. Click on the quicklink "Service Wise Report"; the "Service Wise Report" page appears. 3. Select a service type from the "Service Type" drop down. 4. Fill the "Date from" and "To" fields in "dd/mm/yyyy" format. 5. Click on the "Export to Excel" button.
Expected results: An error message should appear saying, "From Date can not be greater than To Date."
Test status: PASS.

Test case ID: STRP_RFR_318
Objective: To verify the max length of the "Date from" field.
Test steps: 1. Login as admin; the homepage of the user appears. 2. Click on the quicklink "Service Wise Report"; the "Service Wise Report" page appears. 3. Select a service type from the "Service Type" drop down. 4. Enter more than 10 characters in the "Date from" field. 5. Enter a valid date in the "To" field. 6. Click on the "Generate Report" button.
Expected results: An error message saying, "Date Format of Start Date is not valid." should appear with the "Back" button. On clicking the "Back" button, the "Service Wise Report" page should appear.
Test status: PASS.

Test case ID: STRP_RFR_319
Objective: To verify the max length of the "To" field.
Test steps: As in STRP_RFR_318, except that in step 4 more than 10 characters are entered in the "To" field and in step 5 a valid date is entered in the "Date from" field.
Expected results: An error message saying, "Date Format of End Date is not valid." should appear with the "Back" button. On clicking the "Back" button, the "Service Wise Report" page should appear.
Test status: PASS.
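STRP_RFR_317 through STRP_RFR_319 check two things about the report filter dates: each field must hold a valid "dd/mm/yyyy" value of at most 10 characters, and the "Date from" value must not be later than the "To" value. The sketch below is a rough restatement of those checks; it deliberately ignores month lengths and leap years and is not the application's actual validation code.

```c
/* date_filter.c: rough sketch of the date checks behind STRP_RFR_317-319. */
#include <stdio.h>
#include <string.h>

/* Parse "dd/mm/yyyy" into a comparable yyyymmdd value, or return -1 when the
   string is too long or not in the expected format. */
static long parse_date(const char *s)
{
    int d, m, y;
    if (strlen(s) != 10 || sscanf(s, "%2d/%2d/%4d", &d, &m, &y) != 3)
        return -1;
    if (d < 1 || d > 31 || m < 1 || m > 12)
        return -1;
    return (long)y * 10000 + m * 100 + d;
}

int main(void)
{
    long from = parse_date("30/09/2008");   /* "Date from" field */
    long to   = parse_date("01/04/2008");   /* "To" field        */

    if (from < 0 || to < 0)
        puts("Date Format is not valid.");
    else if (from > to)
        puts("From Date can not be greater than To Date.");
    else
        puts("Generate the report.");
    return 0;
}
```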
CONCLUSION
a. Advantages:
Delivery of a quality product; the software met all of the quality requirements.
The website is developed within the time frame and budget.
A disciplined approach to software development.
Quality products lead to happy customers.
b. Limitations:
Quality is compromised to deliver the product on time and within budget.
It is a time-consuming process.
It requires a large number of employees, which leads to an increase in the product cost.
14
The Game Testing Process 1
Developers don’t fully test their own games. They don’t have time to, and
even if they did, it’s not a good idea. Back at the dawn of the video game
era, the programmer of a game was also its artist, designer, and tester. Even
though games were very small (about the size of an email), the programmer spent
most of his time designing and programming. Little of his time was spent
testing. If he did any testing, it was based on his own assumptions about how
players would play his game. The following sidebar illustrates the type of
problem these assumptions could create.
This chapter appeared in Game Testing, Third Edition, C. Schultz and R. D. Bryant.
breathtaking (for the time) and the game went on to become one of the best
sellers on the Intellivision platform.
Weeks after the game was released, however, a handful of customers
began to call the game’s publisher, Mattel Electronics, with an odd com-
plaint: when they scored more than 9,999,999 points, the score displayed
negative numbers, letters, and symbol characters. This in spite of the prom-
ise of “unlimited scoring potential” in the game’s marketing materials. The
problem was exacerbated by the fact that the Intellivision console had a fea-
ture that allowed players to play the game in slow motion, making it much
easier to rack up high scores. John Sohl, the programmer, learned an early
lesson about video games: the player will always surprise you.
The sidebar story demonstrates why video game testing is best done by
testers who are: (a) professional, (b) objective, and (c) separated—either
physically or functionally—from the game’s development team. That remove
and objectivity allows testers to think independently of the developers, to
function as players, and to figure out new and interesting ways to break the
game. This chapter discusses how, like the gears of a watch, the game testing
process meshes into the game development process.
“BLACK-BOX” TESTING
Almost all game testing is black-box testing, testing done from outside the
application. No knowledge of, or access to, the source code is granted to the
tester. Game testers typically don’t find defects by reading the game code.
Rather, they try to find defects using the same input devices available to
the average player, be it a mouse, a keyboard, a console gamepad, a motion
sensor, or a plastic guitar. Black-box testing is the most cost-effective way to
test the extremely complex network of systems and modules that even the
simplest video game represents.
Figure 14.1 illustrates some of the various inputs you can provide to a
videogame and the outputs you can receive back. The most basic of inputs
are positional, and control data in the form of button presses and cursor
movements, or vector inputs from accelerometers, or even full-body cam-
eras. Audio input can come from microphones fitted in headsets or attached
to a game controller. Input from other players can come from a second con-
troller, a local network, or the Internet. Finally, stored data such as saved
games and options settings can be called up as input from memory cards or
a hard drive.
FIGURE 14.1 Inputs and outputs of the game code (the "black box"): inputs include button presses, video, audio, packets, and memory; outputs include video, audio, vibration, packets, and memory.
Once some or all of these types of input are received by the game,
it reacts in interesting ways and produces such output as video, audio,
vibration (via force feedback devices), and data saved to memory cards or
hard drives.
The input path of a video game is not one-way, however. It is a feedback loop,
where the player and the game are constantly reacting to each other. Players
don't receive output from a game and stop playing. They constantly alter and
adjust their input "on the fly," based on what they see, feel, and hear in the
game. The game, in turn, makes similar adjustments in its outputs based on the
inputs it receives from the player. Figure 14.2 illustrates this loop.
FIGURE 14.2 The player's feedback loop adjusts to the game's input, and vice versa.
If the feedback received by the player were entirely predictable all
the time, the game would be no fun. Nor would it be fun if the feedback
received by the player were entirely random all the time. Instead, feed-
back from games should be just random enough to be unpredictable. It is
the unpredictability of the feedback loop that makes games fun. Because
the code is designed to surprise the player and the player will always sur-
prise the programmer, black-box testing allows testers to think and behave
like players.
“WHITE-BOX” TESTING
In contrast to black-box testing, white-box testing gives the tester opportuni-
ties to exercise the source code directly in ways that no player ever could. It
can be a daunting challenge for the white-box tester to read a piece of game
code and predict every single interaction it will have with every other bit of
code, and whether the programmer has accounted for every combination
and order of inputs possible. Testing a game using only white-box methods
is also extremely difficult because it is nearly impossible to account for the
complexity of the player feedback loop. There are, however, situations in
which white-box testing is more practical and necessary than black-box test-
ing. These include the following:
Tests performed by developers prior to submitting new code for integra-
tion with the rest of the game
Testing code modules that will become part of a reusable library across
multiple games or platforms
Testing code methods or functions that are essential parts of a game
engine or middleware product
Testing code modules within your game that might be used by third-party
developers or “modders” who, by design, could expand or modify the
behavior of your game to their own liking
Testing low-level routines that your game uses to support specific functions
in the newest hardware devices, such as graphics cards or audio processors
In performing white-box tests, you execute specific modules and the
various paths that the code can follow when you use the module in various
ways. Test inputs are determined by the types and values of data that can be
passed to the code. Results are checked by examining values returned by the
module, global variables that are affected by the module, and local variables
as they are processed within the module. To get a taste of white-box testing,
consider the TeamName routine from Castle Wolfenstein: Enemy Territory:
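The listing itself is not reproduced here; based on the behavior described in the next paragraph, it is a small routine along the following lines (a sketch, not the verbatim game source, and the team_t values are assumed for the example):

```c
/* Sketch of the TeamName routine; the enum values are assumed from the
   surrounding discussion, not copied from the game's headers. */
typedef enum {
    TEAM_FREE,
    TEAM_AXIS,
    TEAM_ALLIES,
    TEAM_SPECTATOR,
    TEAM_NONE
} team_t;

const char *TeamName(team_t team)
{
    if (team == TEAM_AXIS)
        return "RED";
    else if (team == TEAM_ALLIES)
        return "BLUE";
    else if (team == TEAM_SPECTATOR)
        return "SPECTATOR";
    return "FREE";   /* any other value, such as TEAM_NONE */
}
```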
Four white-box tests are required for this module to test the proper
behavior of each line of code within the module. The first test would be
to call the TeamName function with the parameter TEAM_AXIS and then
check that the string “RED” is returned. Second, pass the value of TEAM_
ALLIES and check that “BLUE” is returned. Third, pass TEAM_SPECTATOR
and check that “SPECTATOR” is returned. Finally, pass some other value
such as TEAM_NONE, which makes sure that “FREE” is returned. Together
these tests not only exercise each line of code at least once, they also test the
behavior of both the “true” and “false” branches of each if statement.
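Those four checks could be scripted as a tiny harness such as the one below. It assumes the TeamName sketch and team_t values shown above, and it uses assert in place of whatever unit-testing framework a team actually uses.

```c
/* White-box tests for the TeamName sketch above; compile this together
   with that routine. Each assert corresponds to one of the four tests. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    assert(strcmp(TeamName(TEAM_AXIS),      "RED") == 0);        /* test 1 */
    assert(strcmp(TeamName(TEAM_ALLIES),    "BLUE") == 0);       /* test 2 */
    assert(strcmp(TeamName(TEAM_SPECTATOR), "SPECTATOR") == 0);  /* test 3 */
    assert(strcmp(TeamName(TEAM_NONE),      "FREE") == 0);       /* test 4 */
    puts("All four TeamName white-box tests passed.");
    return 0;
}
```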
This short exercise illustrates some of the key differences between a
white-box testing approach and a black-box approach:
Black-box testing should test all of the different ways you could choose
a test value from within the game, such as different menus and buttons.
White-box testing requires you to pass that value to the routine in one
form—its actual symbolic value within the code.
By looking into the module, white-box testing reveals all of the possi-
ble values that can be provided to and processed by the module being
tested. This information might not be obvious from the product require-
ments and feature descriptions that drive black-box testing.
Black-box testing relies on a consistent configuration of the game and its
operating environment in order to produce repeatable results. White-
box testing relies only on the interface to the module being tested and
is concerned only about external files when processing streams, file sys-
tems, or global variables.
the game support? What features have been cut? The scope of testing
should ensure that no new issues were introduced in the process of fixing
bugs prior to this release.
2. Prepare for testing. Code, tests, documents, and the test environment
are updated by their respective owners and aligned with one another. By
this time the development team should have marked the bugs fixed for
this build in the defect database so the QA team can subsequently verify
those fixes and close the bugs.
3. Perform the test. Run the test suites against the new build. If you find
a defect, test “around” the bug to make certain you have all the details
necessary to write as specific and concise a bug report as possible. The
more research you do in this step, the easier and more useful the bug
report will be.
4. Report the results. Log the completed test suite and report any defects
you found.
5. Repair the bug. The test team participates in this step by being available
to discuss the bug with the development team and to provide any directed
testing a programmer might require to track the defect down.
6. Return to Step 1 and re-test. With new bugs and new test results
comes a new build.
These steps not only apply to black-box testing, they also describe
white-box testing, configuration testing, compatibility testing, and any
other type of QA. These steps are identical no matter what their scale. If
you substitute the word “game” or “project” for the word “build” in the
preceding steps, you will see that they can also apply to the entire game, a
phase of development (Alpha, Beta, and so on), or an individual module or
feature within a build. In this manner, the software testing process can be
considered fractal—the smaller system is structurally identical to the larger
system, and vice versa.
As illustrated in Figure 14.3, the testing process itself is a feedback loop
between the tester and developer. The tester plans and executes tests on the
code, then reports the bugs to the developer, who fixes them and compiles a
new build, which the tester plans and executes, and so on.
FIGURE 14.3 The testing process feedback loop.
This is a very small portion of a very simple test suite for a very small
and simple game. The first section (steps one through seven) tests launching
the game, ensuring that the default display is correct, and exiting. Each step
either gives the tester one incremental instruction or asks the tester one
simple question. Ideally, these questions are binary and unambiguous. The
tester performs each test case and records the result.
Because the testers will inevitably observe results that the test designer
hadn’t planned for, the Comments field allows the tester to elaborate on a
Yes/No answer, if necessary. The lead or primary tester who receives the
completed test suite can then scan the Comments field and make adjust-
ments to the test suite as needed for the next build.
Where possible, the questions in the test suite should be written in such
a way that a “yes” answer indicates a “pass” condition—the software is work-
ing as designed and no defect is observed. “No” answers, in turn, should
indicate that there is a problem and a defect should be reported. There are
several reasons for this: it’s more intuitive, because we tend to group “yes”
and “pass” (both positives) together in our minds the same way we group
“no” and “fail.” Further, by grouping all passes in the same column, the com-
pleted test suite can be easily scanned by both the tester and test managers
to determine quickly whether there were any fails. A clean test suite will
have all the checks in the Pass column.
For example, consider a test case covering the display of a tool tip—a
small window with instructional text incorporated into many interfaces.
A fundamental test case would be to determine whether the tool tip text
contains any typographical errors. The most intuitive question to ask in the
test case is:
Does the text contain any typographical errors?
The problem with this question is that a pass (no typos, hence no bugs)
would be recorded as a “no.” It would be very easy for a hurried (or tired)
tester to mistakenly mark the Fail column. It is far better to express the
question so that a “yes” answer indicates a “pass” condition:
Is the text free of typographical errors?
Entry Criteria
It’s advisable to require that any code release meets some criteria for being
fit to test before you risk wasting your time, or your team’s time, testing it.
This is similar to the checklists that astronauts and pilots use to evaluate the
fitness of their vehicle systems before attempting flight. Builds submitted to
testing that don’t meet the basic entry criteria are likely to waste the time of
both testers and programmers. The countdown to testing should stop until
the test “launch” criteria are met.
The following is a list of suggestions for entry criteria. Don’t keep these
a secret from the rest of the development team. Make the team aware of the
purpose—to prevent waste—and work with them to produce a set of criteria
that the whole team can commit to.
The game code should be built without compiler errors. Any new com-
piler warnings that occur are analyzed and discussed with the test team.
The code release notes should be complete and should provide the detail
that testers need to plan which tests to run or to re-run for this build.
Defect records for any bugs closed in the new release should be updated
so they can be used by testers to make decisions about how much to test
in the new build.
Tests and builds should be properly version-controlled, as described in
the sidebar, “Version Control: Not Just for Developers.”
When you are sufficiently close to the end of the project, you also want
to receive the game on the media on which it will ship. Check that the
media provided contains all of the files that would be provided to your
customer.
of bugs in an old build. This is not only a waste of time, but it can cause
panic on the part of the programmer and the project manager.
Proper version control for the test team includes the following steps:
1. Collect all prior physical (e.g., disk-based) builds from the test team
before distributing the new build. The prior versions should be stacked
together and archived until the project is complete. (When testing digital
downloads, uninstall and delete or archive prior digital builds.)
2. Archive all paperwork. This includes not only any build notes you received
from the development team, but also any completed test suites, screen
shots, saved games, notes, video files, and any other material generated
during the course of testing a build. It is sometimes important to retrace
steps along the paper trail, whether to assist in isolating a new defect or
determining in what version an old bug was re-introduced.
3. Verify the build number with the developer prior to distributing it.
4. In cases where builds are transmitted electronically, verify the byte count,
file dates, and directory structure before distributing the build. It is vital
in situations where builds are sent via FTP, email, Dropbox (www.dropbox.com),
or other digital means that the test team makes certain to test a version
identical to the version the developers uploaded. Confirm the integrity of the
transmitted build before distributing it to the testers (a small verification
sketch follows this list).
5. Renumber all test suites and any other build-specific paperwork or
electronic forms with the current version number.
6. Distribute the new build for smoke testing.
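As a concrete illustration of step 4, the test lead might run a quick byte count and checksum over the transmitted archive and compare the numbers with those reported by the developers. The sketch below is one hypothetical way to do that; the checksum scheme is arbitrary and the program is not a prescribed tool.

```c
/* build_check.c: print the byte count and a simple rolling checksum of a
   transmitted build archive so they can be compared against the values the
   development team reports. Illustrative sketch only. */
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <build-archive>\n", argv[0]);
        return 1;
    }

    FILE *fp = fopen(argv[1], "rb");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    unsigned long bytes = 0, checksum = 0;
    int c;
    while ((c = fgetc(fp)) != EOF) {
        bytes++;
        checksum = (checksum * 31 + (unsigned long)c) & 0xFFFFFFFFUL;
    }
    fclose(fp);

    /* Compare both numbers with the ones the developers reported. */
    printf("bytes: %lu  checksum: %08lx\n", bytes, checksum);
    return 0;
}
```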
Configuration Preparation
Before the test team can work with the new build, some housekeeping is in
order. The test equipment must be readied for a new round of testing. The
test lead must communicate the appropriate hardware configuration to each
tester for this build. Configurations typically change little over the course of
game testing. To test a single-player-only console game, you need the game
console, a controller, and a memory card or hard drive. That hardware con-
figuration typically will not change for the life of the project. If, however, the
new build is the first in which network play is enabled, or a new input device
or PC video card has been supported, you will perhaps need to augment the
hardware configuration to perform tests on that new code.
Save your saves! Always archive your old player-created data, including
TIP game saves, options files, and custom characters, levels, or scenarios.
Testing takes place in the lab and labs should be clean. So should test
hardware. It’s difficult to be too fastidious or paranoid when preparing test
configurations. When you get a new build, reformat your PC rather than
merely uninstalling the old build.
Delete your old builds! Reformat your test hardware—whether it’s a PC, a
TIP tablet or a smartphone. If it’s a browser game, delete the cache.
Browser games should be purged from each browser’s cache and the
browser should be restarted before you open the new game build. In the
case of Flash® games, you can right-click on the old build and select “Global
Settings…” This will launch a separate browser process and will connect
you to the Flash Settings Manager. Choosing the “Website Storage Settings
panel” will launch a Flash applet. Click the “Delete all sites” button and
close all of your browser processes. Now you can open the new build of your
Flash game.
iOS™ games should be deleted both from the device and the iTunes®
client on the computer the device is synched to. When prompted by iTunes,
choose to delete the app entirely (this is the “Move to Recycle Bin” or “Move
to Trash” button). Now, synch your device and make certain the old build has
been removed both from iTunes and your device. Empty the Recycle Bin (or
the Trash), relaunch iTunes, copy the new build, and synch your device again.
Android™ games, like iOS games, should be deleted entirely from the
device and your computer. Always synch your device to double-check that
you have scrubbed the old build off before you install the new build.
Whatever protocol is established, config prep is crucial prior to the
distribution of a new build.
Smoke Testing
The next step after accepting a new build and preparing to test it is to cer-
tify that the build is worthwhile to submit to formal testing. This process is
sometimes called smoke testing, because it’s used to determine whether a
build “smokes” (malfunctions) when run. At a minimum, it should consist
of a “load & launch,” that is, the lead or primary tester should launch the
game, enter each module from the main menu, and spend a minute or two
playing each module. If the game launches with no obvious performance
problems and each module implemented so far loads with no obvious prob-
lems, it is safe to certify the build, log it, and duplicate it for distribution to
the test team.
Now that the build is distributed, it’s time to test for new bugs, right?
Not just yet. Before testing can take a step forward, it must first take a step
backward and verify that the bugs the development team claims to have fixed
in this build are indeed fixed. This process is known as regression testing.
Regression Testing
Fix verification can be at once very satisfying and very frustrating. It gives
the test team a good sense of accomplishment to see the defects they report
disappear one by one. It can be very frustrating, however, when a fix of one
defect creates another defect elsewhere in the game, as can often happen.
The test suite for regression testing is the list of bugs the development
team claims to have fixed. This list, sometimes called a knockdown list, is
ideally communicated through the bug database. When the programmer or
artist fixes the defect, all they have to do is change the value of the Devel-
oper Status field to “Fixed.” This allows the project manager to track the
progress on a minute-to-minute basis. It also allows the lead tester to sort
the regression set (by bug author or by level, for example). At a minimum,
the knockdown list can take the form of a list of bug numbers sent from the
development team to the lead tester.
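In whatever tool the team uses, the knockdown list is essentially a filtered view of the defect database: every record whose Developer Status was set to "Fixed" for the new build. The toy sketch below shows that filtering step; the record fields are invented for illustration and do not come from any particular bug-tracking product.

```c
/* knockdown.c: list the bugs claimed fixed in the new build (the
   "knockdown list") from a toy in-memory defect table. */
#include <stdio.h>
#include <string.h>

struct bug {
    int         id;
    const char *developer_status;   /* "Open", "Fixed", ... */
    const char *author;             /* tester who reported the bug */
};

int main(void)
{
    struct bug bugs[] = {
        { 101, "Fixed", "alice" },
        { 102, "Open",  "bob"   },
        { 103, "Fixed", "bob"   },
    };
    size_t n = sizeof bugs / sizeof bugs[0];

    puts("Knockdown list (regression set) for this build:");
    for (size_t i = 0; i < n; i++) {
        if (strcmp(bugs[i].developer_status, "Fixed") == 0)
            printf("  bug %d (reported by %s)\n", bugs[i].id, bugs[i].author);
    }
    return 0;
}
```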
Each tester will take the bugs they’ve been assigned and perform the
steps in the bug write-up to verify that the defect is indeed fixed. The fixes
for many defects are easily verified (typos, missing features, and so on).
Some defects, such as hard-to-reproduce crashes, could seem fixed, but the
lead tester might want to err on the side of caution before he closes the bug.
By flagging the defect as verify fix, the bug can remain in the regression set
(i.e., stay on the knockdown list) for the next build (or two), but out of the
set of open bugs that the development team is still working on. Once the
bug has been verified as fixed in two or three builds, the lead tester can then
close the bug with more confidence.
At the end of regression testing, the lead tester and project manager
can get a very good sense of how the project is progressing. A high fix rate
(number of bugs closed divided by the number of bugs claimed to have been
fixed) means the developers are working efficiently. A low fix rate could be
cause for concern. Are the programmers arbitrarily marking bugs as fixed
if they think they’ve implemented new code that might address the defect,
rather than troubleshooting the defect itself? Are the testers not writing
clear bugs? Is there a version control problem? Are the test systems config-
ured properly? While the lead tester and project manager mull over these
questions, it’s time for you to move on to the next step in the testing process:
performing structured tests and reporting the results.
These are the types of questions you will be asked by the lead tester,
project manager, or developer. Try to develop the habit of second-guessing
such questions by performing some quick additional testing before you write
the bug. Test to see whether the defect occurs in other areas. Test to deter-
mine whether the bug happens when you choose a different character. Test
to check which other game modes contain the issue. This practice is known
as testing “around” the bug.
Once you are satisfied that you have anticipated any questions that the
development team might ask, and you have all your facts ready, you are
finally ready to write the bug report.
possible. You can’t assume that everyone reading your bug report will be as
familiar with the game as you are. Testers spend more time in the game—
exploring every hidden path, closely examining each asset—than almost any-
one else on the entire project team. A well-written bug will give a reader
who is not familiar with the game a good sense of the type and severity of the
defect it describes.
This is neither a defect nor a fact; it’s an unsolicited and arbitrary opin-
ion about design. There are forums for such opinions—discussions with the
lead tester, team meetings, play testing feedback—but the bug database isn’t
one of them.
A common complaint in many games is that the artificial intelligence, or
AI, is somehow lacking. (AI is a catch-all term that means any opponents or
NPCs controlled by the game code.)
The AI is weak.
This could indeed be a fact, but it is written in such a vague and gen-
eral way that it is likely to be considered an opinion. A much better way to
convey the same information is to isolate and describe a specific example of
AI behavior and write up that specific defect. By boiling issues down to spe-
cific facts, you can turn them into defects that have a good chance of being
addressed.
Before you begin to write a bug report, you have to be certain that you
TIP have all your facts.
Brief Description
Larger databases could contain two description fields: Brief Description (or
Summary) and Full Description (or Steps). The Brief Description field is
used as a quick reference to identify the bug. This should not be a cute nick-
name, but a one-sentence description that allows team members to identify
and discuss the defect without having to read the longer, full description
each time. Think of the brief description as the headline of the defect report.
Crash to desktop.
This is a complete sentence, but it is not specific enough. What did the
tester experience? Did the game not save? Did a saved game not load? Does
saving cause a crash?
This is a run-on sentence that contains far too much detail. A good way
to boil it down might be
Write the full description first, and then write the brief description.
TIP Spending some time polishing the full description will help you
understand the most important details to include in the brief description.
Full Description
If the brief description is the headline of a bug report, the Full Description
field provides the gory details. Rather than a prose discussion of the defect,
the full description should be written as a series of brief instructions so that
anyone can follow the steps and reproduce the bug. Like a cooking recipe—
or computer code, for that matter—the steps should be written in second
person imperative, as though you were telling someone what to do. The last
step is a sentence (or two) describing the bad result.
The fewer steps, the better; and the fewer words, the better. Remember
Brad Pitt’s warning to Matt Damon in Ocean’s Eleven: don’t use seven steps
when four will do. Time is a precious resource when developing a game. The
less time it takes a programmer to read, reproduce, and understand the bug,
the more time he has to fix it.
1. Launch game.
2. Choose multiplayer.
3. Choose skirmish.
4. Choose “Sorrowful Shoals” map.
5. Choose two players.
6. Start game.
These are very clear steps, but for the sake of brevity they can be boiled
down to
1. Start a two player skirmish game on “Sorrowful Shoals.”

1. Create a game against one human player. Choose Serpent tribe.
2. Send a swordsman into a Thieves Guild to get the Mugging power-up.
3. Have your opponent create any unit and give that unit any power-up.
4. Have your Swordsman meet the other player’s unit somewhere neutral on the map.
5. Activate the Mugging power-up.
6. Attack your opponent’s unit.
--> Crash to desktop as Swordsman strikes.
This might seem like many steps, but it is the quickest way to repro-
duce the bug. Every step is important to isolate the behavior of the mug-
ging code. Even small details, like meeting in a neutral place, are important,
because meeting in occupied territory might bring allied units from one side
or another into the fight, and the test might then be impossible to perform.
Great Expectations
Oftentimes, the defect itself will not be obvious from the steps in the full
description. Because the steps produce a result that deviates from player
expectation, but does not produce a crash or other severe or obvious
symptom, it is sometimes necessary to add two additional lines to your full
description: Expected Result and Actual Result.
Expected Result describes the behavior that a normal player would rea-
sonably expect from the game if the steps in the bug were followed. This
expectation is based on the tester’s knowledge of the design specification,
the target audience, and precedents set (or broken) in other games, espe-
cially games in the same genre.
Actual Result describes the defective behavior. Here’s an example.
1. Create a multiplayer game.
2. Click Game Settings.
3. Using your mouse, click any map on the map list. Remember the map you clicked on.
4. Press up or down directional keys on your keyboard.
5. Notice the highlight changes. Highlight any other map.
6. Click Back.
7. Click Start Game.
Expected Result: Game loads map you chose with the keyboard.
Actual Result: Game loads map you chose with the mouse.
Although the game loaded a map, it wasn’t the map the tester chose with
the keyboard (the last input device he used). That’s a bug, albeit a subtle
one. Years of precedent creates the expectation in the player’s mind that the
computer will execute a command based on the last input the player gave.
Because the map-choosing interface failed to conform to player expectation
and precedent, it could be confusing or annoying, so it should be written up
as a bug.
Use the Expected/Actual Result steps sparingly. Much of the time,
defects are obvious (see Figure 14.5). Here’s an example of “stating the obvi-
ous” in a crash bug.
INTERVIEW
More players are playing games than ever before. As any human population
grows—and the pool of game players has grown exponentially over the last
decade—that population becomes more diverse. Players are different from
each other, have different levels of experience with games, and play games
for a range of different reasons. Some players want a competitive experi-
ence, some an immersive experience, some want a gentle distraction.
The pool of game testers in any organization is always less diverse than
the player base of the game they are testing. Game testers are profession-
als, they have skills in manipulating software interfaces, they are generally
(but not necessarily) experienced game players. It’s likely that if your job is
creating games, you’ve played video games, a lot of them. But not every
player is like you.
Brent Samul, QA Lead for developer Mobile Deluxe, put it this way: “The
biggest difference when testing for mobile is your audience. With mobile
you have such a broad spectrum of users. Having played games for so long
myself, it can sometimes be really easy to overlook things that someone who
doesn’t have so much experience in games would get stuck on or confused
about.”
It’s a big job. “With mobile, we have the ability to constantly update and
add or remove features from our games. There are always multiple things to
test for with all the different configurations of smartphones and tablets that
people have today,” Mr. Samul says.
Although testers should write bugs against the design specification, the
authors of that specification are not omniscient. As the games on every plat-
form become more and more complex, it’s the testers’ job to advocate for
the players—all players—in their bug writing. (Permission Brent Samul)
Habits to Avoid
For the sake of clarity, effective communication, and harmony among mem-
bers of the project team try to avoid two common bug writing pitfalls: humor
and jargon.
Although humor is often welcome in high-stress situations, it is not wel-
come in the bug database. Ever. There are too many chances for misinter-
pretation and confusion. During crunch time, tempers are short, skins are
thin, and nerves are frayed. The defect database could already be a point of
contention. Don’t make the problem worse with attempts at humor (even if
you think your joke is hilarious). Finally, as the late William Safire warned,
you should “avoid clichés like the plague.”
It perhaps seems counterintuitive to want to avoid jargon in such a spe-
cialized form of technical writing as a bug report, but it is wise to do so.
Although some jargon is unavoidable, and each project team quickly devel-
ops its own nomenclature specific to the project they’re working on, testers
should avoid using (or misusing) too many obscure technical terms or acro-
nyms. Remember that your audience could range from programmers to
financial or marketing executives, so use plain language as much as possible.
Although testing build after build might seem repetitive, each new build
provides exciting new challenges with its own successes (fixed bugs and
passed tests) and shortfalls (new bugs and failed tests). The purpose of going
about the testing of each build in a structured manner is to reduce waste and
to get the most out of the game team. Each time around, you get new build
data that is used to re-plan test execution strategies and update or improve
your test suites. From there, you prepare the test environment and perform
a smoke test to ensure the build is functioning well enough to deploy to the
entire test team. Once the test team is set loose, your top priority is typically
regression testing to verify recent bug fixes. After that, you perform many
other types of testing in order to find new bugs and to check that old ones
have not re-emerged. New defects should be reported in a clear, concise,
and professional manner after an appropriate amount of investigation. Once
you complete this journey, you are rewarded with the opportunity to do it
all over again.
EXERCISES
1. Briefly describe the difference between the Expected Result and the
Actual Result in a bug write-up.
2. What’s the purpose of regression testing?
3. Briefly describe the steps in preparing a test configuration.
4. What is a “knockdown list”? Why is it important?
5. True or False: Black-box testing refers to examining the actual game
code.
6. True or False: The Brief Description field of a defect report should
include as much information as possible.
15
Basic Test Plan Template 1
Game Name
1. Copyright Information
Table of Contents
SECTION I: QA TEAM (and areas of responsibility)
1. QA Lead
a. Office phone
b. Home phone
c. Mobile phone
d. Email / IM / VOIP addresses
2. Internal Testers
3. External Test Resources
This chapter appeared in Game Testing, Third Edition, C. Schultz and R. D. Bryant.
c. Etc.
d. The final activity is usually to run an automated script
that reports the results of the various tests and posts
them in the QA portion of the internal Web site.
2. Level #2
3. Etc.
ii. Run through a predetermined set of multiplayer levels,
performing a specified set of activities.
1. Level #1
a. Activity #1
b. Activity #2
c. Etc.
d. The final activity is usually for each tester involved in
the multiplayer game to run an automated script that
reports the results of the various tests and posts them
in the QA portion of the internal Web site.
2. Level #2
3. Etc.
iii. Email showstopper crashes or critical errors to the entire
team.
iv. Post showstopper crashes or critical errors to the daily top
bugs list (if one is being maintained).
3. Daily Reports
a. Automated reports from the preceding daily tests are posted in the
QA portion of the internal Web site.
4. Weekly Activities
a. Weekly tests
i. Run through every level in the game (not just the preset ones
used in the daily test), performing a specified set of activities
and generating a predetermined set of tracking statistics. The
same machine should be used each week.
1. Level #1
a. Activity #1
b. Activity #2
c. Etc.
2. Level #2
3. Etc.
ii. Weekly review of bugs in the Bug Tracking System
1. Verify that bugs marked “fixed” by the development team
really are fixed.
2. Check the appropriateness of bug rankings relative to
where the project is in the development.
3. Acquire a “feel” for the current state of the game, which
can be communicated in discussions to the producer and
department heads.
4. Generate a weekly report of closed-out bugs.
b. Weekly Reports
i. Tracking statistics, as generated in the weekly tests.
5. Ad Hoc Testing
a. Perform specialized tests as requested by the producer, tech lead, or
other development team members
b. Determine the appropriate level of communication to report the
results of those tests.
6. Integration of Reports from External Test Groups
a. If at all possible, ensure that all test groups are using the same bug
tracking system.
b. Determine which group is responsible for maintaining the master
list.
c. Determine how frequently to reconcile bug lists against each other.
d. Ensure that only one consolidated set of bugs is reported to the
development team.
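The daily and weekly activities above repeatedly refer to an automated script that gathers test results and posts them to the QA portion of the internal Web site. A minimal sketch of such a script is given below; the results directory, the JSON result-file format, and the portal URL are illustrative assumptions, not part of the template.

# report_results.py: illustrative sketch only; the directory layout, result
# file format, and QA portal URL are assumptions, not part of the template.
import datetime
import glob
import json
import urllib.request

QA_PORTAL_URL = "http://intranet.example.com/qa/daily-report"  # hypothetical

def collect_results(results_dir="results"):
    """Read one JSON file per automated test run and tally pass/fail counts."""
    summary = {"date": datetime.date.today().isoformat(),
               "passed": 0, "failed": 0, "failures": []}
    for path in glob.glob(results_dir + "/*.json"):
        with open(path, encoding="utf-8") as fh:
            run = json.load(fh)  # e.g. {"test": "Level1_Activity2", "status": "pass"}
        if run["status"] == "pass":
            summary["passed"] += 1
        else:
            summary["failed"] += 1
            summary["failures"].append(run["test"])
    return summary

def post_summary(summary):
    """POST the summary to the QA portion of the internal Web site."""
    data = json.dumps(summary).encode("utf-8")
    req = urllib.request.Request(QA_PORTAL_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(post_summary(collect_results()))

Such a script is typically the last step of the daily test run, so that the morning report reflects the newest build.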
A
Quality Assurance and Testing Tools
IEEE/ANSI software test standards (standard, process, purpose):
829–1983, Software Test Documentation: This standard covers the entire testing process.
1008–1987, Software Unit Testing: This standard defines an integrated approach to systematic and documented unit testing.
1012–1986, Software Verification and Validation Plans: This standard provides uniform and minimum requirements for the format and content of software verification and validation plans.
1028–1988, Software Reviews and Audits: This standard provides direction to the reviewer or auditor on the conduct of evaluations.
730–1989, Software Quality Assurance Plans: This standard establishes a required format and a set of minimum contents for software quality assurance plans.
828–1990, Software Configuration Management Plans: This standard is similar to IEEE standard 730, but deals with the more limited subject of software configuration management. It identifies requirements for configuration identification, configuration control, configuration status reporting, and configuration audits and reviews.
1061–1992, Software Quality Metrics Methodology: This standard provides a methodology for establishing quality requirements. It also deals with identifying, implementing, analyzing, and validating the process of software quality metrics.
Testing activity and representative tools:
Functional/Regression Testing: WinRunner, SilkTest, Quick Test Pro (QTP), Rational Robot, Visual Test, in-house scripts
Load/Stress Testing (Performance): LoadRunner, Astra Load Test, Application Center Test (ACT), in-house scripts, Web Application Stress Tool (WAS)
Test Case Management: Test Director, Test Manager, in-house test case management tools
Defect Tracking: TestTrack Pro, Bugzilla, Element Tool, ClearQuest, TrackRecord, in-house defect tracking tools of clients
Unit/Integration Testing: C++ Test, JUnit, NUnit, PHPUnit, Check, Cantata++
B
Suggested Projects
1. ONLINE CHATTING
Develop a software package that will act as an online community, which you
can use to meet new friends using voice, video, and text. The community has
rooms for every interest, where people can communicate by using real-time
multi-point video, text, and voice in “calls.” The package should also have
video and voice instant messages with different colors, fonts, and overlays to
choose from, or you can participate in a real-time text chat room.
Also incorporate broadcast, where one host shares his/her video, voice,
and text chat with up to ten other viewers. Everyone involved in the broad-
cast can communicate with text chat. Also add the option of profiles; wherein
each community member has the opportunity to post a picture and some
optional personal information that can be accessed from the directory by any
other community member.
3. BROWSER
Design a fast, user-friendly, versatile Internet/intranet browser, Monkey,
that also includes a newsreader. The keyboard plays an integral role in
surfing, which can make moving around the Web easy and fast. You can run
multiple windows, even at start-up, and special features are included for
users with disabilities. Other options include the ability to design your own
look for the buttons in the program; file upload support for use with
forms and mail; an option to turn off tables; advanced cookie filtering; and
a host of other powerful features, including an enhanced email client and
keyboard shortcuts. Also ensure integrated search, instant messaging,
email support, and accessibility to different Web sites.
9. MYTOOL
Prepare a project, named MyTool, which is an all-in-one desktop and system
utility program. Its features include an easy-to-use personal calendar that
helps you manage your daily events and can remind you of important events
in a timely manner. You can create quick notes on your desktop with the
embedded WordPad applet. Schedule events, shut down Windows, download
software, and more, all automatically at intervals you set.
10. TIMETRAKER
Track the time using your computer. Set alarms for important appointments,
lunch, and so on. Sum the time spent on projects weekly, monthly, yearly,
and do much more.
17. FEECHARGER
Translators, language professionals, and international writers charge based
upon the number of words, lines, and/or characters in the documents they
handle. They need to check the currency conversion rates very frequently.
They go online, find and calculate the current rate. Design a tool that does
all this.
19. PROTECT PC
Develop a software tool, named Protect PC, which locks and hides desired
folders, files, data, and applications. The tool should provide a user-
friendly solution for security and privacy. It should have the security and
privacy of 128-bit encryption. Also include a feature so that hackers cannot
find encrypted files. Also provide Network, FDD, and CD-ROM locking
functions, and a Delete function for the List of Recent Documents and
Windows Explorer.
using the programs. You may also like to secure Windows so that users can-
not run unauthorized programs or modify Windows configurations such as
wallpaper and network settings. Develop a software tool to do all this with
additional features, such as screenshot capturing, enhanced keystroke cap-
turing (captures lowercase and special characters), ability to email log files
to a specific email address.
24. FAXMATE
Design a software tool, FaxMate, which creates, sends, and receives faxes.
The incoming faxes should be announced by customizing the program with a
sound file. Include a library of cover sheet templates, and provide a phone
number database for frequently used numbers. Use the latest fax technol-
ogy, including fax-modem autodetection.
29. MOVIEEDITOR
Develop a software tool, named MovieEditor, which can edit and animate
video clips, add titles and audio, or convert video formats. It should support
real-time preview and allow experimenting as well. Build Internet function-
ality such as RealVideo, ASF, and QuickTime into this tool. The tool should
also have the capability so that all current video, graphic, and audio formats
can be imported into a video production, animated, and played back in dif-
ferent formats. The tool should convert all current image, video, and audio
files, and may be used as a multimedia browser for displaying, searching,
and organizing images, videos, and audio files. It should provide support for
both native DV and FireWire interfaces with integrated device control for
all current camcorders.
30. TAXMASTER
Develop a software package, named TaxMaster, which helps to complete
income tax and sales tax returns. It should make available all the rules,
forms, schedules, worksheets, and information available in the US scenario
to complete the tax forms. The tool should calculate returns for the user,
then review the return and send alerts for possible errors. It should provide
the facility to file the return electronically or print a paper return, so your
tool should be available on the Web.
34. METRIC
Study and define metrics for any of the programming languages Java/C++/C/
Perl/Visual Basic. Develop a software tool, called Metric, which determines
the source code metrics and McCabe metrics for these languages.
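For the McCabe part of this project, a useful starting point is the relation V(G) = number of binary decisions + 1. The sketch below applies that relation to C-like source; scanning for decision keywords textually (rather than parsing) and the exact token list are simplifying assumptions.

# cyclomatic.py: simplified McCabe metric, V(G) = decisions + 1.
# The textual keyword scan is an assumption; a real tool would parse the code.
import re

DECISION_TOKENS = [r"\bif\b", r"\bfor\b", r"\bwhile\b", r"\bcase\b",
                   r"&&", r"\|\|", r"\?"]

def cyclomatic_complexity(source):
    decisions = sum(len(re.findall(tok, source)) for tok in DECISION_TOKENS)
    return decisions + 1

if __name__ == "__main__":
    sample = """
    int classify(int x) {
        if (x > 0 && x < 10) return 1;
        else if (x == 0)     return 0;
        return -1;
    }
    """
    print(cyclomatic_complexity(sample))  # two ifs + one && + 1 = 4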
35. FUNCTIONPTESTIMATOR
Study the function point method for size estimation for software projects.
Design a software package, named FunctionPtEstimator, which computes
the function points and the corresponding KLOC for a project. Your tool
should be able to prepare a comparative analysis for different programming
languages.
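A sketch of the core calculation such a tool might perform is shown below, using the average IFPUG weights; the lines-of-code-per-function-point factors are illustrative assumptions and should be replaced by a published backfiring table.

# function_points.py: sketch of an FP-to-KLOC estimator.
# Average IFPUG weights; the LOC-per-FP table is illustrative only.
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
LOC_PER_FP = {"C": 128, "C++": 64, "Java": 53, "Visual Basic": 32}  # assumed values

def function_points(counts, gsc_ratings):
    """counts: e.g. {"EI": 20, ...}; gsc_ratings: 14 ratings, each 0..5."""
    ufp = sum(AVG_WEIGHTS[k] * counts.get(k, 0) for k in AVG_WEIGHTS)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)  # value adjustment factor
    return ufp * vaf

def kloc_estimate(fp, language):
    return fp * LOC_PER_FP[language] / 1000.0

if __name__ == "__main__":
    fp = function_points({"EI": 20, "EO": 12, "EQ": 8, "ILF": 6, "EIF": 3}, [3] * 14)
    for lang in LOC_PER_FP:
        print(lang, round(kloc_estimate(fp, lang), 1), "KLOC from", round(fp, 1), "FP")

The comparative analysis asked for in the project is then just this conversion repeated over several languages.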
40. WORLDTIMER
Develop a software tool, named WorldTimer, which displays the date and
time of cities around the globe in eight clocks. It should provide information,
such as language, population, currency and telephone codes, of the capital
cities of all the countries. It should show the sunrise/sunset line in a world
map and the difference in time between any two cities.
C
Glossary
Abstract class: A class that cannot be instantiated, i.e., it cannot have any instances.
Abstract test case: See high-level test case.
Acceptance: See acceptance testing.
Acceptance criteria: The exit criteria that a component or system must satisfy in
order to be accepted by a user, customer, or other authorized entity. [IEEE 6.10]
Acceptance testing: It is done by the customer to check whether the product
is ready for use in the real-life environment. Formal testing with respect to user
needs, requirements, and business processes conducted to determine whether or
not a system satisfies the acceptance criteria and to enable the user, customers, or
other authorized entity to determine whether or not to accept the system. [After
IEEE 610]
Accessibility testing: Testing to determine the ease by which users with disabilities
can use a component or system. [Gerrard]
Accuracy: The capability of the software product to provide the right or agreed
results or effects with the needed degree of precision. [ISO 9126] See also
functionality testing.
Activity: A major unit of work to be completed in achieving the objectives of a
hardware/ software project.
Actor: An actor is a role played by a person, organization, or any other device which
interacts with the system.
Actual outcome: See actual result.
Actual result: The behavior produced/observed when a component or system is
tested.
Ad hoc review: See informal review.
Ad hoc testing: Testing carried out informally; no formal test preparation takes
place, no recognized test design technique is used, there are no expectations for
results and randomness guides the test execution activity.
Adaptability: The capability of the software product to be adapted for different
specified environments without applying actions or means other than those provided
for this purpose for the software considered. [ISO 9126] See also portability testing.
Agile testing: Testing practice for a project using agile methodologies, such as
extreme programming (XP), treating development as the customer of testing and
emphasizing the test-first design paradigm.
Aggregation: The process of building up complex objects out of existing objects.
Algorithm test [TMap]: See branch testing.
Alpha testing: Simulated or actual operational testing by potential users/customers
or an independent test team at the developers’ site, but outside the development
organization. Alpha testing is often employed as a form of internal acceptance testing.
Analyst: An individual who is trained and experienced in analyzing existing systems
to prepare SRS (software requirement specifications).
Analyzability: The capability of the software product to be diagnosed for
deficiencies or causes of failures in the software, or for the parts to be modified to
be identified. [ISO 9126] See also maintainability testing.
Analyzer: See static analyzer.
Anomaly: Any condition that deviates from expectation based on requirements
specifications, design documents, user documents, standards, etc. or from someone’s
perception or experience. Anomalies may be found during, but not limited to,
reviewing, testing, analysis, compilation, or use of software products or applicable
documentation. [IEEE 1044] See also defect, deviation, error, fault, failure, incident,
or problem.
Arc testing: See branch testing.
Atomicity: A property of a transaction that ensures it is completed entirely or not
at all.
Attractiveness: The capability of the software product to be attractive to the user.
[ISO 9126] See also usability testing.
Audit: An independent evaluation of software products or processes to ascertain
compliance to standards, guidelines, specifications, and/or procedures based on
objective criteria, including documents that specify: (1) the form or content of the
products to be produced, (2) the process by which the products shall be produced,
and (3) how compliance to standards or guidelines shall be measured. [IEEE 1028]
Audit trail: A path by which the original input to a process (e.g., data) can be
traced back through the process, taking the process output as a starting point. This
facilitates defect analysis and allows a process audit to be carried out. [After TMap]
Automated testware: Testware used in automated testing, such as tool scripts.
Availability: The degree to which a component or system is operational and
accessible when required for use. Often expressed as a percentage. [IEEE 610]
Back-to-back testing: Testing in which two or more variants of a component or
system are executed with the same inputs, the outputs compared, and analyzed in
cases of discrepancies. [IEEE 610]
Data flow analysis: A form of static analysis based on the definitions and usage of
variables.
Data flow coverage: The percentage of definition-use pairs that have been
exercised by a test case suite.
Data flow test: A white-box test design technique in which test cases are designed
to execute definitions and use pairs of variables.
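A brief illustration of definition-use pairs (the function below is invented for this entry): x is defined once and used in two different statements, giving two def-use pairs that a data-flow-adequate test suite must exercise.

# dataflow_demo.py: illustrative only; def-use pairs for the variable x.
def f(a):
    x = a * 2            # definition of x
    if a > 0:
        return x + 1     # use 1 of x
    return x - 1         # use 2 of x

# Exercising both def-use pairs needs at least these two test cases:
assert f(3) == 7     # covers the pair ending at "x + 1"
assert f(-3) == -7   # covers the pair ending at "x - 1"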
Dead code: See unreachable code.
Debugger: See debugging tool.
Debugging: The process of finding, analyzing, and removing the causes of failures
in software.
Debugging tool: A tool used by programmers to reproduce failures, investigate
the state of programs, and find the corresponding defect. Debuggers enable
programmers to execute programs step by step to halt a program at any program
statement and to set and examine program variables.
Decision: A program point at which the control flow has two or more alternative
routes. A node with two or more links to separate branches.
Decision condition coverage: The percentage of all condition outcomes and
decision outcomes that have been exercised by a test suite. 100% decision condition
coverage implies both 100% condition coverage and 100% decision coverage.
Decision condition testing: A white-box test design technique in which test cases
are designed to execute condition outcomes and decision outcomes.
Decision coverage: The percentage of decision outcomes that have been exercised
by a test suite. 100% decision coverage implies both 100% branch coverage and
100% statement coverage.
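As a small illustration of the decision-related terms above (the Python function is invented for this entry), the two test cases below achieve 100% decision coverage of the single if decision, because between them the decision evaluates to both true and false.

# decision_coverage_demo.py: illustrative example only.
def discount(amount):
    if amount > 100:        # the decision: two outcomes, true and false
        return amount * 0.9
    return amount

# One test case per decision outcome gives 100% decision coverage
# (and, for this function, 100% statement coverage as well).
assert discount(200) == 180.0   # decision outcome: true
assert discount(50) == 50       # decision outcome: false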
Decision outcome: The result of a decision (which therefore determines the
branches to be taken).
Decision table: A table showing combinations of inputs and/or stimuli (causes)
with their associated outputs and/or actions (effects) which can be used to design
test cases. It lists various decision variables, the conditions assumed by each of the
decision variables, and the actions to take in each combination of conditions.
Decision table testing: A black-box test design technique in which test cases are
designed to execute the combinations of inputs and/or stimuli (causes) shown in a
decision table. [Veenendaal]
Decision testing: A white-box test design technique in which test cases are
designed to execute decision outcomes.
Defect: A flaw in a component or system that can cause the component or system to
fail to perform its required function, e.g., an incorrect statement or data definition.
A defect, if encountered during execution, may cause a failure of the component or
system.
Load test: A test type concerned with measuring the behavior of a component
or system with increasing load, e.g., number of parallel users and/or numbers of
transactions to determine what load can be handled by the component or system.
Locale: An environment where the language, culture, laws, currency, and many
other factors may be different.
Locale testing: It focuses on testing the conventions for number, punctuation,
date and time, and currency formats.
Logic-coverage testing: See white-box testing. [Myers]
Logic-driven testing: See white-box testing.
Logical test case: See high-level test case.
Low-level test case: A test case with concrete (implementation level) values for
input data and expected results.
Maintainability: The ease with which a software product can be modified to correct
defects, modified to meet new requirements, modified to make future maintenance
easier, or adapted to a changed environment. [ISO 9126]
Maintainability testing: The process of testing to determine the maintainability
of a software product.
Maintenance: Modification of a software product after delivery to correct defects,
to improve performance or other attributes, or to adapt the product to a modified
environment. [IEEE 1219]
Maintenance testing: Testing the changes to an operational system or the impact
of a changed environment to an operational system.
Management review: A systematic evaluation of software acquisition, supply,
development, operation, or maintenance process, performed by or on behalf of
management that monitors progress, determines the status of plans and schedules,
confirms requirements and their system allocation, or evaluates the effectiveness
of management approaches to achieve fitness for purpose. [After IEEE 610, IEEE
1028]
Mandelbug: A bug whose underlying causes are so complex and obscure as to
make its behavior appear chaotic or even non-deterministic.
Master test plan: See project test plan.
Maturity: (1) The capability of an organization with respect to the effectiveness and
efficiency of its processes and work practices. See also capability maturity model and
test maturity model. (2) The capability of the software product to avoid failure as a
result of defects in the software. [ISO 9126] See also reliability.
Measure: The number or category assigned to an attribute of an entity by making
a measurement. [ISO 14598]
Measurement: The process of assigning a number or category to an entity to
describe an attribute of that entity. [ISO 14598]
Measurement scale: A scale that constrains the type of data analysis that can be
performed on it. [ISO 14598]
Memory leak: A situation in which a program requests memory but does not release
it when it is no longer needed. A defect in a program’s dynamic store allocation
logic that causes it to fail to reclaim memory after it has finished using it, eventually
causing the program to fail due to lack of memory.
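A hedged illustration of the pattern in a garbage-collected language (the cache below is made up for this entry): memory is effectively leaked when references are retained after the data is no longer needed, so the allocations are never reclaimed.

# memory_leak_demo.py: illustrative sketch of a leak pattern.
_cache = []   # module-level list that is never pruned

def handle_request(payload):
    _cache.append(payload)    # retained forever, so memory grows with every call
    return len(payload)

# After many calls the process footprint keeps growing even though the old
# payloads are no longer needed; bounding or pruning _cache would fix it.
for _ in range(1000):
    handle_request(b"x" * 1024)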
Message: A programming language mechanism by which one unit transfers control
to another unit.
Messages: They show how objects communicate. Each message represents one object
making a function call to another.
Metric: A measurement scale and the method used for measurement. [ISO 14598]
Migration testing: See conversion testing.
Milestone: A point in time in a project at which defined (intermediate) deliverables
and results should be ready.
Mistake: See error.
Moderator: The leader and main person responsible for an inspection or other
review process.
Modified condition decision coverage: See condition determination coverage.
Modified condition decision testing: See condition determination coverage
testing.
Modified multiple condition coverage: See condition determination coverage.
Modified multiple condition testing: See condition determination coverage
testing.
Module: Modules are parts, components, units, or areas that comprise a given
project. They are often thought of as units of software code. See also component.
Module testing: See component testing.
Monitor: A software tool or hardware device that runs concurrently with the
component or system under test and supervises, records, and/or analyzes the
behavior of the component or system. [After IEEE 610]
Monkey testing: Testing the product randomly after all planned test cases are done.
Multiple condition: See compound condition.
Multiple condition coverage: The percentage of combinations of all single
condition outcomes within one statement that have been exercised by a test suite.
100% multiple condition coverage implies 100% condition determination coverage.
Multiple condition testing: A white-box test design technique in which test cases
are designed to execute combinations of single condition outcomes (within one
statement).
Multiplicity: Information placed at each end of an association indicating how many
instances of one class can be related to instances of the other class.
Release (or golden master): The build that will eventually be shipped to the
customer, posted on the Web, or migrated to the live Web site.
Release note: A document identifying test items, their configuration, current
status, and other delivery information delivered by development to testing, and
possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]
Reliability: Probability of failure free operation of software for a specified time
under specified operating conditions. The ability of the software product to perform
its required functions under stated conditions for a specified period of time, or for a
specified number of operations. [ISO 9126]
Reliability testing: The process of testing to determine the reliability of a software
product.
Replaceability: The capability of the software product to be used in place of
another specified software product for the same purpose in the same environment.
[ISO 9126] See also portability.
Requirement: A condition or capability needed by a user to solve a problem or
achieve an objective that must be met or possessed by a system or system component
to satisfy a contract, standard, specification, or other formally imposed document.
[After IEEE 610]
Requirements-based testing: An approach to testing in which test cases are
designed based on test objectives and test conditions derived from requirements,
e.g., tests that exercise specific functions or probe non functional attributes such as
reliability or usability.
Requirements management tool: A tool that supports the recording of
requirements, requirements attributes (e.g., priority, knowledge responsible)
and annotation, and facilitates traceability through layers of requirements and
requirements change management. Some requirements management tools also
provide facilities for static analysis, such as consistency checking and violations to
pre-defined requirements rules.
Requirements phase: The period of time in the software life cycle during which
the requirements for a software product are defined and documented. [IEEE 610]
Requirements tracing: It is a technique of ensuring that the product, as well as the
testing of the product, addresses each of its requirements.
Resource utilization: The capability of the software product to use appropriate
amounts and types of resources, for example, the amounts of main and secondary
memory used by the program and the sizes of required temporary or overflow files,
when the software performs its function under stated conditions. [After ISO 9126]
See also efficiency.
Resource utilization testing: The process of testing to determine the resource
utilization of a software product.
Safety testing: The process of testing to determine the safety of a software product.
Sanity test: See smoke test.
Scalability: The capability of the software product to be upgraded to accommodate
increased loads. [After Gerrard]
Scalability testing: Testing to determine the scalability of the software product.
Scenario testing: See use-case testing.
Scribe: The person who has to record each defect mentioned and any suggestions
for improvement during a review meeting on a logging form. The scribe has to make
sure that the logging form is readable and understandable.
Scripting language: A programming language in which executable test scripts are
written, used by a test execution tool (e.g., a capture/replay tool).
Security: Attributes of software products that bear on its ability to prevent
unauthorized access, whether accidental or deliberate, to programs and data.
[ISO 9126]
Security testing: Testing to determine the security of the software product.
Serviceability testing: See maintainability testing.
Severity: The degree of impact that a defect has on the development or operation
of a component or system. [After IEEE 610]
Shelfware: Software that is not used.
Simulation: A technique that uses an executable model to examine the behavior
of the software. The representation of selected behavioral characteristics of one
physical or abstract system by another system. [ISO 2382/1]
Simulator: A device, computer program, or system used during testing, which
behaves or operates like a given system when provided with a set of controlled
inputs. [After IEEE 610, DO178b] See also emulator.
Sink node: It is a statement fragment at which program execution terminates.
Slicing: It is a program decomposition technique used to trace an output variable
back through the code to identify all code statements relevant to a computation in
the program.
Smoke test: It is a condensed version of a regression test suite. A subset of all
defined/planned test cases that cover the main functionality of a component or
system, to ascertain that the most crucial functions of a program work, but not
bothering with finer details. A daily build and smoke test are among industry best
practices. See also intake test.
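A minimal sketch of a smoke test using Python's unittest; the tiny in-file Application class stands in for the real build under test (an assumption), and the point is a fast check of the most crucial functions before the full suite is run.

# smoke_test_demo.py: condensed "does the build even start?" check.
import unittest

class Application:
    """Stand-in for the real system under test (assumption for illustration)."""
    def start(self):
        self.started = True
        return True
    def main_screen_title(self):
        return "Main Menu" if getattr(self, "started", False) else None

class SmokeTest(unittest.TestCase):
    def test_application_starts(self):
        self.assertTrue(Application().start())

    def test_main_screen_loads(self):
        app = Application()
        app.start()
        self.assertEqual(app.main_screen_title(), "Main Menu")

if __name__ == "__main__":
    unittest.main()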
Software feature: See feature.
Software quality: The totality of functionality and features of a software product
that bear on its ability to satisfy stated or implied needs. [After ISO 9126]
Software quality characteristic: See quality attribute.
Software runaways: Large projects that failed due to a lack of systematic
techniques and tools.
Software test incident: See incident.
Software test incident report: See incident report.
Software Usability Measurement Inventory (SUMI): A questionnaire
based usability test technique to evaluate the usability, e.g., user-satisfaction, of a
component or system. [Veenendaal]
Source node: A source node in a program is a statement fragment at which program
execution begins or resumes.
Source statement: See statement.
Specialization: The process of taking subsets of a higher-level entity set to form
lower-level entity sets.
Specification: A document that specifies, ideally in a complete, precise, and
verifiable manner, the requirements, design, behavior, or other characteristics of
a component or system, and, often, the procedures for determining whether these
provisions have been satisfied. [After IEEE 610]
Specification-based test design technique: See black-box test design technique.
Specification-based testing: See black-box testing.
Specified input: An input for which the specification predicts a result.
Stability: The capability of the software product to avoid unexpected effects from
modifications in the software. [ISO 9126] See also maintainability.
Standard software: See off-the-shelf software.
Standards testing: See compliance testing.
State diagram: A diagram that depicts the states that a component or system can
assume, and shows the events or circumstances that cause and/or result from a
change from one state to another. [IEEE 610]
State table: A grid showing the resulting transitions for each state combined with
each possible event, showing both valid and invalid transitions.
State transition: A transition between two states of a component or system.
State transition testing: A black-box test design technique in which test cases are
designed to execute valid and invalid state transitions. See also N-switch testing.
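A small illustration (the door state machine below is invented for this entry): the test cases exercise valid transitions and one invalid transition that the component must reject.

# state_transition_demo.py: illustrative only; the transition table is assumed.
VALID_TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def next_state(state, event):
    try:
        return VALID_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError("invalid transition: %s in state %s" % (event, state))

# Valid-transition test cases
assert next_state("closed", "open") == "opened"
assert next_state("closed", "lock") == "locked"

# Invalid-transition test case: opening a locked door must be rejected
try:
    next_state("locked", "open")
    raise AssertionError("expected the invalid transition to be rejected")
except ValueError:
    pass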
Statement: An entity in a programming language, which is typically the smallest
indivisible unit of execution.
Statement coverage: The percentage of executable statements that have been
exercised by a test suite.
Statement testing: A white-box test design technique in which test cases are
designed to execute statements.
Syntax testing: A black-box test design technique in which test cases are designed
based upon the definition of the input domain and/or output domain.
System: A collection of components organized to accomplish a specific function or
set of functions. [IEEE 610]
System integration testing: Testing the integration of systems and packages;
testing interfaces to external organizations (e.g., electronic data interchange,
Internet).
System testing: The process of testing an integrated system to verify that it meets
specified requirements. [Hetzel]
Technical review: A peer group discussion activity that focuses on achieving
consensus on the technical approach to be taken. A technical review is also known
as a peer review. [Gilb and Graham, IEEE 1028]
Technology transfer: The awareness, convincing, selling, motivating, collaboration,
and special effort required to encourage industry, organizations, and projects to
make good use of new technology products.
Test: A test is the act of exercising software with test cases. A set of one or more test
cases. [IEEE 829]
Test approach: The implementation of the test strategy for a specific project. It
typically includes the decisions made that follow based on the (test) project’s goal
and the risk assessment carried out, starting points regarding the test process, the
test design techniques to be applied, exit criteria, and test types to be performed.
Test automation: The use of software to perform or support test activities, e.g., test
management, test design, test execution, and results checking.
Test basis: All documents from which the requirements of a component or system
can be inferred. The documentation on which the test cases are based. If a document
can be amended only by way of formal amendment procedure, then the test basis is
called a frozen test basis. [After TMap]
Test bed: An environment containing the hardware, instrumentation, simulators,
software tools, and other support elements needed to conduct a test. See also test
environment.
Test case: A test that, ideally, executes a single well-defined test objective, i.e.,
a specific behavior of a feature under a specific condition. A set of input values,
execution preconditions, expected results and execution postconditions, developed
for a particular objective or test condition, such as to exercise a particular program
path or to verify compliance with a specific requirement. [After IEEE 610]
Test case design technique: See test design technique.
Test case specification: A document specifying a set of test cases (objective,
inputs, test actions, expected results, and execution preconditions) for a test item.
[After IEEE 829]
Test maturity model (TMM): A five level staged framework for test process
improvement, related to the capability maturity model (CMM) that describes the
key elements of an effective test process.
Test object: The component or system to be tested. See also test item.
Test objective: A reason or purpose for designing and executing a test.
Test oracle: It is a mechanism, different from the program itself that can be used to
check the correctness of the output of the program for the test cases. It is a process in
which test cases are given to test oracles and the program under testing. The output
of the two is then compared to determine if the program behaved correctly for the
test cases. A source to determine expected results to compare with the actual result
of the software under test. An oracle may be the existing system (for a benchmark),
a user manual, or an individual’s specialized knowledge, but should not be the code.
[After Adrion]
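A hedged sketch of the idea: the same inputs are fed both to the program under test and to an independent source of expected results, here a deliberately simple reference implementation (an assumption chosen for this example), and the two outputs are compared.

# test_oracle_demo.py: a trusted reference implementation acts as the oracle.
def program_under_test(values):
    # hypothetical optimized routine being tested
    return sorted(values)

def oracle(values):
    # deliberately simple, independent reference (insertion sort)
    result = []
    for v in values:
        i = 0
        while i < len(result) and result[i] <= v:
            i += 1
        result.insert(i, v)
    return result

for case in [[3, 1, 2], [], [5, 5, 1], [-1, 0, -2]]:
    assert program_under_test(case) == oracle(case), "mismatch for %r" % case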
Test outcome: See result.
Test pass: See pass.
Test performance indicator: A metric, in general high level, indicating to what
extent a certain target value or criterion is met. Often related to test process
improvement objectives, e.g., defect detection percentage (DDP).
Test phase: A distinct set of test activities collected into a manageable phase of a
project, e.g., the execution activities of a test level. [After Gerrard]
Test plan: A management document outlining risks, priorities, and schedules for
testing. A document describing the scope, approach, resources, and schedule of
intended test activities. It identifies amongst others test items, the features to be
tested, the testing tasks, who will do each task, degree of tester independence, the
test environment, the test design techniques and test measurement techniques to
be used, and the rationale for their choice, and any risks requiring contingency
planning. It is a record of the test planning process. [After IEEE 829]
Test planning: The activity of establishing or updating a test plan.
Test point analysis (TPA): A formula-based test estimation method based on
function point analysis. [TMap]
Test points: They allow data to be modified or inspected at various points in the
system.
Test policy: A high-level document describing the principles, approach, and major
objectives of the organization regarding testing.
Test procedure: See test procedure specification.
Test procedure specification: A document specifying a sequence of actions for the
execution of a test. Also known as test script or manual test script. [After IEEE 829]
Test process: The fundamental test process comprises planning, specification,
execution, recording, and checking for completion. [BS 7925/2]
Test tool: A software product that supports one or more test activities, such as
planning and control, specification, building initial files and data, test execution, and
test analysis. [TMap] See also CAST.
Test type: A group of test activities aimed at testing a component or system
regarding one or more interrelated quality attributes. A test type is focused on a
specific test objective, i.e., reliability test, usability test, regression test, etc., and may
take place on one or more test levels or test phases. [After TMap]
Testable requirements: The degree to which a requirement is stated in terms that
permit establishment of test designs (and subsequently test cases) and execution
of tests to determine whether the requirements have been met. [After IEEE 610]
Testability: The capability of the software product to enable modified software to
be tested. [ISO 9126] See also maintainability.
Testability hooks: The code that is inserted into the program specifically to facilitate
testing.
Testability review: A detailed check of the test basis to determine whether the test
basis is at an adequate quality level to act as an input document for the test process.
[After TMap]
Tester: A technically skilled professional who is involved in the testing of a
component or system.
Testing: The process of executing the program with the intent of finding faults.
The process consisting of all life cycle activities, both static and dynamic, concerned
with planning, preparation, and evaluation of software products and related work
products to determine that they satisfy specified requirements, to demonstrate that
they are fit for purpose, and to detect defects.
Testing interface: A set of public properties and methods that you can use to
control a component from an external testing program.
Testware: Artifacts produced during the test process required to plan, design, and
execute tests, such as documentation, scripts, inputs, expected results, set-up and
clear-up procedures, files, databases, environment, and any additional software or
utilities used in testing. [After Fewster and Graham]
Thread testing: A version of component integration testing where the progressive
integration of components follows the implementation of subsets of the requirements,
as opposed to the integration of components by levels of a hierarchy.
Time behavior: See performance.
Top-down testing: An incremental approach to integration testing where the
component at the top of the component hierarchy is tested first, with lower level
components being simulated by stubs. Tested components are then used to test
lower level components. The process is repeated until the lowest level components
have been tested.
D
Sample Project Description
[N.B.: Students may be encouraged to prepare descriptions of projects on
these lines and then develop the following deliverables]
1. SRS Document 2. Design Document 3. Codes 4. Test Oracles
Keywords
Generic Technology Keywords: databases, network and middleware,
programming.
Specific Technology Keywords: MS-SQL server, HTML, Active Server Pages.
Project Type Keywords: analysis, design, implementation, testing, user interface.
Requirements:
Hardware requirements (number, description, alternatives if available):
1. PC with 2 GB hard disk and 256 MB RAM; Alternatives: Not applicable
2.
Software requirements (number, description, alternatives if available):
1. Windows 95/98/XP with MS-Office; Alternatives: Not applicable
2. MS-SQL Server; Alternatives: MS-Access
3.
Manpower requirements:
2-3 students can complete this in 4-6 months if they work full-time on it.
E
Bibliography
Special thanks to the great researchers without whose help this book would
not have been possible:
1. Jorgensen Paul, “Software Testing—A Practical Approach”, CRC Press,
2nd Edition 2007.
2. Srinivasan Desikan and Gopalaswamy Ramesh, “Software testing—
Principles and Practices”, Pearson Education Asia, 2002.
3. Tamres Louise, “Introduction to Software Testing”, Pearson Education
Asia, 2002.
4. Mustafa K., Khan R.A., “Software Testing—Concepts and Practices”,
Narosa Publishing, 2007.
5. Puranik Rajnikant, “The Art of Creative Destruction”, Shroff Publishers,
First Reprint, 2005.
6. Agarwal K.K., Singh Yogesh, “Software Engineering”, New Age
Publishers, 2nd Edition, 2007.
7. Khurana Rohit, “Software Engineering—Principles and Practices”,
Vikas Publishing House, 1998.
8. Agarwal Vineet, Gupta Prabhakar, “Software Engineering”, Pragati
Prakashan, Meerut.
9. Sabharwal Sangeeta, “Software Engineering—Principles, Tools and
Techniques”, New Age Publishers, 1st Edition, 2002.
10. Mathew Sajan, “Software Engineering”, S. Chand and Company Ltd.,
2000.
11. Kaner, “Lessons Learned in Software Testing”, Wiley, 1999.
12. Rajani Renu, Oak Pradeep, “Software Testing”, Tata McGraw Hill, First
Edition, 2004.
13. Nguyen Hung Q., “Testing Applications on Web”, John Wiley, 2001.
14. “Testing Object-Oriented Systems—A Workshop Workbook”, by Quality
Assurance Institute (India) Ltd., 1994-95.
Phase wise breakup over testing life cycle 453
Positive and negative effect of software V&V on projects 48
Practical challenges in white box testing 190
Principles of testing 18
Prioritization guidelines 215
Prioritization of test cases for regression testing 224
Priority category scheme 216
Problems with manual testing 413
Progressive regression testing 221
Proof of correctness (formal verification) 37
Pros and cons of decomposition-based techniques 240
Pros and cons 242

R
Rationale for STRs 43
Recoverability testing 488
Regression and acceptance testing 381
Regression testing at integration level 222
Regression testing at system level 223
Regression testing at unit level 222
Regression testing in object oriented software 224
Regression testing of a relational database 493–500
Regression testing of global variables 223
Regression testing technique 225
Regression testing 220, 381, 571
  types of 221
Release acceptance test (RAT) 271
Reliability testing 262, 489
Requirements tracing 38
Response for class (RFC) 145
Responsibility-based class testing/black-box/functional specification-based testing of classes 345
Retest-all strategy 221
Risk analysis 217
Robustness testing 67
Role of V&V in SDLC 33

S
Sandwich integration approach 239
Scalability testing 260, 487
Security testing 488
Selecting test cases 269
Selection of good test cases 9
Selective strategy 221
Setting up the configuration 258
Simulation and prototyping 38
Skills needed for using automated tools 416
Slice based testing 226
Smoke testing 571
Software technical reviews 43
  rationale for 43
  review methodologies 46
  types of 45
Software testing 2
  basic terminology related to 11
Software V&V planning (SVVP) 39
Software verification and validation 29–57
Standard for software test documentation (IEEE 829) 50
State machines
  basic concepts of 310
Statement coverage 136
Static versus dynamic white box testing 134
Steps for tool selection 424