98.What is meant by White box testing?
Tests are based on coverage of code statements, branches, paths, and conditions.
White box testing assumes knowledge of the internals of the software under test; it is sometimes also called structural testing. The structure of the software under test is used to generate test cases, reducing the risk of surprises and incorrect implementation. White box testing techniques typically involve exercising the different sections of the code systematically.
The structure of the software, and the ways in which it may be exercised, vary so much that an impractically large number of test cases would be needed to test "all possible combinations". Instead, criteria are defined that specify how thoroughly the structure must be exercised; these are called coverage criteria. The well-accepted coverage criteria include:
- Statement Coverage
- Branch / Decision Coverage
- Condition Coverage
- Path Coverage
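For illustration, here is a minimal Python sketch (the function classify and its inputs are hypothetical) showing how a few test inputs satisfy statement coverage versus branch coverage:

```python
# A hypothetical function used to illustrate coverage criteria.
def classify(x):
    result = "small"
    if x > 10:
        result = "large"      # executed only when x > 10
    if x < 0:
        result = "negative"   # executed only when x < 0
    return result

# Statement coverage: every statement runs at least once.
# classify(20) and classify(-5) together execute all statements.
assert classify(20) == "large"
assert classify(-5) == "negative"

# Branch coverage additionally requires each condition to evaluate
# both True and False; classify(5) exercises the False branches.
assert classify(5) == "small"

# Path coverage would require every combination of branch outcomes
# (here up to 4 paths), which grows quickly with program size.
```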
99.What is meant by Decision / Branch Coverage?
Branch Coverage Testing seeks to ensure that every branch has been executed. Branch Coverage can be tested by probes inserted at points in the program that represent arcs from branch points in the flow graphs.
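A minimal sketch of the probe idea, using the hypothetical classify function from above and a hand-maintained hit counter (in practice a coverage tool such as coverage.py inserts such probes automatically):

```python
# Probes (counters) placed on each branch arc record which
# branches a test run actually executed.
branch_hits = {"x>10:T": 0, "x>10:F": 0}

def classify(x):
    if x > 10:
        branch_hits["x>10:T"] += 1   # probe on the True arc
        return "large"
    branch_hits["x>10:F"] += 1       # probe on the False arc
    return "small"

classify(20)
classify(3)

# Full branch coverage for this decision: both arcs were taken.
assert all(count > 0 for count in branch_hits.values())
```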
100.What is meant by Functional or Black box testing?
Tests are based on the requirements and functionality of the application.
Black box (or responsibility-based) testing views the unit or system under test as a black box and involves test cases that exercise the responsibilities to be met by the unit or system under test. In the case of units, the input/output specification of an individual unit is taken, and the unit is subjected to a set of test cases that attempt to find omissions and defects in the implementation vis-à-vis the requirements.
101.What are the important techniques of Black box testing?
The important techniques of Black box testing are:
- Equivalence Partitioning
- Boundary Value Analysis
- Error Guessing
- Cause Effect Graphing
102.What is meant by Equivalence Partitioning?
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are defined.
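As an illustration of guideline 1, here is a minimal sketch assuming a hypothetical age field that accepts values in the range 18 to 60:

```python
# Hypothetical example: an input field that accepts ages 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

# Guideline 1: a range yields one valid and two invalid classes.
valid_class   = 35    # representative of 18 <= age <= 60
invalid_below = 10    # representative of age < 18
invalid_above = 75    # representative of age > 60

assert is_valid_age(valid_class) is True
assert is_valid_age(invalid_below) is False
assert is_valid_age(invalid_above) is False
```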
103.What is meant by Boundary Value Analysis?
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
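Continuing the hypothetical age-field example (range 18 to 60), a minimal sketch of the boundary values that guideline 1 calls for:

```python
# Hypothetical example: boundary values for an age field 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

# Guideline 1: test a, b, and values just below and just above each.
boundary_cases = {
    17: False,  # just below a
    18: True,   # a itself
    19: True,   # just above a
    59: True,   # just below b
    60: True,   # b itself
    61: False,  # just above b
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) is expected, f"failed at boundary {age}"
```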
104.What is meant by Error Guessing?
A test case design technique where the experience of the tester is used to postulate what faults might exist and to design tests specifically to expose them. The basic idea is to make a list of possible errors or error-prone situations and then develop tests based on that list. What are the most common error-prone situations we have seen before? Defect histories are useful: there is a high probability that the kinds of defects that have occurred in the past are the kinds that will occur in the future.
105.What is meant by Cause Effect Graphing?
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
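A minimal sketch of steps 3 and 4, assuming a hypothetical login module with two causes (valid user, valid password) and two effects (grant access, show error):

```python
# A decision table for the hypothetical login module: each row is
# one rule mapping a combination of causes to the expected effect.
decision_table = [
    # (valid_user, valid_password) -> expected effect
    ((True,  True),  "grant access"),
    ((True,  False), "show error"),
    ((False, True),  "show error"),
    ((False, False), "show error"),
]

def login(valid_user, valid_password):
    return "grant access" if valid_user and valid_password else "show error"

# Step 4: each decision-table rule becomes one test case.
for (user_ok, pwd_ok), expected in decision_table:
    assert login(user_ok, pwd_ok) == expected
```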
107.What are the different types of Testing?
1.Unit Testing
Unit testing is the first stage of testing and immediately follows the coding stage. The unit testing stage ensures that each unit behaves correctly. Unit testing meets the first objective.
The goal of unit testing is to uncover defects using formal techniques like Boundary Value Analysis (BVA), Equivalence Partitioning, and Error Guessing. Defects and deviations in date formats, special requirements on input conditions (for example, a text box where only numeric or only alphabetic characters should be entered), and selections based on combo boxes, list boxes, option buttons, and check boxes would be identified during the unit testing phase.
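A minimal sketch of such a unit test, assuming a hypothetical numeric-only text box validator and Python's unittest framework:

```python
import unittest

# Hypothetical unit under test: a numeric-only text box validator.
def accepts_numeric_only(text):
    return text.isdigit()

class TestNumericField(unittest.TestCase):
    def test_numeric_input_accepted(self):
        self.assertTrue(accepts_numeric_only("12345"))

    def test_alphabetic_input_rejected(self):
        self.assertFalse(accepts_numeric_only("abc"))

    def test_empty_input_rejected(self):   # boundary: empty string
        self.assertFalse(accepts_numeric_only(""))

if __name__ == "__main__":
    unittest.main()
```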
2.Integration Testing
Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. Integration testing tests the interactions between different units.
Usually, the following methods of Integration testing are followed:
A. Top-down Integration approach.
B. Bottom-up Integration approach.
A). Top-Down Integration Approach
In top-down testing, the topmost layer of units is tested first, followed by the next lower layer of units. While testing the second layer of units, the already tested top-layer units are used.
B). Bottom-up Integration Approach
In bottom-up testing, all the lowest-level units are tested first, and then the higher-level units are tested. The units already tested are used while testing the higher-level modules.
3.System Testing
Black-box type testing that is based on the overall requirements specification and covers all combined parts of a system. System testing is done by integrating all the modules to form the complete system as stated in the SRS document; the tests conducted on this entire system are called system testing, and it is purely a validation exercise. The reference document for system testing is the SRS document. This testing is performed to ensure the system is fully functional as specified in the SRS.
The following types of testing come under system testing:
a) Recovery Testing
b) Security Testing
c) Stress Testing
d) Performance Testing
4.Recovery Testing
This testing checks how the system can be brought back to normal functioning when the integrity of the system fails; that is, it tests how well a system recovers from crashes, hardware failures, or other catastrophic problems.
If recovery is automatic, reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean time to repair (MTTR) is evaluated to determine whether it is within acceptable limits.
5.Security Testing
This testing is done with the intent of checking how far the system can withstand external threats and of ensuring that the integrity of the system is properly maintained. During security testing, password cracking, unauthorized entry into the software, and network security are all taken into consideration. It tests how well the system protects against unauthorized internal or external access, willful damage, etc.
6.Stress Testing
This testing is done by subjecting the system to abnormal conditions, such as heavy load, complex database queries, large integer values, and performing the same operation a number of times, in order to find out at what point the system breaks.
7.Performance Testing
Performance testing is designed to test run time performance of software within the context of an integrated system. Performance tests are often coupled with stress testing and often require both hardware and software infrastructure. That is, it is necessary to measure resource utilization in an exacting fashion.
8.Alpha Testing
Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. The application is tested at the developer's site.
9.User Acceptance Testing (UAT)
User Acceptance testing occurs just before the software is released to the customer. The end-users along with the developers perform the User Acceptance Testing with a certain set of test cases and typical scenarios.
10.Installation Testing
Testing of full, partial, or upgrade install/uninstall processes.
11. Beta Testing
Testing done when development and testing are essentially complete and final bugs and problems need to be found before the final release. The developer will not be present at the customer's site. So, the beta test is a 'live' application of the software in an environment that cannot be controlled by the developer.
12. Load Testing
This testing is performed to check how the system withstands heavy loads and to find out at what load the performance of the system degrades. For example, a web site may be tested under a range of loads to determine at what point the system's response time degrades or fails.
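A minimal sketch of the idea, using a simulated request handler and a hypothetical server capacity rather than a real web site:

```python
import time
from concurrent.futures import ThreadPoolExecutor

SERVER_CAPACITY = 20  # hypothetical: handles 20 requests at once

def handle_request():
    time.sleep(0.01)  # stand-in for real server work

def measure(concurrent_users):
    """Submit one request per user and time the whole batch."""
    start = time.perf_counter()
    workers = min(concurrent_users, SERVER_CAPACITY)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(concurrent_users):
            pool.submit(handle_request)
    return time.perf_counter() - start

# Step up the load and watch where response time starts to degrade:
# beyond SERVER_CAPACITY, requests queue and total time grows.
for users in (1, 10, 50, 100):
    print(f"{users:4d} users -> {measure(users):.3f}s total")
```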
13. Volume Testing
Testing where the system is subjected to large volumes of data.
14. Regression Testing
When a new version or build comes in for testing, testing should first be done to ensure that the previous functionality of the program has not been disturbed by fixing errors or adding new functionality. The new build is tested by executing the test cases that were successful on the previous version or build. A test suite is used here; a test suite is one that contains interesting test cases together with their expected results for future use.
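A minimal sketch, with a hypothetical operation under test, of how a suite of previously passing cases with expected results is re-run against a new build:

```python
# A regression suite: test cases with expected results, re-executed
# against every new build to confirm old behaviour is preserved.
regression_suite = [
    # (input, expected result) -- passed on the previous build
    (("add", 2, 3), 5),
    (("add", -1, 1), 0),
    (("mul", 4, 5), 20),
]

def run_operation(op, a, b):  # stand-in for the new build's code
    return a + b if op == "add" else a * b

def run_regression(suite):
    return [(case, expected) for case, expected in suite
            if run_operation(*case) != expected]

# An empty failure list means the new build preserved old behaviour.
assert run_regression(regression_suite) == []
```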
15. Sanity / Smoke testing
Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort.
This is the initial testing performed when a new build is available, in order to check whether the build is ready for further or major testing, i.e., whether the application can be accepted for further testing. This is called smoke testing. Performing the same check on a new version of already released software is sanity testing.
16. Database Testing
Check the integrity of database field values.
17.Ad hoc Testing
Testers who are very comfortable with the application they are testing do this testing. It is done randomly, without any test cases, with the intent of finding errors.
18.Compatibility Testing
Testing how well the software performs in a particular hardware, software, operating system, and network environment.
19.Comparison Testing
Comparing the software's strengths and weaknesses with those of competing products.
20.Usability testing
Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer.
108.What is meant by Stubs and Drivers?
If we are following the top-down integration approach we need stubs, and when we are using the bottom-up approach we need drivers.
Stubs are basically "called functions" and drivers are "calling functions".
They are needed because, when you are integrating your system, you may not have all the components that are required to test the components that are available.
Example: suppose your Module A calls functions from Modules B and C, and you are asked to test Module A when B and C are not available to you. You will need to come up with some sort of dummy code to mimic Modules B and C so that A can make its function calls. This dummy piece of code is called a stub.
Similarly, if you have B and C ready but do not have A, you need dummy code to drive B and C. This is called a driver.
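A minimal Python sketch of this example, with hypothetical function names standing in for Modules A, B, and C:

```python
# --- Stubs: dummy stand-ins for the unavailable Modules B and C ---
def module_b_lookup(key):          # stub mimicking Module B
    return "stub-value"            # canned answer, no real logic

def module_c_save(record):         # stub mimicking Module C
    return True                    # pretend the save always succeeds

# Module A, the unit actually under test, calls into B and C.
def module_a_process(key):
    value = module_b_lookup(key)
    saved = module_c_save({"key": key, "value": value})
    return value if saved else None

assert module_a_process("id-1") == "stub-value"

# --- Driver: dummy calling code used when A is the missing piece ---
def driver_for_b_and_c():
    # Exercises B and C directly, standing in for Module A.
    assert module_b_lookup("id-1") is not None
    assert module_c_save({"key": "id-1"}) is True

driver_for_b_and_c()
```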