
Wednesday, 7 November 2012

Objectives of Testing



1. To identify errors at an early stage.

2. To ensure that customer requirements are met.

3. To ensure that the product is reliable.

4. To ensure that the product is maintainable.

5. To ensure that the product is bug free.

6. To ensure the robustness, compatibility & security of the product.
Monday, 5 November 2012

Entry & Exit criteria for testing


Start Criteria for Testing
 
We have to meet certain criteria before starting a testing phase in the SDLC:
1. We can start functional testing once partial code or the full product is ready.
2. The code has passed the unit tests or a smoke test.
3. We can start system testing once the code has successfully completed integration testing and unit testing.
4. We can start unit testing once the code review has been completed successfully.


Stop Criteria for Testing

The following criteria indicate when to stop the testing process:

1. When all the test cases have been executed without producing any errors
2. When testing cannot proceed further because it depends on other tools or methods
3. When test coverage is complete
4. When the budget is exhausted
5. When the bugs left over are of low priority
Wednesday, 31 October 2012

Static testing


In static testing we do not execute the code; instead we check the design documents and walk through the code manually to find errors. The main intention is to find bugs at an early stage of the SDLC. We give review comments. The documents that we review are mostly:

1. Requirement Specification document
2. Design Document 
3. Test plan 
4. Test cases/Scripts. 

Static testing techniques:

Informal Reviews: Here we review the code or the documents without any formal meeting setup. We simply review them and give informal comments, and the review comments are not documented in this technique.

Technical Reviews: A team of members reviews the document. The team consists of peers who have knowledge of the technical aspects of the project. They review the design document, test plan & test cases and give review comments, which are documented.

Walkthrough: The author walks the team through the code. The author arranges the meeting, and the team can ask questions if they have any. Review comments are documented.

Inspection: A team of members inspects the code. The author has to invite the members, and managers are usually part of the inspection team. One inspector and normally more than two readers conduct the inspection. Review comments are documented by the author so that the errors found in the code can be rectified. Inspection follows a strict process, and a checklist is maintained by the reviewers.

Code Review: Reviewing the source code without executing it. It checks for modularity, the syntax of the code and the coding standards followed.

System testing


System testing is testing the application as a whole, that is, testing both the hardware and software requirements in a complete, integrated system, or testing the complete product. System testing is a form of black box testing, since the tester need not have any knowledge of the internal code structure. It is a series of tests carried out to perform full testing of a product, and it is also called end-to-end testing. Some of the types of testing included in system testing are:

1. Performance testing
Testing the performance of the application to determine how it behaves under heavy load in terms of responsiveness and stability.

2. Load testing
Testing the behavior of the application under a normal load. During peak hours the server and the database are monitored to find out the response time. This type of testing is mostly automated (a minimal sketch appears after this list of testing types).

3. Stress Testing
This type of testing is carried out to find the breaking point of the application, i.e. the maximum load that the application can tolerate. It tests the behavior of the application under extreme load.

4. Endurance Testing
Throughput and response time are monitored while a continuous load is applied to the application. The application is tested continuously for 48 or 72 hours to determine the system behavior.

5. Spike testing
Suddenly increasing the load at a specific point in time to determine the behavior of the application.

6. Compatibility testing
Testing the application to determine its compatibility, for example whether the application runs on multiple hosts and supports different platforms (operating systems) and compilers.

7. GUI Testing
Graphical User Interface testing. This testing is usually carried out manually. We mainly test the user friendliness of the application and check the icons, look & feel, drag & drop, table selection and the colors of the screens.

8. Usability Testing
Testing the whole product to determine whether the product developed satisfies its intended purpose and is easy to use for the end users (customers).

9. Regression Testing
Testing the application once the developers have delivered fixes for the bugs raised or once the requirements have changed. The purpose of this testing is to ensure that no old bugs reappear and that fixing the bugs has not introduced new bugs during development.
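As a rough illustration of the automated load testing mentioned above, the sketch below fires a fixed number of concurrent requests and reports the response times. The target URL and the numbers of simulated users and requests are made-up assumptions, and it uses only the Python standard library; real projects normally rely on a dedicated load-testing tool.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://example.com/"   # hypothetical endpoint under test
CONCURRENT_USERS = 20                # assumed "normal" load
REQUESTS_PER_USER = 5

def timed_request(_):
    """Send one request and return how long it took, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    durations = list(pool.map(timed_request,
                              range(CONCURRENT_USERS * REQUESTS_PER_USER)))

print(f"requests sent:         {len(durations)}")
print(f"average response time: {sum(durations) / len(durations):.3f}s")
print(f"slowest response time: {max(durations):.3f}s")
```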

Tuesday, 30 October 2012

Bug Life Cycle


Whenever the tester finds a defect in the solution, it is created in the tracker as a new issue with a priority & severity assigned.

Once the issue has been accepted by the developers as valid (the issue is reproducible and it is a deviation from the requirement), it is moved to the Open state.

If the developer is not able to reproduce the defect raised by the tester, the status is changed to Not Reproducible.

If the issue reported is not a valid issue, the status will be Rejected.

If the issue raised by the tester is not covered in the requirement but is a valid one, the client may accept the issue as a Change Request.

If the same issue has been raised by a different reporter, or if the issue is already covered by a different defect, the status will be Duplicate.

The team lead assigns the bug to the corresponding team member, so the bug moves to the Assigned state.

Once the developer starts looking into the root cause of the problem, the status is changed to Work in Progress.

Once the issue has been resolved by the developer, it is moved to the Fixed/Resolved state.

Once the reporter retests the same issue and confirms that it is no longer reproducible, the status is moved to Closed.

If the reporter retests the same issue and confirms that it is still reproducible, the reporter reopens the same issue.

If the team accepts that the issue is valid but its priority is low, so that it can be fixed in the next phase or next build, the status will be Deferred.

If the developer needs some clarification on the bug raised, or has doubts about fixing the defect, the status will be Feedback.

These procedures are followed in DDTS (Distributed Defect Tracking System).
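The flow above can be summarised as a simple table of allowed status transitions. The sketch below is only an illustration of the life cycle described in this post; the status names come from the text, not from any particular tracker's API.

```python
# Allowed status transitions in the bug life cycle described above
# (illustrative only; trackers such as DDTS define their own workflows).
BUG_TRANSITIONS = {
    "New":              {"Open", "Rejected", "Not Reproducible", "Duplicate", "Change Request"},
    "Open":             {"Assigned", "Deferred"},
    "Assigned":         {"Work in Progress", "Feedback"},
    "Work in Progress": {"Fixed/Resolved", "Feedback"},
    "Feedback":         {"Work in Progress"},
    "Fixed/Resolved":   {"Closed", "Reopened"},
    "Reopened":         {"Assigned"},
    "Deferred":         {"Assigned"},
}

def can_move(current, target):
    """Return True if the life cycle allows this status change."""
    return target in BUG_TRANSITIONS.get(current, set())

assert can_move("New", "Open")
assert can_move("Fixed/Resolved", "Reopened")
assert not can_move("Closed", "Assigned")   # a closed bug stays closed
```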

Black Box Testing & Design Techniques



Black box testing tests the functionality of the application without knowledge of the internal structure of the code. Black box testing is also called specification-based testing. It can be applied at all levels of testing, namely system, integration and acceptance testing.

Black box testing design techniques are
  • Decision table testing
  • All pairs testing
  • State transition tables
  • Equivalence partitioning
  • Boundary value analysis


Decision Table testing

Consider an application which works based on a simple true/false condition of a variable. In decision table testing we test both conditions: what happens when the variable is true and what happens when it is false. This is much like if-then-else or switch cases, with each condition having some actions to perform, so we need to test all the actions.
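As a minimal sketch of this technique: the `can_withdraw` rule below is hypothetical, but it shows how every row of the decision table becomes one test case.

```python
# Hypothetical rule under test: a withdrawal is allowed only when the user
# is logged in AND the account balance is sufficient.
def can_withdraw(logged_in, sufficient_balance):
    return logged_in and sufficient_balance

# Decision table: one row per combination of conditions and the expected action.
DECISION_TABLE = [
    # logged_in, sufficient_balance, expected_result
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

for logged_in, balance_ok, expected in DECISION_TABLE:
    assert can_withdraw(logged_in, balance_ok) == expected
```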

All pairs testing

Consider an application where we need to supply values for three input parameters. We need to test combinations of inputs that cover all the important scenarios, for example:
  • If we give values for all 3 parameters
  • If we do not give values for any of the 3 parameters
  • If we give values for only 2 parameters

This testing is mainly about producing an optimal number of test cases. We do not need to execute every possible combination of the three parameters; a smaller set of combinations that covers every pair of parameter values can achieve the same result.
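As a sketch only: the greedy function below keeps picking full combinations until every pair of parameter values has been covered at least once. The parameter names and values are invented, and real projects usually generate pairwise cases with a dedicated tool rather than hand-rolled code like this.

```python
from itertools import combinations, product

def pairwise_cases(parameters):
    """Greedy sketch: keep picking full combinations until every pair of
    values (for every pair of parameters) appears in at least one case."""
    names = list(parameters)
    values = list(parameters.values())
    index_pairs = list(combinations(range(len(names)), 2))
    uncovered = {(i, a, j, b)
                 for i, j in index_pairs
                 for a, b in product(values[i], values[j])}
    candidates = list(product(*values))
    selected = []
    while uncovered:
        # Pick the candidate that covers the largest number of uncovered pairs.
        best = max(candidates, key=lambda c: len(
            {(i, c[i], j, c[j]) for i, j in index_pairs} & uncovered))
        selected.append(dict(zip(names, best)))
        uncovered -= {(i, best[i], j, best[j]) for i, j in index_pairs}
    return selected

# Three hypothetical parameters with two values each: 8 full combinations,
# but only a few cases are needed to cover every pair of values.
cases = pairwise_cases({
    "browser": ["Chrome", "Firefox"],
    "os":      ["Windows", "Linux"],
    "user":    ["guest", "admin"],
})
print(len(cases), "cases cover every pair (instead of 8 full combinations)")
```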

State transition tables

This table shows the current state of the machine and the state it will move to based on the input. Consider an application where the machine moves from S1 to S2 if the input is 1, and stays in the same state if the input is 0 (similarly, from S2 an input of 1 moves it back to S1). We can represent this as:


State | Input '0' | Input '1'
S1    | S1        | S2
S2    | S2        | S1
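A state-transition test then walks the machine through a sequence of inputs and checks each resulting state against the table. A minimal sketch of the toggle machine above:

```python
# Transition table for the machine described above:
# on input '1' the state toggles between S1 and S2, on '0' it stays put.
TRANSITIONS = {
    ("S1", "0"): "S1",
    ("S1", "1"): "S2",
    ("S2", "0"): "S2",
    ("S2", "1"): "S1",
}

def run(state, inputs):
    """Feed a sequence of input symbols and return the final state."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

# Each test case exercises one row (or a short path) of the table.
assert run("S1", "0") == "S1"
assert run("S1", "1") == "S2"
assert run("S2", "1") == "S1"
assert run("S1", "110") == "S1"   # S1 -> S2 -> S1 -> S1
```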

Equivalence partitioning

This involves partitioning the input data into partitions such that one test case is enough to cover each partition. Consider an application that works only if the value supplied by the user is between 10 and 15. We can divide the inputs as:
  • below 10 as one partition (invalid)
  • 10 to 15 as the second (valid)
  • above 15 as the third (invalid)

So while testing, any one value from each partition is enough. After partitioning we apply boundary value analysis in order to select the best-suited test case from each partition.
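Concretely, one representative value per partition is enough. The sketch below assumes a hypothetical `accepts` function that implements the 10-to-15 rule from the example:

```python
def accepts(value):
    """Hypothetical rule from the example: only values from 10 to 15 are valid."""
    return 10 <= value <= 15

# One representative value per equivalence partition.
PARTITION_CASES = [
    (5,  False),   # below 10  -> invalid partition
    (12, True),    # 10 to 15  -> valid partition
    (20, False),   # above 15  -> invalid partition
]

for value, expected in PARTITION_CASES:
    assert accepts(value) == expected
```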

Boundary value analysis

Here we have to consider the boundary values for testing. Consider a refrigerator application that should defrost when the temperature is below 0 degrees and should start cooling when the temperature is above 10 degrees.

So the boundary conditions will be
  • Below 0 degrees (-1, -2, …)
  • Above 10 degrees (11, 12, …)
  • At 0 degrees
  • At 10 degrees
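A boundary-value test for the refrigerator example might look like the sketch below; the `mode` function is a made-up stand-in for the controller, chosen so that 0 and 10 are the interesting boundaries:

```python
def mode(temperature):
    """Hypothetical controller: defrost below 0, cool above 10, idle otherwise."""
    if temperature < 0:
        return "defrost"
    if temperature > 10:
        return "cool"
    return "idle"

# Test values clustered around the two boundaries.
BOUNDARY_CASES = [
    (-1, "defrost"),   # just below the lower boundary
    (0,  "idle"),      # exactly at the lower boundary
    (10, "idle"),      # exactly at the upper boundary
    (11, "cool"),      # just above the upper boundary
]

for temperature, expected in BOUNDARY_CASES:
    assert mode(temperature) == expected
```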


With these black box test design techniques we can test the application with an optimal number of test cases, and the selected test cases are the most suitable ones, so we can reduce defect slippage.

Wednesday, 24 October 2012

Levels of Testing


There are 4 main levels of testing namely,
Unit testing
Mostly done by the developers. They verify the internal structure of the code. Unit test cases are created by the developers to perform unit testing. One should have knowledge of the code to perform unit testing. It is done in the development environment.
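As a minimal illustration, a unit test exercises one function in isolation. The sketch below uses Python's built-in unittest module with a made-up `add` function as the unit under test:

```python
import unittest

def add(a, b):
    """Made-up unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```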

Integration Testing
Done by the testers. It involves testing the application module by module; once the individual modules are working fine, the testers start combining (integrating) the modules and verify whether the interrelated functionality between the modules works fine. Integration test cases are created by the testers. It is done in the test environment.

System Testing
Done by the testers. It involves testing the application as a whole to verify whether the end product meets the customer requirements. System test cases are created by the testers. It is done in the test environment.

Acceptance Testing
Done by the testers, by the customers, or by both. The application is tested in a simulated real environment. It is also called UAT (User Acceptance Testing). It is carried out in two different situations:

1) Before accepting the build from the developers, the testers do a quick sanity check called smoke testing to verify that the major functionalities work without any crash, so that the application can be subjected to more rigorous testing.

2) Alpha testing: done by a separate team of testers in a simulated real environment before delivering the build to the customers. Beta testing: done by the customers in the real environment, by releasing a beta version of the software to a limited number of customers.

Testing Life Cycle
1. Requirement Analysis
The testing team studies the requirements delivered by the customers to understand and identify the functional & non-functional requirements. They understand the priorities and prepare the requirement traceability matrix. They also identify whether automation is feasible or not.
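A requirement traceability matrix is essentially a mapping from each requirement to the test cases that cover it. A minimal sketch (the requirement and test case IDs are invented):

```python
# Hypothetical requirement-to-test-case mapping (IDs are invented).
RTM = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],            # gap: no test case covers this requirement yet
}

uncovered = [req for req, tests in RTM.items() if not tests]
print("Requirements without test coverage:", uncovered)
```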

2. Test Planning
The manager is involved in preparing the test plan. The team estimates the cost and effort for the project. The manager calculates the number of resources required to test the application and the number of days needed to complete testing with the given resources, and identifies the roles & responsibilities of each tester. The test team identifies the tools needed and any training required.

3. Test Case Development
The testers create the test cases and the test scripts for automation based on the requirements of the application. Rework on test cases and scripts is performed based on the review comments. Test data is identified & created by the testers.

4. Test Environment Setup
Testers identify the hardware and software requirements from the specification document. In this phase the testers get the test environment ready and take care of connectivity-related setup.

5. Test Execution
In this phase of the STLC the testers actually execute the application against the set of test cases or scripts they have prepared.

6. Test Reporting
Testers document the test results and map the defects to the test cases. They raise the defects in the defect tracker and track them till closure.

7. Test Result Analysis
A CCB (Change Control Board) meeting is held with the client, where the bugs are analyzed to determine whether they are valid and can be assigned to the appropriate team. The board also determines whether each issue is a defect or a CR (Change Request).

8. Defect Retesting
Once a defect has been fixed by the developers, the testers retest it and perform regression testing of the application.

9. Test Closure
Once the exit criteria have been met, the testers stop testing and prepare the test metrics. They document the challenges faced during testing and analyze the results to find the severity of the bugs in each module, so that they know the defect distribution rate.