Thursday, March 13, 2008

Introduction to Testing

Software Testing - Introduction

Definition

Testing is a process of evaluating a system by manual or automatic means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.

Testing is no longer an adjunct to the system development life cycle (SDLC), but rather a key part of it.

Why Software Testing?

* Software testing is important because, if it is not done properly, defective software can cause mission failure and degrade operational performance and reliability.

* Effective software testing delivers quality software products satisfying user’s requirements, needs and expectations.

* If done poorly, it leads to high maintenance cost and user dissatisfaction.

Who participates in testing?

* Software customer

* Software user

* Software developer

* Software Tester

* Information service management

* Senior organization management

Objective of Software Tester

* To find bugs

* Find them as early as possible.

* Make sure they get fixed

Quality Principles

What is Quality?

* Quality is defined as meeting the customer’s requirements for the first time and every time.

* Quality is much more than the absence of defects; it is what allows us to meet customer expectations.

Five Perspectives of quality

* Transcendental View: quality can be recognized but not defined

* Producer View: quality is conformance to the product specification

* Product View: quality is tied to the characteristics of the product

* User View: quality is conformance to the user’s requirements

* Value-based View: quality is based on how much a customer is willing to pay

Why Quality?

Quality is a key factor affecting an organization’s long-term performance; it improves productivity and competitiveness in any organization.

Quality Assurance Vs Quality Control

Quality Assurance

Quality assurance is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs.

* It is process oriented.

* Defect prevention based.

Quality Control

Quality control is the process by which product quality is compared with applicable standards and the action taken when non-conformance is detected.

* It is product oriented.

* Defect detection based.

Software Process

* A process is a particular method of doing something, generally involving a number of steps or operations.

* The process that deals with the technical and management issues of software development is called the software process.

Continuous improvement cycle

* Plan (P): Devise a plan. Define your objective and determine the strategy and supporting methods required to achieve it.

* Do (D): Execute the plan. Create the conditions and perform the necessary training to carry out the plan.

* Check (C): Check the results. Determine whether work is progressing according to the plan and whether the expected results are being obtained.

* Action (A): Take the necessary action. If the check reveals that work is not being performed according to plan, or that results are not as anticipated, devise measures for appropriate corrective action.

Standards/ Certifications

* ISO - International Organization for Standardization

* SEI CMM - Software Engineering Institute Capability Maturity Model (Carnegie Mellon University)

* PCMM - People Capability Maturity Model

* SPICE - Software Process Improvement and Capability Determination

* NIST - National Institute of Standards and Technology

* DoD - Department of Defense

* Six Sigma - Zero-defect orientation

CMM Levels

* CMM Level 5 - Optimizing

* CMM Level 4 - Managed

* CMM Level 3 - Defined

* CMM Level 2 - Repeatable

* CMM Level 1 - Initial

Management Activities

Project Management

Project management is concerned with the overall planning and co-ordination of a project from inception to completion aimed at meeting the client's requirements and ensuring completion on time, within cost and to required quality standards.

Need for Project Management

Subject to budget and schedule constraints, it includes:

* Project Planning

* Project Scheduling

* Project Costing

* Monitoring Reviews

* Report Writing & Presentation

Risk Management

Risk Management is concerned with identifying risks and drawing up plans to minimize their effect on a project.

A risk is the probability that some adverse circumstance will occur.

Requirement Management

Requirement Management is managing changes in the evolving software in a cost effective manner.

Manual Configuration Management (CM)

Project Name

* CM System (Contains all the changes or versions)

* Shared (Current Version)

* Draft (Forecasted)

Automation Tools (CM) Types of Tools:

* VSS - Visual Source Safe (Microsoft Product)

* Rational Clear Case (Rational Corporation Product)

* CVS - Concurrent Version System. (Free tool)

Software configuration management plan (SCMP)

* CI - Configurable Item

* CR - Change Request

* CCB - Change Control Board

Testing fundamentals

What is primary role of software testing?

* Determine whether the system meets specifications (producer view)

* Determine whether the system meets business and user needs (customer view)

A defect is a variance from a desired product attribute.

Two categories of defects are:

* Variation from product specifications

* Variation from customer/user expectation

Classifications of Defects

* Wrong - The specification has been implemented incorrectly.

* Missing - A specified or wanted requirement is not in the built product.

* Extra - A requirement incorporated into the product that was not specified.

Classification of Testing

Static Testing

Testing techniques that involve analysis of application components and/or processing results without the actual execution of the application. A quality assurance (QA) review is a form of static testing.

Dynamic Testing

Testing, based on specific test cases, by execution of the test object or running programs.

Techniques used are determined by the type of testing that must be conducted

Functional and Structural

Functional Testing

* Structure of the program is not considered.

* Test cases are decided based on the requirements or specification of the program or module

* Hence it is often called “Black Box Testing”.

Structural Testing

* Concerned with testing the implementation of the program.

* Focus on the internal structure of the program.

* The intent of structural testing is not to exercise all the different input or output conditions but to exercise the different programming structures and data structures in the program.

Testing Levels

* Unit Testing

* Integration Testing

* System Testing

* User Acceptance Testing

Unit Testing

* Test each module individually.

* Follows white box testing (logic of the program); a minimal example follows.
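
For illustration (this sketch is not from the original article), a minimal unit test using Python's unittest module might look like the following; the credit-limit rule is a hypothetical example, reused later in this article.

```python
import unittest


def credit_limit_valid(amount):
    """Hypothetical unit under test: limits of $10,000-$15,000 are valid."""
    return 10_000 <= amount <= 15_000


class CreditLimitTest(unittest.TestCase):
    def test_valid_limit_is_accepted(self):
        self.assertTrue(credit_limit_valid(12_000))

    def test_invalid_limit_is_rejected(self):
        self.assertFalse(credit_limit_valid(9_000))


if __name__ == "__main__":
    unittest.main()
```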

Integration testing

* Integrate two or more modules and test for the communication between the modules.

* Follows white box testing (testing the code).

System Testing

* Confirms that the system as a whole delivers the functionality originally required.

* Follows black box testing.

User Acceptance Testing (UAT)

* Building the confidence of the client and users is the role of the acceptance test phase.

* It depends on the business scenario.

White Box Testing

* Also known as glass box testing.

* A software testing technique whereby explicit knowledge of the internal workings of the item being tested is used to select the test data.

* White box testing uses specific knowledge of programming code to examine outputs.

White Box Testing Techniques

* Statement coverage – execute all statements at least once.

* Decision coverage - execute each decision direction at least once.

* Condition coverage – execute each condition within a decision with all of its possible outcomes at least once (a coverage sketch follows)
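
To make the coverage terms concrete, here is a small sketch (the discount function is a hypothetical example, not taken from the article). With two independent decisions, decision coverage requires each `if` to evaluate both true and false across the test set; condition coverage becomes distinct only when a decision combines several conditions (e.g. `a and b`).

```python
def discount_rate(amount, is_member):
    # Two decisions: membership and order size.
    rate = 0                 # discount in whole percent
    if is_member:            # decision 1
        rate += 5
    if amount > 100:         # decision 2
        rate += 10
    return rate


# Statement coverage: every line above executes at least once.
# Decision coverage: across the two tests each decision is both true and false.
assert discount_rate(50, False) == 0     # both decisions false
assert discount_rate(200, True) == 15    # both decisions true
```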

Black Box Testing

A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed

Black Box Testing Techniques

* Equivalence Partitioning

* Boundary Analysis

* Error Guessing

Equivalence Partitioning

A subset of data that is representative of a larger class

For example, a program that edits credit limits within a given range ($10,000 - $15,000) would have three equivalence classes (a short sketch follows this list):

* Less than $10,000 (invalid)

* Between $10,000 and $15,000 (valid)

* Greater than $15,000 (invalid)
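
A minimal sketch of how those classes translate into test data (the accept_credit_limit function is a hypothetical stand-in for the program being tested):

```python
def accept_credit_limit(amount):
    """Hypothetical rule under test: $10,000-$15,000 inclusive is valid."""
    return 10_000 <= amount <= 15_000


# One representative value is chosen from each equivalence class.
representatives = [
    ("below range (invalid)", 5_000, False),
    ("within range (valid)", 12_500, True),
    ("above range (invalid)", 20_000, False),
]

for name, value, expected in representatives:
    assert accept_credit_limit(value) == expected, name
```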

Boundary Value Analysis

A technique that consists of developing test cases and data that focus on the input and output boundaries of a given function

In the same credit limit example, boundary analysis would test the following values (a sketch follows this list):

* Low boundary plus or minus one ($9,999 and $10,001)

* On the boundary ($10,000 and $15,000)

* Upper boundary plus or minus one ($14,999 and $15,001)
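
Using the same hypothetical validator, the boundary values listed above become a small data-driven test:

```python
def accept_credit_limit(amount):
    # Same hypothetical rule: $10,000-$15,000 inclusive is valid.
    return 10_000 <= amount <= 15_000


# Values on and around each boundary, paired with the expected outcome.
boundary_cases = [
    (9_999, False), (10_000, True), (10_001, True),
    (14_999, True), (15_000, True), (15_001, False),
]

for amount, expected in boundary_cases:
    actual = accept_credit_limit(amount)
    assert actual == expected, f"{amount}: expected {expected}, got {actual}"
```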

Error Guessing

* Based on the theory that test cases can be developed based on experience of the Test Engineer

* For example, where one of the inputs is a date, a test engineer might try February 29, 2001 - an invalid date, since 2001 is not a leap year (see the sketch below)
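
One way to script that particular guess with the standard library (a sketch only; a real test would exercise the application's own date-entry field):

```python
from datetime import date

# February 29, 2001 is invalid because 2001 is not a leap year;
# a robust program should reject it gracefully rather than crash.
try:
    date(2001, 2, 29)
    print("accepted - potential defect")
except ValueError as err:
    print("rejected as expected:", err)
```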

Incremental Testing

A disciplined method of testing the interfaces between unit-tested programs as well as between system components

Incremental Testing Types

* Top-down

* Bottom-up

Top-Down

Begins testing from the top of the module hierarchy and works down to the bottom using interim stubs to simulate lower interfacing modules or programs

Bottom-Up

* Begins testing from the bottom of the hierarchy and works up to the top

* Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display the test output (see the sketch below)
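
A hedged sketch of the two approaches (all module names are hypothetical): in top-down testing the lower module is replaced by a stub, while in bottom-up testing a driver supplies inputs to the real lower module and reports its outputs.

```python
# --- Top-down: the high-level module is real, the lower module is a stub ---
def tax_service_stub(amount):
    # Stub standing in for the not-yet-integrated tax module.
    return amount * 0.10


def compute_invoice(amount, tax_service=tax_service_stub):
    # High-level module under test; it calls the (stubbed) lower module.
    return amount + tax_service(amount)


# --- Bottom-up: the low-level module is real, exercised by a driver ---
def real_tax_service(amount):
    return amount * 0.10


def driver():
    # Driver provides test input, calls the unit, and displays the output.
    for amount in (0, 100, 250):
        print("tax for", amount, "=", real_tax_service(amount))


print(compute_invoice(100))  # 110.0
driver()
```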

Thread Testing

* A technique, often used during early integration testing

* Demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application

Verification and Validation

Verification

Verification is the process of confirming that the software conforms to its specification.

Validation

Validation is the process of confirming that the software meets the user’s requirements.

Review types

* In-process review

* Phase-end / decision-point / milestone review

* Post-implementation / post-mortem review

Classes of Reviews

* Informal Review (or) Peer review

* Semiformal Review (or) Walk Through

* Formal Review (or) Inspection

Informal

* Also called peer-reviews

* Generally one-to-one meeting

* No formal agenda or scheduling

* Results are not formally reported

* Occur as needed throughout each phase

Semiformal

* Facilitated by the author

* Presentation is made with comment throughout and at the end

* Reports are distributed to the participants

* Possible solutions for defects are not discussed

* Occur one or more times during a phase

Formal

* Facilitated by moderator

* Assisted by recorder

* Meetings are planned in advance

* Effectiveness depends directly on the preparation of participants; can be held at any time

Test Level Criteria

Entrance Criteria

Define the required conditions and standards that must be present for entry into the next stage of the development process.

Exit Criteria

Defines the standards that block the promotion of defective work to subsequent stages of the development process.

Requirements Exit Criteria

* Both functional and non-functional requirements are defined

* Clear and understandable requirements

* Requirements are measurable

* Test cases are measurable

Requirements Analysis Exit Criteria

* Functional and non-functional requirements are documented

* Test cases are clearly defined and reviewed

* System test cases are validated and approved

Design Exit Criteria

* Design reviewed for testability

* It should comply with applicable standard

* Design validated for completeness, correctness and requirements coverage

Code / Unit Test Exit Criteria

* Compile, Link and Load

* 100% code coverage

* Documented evidence that all paths are executed at least once

* Adequate program documentation

Integration Entrance Criteria

* Documented evidence that all basic paths are executed at least once

* Adequate program documentation

* Verify that correct unit of the system has been taken for integration

Integration Exit Criteria

* Successful execution of integration test plan

* No open defects with high severity

* Component Stability

System Entrance Criteria

* Successful execution of integration test cases

* No open defects with high severity

* 75-80% of the system functionality and 90% of major functionality are delivered.

System Exit Criteria

* Successful execution of system test cases

* Documented evidence of requirements coverage, including high-risk system components

* 100% of total system functionality delivered

Usability Testing

Determines how well the user will be able to understand and interact with the system.

This is done prior to the testing levels.

Vendor Validation Testing

* This can be conducted jointly by the software vendor and the acquiring team.

* Ensures that all the requested functionality has been delivered.

* Performed prior to accepting the software and installing it into production.

Sanity Testing

A sanity test or "smoke test" is a brief run-through of the main functionality of a computer program or other product. It gives a measure of confidence that the system works as expected prior to a more exhaustive round of testing.

Ad Hoc Testing

Testing carried out using no recognized test case design technique.

Localization Testing

The process of adapting software for a particular country or region. For example, the software must support the character set of the local language and must be configured to present numbers and other values in the local format.

Regression Testing

The selective retesting of a software system that has been modified, to ensure that bugs have been fixed, that previously working functions have not failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software. Also referred to as verification testing.

Alpha Testing

Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.

Beta Testing

Operational testing at a site not otherwise involved with the software developers.

Compatibility Testing

Testing whether the system is compatible with other systems with which it should communicate.

Benefits Realization Test

* It is a test or analysis conducted after an application is moved into production.

* To determine whether the application is likely to deliver the original benefits.

* This is conducted by the user or client group who requested the project.

Performance Testing

Load

* How many users a particular system can handle

* Operating the software with the largest possible data files

Stress

* Repeatedly running the load test

* Running the software under constrained conditions (low memory, low disk space)

* Testing with reduced or shared resources (for example, an external application such as a server, or external hardware such as a printer)

Testing life Cycle

* Test Plan: A document that defines the overall testing approach.

* Test Case Design: A document that defines what is selected for testing and describes the expected results; a set of procedures intended to find defects.

* Test Execution: Executing the test cases.

* Test Log: Pass and fail information is recorded in the test log.

* Defect Tracking: All failed items come under defect tracking, through to defect closure.

* Test Report: Reports are generated throughout the TLC; a final test summary report is produced at the end of testing to support decision making.

Test Plan Preparation

Test Objective: A test objective is simply a testing “goal” - a statement of what the tester is expected to accomplish during the testing activity.

Test strategy:

* Techniques that are going to be used during the testing.

* What types of tests must be conducted.

* What stages of testing are required? (E.g. unit, system)

Test Risk Analysis:

* A risk is a condition that can result in a loss; test risk analysis identifies the risks that may arise during testing.

* For each risk, estimate the probability that the negative event will occur.

Test schedule & Resources

Task        | Duration | Resources
Test plan   | 3        | 1
Test design | 5        | 3

Roles and Responsibilities

* Test Manager - Manages the entire testing activity and gives approvals.

* Test Leader - Prepares the test plan, reviews test cases, monitors defect tracking, and provides resources.

* Test Engineer - Prepares the test case design, test risk analysis, and reports.

Communications Approach

Defines how communications will take place among project stakeholders during testing.

Test Environments

* Software Requirements.

* Hardware Requirements.

* Tools that are needed.

Test Case Design (manual)

* Test Case ID: A unique number to identify the test case.

* Test Case Description: A short note about what is being tested.

* Test Case Procedure: Each step to be performed is listed in the procedure.

* Test Inputs (or) Test Data: The input data.

* Expected Result: The expected outcome of the test case.

* Actual Result: The outcome actually observed when the test is executed.

If the expected result equals the actual result, the test passes; otherwise it fails (a sketch of this record structure follows).
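
The same fields can be captured as a simple record, with the pass/fail rule applied automatically; the structure below is only an illustration of the layout described above.

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    case_id: str
    description: str
    procedure: list
    test_data: dict
    expected_result: str
    actual_result: str = ""

    def verdict(self):
        # Pass when the expected result equals the actual result.
        return "Pass" if self.expected_result == self.actual_result else "Fail"


tc = TestCase(
    case_id="TC-001",
    description="Credit limit within the valid range is accepted",
    procedure=["Open the credit form", "Enter $12,500", "Submit"],
    test_data={"credit_limit": 12_500},
    expected_result="Limit accepted",
    actual_result="Limit accepted",
)
print(tc.case_id, tc.verdict())  # TC-001 Pass
```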

Defect Tracking

All failed test executions and the resulting defects come under defect tracking.

* Defect ID: A unique number to identify the defect.

* Test Case ID: The test case that failed.

* Defect Description: Specifies the defect.

* Defect Recovery Procedure: Repeating the same test case, with any needed modifications, to confirm the fix.


Defect Status

New

Open

Fixed

Reopen

Rejected

Closed

Defect Severity

Critical

Major

Minor

Defect Priority

High

Moderate

Low

Metric

* A metric is a mathematical number that shows a relationship between two variables.

* Software metrics are measures used to quantify the software, the software development resources, and the software development process.

Classification of Metrics

Process Metric

A metric used to measure the characteristic of the methods, techniques and tools employed in developing, implementing and maintaining the software system

Product Metric

A metric used to measure the characteristic of the documentation and code.

Testing Metrics

User Participation

Use: To measure the involvement of the user in the testing effort.

= User participation test time / Total test time

Acceptance Criteria Tested

Use: This metric identifies the number of user-identified acceptance criteria that were evaluated during the testing process.

= Acceptance criteria verified / Total acceptance criteria

Detected Production Defects

Use: This metric shows the number of defects detected in production relative to the size of the application system (a worked computation of these ratios follows).

= Number of defects detected in production / Application system size
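
For illustration, the three ratios can be computed directly from raw counts; the numbers below are invented, and the application size is assumed here to be measured in KLOC.

```python
# Invented figures, purely to show how the ratios are formed.
user_test_time, total_test_time = 40.0, 160.0     # hours (assumed unit)
criteria_verified, total_criteria = 18, 20
production_defects, system_size_kloc = 6, 120.0   # size assumed in KLOC

user_participation = user_test_time / total_test_time
acceptance_criteria_tested = criteria_verified / total_criteria
production_defect_rate = production_defects / system_size_kloc

print(f"User participation:         {user_participation:.0%}")
print(f"Acceptance criteria tested: {acceptance_criteria_tested:.0%}")
print(f"Defects per KLOC:           {production_defect_rate:.3f}")
```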

Web Testing

* Functionality

* Usability

* Server Side Interfaces

* Client side Compatibility

* Performance

* Security

Functionality Testing

Links

The objective is to check all the links in the website.

* All Internal Links

* All External Links

* All mailto links

* Check for broken links (a small link-check sketch follows)
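
A minimal broken-link check can be scripted with the Python standard library (the URLs are placeholders; in practice a crawler or a dedicated link-checking tool is used):

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

links = [
    "https://example.com/",              # placeholder internal link
    "https://example.com/missing-page",  # placeholder candidate for a broken link
]

for url in links:
    try:
        with urlopen(url, timeout=10) as response:
            print(url, "->", response.getcode())
    except HTTPError as err:    # server reachable but returned 4xx/5xx
        print(url, "-> broken (HTTP", err.code, ")")
    except URLError as err:     # DNS failure, refused connection, timeout
        print(url, "-> unreachable:", err.reason)
```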

Forms

Field Level Validation

Check for field length, special characters, numeric characters, etc.

Functional Checks

Check that create, modify, view, and delete operations work.

Error handling for wrong inputs or actions.

Check that appropriate error messages are displayed.

Optional and mandatory fields.

Mandatory fields should not be left blank; optional fields should allow the user to skip them (a validation sketch follows).
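
A sketch of field-level checks of the kind listed above (the username rules are hypothetical examples of length, character, and mandatory-field validation):

```python
import re


def validate_username(value):
    """Hypothetical rules: mandatory, 3-20 characters, letters and digits only."""
    errors = []
    if not value:
        errors.append("mandatory field left blank")
    elif not 3 <= len(value) <= 20:
        errors.append("length must be between 3 and 20 characters")
    elif not re.fullmatch(r"[A-Za-z0-9]+", value):
        errors.append("special characters are not allowed")
    return errors


for sample in ["", "ab", "tester!", "tester01"]:
    print(repr(sample), "->", validate_username(sample) or "ok")
```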

Cookies

Check whether cookies are enabled.

Web sites use cookies to simulate a continuous connection to that site.

Data Integrity

There should not be any missing or wrong data in the database.

Usability Testing

Navigation

* Application navigation works correctly using the Tab key

* Navigation through Mouse

* Main features accessible from the main/home page.

* Any hot keys, control keys to access menus.

Content

Correctness is whether the information is truthful or contains misinformation; accuracy is whether the information is free of grammatical and spelling errors. Remove irrelevant information from the site.

General Appearance

* Page appearance

* Color, font and size

* Frames

* Consistent design

Server Side Interfaces

* Verify that communication is done correctly: web server to application server, application server to database server, and vice versa.

* Compatibility of server software, hardware, network connections.

* Database compatibility (SQL, Oracle etc.)

Client Side Compatibility

Platform

Check for the compatibility of

* Windows (95, 98, 2000, NT)

* Unix (different sets)

* Macintosh (If applicable)

* Linux

* Solaris (If applicable)

Browsers

* Internet Explorer (3.X 4.X, 5.X)

* Netscape Navigator (3.X, 4.X, 6.X)

* AOL

* Browser settings (security settings, graphics, Java etc.)

* Frames and Cascade Style sheets

* Applets, ActiveX controls, DHTML, client-side scripting, and HTML specifications.

Printing

* Text and image alignment

* Colors of text foreground and background

* Scalability to fit paper size

* Tables and borders

Performance

Connection speed

Test with various connection speeds: 14.4, 28.8, 33.6, and 56.6 Kbps modems, ISDN, cable, DSL, T1, T3.

Load

* What is the estimated number of users per time period and how will it be divided over the period?

* Will there be peak loads and how will the system react?

* Can your site handle a large number of users requesting a certain page?

* Can it handle large amounts of data from users? (A small load sketch follows.)
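
A very small concurrency sketch of the idea (the URL is a placeholder; real load testing normally relies on dedicated load-generation tools):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"  # placeholder page under test


def fetch(_):
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start


# Simulate 20 concurrent users, each requesting the page once.
with ThreadPoolExecutor(max_workers=20) as pool:
    timings = list(pool.map(fetch, range(20)))

print(f"average response time: {sum(timings) / len(timings):.3f}s")
```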

Stress

* Typical areas to test are forms, logins or other information transaction components.

* Performance of memory, CPU, file handling etc.

* Errors in software, hardware, and memory (leaks, overwrites, or bad pointers)

Security

* Valid and Invalid Login

* Limit defined for the number of tries.

* Verify that log files are maintained to store information for traceability.

* Verify encryption is done correctly if SSL is used (If applicable)

* No access to edit scripts on the server without authorization.

* Time-out Occurrence

5 comments:

Anonymous said...

I've found this site and its really interesting and useful.
Thank you for giving tips for freshers.
I found one site(testing tools site),which might be usefull for testers.Please Could you post it in ur bolg.
http://www.ebook-search-engine.com/mercury-winrunner-tutorial--ebook-all.html

Zakir Hussain said...

Hi Munni,

Many thanks for your comments. The site which you've mentioned is really nice and i found it really useful. Many thanks for your approach and My Best wishes to you.

Zakir

Anonymous said...

Hello Zakir ,

This site is amazing.It wll be really useful for the enire tesing community.The content is very well organized.Big Thanks to you .You have done a great job.Continue the good work.

Shan

Anonymous said...

Hi,

In One question you have mentioned,
"Information service management
Senior organization management"
are involved in S/W Testing.
Can you tell me how does they involve in the testing?

Zakir Hussain said...

They are actually the management people of the company who mediate the project, give suggestion / take decision on the tool to be used and types of testing to be carried out. It has been mentioned they are part of the testing team and not that they carry out testing activities. Hope this is clear...