Tuesday, December 8, 2009

How to Write a Bug Report - A dozen helpful tips

1. Be very specific when describing the bug. Don’t leave any room for interpretation: the more precise the report, the less ambiguous it is, and the less clarification will be needed later on.

2. Calling windows by their correct names (by the name displayed on the title bar) will eliminate some ambiguity.

3. Don’t be repetitive. Don’t repeat yourself. Also, don’t say things twice or three times.

4. Try to limit the number of steps needed to recreate the problem. A bug report with seven or more steps becomes hard to follow, and it is usually possible to shorten the list.

5. Start the description where the bug begins, not before. For example, you don't have to describe how to load and launch the application if the application crashes on exit.

6. Proofreading the bug report is very important. Send it through a spell checker before submitting it.

7. Make sure that all step numbers are sequenced, with no missing step numbers and no duplicates. (A quick automated check for this appears in the sketch after the list.)

8. Please make sure that you use sentences. This is a sentence. This not sentence.

9. Don’t use a condescending or negative tone in your bug reports. Don’t say things like "It's still broken", or “It is completely wrong”.

10. Don’t use vague terms like “It doesn’t work” or “not working properly”.

11. If there is an error message involved, be sure to include the exact wording of the text in the bug report. If there is a GPF (General Protection Fault), be sure to include the name of the module and the address of the crash.

12. Once the text of the report is entered, you don’t know whose eyes will see it. You might think that it will go to your manager and the developer and that’s it, but it could show up in other documents that you are not aware of, such as reports to senior management or clients, to the company intranet, to future test scripts or test plans. The point is that the bug report is your work product, and you should take pride in your work.
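
As an aside on tip 7, that check is easy to automate. Here is a minimal Python sketch (the regular expression and the sample report are my own illustration, not part of any standard tool) that flags missing or duplicated step numbers in a "Steps to Reproduce" block:

```python
import re

def check_step_numbers(steps_text):
    """Flag missing or duplicated step numbers in a 'Steps to Reproduce' block."""
    # Pull the leading number from every line that looks like "3. Do something".
    numbers = [int(m.group(1))
               for m in re.finditer(r"^\s*(\d+)\.", steps_text, re.MULTILINE)]
    problems = []
    expected = 1
    for n in numbers:
        if n < expected:
            problems.append(f"duplicate or out-of-order step: {n}")
        elif n > expected:
            problems.append(f"missing step number(s) before step {n}")
        expected = n + 1
    return problems

sample = """1. Open the Export dialog.
2. Select CSV format.
2. Click Save.
4. Observe the crash."""
print(check_step_numbers(sample))
# ['duplicate or out-of-order step: 2', 'missing step number(s) before step 4']
```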

Thursday, October 15, 2009

How to determine whether a computer is running a 32-bit or 64-bit version of the Windows OS

Windows Vista

If you have Windows Vista, there are two methods to determine whether you are running a 32-bit or a 64-bit version. If one does not work, try the other.

Method 1: View System window in Control Panel

1. Click Start, type system in the Start Search box, and then click System in the Programs list.
2. The operating system is displayed as follows:
* For a 64-bit version operating system: 64-bit Operating System appears for the System type under System.
* For a 32-bit version operating system: 32-bit Operating System appears for the System type under System.

Method 2: View System Information window

1. Click Start, type system in the Start Search box, and then click System Information in the Programs list.
2. When System Summary is selected in the navigation pane, the operating system is displayed as follows:
* For a 64-bit version operating system: x64-based PC appears for the System type under Item.
* For a 32-bit version operating system: x86-based PC appears for the System type under Item.

Windows XP

If you have Windows XP, there are two methods to determine whether you are running a 32-bit or a 64-bit version. If one does not work, try the other.

Method 1: View System Properties in Control Panel

1. Click Start, and then click Run.
2. Type sysdm.cpl, and then click OK.
3. Click the General tab. The operating system is displayed as follows:
* For a 64-bit version operating system: Windows XP Professional x64 Edition Version <Year> appears under System.
* For a 32-bit version operating system: Windows XP Professional Version <Year> appears under System.
Note: <Year> is a placeholder for a year.

Method 2: View System Information window

1. Click Start, and then click Run.
2. Type winmsd.exe, and then click OK.
3. When System Summary is selected in the navigation pane, locate Processor under Item in the details pane. Note the value.
* If the value that corresponds to Processor starts with x86, the computer is running a 32-bit version of Windows.
* If the value that corresponds to Processor starts with ia64 or AMD64, the computer is running a 64-bit version of Windows.

Windows Server 2003

If you have Windows Server 2003, there are two methods to determine whether you are running a 32-bit or a 64-bit version. If one does not work, try the other.

Method 1: View System Properties in Control Panel

1. Click Start, and then click Run.
2. Type sysdm.cpl, and then click OK.
3. Click the General tab. The operating system is displayed as follows:
* For a 64-bit version operating system: Windows Server 2003 Enterprise x64 Edition appears under System.
* For a 32-bit version operating system: Windows Server 2003 Enterprise Edition appears under System.

Method 2: View System Information window

1. Click Start, and then click Run.
2. Type winmsd.exe, and then click OK.
3. When System Summary is selected in the navigation pane, locate Processor under Item in the details pane. Note the value.
* If the value that corresponds to Processor starts with x86, the computer is running a 32-bit version of Windows.
* If the value that corresponds to Processor starts with EM64T or ia64, the computer is running a 64-bit version of Windows.
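
If you would rather script the check than click through Control Panel, the same information is available from the Python standard library on any of these Windows versions. A minimal sketch (the function name is mine; the environment-variable fallback follows the WOW64 convention, where a 32-bit process sees the real architecture in PROCESSOR_ARCHITEW6432):

```python
import os
import platform

def windows_bitness():
    """Return '32-bit' or '64-bit' for the underlying Windows OS."""
    # platform.machine() reports values such as 'AMD64', 'IA64', or 'x86'.
    machine = platform.machine()
    # A 32-bit process on 64-bit Windows runs under WOW64; there, the real
    # architecture is exposed in the PROCESSOR_ARCHITEW6432 variable instead.
    wow64 = os.environ.get("PROCESSOR_ARCHITEW6432", "")
    if wow64.endswith("64") or machine.endswith("64"):
        return "64-bit"
    return "32-bit"

print(windows_bitness())
```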

Monday, August 17, 2009


Testing Types and Testing Techniques


Testing Types deal with what aspect of the computer software is tested, while Testing Techniques deal with how a specific part of the software is tested.


Testing techniques

Testing techniques refer to the different methods of testing particular features of a computer program, system, or product; in other words, the approaches and calculations that are applied to test a particular feature of the software.

Black box testing techniques:

* Graph Based Testing Methods
* Error Guessing
* Boundary Value analysis
* Equivalence partitioning
* Comparison Testing
* Orthogonal Array Testing
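
To make two of the techniques above concrete, here is a small Python sketch (the quantity field and its valid range of 1 to 100 are hypothetical). Equivalence Partitioning picks one representative value per class: below, inside, and above the valid range. Boundary Value Analysis adds the values at and immediately around each edge:

```python
def equivalence_partition_values(low, high):
    """One representative value per equivalence class: below, inside, above."""
    return [low - 10, (low + high) // 2, high + 10]

def boundary_values(low, high):
    """Values at and immediately around each edge of the valid range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Hypothetical requirement: a quantity field accepts integers from 1 to 100.
LOW, HIGH = 1, 100
is_valid = lambda n: LOW <= n <= HIGH  # stand-in for the behaviour under test

for value in equivalence_partition_values(LOW, HIGH) + boundary_values(LOW, HIGH):
    print(f"quantity={value:4d} -> accepted={is_valid(value)}")
```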

White box testing techniques:

* Basis Path Testing
* Flow Graph Notation
* Cyclomatic Complexity
* Graph Matrices
* Control Structure Testing
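
As an illustration of how Basis Path Testing and Cyclomatic Complexity fit together, consider this hypothetical function with two simple decision points. Its cyclomatic complexity is V(G) = decisions + 1 = 3, so a basis set of three independent paths, and therefore three test cases, covers it:

```python
def shipping_cost(weight_kg, express):
    """Hypothetical function under test, with two simple decision points."""
    if weight_kg <= 0:            # decision 1: reject bad input
        return None
    if express:                   # decision 2: surcharge for express delivery
        return 15.0
    return 5.0

# V(G) = 3, so three tests exercise a basis set of independent paths:
assert shipping_cost(-1, False) is None   # path 1: rejected at decision 1
assert shipping_cost(2, True) == 15.0     # path 2: express branch taken
assert shipping_cost(2, False) == 5.0     # path 3: both branches fall through
print("all basis paths covered")
```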


Testing types

Testing types refer to different approaches towards testing a computer program, system, or product. In other words, we may test each function of the software to see whether it is operational, or we may test the internal components of the software to check whether its internal workings conform to the specification.

The different "Types of Testing" are listed below.

* Acceptance Testing
* Ad hoc Testing
o Buddy Testing
o Paired Testing
o Exploratory Testing
o Iterative / Spiral model Testing
o Agile / Extreme Testing

* Aesthetics Testing
* Alpha Testing
* Automated Testing
* Beta Testing
* Black Box Testing
* Boundary Testing
* Comparison Testing
* Compatibility Testing
* Conformance Testing
* Consistency Testing (Heuristic)
* Deployment Testing
* Documentation Testing
* Domain Testing
* Download Testing
* EC Analysis Testing
* End-to-End Testing
* Fault-Injection Testing
* Functional Testing
* Fuzz Testing
* Gray Box Testing
* Guerilla Testing
* Install & Configuration Testing
* Integration Testing
o System Integration
o Top-down Integration
o Bottom-up Integration
o Bi-directional Integration

* Interface Testing
* Internationalization Testing
* Interoperability Testing
* Lifecycle Testing
* Load Testing
* Localization Testing
* Logic Testing
* Manual Testing
* Menu Walk-through Testing
* Performance Testing
* Pilot Testing
* Positive & Negative Testing
* Protocol Testing
* Recovery Testing
* Regression Testing
* Reliability Testing
* Requirements Testing
* Risk-based Testing
* Sanity Testing
* Scalability Testing
* Scenario Testing
* Scripted Testing
* Security Testing
* SME Testing
* Smoke Testing
* Soak Testing
* Specification Testing
* Standards / Compliance Testing
o 508 accessibility guidelines
o SOX
o FDA / Patriot Act
o Other standards requiring compliance

* State Testing
* Stress Testing
* System Testing
* Testability Testing
* Unit Testing
* Upgrade & Migration Testing
* Usability Testing
* White box Testing
o Static Testing Techniques
+ Desk checking
+ Code walk-through
+ Code reviews and inspection
o Structural Testing Techniques
+ Unit Testing
+ Code Coverage Testing (statement, path, function, and condition coverage)
+ Complexity Testing / Cyclomatic complexity
+ Mutation Testing


The list above hopefully covers most of the types of tests that are generally performed, though there may be a few that I have missed.

Let me know if that is the case and I'll be more than glad to add or edit them.

Wednesday, June 24, 2009

Defects related definitions


Defect

* The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking or problem-management system.

* Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures.

* A flaw in the software with the potential to cause a failure.

Defect Age - A measurement that describes the period of time from the introduction of a defect until its discovery.

Defect Density - A metric that compares the number of defects to a measure of size (e.g., defects per KLOC). Often used as a measure of software quality.

Defect Discovery Rate - A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form.

Defect Removal Efficiency (DRE) - A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness.
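
Both metrics above reduce to simple arithmetic. A quick sketch, with made-up numbers for illustration:

```python
# Illustrative numbers only.
defects_found_in_testing = 45     # defects discovered before release
defects_found_after_release = 5   # defects that escaped to the field
lines_of_code = 30_000

# Defect density: defects per KLOC (thousand lines of code).
density = defects_found_in_testing / (lines_of_code / 1000)
print(f"Defect density: {density:.2f} defects/KLOC")        # 1.50

# DRE: defects found in testing versus all that could have been found.
dre = defects_found_in_testing / (defects_found_in_testing
                                  + defects_found_after_release)
print(f"Defect removal efficiency: {dre:.0%}")              # 90%
```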

Defect Seeding - The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding.
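
Seeding also supports a rough estimate of how many real defects remain: if testing finds seeded and native defects at about the same rate, the fraction of seeds recovered estimates the fraction of real defects found. A sketch with hypothetical numbers:

```python
# Hypothetical numbers.
seeded_planted = 20   # known defects intentionally inserted
seeded_found = 15     # seeded defects rediscovered during testing
native_found = 60     # real (non-seeded) defects found in the same effort

# If seeded and native defects are found at about the same rate, then
# estimated total native defects = native_found * planted / found.
estimated_total = native_found * seeded_planted / seeded_found
estimated_remaining = estimated_total - native_found
print(f"Estimated native defects in total: {estimated_total:.0f}")      # 80
print(f"Estimated defects still remaining: {estimated_remaining:.0f}")  # 20
```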

Defect Masked - An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed.

Tester’s Tips for Dealing with Developers


Thought I'd share some tips on how to deal with developers that I happened to read on a good forum. It's a good one. Read on...


A beautiful line from Cem Kaner’s Testing Computer Software: "The best tester is not the one who finds the most bugs or who embarrasses the most developers. The best tester is the one who gets the most bugs fixed."


Be Cordial and Patient
As a tester, you may find it difficult to convince a developer that a defect you've found is real. Often, if a tester exposes one bug, the programmer will be ready with ten justifications. It's sometimes difficult for developers to accept the fact that their code is defective, and that someone else has detected it.

Developers need support from the testing team, who can assure them that finding new bugs is desirable, healthy, and important in making the product the best it can be. A humanistic approach will always help the tester know the programmer better. Believe me, in no time the same person could be sitting with you and laughing at mistakes that introduced bugs. Cordiality typically helps in getting the developer to say “yes” to your bug report. An important first step!

Be Diplomatic
Try presenting your findings tactfully, and explain the bug without assigning blame: “I am sure this is a minor bug that you could handle in no time. This is an excellent program so far.” Put that way, developers are far more likely to welcome it.

Take a psychological approach. Praise the developer’s job from time to time. The reason why most developers dislike our bug reports is very simple: They see us as tearing down their hard work. Some testers communicate with developers only when there is a problem. For most developers, the software is their own baby, and you are just an interfering outsider. I tell my developers that because of them I exist in the company and because of me their jobs are saved. It’s a symbiotic and profitable relationship between a tester and a developer.

Don’t Embarrass
Nobody likes having their mistakes pointed out. That's human nature. Try explaining the big-picture need for fixing that particular bug rather than just firing bulky bug reports at developers. A deluge of defects not only irritates the developer, it also makes your hard work useless to them.

Just as one can’t test a program completely, developers can’t design programs without mistakes, and they need to understand this before anything else. Errors are expected; they’re a natural part of the process.

You Win Some, You Lose Some
I know of testers who make their bug reports as rigid as possible. They won’t even listen to the developer’s explanations for not being able to fix a bug or implement a feature. Try making relaxed rules for yourself. Sit with the developer and analyze the priority and severity of a bug together. If the developer has a valid and sensible explanation behind her reluctance to change something, try to understand her. Just be sure to know where to draw the line in protecting the ultimate quality of your product.

Be Cautious
Diplomacy and flexibility do not replace the need to be cautious. Developers sometimes excuse not fixing a bug by claiming that they did not realize (or that you did not tell them) how serious the problem was. Design your bug reports and test documents in a way that clearly lays out the risks and seriousness of issues. Better still, hold a meeting and walk the developers through the issues.

A smart tester is one who keeps a balance between listening and implementing. If a developer can’t convince you a bug shouldn’t be fixed, it’s your duty to convince him to fix it.