Monday, June 23, 2008

Bug Tracking Basics

Are you in customer support? If so, you will be responsible for logging customer problems as they are phoned, written, or emailed in.
As a developer, you’ll be strictly adding new features or enhancing existing ones, right? Wrong—you’ll also be responsible for fixing any known bugs.
If you are a technical writer, your job doesn’t end with creating Help, Manual, or Web content. You’ll also be fixing any documentation errors.
If you’re a manager, you’ll need a way of assessing the viability of the product. You’ll also need to determine what work remains to be done and how soon a product can be posted or shipped.
And if you are a tester working in the SQA (Software Quality Assurance) department, your job will be to verify the integrity of the product, and log and manage the resolution of any problems that you or anyone else may find.
All of these people require the use of a bug tracking system to accomplish their tasks. If you’re one of those people and need to learn more about defect tracking, keep reading. This article will tell you what you need to know.

What Is a Bug Tracking System?
A bug tracking system is constructed as a database. The system may be based on an existing database such as Lotus Notes, or it may use a built-in proprietary database. The bug tracking system may be homegrown or it may be a commercial system. In any case, a bug is entered as a record into this database, where it can be assessed and tracked.
The user interface usually consists of navigable views and forms for editing existing bug reports or writing new ones. Each user is assigned a unique login ID and password. Whenever a user writes up a bug, adds comments, or changes a status field, a good tracking system will log the ID of the person creating or amending the record. A great system will save such history as read-only data so a true history of a bug report can be maintained.
A good tracking system will give each member of the team easy access to bugs assigned to them. The design should make it easy for the team members to manage their specific bugs effectively with the data sorted in many different ways. A great system will give management a variety of views and reports reflecting the health of the product, so they can assess the product situation on a build-by-build basis.
Of course, some users of the system are given more rights than others. For example, most users aren’t allowed to delete bugs. If a bug is a duplicate or entered in error, it is simply marked as such instead of being deleted.
One of the most important features of a bug tracking system is keyword searching. For example, let’s say a user finds a crash bug while saving a file. The bug tracking system should enable the user to perform a keyword search on "crash and file." If a match isn’t found, the bug can be entered. If existing bugs are found, the user should examine them to see if the report covers his circumstance.
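
A minimal sketch of such an AND-style keyword search over bug records, assuming an in-memory list of dicts (a real tracker would query its database index instead; the field names here are illustrative):

```python
# Illustrative only -- the field names and in-memory storage are assumptions,
# not how any particular bug tracking product works.

def search_bugs(bugs, keywords):
    """Return bugs whose short description contains every keyword (AND search)."""
    words = [k.lower() for k in keywords]
    return [b for b in bugs
            if all(w in b["short_description"].lower() for w in words)]

bugs = [
    {"id": 101, "short_description": "Crash on exit only after save"},
    {"id": 102, "short_description": "Toolbar icon misaligned"},
]

# A "crash and save"-style keyword search:
print([b["id"] for b in search_bugs(bugs, ["crash", "save"])])  # -> [101]
```

If nothing comes back, the bug is new and can be entered; if matches come back, each should be reviewed before writing a duplicate.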
The Bug Lifecycle
Let’s examine a bug’s lifecycle:
1. Someone finds a bug and enters it into the system
2. The bug is assigned to a developer
3. The developer looks at the bug
When a bug is entered, the author usually assigns it to a product category such as "user interface" or "calc engine."
Writing or entering a bug is also known as opening a bug. Some systems may default a bug to "New" and wait for someone to open it. From my experience, doing so is a wasted step.
Depending on how the database is constructed, the bug may be assigned automatically to a developer and QA person according to the chosen category. It’s up to the developer to look for open bugs with her name on them. It is up to the development manager to look for open, yet unassigned bugs and assign them as soon as possible. Once a bug is assigned, most systems will send the developer an automatic email notification. Bugs may be reassigned if it is determined that the default person is not the best person to fix the problem. Sometimes, if a developer sees an unassigned bug in her area, she will take on the assignment and proceed to work on it. A QA person should be just as proactive if he sees unassigned bugs in his area as well.
Once a developer takes ownership of a bug, she assesses the situation and sets a resolution field. The bug’s state is pending after at least one person (usually the developer) comments on the bug or sets the resolution field. From this point on, the bug remains pending until someone either closes or verifies it.
A bug is verified when the original steps no longer reproduce the problem and someone in QA has reviewed the fix. But this doesn’t mean that the bug can be closed. The bug is usually kept in this state until QA has done a thorough check under all supported operating systems.
If verification fails, what happens to the bug is a gray area. Some systems may revert it back to "open," or set some other flag so it lands back at the feet of the developer or development manager.
It is up to the QA person assigned and/or the author of the bug to review the resolution and decide if they agree.
QA usually has the responsibility of closing a bug. This happens when either they are satisfied that the bug has been properly verified, or they agree that the bug is no longer (or never was) an issue.
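
The lifecycle above (Open, Pending, Verified, Closed) can be sketched as a small state machine. The transitions below follow the flow just described; the exact states and the rules for who may make each transition vary by tracking system:

```python
# Illustrative sketch of the bug lifecycle described above; real systems
# differ in state names and in who is allowed to make each transition.
ALLOWED = {
    "Open":     {"Pending"},                     # developer comments or sets a resolution
    "Pending":  {"Verified", "Closed", "Open"},  # verify, close, or failed verification
    "Verified": {"Closed", "Open"},              # close after OS checks, or reopen
    "Closed":   set(),
}

def transition(state, new_state):
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "Open"
state = transition(state, "Pending")   # developer takes ownership, sets resolution
state = transition(state, "Verified")  # QA confirms the original steps no longer fail
state = transition(state, "Closed")    # QA closes after checking all supported OSes
print(state)  # -> Closed
```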
Resolutions
A resolution can be one of the following:
Fix—The developer thinks she has a fix to the problem. A field indicating which build the bug should be fixed in will accompany this. The QA person assigned the bug should wait for that build to be released, then verify whether the fix actually works. If it doesn’t work, this isn’t a plot by the developer to fool anyone. When a developer makes a fix, it may be difficult for her to verify it until the build is released.
Defer—A bug is usually deferred when management or all parties agree that the issue isn’t drastic, and can be put off (deferred) until a later time.
Dupe (Duplicate)—This means that someone else has already reported the problem. When you first start on a project, you will see this in your own bugs more often than you would like. One way to avoid starting out with many dupes is by spending an afternoon reviewing the bugs that are currently open. Mastering your system’s keyword searching capabilities is another way of avoiding too many dupes.

A good system will have a field for the original bug number. The person marking the bug as a dupe should fill in the number of the original to make it easy for others to go back, review the original, and see if they agree. It’s quite possible that the person marking the dupe is assuming too much. If you think the bug is not a dupe, you should reset the resolution field and note as such.
Need More Information (NMI)—This means the developers don’t have enough information to understand or reproduce the problem. They should note in the bug what they are looking for.
Not a Bug (NAB)—The developers disagree that it’s a bug. Another way of saying this is Working as Designed (WAD).
Not Reproducible (NREP)—The bug could not be reproduced. This may be either because there is something strange about your setup (you have a driver or service pack that the developer is missing, or vice versa), or the person looking at the bug is trying it in a later build. The bug may have been inadvertently fixed since you encountered it in the last build.
However, if you can still reproduce it on your machine with the later build, you should disagree, reset the resolution, and talk to the developer.
No Plan to Fix (NPTF)—The bug has been recognized as a problem, but the developers don’t plan to do anything about it. You’ll see a lot of this as the product approaches its ship date and the pressure mounts. If you disagree with the resolution, you may need to bring it to your boss’s attention.
User Error—This is a polite way of saying that the problem isn’t with the product. User error is sometimes referred to as UBD (User Brain Damage).
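
These resolution codes are typically a fixed keyword list in the tracking database. A minimal Python sketch: the enum values mirror the list above, while the `mark_dupe` helper and its field names are illustrative assumptions, not any real system's schema:

```python
from enum import Enum

class Resolution(Enum):
    FIX = "Fix"
    DEFER = "Defer"
    DUPE = "Duplicate"
    NMI = "Need More Information"
    NAB = "Not a Bug / Working as Designed"
    NREP = "Not Reproducible"
    NPTF = "No Plan to Fix"
    USER_ERROR = "User Error"

def mark_dupe(bug, original_id):
    """Marking a dupe should also record the original bug's number (see above)."""
    bug["resolution"] = Resolution.DUPE
    bug["original_bug"] = original_id
    return bug

dupe = mark_dupe({"id": 214}, original_id=101)
print(dupe["resolution"].value)  # -> Duplicate
```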
Bug Components
A typical bug report contains the following components:
1.Bug Number—A bug report contains a unique ID, so it can be tracked. Whenever you tell anyone about a bug, the first thing you’ll be asked for is the bug’s number or ID. The number or ID allows the developer to enter the database and look it up.
2.Short Description—The short description may also be referred to as the "title" of the bug. This is one of the most important fields to get right. Filling it in with "Product doesn’t work" will probably incur the wrath of not only your peers, but also your boss. When sifting through a list of bugs, the short description needs to be clear, concise, and to the point. One example is "Crash on exit only after save." That will get their attention. You will notice that the description doesn’t need to be a complete sentence.
3.Product/Project Name—Versions of a product are usually given code names. For example, Lotus 1-2-3 version 3.0 was code-named "Godiva." Dragon NaturallySpeaking version 3.52 was code-named "Yukon."
There was a time when code names were used to keep outsiders guessing as to what secret project people were working on. But you will notice more often that code names are being used in press releases, along with the actual product and version number.
4.Product Version— The product version is the public designation for when the product ships (such as Dragon NaturallySpeaking 3.52). Version numbers usually include point-release identifiers (3.01 versus 3.02). It’s usually a good idea to keep the product version number as a separate field in the bug report. Some companies assign different project names to different versions. If the point release only contains one or two bug fixes, some companies won’t bother assigning different project names.
5.Build Number—Every time someone (usually a developer or someone designated as the build/release engineer) recompiles the product and incorporates the latest bug fixes, it is given a new build number. The build in which a bug was found should be noted in the bug report.
6.Fixed Build Number—When a developer fixes (or hopes she has fixed) a bug, she must note in the bug report which build the fix will be in. The fix can’t be verified as working until a copy with that build number or higher is released.
7.Steps to Reproduce—Before you write one line on how to reproduce a bug, you should attempt to narrow any extra steps that are not absolutely needed to reproduce it. (If you’ve ever used a debugger or watched a developer use a debugger, you’ll know why.) The developer might need to walk through every line of code as it executes for each step. For instance, clicking File/Save may seem trivial, but that could launch a very long sequence of events or large loops that the developer needs to step through or wait for.
8.Comments—Additional comments should be direct and to the point. Anything not directly related to the bug should be reserved for email.
9.Author—Most systems will fill this in automatically, based on the login ID.
10.Operating System—This is very important. If a developer can’t reproduce your bug, it could be because she’s using a different operating system or version.
11.Web Browser—If you are testing a Web site, the browser you’re using is as important as the operating system.
12.Category—Always try to define a category for a bug because this helps in assigning it. If someone thinks that you chose the wrong category, that person will most likely correct it.
13.Subcategory—Some categories may not have a subcategory. If you think a subcategory is appropriate, then you should select it. You’ll need to pick one if your system demands that a selection be made. As with the category, if you make an incorrect choice, someone else will probably correct it.
14.Developer Assigned—The developer field should contain a list of developers to choose from. Some systems will assign a developer by default, based on the category. Other systems may leave this field as "unassigned." If a bug is left unassigned, it’s usually the responsibility of a manager or the developers to assign the bug to the appropriate person.
You should be diligent about the bugs you enter and make sure they don’t stay unassigned for too long. If they do, find out what your organization’s protocol is for assigning unassigned bugs. It may simply be that the mechanics of automatic assignment are not in place, allowing you to assign the bug yourself, working either from a list or common knowledge. If the assignment is not appropriate, someone will correct it.
If you are a developer and a bug is erroneously assigned to you, you should either assign it to someone else, assign it to your boss and let him figure it out, or revert it back to an unassigned state (if possible). Whenever you change the assignment of a bug, you should leave a simple note as to why you’ve changed it so the bug isn’t reassigned to you.
15.Developer Resolution—Development sets this field after they have looked at the bug.
16.QA Assigned—The QA field should contain a list of QA engineers. As with the developer field, some systems will assign this field automatically. If it isn’t assigned automatically, then a manager will assign the field or QA people can assign bugs in their area to themselves.
17.State—Open, Pending, Verified, and Closed are the most common bug states.
18.Priority—Some bug systems allow a priority field to be set. This may be a keyword list that includes Urgent, High, Medium, and Low, or a number whose value is defined by the department. This helps developers distinguish which bugs to fix first.
19.Severity—This field is used to define how severe a bug is. For example, a user being able to log in and see someone else’s credit card information may not be a crash bug, but it’s still a severe problem.
20.History—A good bug system will record all comments and field changes in a read-only history field that anyone can review but not edit.
21.Attachments—These can include bitmaps, files, anything you want. When attaching anything to a bug, make sure it’s in *.zip format. Quite often, attachments strain storage limits and can slow down the system. Many organizations insist on storing attachments on public servers and just leaving a reference in the bug.
22.Miscellaneous/Custom—Various departments may have other fields added to bug reports for their own use.
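
The components above can be sketched as a single record. This dataclass is an illustrative shape, not the schema of any real tracker, and the example values are made up:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BugReport:
    # Required at entry time
    bug_number: int
    short_description: str
    product_name: str
    product_version: str
    build_number: int
    steps_to_reproduce: str
    author: str
    operating_system: str
    category: str
    # Defaulted or filled in later
    state: str = "Open"
    priority: str = "Medium"
    severity: str = "Medium"
    developer_assigned: Optional[str] = None
    qa_assigned: Optional[str] = None
    resolution: Optional[str] = None
    fixed_build_number: Optional[int] = None
    comments: List[str] = field(default_factory=list)
    history: List[str] = field(default_factory=list)      # read-only in a good system
    attachments: List[str] = field(default_factory=list)  # prefer references to servers

bug = BugReport(
    bug_number=101,
    short_description="Crash on exit only after save",
    product_name="Godiva",
    product_version="3.0",
    build_number=117,
    steps_to_reproduce="1. Save a file. 2. Exit the application.",
    author="tester1",
    operating_system="Windows XP SP2",
    category="user interface",
)
print(bug.state)  # -> Open
```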
Tips for the Beginner
You may be new to SQA, but you probably know that many people are rated on how many bugs they find. There is an implied time limit for how long you should spend chasing down one bug, so you shouldn’t let one bug consume your life. If you can’t nail it down, bite the bullet, write a report where you state as much as you can, and move on. If it’s a nasty crash bug, tell your boss and work out whether it requires more attention. If so, enlist the aid of others to track it down.
Because of this rating system, some people new to QA may get the urge to "stretch" a bug by making several bugs out of one. An example of this would be writing a bug where the user enters some text, noting the product crashes, then writing up the same bug, but entering different text. Don’t do this. You’ll just waste everyone’s time, and your extra bugs will probably be flagged as dupes.
Before writing up a bug, ask Development what’s fair and what isn’t. "Fair game" refers to areas where it is fair for QA to write bugs. If Development tells you that the File Import routine is still under construction and not guaranteed to work, this isn’t fair game. Writing up bugs against this area is another waste of time and can make people angry.
Another tip is to review the bug system as a whole. Examine the trends and try to get a feel for what areas are vulnerable and need more testing. If the bug count increases over time (builds), this is a bad thing. If the bug count decreases, it could mean that the developers have kidnapped the QA team, but it’s more likely to mean the product is becoming more stable (and approaching a date when it can ship/post).
One more thing to look out for is developers going wild by marking everything (including crash bugs) as NPTF (No Plan To Fix), NAB/WAD (Not A Bug/Working As Designed), or NREP (Not Reproducible).
Management’s Role
The first thing a manager should do is to make sure that all bugs are assigned. But since managers are usually stuck in meetings all day, team members need to be proactive by taking on the unassigned bugs themselves.
Managers should do everything they can to squeeze data out of the bug tracking system—especially graphs. Although I could dedicate a whole article to the types of reports and graphs that are needed, you basically need to chart how the product is doing over time. Time can best be mapped out by build number.
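
One such report can be sketched by counting open bugs per build. This is a toy text chart over made-up records, standing in for a real database query and a proper graphing tool:

```python
from collections import Counter

# Made-up bug records; a real report would query the tracking database.
bugs = [
    {"found_in_build": 115, "state": "Open"},
    {"found_in_build": 115, "state": "Closed"},
    {"found_in_build": 116, "state": "Open"},
    {"found_in_build": 116, "state": "Open"},
    {"found_in_build": 117, "state": "Open"},
]

open_by_build = Counter(b["found_in_build"] for b in bugs if b["state"] == "Open")
for build in sorted(open_by_build):
    print(f"build {build}: {'#' * open_by_build[build]}")
# -> build 115: #
#    build 116: ##
#    build 117: #
```

A bar that grows from build to build is the warning sign mentioned earlier; a shrinking one suggests the product is stabilizing.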
Final Thoughts
Nothing in this article is cast in stone. I’ve tried to provide an overview of a generic system, based on my experience of using and building bug tracking systems. As you learn your company’s system, you may find that items and procedures are a little different.
Regardless of the system you are using, mastering its intricacies can help you to better manage your workload, generate comprehensive reports, and track progress for yourself and your department.

Saturday, June 21, 2008

V MODEL 2

The V-Model promotes the idea that the dynamic test stages (on the right-hand side of the model) use the documentation identified on the left-hand side as baselines for testing. The V-Model further promotes the notion of early test preparation.


Early test preparation finds faults in baselines and is an effective way of detecting faults early. This approach is fine in principle. However, there are two problems with the V-Model as normally presented.
The V-Model with early test preparation.
Firstly, in our experience, there is rarely a perfect, one-to-one relationship between the documents on the left-hand side and the test activities on the right. For example, functional specifications don’t usually provide enough information for a system test. System tests must often take into account aspects of the business requirements as well as physical design issues. System testing usually draws on several sources of requirements information to be planned thoroughly.
Secondly, and more importantly, the V-Model has little to say about static testing at all. The V-Model treats testing as a “back-door” activity on the right-hand side of the model. There is no mention of the potentially greater value and effectiveness of static tests such as reviews, inspections, static code analysis and so on. This is a major omission, and the V-Model does not support the broader view of testing as a constantly prominent activity throughout the development lifecycle.

The W-Model of testing:

Paul Herzlich introduced the W-Model approach in 1993. The W-Model attempts to address shortcomings in the V-Model. Rather than focus on specific dynamic test stages, as the V-Model does, the W-Model focuses on the development products themselves. Essentially, every development activity that produces a work product is “shadowed” by a test activity, whose purpose is to determine whether the objectives of the development activity have been met and the deliverable meets its requirements. In its most generic form, the W-Model presents a standard development lifecycle with every development stage mirrored by a test activity. On the left-hand side, typically, the deliverable of a development activity (for example, write requirements) is accompanied by a test activity (“test the requirements”), and so on. If your organization has a different set of development stages, the W-Model is easily adjusted to your situation. The important thing is this: the W-Model of testing focuses specifically on the product risks of concern at the point where testing can be most effective.

The W-Model and static test techniques:

If we focus on the static test techniques, you can see that there is a wide range of techniques available for evaluating the products of the left hand side. Inspections, reviews, walkthroughs, static analysis, requirements animation as well as early test case preparation can all be used.




The W-Model and dynamic test techniques:

If we consider the dynamic test techniques, you can see that there is also a wide range of techniques available for evaluating executable software and systems. The traditional unit, integration, system and acceptance tests can make use of the functional test design and measurement techniques as well as the non-functional test techniques that are all available to address specific test objectives.

The W-Model removes the rather artificial constraint of having the same number of dynamic test stages as development stages. If there are five development stages concerned with the definition, design and construction of code in your project, it might be sensible to have only three stages of dynamic testing. Component, system and acceptance testing might fit your normal way of working. The test objectives for the whole project would be distributed across three stages, not five. There may be practical reasons for doing this, and the decision is based on an evaluation of product risks and how best to address them. The W-Model does not enforce a project “symmetry” that does not (or cannot) exist in reality. Nor does it impose any rule that later dynamic tests must be based on documents created in specific stages (although earlier documentation products are nearly always used as baselines for dynamic testing).

More recently, the Unified Modeling Language (UML), described in Booch, Rumbaugh and Jacobson’s book [5], and the methodologies based on it, namely the Unified Software Process and the Rational Unified Process™ (described in [6-7]), have emerged in importance. In projects using these methods, requirements and designs might be documented in multiple models, so system testing might be based on several of these models (spread over several documents).

We use the W-Model in test strategy as follows. Having identified the specific risks of concern, we specify the products that need to be tested; we then select test techniques (static reviews or dynamic test stages) to be used on those products to address the risks; we then schedule test activities as close as practicable to the development activity that generated the products to be tested.

V MODEL








Windows Compliance Standards

These compliance standards are followed by almost all Windows-based applications. Any variance from these standards can result in inconvenience to the user. This compliance must be followed for every application. These compliances can be categorized according to the following criteria:
Compliance for each application:
1.The application should be started by double-clicking its icon.
2.The loading message should show the application name, version number, and icon.
3.The main window of the application should have the same caption as the icon in the program manager.
4.Closing the application should result in an “Are you sure?” message.
5.Behaviour when starting the application more than once must be specified.
6.Try to start the application while it is loading.
7.If the application is busy, it should show an hourglass or some other mechanism to notify the user that it is processing.
8.Normally the F1 key is used for help. If your product has help integrated, it should appear when F1 is pressed.
9.Minimize and restore functionality should work properly.


Compliance for each window in the application:
1.The window caption for every application should have the application name and window name, especially for error messages.
2.The title of the window and its information should make sense to the user.
3.If the screen has a control menu, test the entire control menu: move, close, resize, etc.
4.Text present should be checked for spelling and grammar.
5.If tab navigation is present, TAB should move focus in the forward direction and SHIFT+TAB in the backward direction.
6.Tab order should be left to right and top to bottom within a group box.
7.If focus is on any control, it should be indicated by a dotted outline around it.
8.The user should not be able to select a greyed or disabled control. Try this using the tab key as well as the mouse.
9.Text should be left-justified.
10.In general, all operations should have a corresponding keyboard shortcut.
11.All tab buttons should have a distinct access letter.
Text boxes:
1.Move the mouse over a textbox: the cursor should change to an insert bar for an editable text field and remain unchanged for a non-editable one.
2.Test overflowing the textbox by inserting as many characters as you can into the text field. Also test the width of the text field by entering all capital Ws (a wide character).
3.Enter invalid and special characters and make sure there is no abnormal behaviour.
4.The user should be able to select text using Shift + arrow keys. Selection should also be possible with the mouse, and a double click should select the entire text in the text box.
Radio Buttons:
1.Only one option in a group should be selectable at a time.
2.The user should be able to select any button using the mouse or keyboard.
3.Arrow keys should set/unset the radio buttons.
Check boxes:
1.The user should be able to select any combination of checkboxes.
2.Clicking the mouse on the box should set/unset the checkbox.
3.The spacebar should do the same.
Push Buttons:
1.All buttons except OK/Cancel should have an access letter, indicated by an underlined letter in the button text. The button should be activated by pressing ALT plus that letter.
2.Clicking each button with the mouse should activate it and trigger the required action.
3.Similarly, after giving a button focus, pressing SPACE or RETURN should activate it.
4.If there is a Cancel button on the screen, pressing Esc should activate it.
Drop down list boxes:
1.Pressing the arrow should give the list of options available to the user. The list can be scrollable, but the user should not be able to type in it.
2.Pressing F4 (or Alt+Down Arrow) should open the list box.
3.Pressing a letter should bring up the first item in the list starting with that letter.
4.Items should be in alphabetical order in any list.
5.The selected item should be displayed in the list.
6.There should be no more than one blank entry in the dropdown list.
Combo Box:
Similar to the drop-down list above, but the user should also be able to enter text in it.

List Boxes:
1.Should allow single selection, either by mouse or arrow keys.
2.Pressing any letter should take you to the first element starting with that letter.
3.If there is a View/Open button, double-clicking an item should be mapped to the same behaviour.
4.Make sure that all the data can be seen using the scroll bar.

Friday, June 20, 2008

WEB TESTING TIPS

Manual Testing Tips
When testing websites, the following scenarios should be considered.

  • Functionality
  • Performance
  • Usability
  • Server side interface
  • Client side compatibility
  • Security

Functionality:
In testing the functionality of the web sites, the following should be tested.

Links:
  • Internal links
  • External links
  • Mail links
  • Broken links

Forms:
  • Field validation
  • Functional chart
  • Error messages for wrong input
  • Optional and mandatory fields

Database:
  • Testing will be done on database integrity.

Cookies:
  • Testing will be done on the client side, in the temporary internet files.
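
The link checks above usually start by collecting and classifying every link on a page. A sketch using Python's standard html.parser; the sample page and the site domain used to tell internal from external links are assumptions:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify(href, site="example.com"):
    """Bucket a link as mail, external, or internal (site domain is assumed)."""
    if href.startswith("mailto:"):
        return "mail"
    if href.startswith("http") and site not in href:
        return "external"
    return "internal"

page = ('<a href="/about.html">About</a>'
        '<a href="http://other.org/">Other</a>'
        '<a href="mailto:support@example.com">Support</a>')

collector = LinkCollector()
collector.feed(page)
print([(h, classify(h)) for h in collector.links])
```

Checking each collected link for breakage (an HTTP request per link) would follow, but is left out here.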

Performance:
Performance testing can be applied to understand the web site's scalability, or to benchmark the performance in the environment of third party products such as servers and middleware for potential purchase.
Connection speed:

  • Tested over various networks like dial-up, ISDN, etc.
Load

  • What is the number of users at a given time?
  • Check for peak loads and how the system behaves.
  • Large amounts of data accessed by users.

Stress:

  • Continuous load
  • Performance of memory, CPU, file handling, etc.

Usability:
Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction. Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several criteria:

  • Ease of learning
  • Navigation
  • Subjective user satisfaction
  • General appearance

Server side interface:

In web testing, the server-side interface should be tested. This is done by verifying that communication is done properly.

Client side compatibility:

Compatibility of the server with software, hardware, network and database should be tested. Client-side compatibility is also tested on various platforms, using various browsers, etc.

Security:
The primary reason for testing the security of a web application is to identify potential vulnerabilities and subsequently repair them.
The following types of testing are commonly used:

  • Log Review
  • Integrity Checkers
  • Virus Detection