Saturday, December 3, 2011

SDLC PHASES notes by Lokre !!

PRODUCT -> May be generic.

PROJECT -> May be bespoke/custom/COTS (Commercial Off-The-Shelf).

PRODUCT: Developed based on the manufacturer's specification and used by many users.

PROJECT: Developed based on a particular customer's specification and used by that customer.


S/W Process: The set of activities whose goal is the development or evolution of software.








PHASES & ROLES


1. REQUIREMENT PHASE ->
Role: Business Analyst (CRS/BRS -> Customer/Business Requirement Specification)

KICK-OFF MEETING
The Project Manager sends an initiation note to the CEO.


TASK OF THE ENGAGEMENT MANAGER

i. Deals with the excess cost of the project.
ii. Responsible for the prototype demonstration.

Prototype - a roughly and rapidly developed model.

Input Document-> BRS





2. ANALYSIS ->
Role: System Analyst/Project Manager/Team managers.

TASK:


FEASIBILITY STUDY:
A detailed study of the requirements in order to check whether they are feasible or not.

TENTATIVE PLANNING:
The resources and time are planned tentatively (fixing a target date to accomplish the project).


TECHNOLOGY SELECTION:
Deciding which technologies are required to accomplish the given project successfully.


REQUIREMENT ANALYSIS:
Gathering the requirements needed to accomplish the project successfully.



Output -> SRS/FRS Document (System/Functional Requirement Specification). This document contains the functional requirements to be developed and the system requirements to be used.

What makes a Good SRS?

Environment of S/W
Project requirements in SRS
Prototyping Demonstration

Characteristics/ Features of SRS:

1. Independent
2. Easy to Validate
3. Verifiable
4. Complete
5. Consistent
6. Traceable
7. Modifiable







3. DESIGN

Role: Chief Architect/ Technical Lead

They convert all the requirements into well-defined modules.

TASK

The design is done in 2 ways:

1. HLD – High Level Design (the overall view of the s/w from root functionality to leaf functionality).
The process of dividing the system into modules, usually done by the Chief Architect.
This design is also called "EXTERNAL DESIGN/ARCHITECTURAL DESIGN".


2. LLD – Low Level Design
The process of dividing modules into sub-modules in order to define the internal logic of each program structure. This is usually done by Technical Leads.

This design is also called "INTERNAL DESIGN/DETAILED DESIGN".

It contains data flow diagrams and pseudo code.
http://en.wikipedia.org/wiki/Data_flow_diagram

Output -> TDD (Technical Design Document)
TDD -> Flow Diagrams + Pseudo Code




4. CODING/DEVELOPMENT/IMPLEMENTATION

Role: Developers

TASK:

The programmers/developers write the code in the customer-specified technology.

Developers use the pseudo code and follow coding standards: proper indentation (Eg. alignment of the username/password fields while designing a web page), color coding, etc.

Output -> Source Code Document/Application





5. TESTING

Role: Test Engineers/Testers

TASK:

The testers detect defects in the application.

Process:

Prepare the Review Report

After understanding all the requirements clearly, testers take the test case template and write test cases.

After receiving a s/w build or application from the developers, testers execute all the test cases on that application.

If they find any defects, they report them to the developers. After receiving the modified build, the testers again check for side effects. This cycle repeats until a defect-free product emerges.

Output -> Application or Quality Product





6. RELEASE & MAINTENANCE


Role: Deployment Engineer/Senior Test Engineer

TASK:

Release the software to customer.

Process:

The deployment engineer deploys the application into the client environment, following the guidelines given in the deployment document.

Maintenance

After releasing the s/w, if any problem occurs during its use, that problem becomes a task for the maintenance team.

Depending on the problem, the corresponding team/role is appointed; they define a process based on the issue and resolve it.



Thursday, December 1, 2011

LEVELS OF TESTING - Lokre !!




LEVELS OF TESTING




I) UNIT TESTING

In this phase, the programmers check the internal logic of each program structure using White Box Testing (WBT) techniques.

II) INTEGRATION TESTING

After unit testing, the programmers integrate all the individual modules to make a complete software application.

They follow the approaches below to integrate all the completed modules.


TOP-DOWN APPROACH




In this approach testing is conducted from the main module to the sub-modules. If a sub-module is not developed, a temporary program called a STUB is used to simulate it. A STUB is also known as the CALLED program.
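A stub can be sketched as follows. This is a hypothetical example (the `checkout`/`discount` names are illustrative, not from the notes): the main module is ready, but its sub-module is not, so a stub stands in for the called program.

```python
# Hypothetical top-down integration sketch: the "discount"
# sub-module is not yet developed, so a STUB (the called
# program) simulates it with a fixed, predictable value.

def discount_stub(order_total):
    """Stub for the unfinished discount sub-module."""
    return 0.0  # hard-coded placeholder: no discount

def checkout(order_total, discount=discount_stub):
    """Main module under test; calls down into the sub-module."""
    return order_total - discount(order_total)

# The main module can be exercised before the real sub-module exists:
print(checkout(100.0))  # -> 100.0
```

When the real discount module is finished, it replaces the stub without changing the main module.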











BOTTOM-UP APPROACH






In this approach testing is conducted from the sub-modules to the main module. If the main module is not developed, a temporary program called a DRIVER is used to simulate it. The DRIVER is also known as the CALLING program.
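A driver can be sketched the same way. Again a hypothetical example (the `tax` module is illustrative): the sub-module is finished, but the main module that would call it is not, so a driver does the calling.

```python
# Hypothetical bottom-up integration sketch: the main module is
# not yet developed, so a DRIVER (the calling program) invokes
# the finished sub-module directly with representative inputs.

def tax(amount):
    """A completed sub-module: flat 10% tax."""
    return round(amount * 0.10, 2)

def tax_driver():
    """Driver that simulates the missing main module."""
    for amount in (0, 99.99, 100):
        print(amount, "->", tax(amount))

tax_driver()
```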










BI-DIRECTIONAL (Or) SANDWICH (Or) HYBRID APPROACH

A combination of the Top-Down and Bottom-Up approaches.


SYSTEM APPROACH (Or) BIG-BANG APPROACH

Grouping all the main and sub-modules at once to make the complete software or application.





III) SYSTEM TESTING:

After integration testing, the developers deliver the software build to the testers to detect defects. Such a software build is called the "AUT".

AUT -> APPLICATION UNDER TEST.

Then the test engineers apply system testing on AUT in 3-ways:

1. USABILITY TESTING
2. FUNCTIONAL TESTING
3. NON-FUNCTIONAL TESTING



USABILITY TESTING

1. USER INTERFACE TESTING (Or) COSMETIC TESTING
2. MANUAL SUPPORT TESTING


USER INTERFACE TESTING ->
Look & Feel (attractiveness)
Easy to use (understandable)
Speed in interface (short navigation)



The purpose of Functional & Non-Functional Testing is to check the requirements for
Correctness
Completeness


After completing user interface testing on every screen of the application, the test engineers concentrate on requirements, correctness and completeness using BBT (Black Box Testing) techniques.


WHAT ARE THE BBT TECHNIQUES?


1. Boundary Value Analysis (BVA)
2. Equivalence Class Partition(ECP)
3. Error Guessing
4. Decision Table
5. State Transition Diagram



Functional testing is done in 2 ways:

1. Functionality Testing
2. Sanitation Testing

Functionality Testing:

In general test engineers start functional testing with functionality testing, following the approaches below.

1. GUI Coverage/Behavioral Coverage

The valid changes in the properties of objects in a window.

2. Error Handling

The prevention of wrong operations with meaningful error messages.

3. Input Domain Coverage

The range & type of inputs in terms of VALID/INVALID.

4. Manipulation Coverage

The correctness of the resulting output.

5. Order of Functionality – Eg) TAB button order

The correctness of the existing order with respect to the customer requirements.

6. Database Coverage/Backend Coverage

The impact of front-end screen operations on back-end table content.






SANITATION TESTING

During this testing, the testers look for extra functionalities which are not in the customer's specifications/BRS.

This Testing is also called as “GARBAGE TESTING”

Eg)

User Name field
Password field

OK button    Submit button

In the above example, there is no need for an extra button; either Submit or OK alone is sufficient.



NON – FUNCTIONAL TESTING:

After completing functional testing, the test engineers concentrate on non-functional testing. It is also an important phase of system testing.

But it is complex to conduct, and expensive.

1. Platform Independence
2. Reliability/Scalability
3. Compatibility/Portability
4. Installation/Uninstallation
5. Data Volume (Storage Testing)
6. Performance -> Load (how many concurrent users), Stress, Efficiency
7. Security (authentication) – Eg. encryption and decryption

Encryption: ABCDEF -> @#$njg$#$#
Decryption: @#$njg$#$# -> ABCDEF
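A toy sketch of the encryption/decryption idea above (a Caesar-style letter shift, for illustration only; this is NOT real security and only handles the uppercase letters A–Z):

```python
# Toy cipher: readable text becomes unreadable (encryption)
# and is recovered again (decryption). Uppercase A-Z only.

def encrypt(text, shift=3):
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

def decrypt(cipher, shift=3):
    return encrypt(cipher, -shift)  # shifting back undoes the shift

cipher = encrypt("ABCDEF")
print(cipher)           # -> DEFGHI
print(decrypt(cipher))  # -> ABCDEF
```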




IV) UAT (USER ACCEPTANCE TESTING)

After system testing, the development team concentrates on UAT in 2 ways:

(α) Alpha Testing -> the developers invite the customer to the organization/firm, conduct training sessions and get feedback.

(β) Beta Testing -> the developers or a responsible team go to the customer's place, explain the product and get feedback.

Wednesday, November 30, 2011

BBT AND WBT TECHNIQUES





BLACK BOX TECHNIQUES (BBT)

EQUIVALENCE CLASS PARTITION (ECP):

It is a technique in BBT designed to minimize the number of test cases by dividing the tests in such a way that the system is expected to behave the same way for all tests of each equivalence partition.


Input values to a program are partitioned into equivalence classes.

The equivalence class is determined by examining and analyzing the input data range.

In the HR system of a typical firm:

0–16 – they don't hire
16–18 – they can hire, but only part time
18–55 – full time
Above 55 – they don't hire



Number line:  0 ------ 16 ------ 18 ------ 55 ------ 100
Classes:           1        2        3         4


On the boundaries -> 16, 18, 55
Just above the boundaries -> 17, 19, 56
Just below the boundaries -> 15, 17, 54


Class 2 – on boundary: 16, 18
Just above: 17, 19
Just below: 15, 17

Class 3 – on boundary: 18, 55
Just above: 19, 56
Just below: 17, 54
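The HR age example can be sketched as a small function plus one representative test value per class (a hypothetical sketch of the rule described above; the function name is illustrative):

```python
# ECP sketch: every value inside a class should produce the
# same hiring decision, so one representative per class suffices.

def hiring_decision(age):
    if age < 16:
        return "don't hire"
    elif age < 18:
        return "part time"
    elif age <= 55:
        return "full time"
    else:
        return "don't hire"

# One test value per equivalence class:
for age in (10, 17, 30, 60):
    print(age, "->", hiring_decision(age))
```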





BOUNDARY VALUE ANALYSIS:


Boundaries and conditions are two major sources of defects in a software application or product.

Typical programming errors occur at the boundaries of equivalence classes. This may be due to purely psychological factors.

Programmers often fail to see the special processing required at the boundaries of equivalence classes.

Programmers may improperly use less than (<) instead of less than or equal to (<=).
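The classic `<` vs `<=` slip can be shown on the age example above (hypothetical function names; the boundary value 55 is exactly the input that exposes the bug):

```python
# BVA sketch: testing ON the boundary catches the off-by-one.

def is_full_time_buggy(age):
    return 18 <= age < 55    # bug: < excludes the boundary value 55

def is_full_time_fixed(age):
    return 18 <= age <= 55   # <= includes the boundary value 55

print(is_full_time_buggy(55))  # -> False (defect caught by BVA)
print(is_full_time_fixed(55))  # -> True
```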





















ERROR GUESSING

Test engineers write test cases and find defects based on their previous experience.

• Past failures
• Experience
• Brain Storming.
• What is the craziest thing we can do?
• Intuition.





Decision Table:


A decision table can be used when the outcome or logic involved in the program is based on a set of decisions and rules which need to be followed.

A decision table lists the various decision variables.





State Transition Diagram:


It is useful in situations where workflow modeling or data flow modeling has been done.

When a system must remember what happened before, or when a valid/invalid order of operations exists, a STATE TRANSITION TABLE/DIAGRAM is used.

It is an excellent way to capture certain types of system requirements and to document internal system design.

Eg) Consider a leave application system in an organization.

An employee can raise a request for leave if he is eligible (based on the number of days of leave he has already taken).

The application is sent to the manager for approval.
The manager then validates and approves or rejects the leave, based on the duration of the project, the reasons for taking leave, etc.
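The leave-application workflow above can be sketched as a state transition table (state and event names are illustrative assumptions): valid transitions are listed explicitly, and anything else is rejected, which is exactly the kind of invalid ordering this technique tests.

```python
# State-transition sketch: (current state, event) -> next state.

TRANSITIONS = {
    ("DRAFT",   "submit"):  "PENDING",
    ("PENDING", "approve"): "APPROVED",
    ("PENDING", "reject"):  "REJECTED",
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        # Invalid order of operations, e.g. approving an
        # unsubmitted request.
        raise ValueError(f"invalid transition: {event} from {state}")

s = next_state("DRAFT", "submit")   # -> PENDING
s = next_state(s, "approve")        # -> APPROVED
print(s)
```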














WHITE BOX TESTING TECHNIQUES(WBT)


Deals with testing the internal logic & structure of the code.
Explicit knowledge of the internal working of the system being tested is required.
Also called glass box/open box testing.
Covers testing of code, branches, paths and statements.


WBT TECHNIQUES:

STATEMENT COVERAGE:

In this type of testing, the code is executed in such a manner that every statement of the application is executed at least once. It helps assure that all statements execute without any side effects.


BRANCH COVERAGE:

No software application can be written in a continuous mode of coding; at some point we need to branch the code in order to perform particular functionality.

Branch coverage testing helps validate all the branches in the code & makes sure that no branch leads to abnormal behavior of the application.
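The difference between statement and branch coverage shows up even on a tiny function (a hypothetical sketch): one input executes every statement, but branch coverage also demands an input where the condition is false.

```python
# Coverage sketch: absolute(-5) alone executes every statement
# (100% statement coverage), but branch coverage also requires
# a test where the "if" is False, e.g. absolute(5).

def absolute(a):
    if a < 0:
        a = -a
    return a

assert absolute(-5) == 5  # covers the True branch (all statements)
assert absolute(5) == 5   # covers the False branch as well
```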

MUTATION TESTING:

It is a type of testing in which the application is tested against the code that was modified after fixing a particular bug or defect.

It also helps find out which code, or which coding strategy, helps develop the functionality most effectively.


BASIS PATH TESTING:

This technique allows the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for outlining a basis set of execution paths.

The test cases derived exercise each statement in the program at least once during testing.


FLOW GRAPHS:

A flow graph can be used to represent the logical control flow, and therefore all the execution paths that need testing.

To illustrate the use of flow graphs, consider the procedural design.



CYCLOMATIC COMPLEXITY:


It is a software metric that offers an indication of the logical complexity of a program, used in the context of the basis path testing approach.

It defines the number of independent paths in the basis set of a program and ensures all statements have been executed at least once.




In this example, two test cases are sufficient to achieve complete branch coverage, while four are necessary for complete path coverage. The cyclomatic complexity of the program is 4, as the strongly connected graph for the program contains 9 edges, 7 nodes and 1 connected component (9 - 7 + 2).

Cyclomatic Complexity = 4.
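The calculation above follows the standard formula V(G) = E - N + 2P, sketched here (the function name is illustrative):

```python
# Cyclomatic complexity: V(G) = E - N + 2P, where E = edges,
# N = nodes, P = connected components of the flow graph.

def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# The example above: 9 edges, 7 nodes, 1 component.
print(cyclomatic_complexity(9, 7))  # -> 4 independent paths
```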

Tuesday, November 29, 2011

SOFTWARE PROCESS DEVELOPMENT MODELS

1. WATERFALL MODEL
2. PROTOTYPE MODEL
3. EVOLUTIONARY MODEL
4. SPIRAL MODEL
5. FISH MODEL
6. V-MODEL
7. AGILE MODEL


1. WATERFALL MODEL:



ADVANTAGES:

1. It is a simple model.
2. Project monitoring and maintenance are very easy.

DISADVANTAGES:

1. Can't accept new requirements in the middle of the process.



2. PROTOTYPE MODEL




ADVANTAGES:

Whenever the customers are not clear about their requirements, this is the most suitable model.

DISADVANTAGES:

a. It is not a full-fledged process development model.
b. The prototype needs to be built at the company's cost.
c. It is a slightly time-consuming model.
d. The user may limit his requirements by sticking to the PROTOTYPE.


3. EVOLUTIONARY MODEL



ADVANTAGES:

Whenever the customers' requirements keep evolving, this is the most suitable model: new requirements are added after some period of time.

DISADVANTAGES:

1. Project monitoring and maintenance are difficult.
2. Deadlines can't be defined properly.


4. SPIRAL MODEL



Ex: risk-based scientific projects.

ADVANTAGES:

Whenever the project is highly risk-based, this is the suitable model.

DISADVANTAGES:

1. Time-consuming model.
2. Costly model.
3. Risk root cause analysis is not an easy task.

NOTE: The number of cycles depends upon the risk involved in the project and the size of the project. Every cycle has 4 phases, except the last one.


5. FISH MODEL



ADVANTAGES:

As both verification and validation are implemented, the outcome will be a quality product.

DISADVANTAGES:

1. Time consuming model.
2. Costly model.

VERIFICATION:

Verification is the process of checking each and every role in the organization in order to confirm whether they are working according to the company's process guidelines or not.

VALIDATION:

Validation is the process of checking, conducted on the developed product or its related parts, in order to confirm whether they are working according to expectations or not.

VERIFICATION: QUALITY ASSURANCE PEOPLE(Review, Inspections, Audits, Walk through)
VALIDATION: QUALITY CONTROL PEOPLE (Testing)


6. V- MODEL:




ADVANTAGES:

As the verification, validation and test management processes are maintained, the outcome will be a quality product.

DISADVANTAGES:

1. Time consuming model.
2. Costly model.


AGILE MODEL:

Before development of the application, the testers write the test cases and give them to the development team, so that it is easier for the developers to write defect-free programs.

Monday, November 28, 2011

SOFTWARE TESTING LIFE CYCLE (STLC) ~~

------------------------------------------------------------------------------------



Software Development Life Cycle (SDLC) Vs Software Test Life Cycle (STLC)



------------------------------------------------------------------------------------

STLC (Software Testing Life Cycle)








SOFTWARE TESTING LIFE CYCLE:

1) TEST INITIATION:

In general, the system testing process or STLC starts with test initiation or test commencement. In this phase, the Project Manager or Test Manager selects the reasonable approaches to be followed by the test engineers, and prepares a document called the TEST STRATEGY DOCUMENT in IEEE 829 format.

A TEST STRATEGY document is developed for each project. This document defines the scope and the general direction, approach, or path for testing in the project.


TEST STRATEGY MUST ANSWER THE FOLLOWING:

1. When will testing occur?
2. What kinds of testing will occur?
3. What kinds of risks are involved?
4. What are the critical success factors?
5. What is the testing objective?
6. What tools will be used?

TEST STRATEGY DOCUMENT FEATURES ARE AS FOLLOWS:

1) SCOPE & OBJECTIVE:
Brief account & purpose of the project.

2) BUDGET ISSUES:
Allocated budget for the project

3) TEST APPROACH:
Defines test approach between development stages & testing factors.

4) TEST ENVIRONMENT SPECIFICATIONS:
The test documents required to be prepared by the testing team during testing.

5) ROLES & RESPONSIBILITIES:
The consecutive jobs in both development & testing, and their responsibilities.

6) COMMUNICATION & STATUS REPORTING:
The required negotiation between the two consecutive roles, development & testing.

7) TESTING MEASUREMENTS & METRICS:
To estimate work completion in terms of quality assessment and test management process capabilities.

8) TEST AUTOMATION:
The possibilities for test automation with respect to the corresponding project requirements & the availability of testing facilities (or) tools.



9) DEFECT TRACKING SYSTEM:

The required negotiation between the development and testing teams to fix and resolve defects.


10) CHANGE & CONFIGURATION MANAGEMENT:

The required strategy to handle change requests from the user's side.

11) RISK ANALYSIS & MITIGATIONS (or Solutions):
Common problems that appear during testing and possible solutions to recover.

12) TRAINING PLAN:
The required number of training sessions for test engineers before the start of the project.


NOTE:
RISKS ARE FUTURE UNCERTAIN OR UNEXPECTED EVENTS WITH A PROBABILITY OF OCCURRENCE AND POTENTIAL FOR LOSS.




2) TEST PLAN:

After completion of the test strategy document, the test-lead-category people define the test plan in terms of:

WHAT TO TEST? (Development Plan)
HOW TO TEST? (From SRS document)
WHEN TO TEST? (Design document/Tentative Plan)
WHO WILL TEST? (Team Formation)


Converting the System plan to a Module plan:

SYSTEM PLAN -> MODULE PLAN

Development Plan -> Team Formation
Test Strategy -> Risk Analysis
-> Prepare test plan document
-> Review on test plan.

TEST TEAM FORMATION:

In general, the test planning process starts with test team formation, which depends upon the factors below:

• Availability of testers
• Availability of the test environment resources
• Identifying tactical risks

After completing test team formation, the test lead concentrates on risk analysis and mitigations or solutions.


Types of Risks:

1. Lack of knowledge on the domain.
2. Lack of budget.
3. Lack of resources.
4. Delay in deliveries.
5. Lack of development team seriousness.
6. Lack of communication.





Prepare Test Plan Document:-

After completing team formation and risk analysis, the test lead concentrates on the test plan document, in IEEE format, as follows:

TEST PLAN ID:

A unique name and number assigned to every test plan.

INTRODUCTION/SUMMARY:


Brief account about the project.

TEST ITEMS:

Number of modules in the project.

FEATURES TO BE TESTED:

The names of the modules to be tested (only the modules the team is responsible for).


TEST ENVIRONMENT:

The documents required to be prepared during testing & the H/W & S/W used in the project.

NOTE: ALL THE ABOVE POINTS ANSWER "WHAT TO TEST".



ENTRY CRITERIA:

The conditions under which the test engineers start test execution:
i. All test cases are completed.
ii. A stable software build is received from the developers.
iii. The test environment is established.




SUSPENSION CRITERIA:


The conditions under which the test engineers may interrupt test execution:
i. Major or severe bugs occur.
ii. Resources are not working.

EXIT CRITERIA:

The conditions under which the test engineers stop test execution:

i. All the test cases are completed.
ii. The schedule is crossed.
iii. All the defects are resolved.


TEST DELIVERABLES:


The documents the test engineers must submit to the test lead.

The documents are:
1. Test case documents.
2. Test summary documents.
3. Test log.
4. Defect report documents.
5. Defect summary documents.



NOTE: The above points answer HOW TO TEST?


ROLES & RESPONSIBILITIES:


The work allocated to the selected test engineers, and their responsibilities.

STAFF & TRAINING PLAN:

The names of the selected testing team members and the number of training sessions required for them.


NOTE: The above two points for WHO WILL TEST?


SCHEDULE:

The dates and times allocated for the project.

APPROVALS:


Signatures of the PM/QA/responsible people.

NOTE: The above two points for WHEN TO TEST?


REVIEW ON TEST PLAN:


After completing the test plan document, the test lead concentrates on a review of the document for completeness & correctness.

In this review meeting, the testing team conducts a coverage analysis.






3) TEST DESIGN PHASE

After preparing the test plan document, the test engineers selected by the team leads concentrate on preparing test cases in IEEE 829 format.

TEST CASE:

A set of inputs, execution conditions and expected results developed for a particular objective or test object, such as to exercise a particular program path (or) to verify compliance with a specific requirement.

A test case is not necessarily designed to expose a defect, but to gain knowledge or information.

Eg) Whether the program PASSES/FAILS the test.


WHY TEST CASE REQUIRED?

Necessary to verify successful and acceptable implementation of the product requirements.

Helps to find problems in the requirements of an application.

Determines whether we have reached the client's expectations.

Helps testers make SHIP (valid) or NO SHIP (invalid) decisions.


1. Randomly selecting or writing test cases doesn't indicate effective testing.
2. Writing a large number of test cases doesn't mean that many errors in the system will be uncovered.


Test engineers write test cases in 2 methods:

1) USER INTERFACE BASED TEST CASE DESIGN.
2) FUNCTIONAL & SYSTEM BASED TEST CASE DESIGN.





USER INTERFACE BASED TEST CASE DESIGN:
A test case in application development is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not. All the test cases below are static, because they are applicable to the build without operating it.

TEST CASE TITLES:

1. Check the spellings.
2. Check font uniqueness in every screen.
3. Check style uniqueness in every screen.
4. Check label uniqueness in every screen.
5. Check color contrast in every screen.
6. Check alignments of objects in every screen.
7. Check name uniqueness in every screen.
8. Check spacing uniqueness in between labels and objects.
9. Check dependent object grouping.
10. Check border of object group.
11. Check tool tips of icons in all screens.
12. Check abbreviations (or) full forms.
13. Check multiple data object positions in all screens. Eg) List Box, Menu, Tables.
14. Check Scroll bars in every screen.
15. Check short cut keys in keyboard to operate on our build.
16. Check visibility of all icons in every screen.
17. Check help documents. (Manual support testing)
18. Check Identity controls. Eg) Title of software, Version of S/w, Logo of company, Copy Rights etc.


NOTE:


The above usability test cases are applicable to any GUI application.
For these test cases, testers give the priority "P2".



Test Case Prioritization
Different organizations use different scales for the prioritization of test cases. One of the most commonly used scales divides test scenarios into the following 4 levels of priority.
1. BVT
2. P1
3. P2
4. P3
BVT: (Build Verification Test)

A test scenario marked as BVT verifies the core functionality of the component. If a BVT fails, it may block further testing of the component; hence it is suggested not to release the build. To certify the build, all BVTs should execute successfully.

E.g. Installation of Build.


P1 (Priority 1 Test)

A test scenario marked as P1 verifies the extended functionality of the component, covering most of its functionalities. To certify an interim release of a build, all BVTs and the majority of P1s should execute successfully.

E.g. Installation of Build with different flavors of Database (MSDE or SQL)





P2 (Priority 2 Test)

A test scenario marked as P2 verifies the extended functionality of the components at a lower priority. To certify the release of a build, all BVTs and the majority of P1s & P2s should execute successfully.

E.g. Validation, UI testing and logging, etc.


P3 (Priority 3 Test)
A test scenario marked as P3 verifies error handling, clean-up, logging, help, usage, documentation errors, spelling mistakes, etc. for the components.






4) TEST EXECUTION



After completing test case preparation, the test engineers concentrate on the levels of test execution.







LEVEL 0 -SANITY TESTING

Practically, the test execution process starts with sanity testing to estimate the stability of the build. In sanity testing, testers concentrate on the factors below through coverage of the basic functionalities in the build.

1. UNDERSTANDABLE
2. OPERABLE
3. OBSERVABLE
4. CONTROLLABLE
5. CONSISTENT
6. SIMPLE
7. MAINTAINABLE
8. AUTOMATABLE




The above Level 0 sanity testing estimates the testability of the build. This level-0 testing is also known as SANITY TESTING (or) SMOKE TESTING (or) TESTABILITY TESTING (or) BUILD ACCEPTANCE TESTING (or) BUILD VERIFICATION TESTING (or) TESTER ACCEPTANCE TESTING (or) OCTANGLE TESTING.



LEVEL 1 – COMPREHENSIVE TESTING (or) REAL TESTING

After completing Level 0 (sanity testing), testers conduct Level 1 real testing to detect defects in the build.

In this level, testers execute all the test cases, either manually or with automation, as "TEST BATCHES".

Every TEST BATCH consists of a set of dependent test cases. These test batches are also known as a TEST SUITE, TEST CHAIN or TEST BELT.

During test execution as batches on the build, testers prepare a "TEST LOG" document with 3 types of entries:

1. PASS (the tester's expected value equals the actual value)
2. FAIL (the tester's expected value does not equal the actual value)
3. BLOCKED (postponed due to lack of time)
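A hypothetical sketch of such a test log (case names are illustrative): each executed test case gets exactly one of the three entry types, and the counts feed the test summary later.

```python
# Test-log sketch: one entry per executed test case, with one
# of the three statuses described above.

from collections import Counter

test_log = [
    {"case": "TC_01_login_valid",    "status": "PASS"},
    {"case": "TC_02_login_locked",   "status": "FAIL"},
    {"case": "TC_03_password_reset", "status": "BLOCKED"},
]

# Summarize the run by status:
print(Counter(entry["status"] for entry in test_log))
```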

LEVEL 2 – REGRESSION TESTING

During Level 1 (real or comprehensive testing), testers report mismatches between the expected values and the build's actual values as defect reports.

After receiving defect reports from the testers, the developers conduct a review meeting to fix the defects.

If the defects are accepted by the developers, they make the changes in the code and release a modified build with a release note. The release note of a modified build describes the changes made in that build to resolve the reported defects.

Testers then plan the regression testing to be conducted on that modified build with respect to the release note.


APPROACH:

Receive the modified build along with the release note from the developers.
Apply sanity testing on the modified build.
Run the selected test cases on the modified build to ensure the correctness of the modifications, without side effects in the build.

Side Effects -> effects on other functionalities.


After completing the required level of regression testing, the testers conduct the remaining Level 1 test execution.



LEVEL 3 – FINAL REGRESSION


After completion of the entire testing process, the testing team concentrates on Level 3 test execution.

This level of testing is also known as POST MORTEM TESTING (or) FINAL REGRESSION TESTING (or) PRE-ACCEPTANCE TESTING.

During this, if they find any defect it is called a GOLDEN DEFECT (or) LUCKY DEFECT.

After resolving all golden defects, they concentrate on UAT along with the developers.

The act of running a test and observing the behavior of the application and its environment is called TEST EXECUTION.

It also refers to the sequence of actions performed to be able to say that a test has been completed.


TEST AUTOMATION EXECUTION – a s/w tool executes all the test cases, in the form of test scripts, on the application.





5. TEST REPORT


During Level 1 & Level 2, if a tester finds any mismatch between the expected and actual values, he prepares a defect report in IEEE format.

There are 3 ways to submit a defect to the developers:


1. CLASSICAL BUG REPORTING PROCESS.




DRAWBACKS:

• Time Consuming.
• No Transparency
• Redundancy.(Repeating)
• No Security(Hackers may hack the mails)

NOTE:

TE : TEST ENGINEERS
D : DEVELOPERS




2. COMMON REPOSITORY ORIENTED BUG REPORTING PROCESS.






COMMON REPOSITORY: It is a server which allows only authorized people to upload and download.

DRAWBACKS:

• Time Consuming.
• No Transparency
• Redundancy.(Repeating)





3. BUG TRACKING TOOL ORIENTED BUG REPORTING PROCESS.




BUG TRACKING TOOL: It is a software application which can be accessed only by authorized people and provides all the facilities for bug tracking and reporting.

Ex: BUGZILLA, ISSUE TRACKER, PR TRACKER (PR PERFORMANCE REPORTER)

No Transparency: the test engineer can't see what is happening in the development department, and the developer can't see what is going on in the testing department.
Redundancy: there is a chance that the same defect will be found by all the test engineers.

Ex: Suppose all the test engineers find the same defect in the login screen's login button; then they all raise the same defect.

BUG TRACKING TOOL ORIENTED BUG REPORTING PROCESS:

The test engineer enters the bug tracking tool, adds the defect to the template with the "add defect" feature and writes the defect in the corresponding columns; the test lead observes it in parallel through the bug tracking tool and assigns the severity.
The development lead also enters the bug tracking tool. He assigns the priority and assigns the task to a developer. The developer enters the tool, understands the defect and rectifies it.
Tool: something that is used to complete the work easily and perfectly.

Note: some companies use their own bug tracking tool, developed in their own language. Such a tool is called an "IN-HOUSE TOOL".



6. TEST CLOSURE ACTIVITY:

This is the final activity in the testing process, done by the test lead, who prepares the test summary report. It contains information like:

the number of cycles of execution, the number of test cases executed in each cycle, the number of defects found in each cycle, the defect ratio, etc.





DEFECT LIFE CYCLE














Defect Severity is classified into four types:

FATAL : Unavailability of functionality/navigational problems.
MAJOR : Defects in major functionalities.
MEDIUM : The defect is not severe, but it should be resolved.
MINOR : Defects related to LOOK AND FEEL.

These defects have to be rectified.

Generally, severity is assigned by senior test engineers and the test lead, and can be changed by project managers or lead people depending on the situation.




STATUS:

The status of defect:

NEW – When the defect is identified by the test engineer for the first time, he sets the status as NEW.

OPEN – When the developer accepts the defect, he sets the status as OPEN.

RE-OPEN – If the test engineer feels that the raised defect was not rectified properly by the developer, the tester sets the status as RE-OPEN.

CLOSE – When the test engineer feels that the raised defect is resolved, he sets the status as CLOSE.

FIXED – Whenever the raised defect is accepted and rectified by the developer, he sets the status as FIXED.

HOLD – Whenever the developer is in a dilemma whether to accept or reject the defect.

AS PER DESIGN – Whenever new changes are incorporated into the build by the developer and the tester is not aware of them, the tester will raise a defect; the developer then sets the status as AS PER DESIGN.

TESTER'S ERROR – If the developer feels that it is not a defect at all, but the result of an incorrect process on the tester's side, the developer sets the status as TESTER'S ERROR.

REJECT – When the developers require more clarity on the defect and do not accept it.
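The status flow above can be sketched as a small state machine (the status names and allowed moves follow these notes; real bug-tracking tools differ in their exact flow):

```python
# Which statuses a defect may move to from each status (a sketch of the
# flow described in these notes, not any specific tool's rules).
ALLOWED = {
    "NEW":     {"OPEN", "REJECT", "HOLD", "TESTERS ERROR", "AS PER DESIGN"},
    "OPEN":    {"FIXED"},
    "FIXED":   {"CLOSE", "RE-OPEN"},
    "RE-OPEN": {"FIXED"},
}

def transition(current, new):
    """Return the new status if the move is allowed, else raise ValueError."""
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"cannot move {current} -> {new}")
    return new

status = "NEW"
status = transition(status, "OPEN")   # developer accepts the defect
status = transition(status, "FIXED")  # developer rectifies it
status = transition(status, "CLOSE")  # tester verifies the fix
print(status)  # CLOSE
```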


FIX-BY/DATE/BUILD: The developer who fixed the defect, the date, and the build number are mentioned in this section.

DATE CLOSURE: The date on which the defect was rectified is mentioned in this section.

DEFECT AGE: The time gap between “Reported on” and “Resolved on”.
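Defect age, as defined above, is just the gap between those two dates; a minimal sketch (the dates are made up):

```python
from datetime import date

def defect_age(reported_on, resolved_on):
    """Defect age = time gap between 'Reported on' and 'Resolved on'."""
    return (resolved_on - reported_on).days

# Hypothetical defect reported on Dec 1 and resolved on Dec 3.
print(defect_age(date(2011, 12, 1), date(2011, 12, 3)))  # 2
```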


TYPES OF DEFECTS


1. User Interface Bugs: SUGGESTIONS
Ex 1: Spelling Mistake → High Priority
Ex 2: Improper alignment → Low Priority

2. Boundary Related Bugs: MINOR
Ex 1: Does not allow valid type → High Priority
Ex 2: Allows invalid type also → Low Priority
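Boundary-related bugs like the two above are caught by testing values exactly on and just outside the limits; a sketch with a made-up field that should accept ages 18–60:

```python
def accepts_age(value):
    """Hypothetical input field that should accept integers 18..60 inclusive."""
    return isinstance(value, int) and 18 <= value <= 60

# Boundary-value data: both boundaries, one value inside, one outside each side.
assert accepts_age(18) and accepts_age(60) and accepts_age(35)  # valid values allowed
assert not accepts_age(17) and not accepts_age(61)              # invalid values rejected
```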

3. Error Handling Bugs: MINOR
Ex 1: Does not provide an error message window → High Priority
Ex 2: Error messages convey an improper meaning → Low Priority

4. Calculation Bugs: MAJOR
Ex 1: Final output is wrong → Low Priority
Ex 2: Dependent results are wrong → High Priority


5. Race Condition Bugs: MAJOR
Ex 1: Dead Lock → High Priority
Ex 2: Improper order of services → Low Priority

6. Load Condition Bugs: MAJOR
Ex 1: Does not allow multiple users to operate → High Priority
Ex 2: Does not allow customer expected load → Low Priority

7. Hardware Bugs: MAJOR
Ex 1: Does not handle device → High Priority
Ex 2: Wrong output from device → Low Priority

8. ID Control Bugs: MINOR
Ex: Logo missing, wrong logo, version number mistake, copyright window missing, developers' names missing, testers' names missing.

9. Version Control Bugs: MINOR
Ex: Difference between two consecutive build versions.

10. Source Bugs: MINOR
Ex: Mistakes in help documents.




WAYS OF TESTING:

There are 2 ways of Testing:

1. MANUAL TESTING
2. AUTOMATION TESTING


1. MANUAL TESTING: Manual Testing is a process in which all the phases of the STLC (Software Testing Life Cycle) – test planning, test development, test execution, result analysis, bug tracking and reporting – are accomplished manually with human effort.

DRAWBACKS:


1. More human resources are required.
2. Time consuming.
3. Less accuracy.
4. Tiredness.
5. Simultaneous actions are almost impossible.
6. Repeating the same task again and again in the same fashion is almost impossible.

2. AUTOMATION TESTING: Automation Testing is a process in which all the drawbacks of Manual Testing are addressed properly, providing speed and accuracy to the existing testing process.

DRAWBACKS:

1. Automated tools are expensive.
2. All the areas of the application can’t be tested successfully with the automated tools.
3. Lack of automation Testing experts.

Note: Automation Testing is not a replacement for Manual Testing; it is just a continuation of Manual Testing.

Note: Automation Testing is recommended to be implemented only after the application has come to a stable stage.




TERMINOLOGY:

DEFECT PRODUCT: If the product does not satisfy some of the requirements but is still usable, it is known as a Defect Product.

DEFECTIVE PRODUCT: If the product does not satisfy some of the requirements and is not usable either, it is known as a Defective Product.

QUALITY ASSURANCE: It is a department which checks each and every role in the organization, in order to confirm whether they are working according to the company's process guidelines or not.

QUALITY CONTROL: It is a department which checks whether the developed products or their related parts are working according to the requirements or not.

NCR: If a role is not following the process, the penalty given to him is known as an NCR (Non-Conformance Raised). Ex: IT – NCR, NON-IT – MEMO.

INSPECTION: It is a process of sudden checking conducted on the roles (or) departments, without any prior intimation.

AUDIT: It is a process of checking, conducted on the roles (or) department with prior intimation well in advance.

There are 2 types of Audits:

1. INTERNAL AUDIT
2. EXTERNAL AUDIT

INTERNAL AUDIT: If the audit is conducted by the internal resources of the company, then it is known as an Internal Audit.
EXTERNAL AUDIT: If the audit is conducted by external people, then it is known as an External Audit.

AUDITING: To audit Testing Process, Quality people conduct three types of Measurements & Metrics.

TYPES OF TESTING

1. BUILD ACCEPTANCE TEST/BUILD VERIFICATION TEST/SANITY TESTING:
It is a type of testing in which one performs overall testing on the released build, in order to confirm whether it is proper for conducting detailed testing or not.
Usually during this type of testing they check the following:
- Whether the build is properly installed or not
- Whether one can navigate to all the pages of the application or not
- Whether all the important functionalities are available or not
- Whether all the required connections are properly established or not
Some companies even call this SMOKE TESTING, but other companies say that the developers checking whether the build is proper before releasing it to the testing department is SMOKE TESTING, and whatever the test engineers check once the build is released is BAT, BVT or SANITY TESTING (BAT: Build Acceptance Test, BVT: Build Verification Test).

2. REGRESSION TESTING:
It is a type of testing in which one performs testing on already tested functionality again and again. Usually we do this in 2 scenarios:

- Whenever the testers identify defects and raise them to the developers, and the next build is released, the test engineers will check the defect functionality as well as the related functionality once again.
- Whenever some new features are added and the next build is released to the testing team, the test engineers will check all the features related to those new features once again. This is known as Regression Testing.

Note: Testing new features for the first time is known as new testing; it is not regression testing.
Note: Regression testing starts from the 2nd build and continues up to the last build.

3. RETESTING:
It is a type of testing in which one performs testing on the same functionality again and again with different sets of values, in order to confirm whether it is working fine or not.
Note: Retesting starts from the 1st build and continues up to the last build.
Note: Retesting is also conducted during regression testing.

4. ALPHA TESTING:
It is type of user acceptance testing conducted in the software company by the test engineers just before delivering the application to the client.

5. BETA TESTING:
It is also a type of user acceptance testing conducted in the client’s place either by the end users or third
party experts, just before actual implementation of the application.


6. STATIC TESTING (Look and Feel Testing):
It is a type of testing in which one will perform testing on the application or its related factors without doing any actions.
EX: GUI Testing, Document Testing, Code Reviews etc…,

7. DYNAMIC TESTING:
It is a type of testing in which one will perform testing on the application or its related factors by doing some actions.
Ex: Functional Testing.

8. INSTALLATION TESTING:
It is a type of testing in which one installs the application into the environment, following the guidelines provided in the deployment document (Installation Document), in order to confirm whether these guidelines are really suitable for installing the application into the environment or not.

9. PORT TESTING:
It is a type of testing in which one installs the application into the original client's environment and checks whether it is compatible with that environment or not.

10. USABILITY TESTING:
It is a type of testing in which one will test the user friendliness of the application.

11. COMPATIBILITY TESTING:
It is a type of testing in which one installs the application into multiple environments, prepared with different configurations, in order to check whether the application is suitable for those environments or not.
Usually this type of testing is focused on in product-based companies.

12. MONKEY TESTING:
It is a type of testing in which one performs abnormal actions on the application intentionally, in order to check the stability of the application.

13. EXPLORATORY TESTING:
EXPLORING: Having basic knowledge about some concept, doing something, and learning more about the same concept is known as exploring.
It is a type of testing in which domain experts perform testing on the application without knowledge of the requirements, exploring the functionality in parallel.

14. END TO END TESTING:
It is a type of testing in which one performs testing on the end-to-end scenarios of the application.
EX: Login ---> Balance Enquiry ---> Withdraw ---> Balance Enquiry ---> Logout.

15. SECURITY TESTING:
It is a type of testing in which one checks whether the application is properly protected or not. To do the same, the BLACK BOX test engineers will perform the following types of testing:
1. AUTHENTICATION TESTING: In this type of testing one enters different combinations of user names and passwords and checks whether only the authorized people are able to access the application or not.
2. DIRECT URL TESTING: In this type of testing one directly enters the URLs of secured pages, in order to check whether the secured pages are directly accessible or not without logging in to the application.
3. FIREWALL LEAKAGE TESTING (or) USER PRIVILEGES TESTING: It is a type of testing in which one enters the application as one level of user and tries to access beyond that user's limits, in order to check whether the firewalls are working properly or not.
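The authentication and user-privilege checks above can be sketched as a toy example (the user names, passwords and privilege sets here are all made up for illustration):

```python
# Made-up user store: name -> (password, set of allowed actions).
USERS = {
    "admin": ("secret", {"view", "edit", "delete"}),
    "guest": ("guest",  {"view"}),
}

def login(user, password):
    """Authentication check: only the right user/password pair succeeds."""
    record = USERS.get(user)
    return record is not None and record[0] == password

def can(user, action):
    """User-privilege check: a user must not act beyond their level."""
    return action in USERS.get(user, ("", set()))[1]

# Authentication testing: valid and invalid combinations.
assert login("admin", "secret") and not login("admin", "wrong")
# Privilege testing: a guest can view but must not delete.
assert can("guest", "view") and not can("guest", "delete")
```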

16. MUTATION TESTING:
It is a type of testing in which one performs testing on the application or its related factors by making some changes to them.

17. SOAK TESTING/RELIABILITY TESTING:
It is a type of testing in which one uses the application continuously for a long period of time, in order to check the stability of the application.

18. ADHOC TESTING:
It is a type of testing in which one performs testing in their own style after understanding the requirements clearly.
Note: Usually this type of testing is encouraged in the final stages of the project.

19. INPUT DOMAIN TESTING:
It is a part of functionality testing. Test engineers maintain special structures to define the size and type of every input object.

20. INTER SYSTEM TESTING:
It is also known as end-to-end testing. During this test, the testing team validates whether our application build coexists with other existing software or not.

21. PARALLEL TESTING:
It is also known as comparative testing and applies to software products only. During this test, the testing team compares our application build with competitors' products in the market.

22. PERFORMANCE TESTING:
It is an advanced testing technique and expensive to apply, because the testing team has to create a huge environment to conduct it. During this test, the testing team validates the speed of processing. As part of performance testing, the testing team conducts load testing and stress testing.

23. LOAD TESTING:
The execution of our application under the customer-expected configuration and the customer-expected load, to estimate performance, is called load testing.
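The idea of customer-expected load can be sketched by firing many simultaneous requests; here `handle_request` is a made-up stand-in for one user action against the application under test (real load tests use dedicated tools against the deployed system):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Made-up stand-in for one user action against the application."""
    return n * 2

# Simulate the customer-expected load: e.g. 50 simultaneous users.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, range(50)))

print(len(results))  # 50 -- every simulated user got a response
```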

24. STRESS TESTING:
The execution of our application under the customer-expected configuration and irregular loads beyond the expected limits, to estimate performance, is called stress testing.

25. STORAGE TESTING:
The execution of the application with huge amounts of resources, to estimate storage limitations, is called storage testing.

26. DATA VOLUME TESTING:
The execution of our application under the customer-expected configuration, to estimate the peak limits of data, is called data volume testing.

27. BIG BANG TESTING/INFORMAL TESTING/SINGLE STAGE TESTING:
The testing team conducts a single stage of testing after completion of the entire system development, instead of multiple stages.

28. INCREMENTAL TESTING/FORMAL TESTING:
Multiple stages of testing from unit level to system level are called incremental testing. It is also known as formal testing.

Monday, October 24, 2011

DBMS AND SQL COMMANDS BY LOKRE

DBMS – DATA BASE MANAGEMENT SYSTEM.

RDBMS -- RELATIONAL DBMS.


DATA --- Collection of Files.

Files  .txt (Notepad)
.doc (Ms-Word)
.ppt (Ms- PowerPoint)
.xls (Ms- Excel)
.pdf (Acrobat)

And some more example you can find in the below links for file extensions

http://www.fileinfo.com/filetypes/common

http://en.wikipedia.org/wiki/List_of_file_formats



DATA BASE:


Logical container to store the data in the form of tables, objects & Procedures, functions etc.









ODBC  Open Data base Connectivity.



2- Tier Architecture :



FRONT END ODBC BACK END
Java, .Net (Developers) DATABASE
(SQL, MYSQL, DB2, SYBASE, FOXPRO)

Database Developers





It is named as 2-Tier Architecture because it has Front end and Back end.


Example:

Front End -- >> Gmail’s Home page (User Interface page)
Back End -- > > Data stored here (Your emails and data files, attachments etc)


If you’re using java, then it’s JDBC connectivity.
ADO. NET connectivity is for DOT NET.






3 – Tier Architecture:



APPLICATION SOFTWARE
|
|

FRONT END ODBC BACK END
(Developers) DATABASE
(SQL, MYSQL, DB2, SYBASE, FOXPRO)

Database Developers


It is named as 3-Tier Architecture because it has
Application Software, Front end and Back end.


APPLICATION SOFTWARE:

Application software, also known as an application or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install one.




4 – Tier Architecture:



APPLICATION SOFTWARE
(Desktop Applications)
|
|

FRONT END ODBC BACK END
(Developers) DATABASE
(SQL, MYSQL, DB2, SYBASE, FOXPRO)
|
| Database Developers
Web Browser
(ASP. NET, PHP)


It is named as 4-Tier Architecture because it has
Application Software, Front end, Back end and Web Browser.










DATABASE MANAGEMENT SYSTEM (DBMS)

There are 3- types of DBMS:

HDBMS -- Hierarchical DBMS
NDBMS -- Network ’’
RDBMS -- Relational ’’

RDBMS -- 12 Rules (E F CODD RULES (Edgar F. Codd Rules)).

http://en.wikipedia.org/wiki/Types_of_DBMS

RDBMS ----

ORACLE – PL/SQL -- 80% Market (Platform Independent) -12 EF Codd Rules
SQL SERVER -- 20% market
MS ACCESS
MY SQL --
DB2
SYBASE
FOXPRO
POSTGRESQL







Oracle PL/SQL follows 12 EF codd Rules.

http://en.wikipedia.org/wiki/Codd's_12_rules

DB2, FOXPRO and POSTGRESQL follow only 4/5 of the E F CODD RULES.


Platform Independent: Application that works on any platform (UNIX, Linux, MAC and Windows).



GUI – GRAPHICAL USER INTERFACE (Windows 7/XP)


NOT GUI – Eg) Command prompt.



-------------------------------------------------------------------------------------

DDL

DDL is abbreviation of Data Definition Language. It is used to create and modify the structure of database objects in database.

Examples: CREATE, ALTER, DROP statements

DCL

DCL is abbreviation of Data Control Language. It is used to create roles, permissions, and referential integrity as well it is used to control access to database by securing it.

Examples: GRANT, REVOKE statements

TCL


TCL is abbreviation of Transactional Control Language. It is used to manage different transactions occurring within a database.

Examples: COMMIT, ROLLBACK statements

DML


DML is abbreviation of Data Manipulation Language. It is used to retrieve, store, modify, delete, insert and update data in database.

Examples: SELECT, UPDATE, INSERT statements


-------------------------------------------------------------------------------------



DATA DEFINITION LANGUAGE (DDL)

CREATING A TABLE

CREATE TABLE tablename
(
Column1 datatype (size),
Column2 datatype (size)
);


ALTERING THE TABLE

1. ALTER TABLE tablename ADD (colname datatype (size), colname datatype (size));
2. ALTER TABLE tablename DROP (colname, colname);
3. ALTER TABLE tablename RENAME COLUMN oldname TO newname;

MODIFY THE TABLE

ALTER TABLE tablename MODIFY (colname datatype (size));

DROPPING THE TABLE

DROP TABLE tablename;

RENAMING THE TABLE

RENAME oldtable TO newtable;


TRUNCATE THE TABLE – the data will be completely removed, but the table structure remains

TRUNCATE TABLE tablename;




DESCRIBE

DESC tablename;
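The DDL statements above can be tried from any SQL prompt; as a self-contained, runnable sketch, here is the same CREATE/ALTER/DROP flow using Python's built-in sqlite3 module (note SQLite's dialect is more limited than Oracle's, so columns are added one at a time):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = con.cursor()

# CREATE TABLE tablename (column datatype, ...);
cur.execute("CREATE TABLE employee (eno INTEGER, name TEXT)")

# ALTER TABLE ... ADD -- SQLite accepts one column per statement,
# unlike Oracle's ADD (col1 type, col2 type) list form shown above.
cur.execute("ALTER TABLE employee ADD COLUMN salary REAL")

# DESC equivalent in SQLite: PRAGMA table_info.
cols = [row[1] for row in cur.execute("PRAGMA table_info(employee)")]
print(cols)  # ['eno', 'name', 'salary']

# DROP TABLE tablename;
cur.execute("DROP TABLE employee")
```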


-------------------------------------------------------------------------


DATA MANIPULATION LANGUAGE (DML)


INSERTING THE DATA IN A TABLE

To insert records sequentially

INSERT INTO tablename VALUES (Val1, Val2, Val3…);

To insert records randomly

INSERT INTO tablename (Column1, Column2, Column3…) VALUES (Val1, Val2, Val3…);

To insert Multiple Records (Sequentially and Randomly)

INSERT INTO tablename VALUES (&CID, '&CUSTNAME', '&BRANCH', &BRANCHID, '&CDOB');

UPDATING THE DATA IN A TABLE

UPDATE tablename SET Colname= value WHERE condition;

Eg)
UPDATE employee SET name = 'SUNEETHA' WHERE name = 'SUNITHA';

UPDATE banking SET CUSTNAME = 'ARIGI' WHERE CUSTNAME = 'suneetha'; (ERROR – string comparison is case-sensitive, so the stored value 'SUNEETHA' does not match)

In SQL, we cannot UPDATE and INSERT in a single statement; but in PL/SQL we can process many records in one block.


DELETING THE DATA IN TABLE

DELETE FROM tablename WHERE condition;

Eg)

DELETE FROM employee WHERE eno=1;
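A runnable sketch of the INSERT/UPDATE/DELETE flow above, using Python's built-in sqlite3 (the table and values are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE employee (eno INTEGER, name TEXT)")

# INSERT INTO tablename VALUES (...);
cur.execute("INSERT INTO employee VALUES (1, 'SUNITHA')")

# UPDATE tablename SET colname = value WHERE condition;
cur.execute("UPDATE employee SET name = 'SUNEETHA' WHERE name = 'SUNITHA'")

# DELETE FROM tablename WHERE condition;
cur.execute("DELETE FROM employee WHERE eno = 2")  # no row matches, nothing removed

name = cur.execute("SELECT name FROM employee WHERE eno = 1").fetchone()[0]
print(name)  # SUNEETHA
```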



-----------------------------------------------------------------------------

DATA QUERY LANGUAGE (DQL)


SELECT COMMAND -- > To retrieve the data from table.


SELECT * FROM tablename;


To retrieve a selected columns

SELECT Columns FROM tablename WHERE condition;

Eg)

SELECT eno, name, age, salary FROM emp WHERE eno=1;
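Both SELECT forms above can be demonstrated with Python's built-in sqlite3 and a made-up emp table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE emp (eno INTEGER, name TEXT, age INTEGER, salary REAL)")
cur.executemany("INSERT INTO emp VALUES (?, ?, ?, ?)",
                [(1, 'A', 30, 1000.0), (2, 'B', 40, 2000.0)])

# SELECT * FROM tablename;  -- every column of every row
all_rows = cur.execute("SELECT * FROM emp").fetchall()

# SELECT columns FROM tablename WHERE condition;  -- selected columns/rows
row = cur.execute("SELECT eno, name, age, salary FROM emp WHERE eno = 1").fetchone()
print(len(all_rows), row)  # 2 (1, 'A', 30, 1000.0)
```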



---------------------------------------------------------------


TRANSACTIONAL CONTROL LANGUAGE (TCL)


(Or)

DATA TRANSACT LANGUAGE (DTL)

Syntax:

COMMIT;

Commit command saves all transactions permanently.

Syntax:

ROLLBACK;

To undo transactions.
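COMMIT and ROLLBACK can also be demonstrated with Python's built-in sqlite3, managing the transactions explicitly (a minimal sketch; Oracle's SQL*Plus behaves analogously):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None  # autocommit mode: we issue BEGIN/COMMIT/ROLLBACK ourselves
cur = con.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")

cur.execute("BEGIN")
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("COMMIT")    # transaction saved permanently

cur.execute("BEGIN")
cur.execute("INSERT INTO t VALUES (2)")
cur.execute("ROLLBACK")  # transaction undone

count = cur.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1 -- only the committed row survived
```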


-------------------------------------------------------




DATA CONTROL LANGUAGE (DCL):

USER CREATION

CREATE USER &lt;username&gt; IDENTIFIED BY &lt;password&gt;;
Eg)
CREATE USER SUNEETHA IDENTIFIED BY SUNEETHA;


GRANT:

GRANT &lt;privileges&gt; TO &lt;username&gt;;

PRIVILEGES - > Connect, resource

Eg) If you want to give permission to user called “TESTER” then, here is the syntax:

GRANT connect, resource to TESTER;



REVOKE

REVOKE &lt;privileges&gt; FROM &lt;username&gt;;

PRIVILEGES - > Connect, resource

Eg) If you want to take back the permission from user called “TESTER” then, here is the syntax:

REVOKE connect, resource FROM TESTER;

CONNECTING TO DATABASE

1. CONN SUNEETHA/SUNEETHA (SQL PROMPT)
2. USERNAME ---------- (SQL DEVELOPER TOOL)
PASSWORD ----------


DROP THE USER

Drop user USERNAME;
Eg) Drop user SUNEETHA;


---- CONTINUES.......

SQL BASICS BY LOKRE

SQL BASIC QUERIES

SQL>Conn /as sysdba
(Conn Username/Password)


1. SQL>Show user;

USER is SYSTEM

2. SQL>select banner from v$version;

BANNER
----------------------------------------
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
PL/SQL Release 10.2.0.1.0 - Production
"CORE 10.2.0.1.0 Production"
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 – Production


3. SQL>Select global_name from global_name;

GLOBAL_NAME
XE

4. SQL> Select * from tab;

It displays all the tables from Database.


5. SQL> Select * from all_users;

It shows the list of users.

6. SQL>Clear scr;

It clears the screen.


There are two types of tables in Oracle

1. SYSTEM TABLES (or) DATA DICTIONARY TABLES (or) PREDEFINED TABLES.
2. USER DEFINED TABLES.


SQL – STRUCTURED QUERY LANGUAGE


DDL (Data Definition Language)

CREATE, ALTER, DROP, RENAME, TRUNCATE

DML (DATA MANIPULATION LANGUAGE)

INSERT, UPDATE, DELETE

DQL (DATA QUERY LANGUAGE)

SELECT

TCL (TRANSACTIONAL CONTROL LANGUAGE)

COMMIT, ROLLBACK


DCL (DATA CONTROL LANGUAGE)

GRANT, REVOKE


LINKS TO SQL :

http://st-curriculum.oracle.com/tutorial/SQLDeveloper/index.htm
http://www.vacic.org/lib/sql/
http://beginner-sql-tutorial.com/sql.htm
http://www.w3schools.com/sql/default.asp


PL/SQL
http://www.plsql-tutorial.com/

LOKREs NOTES

BASICS:



ORACLE is popularly expanded as

OAK RIDGE AUTOMATIC COMPUTER AND LOGICAL ENGINE

(strictly, that was the name of a 1950s computer at Oak Ridge National Laboratory; Oracle Corporation took its name from a CIA project code-named "Oracle").


ORACLE

1. USER.
2. DEVELOPER.
3. ADMIN/ADMINISTRATOR.

USER -- SQL knowledge is enough.
DEVELOPER -- SQL, PL/SQL knowledge is enough.
ADMIN -- He should know the following points:
1. Sql
2. Pl/Sql
3. How to take a backup.
4. Maintenance of data
5. Tuning
6. Restore
7. Replication
8. Data Guard

Replication is the process of sharing information so as to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility.


Data Guard:

Providing the security to database.








ORACLE FLAVOURS

1. ENTERPRISE EDITION – for companies.
2. EXPRESS EDITION (XE) – for standalone use (single user/single system).
3. STANDARD EDITION – for global use.
4. WORKGROUP – for office use.


VERSIONS OF ORACLE:

1979 – Version 1
1990 – 6.0 Version
1995 – 7.0
1996 – 7.1
1997 – 7.2
1998 – 8i (‘i’ stands for internet; it is the first internet version and is platform independent).

2000 – 9i
2004 –10g (‘g’ stands for GRID COMPUTING).
2007 – 11g


Grid computing is a term referring to the combination of computer resources from multiple administrative domains to reach a common goal.

http://ss64.com/ora/syntax-versions.html

Monday, September 19, 2011

SOFTWARE TESTING




TESTING:

Identifying defects in the product.

The variance between the expected and the actual product.

Eg: Expected --- > What the customer/client expects from the company.
Actual --- > What the company designs/develops/manufactures.



SOFTWARE:

Software is an application (app) or a set of programs. It might be a

1> Web application. (Websites)
2> Desktop application. (Eg. Microsoft's operating system and its applications (MS Office))
3> Mobile application.



SOFTWARE TESTING:

The process that involves the operations of system.

(Or)

The Process of executing a program with the intent of finding errors.

There are 2 ways of Testing:

1. MANUAL TESTING
2. AUTOMATION TESTING.


WHY DON'T DEVELOPERS GET INVOLVED IN TESTING?

A developer thinks in a positive way while developing, so he doesn't take the chance of thinking in a negative way, i.e., about defects.






WHY SOFTWARE TESTING?

To find out the bugs in the newly developed software.
To deliver defect free product to reach the customer/client/user expectations & needs.
To satisfy the customer needs.



TESTERS ROLE:

TESTER –> (Test the) -- APPLICATION – (Find the) DEFECTS – (Sends to) DEVELOPER



IMPORTANT NOTE:

ERROR --- > Occurs in a program (while developing or writing).
DEFECT --- > After implementation, the tester finds it and sends it to the developer.
BUG --- > A defect accepted by the developer.

HISTORY OF S/W TESTING




Usage of the term "bug" to describe a defect has been a part of engineering jargon for many decades, perhaps even from the times of Tomas Edison. In software testing it always refers to September 9th, 1945, when the first real bug traced an error in the Harvard Mark II, an electromechanical computer. This bug was carefully removed and taped to the log book (see picture: The First "Computer Bug" ). This history is usually connected with the name Grace Murray Hopper, who described this event in the computer log.

Let us start the history of testing from the 1950s, when the first modern programming language was designed: FORTRAN, the "FORmula TRANslator", invented by John W. Backus; the first FORTRAN compiler was delivered in April 1957.

The history of computers starts from the Analytical Engine, designed by Charles Babbage starting in 1837; in 1842 he enlisted the help of Lady Ada Lovelace as a translator. Ada called herself 'an Analyst'.
Charles Babbage (1791–1871) was an English mathematician, philosopher, inventor and mechanical engineer who originated the concept of a programmable computer. [Wiki]



1950-1960

1953 Dr. W. Edwards Deming published Management's Responsibility for the Use of Statistical Techniques in Industry. He outlined 14 quality principles in this book.
1954 - The first truly mass-produced computer IBM 650 was marketed.
1955 - Grace Hopper created Flow-matic, the first high-level language.
1955 the first computer user group, called SHARE was formed.
Until 1957 it was the debugging oriented period, when there was no clear difference between testing and debugging.


1960-1970

1960 - Digital Equipment Corporation (DEC) marketed the PDP-1,considered the first commercial minicomputer
1960 - Block structure for better organization of code in the programs was introduced in Algol.
January 1, 1961 – Computer Programming Fundamentals, McGraw-Hill Inc; 1st edition, by Herbert Leeds and Jerry Weinberg, describes software testing.
1962 The first computer science departments established at Purdue and Stanford.
1962 Douglas Engelbart invented the computer mouse. (The keyboard was first invented and patented in 1868 by Christopher Latham Sholes.)
1963 Adam, Carl – wrote his dissertation with topic: Petri Nets.
1964 The American National Standards Institute (ANSI) officially adopted the ASCII (American Standard Code for Information Interchange) character code.
1966 Book:"Computer Programming Fundamentals" by Herbert D. Leeds and Gerald M. Weinberg
1967 Herm Schiller creates the first software code coverage monitor, called Memmap, at IBM Poughkeepsie. It supports 360/370 Assembler language. [Richard Bender]
1968 The first introduction of the term software engineering and structured programming.
1969 Edgar F. Codd introduced the concept of the relational system.
1969 The first automatic teller machine (ATM) was put in service.
1969 Richard Bender and Earl Pottorff created the first static and dynamic analysis tools using data flow analysis for improved test coverage. It increases code based coverage by 25% over the statement and branch coverage criteria in Memmap. (search for article called "How Do You Know When You Are Done Testing" that addresses this.) Note: this work was given the first Outstanding Invention Award ever handed out by IBM for breakthroughs in software engineering.


1970-1980
1971 -The IEEE Computer Society was founded.
1971 Milt Bryce first applied the term "methodology" to systems development.
1972 Alan Kay developed Smalltalk the first object-oriented programming language.
1972 Dennis Ritchie and Brian Kernighan developed C language.
1973 The first computer user groups was founded in Boston.(disbanded in 1996)
1973 (or 1970?) William R. Elmendorf introduced cause–effect graphs in functional testing. Elmendorf is also the person who first created equivalence class testing with boundary analysis.
1973 Gruenberger, F., introduced the triangle testing problem in his article: Program testing, the historical perspective.
1974, 5 April first software related standard: "MIL-S-52779 Software Quality Program Requirements" was issued.
1974 The first international computer chess tournament is won by the Russian KAISSA program.
1975 November Hamlet, R.G., Compiler-based Systematic Testing.
1976 Software reliability : principles and practices by Glenford J. Myers
"The goal of the testers is to make the program fail. If his test case makes the program or system fail, then he is successful; if his test case does not make the program fail, then he is unsuccessful."
"A good test case is a test case that has a high probability of detecting an undiscovered error, not a test case that show that the program works correctly." - Glenford Myers.

1976 Fagan, Michael E, published his article " Design and Code Inspections to reduce errors in program development." IBM System Journal Vol. 15, No.3, 1976 pp.182-211. (developed Code Inspection process)
1976 December - The cyclomatic complexity metric for measure complexity of a routine, originally described by Tom McCabe.
1977 Atari 2600 a video game console was released.
1977 Requirements Based Testing was introduced.
1978 CompuServe pioneered the wide use of e-mail.
1978 Hayes developed the Smartmodem for the first personal computers, it took the market in 1981.
1979 Philip Crosby,published his book "Quality is free" in McGraw-Hill Publishing.
1979. - The separation of debugging from testing was initially introduced by Glenford J. Myers In his book "The Art of Software Testing" he provided definition of software testing widely used now and the first clear explanation of equivalence classes, boundaries and other testing principles .


1980-1990
1980 Epson MX-80 became the best-selling dot-matrix printer.
1982 William Edwards Deming offers a theory of management based on his famous 14 (quality principles) Points for Management.
1983 Boris Beizer, "Software Testing Techniques" 1st edition.
1983 Lotus 1-2-3 spreadsheet for DOS was released.
1984 Tetris game was created.
1985, July - Commodore finally released the Amiga 1000 personal computer at a retail price of $1,295.
1985 - The Excel spreadsheet application was launched by Microsoft Corporation. Excel is the best friend of a tester.
1986 Apple Macintosh Plus with 1MB of RAM was introduced.
1986 - The particulars of the Six Sigma methodology were first formulated by Bill Smith at Motorola.
1987 ISO 9000 quality standards were released.
1987 - The Zachman Framework for descriptive representations of an enterprise IT environment was introduced.
1988 Eudora was the first non-mainframe e-mail client.
1988 B. W. Boehm introduced a spiral model for software development.
1988 - ISO/IEC 12207, "Software Life Cycle Processes," was proposed; it was published in August 1995.
1988 Dave Gelperin and William C. Hetzel classified the phases and goals in software testing.
1989 WordPerfect Corporation released the WordPerfect 5.1 for DOS.


1990-2000
In the early 1990s, continuous quality improvement (CQI) methods were implemented.
From the early 1990s, bug tracking and version control tools became popular.
1991, June - Publication of ISO 9000-3: "Quality management and quality assurance standards -
Part 3: Guidelines for the application of ISO 9001 to the development, supply and maintenance of software."
1991 - Linus Torvalds wrote his own Unix-like kernel, the basis of the now-popular Linux operating system.
1992, October, IBM introduced the first ThinkPad model 700.
1993 - Software Quality Automation, Inc., Woburn, Mass., unveiled SQA TeamTest, a GUI testing tool implemented on a team/workgroup model.
Rational later purchased SQA TeamTest (v6.1), and IBM acquired Rational Corporation in December 2002.
1993 HP LaserJet 4L was introduced.
1993 - Daniel J. Mosley introduced the decision table method.
1994, October 13 - Marc Andreessen launched a Web browser called Mosaic Netscape 0.9.
1994, 5 December - DoD issued MIL-STD-498, "Software Development and Documentation."
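A decision table maps every combination of conditions to an expected action, and testing from the table means one test case per rule. A minimal sketch with a hypothetical discount policy (all names are mine):

```python
# Decision table: each rule (column) maps a combination of
# conditions to an action. Hypothetical discount policy.
RULES = {
    # (is_member, order_over_100): discount percent
    (True,  True):  15,
    (True,  False): 10,
    (False, True):   5,
    (False, False):  0,
}

def discount(is_member, order_over_100):
    return RULES[(is_member, order_over_100)]

# One test case per rule gives full coverage of the table.
for conditions, expected in RULES.items():
    assert discount(*conditions) == expected
```

The table makes it obvious when a condition combination has no specified outcome, which is exactly the kind of gap this method is meant to expose.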
1995, July - Microsoft released the Windows 95 operating system.
1997 UML (Unified Modeling Language) was introduced by James Rumbaugh,Grady Booch and Ivar Jacobson.
1998 Rational unified process (RUP) was introduced.
1998 - K. Zambelich published the article "Totally Data-Driven Automated Testing."
1999, May - "How (and how not) to implement Data Driven Automation using Rational Robot," by Carl Nagle, SAS Institute, Inc.
See details at http://safsdev.sourceforge.net/DataDrivenCompilation.html
1999 - Robert Poston developed a specification-based test generation tool.
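The core idea of data-driven testing is to write the test logic once and drive it with rows of input/expected data. A minimal sketch (the function under test is hypothetical):

```python
# Data-driven testing: one piece of test logic, many data rows.
# Adding a test case means adding a row, not writing new code.
def add(a, b):
    return a + b

test_data = [
    # (input a, input b, expected result)
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
]

for a, b, expected in test_data:
    assert add(a, b) == expected, f"add({a}, {b}) != {expected}"
```

In practice the rows usually live outside the script, e.g. in a spreadsheet or CSV file, so testers can extend coverage without touching code.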


2000-2010
2000, March - The first keyword-driven automation was implemented using Rational Robot.
See details at http://safsdev.sourceforge.net/DataDrivenTestAutomationFrameworks.htm#KeywordDrivenAutomationFrameworkModel
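Keyword-driven frameworks go one step beyond data-driven: the test steps themselves become data, as rows of (keyword, arguments) dispatched to registered action functions. A minimal sketch, with all names chosen for illustration:

```python
# Keyword-driven testing: the "test script" is pure data, and each
# keyword is dispatched to a registered action function.
actions = {}
state = {}

def keyword(name):
    """Register a function as the implementation of a keyword."""
    def register(fn):
        actions[name] = fn
        return fn
    return register

@keyword("set")
def set_value(key, value):
    state[key] = value

@keyword("check")
def check_value(key, expected):
    assert state[key] == expected, f"{key}: {state[key]!r} != {expected!r}"

# Steps could come from a spreadsheet; non-programmers can edit them.
steps = [
    ("set", "username", "lokre"),
    ("check", "username", "lokre"),
]

for kw, *args in steps:
    actions[kw](*args)
```

This separation of keyword vocabulary from step data is what let tools like Rational Robot hand test authoring to people who never touch the automation code.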
Rational Unified Process (RUP) methodology: develop iteratively, with risk as the primary iteration driver.

End of software testing history.

Check out this link:

http://www.testingthefuture.net/2010/10/the-history-of-software-testing/