Monday, September 19, 2011
SOFTWARE TESTING
TESTING:
Identifying defects in the product, i.e. the variance between the expected and the actual product.
Eg: Expected --- > what the customer/client expects from the company.
Actual --- > what the company designs/develops/manufactures.
SOFTWARE:
Software is an application (app) or a set of programs. It might be a:
1> Web application (websites)
2> Desktop application (e.g. Microsoft's operating system and its applications, such as MS Office)
3> Mobile application
SOFTWARE TESTING:
The process of operating a system or application under controlled conditions.
(Or)
The process of executing a program with the intent of finding errors.
There are 2 ways of Testing:
1. MANUAL TESTING
2. AUTOMATION TESTING.
WHY AREN'T DEVELOPERS INVOLVED IN TESTING?
A developer thinks in a positive way while developing, so he doesn't take the chance of thinking in a negative way, i.e. about defects.
WHY SOFTWARE TESTING?
To find out the bugs in the newly developed software.
To deliver a defect-free product that meets the customer/client/user expectations and needs.
To satisfy the customer needs.
TESTERS ROLE:
TESTER --- > (tests the) APPLICATION --- > (finds the) DEFECTS --- > (sends them to the) DEVELOPER
IMPORTANT NOTE:
ERROR --- > Occurs in a program (while developing or writing code).
DEFECT --- > Found by the tester after implementation and sent to the developer.
BUG --- > A defect accepted by the developer.
HISTORY OF S/W TESTING
Usage of the term "bug" to describe a defect has been part of engineering jargon for many decades, perhaps even from the times of Thomas Edison. In software testing it usually refers to September 9th, 1947, when an actual bug (a moth) was found to be the cause of an error in the Harvard Mark II, an electromechanical computer. The bug was carefully removed and taped into the log book (see the picture: The First "Computer Bug"). This story is usually connected with the name of Grace Murray Hopper, who described the event in the computer log.
Let us start the history of testing from the 50s, when the first modern programming language was designed: FORTRAN, the "FORmula TRANslator", invented by John W. Backus, and the first FORTRAN compiler was delivered in April 1957.
The history of computers itself starts with the Analytical Engine, designed by Charles Babbage, who in 1842 enlisted the help of Lady Ada Lovelace as a translator; Ada called herself 'an Analyst'.
Charles Babbage (1791 – 1871) was an English mathematician, philosopher, inventor and mechanical engineer who originated the concept of a programmable computer. [Wiki]
1950-1960
1953 Dr. W. Edwards Deming published Management's Responsibility for the Use of Statistical Techniques in Industry, in which he outlined 14 quality principles.
1954 - The first truly mass-produced computer IBM 650 was marketed.
1955 - Grace Hopper created Flow-matic, the first high-level language.
1955 - The first computer user group, called SHARE, was formed.
Until 1957 this was the debugging-oriented period, when there was no clear difference between testing and debugging.
1960-1970
1960 - Digital Equipment Corporation (DEC) marketed the PDP-1, considered the first commercial minicomputer.
1960 - Block structure for better organization of code in the programs was introduced in Algol.
1961, January 1 - Computer Programming Fundamentals by Herbert Leeds and Jerry Weinberg (McGraw-Hill, 1st edition) describes software testing.
1962 The first computer science departments established at Purdue and Stanford.
1962 Douglas Engelbart invented the computer mouse. (The keyboard was first invented and patented in 1868 by Christopher Latham Sholes.)
1962 Carl Adam Petri wrote his dissertation introducing Petri nets.
1964 The American National Standards Institute (ANSI) officially adopted the ASCII (American Standard Code for Information Interchange) character code.
1966 Book:"Computer Programming Fundamentals" by Herbert D. Leeds and Gerald M. Weinberg
1967 Herm Schiller created the first software code coverage monitor, called Memmap, at IBM Poughkeepsie. It supported 360/370 Assembler language. [Richard Bender]
1968 The first introduction of the term software engineering and structured programming.
1969 Edgar F. Codd introduced the concept of the relational system.
1969 The first automatic teller machine (ATM) was put in service.
1969 Richard Bender and Earl Pottorff created the first static and dynamic analysis tools using data flow analysis for improved test coverage. This increased code-based coverage by 25% over the statement and branch coverage criteria in Memmap (search for the article "How Do You Know When You Are Done Testing", which addresses this). Note: this work was given the first Outstanding Invention Award ever handed out by IBM for breakthroughs in software engineering.
1970-1980
1971 -The IEEE Computer Society was founded.
1971 Milt Bryce first applied the term "methodology" to systems development.
1972 Alan Kay developed Smalltalk, one of the first object-oriented programming languages.
1972 Dennis Ritchie developed the C language at Bell Labs; Brian Kernighan later co-authored its defining book, The C Programming Language.
1973 One of the first computer user groups was founded in Boston (disbanded in 1996).
1973 (or 1970?) William R. Elmendorf introduced cause-effect graphs in functional testing. Elmendorf is also the person who first created equivalence class testing with boundary analysis.
1973 F. Gruenberger introduced the triangle testing problem in his article "Program testing, the historical perspective".
1974, April 5 - The first software-related standard, MIL-S-52779 "Software Quality Program Requirements", was issued.
1974 The first international computer chess tournament is won by the Russian KAISSA program.
1975, November - R.G. Hamlet, Compiler-based Systematic Testing.
1976 Software Reliability: Principles and Practices by Glenford J. Myers:
"The goal of the tester is to make the program fail. If his test case makes the program or system fail, then he is successful; if his test case does not make the program fail, then he is unsuccessful."
"A good test case is a test case that has a high probability of detecting an undiscovered error, not a test case that shows that the program works correctly." - Glenford Myers
1976 Michael E. Fagan published his article "Design and Code Inspections to Reduce Errors in Program Development", IBM Systems Journal, Vol. 15, No. 3, 1976, pp. 182-211 (he developed the code inspection process).
1976, December - The cyclomatic complexity metric for measuring the complexity of a routine was originally described by Tom McCabe.
1977 Atari 2600 a video game console was released.
1977 Requirements Based Testing was introduced.
1978 CompuServe pioneered the wide use of e-mail.
1978 Hayes developed the Smartmodem for the first personal computers, it took the market in 1981.
1979 Philip Crosby,published his book "Quality is free" in McGraw-Hill Publishing.
1979 - The separation of debugging from testing was introduced by Glenford J. Myers. In his book "The Art of Software Testing" he provided the definition of software testing that is widely used now, as well as the first clear explanation of equivalence classes, boundaries and other testing principles.
1980-1990
1980 Epson MX-80 became the best-selling dot-matrix printer.
1982 William Edwards Deming offered a theory of management based on his famous 14 Points for Management (quality principles).
1983 Boris Beizer, "Software Testing Techniques" 1st edition.
1983 Lotus 1-2-3 spreadsheet for DOS was released.
1984 Tetris game was created.
1985, July Commodore finally released the Amiga 1000 personal computer at a retail price of $1295
1985 The Excel spreadsheet application was launched by Microsoft Corporation. Excel is the best friend of a tester.
1986 Apple Macintosh Plus with 1MB of RAM was introduced.
1986 - The particulars of the Six Sigma methodology were first formulated by Bill Smith at Motorola
1987 ISO 9000 quality standards were released.
1987 The Zachman Framework for descriptive representations of an enterprise IT environment was introduced.
1988 Eudora was the first non-mainframe e-mail client.
1988 B. W. Boehm introduced a spiral model for software development.
1988 ISO/IEC 12207 "Software Life Cycle Processes" was proposed; it was published in August 1995.
1988 Dave Gelperin and William C. Hetzel classified the phases and goals in software testing.
1989 WordPerfect Corporation released the WordPerfect 5.1 for DOS.
1990-2000
In the early 1990s, continuous quality improvement (CQI) methods were implemented.
From the early 1990s, bug tracking and version control tools became popular.
1991, June - publication of ISO 9000-3: "Quality management and quality assurance standards"
Part 3: Guidelines for the application of ISO 9001 to the development, supply and maintenance of software.
1991 - Linus Torvalds wrote his own Unix-like kernel, the basis of the now-popular Linux operating system.
1992, October, IBM introduced the first ThinkPad model 700.
1993 Software Quality Automation, Inc., Woburn, Mass., unveiled SQA TeamTest, a GUI testing tool implemented on a team/workgroup model.
Rational later purchased SQA TeamTest (v6.1).
IBM acquired Rational Software in December 2002.
1993 HP LaserJet 4L was introduced.
1993 Daniel J. Mosley introduced the decision table method.
1994, October 13 - Marc Andreessen launched the Web browser Mosaic Netscape 0.9.
1994, December 5 - DoD issued MIL-STD-498, Software Development and Documentation.
1995, July - Microsoft released the Windows 95 operating system.
1997 UML (Unified Modeling Language) was introduced by James Rumbaugh,Grady Booch and Ivar Jacobson.
1998 Rational unified process (RUP) was introduced.
1998 K. Zambelich published the article "Totally Data-Driven Automated Testing".
1999, May - "How (and How Not) to Implement Data Driven Automation Using Rational Robot", by Carl Nagle, SAS Institute, Inc.
See details at http://safsdev.sourceforge.net/DataDrivenCompilation.html
1999 - Robert Poston developed a specification-based test generation tool.
2000-2010
2000, March - The first keyword-driven automation framework was implemented using Rational Robot.
See details at http://safsdev.sourceforge.net/DataDrivenTestAutomationFrameworks.htm#KeywordDrivenAutomationFrameworkModel
Rational Unified Process (RUP) Methodology (Develop iteratively, with risk as the primary iteration driver)
End of software testing history.
Check out this link:
http://www.testingthefuture.net/2010/10/the-history-of-software-testing/
What are Software Testing Types ?
Black box testing : You don't need to know the internal design in detail or have a good knowledge about the code for this test. It's mainly based on functionality and specifications, requirements.
White box testing : This test is based on detailed knowledge of the internal design and code. Tests are performed on specific code statements, branches and paths.
Unit testing : The most "micro" scale of testing, used to test specific functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It is not always easily done unless the application has a well-designed architecture with tight code; it may require developing test driver modules or test harnesses.
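As an illustration only, a minimal unit test might look like the following sketch using Python's built-in unittest framework; the apply_discount() function is a hypothetical example, not part of any particular product:

```python
# A minimal sketch of a unit test using Python's built-in unittest module.
# apply_discount() is a hypothetical function invented for this example.
import unittest

def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises_error(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```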
Incremental integration testing : Continuous testing of an application as new functionality is added. Requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. Done by programmers or by testers.
Integration testing : Testing of combined parts of an application to determine if they function together correctly. It can be any type of application which has several independent sub applications, modules.
Functional testing : Black box type testing to test the functional requirements of an application. Typically done by software testers but software programmers should also check if their code works before releasing it.
System testing : Black box type testing that is based on overall requirements specifications. Covers all combined parts of a system.
End to End testing : It's similar to system testing. Involves testing of a complete application environment similar to real world use. May require interacting with a database, using network communications, or interacting with other hardware, applications, or systems.
Sanity testing or smoke testing : An initial testing effort to determine if a new software version is performing well enough to start a major testing effort. For example, if the new software is crashing frequently or corrupting databases, then it is not a good idea to start further testing until these problems are solved.
Regression testing : Re-testing after software is updated to fix some problems. The challenge might be to determine what needs to be tested, and all the interactions of the functions, especially near the end of the software cycle. Automated testing can be useful for this type of testing.
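For example, regression suites often pin each fixed defect with a small test that would fail if the bug ever reappeared. A rough sketch in Python (the bug number and the parse_quantity() helper are hypothetical):

```python
# A sketch of regression tests guarding a previously fixed defect.
# Bug #123 and parse_quantity() are hypothetical examples.
import unittest

def parse_quantity(text):
    """Parse a quantity field; after the fix for bug #123, blank input returns 0."""
    text = text.strip()
    return int(text) if text else 0

class RegressionTests(unittest.TestCase):
    def test_bug_123_blank_quantity_no_longer_crashes(self):
        # Before the fix, blank input raised ValueError and crashed the order form.
        self.assertEqual(parse_quantity("   "), 0)

    def test_existing_behaviour_still_works(self):
        # Regression testing also re-checks behaviour that already worked.
        self.assertEqual(parse_quantity("42"), 42)

if __name__ == "__main__":
    unittest.main()
```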
Acceptance testing : This is the final testing, done based on the agreements with the customer.
Load / stress / performance testing : Testing an application under heavy loads, such as simulating very heavy traffic in a voice or data network or on a web site, to determine at what point the system starts causing problems or fails.
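To give a rough idea, a very simplified load test can be hand-rolled in plain Python as in the sketch below; the target URL and the number of simulated users are placeholders, and real projects usually rely on dedicated load-testing tools instead:

```python
# A very small load-test sketch: fire N concurrent requests at a URL and
# report how many succeeded and the slowest response time.
# TARGET_URL and CONCURRENT_USERS are placeholder values for illustration.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"   # hypothetical endpoint
CONCURRENT_USERS = 50

def one_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as response:
            ok = response.status == 200
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(one_request, range(CONCURRENT_USERS)))
    successes = sum(1 for ok, _ in results if ok)
    slowest = max(elapsed for _, elapsed in results)
    print(f"{successes}/{CONCURRENT_USERS} requests succeeded, "
          f"slowest took {slowest:.2f}s")
```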
Usability testing : Testing to determine how user friendly the application is. It depends on the end user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install / Uninstall testing : Testing of full, partial, or upgrade install / uninstall processes.
Recovery / failover testing : Testing to determine how well a system recovers from crashes, failures, or other major problems.
Security testing : Testing to determine how well the system protects itself against unauthorized internal or external access and intentional damage. May require sophisticated testing techniques.
Compatibility testing : Testing how well software performs in different environments: particular hardware, software, operating systems, network environments, etc. An example is testing a web site in different browsers and browser versions.
Exploratory testing : Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing : Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
Context driven testing : Testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life critical medical equipment software would be completely different than that for a low cost computer game.
Comparison testing : Comparing software weaknesses and strengths to competing products.
Alpha testing : Testing of an application when development is nearing completion. Minor design changes may still be made as a result of such testing. Typically done by end users or others, not by programmers or testers.
Beta testing : Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end users or others, not by programmers or testers.
Mutation testing : A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (defects) and retesting with the original test data/cases to determine if the defects are detected. Proper implementation requires large computational resources.
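To make the idea concrete, here is a tiny hand-made sketch of a single mutant and the test data that detects it; the is_adult() function, the mutant and the values are all hypothetical:

```python
# A hand-made illustration of the idea behind mutation testing:
# one deliberately injected defect (the "mutant") and a test data set
# strong enough to detect it. All names and values are hypothetical.
def is_adult(age):
    return age >= 18          # original code

def is_adult_mutant(age):
    return age > 18           # mutated code (injected defect: >= changed to >)

test_cases = [(17, False), (18, True), (30, True)]

for implementation in (is_adult, is_adult_mutant):
    passed = all(implementation(age) == expected for age, expected in test_cases)
    print(implementation.__name__, "passes" if passed else "fails (mutant killed)")

# Because the test data includes the boundary value 18, the mutant fails,
# which indicates the test set is good enough to catch this kind of defect.
```

Real mutation testing tools generate many such mutants automatically, which is why proper implementation requires large computational resources.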
What is quality assurance? Definitions and types of S/W Testing
What is Quality Assurance ?
Quality Assurance makes sure the project will be completed based on the previously agreed specifications, standards and functionality required without defects and possible problems. It monitors and tries to improve the development process from the beginning of the project to ensure this. It is oriented to "prevention".
When should QA testing start in a project ? Why?
QA is involved in the project from the beginning. This helps the teams communicate and understand the problems and concerns, also gives time to set up the testing environment and configuration. On the other hand, actual testing starts after the test plans are written, reviewed and approved based on the design documentation.
What is Software Testing ?
Software testing is oriented to "detection". It is examining a system or an application under controlled conditions, intentionally trying to make things go wrong in order to check whether things happen when they shouldn't or don't happen when they should.
What is Software Quality ?
Quality software is reasonably bug free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
What is Software Verification and Validation ?
Verification is a preventive mechanism to detect possible failures before testing begins. It involves reviews, meetings, evaluating documents, plans, code, inspections, specifications, etc. Validation occurs after verification and is the actual testing to find defects against the functionality or the specifications.
What is Test Plan ?
A test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort.
What is Test Case ?
A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
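As an illustration only, those particulars could be captured in a structured form such as the sketch below; the login scenario, identifiers and field values are made up:

```python
# A sketch of a test case's particulars captured as structured data.
# The scenario, IDs and values are hypothetical examples.
test_case = {
    "id": "TC-LOGIN-001",
    "name": "Login with valid credentials",
    "objective": "Verify that a registered user can log in",
    "conditions_setup": "User 'demo_user' exists and is active",
    "input_data": {"username": "demo_user", "password": "correct-password"},
    "steps": [
        "Open the login page",
        "Enter the username and password",
        "Click the 'Sign in' button",
    ],
    "expected_result": "The user is redirected to the dashboard page",
}

for number, step in enumerate(test_case["steps"], start=1):
    print(f"Step {number}: {step}")
print("Expected:", test_case["expected_result"])
```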
What is Good Software Coding ?
Good code is code that works according to the requirements and is bug free, readable, expandable and easily maintainable.
What is a Good Design ?
In a good design, the overall structure is clear, understandable, easily modifiable, and maintainable. It works correctly when implemented, and its functionality can be traced back to customer and end-user requirements.
Who is a Good Test Engineer ?
A good test engineer has the ability to think the unthinkable, a test-to-break attitude, a strong desire for quality, and attention to detail.
What is Walkthrough ?
A walkthrough is a quick and informal meeting held for evaluation purposes.
What is Software Life Cycle ?
The Software Life Cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.
What is Software Inspection ?
The purpose of an inspection is to find defects and problems, mostly in documents such as test plans, specifications, test cases, code, etc. It helps to find problems and report them, but not to fix them. It is one of the most cost-effective methods of ensuring software quality. Many people can join an inspection, but normally one moderator, one reader and one note taker are mandatory.
What are the benefits of Automated Testing ?
Automated testing is very valuable for long-term and ongoing projects. You can automate some or all of the tests that need to be run repeatedly or that are difficult to test manually. It saves time and effort and also makes testing possible outside working hours, for example at night. Automated tests can be used by different people and reused many times in the future. In this way you also standardize the testing process and can depend on the results.
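For example, a small data-driven check like the sketch below (the add_tax() function and the 18% rate are hypothetical) can be scheduled to run on every build or every night, which is where the repeatability and time savings come from:

```python
# A sketch of a small automated, data-driven check that can be re-run
# unattended (e.g. by a nightly scheduler or a CI server).
# add_tax() and the 18% rate are hypothetical examples.
import unittest

def add_tax(amount, rate=0.18):
    return round(amount * (1 + rate), 2)

class NightlyChecks(unittest.TestCase):
    def test_add_tax_for_typical_amounts(self):
        cases = [(100.0, 118.0), (0.0, 0.0), (59.99, 70.79)]
        for amount, expected in cases:
            with self.subTest(amount=amount):
                self.assertEqual(add_tax(amount), expected)

if __name__ == "__main__":
    # Running this file from a cron job or a CI pipeline repeats exactly
    # the same checks every time, which is the benefit described above.
    unittest.main()
```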
What do you imagine are the main problems of working in a geographically distributed team ?
The main problem is communication. Getting to know the team members and sharing as much information as possible whenever needed is very valuable for solving problems and concerns. In addition, increasing direct communication as much as possible and setting up regular meetings help to reduce miscommunication problems.
What are the common problems in Software Development Process ?
Poor requirements, an unrealistic schedule, inadequate testing, miscommunication, and additional requirement changes after development begins.
WHAT IS SOFTWARE ?
Computer software, or just software, is a collection of computer programs and related data that provide the instructions telling a computer what to do and how to do it. In other words, software is a conceptual entity: a set of computer programs, procedures, algorithms and associated documentation concerned with the operation of a data processing system. We can also say that software refers to one or more computer programs and data held in the storage of the computer for some purpose. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the older term hardware (meaning physical devices). In contrast to hardware, software is intangible, meaning it "cannot be touched".[1] Software is also sometimes used in a more narrow sense, meaning application software only. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records.[2]
Examples of computer software include:
Application software includes end-user applications of computers such as word processors or video games, and ERP software for groups of users.
Middleware controls and co-ordinates distributed systems.
Programming languages define the syntax and semantics of computer programs. For example, many mature banking applications were written in the COBOL language, originally invented in 1959. Newer applications are often written in more modern programming languages.
System software includes operating systems, which govern computing resources. Today, large applications running on remote machines, such as websites, are sometimes considered to be system software, because the end-user interface is generally through a graphical user interface such as a web browser.
Teachware is any software or other material dedicated to educational purposes, in software engineering and in general education[3].
Testware is any software for testing hardware or a software package.
Firmware is low-level software often stored on electrically programmable memory devices. Firmware is given its name because it is treated like hardware and run ("executed") by other software programs. Firmware often cannot be changed by anyone other than the developers' enterprise.
Shrinkware is the older name given to consumer-purchased software, because it was often sold in retail stores in a shrink-wrapped box.
Device drivers control parts of computers such as disk drives, printers, CD drives, or computer monitors.
Programming tools help conduct computing tasks in any category listed above. For programmers, these could be tools for debugging or reverse engineering older legacy systems in order to check source code compatibility.