Guide to Software Testing Types

Test automation tools enabled testers to execute regression tests efficiently as software changed. Tools like Selenium, QTP, and LoadRunner became popular. The automation effort also led to the new specialized role of Automation Tester.

In addition, the responsibilities for different aspects of testing, like performance, security, and usability, were grouped into specialized roles such as Performance Test Engineer and Security Tester. This division of labor improved focus on particular test types.

Process models like the V-model also became widely adopted, enforcing discipline in the relationships between test levels. Specifications were connected with tests, and defects found during testing were systematically tracked and managed.

The 2000s: Agile, Shift Left and DevOps Trends

In 2001, the Agile software development methodology was introduced as a lighter-weight alternative to traditional waterfall development. Agile changed many assumptions for testers.

With iterative development and rapid feedback loops, testing activities were woven tightly into each Agile sprint rather than forming a distinct downstream phase. Testing became about adapting to constant change rather than executing predefined test cases end to end.

Two other trends that emerged are:

  1. Shift-left testing: Testing is shifted earlier into development cycles, with more testing done before code is written via practices like test-driven development (TDD).
  2. DevOps culture: Breaks down barriers between dev and ops teams; testing is distributed across the team versus centralized test groups.

These trends challenge conventional testing wisdom but add quality via fail-fast feedback and shared team ownership.

The 2010s and Beyond: Automation & AI Advances, Quality Engineering 

In the 2010s, test automation advanced significantly with machine learning applied to software testing tasks such as test case generation and intelligent test execution. AI-based tools can adapt tests to application changes dynamically.

The scope of Quality Engineering also expanded from simply detecting defects to preventing them proactively via techniques like:

- Building quality into the SDLC earlier via standards, reviews, etc.

- Test analytics using historical artifacts

- Shifting testing to the left

The Future: Predictive Models, Virtualization, and Crowdsourced Testing

Some directions that software testing seems to be heading in include:

Predictive test models: Using machine learning on test data to forecast which test areas are likely to have more defects. This helps focus testing effort on the highest-risk areas.

Virtualized test environments: Simulating large, complex test beds is getting easier via frameworks supporting virtualization of SOA components, cloud infrastructure, etc. This reduces physical test environment constraints.

Crowdsourced testing: Leveraging a globally distributed testing workforce on demand to achieve test coverage across many platforms/devices. Provides flexibility.
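The predictive-model idea above can be sketched with a toy example. This is purely illustrative, not a production model: the module names, defect counts, and weights are made up, and a real predictive system would train its weights on historical data.

```python
# Illustrative sketch: rank modules by defect risk using two historical
# signals -- past defect counts and recent code churn. All values invented.

def risk_score(defects, churn, w_defects=0.7, w_churn=0.3):
    """Weighted score; a real model would learn these weights from history."""
    return w_defects * defects + w_churn * churn

history = {
    "checkout": {"defects": 14, "churn": 52},
    "search":   {"defects": 3,  "churn": 11},
    "profile":  {"defects": 6,  "churn": 40},
}

# Sort modules from highest to lowest risk to prioritize testing effort.
ranked = sorted(history, key=lambda m: -risk_score(**history[m]))
print(ranked)  # → ['checkout', 'profile', 'search']
```

Even this crude heuristic captures the core idea: direct scarce testing time at the areas where defects have historically clustered.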

The testing field will surely see innovative new tools and methodologies emerge as software continues marching forward. I hope this high-level testing history tour gives you a good grounding and context for appreciating testing trends old and new!

The Basics of Software Testing

At the most basic level, software testing refers to the practice of finding defects in software applications. Typically, a separate quality assurance (QA) team is responsible for formally testing the software after developers have written the code. However, roles often overlap, especially in Agile environments.

Testing involves methodically exercising the application under test to verify correct behavior and conformance to customer requirements. Tests derive from specification documents such as requirements and design documents. Types of testing range from checking user interfaces, to underlying APIs, to load testing infrastructure capacity, but the fundamental concepts remain the same.

Software testing broadly includes both functional testing and non-functional testing activities. Within these, many more specialized testing types like security, performance, compatibility testing etc. exist. Let’s look at these in detail.

Functional Testing

Functional testing aims to check that the core application features and functionality work as expected. The system’s behavior is tested against defined requirements and specifications. You determine WHAT the system does, not HOW. Testing is typically black-box, since the workings behind the UI are not relevant.

Some examples of functional testing include:

- Unit testing – Testing individual software components, such as methods and classes, in isolation. Performed by developers.

- Integration testing – Testing interactions between module interfaces.

- System testing – Testing the full integrated system.

- Sanity testing – Quick tests to ensure bugs weren’t introduced.

- Acceptance testing – Validating software works for business use cases before go-live. Includes UAT.

Keeping the system architecture and technology stack separate from functionality concerns helps enable robust functional testing.

Non-functional Testing 

In contrast to checking the WHAT, non-functional testing examines HOW WELL a system works. Non-functional requirements relate to emergent system properties needed for a good user experience, not explicit behaviors. Performance, security, and reliability all fall under non-functional testing.

Some examples of non-functional testing include:

Performance testing – Testing speed and scalability under expected load. Tools like JMeter simulate virtual users to create system load.

Stress testing – Testing behavior at load volumes higher than expected to find breaking point. 

Usability testing – Testing how easy and intuitive UI is for users via techniques like design walkthroughs, prototypes etc. 

Non-functional testing is especially critical for mission-critical systems like banking apps where security and reliability are paramount. Specialized testing expertise is often needed.

Other Testing Considerations 

In addition to functional vs. non-functional testing, some other key testing concepts include:

  • Automation testing: Executing tests via scripts rather than manual testing. Vital today for regression testing. Selenium is a well-known open-source tool.
  • Test-driven development (TDD): Developers first write failing unit test cases then code to make them pass. Helps design better code.
  • Compatibility testing: Testing software across multiple target platforms like browsers, devices and OS versions.
  • Globalization testing: Testing that multi-locale software renders properly across geographic regions. Includes checking for cultural sensitivities.

Clearly, testing has many dimensions. The specific types of testing applied depend greatly on the software under development, the resources available, the industry domain, and other factors. But the core goal remains the same: ensuring high-quality, robust software that meets user needs!

Unit Testing

Unit tests represent some of the most basic tests written in software development. The “unit” here refers to the smallest component in the system that developers work on – typically a snippet of code like a method or class.

Unit tests are used to test this isolated unit functionality independently during coding phases. Developers write and execute unit tests as part of test-driven development to ensure code coverage.
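As a minimal sketch of what a unit test looks like, the example below tests a small hypothetical `slugify` function with Python's built-in `unittest` framework (the function and its behavior are invented for illustration, not taken from any real project):

```python
import unittest

def slugify(title):
    """Convert a title to a URL slug -- the unit under test."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Guide To Testing"), "guide-to-testing")

    def test_collapses_extra_spaces(self):
        # split() with no argument collapses runs of whitespace
        self.assertEqual(slugify("  hello   world "), "hello-world")

if __name__ == "__main__":
    unittest.main(argv=["slugify_test"], exit=False, verbosity=2)
```

Each test exercises the unit through its public interface only, with no database, network, or UI involved, which is what keeps unit tests fast and precise about where a failure lies.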

Objectives of Unit Testing

Some key objectives of unit testing include:

  1. Finds bugs early at the code level, before they propagate downstream. They are cheapest to fix here.
  2. Enables safe restructuring and refactoring: unit tests fail upon unwanted changes.
  3. Forces good modular design, since every component must be testable independently.
  4. Simplifies later integration by fully testing components first.
  5. Facilitates Agile development via test-driven design and rapid feedback cycles.

Unit testing allows software quality to be built from the ground up rather than an afterthought.

Approaches for Unit Testing

The two main approaches for writing unit test cases are:

  1. White box testing – Testing code internals with knowledge of internal logic. All branches are exercised.
  2. Black box testing – Testing from an external interface perspective without internal code knowledge. Just test input/outputs.

Black box testing is more reusable across application versions. White box testing helps maximize coverage.
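The two approaches can be contrasted on one small function. The `discount` function below is hypothetical; the point is how the test cases are chosen, not the business logic:

```python
def discount(price, loyal):
    """Hypothetical unit: loyal customers get 10% off, others pay full price."""
    if loyal:
        return round(price * 0.9, 2)
    return price

# Black-box style: assert observable input/output pairs only,
# without any reference to how the function is written.
assert discount(100.0, loyal=True) == 90.0
assert discount(100.0, loyal=False) == 100.0

# White-box style: the same two inputs, but chosen deliberately with
# knowledge of the code so that BOTH branches of the if are exercised,
# giving full branch coverage of this unit.
```

The tests may look identical; what differs is the selection criterion. Black-box cases come from the specification, while white-box cases come from the code's branch structure.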

Test Doubles like mocks, stubs and fakes are also commonly used to isolate code under test from other dependencies for true unit testing. For example, you can mock a database connection and test the module’s logic. This helps focus testing on one component.
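As a concrete sketch of that database-mocking idea, Python's standard `unittest.mock` can stand in for the real dependency. The `UserService` class and its `fetch_user` interface below are hypothetical examples:

```python
from unittest.mock import Mock

class UserService:
    """Module under test: depends on a database connection we will mock."""
    def __init__(self, db):
        self.db = db

    def display_name(self, user_id):
        row = self.db.fetch_user(user_id)
        return f"{row['first']} {row['last']}".strip()

# Replace the real database with a test double that returns canned data.
fake_db = Mock()
fake_db.fetch_user.return_value = {"first": "Ada", "last": "Lovelace"}

service = UserService(fake_db)
assert service.display_name(42) == "Ada Lovelace"

# The mock also records interactions, so the contract can be verified.
fake_db.fetch_user.assert_called_once_with(42)
```

No database ever runs, yet the module's formatting logic and its interaction with the dependency are both fully tested.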

Overall, unit testing is critical at the basic building block level before integration testing can proceed on a firm foundation. Unit testing takes skill to do well but pays dividends in quality and maintainability.

Integration Testing

After unit testing validates individual module functionality and logic flow, integration testing aims to test interaction points across module boundaries. Various connected modules are combined logically and tested as groups.

The two main integration testing approaches are bottom-up and top-down testing as system build-up progresses. Bottom-up testing begins at the lowest-level units and works up towards the main modules. Test drivers temporarily simulate higher-level module behavior, and progress is validated incrementally in small steps.

Top-down testing takes the opposite route, starting with the highest-abstraction modules first. Stubs simulate the lower-level components, so the big picture comes first. In practice, integration testing typically employs a hybrid model, combining bottom-up and top-down testing as appropriate for a particular system.

The key focus areas to check during integration testing are:

1) Module interfaces – Validating all input/output parameters mapped properly across the software interfaces tying components together.

2) Error handling – Testing failure scenarios for error catching and recovery mechanisms.

3) Data flow correctness – Ensuring intermediate data between chained components is correctly passed along the hierarchy.

Automation testing tools like Selenium test and validate these points via test suites during integration testing. Manually testing interfaces would be very difficult for large systems. Proper test architecture fundamentals set the foundation for automation.
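The three focus areas above can be sketched in miniature. The two modules below are invented for illustration; the integration test checks the interface contract, the error handling, and the data flow between them:

```python
def parse_order(raw):
    """Hypothetical module A: parses raw input into a structured order."""
    items = [i for i in raw.strip().split(",") if i]
    return {"items": items, "count": len(items)}

def price_order(order, unit_price=5):
    """Hypothetical module B: consumes module A's output."""
    if order["count"] == 0:
        raise ValueError("empty order")  # error-handling path under test
    return order["count"] * unit_price

def test_modules_integrate():
    order = parse_order("apple,banana,pear")
    # Interface check: module B receives exactly the shape module A produces.
    assert set(order) == {"items", "count"}
    # Data-flow check: the intermediate value is passed along correctly.
    assert price_order(order) == 15
    # Error-handling check: the failure scenario is caught, not silently lost.
    try:
        price_order(parse_order(""))
        assert False, "expected ValueError"
    except ValueError:
        pass

test_modules_integrate()
```

The unit tests for each module would already have passed; what this test adds is confidence that the boundary between them behaves as both sides assume.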

System Testing

After unit and integration testing, individual modules and interfaces have been sufficiently tested in isolation. At the system testing phase, all the components are fully integrated together into one running system. Testing the system as a whole from end to end now starts. 

System testing aims to test thoroughly from a user-centric perspective and ensure that functional and non-functional requirements are fulfilled for actual usage. Unlike unit and integration testing, realistic customer test data is used instead of stubs. Formal test plans, scenarios, and cases are executed for full software validation before release.

Almost all types of software testing come into play at system testing because real usage at scale is evaluated under expected production scenarios. Hardware and environments are made as close to production as possible within allocated QA budgets.

Because system testing is closest to operational deployment, defect detection and fixing is also most representative at this level. By focusing rigorously on all system integration points with structured test processes, system testing aims to catch any last issues before release. Ensuring software works as specified for users is the ultimate goal!

Acceptance Testing

The final phase of software testing before release is acceptance testing, also called User Acceptance Testing (UAT). While system testing takes a technology-oriented approach validating software against documented specs, UAT focuses on ensuring software meets core business needs.

Actual business users are engaged to test real user scenarios, user journeys, and use cases, confirming the software functions from their working point of view. UX issues or data validations missed during system testing are fine-tuned and addressed here.

Example types of UAT include:

  • Alpha testing – Simulate limited customer usage internally first.
  • Beta testing – External controlled release to limited customers.
  • Functional testing – Assess key business processes.

Successful user acceptance signals readiness for full live deployment of the software application after QA signoff. UAT is the final phase, but it should rest on a rigorous foundation of coding best practices, unit tests, and the earlier test levels to maximize business value.

UAT success depends heavily on setting scope and expectations with stakeholders early. Which scenarios will determine the acceptance criteria? Not having this understanding leads to scope creep and delays. A smart QA strategy across the software development life cycle is key!

Security Testing

For any business-critical software system today, having vulnerabilities that expose customer data or fail on reliability/compliance needs is simply unacceptable. As development accelerates, ensuring code is secure has become a key imperative via security testing processes.

The aim of security testing is to methodically find security weaknesses in the software system that an attacker could exploit, such as by stealing data. Security testing tries to break into systems much as real hacking attempts would, treating the application as a target. Typical vulnerabilities tested for include:

  • Injection flaws – Entry points for malicious SQL, commands etc.
  • Broken authentication checks – Bypassing login controls via parameter tampering, brute force etc.
  • Sensitive data exposure – Decrypting weakly protected private data.
  • Broken session management – Session hijacking techniques to gain access.
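As a small illustration of the first item, the snippet below shows a classic SQL injection payload succeeding against string-concatenated SQL and failing against a parameterized query. It uses Python's standard `sqlite3` module with a made-up table:

```python
# Demonstration of an injection flaw and its standard fix (parameterized
# queries). The table and data are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"  # classic injection payload

# Unsafe: string concatenation lets the payload rewrite the query,
# so the WHERE clause becomes always-true and every row leaks.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: the ? placeholder treats the payload as plain data, not SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(unsafe), len(safe))  # → 1 0
```

A security test suite would probe every input field with payloads like this one and fail the build if any query behaves like the "unsafe" variant.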

While developers employ secure coding best practices, the role of the security tester/analyst goes much further via penetration testing.

Penetration Testing

Penetration testing, also called pen testing or ethical hacking, involves specialized security tests that try to circumvent application defenses at the infrastructure and software levels, much like external cyber attacks. Pen testing may use techniques like intercepting insecure data, brute-forcing login systems, and Distributed Denial of Service (DDoS) attacks that flood resources to test resilience. OWASP (the Open Web Application Security Project) provides very detailed testing guidelines.

Penetration tests uncover application weak spots missed during development due to time pressures or lack of internal expertise recognizing increasingly sophisticated attack vectors.

Just like end users who won’t tolerate vulnerable financial or medical applications losing trustworthiness, security testing is vital to ensure no cracks for criminals to infiltrate enterprise-grade software. Testing early and testing comprehensively is key.

Performance Testing 

For many modern web and mobile applications, the quality of service delivered to end users crucially depends on sustaining high performance, such as fast response times under varying loads. Performance testing aims to validate metrics like speed, reliability, scalability, resource-usage efficiency, and stability under expected user volumes and data conditions.

Some common examples of performance testing include:

Load testing: Tests system behavior at peak expected load volumes, per usage patterns, to ensure response-time targets are met.

Stress testing: Tests reliability by applying extreme load volumes well beyond anticipated highest usage to understand failure points.

Spike testing: Sudden & rapid up/down changes in load are applied to validate response in volatile conditions.
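Real performance tests use tools like JMeter, but the core mechanics, concurrent virtual users and response-time percentiles, can be sketched with the Python standard library alone. The "request" here is a stand-in `sleep`, not a real HTTP call:

```python
# Minimal load-test sketch: fire concurrent "virtual users" at a target
# function and report a p95 response time. Purely illustrative.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for an HTTP request; simulates ~10 ms of server work."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

# 20 virtual users issue 100 requests total.
with ThreadPoolExecutor(max_workers=20) as pool:
    timings = list(pool.map(lambda _: fake_request(), range(100)))

timings.sort()
p95 = timings[int(len(timings) * 0.95)]  # 95th-percentile latency
print(f"requests={len(timings)} p95={p95 * 1000:.1f}ms")
```

Percentiles matter more than averages here: a fast mean can hide a slow tail, and it is the tail that users experience as a hung checkout.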

The goal of performance testing is primarily to avoid software performance defects manifesting post-production which impact key end-user interactions like checkouts timing out. Test specialists follow systematic approaches in testing:

  • Plan testing environment accurately mirroring production infrastructure
  • Simulate virtual users to replicate different usage patterns
  • Monitor end user perspective response times with tools
  • Continually optimize identified bottlenecks to improve scalability

Top performance testing tools include LoadRunner, Apache JMeter, and NeoLoad, which allow massively parallel simulations. Cloud environments provide vast testing scale.

Ultimately, finding performance defects pre-release via solid testing processes across staging environments avoids costly application crashes or an inability to handle real traffic spikes after go-live. Performance testing saves future pain!

Types of Performance Testing

Some key types of performance testing, and tools commonly used for each, are:

  • Load Testing – Testing normal expected user loads (Tools: JMeter, LoadRunner)
  • Stress Testing – Testing performance extremes well beyond anticipated peaks (Tools: NeoLoad, LoadView)
  • Scalability Testing – Testing ramp-up and ramp-down behavior at scale (Tools: LoadView, WebLoad)
  • Spike Testing – Applying sudden surges and drops in load (Tools: SmartMeter.io)
  • Endurance Testing – Testing over an extended period to find memory leaks (Tools: LoadRunner, NeoLoad)
  • Volume Testing – Filling the database beyond capacity (Tools: SQL load testing tools)

Each type uncovers a different performance testing angle. A comprehensive strategy encompasses testing along all these dimensions - functional correctness alone isn’t sufficient for modern software to succeed. Performance defects lose customers fast in today’s world!

Compatibility Testing 

Enterprises today deploy software products across a vast, heterogeneous array of target platforms and environments, all of which their solutions must be compatible with. Compatibility testing plays a pivotal role in ensuring software functions predictably across this diverse ecosystem, which often comprises:
