Task 3: STLC and QA Testing
1) List all the models of the SDLC.
There are several Software Development Life Cycle (SDLC) models, each with its own set of principles and practices. Here are some commonly used SDLC models:
Waterfall Model:
A linear and sequential approach where each phase must be completed before moving to the next. It follows a fixed sequence of stages: Requirements, Design, Implementation, Testing, Deployment, and Maintenance.
V-Model (Verification and Validation Model):
An extension of the waterfall model, the V-Model emphasizes verification and validation activities at each stage. It correlates development phases with testing phases.
Iterative and Incremental Model:
Involves repeating development cycles, with each iteration refining and improving the software; feedback gathered in one iteration informs adjustments in the next. The work is divided into smaller, manageable parts called increments, each of which delivers a portion of the functionality.
Spiral Model:
Combines the idea of iterative development with aspects of the waterfall model. It includes repetitive cycles called spirals, with each spiral representing a phase in the development process.
Agile Model:
An iterative and flexible approach that emphasizes customer collaboration, responsiveness to change, and the delivery of functional software in short, incremental cycles (sprints).
Scrum Model:
A specific agile framework that structures development work into fixed-length iterations called sprints, typically lasting two to four weeks.
These models vary in their approach, structure, and emphasis on different aspects of the software development process. The choice of a specific SDLC model depends on project requirements, team preferences, and the nature of the software being developed.
2) What is STLC? Also, Explain all stages of STLC.
STLC stands for Software Testing Life Cycle. It is a systematic process for planning, creating, executing, and managing tests throughout the software development life cycle. The primary goal of STLC is to ensure that the software meets quality standards and is free of defects when it is released. STLC is an integral part of the larger Software Development Life Cycle (SDLC) and is closely aligned with the development process.
The stages of STLC typically include:
Requirement Analysis:
In this stage, the testing team analyzes and reviews the requirements to understand the scope of testing. Testers collaborate with stakeholders to ensure a clear understanding of the expected behavior of the software.
Test Planning:
Test planning involves creating a detailed test plan that outlines the testing approach, objectives, scope, resources, schedule, and deliverables. It acts as a roadmap for the testing process, providing a structured framework for testing activities.
Test Case Design:
Based on the requirements and specifications, the testing team designs test cases. Test cases define the conditions to be tested, the steps to execute, and the expected results. This stage ensures comprehensive coverage of the software functionality.
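Test case design can be sketched in code. The example below is a minimal illustration (not a prescribed format) using a hypothetical calculate_discount() function as the system under test; each test case records the condition being tested, the input, and the expected result, mirroring the three elements a written test case defines.

```python
# System under test: a hypothetical discount rule (10% off orders
# of 100 or more). This function exists only for illustration.
def calculate_discount(order_total):
    return order_total * 0.9 if order_total >= 100 else order_total

# Each tuple is a designed test case: (condition tested, input, expected result).
test_cases = [
    ("below threshold: no discount", 50.0, 50.0),
    ("at threshold: discount applies", 100.0, 90.0),
    ("above threshold: discount applies", 200.0, 180.0),
]

def run_test_cases():
    # Execute every designed case and record whether actual matched expected.
    results = []
    for condition, order_total, expected in test_cases:
        actual = calculate_discount(order_total)
        results.append((condition, actual == expected))
    return results
```

Covering the boundary value (exactly 100) alongside values on either side is what gives the suite its coverage of the rule.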
Test Environment Setup:
The testing environment is set up, including the necessary hardware, software, network configurations, and test data. This stage aims to replicate the production environment to simulate real-world conditions.
Test Execution:
In this phase, the actual testing takes place. Testers execute the designed test cases in the test environment, record the results, and report any defects or issues discovered. Execution can involve manual testing, automated testing, or a combination of both.
Defect Reporting and Tracking:
When defects or issues are identified during test execution, they are reported, documented, and tracked using a defect tracking system. The development team then addresses these issues, and the testing team verifies the fixes.
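The defect lifecycle described above can be sketched as a small data structure. This is an illustrative model only; the field names and statuses are assumptions, not the schema of any particular defect tracking tool.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    NEW = "new"          # reported by the testing team
    FIXED = "fixed"      # addressed by the development team
    VERIFIED = "verified"  # fix confirmed by the testing team

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: str
    status: Status = Status.NEW
    history: list = field(default_factory=list)

    def transition(self, new_status):
        # Record every status change so the defect's lifecycle is auditable.
        self.history.append((self.status, new_status))
        self.status = new_status

# Typical lifecycle: reported -> fixed by development -> verified by testing.
bug = Defect("BUG-101", "Login fails for empty password", "high")
bug.transition(Status.FIXED)
bug.transition(Status.VERIFIED)
```

Keeping the transition history on the record is what lets a team trace who changed a defect's state and when, which is the core value of a tracking system.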
Regression Testing:
After defect fixes or changes to the software, regression testing is performed to ensure that the modifications did not introduce new issues and that existing functionalities still work as intended.
Test Closure:
The test closure phase involves summarizing the testing activities, preparing test closure reports, and evaluating the testing process to identify areas for improvement. It marks the formal conclusion of the testing process for a specific release.
3) Your TL (Team Lead) has asked you to explain the difference between quality assurance (QA) and quality control (QC) responsibilities. While QC activities aim to identify defects in actual products, your TL is interested in processes that can prevent defects. How would you explain the distinction between QA and QC responsibilities to your TL?
Ans:
Quality Assurance (QA) Responsibilities:
Process-Oriented:
QA is focused on the entire software development process rather than the end product. It involves defining, implementing, and monitoring processes to ensure the development team follows best practices.
Preventive in Nature:
QA activities are proactive and aimed at preventing defects before they occur. This includes process audits, training programs, and the establishment of quality standards and guidelines.
Continuous Improvement:
QA involves continuous process improvement. By analyzing process metrics and feedback, QA teams identify areas for enhancement and implement changes to improve overall efficiency and effectiveness.
Standardization:
QA is concerned with standardizing processes across the organization. This consistency helps in predicting and controlling the quality of software products at every stage of development.
Management of Processes:
QA involves managing and optimizing processes to ensure that the development team adheres to industry standards and follows best practices. It encompasses both technical and non-technical aspects of the development lifecycle.
Documentation and Training:
QA teams create and maintain documentation for processes, procedures, and standards. They also provide training to team members to ensure that everyone is familiar with and follows the established processes.
Quality Control (QC) Responsibilities:
Product-Oriented:
QC is focused on identifying defects and ensuring that the end product meets quality standards. It involves activities that directly evaluate and test the software product.
Corrective in Nature:
QC activities are reactive and aim to identify and correct defects in the product. This includes various testing activities such as functional testing, regression testing, and performance testing.
Verification of End Products:
QC involves the verification of the final product against predefined criteria and requirements. This ensures that the software product aligns with the specified functionality and quality standards.
Testing Processes:
QC encompasses various testing processes, including manual testing, automated testing, and performance testing, to detect and report defects in the software.
Defect Identification and Correction:
QC activities involve the identification, logging, and correction of defects found during testing. This includes collaboration with development teams to fix issues and retest the corrected components.
Validation and Verification:
QC ensures that the product meets the specified requirements by validating its functionality and verifying that it conforms to the established quality standards.
| Parameters | Quality Assurance (QA) | Quality Control (QC) |
| --- | --- | --- |
| Aim | To prevent defects. | To identify and fix defects. |
| Focus | The intermediate development process. | The final product. |
| Team | All team members of the project are involved. | Generally, only the testing team is involved. |
| Time consumption | Less time-consuming. | More time-consuming. |
| Order of execution | Performed before quality control. | Performed after quality assurance activities are done. |
| Process/product-oriented | Process-oriented. | Product-oriented. |
| Technique | A technique for managing quality. | A technique for verifying quality. |
| Includes program execution? | Does not include executing the program. | Always includes executing the program. |
| Technique type | Preventive. | Corrective. |
| Measure type | Proactive. | Reactive. |
| SDLC/STLC | Spans the entire software development life cycle. | Concerned with the software testing life cycle. |
| Activity level | A process-level activity that catches process weaknesses QC cannot. | A product-level activity that catches defects QA cannot. |
| Example | Verification. | Validation. |
4) Difference between Manual and Automation Testing?
Manual testing and automation testing are two different approaches to software testing, each with its own advantages and limitations. Here are the key differences between manual and automation testing:
| Parameters | Manual Testing | Automation Testing |
| --- | --- | --- |
| Definition | Test cases are executed by a human tester. | Test cases are executed by software tools. |
| Processing time | Time-consuming. | Faster than manual testing. |
| Resource requirements | Requires human resources. | Requires automation tools and trained engineers. |
| Framework requirement | Does not use frameworks. | Uses frameworks such as data-driven and keyword-driven frameworks. |
| Investment | Investment is required in human resources. | Investment is required in tools and automation engineers. |
| Test result availability | Results are typically recorded in spreadsheets, so they are not readily available. | Results are readily available to all stakeholders in the tool's dashboard. |
| Reliability | More prone to human error. | More reliable, since scripts execute the same steps consistently. |
| Exploratory testing | Possible. | Not well suited to automation. |
| Performance testing | Not practical manually. | Performance testing such as load, stress, and spike testing is possible. |
| Batch testing | Not possible. | Multiple tests can be batched for fast execution. |
| Programming knowledge | Not required. | Required, since writing and maintaining test scripts needs trained staff. |
| Documentation | Usually minimal. | Test scripts double as documentation; new developers can read the test cases to understand the code base quickly. |
| When to use? | Exploratory, usability, and ad hoc testing. | Regression, load, and performance testing. |
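Batch execution and readily available results, two of the automation advantages in the table above, can be illustrated with a toy test runner. Real projects would use a framework such as pytest or JUnit; this self-contained sketch only shows the idea of running many tests in one pass and collecting a report.

```python
# Two trivial automated checks, written as plain functions that raise
# AssertionError on failure (the convention most test frameworks build on).
def test_addition():
    assert 2 + 2 == 4

def test_string_upper():
    assert "qa".upper() == "QA"

def run_batch(tests):
    # Execute every test in one batch and collect a pass/fail report,
    # so results are immediately available to anyone who reads it.
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError:
            results[test.__name__] = "FAIL"
    return results

report = run_batch([test_addition, test_string_upper])
```

Because the whole batch runs without human intervention, the same suite can be re-executed on every build, which is exactly why automation suits regression testing.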
5) As a test lead for a web-based application, your manager has asked you to identify and explain the different risk factors that should be included in the test plan. Can you provide a list of the potential risks and their explanations that you would include in the test plan?
Identifying and addressing potential risks is a crucial aspect of test planning. Here is a list of potential risk factors that you might consider including in your test plan along with their explanations:
Unclear or Incomplete Requirements:
Incomplete or ambiguous requirements can lead to misunderstandings and incorrect implementation. This may result in missing functionalities, incorrect features, or the need for frequent changes during the testing phase.
Tight Project Schedule:
A tight schedule may limit the time available for thorough testing, potentially leading to insufficient test coverage and increased chances of overlooking critical defects.
Resource Constraints:
Limited resources such as testing tools, environments, or skilled testers may impact the ability to conduct comprehensive testing, affecting the overall quality of the web application.
Dependency on Third-Party Components:
If the web application relies on third-party components (libraries, APIs, etc.), any issues or changes in those components may impact the application's functionality.
Security Risks:
Security vulnerabilities, such as data breaches or unauthorized access, pose a significant risk. Testing should include security testing to identify and address potential vulnerabilities.
Browser and Device Compatibility:
Differences in browser versions and device types may lead to rendering issues or functional discrepancies. Comprehensive testing on various browsers and devices is essential to ensure a consistent user experience.
Performance and Scalability:
The web application may face performance issues under heavy loads or high concurrent user access. Performance and scalability testing are crucial to identify and address any bottlenecks.
Data Integrity and Recovery:
Risks related to data loss, corruption, or challenges in data recovery must be considered. Regular backups and recovery testing should be part of the test plan.
Communication Issues:
Lack of effective communication among team members or stakeholders can lead to misunderstandings and misinterpretations, impacting the testing process and overall project success.
Lack of Training for Testers:
Testers with insufficient training may struggle to identify and report defects effectively. Adequate training should be provided to ensure a skilled testing team.
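In a test plan, risks like those above are often recorded in a risk register and ranked so mitigation effort goes to the biggest exposures first. The sketch below assumes a simple probability × impact scoring scheme (both on a 1-5 scale); the scores shown are illustrative, not prescribed values.

```python
# Hypothetical risk register entries: each risk from the test plan gets a
# probability and an impact score (1 = low, 5 = high).
risks = [
    {"risk": "Unclear requirements", "probability": 4, "impact": 5},
    {"risk": "Tight schedule", "probability": 3, "impact": 4},
    {"risk": "Third-party dependency changes", "probability": 2, "impact": 4},
]

# Exposure = probability * impact; higher exposure means higher priority.
for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

# Rank the register so the plan addresses the largest exposures first.
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
```

A ranked register like this makes the mitigation order explicit and gives stakeholders a single view of where the testing effort is most at risk.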