Domain 3 • CISA Exam

SDLC & System Implementation: Auditing Development from Planning to Production

Master the complete systems development lifecycle for CISA Domain 3 certification success

CISA Domain 3—Information Systems Acquisition, Development, and Implementation—examines your ability to audit how organizations build, buy, and deploy technology solutions. Representing 12% of the exam (approximately 18 questions), this domain tests whether you can evaluate projects from business case through production deployment, ensuring systems meet requirements, follow sound methodologies, and incorporate appropriate controls throughout their lifecycle.
At a glance: 12% exam weight • ~18 questions • 2 main sections • 14 knowledge areas

Domain 3 Structure and Scope

Domain 3 divides into two primary sections that cover the complete system lifecycle from initial planning through production implementation:

Section A: Information Systems Acquisition and Development

This section focuses on the front-end processes of obtaining or building systems. You must understand how to audit business cases, feasibility studies, vendor selection, contract management, project governance, requirements analysis, and development methodologies. The emphasis is on ensuring organizations make sound decisions about which systems to acquire or build and how to manage those projects effectively.

Section B: Information Systems Implementation

This section covers the deployment and rollout of systems into production environments. You evaluate testing strategies, data migration controls, user training programs, change management processes, cutover procedures, and post-implementation reviews. The focus is on ensuring systems transition smoothly from development to production while maintaining data integrity and business continuity.

Why Domain 3 Matters for IT Auditors

While smaller than Domains 4 and 5, this domain is critical because it addresses one of the highest-risk activities in any organization: implementing new systems. Failed IT projects cost organizations billions annually. As a CISA, you provide independent assurance that project governance, development methodologies, and implementation controls adequately protect organizational investments and minimize risk. Your ability to audit SDLC processes helps prevent costly failures and security vulnerabilities.


Understanding the Systems Development Lifecycle (SDLC)

The SDLC provides a structured framework for planning, creating, testing, and deploying information systems. Multiple SDLC models exist, each with different strengths, weaknesses, and appropriate use cases. Understanding these models allows auditors to evaluate whether an organization has selected an appropriate methodology for their project context.

The Waterfall Model: Sequential Development

Waterfall Methodology Overview

Structure: Linear, sequential approach where each phase must complete before the next begins. Once a phase concludes, revisiting it is difficult and costly.

Typical Phases:

1. Requirements Analysis: Gather and document all functional and technical requirements. Requirements are frozen after approval and serve as the project baseline.

2. System Design: Create detailed technical specifications, architecture diagrams, database schemas, and interface designs based on approved requirements.

3. Implementation (Coding): Developers write code according to design specifications. Unit testing occurs during this phase but integration comes later.

4. Testing: Comprehensive testing including integration, system, and user acceptance testing to verify the system meets requirements.

5. Deployment: System is released to the production environment. Training, documentation, and cutover activities occur during this phase.

6. Maintenance: Ongoing support, bug fixes, and enhancements based on user feedback and changing business needs.

Best Used For: Projects with well-defined, stable requirements that are unlikely to change. Examples include regulatory compliance systems, infrastructure projects, or replacement of existing systems with known specifications.

Audit Considerations: Verify comprehensive documentation at each phase, formal sign-offs before phase transitions, thorough requirements traceability, and change control processes to manage scope modifications. Look for evidence that testing covers all documented requirements.

The Agile Model: Iterative Development

Agile Methodology Overview

Structure: Iterative approach that breaks work into short cycles (sprints) typically lasting 2-4 weeks. Requirements can evolve based on feedback from each sprint.

Key Principles from the Agile Manifesto: Individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan.

Common Frameworks:

Scrum

Fixed-length sprints with defined roles (Product Owner, Scrum Master, Development Team). Includes sprint planning, daily standups, sprint reviews, and retrospectives.

Kanban

Continuous flow model using visual boards to manage work in progress. No fixed sprint lengths. Focus on limiting WIP and optimizing flow.

Extreme Programming (XP)

Emphasizes technical practices like pair programming, test-driven development, continuous integration, and frequent releases.

Best Used For: Projects with evolving requirements, need for rapid delivery, high customer involvement, or innovative products where discovery is part of development. Examples include web applications, mobile apps, or products in competitive markets.

Audit Considerations: Evaluate sprint documentation, user story acceptance criteria, definition of done, backlog management, retrospective actions, and continuous integration practices. Verify that security and compliance requirements are addressed in each sprint rather than deferred.

DevOps: Continuous Delivery

DevOps Methodology Overview

Structure: Cultural and technical approach that unifies development and operations teams to enable continuous integration, continuous delivery (CI/CD), and automated deployment. Built on Agile principles but extends to operations.

Core Practices: Automated testing, continuous integration pipelines, infrastructure as code, automated deployment, continuous monitoring, and rapid feedback loops. Development teams own both building and operating their software.

Best Used For: Cloud-native applications, microservices architectures, SaaS products, or any environment requiring frequent releases and high availability. Organizations with mature automation capabilities.

Audit Considerations: Verify automated testing coverage, review CI/CD pipeline security, evaluate rollback procedures, assess monitoring and alerting systems, and examine segregation of duties in automated environments. Ensure that speed doesn't compromise control objectives.

Methodology Comparison

Characteristic | Waterfall | Agile | DevOps
Approach | Linear, sequential phases | Iterative sprints | Continuous integration/delivery
Requirements | Fixed at project start | Evolve throughout project | Continuously refined
Testing | After coding completes | During each sprint | Automated, continuous
Customer Involvement | Primarily at requirements & UAT | Continuous throughout | Continuous feedback loops
Documentation | Comprehensive, formal | Minimal, just sufficient | Automated, code as documentation
Deployment | Single release at end | Frequent releases (per sprint) | Continuous deployment
Team Structure | Siloed (dev, QA, ops separate) | Cross-functional teams | Fully integrated dev+ops
Change Management | Formal, slow, discouraged | Embraced as learning | Continuous, automated
Risk | High risk at end (big bang) | Lower risk (incremental) | Lowest risk (small changes)
Best For | Stable requirements, compliance | Evolving needs, innovation | Rapid release, cloud-native

Hybrid Approaches (Agifall/Water-Scrum-Fall)

Many organizations blend methodologies—using Waterfall for high-level planning and Agile for development, or applying Agile development within Waterfall governance. While pragmatic, hybrid approaches require careful audit attention to ensure controls aren't lost between methodologies. Verify that governance, testing, and documentation standards apply consistently regardless of which methodology dominates specific phases.


Project Management and Governance

Regardless of SDLC methodology, effective project management and governance structures are essential for successful system implementation. Auditors must evaluate whether appropriate oversight, decision-making authority, and accountability mechanisms exist.

Business Case and Feasibility Analysis

Every significant IT investment should begin with a formal business case that justifies the project and demonstrates expected benefits. Auditors verify that business cases include:

  • Strategic Alignment: Clear linkage to organizational strategy and objectives
  • Problem Statement: Specific business problem or opportunity being addressed
  • Alternatives Analysis: Evaluation of multiple options including build, buy, outsource, or do nothing
  • Cost-Benefit Analysis: Total Cost of Ownership (TCO), Return on Investment (ROI), payback period, and Net Present Value (NPV); a worked sketch of these metrics appears after the feasibility paragraph below
  • Risk Assessment: Identification of technical, business, and operational risks with mitigation strategies
  • Resource Requirements: Human resources, infrastructure, training, and ongoing support needs
  • Success Criteria: Quantifiable metrics to measure benefit realization
  • Timeline and Milestones: Realistic schedule with key decision points

Feasibility studies should examine technical feasibility (can we build/implement this?), operational feasibility (will users accept it?), economic feasibility (does it make financial sense?), and schedule feasibility (can we meet the timeline?).
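
As a quick illustration of the economic-feasibility metrics named above, the following Python sketch computes payback period, ROI, and NPV for a hypothetical project. The cash-flow figures and 8% discount rate are invented for the example, not drawn from ISACA guidance.

```python
# Hypothetical figures for illustration only.
initial_investment = 500_000                                  # up-front cost (year 0)
annual_net_benefits = [150_000, 200_000, 200_000, 180_000]    # years 1-4
discount_rate = 0.08                                          # assumed cost of capital

# Payback period: first year in which cumulative benefits cover the investment.
cumulative, payback_year = 0, None
for year, benefit in enumerate(annual_net_benefits, start=1):
    cumulative += benefit
    if payback_year is None and cumulative >= initial_investment:
        payback_year = year

# ROI: total net benefit relative to the investment.
roi = (sum(annual_net_benefits) - initial_investment) / initial_investment

# NPV: discounted future benefits minus the initial investment.
npv = sum(b / (1 + discount_rate) ** y
          for y, b in enumerate(annual_net_benefits, start=1)) - initial_investment

print(f"Payback: year {payback_year}, ROI: {roi:.1%}, NPV: {npv:,.0f}")
```

With these made-up numbers the project pays back in year 3 and has a positive NPV; an auditor would look for this kind of quantified support, and its assumptions, in the approved business case.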

Project Governance Structures

Effective governance provides oversight, decision-making authority, and accountability throughout the project lifecycle. Key governance components include:

Steering Committee

Senior management body that provides strategic direction, approves major changes, resolves escalated issues, and ensures alignment with business objectives. Should meet regularly with documented decisions.

Project Manager

Accountable for day-to-day execution, resource management, timeline, budget, and deliverables. Must have appropriate authority and organizational support.

Project Sponsor

Executive champion who provides funding, removes organizational barriers, and ensures business engagement. Links project team to senior leadership.

Change Control Board

Reviews and approves scope changes, evaluates impact on timeline/budget/resources, and maintains project baseline. Essential for preventing scope creep.

Make-Versus-Buy Decisions

Organizations must choose whether to develop systems internally, purchase commercial off-the-shelf (COTS) software, or outsource development. Each approach has distinct audit considerations:

Build (In-House Development)

Provides maximum customization and control but requires significant resources and expertise. Higher initial cost, longer timeline. Audit focus: development standards, testing rigor, documentation quality, and long-term maintainability.

Buy (COTS Software)

Faster implementation, lower initial cost, vendor support. Limited customization, potential fit gaps, vendor dependency. Audit focus: vendor evaluation, contract terms, escrow arrangements, upgrade paths, and configuration controls.

Outsource Development

Access to specialized skills, cost efficiency, shared risk. Requires strong vendor management. Audit focus: contract provisions, intellectual property rights, service level agreements, quality assurance, and transition plans.

Software as a Service (SaaS)

Minimal upfront investment, continuous updates, scalability. Vendor controls infrastructure and security. Audit focus: data ownership, security assessments, compliance certifications, backup procedures, and exit strategies.

Audit Tip: Vendor Management

When auditing purchased or outsourced solutions, verify that contracts include right-to-audit clauses, source code escrow agreements (if critical), clear service level agreements with penalties for non-performance, and defined exit procedures including data extraction. Organizations should maintain vendor risk assessments and regularly review vendor performance.


Requirements Analysis and Management

Requirements define what a system must do (functional requirements) and how well it must perform (non-functional requirements). Poor requirements are among the leading causes of project failure. Auditors evaluate whether organizations gather, document, validate, and manage requirements effectively.

Types of Requirements

Functional Requirements

Describe specific behaviors and capabilities: "System shall allow users to reset passwords via email verification." These are typically testable and measurable.

Non-Functional Requirements

Define quality attributes: performance (response time), scalability, availability, security, usability, maintainability. Often more critical than functional requirements.

Business Requirements

High-level objectives and needs from business perspective. Describe why the project exists and what business value it delivers.

Technical Requirements

Infrastructure, platform, integration, and architectural specifications. Define technical constraints and standards to follow.

Requirements Gathering Techniques

Effective requirements analysis uses multiple techniques to ensure comprehensive understanding:

  • Interviews: One-on-one or group discussions with stakeholders to understand needs and priorities
  • Workshops: Facilitated sessions with multiple stakeholders to build consensus and resolve conflicts
  • Observation: Job shadowing and workflow analysis to understand actual work processes
  • Document Analysis: Review of existing systems, procedures, regulations, and business rules
  • Prototyping: Creation of mockups or working models to validate understanding
  • Surveys and Questionnaires: Broad data collection from many users or stakeholders
  • Use Cases and User Stories: Narrative descriptions of how users interact with the system

Requirements Management

Requirements aren't static—they evolve as understanding deepens and business needs change. Key requirements management practices include:

Requirements Traceability: Every requirement should be traceable from business need through design, code, test cases, and user documentation. Traceability matrices demonstrate that all requirements are addressed and tested.
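
To make the traceability idea concrete, here is a minimal sketch, with made-up requirement and test-case IDs, of the kind of check an auditor or tooling might run against a traceability matrix to flag requirements that have no linked test cases.

```python
# Hypothetical traceability matrix: requirement ID -> linked test case IDs.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],            # no test coverage yet -- an audit finding
}

untested = [req for req, tests in traceability.items() if not tests]
if untested:
    print("Requirements without linked test cases:", ", ".join(untested))
else:
    print("Every requirement traces to at least one test case.")
```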

Version Control: Requirements documents should be versioned with change history. Stakeholders must approve changes formally through change control processes.

Impact Analysis: Before approving changes, analyze effects on schedule, cost, resources, dependencies, and other requirements. Document trade-offs and decisions.

Requirements Validation: Stakeholders must confirm requirements are correct, complete, consistent, testable, and achievable. Validation occurs throughout the project, not just initially.


System Development and Design

Once requirements are defined, development teams create technical designs and write code. Auditors evaluate whether appropriate controls exist during development to ensure security, quality, and maintainability.

Design Phase Controls

System design translates requirements into technical specifications. Key design activities include:

Architecture Design

High-level structure including application layers, integration points, data flows, and technology choices. Should follow enterprise architecture standards.

Database Design

Data models, schemas, relationships, normalization, indexing strategies, backup/recovery approaches. Must support data integrity and performance.

Interface Design

APIs, data formats, protocols for integration with other systems. User interfaces for human interaction. Should follow usability and accessibility standards.

Security Design

Authentication, authorization, encryption, audit logging, data protection. Security is designed in from the start, not bolted on as an afterthought. Follow secure design principles.

Secure Coding Practices

Development teams should follow established secure coding standards to prevent common vulnerabilities. Auditors verify whether organizations:

  • Use coding standards and style guides consistently across teams
  • Implement input validation to prevent injection attacks
  • Apply proper authentication and session management
  • Handle errors securely without revealing sensitive information
  • Use parameterized queries to prevent SQL injection (see the sketch after this list)
  • Implement proper encryption for sensitive data at rest and in transit
  • Conduct code reviews (peer or automated) before integration
  • Use static application security testing (SAST) tools
  • Maintain separation between development, testing, and production code
  • Document code with comments explaining complex logic
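
As a small illustration of the parameterized-query point above, the following sketch uses Python's built-in sqlite3 module; the table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("auditor@example.com",))

user_input = "auditor@example.com' OR '1'='1"   # hostile input

# UNSAFE: string concatenation would let the input alter the query logic, e.g.
#   f"SELECT id FROM users WHERE email = '{user_input}'"

# SAFE: the placeholder binds the input as data, never as SQL.
rows = conn.execute("SELECT id FROM users WHERE email = ?", (user_input,)).fetchall()
print(rows)   # [] -- the injection attempt matches nothing
```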

Configuration Management

Configuration management tracks and controls changes to code, documentation, and configuration items throughout the project. Essential components include:

Version Control Systems: Tools like Git, SVN, or Mercurial that maintain history of all code changes, allow branching for parallel development, support code reviews through pull requests, and enable rollback to previous versions if needed.

Build Management: Automated compilation and packaging of code into deployable artifacts. Build processes should be repeatable, documented, and produce consistent results.

Environment Management: Separate development, testing, staging, and production environments with appropriate access controls. Configuration differences between environments should be minimized and documented.


Testing: The Quality Assurance Foundation

Testing validates that systems meet requirements and function correctly. Comprehensive testing occurs at multiple levels throughout development. Inadequate testing is a primary cause of production failures and security vulnerabilities.

The Four Levels of Testing

1. Unit Testing

Scope: Individual components, functions, or methods tested in isolation

Performed By: Developers during coding

Purpose: Verify each code unit works correctly according to specifications. Catch bugs early when they're cheapest to fix.

Techniques: White-box testing with knowledge of internal code structure. Test all code paths, boundary conditions, error handling. Often automated using frameworks like JUnit, pytest, or Mocha.

Audit Focus: Verify unit test coverage metrics (typically 70-80% code coverage minimum), review test quality (tests actually validate behavior, not just execute code), confirm tests run automatically in CI pipeline.
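
A minimal pytest-style unit test, using an invented password-validation function, shows the kind of boundary-condition coverage an auditor would expect unit tests to demonstrate rather than merely executing code.

```python
# test_password_policy.py -- hypothetical example of boundary-condition unit tests.
import pytest

def is_valid_password(pw: str) -> bool:
    """Toy policy: 8-64 characters, with at least one digit and one letter."""
    return (8 <= len(pw) <= 64
            and any(c.isdigit() for c in pw)
            and any(c.isalpha() for c in pw))

@pytest.mark.parametrize("pw,expected", [
    ("abc12", False),         # too short
    ("abcd1234", True),       # minimum length boundary
    ("a" * 63 + "1", True),   # maximum length boundary (64 chars)
    ("a" * 64 + "1", False),  # one character over the maximum
    ("12345678", False),      # digits only, no letters
])
def test_password_boundaries(pw, expected):
    assert is_valid_password(pw) == expected
```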

2. Integration Testing

Scope: Multiple components or modules tested together to verify they interact correctly

Performed By: Developers or QA engineers

Purpose: Identify interface issues, data flow problems, and communication errors between integrated components. Verify modules work together as designed.

Approaches: Big-bang (integrate all at once), top-down (start with high-level modules, use stubs for lower), bottom-up (start with low-level modules, use drivers for higher), or hybrid approaches.

Audit Focus: Review integration test plans, verify testing of all critical interfaces, confirm error handling between components, validate data transformation accuracy, check API contract testing.

3. System Testing

Scope: Complete, integrated system tested end-to-end in environment similar to production

Performed By: Independent QA team

Purpose: Validate entire system meets functional and non-functional requirements. Ensure system behaves correctly in realistic scenarios.

Types: Functional testing (features work as specified), performance testing (speed, scalability, capacity), security testing (vulnerability scanning, penetration testing), usability testing (user experience), compatibility testing (browsers, devices, OS), and regression testing (existing functionality still works).

Audit Focus: Verify test coverage of all requirements, review defect tracking and resolution, confirm independent testing team, validate test environment matches production, check performance benchmarks met.

4. User Acceptance Testing (UAT)

Scope: Real users test system in production-like environment with actual business scenarios

Performed By: Business users, customers, or end-user representatives

Purpose: Confirm system meets business needs and users can accomplish their work. Final validation before production deployment.

Types: Alpha testing (users test at developer site), beta testing (users test in their environment), business acceptance testing (business processes validated), regulatory acceptance testing (compliance verified).

Audit Focus: Verify user involvement in test scenario creation, review UAT sign-off documentation, confirm all critical business processes tested, validate defect resolution before production, ensure training materials reflect actual system.

Testing Best Practices

Test Early and Often: By a widely cited rule of thumb, a defect that costs $1 to fix during requirements costs roughly $10 in design, $100 in development, $1,000 in testing, and $10,000 in production. Testing should occur throughout the SDLC, not just at the end.

Automate When Possible: Automated tests run faster, more consistently, and more frequently than manual tests. They enable continuous integration and rapid feedback on code changes.

Test for Security: Security testing should not be optional. Include vulnerability scanning, penetration testing, and security code reviews. Follow OWASP guidelines for web applications.

Document Everything: Test plans, test cases, test data, and results must be documented. Defects should be tracked in a system with status, priority, assignment, and resolution details.


System Implementation and Deployment

Implementation transitions systems from development into production. This critical phase requires careful planning to maintain business continuity, protect data integrity, and ensure user readiness.

Implementation Planning

Comprehensive implementation plans address all aspects of deployment. Key planning elements include:

  • Deployment Strategy: Phased rollout, parallel operation, pilot program, or big-bang cutover based on risk tolerance
  • Infrastructure Preparation: Hardware installation, network configuration, software installation, capacity planning
  • Data Migration: Data cleansing, transformation, migration scripts, validation procedures, rollback plans
  • User Training: Training materials, instructor-led sessions, online tutorials, help desk preparation, super-user identification
  • Change Management: Communication plans, stakeholder engagement, resistance management, benefits realization
  • Cutover Procedures: Detailed step-by-step instructions, timing, responsibilities, checkpoints, go/no-go criteria
  • Contingency Plans: Rollback procedures, workarounds, support escalation, crisis communication

Data Migration Controls

Data migration—transferring data from legacy systems to new systems—presents significant risk. Poor data migration can corrupt critical business data. Essential controls include:

Data Quality Assessment: Analyze source data for completeness, accuracy, consistency, and validity before migration. Cleanse data to fix known issues.

Mapping and Transformation: Document how source fields map to target fields. Develop transformation rules for data format changes. Handle missing values, duplicates, and referential integrity.

Migration Testing: Perform trial migrations in test environments. Verify data accuracy through reconciliation reports comparing source to target record counts, totals, and samples. Validate business logic and relationships.
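
A reconciliation step can be as simple as comparing record counts and control totals between source and target. The sketch below assumes both systems expose the migrated rows as lists of dictionaries with an "amount" field; all names and figures are invented for illustration.

```python
def reconcile(source_rows, target_rows, amount_field="amount"):
    """Compare record counts and a control total between source and target."""
    source_total = sum(row[amount_field] for row in source_rows)
    target_total = sum(row[amount_field] for row in target_rows)
    return {
        "source_count": len(source_rows), "target_count": len(target_rows),
        "source_total": source_total, "target_total": target_total,
        "counts_match": len(source_rows) == len(target_rows),
        "totals_match": abs(source_total - target_total) < 0.01,
    }

# Example with made-up data: one record was dropped during migration.
legacy = [{"amount": 100.0}, {"amount": 250.5}, {"amount": 75.25}]
migrated = [{"amount": 100.0}, {"amount": 250.5}]
print(reconcile(legacy, migrated))   # counts_match and totals_match are both False
```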

Backup and Recovery: Back up all data before migration. Test restoration procedures. Maintain point-in-time recovery capability if migration must be reversed.

Audit Trails: Log all migration activities with timestamps, users, and results. Maintain records for compliance and troubleshooting.

Release Management

Release management controls the deployment of system changes into production. Mature organizations maintain formal release processes including:

Release Planning

Schedule releases during maintenance windows, coordinate with other projects, communicate to affected users, prepare documentation and training materials.

Change Approval

Change Advisory Board (CAB) reviews and approves production deployments. Emergency changes have expedited approval with post-implementation review.

Deployment Automation

Automated deployment scripts reduce human error. Configuration management tools ensure consistency across environments. Version control tracks what's deployed where.

Verification Testing

Smoke testing immediately after deployment confirms basic functionality. Monitor system health closely for hours or days after release, and be ready to roll back if issues arise.
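
A post-deployment smoke test can be a short script that hits a handful of critical endpoints and fails loudly if any are down. This sketch uses only Python's standard library; the URLs are hypothetical.

```python
import sys
import urllib.request

# Hypothetical health-check endpoints for the newly deployed release.
SMOKE_CHECKS = [
    "https://app.example.com/health",
    "https://app.example.com/api/version",
]

def smoke_test(urls, timeout=10):
    failures = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status != 200:
                    failures.append(f"{url} -> HTTP {resp.status}")
        except Exception as exc:
            failures.append(f"{url} -> {exc}")
    return failures

if __name__ == "__main__":
    problems = smoke_test(SMOKE_CHECKS)
    if problems:
        print("Smoke test FAILED:", *problems, sep="\n  ")
        sys.exit(1)   # a non-zero exit code can trigger an automated rollback
    print("Smoke test passed.")
```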

Post-Implementation Review

After deployment stabilizes, conduct formal post-implementation reviews (PIRs) to capture lessons learned and assess project success. Effective PIRs examine:

Benefits Realization: Did the system deliver expected benefits defined in the business case? Are performance metrics being measured? What's the actual ROI?

Budget and Schedule: Final costs compared to budget. Timeline variance. Reasons for deviations. How could estimates improve?

Requirements Fulfillment: Percentage of requirements delivered. Critical gaps. User satisfaction with functionality.

Quality Metrics: Defect rates post-production. System availability and performance. Security incidents. User-reported issues.

Process Effectiveness: What worked well? What caused problems? Recommendations for future projects. Update organizational standards and templates.

Common Implementation Failures

Inadequate Training: Users resist systems they don't understand. Budget sufficient time and resources for comprehensive training.

Poor Data Quality: Migrating bad data creates bad reports and business decisions. Invest in data cleansing before migration.

Insufficient Testing: Skipping UAT to meet deadlines results in production issues. Never compromise on user acceptance testing.

Lack of Executive Support: Without visible sponsorship, users perceive projects as optional. Maintain engaged leadership throughout.


Auditing Domain 3: Key Examination Areas

When auditing systems acquisition, development, and implementation, focus your examination on these critical areas:

Strategic Alignment and Governance

  • Review business cases and feasibility studies for completeness and approval
  • Verify projects align with IT strategy and organizational objectives
  • Examine steering committee meeting minutes and decision documentation
  • Assess project portfolio management and prioritization processes
  • Evaluate change control board effectiveness and change approval records

Project Management and Execution

  • Review project plans, schedules, and resource allocations
  • Assess project status reporting and variance analysis
  • Examine risk registers and mitigation strategies
  • Verify scope change documentation and approvals
  • Evaluate project manager qualifications and authority

Requirements and Design

  • Review requirements documentation for completeness and stakeholder approval
  • Verify requirements traceability to design, code, and tests
  • Assess design review processes and technical architecture decisions
  • Examine security requirements integration throughout SDLC
  • Evaluate handling of non-functional requirements (performance, scalability)

Development Controls

  • Review coding standards and secure development practices
  • Assess version control usage and branch management
  • Examine code review processes (peer reviews or automated)
  • Verify separation of development, testing, and production environments
  • Evaluate developer access controls and privilege management

Testing Adequacy

  • Review test plans and coverage of all requirements
  • Verify independent testing team for system and UAT phases
  • Examine defect tracking, prioritization, and resolution records
  • Assess test environment configuration and data management
  • Validate security testing including vulnerability scans and pen tests
  • Confirm UAT sign-off by business users before production

Implementation Controls

  • Review implementation plans including rollback procedures
  • Assess data migration controls and reconciliation
  • Examine change management and user training programs
  • Verify release approval documentation
  • Review post-implementation support arrangements
  • Evaluate lessons learned and post-implementation review findings

Vendor Management (for Purchased/Outsourced Systems)

  • Review vendor selection criteria and evaluation documentation
  • Examine contracts for SLAs, penalties, escrow, and audit rights
  • Assess vendor performance monitoring and issue resolution
  • Verify intellectual property ownership and licensing terms
  • Evaluate vendor risk assessments and security certifications

Domain 3 Study Strategy

Focus Your Preparation

Understand Methodologies Conceptually: Know when Waterfall, Agile, and DevOps are appropriate. Understand trade-offs, not just definitions. Questions present scenarios—you must recommend suitable approaches.

Master Testing Levels: Clearly differentiate unit, integration, system, and UAT. Know what each tests, who performs it, and when it occurs. Questions often ask about appropriate testing for specific situations.

Learn Project Governance: Understand roles (sponsor, PM, steering committee) and their responsibilities. Know what business cases should contain. Recognize good vs. poor governance practices.

Know Change Management: Understand change control processes, impact analysis, and approval workflows. This applies to both project scope changes and production release management.

Study Vendor Management: Know evaluation criteria, contract essentials, and ongoing oversight. Make-versus-buy decisions appear frequently. Understand SaaS, outsourcing, and COTS considerations.

Common Domain 3 Question Patterns

CISA questions in Domain 3 typically present project scenarios and ask you to identify the best practice, greatest risk, or most appropriate next step. Practice thinking like an auditor: What controls should exist? What could go wrong? What provides assurance?

Common themes include: identifying missing controls in development processes, recommending appropriate testing for specific situations, recognizing inadequate requirements management, spotting poor change control, evaluating vendor contract provisions, and assessing implementation risks.

Remember: the "best" answer follows systematic methodology, addresses risk appropriately, and aligns with ISACA guidance—even if it seems more bureaucratic than what happens in practice.


Conclusion

Domain 3 may represent only 12% of the CISA exam, but the concepts it covers—systems development lifecycles, project management, testing methodologies, and implementation controls—form the foundation of modern IT audit work. Organizations spend billions annually on IT projects, many of which fail to deliver expected value due to poor governance, inadequate requirements, insufficient testing, or botched implementations.

As a CISA professional, your role is to provide independent assurance that development and implementation processes include appropriate controls to protect organizational investments, maintain data integrity, ensure security, and deliver business value. Understanding the full spectrum of SDLC models, testing approaches, and governance mechanisms allows you to evaluate projects objectively and recommend improvements that prevent costly failures.

Master Domain 3 by understanding not just what should happen, but why each control matters and what risks occur when controls are absent. Think systematically through project lifecycles, identify control weaknesses, and recommend practical solutions. This analytical mindset—combined with solid knowledge of SDLC frameworks and best practices—will serve you well both on the exam and throughout your audit career.

Ready to Test Your Knowledge?

Practice Domain 3 questions extensively. Focus on scenario-based questions that require you to apply concepts rather than just recall definitions. Use the ISACA Question, Answer & Explanation database to build familiarity with question formats and reasoning patterns specific to this domain.
