Premium Practice Questions
Question 1 of 9
1. Question
How do different methodologies for Virtual Private Cloud (VPC) or Virtual Network (VNet) design and configuration compare in terms of effectiveness? An internal auditor is reviewing the cloud architecture of a financial services firm that hosts its core banking applications and development environments within the same cloud region. The auditor’s primary objective is to evaluate whether the current network design sufficiently mitigates the risk of lateral movement in the event of a compromise. The firm currently uses a single VPC with multiple subnets and relies on Network Access Control Lists (NACLs) for isolation. Which of the following design changes would most effectively improve the security posture and auditability of the network?
Correct: A multi-VPC hub-and-spoke architecture provides the highest level of isolation and risk mitigation. By separating production and development into distinct VPCs, the organization creates a hard boundary that prevents lateral movement at the network layer. A centralized transit gateway or hub allows for consistent security monitoring, deep packet inspection, and centralized policy enforcement, which aligns with internal audit requirements for robust control and visibility.
Incorrect: Increasing NACL complexity within a single VPC is prone to human error and does not provide the same level of isolation as separate VPCs. Full-mesh peering creates a complex, difficult-to-audit environment that lacks a central point of control for traffic inspection. A flat network topology is a significant security risk as it removes internal boundaries, making it easier for an attacker to move laterally across the network once an initial foothold is gained.
Takeaway: A hub-and-spoke VPC architecture enhances security and auditability by providing clear environment isolation and a centralized point for traffic inspection and policy enforcement.
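The auditability argument can be made concrete with a small graph check: in a hub-and-spoke design, every inter-VPC path must traverse the hub, which is exactly what gives the auditor a single inspection point. The sketch below is illustrative only; the VPC names and topologies are assumptions, not the firm's actual architecture.

```python
# Illustrative sketch: in hub-and-spoke, removing the hub disconnects the
# spokes, proving all inter-VPC traffic crosses the central inspection point.
# In a full mesh, a spoke-to-spoke path bypasses the hub entirely.

def paths_cross_hub(edges, hub, src, dst):
    """Return True if every src->dst path in the graph traverses the hub."""
    # Remove the hub; if dst is still reachable from src, a path bypasses it.
    nodes = {n for e in edges for n in e} - {hub}
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if hub not in (a, b):
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = set(), [src]
    while stack:
        n = stack.pop()
        if n == dst:
            return False          # dst reachable without the hub
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return True

hub_spoke = [("hub", "prod"), ("hub", "dev")]
full_mesh = [("hub", "prod"), ("hub", "dev"), ("prod", "dev")]
print(paths_cross_hub(hub_spoke, "hub", "prod", "dev"))   # True
print(paths_cross_hub(full_mesh, "hub", "prod", "dev"))   # False
```

The same reachability test is a useful mental model when reviewing peering tables: any spoke-to-spoke edge is a path that evades centralized inspection.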
Question 2 of 9
2. Question
The quality assurance team at a fintech lender identified a finding related to wireless network security protocols and best practices as part of an outsourcing review. The assessment reveals that the third-party service provider utilizes a legacy WPA2-Personal configuration for its internal office wireless network, which is used by staff to access the lender’s sensitive customer databases. Although the provider claims that the use of a 20-character complex passphrase provides sufficient protection, the internal auditor notes that the passphrase has not been rotated since the contract began 18 months ago and is shared among all 150 employees. Which of the following recommendations should the auditor prioritize to best align the provider’s wireless security with industry best practices for high-security environments?
Correct: WPA3-Enterprise with 802.1X authentication is the industry standard for secure corporate environments because it provides unique encryption keys for each user session and requires individual credentials. This eliminates the risks associated with shared pre-shared keys (PSKs), such as the inability to revoke access for a single user or the vulnerability to offline dictionary attacks if a single password is compromised.
Incorrect: Rotating a pre-shared key (PSK) does not address the fundamental weakness of shared credentials in a large environment. Disabling SSID broadcasting and using MAC filtering are considered ‘security by obscurity’ and are easily bypassed by attackers using basic packet sniffing and MAC spoofing tools. Captive portals provide a layer of session management but do not improve the underlying encryption or prevent unauthorized users from intercepting traffic if the wireless protocol itself is weak.
Takeaway: Enterprise-grade wireless security requires moving away from shared passwords toward individualized authentication mechanisms like 802.1X to ensure non-repudiation and strong data protection.
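The revocation problem is the heart of this finding. A toy model (not a real RADIUS or wireless API; class and user names are invented) contrasts the two credential schemes: with a shared PSK, removing one person means rotating the secret for all 150 staff, while per-user 802.1X-style credentials let you revoke exactly one account.

```python
# Toy model of the revocation difference between shared-PSK and
# per-user (802.1X-style) wireless credentials. All names are illustrative.

class SharedPskNetwork:
    def __init__(self, psk):
        self.psk = psk

    def revoke_user(self, user):
        # The only lever is rotating the PSK, which disconnects everyone.
        self.psk = "rotated-" + self.psk
        return "all 150 users must re-enroll"

class EnterpriseNetwork:
    def __init__(self):
        self.users = {"alice", "bob"}   # individual credentials

    def revoke_user(self, user):
        self.users.discard(user)        # one credential revoked, rest unaffected
        return f"only {user} loses access"

print(SharedPskNetwork("s3cret").revoke_user("bob"))
print(EnterpriseNetwork().revoke_user("bob"))
```

The model also explains the non-repudiation point in the takeaway: only individual credentials let an audit trail attribute a session to a person.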
Question 3 of 9
3. Question
Excerpt from an internal audit finding: In work related to hybrid cloud connectivity strategies as part of change management at an audit firm, it was noted that the organization recently implemented a dedicated private connection to a cloud service provider to handle sensitive client data. However, during a recent failover test, the transition from the dedicated circuit to the backup site-to-site IPsec VPN caused significant latency and asymmetric routing, leading to the failure of several automated audit tools. Which of the following network configurations would best address the risk of suboptimal path selection and ensure predictable failover behavior in this hybrid environment?
Correct: In a hybrid cloud environment, BGP is the standard protocol for managing dynamic routing between on-premises infrastructure and cloud providers. By using BGP on both the primary dedicated circuit and the backup VPN, the organization can use attributes like AS Path prepending to make the VPN path appear less desirable (longer) to the cloud provider. This ensures that the cloud provider prefers the dedicated circuit for return traffic, maintaining symmetric routing and predictable failover behavior.
Incorrect: Static routes with floating administrative distances are insufficient because they only react to local interface failures and cannot influence how the cloud provider routes traffic back to the on-premises environment, often leading to asymmetric routing. Spanning Tree Protocol (STP) is a Layer 2 protocol used to prevent loops in Ethernet networks and is not applicable for routing between Layer 3 environments like a data center and a cloud VPC. Layer 2 extensions (L2TP) are generally discouraged in hybrid cloud designs as they extend the broadcast domain, increasing the risk of broadcast storms and making the network less scalable and harder to troubleshoot.
Takeaway: Effective hybrid cloud connectivity requires dynamic routing protocols like BGP to manage path preference and ensure symmetrical traffic flow during failover events.
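The AS Path prepending mechanism reduces to a simple tie-breaker in BGP best-path selection: all else being equal, the route with the shortest AS path wins. A minimal sketch, assuming a hypothetical private ASN 65001 for the on-premises network:

```python
# Sketch of the AS-path length tie-breaker in BGP best-path selection.
# Prepending the local AS on the VPN advertisement makes that path appear
# longer, so the cloud side prefers the dedicated circuit for return traffic.

def best_path(routes):
    """Pick the route with the shortest AS path (other attributes equal)."""
    return min(routes, key=lambda r: len(r["as_path"]))

LOCAL_AS = 65001  # assumed private ASN for the on-prem network
routes = [
    {"name": "dedicated circuit", "as_path": [LOCAL_AS]},
    # Backup VPN advertised with the local AS prepended twice:
    {"name": "backup VPN", "as_path": [LOCAL_AS, LOCAL_AS, LOCAL_AS]},
]
print(best_path(routes)["name"])   # dedicated circuit
```

When the dedicated circuit fails and its advertisement is withdrawn, the prepended VPN route becomes the only candidate, which is exactly the predictable failover behavior the question asks for.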
Question 4 of 9
4. Question
During a committee meeting at a wealth manager, a question arises about interoperability between different vendor devices as part of business continuity planning. The discussion reveals that the firm currently utilizes a mix of legacy and modern networking hardware from three different manufacturers. The Chief Risk Officer notes that a recent 24-hour outage was exacerbated because a replacement switch from a secondary vendor could not participate in the primary vendor’s proprietary trunking and routing protocols. Which of the following actions should the internal auditor recommend to most effectively mitigate the risk of interoperability failure during emergency hardware restoration?
Correct: The use of industry-standard, open protocols like OSPF (Open Shortest Path First) and IEEE 802.1Q ensures that devices from different manufacturers can communicate and exchange data seamlessly. In a business continuity scenario, this allows the organization to swap hardware from different vendors without losing core functionality, thereby reducing the risk of extended downtime caused by proprietary protocol lock-in.
Incorrect: A single-vendor strategy might simplify management but increases vendor lock-in risk and does not address the immediate interoperability issues of the existing mixed environment. Increasing spare inventory addresses hardware availability but does not solve the underlying protocol incompatibility if a different model or brand must be used in an emergency. Static routing and manual assignments are highly prone to human error and lack the scalability and failover capabilities required for modern wealth management operations.
Takeaway: Adopting open-standard protocols is the most effective control for ensuring network interoperability and flexibility in multi-vendor environments.
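The interoperability test during an emergency swap boils down to set intersection: two devices can cooperate only if they share at least one open standard. The sketch below is a toy compatibility check; the vendor feature sets are made up for illustration.

```python
# Toy compatibility check: an emergency hardware swap succeeds only when
# both devices speak a common, open protocol. Feature sets are illustrative.

OPEN_STANDARDS = {"ospf", "802.1q"}

def can_interoperate(device_a, device_b):
    """True if the devices share at least one open-standard protocol."""
    return bool(device_a & device_b & OPEN_STANDARDS)

primary_vendor  = {"ospf", "802.1q", "proprietary-trunking"}
secondary_vendor = {"ospf", "802.1q"}

print(can_interoperate(primary_vendor, secondary_vendor))        # True
print(can_interoperate({"proprietary-trunking"}, secondary_vendor))  # False
```

An auditor can apply the same logic as a checklist control: inventory each device's protocol support and flag any link whose operation depends on a protocol outside the open-standard set.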
Question 5 of 9
5. Question
An incident ticket at a private bank is raised about understanding of network operating systems and their specific commands and features during change management. The report states that during a scheduled maintenance window at 02:00, a senior network engineer attempted to harden the access layer switches by disabling insecure management protocols. However, after a sudden power fluctuation caused a reboot of the primary distribution switch, all recent configuration changes were lost, and the switch reverted to its previous state. The engineer had verified the commands were active in the current session before the power event occurred. Which action was likely omitted during the change management process, leading to the loss of the configuration?
Correct: In most network operating systems (NOS), such as Cisco IOS, configuration changes are applied immediately to the running configuration stored in volatile RAM. To ensure these changes are permanent and survive a reboot, the administrator must manually copy the running configuration to the startup configuration stored in non-volatile RAM (NVRAM). The failure to perform this step is a common cause of configuration loss during unexpected power events.
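The running-config versus startup-config distinction can be modeled in a few lines. This is a minimal sketch of the semantics, not an actual NOS: changes apply immediately to volatile state, and only an explicit save (the equivalent of Cisco IOS `copy running-config startup-config`) makes them survive a reboot.

```python
# Minimal model of running-config (volatile RAM) vs startup-config (NVRAM).
# Changes live only in RAM until explicitly saved; a reboot reloads NVRAM.

class Switch:
    def __init__(self):
        self.startup_config = ["telnet enabled"]         # NVRAM, persistent
        self.running_config = list(self.startup_config)  # RAM, volatile

    def configure(self, line):
        self.running_config.append(line)  # applied immediately, RAM only

    def write_memory(self):
        # Equivalent of `copy running-config startup-config`
        self.startup_config = list(self.running_config)

    def reboot(self):
        # Power loss or restart: RAM state is discarded, NVRAM is reloaded
        self.running_config = list(self.startup_config)

sw = Switch()
sw.configure("no telnet")   # hardening change, verified in the live session
sw.reboot()                 # power fluctuation before saving
print("no telnet" in sw.running_config)   # False: write_memory was omitted
```

Running `sw.write_memory()` before the reboot would have preserved the change, which is precisely the omitted step the question describes.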
Question 6 of 9
6. Question
What distinguishes Cloud VPN and direct connect solutions from related concepts for NET Achievement Measurement 2S (AM2S)? An internal auditor is reviewing the network architecture of a financial services firm that is migrating its core processing engine to a hybrid cloud environment. The firm requires a connection that minimizes latency jitter and ensures consistent bandwidth for high-volume database replication between the on-premises data center and the cloud provider. When evaluating the risk of performance degradation, which of the following best describes the fundamental difference between these two connectivity options?
Correct: Direct Connect (or similar dedicated cloud interconnects) establishes a private, physical link between the customer’s network and the cloud provider’s edge. This bypasses the public internet entirely, providing consistent (deterministic) latency and throughput. In contrast, a Cloud VPN uses IPsec or SSL/TLS to create a secure tunnel over the public internet, which means its performance is inherently tied to the variable conditions of the internet’s routing and congestion.
Incorrect: The suggestion that Cloud VPN uses dedicated fiber while Direct Connect uses SD-WAN is a reversal of the actual technologies. The claim that Direct Connect is for client-to-site remote access is incorrect, as Direct Connect is a site-to-site/data center solution, while VPNs are more commonly used for remote access. The assertion that Cloud VPN is more secure due to Layer 2 encryption is technically inaccurate; IPsec VPNs typically operate at Layer 3, and while Direct Connect does not encrypt by default, it is considered more private because it does not traverse the public internet.
Takeaway: Direct Connect provides deterministic performance via private infrastructure, while Cloud VPN offers cost-effective, encrypted connectivity over the unpredictable public internet.
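Since the firm's stated requirement is minimal latency jitter, it helps to see jitter computed on sample round-trip times. The numbers below are invented for illustration only; they are not measurements from any real circuit.

```python
# Illustrative numbers only: deterministic private-link latency vs
# internet-path samples that vary with routing and congestion.

direct_connect_ms = [2.1, 2.1, 2.2, 2.1, 2.1]        # dedicated circuit
cloud_vpn_ms      = [18.0, 45.0, 22.0, 90.0, 30.0]   # public-internet tunnel

def jitter(samples_ms):
    """Simple jitter measure: spread between best and worst latency."""
    return max(samples_ms) - min(samples_ms)

print(jitter(direct_connect_ms))  # small and predictable
print(jitter(cloud_vpn_ms))       # large and variable
```

For high-volume database replication, it is the spread (jitter), not just the average latency, that breaks replication windows, which is why the deterministic path is preferred here.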
Question 7 of 9
7. Question
Which safeguard provides the strongest protection when dealing with spine-leaf architectures? An internal auditor is evaluating the design of a high-availability data center fabric that has transitioned from a traditional three-tier model to a spine-leaf topology. The audit objective is to ensure that the network design minimizes the risk of broadcast storms and maximizes link utilization for east-west traffic patterns. During the review of the network configuration, the auditor identifies several potential control implementations for managing redundancy and path selection.
Correct: In a spine-leaf architecture, the most robust safeguard against network instability and underutilization is the use of Layer 3 routing protocols (such as OSPF or BGP) combined with Equal-Cost Multi-Pathing (ECMP). This approach allows the network to utilize all available links between the leaf and spine layers simultaneously. By operating at Layer 3, the network avoids the inherent limitations of the Spanning Tree Protocol (STP), which would otherwise block redundant paths to prevent loops, thereby ensuring faster convergence and more efficient bandwidth distribution.
Incorrect: Restricting communication to a single trunk link is incorrect because it creates a significant bottleneck and a single point of failure, defeating the purpose of the spine-leaf redundancy. Utilizing a centralized STP root bridge is a legacy approach that leads to inefficient path usage, as STP must disable redundant links to prevent loops, which is counterproductive in a fabric designed for high-capacity east-west traffic. Configuring static routes directly between leaf switches is incorrect because, in a standard spine-leaf topology, leaf switches do not have direct physical connections to one another; all inter-leaf traffic must traverse the spine layer.
Takeaway: The transition to spine-leaf architectures is best supported by Layer 3 routing and ECMP to provide non-blocking, loop-free redundancy that surpasses the limitations of traditional Layer 2 Spanning Tree Protocol.
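ECMP's core idea is a deterministic per-flow hash: each flow's 5-tuple maps to one of the equal-cost spine uplinks, so all links carry traffic simultaneously while packets of any single flow stay on one path (avoiding reordering). A minimal sketch, with made-up spine names and flow tuples:

```python
# Sketch of ECMP path selection: a per-flow hash spreads flows across all
# spine uplinks at once, instead of STP blocking all but one. Names are
# illustrative, not a real fabric.
import hashlib

SPINES = ["spine1", "spine2", "spine3", "spine4"]

def ecmp_pick(flow_tuple, paths=SPINES):
    """Deterministically map a flow 5-tuple to one equal-cost path."""
    digest = hashlib.sha256(repr(flow_tuple).encode()).digest()
    return paths[digest[0] % len(paths)]

# Eight flows differing only by source port (src_ip, dst_ip, proto, sport, dport)
flows = [("10.0.1.5", "10.0.2.9", 6, 49152 + i, 443) for i in range(8)]
used = {ecmp_pick(f) for f in flows}
print(used)   # typically several spines in use; each flow always maps the same way
```

Because the hash is deterministic, a given flow never flaps between spines, while the population of flows spreads across all uplinks, which is the non-blocking behavior the takeaway describes.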
Question 8 of 9
8. Question
A procedure review at a listed company has identified gaps in packet analysis and interpretation using tools like Wireshark as part of a periodic review. The review highlights that the IT security department lacks a standardized protocol for handling sensitive data captured during live network troubleshooting sessions. Specifically, during a recent 72-hour diagnostic window, analysts captured unencrypted Personally Identifiable Information (PII) while investigating a latency issue on the HR server. Which of the following internal audit recommendations best addresses the risk of unauthorized data exposure during packet analysis?
Correct: Implementing capture filters at the point of data collection is a proactive control that prevents sensitive information from being recorded in the first place. Coupled with a time-bound deletion policy, this approach minimizes the data footprint and ensures compliance with privacy regulations by reducing the risk of long-term exposure of PII.
Incorrect: Using automated scripts instead of a GUI does not inherently address the privacy risk if the scripts are still capturing full payloads. Installing packet analysis tools on all workstations increases the attack surface and the risk of unauthorized network sniffing. Outsourcing the function does not remove the organization’s liability for data protection and may introduce additional third-party risks without solving the fundamental issue of how data is filtered and stored.
Takeaway: Internal auditors should ensure that network diagnostic procedures incorporate data minimization techniques, such as capture filters, to protect sensitive information during packet analysis.
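The data-minimization principle behind capture filters has two parts: scope the capture to the traffic under investigation, and truncate what is kept so payloads (where PII lives) are never stored. The sketch below models both; the packet fields are illustrative dictionaries, not a real capture-library API (in practice this corresponds to a BPF capture filter plus a snapshot length in tools like tcpdump or Wireshark).

```python
# Data-minimization sketch: keep only traffic matching the diagnostic scope,
# truncated to header bytes, so PII payloads are never written to disk.
# Packet fields are illustrative, not a real capture-library API.

def minimize(packets, port, snap_len=64):
    """Keep packets on the target port, truncated to roughly header size."""
    return [
        {**p, "payload": p["payload"][:snap_len]}
        for p in packets
        if port in (p["src_port"], p["dst_port"])
    ]

packets = [
    {"src_port": 443, "dst_port": 51000, "payload": b"x" * 1500},       # in scope
    {"src_port": 389, "dst_port": 52000, "payload": b"name=Jane Doe"},  # HR traffic, out of scope
]
kept = minimize(packets, port=443)
print(len(kept), len(kept[0]["payload"]))   # 1 64
```

For a latency investigation, header timestamps and sequence numbers are sufficient; nothing in the dropped payload bytes is needed, which is why this control costs the analyst little while eliminating the PII exposure.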
Question 9 of 9
9. Question
During your tenure as risk manager at a listed company, a matter arises concerning understanding of network operating systems and their specific commands and features during a regulatory inspection. A customer complaint suggests that unauthorized changes were made to the core routing logic within the last 72 hours, potentially compromising data integrity. Upon reviewing the network operating system (NOS) logs, you find that several administrative sessions were established via unencrypted channels. To address the underlying risk of unauthorized configuration changes and ensure accountability for administrative actions, which of the following represents the most comprehensive control strategy?
Correct: The most effective way to manage the risk of unauthorized administrative access and ensure accountability is through the implementation of AAA (Authentication, Authorization, and Accounting). Authentication verifies the user, Authorization determines what commands they can run, and Accounting creates an audit trail of what was changed. Combining this with secure, encrypted protocols like SSH and HTTPS prevents the interception of administrative credentials in transit, which is a primary vulnerability of unencrypted protocols like Telnet or HTTP.
Incorrect: While local password policies and physical security are important, they do not address the risks associated with remote administrative access or provide the granular accountability required to track specific changes. Standard ACLs filtering ICMP traffic protect against reconnaissance but do not secure the management plane of the network operating system. Spanning Tree Protocol (STP) configurations like Root Guard are essential for Layer 2 stability and preventing man-in-the-middle attacks at the switching level, but they do not control or audit administrative access to the device’s configuration commands.
Takeaway: Securing a network operating system requires a combination of encrypted management protocols and a centralized AAA framework to ensure granular command authorization and a verifiable audit trail.
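The three A's map cleanly onto three checks per command. The toy flow below (usernames, passwords, and the command policy are all invented for illustration; a real deployment would use a TACACS+ or RADIUS server) shows how each stage contributes, and why Accounting is what gives the inspector a verifiable trail of the last 72 hours.

```python
# Toy AAA flow: authenticate the admin, authorize each command against a
# per-user policy, and record an accounting entry either way.
# All credentials and policies here are illustrative.

AUTHORIZED = {"alice": {"show running-config",
                        "copy running-config startup-config"}}
PASSWORDS = {"alice": "pw"}
audit_log = []   # the Accounting trail an inspector would review

def aaa_execute(user, password, command):
    if PASSWORDS.get(user) != password:             # Authentication
        return "auth failed"
    if command not in AUTHORIZED.get(user, set()):  # Authorization
        audit_log.append((user, command, "denied"))
        return "command denied"
    audit_log.append((user, command, "executed"))   # Accounting
    return "ok"

print(aaa_execute("alice", "pw", "show running-config"))   # ok
print(aaa_execute("alice", "pw", "reload"))                # command denied
print(audit_log)
```

Note that even a denied command is logged: during a regulatory inspection, the record of attempted but unauthorized changes is as valuable as the record of executed ones.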