In modern engineering, the use of software tools is no longer optional but an integral requirement. Engineers increasingly depend on advanced modeling, simulation, and analytical platforms that enable them to handle complex systems with a level of precision and efficiency beyond what manual methods can achieve. While engineering software has revolutionized the speed and accuracy of analysis, it carries an inherent vulnerability: the “garbage in, garbage out” principle. A computer design program, no matter how powerful, relies heavily on the data and assumptions provided to it. If inputs are flawed, or if the engineer does not fully understand the underlying theory, outputs may appear mathematically precise but can be dangerously wrong.
Description of Technical Competency 1.5:
The purpose of Technical Competency 1.5 is to ensure that engineers are not only capable of producing results with computer-aided programs or spreadsheets but are also able to independently verify the accuracy, validity, and reliability of those results using sound engineering judgment. Technical Competency 1.5, as outlined in the Competency-Based Assessment (CBA) guidelines published by most Canadian professional engineering regulators using the 34-competency framework, is as follows:
“You understand solution methods and can independently verify the results.”
The regulators that use the 22-competency framework, such as APEGA, Engineers Yukon, and NAPEG, define this competency as follows:
“…Where computer-aided solutions or spreadsheet calculations were used, demonstrate how you verified your work with manual calculations, field data, or independent testing. Where computer solutions were not used, demonstrate how you verified your work through a separate and independent process.…”
Independent Verification of Software Results:
When using reputable computer software, it can be tempting to quickly generate results and assume they are correct simply because the software incorporates established engineering principles. While the software can accurately execute any method or principle from its library, it cannot determine which approach is the most suitable for a specific scenario. It is the engineer’s responsibility to select the appropriate principle or method, set relevant parameters, and critically evaluate whether the results are valid, reasonable, and applicable to the situation. When demonstrating Competency 1.5, the emphasis should be on the engineer’s understanding of the underlying theoretical principles rather than on proficiency with the software itself. Engineers must also be able to independently verify and validate the results produced by the software.
Duty of Care with Use of Software:
Engineers have a professional duty of care when using engineering software, as reliance on automated outputs does not absolve them of responsibility. Even if a program accurately executes its internal models, it is the engineer’s responsibility to select the appropriate method, define parameters, and ensure that results are valid for the specific scenario.
Good practice includes recognizing the software’s limitations, keeping records of versions and input data, and applying independent checks such as simplified calculations, benchmarks, or comparison with field data. By doing so, engineers can demonstrate competence, ensure defensible results, and uphold the trust placed in their professional judgment.
Techniques for Independent Verification:
Peer Review:
Peer review is one of the most reliable ways to verify engineering results, as it introduces a second layer of professional judgment. In this process, a qualified engineer, who may be internal or external to the organization, reviews the assumptions, modelling approach, and outputs. The purpose is not to redo every calculation but to check that the methods are appropriate, the inputs are realistic, and the conclusions align with accepted standards of practice. Internal reviews are often part of in-house quality assurance, while external or independent reviews add further rigor on high-risk or complex projects. Peer review helps identify omissions, test assumptions, and assess risks, reducing the chance of errors that might be overlooked when working in isolation. Ultimately, it provides both technical confidence and a documented safeguard of professional responsibility. Figure 1 below outlines the peer review process step by step:
Figure 1: Peer Review process (Drewien et al., Sandia/Lockheed Martin, 2014).
Alternate or Manual Computation:
Independent verification through manual computation involves identifying a key result from the software and then performing a simplified calculation based on first principles or standard formulas. The goal is not to replicate the model in full detail but to verify that the software’s result falls within a reasonable and expected range. If the two values closely agree, confidence in the model is strengthened. If there is a significant discrepancy, it indicates a need to revisit assumptions, inputs, or modeling choices.
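As a minimal sketch of this general pattern, the snippet below compares a hypothetical software result with a back-of-envelope hand estimate and flags any discrepancy beyond a chosen tolerance. The values and the tolerance are illustrative assumptions, not prescriptions:

```python
# A minimal sketch of the general pattern: pick one key software result and compare it
# with a simplified first-principles estimate. All names and numbers are hypothetical.

def within_tolerance(software_value: float, hand_value: float, tol_percent: float = 10.0) -> bool:
    """Return True if the software result agrees with the hand estimate within tol_percent."""
    return abs(software_value - hand_value) <= abs(hand_value) * tol_percent / 100.0

software_result = 182.0   # e.g. a peak stress, flow, or voltage reported by the program (assumed)
hand_estimate = 175.0     # back-of-envelope value from first principles (assumed)

if within_tolerance(software_result, hand_estimate):
    print("Software result falls within the expected range of the hand estimate.")
else:
    print("Significant discrepancy: revisit assumptions, inputs, or modelling choices.")
```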
Benchmarking Against Industry Best Practices & Industry Guidelines:
Benchmarking as a form of independent verification means comparing software outputs against recognized industry benchmarks, test cases, or published guidance that engineers generally accept as good practice. Unlike mandatory codes, which must always be followed, benchmarking often relies on voluntary standards, recurring industry reports, and research-based references that engineers across sectors use to test whether their own results are reasonable. Most industries maintain published guidelines that, while not legally binding, are widely accepted as representing good practice, and these serve as valuable benchmarks for verification.
For example, in the municipal sector, recurring programs such as the Municipal Benchmarking Network Canada (MBNC) and MIDAS (Municipal Information & Data Analysis System) offer shared performance data on water, wastewater, and road networks, giving civil engineers a reference point to validate outputs like pavement lifecycle costs or renewal rates. Similarly, in the railway and transportation sector, safety and infrastructure performance are benchmarked through recurring frameworks, with practices published by the American Public Transportation Association (APTA) providing widely recognized reference points.
Validation with Test or Field Data:
Validation with test or field data involves taking the results produced by software and checking them against what happens in reality. The engineer first models a component or system in the software, then compares the predictions with measurements from a laboratory test, prototype, or field operation. If the software and real-world data align within reasonable limits, the model can be trusted. If they don’t, the assumptions or inputs need to be revisited.
This process is especially important in new product development, where confidence in the software may not yet be established.
Some Practical Examples:
Electrical & Electronics Engineering Example –
Consider an electrical engineer in a typical electricity utility setting performing a load-flow study on a distribution network with hundreds of buses, stations, and generation sources. Before hitting the software’s “run” button, the engineer should carefully choose the analysis technique (Newton-Raphson, Gauss-Seidel, or decoupled power flow) that works best for the given network conditions and analysis objectives.
To independently verify the results, the engineer can select one representative bus and perform a manual voltage drop calculation using the line impedance and estimated load. If this hand calculation produces a value close to what the software shows for that bus, it confirms that the model setup is reasonable. The engineer should also consider validating the study against real-world conditions. SCADA data or a site visit can confirm actual regulator tap positions, feeder configurations, and equipment ratings. In some cases, operational practices may limit the feasibility of the switching schemes suggested by the software. By comparing software recommendations with manual checks and field evidence, the engineer can verify that the software recommendations are not only numerically correct but also practical in operation.
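As a rough sketch of such a hand check, the snippet below estimates the percentage voltage drop on a single feeder from assumed impedance, load, and power-factor values, then compares it with a figure assumed to come from the load-flow program. All numbers, including the tolerance, are hypothetical:

```python
import math

# Hypothetical hand check of feeder voltage drop against a load-flow result.
# All values below are illustrative assumptions, not data from any real study.
v_nominal_ll = 25_000.0        # nominal line-to-line voltage, V
r_ohm, x_ohm = 1.2, 2.4        # total feeder resistance and reactance, ohms
p_load_kw, pf = 3_000.0, 0.95  # estimated load, kW, and power factor

i_line = p_load_kw * 1e3 / (math.sqrt(3) * v_nominal_ll * pf)  # line current, A
phi = math.acos(pf)
# Approximate three-phase drop: sqrt(3) * I * (R*cos(phi) + X*sin(phi))
v_drop = math.sqrt(3) * i_line * (r_ohm * math.cos(phi) + x_ohm * math.sin(phi))
drop_percent = 100.0 * v_drop / v_nominal_ll

software_drop_percent = 1.05   # drop reported by the load-flow program (assumed)
print(f"Hand-calculated drop: {drop_percent:.2f}% vs software: {software_drop_percent:.2f}%")
if abs(drop_percent - software_drop_percent) > 0.5:  # tolerance is a judgment call
    print("Discrepancy: revisit impedances, load estimates, or model setup.")
```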
Mechanical Engineering Example –
In the automobile sector, engineers often use software to model cooling systems for engines under different load conditions. These tools simulate heat transfer, coolant flow, and radiator performance, but the results should not be accepted without checks. To validate them, an engineer can perform a basic heat balance calculation using thermodynamic equations to estimate the heat produced by combustion and the cooling capacity of the radiator. If the manual estimate is close to the simulation output, the model can be considered reliable. Additional confirmation comes from prototype testing, where sensor data verifies coolant temperatures, flow rates, and fan operation. In practice, factors such as limited airflow in a compact engine bay may also restrict the performance of a theoretically optimal design. By comparing software predictions with manual checks and real test data, the engineer ensures the cooling system is both accurate in analysis and practical in operation.
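A simplified version of this heat balance might look like the sketch below, which brackets the radiator duty from an assumed fuel energy input on one side and the coolant-side relation Q = ṁ·cp·ΔT on the other, then compares both with an assumed simulation output. All values are illustrative, not measured data:

```python
# Rough heat-balance check of a cooling-system simulation.
# All inputs are illustrative assumptions for a hypothetical engine, not test data.

fuel_power_kw = 150.0          # fuel energy input rate at the load point (assumed)
coolant_heat_fraction = 0.33   # rule-of-thumb share of fuel energy rejected to coolant (assumed)
q_engine_kw = fuel_power_kw * coolant_heat_fraction   # heat the coolant must remove

# Radiator capacity from the coolant side: Q = m_dot * cp * delta_T
m_dot_kg_s = 1.8               # coolant mass flow rate, kg/s (assumed)
cp_kj_kg_k = 3.9               # specific heat of a 50/50 glycol mix, kJ/(kg*K) (approx.)
delta_t_k = 7.0                # coolant temperature drop across the radiator, K (assumed)
q_radiator_kw = m_dot_kg_s * cp_kj_kg_k * delta_t_k

q_simulation_kw = 51.0         # heat rejection predicted by the simulation (assumed)
print(f"Engine heat to coolant ~{q_engine_kw:.0f} kW, radiator capacity ~{q_radiator_kw:.0f} kW")
print(f"Simulation predicts {q_simulation_kw:.0f} kW")
# If the simulated value falls well outside this simple bracket, revisit inputs and assumptions.
```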
Civil Engineering Example –
In civil or structural engineering, software such as SAP2000 or STAAD.Pro is frequently used to analyze beams, trusses, and frames under applied loads. Independent verification can be done by selecting a key output, such as the bending moment in a mid-span girder, and comparing it with a simplified closed-form solution from classical beam theory. For example, for a simply supported span of length L carrying a uniformly distributed load w, the software’s midspan bending moment can be checked against the classical result wL²/8. If they agree within an acceptable margin, it indicates that the software model is properly configured. Significant discrepancies, however, suggest the need to reassess boundary conditions, loading assumptions, or input parameters.
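The sketch below shows this check for a hypothetical simply supported girder under a uniformly distributed load, comparing the classical wL²/8 result with a moment assumed to come from the analysis program. The load, span, software value, and acceptance margin are all illustrative assumptions:

```python
# Hand check of a midspan bending moment from a frame analysis program.
# Values are illustrative assumptions, not taken from an actual SAP2000 or STAAD.Pro model.

w = 25.0          # uniformly distributed load, kN/m (assumed)
L = 12.0          # simply supported span, m (assumed)

m_hand = w * L**2 / 8.0   # classical closed-form midspan moment, kN*m
m_software = 455.0        # midspan moment reported by the software (assumed)

diff_percent = 100.0 * abs(m_software - m_hand) / m_hand
print(f"Hand calculation: {m_hand:.0f} kN*m, software: {m_software:.0f} kN*m ({diff_percent:.1f}% difference)")
if diff_percent > 5.0:    # acceptance margin is a matter of engineering judgment
    print("Check boundary conditions, load assignments, and member releases in the model.")
```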
Conclusion:
Independent verification is a core responsibility of engineers when using computer programs or software to support their work. Competency 1.5 emphasizes that while software is essential, it cannot replace professional judgment. Engineers must demonstrate understanding of the underlying theory and confirm results through peer review, manual checks, benchmarking, or field data.
When demonstrating this competency, it is not enough to simply name the software or list analysis techniques. Engineers should explain why a particular tool or method was chosen, how it fits the engineering context, and how the results were validated. The focus should be on how professional judgment was exercised, how limitations were recognized, and how the reliability and practicality of the outcomes were ensured.
Disclaimer:
The information in this blog is provided for general guidance only. While care has been taken to ensure accuracy, no guarantee is made regarding technical precision or completeness. The views expressed reflect publicly available information and personal insights and may not capture the latest developments. Readers should verify details and seek professional advice before acting on the content.