LUSAS Software Verification
LUSAS is one of the world's leading
structural analysis systems. It uses finite element analysis
techniques to provide accurate and trusted solutions for all types
of linear and nonlinear stress, dynamic, and thermal/field problems.
The two main components of the system are:
- LUSAS Modeller - a fully
interactive graphical user interface for model building and
viewing of results from an analysis.
- LUSAS Solver - a powerful finite
element analysis engine that carries out the analysis of the
problem defined in LUSAS Modeller.
Over many years, LUSAS has established
documented procedures that sub-divide the software development process into a number of well-defined
tasks. These
tasks include the creation of functional and detailed
specifications, code walk-throughs, technical and development reports
and project
approval. During this process the LUSAS Modeller and Solver development teams
run a comprehensive set of non-interactive tests, which enable
trapping of any bugs introduced during the development of new
features and enhancements, as well as identifying any changes that
affect existing facilities. Interactive testing also takes place,
and by the time a software kit is built for final installation tests
and subsequent release, it will have passed a vast number of Quality
Assurance tests designed to ensure that newly added features not
only work as intended, but do not compromise existing capabilities.
Modeller QA
The Modeller test suite is extremely
mature, with a large percentage of the tests dating back more than
20 years. Each testcase uses a Visual Basic Script file either to
build a model from scratch, using the LUSAS Programmable Interface,
or to open an existing model file. Each testcase produces multiple
pictures, tabulated output, model files and Solver data files - all
for comparison with previous versions - and, crucially, captures the
Modeller text output window to which progress messages, errors and
warnings are written.
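By way of illustration, a testcase run of this kind might be driven by a small harness like the sketch below. The Modeller path, the batch-mode flag and the output-gathering convention are assumptions made for the example, not the actual LUSAS interface.

```python
# Illustrative sketch only: run one scripted testcase and collect its
# output files for later comparison. Paths and flags are hypothetical.
import subprocess
from pathlib import Path

MODELLER = r"C:\LUSAS\Programs\modeller.exe"   # hypothetical path

def run_testcase(script: Path, workdir: Path) -> list[Path]:
    """Run one VB Script testcase and return the files it produced."""
    workdir.mkdir(parents=True, exist_ok=True)
    # Hypothetical batch-mode invocation; the real command line may differ.
    subprocess.run([MODELLER, "/batch", str(script)],
                   cwd=workdir, check=True, timeout=3600)
    # Everything written to the working directory (pictures, tabulated
    # output, Solver data files, the captured text window log) is
    # gathered for comparison against the previous version's output.
    return sorted(p for p in workdir.rglob("*") if p.is_file())
```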
Some testcases cover solely modelling
operations, finishing with the creation of a Solver data file that
can be checked against expected output. Other testcases start by
opening a LUSAS results file and then carry out post-processing
operations only. Many others cover the whole modelling and
results-viewing process, from creating geometry and assigning
attributes to solving and viewing the resulting contours and graphs.
The type of testcase created depends on its purpose and, generally,
on the nature of the development project for which it was required.
All testcases are usually run weekly,
and always for each formal release kit. This takes around 36 hours
and comprises a total of 3,756 testcases, creating 92,539 output
files, each of which is automatically compared against expected
output. A subset of around 3,000 testcases is run every night by
each Modeller developer on that day's work, taking around 15 hours
to complete and resulting in around 60,000 output files. Developers are
extending, modifying and writing new testcases every day, so these
numbers are continuously increasing.
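A minimal sketch of the kind of automated comparison described above, assuming a simple baseline-versus-new directory layout. The layout and the byte-for-byte check are illustrative assumptions; real comparisons of numeric output would typically allow tolerances.

```python
# Sketch: check every generated file against the expected output from
# a previous version, reporting anything missing or different.
import filecmp
from pathlib import Path

def compare_outputs(expected_dir: Path, actual_dir: Path) -> list[str]:
    """Return the relative paths of files that differ from the baseline."""
    failures = []
    for expected in expected_dir.rglob("*"):
        if not expected.is_file():
            continue
        rel = expected.relative_to(expected_dir)
        actual = actual_dir / rel
        if not actual.exists():
            failures.append(f"{rel}: missing from new output")
        elif not filecmp.cmp(expected, actual, shallow=False):
            failures.append(f"{rel}: differs from expected output")
    return failures
```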
Solver QA
The Solver test suite constitutes
vital regression testing which ensures that the integrity of the
software is never compromised. It comprises 6,300 tests covering
every facility in Solver and generates around 30,000 output files
for checking. Among these are around 350 tests that are published in
the LUSAS Verification Manual released to clients, as well as NAFEMS
benchmarks and HECB calibration tests. Each QA test involves running
one or more data files through Solver and comparing results against
validated output. The tests and output for each version of the
software occupy around 6.3GB, of which 6GB is checked during each QA
run.
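As an illustration of such a regression pass, the sketch below runs each data file through a solver executable and compares the results file with validated output. The executable path, command line and file extensions are assumptions made for the example.

```python
# Illustrative sketch of a solver regression pass: run each data file,
# then compare its results file with the validated baseline.
import subprocess
from pathlib import Path

SOLVER = r"C:\LUSAS\Programs\solver.exe"   # hypothetical path

def run_solver_tests(data_dir: Path, baseline_dir: Path) -> list[str]:
    failures = []
    for dat in sorted(data_dir.glob("*.dat")):      # extension assumed
        subprocess.run([SOLVER, str(dat)], check=True)  # flags assumed
        produced = dat.with_suffix(".out")          # extension assumed
        validated = baseline_dir / produced.name
        if produced.read_bytes() != validated.read_bytes():
            failures.append(produced.name)
    return failures
```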
Each QA run is carried out in
32-bit and 64-bit modes, single-threaded and multi-threaded, across
a variety of equation solvers, and takes 25-29 hours to complete.
Tests for both the current release version and the next version
under development are run regularly - daily or on alternate days,
depending on the overall runtime. A QA run for each kit is also
carried out before
final release. Like the Modeller test suite, it is under constant
expansion as new developments are added.
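The combinations described above form a simple test matrix; one way such a matrix might be enumerated is sketched below. The configuration names are illustrative and are not the actual LUSAS options.

```python
# Sketch: enumerate every combination of build, threading mode and
# equation solver for a full QA pass. Names are illustrative only.
from itertools import product

ARCHITECTURES = ["32-bit", "64-bit"]
THREADING = ["single-threaded", "multi-threaded"]
EQN_SOLVERS = ["frontal", "direct-sparse", "iterative"]  # assumed names

for arch, threads, solver in product(ARCHITECTURES, THREADING, EQN_SOLVERS):
    print(f"QA pass: {arch}, {threads}, equation solver = {solver}")
```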
Interactive testing
Interactive tests are carried out
with Micro Focus Unified Functional Testing software. This software
drives the user interface and replicates a user carrying out tasks
interactively, as opposed to using a LUSAS script or LUSAS
Programmable Interface (LPI) commands.
Once a beta version of the software
becomes available for testing, all step-by-step LUSAS worked examples
available to clients are tested, along with discontinued examples
for legacy reasons - amounting to around 60 tests in total. As new
worked examples are written they are added to the test suite. Due to
the extensive options available in LUSAS, not all variations of
dialogs (such as the various country options for bridge loading) are
covered by these examples, so additional interactive
tests are carried out on a number of dialogs including those
relating to bridge loading, influences, materials, and rail and
vehicle load optimisation.
Output from these tests, normally in
the form of plain text and proprietary LUSAS picture files, is
automatically checked for consistency with previously verified
results obtained for each example. Any difference in behaviour or
output - from an unexpected error message to an unexpected contour
value - is reported for investigation and fixing by the development
team. If any error is found, a new kit is built and the testing
process repeated. Even with automation and the use of multiple
machines running in parallel, it takes around one week to test a
release kit interactively, assuming all tests pass.
Informal testing
Informal testing is also used as part
of the software release process, and takes many forms. Software
developments are reviewed repeatedly with a team of reviewers ahead
of any QA runs being carried out, often resulting in coding errors
being identified and removed. A "road testing" stage,
which runs ahead of and in parallel with interactive testing, is
also performed. This uses a different set of reviewers who set
themselves tasks in an attempt to find any outstanding issues.
Documenting new features and examples in the online help and
related user manuals, using development or beta versions, can also
highlight discrepancies. These additional testing and documentation
operations all help to find and minimise errors in released
software.
Installation testing
Installation testing aims to cover
all possible installer actions and user choices in as many machine
scenarios as possible. All versions of Windows supported by LUSAS,
and any upcoming pre-release versions of Windows, are tested in
various machine states, ranging from a new machine with a fresh copy
of Windows to older machines with previous versions of LUSAS and
other software installed. Installs, uninstalls, upgrades
and other maintenance actions are all tested, as are the effects of
these actions on the target machine and the software already
installed on it. Variations on how the installer may be run are also
subject to testing. Fully interactive installs through the installer
wizard, silent installs from the command line, web installs, and
installs from DVD are all tested.
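For a standard Windows Installer package, a silent command-line install of the kind mentioned above might be driven as sketched below. The package file name is hypothetical; /i, /qn and /l*v are standard msiexec options, not LUSAS-specific ones.

```python
# Sketch: drive a silent install and keep a verbose log for checking.
# The package name is hypothetical; the msiexec flags are standard
# Windows Installer options (/i install, /qn no UI, /l*v verbose log).
import subprocess

subprocess.run(
    ["msiexec", "/i", "LUSAS.msi",   # hypothetical package name
     "/qn",                          # silent: no user interface
     "/l*v", "install.log"],         # verbose log for later inspection
    check=True,
)
```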
Installation testing is largely
carried out using virtual machines to quickly and reproducibly
invoke snapshots of machines in various states. Machine states
before and after each install action are compared to determine
whether a test has completed successfully. For installs and upgrade
actions, LUSAS
applications are run to ensure that the installer deployed a
complete and fully working copy of LUSAS. The suite of installer
tests is run for every version of LUSAS released. If a failure
occurs the cause is investigated and may result in a fix and rebuild
of the problem component. In this case, QA and interactive tests,
as well as installer testing, may be run again. All installation
testing must pass before a version of LUSAS is approved for release.
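The before-and-after comparison might, in outline, look like the sketch below: hash every file under a directory tree before and after the installer action, then diff the two snapshots. This is a simplified illustration; a real state comparison would also need to cover the registry, services and shortcuts.

```python
# Sketch: snapshot a directory tree before and after an install action
# and report what was added, removed or changed. Paths are illustrative.
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """Map each file's relative path to a hash of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def diff(before: dict[str, str], after: dict[str, str]) -> dict[str, list[str]]:
    return {
        "added":   sorted(after.keys() - before.keys()),
        "removed": sorted(before.keys() - after.keys()),
        "changed": sorted(k for k in before.keys() & after.keys()
                          if before[k] != after[k]),
    }
```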
Reporting of errors
All software errors found during
development, from the sources mentioned, as well as those identified
by users, are logged in a single database so that their status can
be tracked. Each error is categorised as either Critical, Major,
Minor or Cosmetic, given a priority and release target, and actioned
to a software developer for fixing. When resolved, the required
changes are committed to a Subversion file management system and, as
a result, the software version in which they are fixed and
subsequently released is recorded for posterity. Errors fixed in
a new release version are reported to all clients via a software
release note supplied with each installation. Users are individually
notified by email of any fixes for errors they personally reported.
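For illustration, an error record of the kind described might be modelled as below. The severity categories come from the text; all field names and types are assumptions, not the actual database schema.

```python
# Sketch of a tracked error record. Severity categories follow the
# text; the remaining fields are assumed for illustration.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "Critical"
    MAJOR = "Major"
    MINOR = "Minor"
    COSMETIC = "Cosmetic"

@dataclass
class ErrorRecord:
    identifier: int
    description: str
    severity: Severity
    priority: int
    release_target: str              # version the fix is aimed at
    assigned_to: str                 # developer actioned to fix it
    fixed_in: str | None = None      # recorded when the fix is committed
    reported_by: str | None = None   # user to notify by email when fixed
```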
Last updated: 31 January 2022.