...\" @OPENGROUP_COPYRIGHT@ ...\" COPYRIGHT NOTICE ...\" Copyright (c) 1990, 1991, 1992, 1993 Open Software Foundation, Inc. ...\" Copyright (c) 1996, 1997, 1998, 1999, 2000 The Open Group ...\" ALL RIGHTS RESERVED (MOTIF). See the file named COPYRIGHT.MOTIF for ...\" the full copyright text. ...\" ...\" This software is subject to an open license. It may only be ...\" used on, with or for operating systems which are themselves open ...\" source systems. You must contact The Open Group for a license ...\" allowing distribution and sublicensing of this software on, with, ...\" or for operating systems which are not Open Source programs. ...\" ...\" See http://www.opengroup.org/openmotif/license for full ...\" details of the license agreement. Any use, reproduction, or ...\" distribution of the program constitutes recipient's acceptance of ...\" this agreement. ...\" ...\" EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS ...\" PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY ...\" KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY ...\" WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY ...\" OR FITNESS FOR A PARTICULAR PURPOSE ...\" ...\" EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT ...\" NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, ...\" INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL ...\" DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED ...\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT ...\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ...\" ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE ...\" EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE ...\" POSSIBILITY OF SUCH DAMAGES. ...\" ...\" ...\" HISTORY # $XConsortium: ch03.sm /main/3 1995/07/13 20:08:54 drk $ ...\" (c) Copyright 1992 OPEN SOFTWARE FOUNDATION, INC. .H 1 "Running Automated Tests" .P This section describes the runtime environment for automated testing. .H 2 "Environment" .P The automation code does make some assumptions about key bindings, window manager modes, and the like. Shipped with the test suite is an entire home directory for a test user, which we refer to as the \*Lqauser\*O environment. Test environments should use the \*L.Xdefaults\*O, \*L.mwmrc\*O and \*L.motifbind\*O files from this environment in their testing. The \*L.motifbind\*O file may need modification for hardware platforms other than the reference machines used by OSF. Consult the \*LVirtual Bindings\*O man page for more information. .H 2 "RUN Scripts" .P Each automated directory contains a file \*LRUN.custom\*O, which is used to store specifics about what flags to pass to each test which should be executed. An executable \*LRUN\*O script is created by issuing the command: .iS make RUN .iE .P The RUN script is "made" by substituting the unique \*LRUN.custom\*O section into the appropriate place in the standard template \*L$TOP/tests/General/RUN_template\*O. In addition to the flags specified in the customization file, additional flags may be passed to all tests by passing those flags to the \*LRUN\*O script. For example, to run all the tests in Trace mode (\*L-T\*O), the following command would be used: .iS RUN -T .iE .P Examine a built \*LRUN\*O script. Note that tests can be defined with special flags and run more than once, with different flag combinations. 
The flags below are most often used to allow for multiple executions
of the same test for different test purposes:
.in +3
.VL .5i
.LI "\*L-I\*O"
Denotes a specific instruction file.
.LI "\*L-S\*O"
Denotes a specific script file.
.LI "\*L-V\*O"
Denotes a specific visuals file.
.LI "\*L-O\*O"
Denotes a specific output (stdout) file.
.LI "\*L-u\*O"
User data, often used by tests to support different modes.
.LE
.in -3
.P
By default, a test named \*LTraversal1\*O will read in an instruction
file \*LTraversal1.Dat\*O and a script file \*LTraversal1.Scr\*O,
redirect standard output and standard error to a file
\*LTraversal1.prt\*O, and store images in or read images from a file
\*LTraversal1.vis\*O.
If a test supports two modes, \*Ldestroy\*O, where widgets are
destroyed when activated, and \*Lunmap\*O, where widgets are unmapped
when activated, we might wish to run the test twice, once in each
mode.
During the two runs, one can use the same instructions and scripts but
expect different visuals, behavior and output.
In the \*LRUN.custom\*O file, one might specify two runs this way:
.iS
set Traversal1a = "Traversal1 -I Traversal1.Dat \e
-S Traversal1.Scr -V Traversal1a.vis \e
-O Traversal1a.prt"
set Traversal1b = "Traversal1 -I Traversal1.Dat \e
-S Traversal1.Scr -V Traversal1b.vis \e
-O Traversal1b.prt"
.iE
.P
The two variables \*L$Traversal1a\*O and \*L$Traversal1b\*O would
appear in the \*Lforeach\*O list that loops through each test case.
Of course, if the two modes of the test also required different
instructions and different scripts, one could specify that, as well.
.P
The last section of the \*LRUN\*O script loops through the new output
files, comparing them (via \*Ldiff\*O) to the saved output files from
a previous run, and finally prints out a summary of which files showed
differences.
\*EThese differences should be examined very carefully.\*O
Often, a serious error is revealed only by a minor difference in the
output file.
For example, if PushButtons suddenly stopped generating callbacks on
activation, the only clue might be an output-file diff showing that a
"Callback received" message was missing!
Visual comparison failures are also reported in the output files, and
so appear in these differences.
Please do not overlook this important step in analyzing test results.
.H 2 "Execution Modes"
.P
There are a large number of modes and flags available; only the major
features are discussed in this section.
The complete list of flags can be viewed by running any properly
compiled test with the \*L-help\*O option.
Also, the Reference section of this book repeats the same information
available through \*L-help\*O.
.H 3 "Record Mode"
.P
As stated earlier, the purpose of a regression test is to compare the
appearance and behavior of one stable version of software against a
newer, less stable version.
Thus, one must first \*Erecord\*O an automated test.
.P
The term \*Erecord\*O as used in this context should not be confused
with the actions necessary to record a test under most
record-and-playback test software available today.
Because of the scripting language, it is not necessary for the user
to physically carry out the actions he or she wishes to generate.
Instead, those actions are described programmatically via the script.
Our belief is that conventional record-the-user-actions mechanisms
are more cumbersome and error-prone, since any mistake made by the
user can often be repaired only by rerecording the entire session.
Scripts, like software programs, can be built up section by section
until a long working session can be simulated.
.P
A test is recorded by running it with the flag \*L-r\*O.
If the environment variable \*L$AUTOVPATH\*O is set, the output
(\*L.prt\*O) and visual (\*L.vis\*O) files will be written out in the
directory specified by \*L$AUTOVPATH\*O.
If it is not, the files will be written out in the current directory,
overwriting any files of the same names that might be present.
.P
The QATS shipped by OSF contains visuals and output files recorded
against OSF/Motif Version 1.2 during the final regression tests
conducted at OSF.
Those using this package to retest implementations of OSF/Motif might
choose to use OSF's visuals rather than record their own.
.P
Tests being recorded look quite odd while they are running.
This is because the automation code must take control of the colors
used in order to notice certain variations in widgets between record
and execution.
Do not be concerned about the strange appearance of the tests being
recorded.
.H 3 "Batch Mode"
.P
Batch mode can also be called comparison, playback or execution mode.
In this mode, which is the default, visuals located in the
\*L$AUTOVPATH\*O directory are compared with the visuals of the
currently executing test.
Failures are listed in the output file.
There is a flag, \*L-b\*O, for batch mode, but it is optional; a test
executed with no flags specified runs in batch mode.
.P
Be certain that \*L$AUTOVPATH\*O is specified, and that the visual
files in that directory are readable.
Otherwise, the test will simply print out a warning that visual
comparisons will be ignored, and continue.
This could waste quite a bit of time if it is not discovered until a
run is complete!
.H 3 "Nocompare Mode"
.P
The flag \*L-c\*O lets tests be run without any visual comparisons.
This mode is useful while one is debugging a script, since eliminating
visual comparisons speeds execution.
It is also useful when, for whatever reason, one might wish to verify
the test output without considering visual differences.
Its speed advantages can also be useful if one is linked with utility
libraries which seriously impede performance; possible examples are
memory analysis tools, performance profilers, and test coverage
analyzers.
.H 3 "Interactive Mode"
.P
Interactive mode is the most useful mode for debugging.
In this mode, a DialogBox is popped up whenever a visual difference is
encountered.
The dialog box shows the recorded image and the current image
side by side, with differences highlighted in red.
The differences may also be overlaid on either the recorded or the
current image, in order to determine exactly which pixels have changed
between the two runs.
When the visual difference results because the recorded and current
images differ in size, only the recorded image is shown.
Other debugging clients, such as \*Lxmag\*O, can be used to help
determine the exact cause of such size differences.
.P
The interactive dialog also sports two buttons: an \*LAccept\*O and a
\*LFailure\*O button.
If the tester is able to immediately determine that some variation is
acceptable (for example, new functionality that changes the
appearance), he or she can press the \*LAccept\*O button, and a
message noting the acceptance will be written out to the output file.
If the difference is a definite or possible failure, the tester would
use the \*LFailure\*O button.
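.P
To make the modes concrete, a few sample invocations follow, using
the \*LTraversal1\*O test from the earlier example.
Only flags already described above are shown; interactive mode has
its own flag, which can be found in the \*L-help\*O output.
.iS
# Record mode: write Traversal1.vis and Traversal1.prt into
# $AUTOVPATH if it is set, or into the current directory if not.
Traversal1 -r

# Batch (comparison) mode, the default; the -b flag is optional.
Traversal1 -b

# Nocompare mode: run without any visual comparisons.
Traversal1 -c

# List every available flag and mode.
Traversal1 -help
.iE
.P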
OSF testers have found that it is often most useful to run the entire directory in batch mode, then examine any test with visual differences by running it in interactive mode.
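.P
A csh sketch of that workflow follows; the \*L$AUTOVPATH\*O value is
a placeholder, and the interactive-mode flag is again left to the
\*L-help\*O output.
.iS
# Point $AUTOVPATH at the directory of recorded visuals (for
# example, the visuals shipped with the QATS).
setenv AUTOVPATH /path/to/saved/visuals

# Build the RUN script and run the whole directory in batch mode.
make RUN
RUN

# If the closing diff summary reports differences for, say,
# Traversal1, rerun that test by itself in interactive mode (see
# -help for the flag) and examine each highlighted difference.
.iE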