The OpenVMS Frequently Asked Questions (FAQ)



5.42 Please help me with the OpenVMS BACKUP utility?

5.42.1 Why isn't BACKUP/SINCE=BACKUP working?

If you are seeing more files backed up than previously, you are seeing the result of a change that was made to ensure BACKUP can perform an incremental restoration of the files. In particular, if a directory file's modification date changes, all files underneath it are included in the BACKUP, in order to permit incremental restoration should a directory file be renamed.

5.42.1.1 Why has OpenVMS gone through the agony of this change?

When a directory is renamed, its modification date changes. When a series of incremental BACKUPs is restored, the restoration needs to restore the renamed directory and its contents, and it must not resurrect the older directory name. Thus an incremental BACKUP operation needs to pick up all of the changes beneath the renamed directory.

Consider performing an incremental restoration, to test the procedures. This sort of testing was how OpenVMS Engineering found the problem latent in the old BACKUP selection scheme---the old incremental BACKUP scheme would have missed restoring any files under a renamed directory. Hence the change to the selection mechanism mentioned in Section 5.42.1.

5.42.1.2 Can you get the old BACKUP behaviour back?

Yes, please see the /NOINCREMENTAL qualifier available on recent OpenVMS versions (and ECO kits). Use of this qualifier informs BACKUP that you are aware of the limitations of the old BACKUP behaviour around incremental disk restorations.
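
As a sketch (the disk, directory, and saveset names here are placeholders), the qualifier is simply added to the usual /SINCE=BACKUP command:


$ BACKUP/RECORD/SINCE=BACKUP/NOINCREMENTAL -
      DKA100:[000000...]*.*;* MKA500:INCR.BCK/SAVE_SET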

5.42.2 What can I do to improve BACKUP performance?

Use the commands and procedures documented in the manual for performing incremental BACKUPs, and don't try to use incremental commands in a non-incremental context.
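
A minimal sketch of that approach, assuming placeholder device names (DKA100: as the source disk, MKA500: as the tape drive), is a periodic full image backup with /RECORD followed by incrementals with /RECORD/SINCE=BACKUP:


$ ! Full (image) backup, recording the backup date on each file:
$ BACKUP/IMAGE/RECORD DKA100: MKA500:FULL.BCK/SAVE_SET
$ ! Later, back up only what has changed since the recorded date:
$ BACKUP/RECORD/SINCE=BACKUP DKA100:[000000...]*.*;* MKA500:INCR.BCK/SAVE_SET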

Also consider understanding and then using /NOALIAS, which will likely be a bigger win than anything to do with the incremental BACKUPs, particularly on system disks and on any other disks containing directory aliases.
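
As a sketch (device and saveset names are placeholders), an image backup of a system disk with alias processing disabled:


$ BACKUP/IMAGE/NOALIAS SYS$SYSDEVICE: MKA500:SYSDISK.BCK/SAVE_SET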

See the OpenVMS documentation for additional details.

Ignoring hardware performance and process quotas, the performance of BACKUP during a disk saveset creation is typically limited by three factors:

  1. Default extend size
    The default behavior can give poor performance, as the extend operation can involve extensive additional processing and I/O operations. Consider changing the default extend value on the volume, or changing the extend size for the process:


    $ set rms/extend=65000 
    

  2. Output I/O size
    The default I/O size for writing an RMS sequential file is 32 blocks, an increase from the 16 blocks used on earlier versions. Setting this to the maximum of 127 can reduce the number of I/Os by almost a factor of four:


    $ set rms/block=127 
    

    Note that performance might be better on some controllers if the block count is a multiple of four, e.g. 124.

  3. Synchronous writes to the saveset
    Starting with OpenVMS V7.3, you can persuade RMS to turn on write-behind for sequential files opened unshared. (Please see the V7.3 release notes or more recent documentation for details.) Enabling write-behind involves setting the dynamic system parameter RMS_SEQFILE_WBH to 1. Because the parameter is dynamic, it can be enabled and disabled without a reboot, and changes to its setting directly affect the running system. In order to get the full benefit from write-behind operations, you also need to increase the RMS local buffer count from the default of 2 to a larger number. Raising the value to 10 is probably a reasonable first estimate for this value.


    $ run sys$system:sysman 
    PARAMETERS USE ACTIVE 
    PARAMETERS SET RMS_SEQFILE_WBH 1 
    PARAMETERS WRITE ACTIVE 
    EXIT 
    $ SET RMS/BUFFER=10/EXTEND=65000/BLOCK=127 
    $ BACKUP source-specification ddcu:[dir]saveset.bck/SAVE 
    

5.42.3 Why is BACKUP not working as expected?

First, please take the time to review the BACKUP documentation, and particularly the BACKUP command examples. Then please download and install the most current BACKUP ECO kit. Finally, please set the process quotas per the System Management documentation. These steps tend to resolve most problems seen.
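
To check what quotas the process performing the BACKUP currently has available (for comparison with the values recommended in the System Management documentation), you can use:


$ SHOW PROCESS/QUOTAS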

BACKUP has a very complex interface, and there are numerous command examples and extensive user documentation available. For a simpler user interface for BACKUP, please see the documentation for the BACKUP$MANAGER tool.

For notes on recent BACKUP changes, oddities, and bugs, and for details on working with BACKUP and with the BACKUP callable API, please see the current BACKUP documentation and the OpenVMS release notes.

5.42.4 How do I fix a corrupt BACKUP saveset?

BACKUP savesets can be corrupted by FTP file transfers and by tools such as zip (particularly when the zip tool has not been asked to save and restore OpenVMS file attributes, or when it does not support OpenVMS file attributes; use the zip "-V" option), as well as by other means of corruption.

If you have problems (e.g., NOTSAVESET errors) with BACKUP savesets after unzipping them or after an FTP file transfer, you can try restoring the appropriate saveset attributes using the following tool:


$ BACKUP/LIST saveset.bck/SAVE 
Listing of save set(s) 
 
%BACKUP-F-NOTSAVESET, saveset.bck/SAVE is not a BACKUP save set 
$ @SRH:[UTIL]RESET_BACKUP_SAVESET_FILE_ATTRIBUTES.COM saveset.bck 
$ BACKUP/LIST saveset.bck/SAVE 
Listing of save set(s) 
 
Save set:          saveset.bck 
Written by:        username 
... 

This tool is available on the OpenVMS Freeware (in the [000TOOLS] directory). The Freeware is available at various sites---see the Freeware location listings elsewhere in the FAQ---and other similar tools are also available from various sources.

In various cases, a SET FILE/ATTRIBUTES command can also be used. Because the parameters of this command must be varied to match the attributes of the particular BACKUP saveset, this approach is not generally recommended.
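
For reference, a commonly cited form of the command (assuming the saveset was written with BACKUP's default disk saveset block size of 32256 bytes; the saveset name is a placeholder) is:


$ SET FILE/ATTRIBUTES=(RFM:FIX,LRL:32256,MRS:32256,RAT:NONE) saveset.bck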

Also see the "SITE VMS", /FDL, and various other file-attributes options available in various FTP tools. (Not all available FTP tools support any or all of these options.)

Browser downloads (via FTP) and incorrect FTP transfer modes (binary versus ASCII) are notorious for causing RMS file corruptions, and particularly BACKUP saveset corruptions. You can sometimes encourage the browser to select the correct FTP transfer type code (via RFC 1738):
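
For example, RFC 1738 permits an explicit transfer type code at the end of an FTP URL; appending ";type=i" requests image (binary) mode. The host and path here are placeholders:


ftp://ftp.example.com/directory/saveset.bck;type=i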

You can also often configure the particular web browser to choose the appropriate transfer mode by default, based on the file extension, using a customization menu available in most web browsers. You can specify that the specific file extensions involved use the FTP binary transfer mode, which will reduce the number of corruptions seen.

5.42.5 How do I write a BACKUP saveset to a remote tape?

How to do this correctly was described at DECUS long ago. On the OpenVMS host with the tape drive, create the following SAVE-SET.FDL file:


RECORD 
        FORMAT                  fixed 
        SIZE                    8192 

Then create BACKUP_SERVER.COM:


$ ! 
$ ! BACKUP_SERVER.COM - provide remote tape service for BACKUP. 
$ ! 
$ set noon 
$ set rms/network=16 
$ allocate mka500 tapedev 
$ mount/nounload/over:id/block=8192/assist tapedev 
$ convert/fdl=SAVE-SET sys$net tapedev:save-set. 
$ dismount/unload tapedev 
$ stop/id=0 

On the node where you want to do the backup, use the DCL command:


$ backup - 
    srcfilespec - 
    node"user pwd"::"task=backup_server"/block=8192/save 

One area which does not function here is the volume switch: multi-reel or multi-cartridge savesets. Since the tape is being written through DECnet, RMS, and the magtape ACP, BACKUP won't see the media switch and will split an XOR group across the reel boundary. BACKUP might well be willing to read such a multi-reel or multi-cartridge saveset (directly, not over the network), as the XOR blocks are effectively ignored until and unless they are needed for error recovery operations. BACKUP likely will not be able to perform an XOR-based recovery across reel or cartridge boundaries.

Unfortunately, BACKUP can't read tapes over the network because the RMS file attributes on a network task access look wrong; the attributes reported include variable-length records.

5.42.6 How to perform a DoD security disk erasure?

Sometimes referred to as disk, tape, or media declassification, as formatting, as pattern erasure, or occasionally by the generic reference of data remanence. Various references to the US Department of Defense (DoD) or NCSC "Rainbow Books" documentation are also seen in this context.

While this erasure task might initially appear quite easy, basic characteristics of the storage media and of the device error recovery and bad block handling can make this effort far more difficult than it might initially appear.

Obviously, data security and sensitivity, the costs of exposure, applicable legal or administrative requirements (DoD, HIPAA, or otherwise), and the intrinsic value of the data involved are all central factors in this discussion and in the decision of the appropriate resolution, as is the value of the storage hardware involved.

With data of greater value or with data exposure (sometimes far) more costly than the residual value of the disk storage involved, the physical destruction of the platters may well be the most expedient, economical, and appropriate approach. The unintended exposure of a bad block containing customer healthcare data or credit card numbers can be quite costly, of course, both in terms of the direct loss and the longer-term and indirect costs of such exposures.

Other potential options include the Freeware RZDISK package, the OpenVMS INITIALIZE/ERASE command (potentially in conjunction with the $erapat system service), and OpenVMS Ask The Wizard (ATW) topics including (841), (3926), (4286), (4598), and (7320). For additional information on sys$erapat, see the OpenVMS Programming Concepts manual and the OpenVMS VAX examples module SYS$EXAMPLES:DOD_ERAPAT.MAR. Some disk controllers and even a few disks contain support for data erasure; some DSSI disk ISEs, for instance.

For the prevention of casual disk data exposures, a generic INITIALIZE/ERASE operation is probably sufficient. This is not completely reliable, particularly if the data is valuable or if legal, administrative, or contractual restrictions are stringent---there may well be revectored blocks that are not overwritten or not completely overwritten by this erasure, as discussed above, and these blocks can obviously contain at least part of most any data that was stored on the disk---but this basic disk overwrite operation is likely sufficient to prevent the typical information disclosures.
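
As a sketch of that basic overwrite (the device name and volume label are placeholders):


$ INITIALIZE/ERASE DKA300: SCRATCH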

You will want to consult with your site security officer, your corporate security or legal office, with HP Services or your preferred service organization, or with a firm that specializes in erasure or data declassification tasks. HP Services traditionally offers a secure disk declassification service.

5.42.7 How to enable telnet virtual terminals?

To enable virtual terminal support for telnet and rlogin devices, add the following logical name definitions into SYLOGICALS.COM:


$ DEFINE/SYSTEM/EXECUTIVE TCPIP$RLOGIN_VTA TRUE 
$ DEFINE/SYSTEM/EXECUTIVE TCPIP$TELNET_VTA TRUE 

See SYS$STARTUP:SYLOGICALS.TEMPLATE for details on the typical contents of SYLOGICALS.COM.

In SYSTARTUP_VMS.COM, ensure that a command similar to the following is invoked:


$ SYSMAN IO CONNECT VTA0/NOADAPTER/DRIVER=SYS$LOADABLE_IMAGES:SYS$TTDRIVER.EXE 

In MODPARAMS.DAT, add the following line or (if TTY_DEFCHAR2 is already present) mask the specified hexadecimal value into the existing TTY_DEFCHAR2 value, and perform a subsequent AUTOGEN with an eventual reboot:


TTY_DEFCHAR2 = %X20000 

This value is TT2$M_DISCONNECT.
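
If MODPARAMS.DAT already contains a site-specific TTY_DEFCHAR2, one way to compute the merged hexadecimal value is a quick DCL calculation; the %X0A2 value below is merely a placeholder for the existing setting:


$ existing = %X0A2                   ! placeholder: current TTY_DEFCHAR2 value
$ merged = existing .OR. %X20000     ! OR in TT2$M_DISCONNECT
$ WRITE SYS$OUTPUT F$FAO("TTY_DEFCHAR2 = %X!XL", merged)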

On older TCP/IP Services---versions prior to V5.0---you will have to perform the following UCX command:


$ UCX 
UCX> SET CONF COMM/REMOTE=VIRTUAL 

5.42.7.1 Volume Shadowing MiniCopy vs MiniMerge?

MiniMerge support has been available for many years with OpenVMS host-based volume shadowing, so long as you had MSCP controllers (e.g., HSC, HSJ, or HSD) which supported the Volume Shadowing Assist known as "Write History Logging".

If you are interested in mini-merge and similar technologies, please see the Fibre Channel webpage and the information available there.

Mini-Merge support was originally intended to be controller-based; it was expected to arrive with HSG80 series storage controllers and to require ACS 8.7 and OpenVMS Alpha V7.3-1.

Host-based Mini-Merge (HBMM) is now available for specific OpenVMS releases via a shadowing ECO kit, and is also present in OpenVMS V8.2 and later. HBMM applies to the HSG80 series and---like host-based volume shadowing---to most other (all other?) supported storage devices.

The following sections describe both Mini-Copy and Mini-Merge, and can provide a basis for discussions.

5.42.7.1.1 Mini-Copy?

A Shadowing Full Copy occurs when you add a disk to an existing shadowset using a MOUNT command; the entire contents of the disk are effectively copied to the new member. (The algorithm works through the disk in 127-block increments: it reads one member, compares with the target disk, and if the data differs, writes the data to the target disk and loops back to the read step, until the data is equal for that 127-block section.) This is one of the reasons why the traditional recommendation for adding new volumes to a shadowset was to use a BACKUP/PHYSICAL copy of an existing shadowset volume: the reads then usually matched, and thus shadowing usually avoided the need for the writes.

If you warn OpenVMS ahead of time (at dismount time) that you're planning to remove a disk from a shadowset but re-add it later, OpenVMS will keep a bitmap tracking what areas of the disk have been modified while the disk was out of the shadowset, and when you re-add it later with a MOUNT command OpenVMS only has to update the areas of the returned disk that the bit-map indicates are now out-of-date. OpenVMS does this with a read source / write target algorithm, which is much faster than the shenanigans the Full Copy does, so even if all of the disk has changed, a Mini-Copy is faster than a Full Copy.
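
A sketch of the sequence, assuming a placeholder shadowset DSA42:, placeholder member device names, and a placeholder volume label:


$ ! Remove a member, asking Shadowing to maintain a write bitmap for it:
$ DISMOUNT/POLICY=MINICOPY $1$DGA101:
$ ! ...later, return the member; only the changed areas are updated:
$ MOUNT/SYSTEM DSA42: /SHADOW=($1$DGA100:,$1$DGA101:) SHADOWVOL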

5.42.7.1.2 Mini-Merge?

A Shadowing Merge is initiated when an OpenVMS node in the cluster (which had a shadowset mounted) crashes or otherwise leaves unexpectedly, without dismounting the shadowset first. In this case, OpenVMS must ensure that the data is identical, since Shadowing guarantees that the data on the disks in a shadowset will be identical. In a regular Merge operation, Shadowing uses an algorithm similar to the Full Copy algorithm (except that it can choose either of the members' contents as the source data, since both are considered equally valid), and scans the entire disk. Also, to make things worse, for any read operations in the area ahead of what has been merged, Shadowing will first merge the area containing the read data, then allow the read to occur.

A full Merge can be very time-consuming and very I/O intensive. With Mini-Merge, if a node crashes, the surviving nodes can query the write history logs (or the host-based bitmaps) to determine exactly what areas of the disk the departed node was writing to just before the crash; Shadowing then needs to merge only those few areas. This tends to take seconds, as opposed to the many minutes or even hours potentially required for a regular full Merge.

5.43 Please explain DELETE/ERASE and File Locks?

DELETE/ERASE holds the file lock and also holds a lock on the parent directory for the duration of the erasure. This locking can obviously cause an access conflict on either the file or the directory---it might well pay to rename files into a temporary directory location before issuing the DELETE/ERASE, particularly for large files and/or for systems with multiple overwrite erase patterns in use; that is, for any systems where the DELETE/ERASE erasure operation will take a while.
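
As a sketch (the file and directory names are placeholders), the rename-then-erase approach looks like this:


$ CREATE/DIRECTORY [.SCRATCH]
$ RENAME BIGFILE.DAT;* [.SCRATCH]
$ DELETE/ERASE [.SCRATCH]BIGFILE.DAT;*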

5.44 Managing File Versions?

Some applications will automatically roll file version numbers over, and some will require manual intervention. Some will continue to operate without the ability to update the version, and some will be unable to continue. Some sites will specifically (attempt to) create a file with a version of ;32767 to prevent the creation of additional files, too.

To monitor and resolve file versions, you can use commands including:


$ SET FILE/VERSION_LIMIT=n filename 
$ SET DIRECTORY/VERSION_LIMIT=n [directory] 

You can also monitor file version numbers and report problems with ever-increasing versions to the organization(s) supporting the application(s) generating those files, asking for details on potential problems and for any recommendations on resetting the version numbers for the particular product or package, if required.

The following pair of DCL commands---though obviously subject to timing windows---can be used to rename all the versions of a file back down to a contiguous sequence of versions starting at 1:


$ RENAME file.typ;*   RENAME.TMP; 
$ RENAME RENAME.TMP;* file.typ; 

The key to the success of this RENAME sequence is the specification of (only) the trailing semicolon on the second parameter of each of the RENAME commands.

You may also reduce the number of files with DELETE commands, by using multiple directories, or with PURGE commands such as the following examples:


$ PURGE/BEFORE="-2-" 
$ PURGE/BEFORE="TODAY-2-" 
$ PURGE/KEEP=10 

You can use DFU (Freeware) to quickly and efficiently scan for all files with large(r) version numbers:


DFU SEARCH/VERSION=MINIMUM=nnnn 

If you are creating or supporting an application, selecting temporary or log file filenames from among a set of filenames---selecting filenames based on the time, the process ID, the day of week, the week number or month, the f$unique lexical (V7.3-2 and later), etc.---is often useful, as this approach more easily permits on-line adjustments to the highest file versions and on-line version compression using the techniques shown above. With differing filenames, you are less likely to encounter errors resulting from files that are currently locked. You can also detect the impending version-number limit within the application, and can clean up older versions and roll the next file version creation back to ;1 or similar.
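
A sketch of one such approach using the f$unique lexical (V7.3-2 and later); the MYAPP_ prefix and the log file contents are placeholders:


$ logname = "MYAPP_" + F$UNIQUE() + ".LOG"
$ OPEN/WRITE logchan 'logname'
$ WRITE logchan "Run started at ''F$TIME()'"
$ CLOSE logchan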

Also see Section 9.4.

