The OpenVMS Frequently Asked Questions (FAQ)



15.6.2.1.1 Why no shadowing for a Quorum Disk?

Stated simply, Host-Based Volume Shadowing uses the Distributed Lock Manager (DLM) to coordinate changes to the membership of a shadowset (e.g. removing a member). The DLM in turn depends on the Connection Manager, which enforces the quorum scheme, decides which node(s) (and quorum disk) are participating in the cluster, and tells the DLM when it needs to perform operations such as a lock database rebuild. So you cannot make the Connection Manager depend on Shadowing to pick the proper shadowset member(s) to use as the Quorum Disk, because Shadowing itself uses the DLM and thus indirectly depends on the Connection Manager to keep the cluster membership straight---it would be a circular dependency.

So in practice, folks simply depend on controller-based mirroring (or controller-based RAID) to protect the Quorum Disk against disk failures (and on dual-redundant controllers to protect against most cases of controller and interconnect failures). Since this disk unit appears as a single disk at the OpenVMS level, there's no chance of ambiguity.
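
For reference, the quorum disk itself is designated with system parameters, not with Shadowing. A minimal MODPARAMS.DAT sketch, assuming a hypothetical controller-mirrored unit named $1$DGA10 and illustrative vote counts:

DISK_QUORUM = "$1$DGA10"    ! quorum disk device name (hypothetical unit) 
QDSKVOTES = 1               ! votes contributed by the quorum disk 
EXPECTED_VOTES = 3          ! total votes with all members and the quorum disk present 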

15.6.2.2 Explain disk (or tape) allocation class settings?

The allocation class mechanism provides the system manager with a way to configure and resolve served and direct paths to storage devices within a cluster. Any served device that provides multiple paths should be configured using a non-zero allocation class, either at the MSCP (or TMSCP) storage controllers, at the port (for port allocation classes), or at the OpenVMS MSCP (or TMSCP) server. All controllers or servers providing a path to the same device should have the same allocation class (at the port, controller, or server level).

Each disk (or tape) unit number used within a non-zero disk (or tape) allocation class must be unique, regardless of the particular device prefix. For the purposes of multi-path device path determination, any disk (or tape) device with the same unit number and the same disk (or tape) allocation class configuration is assumed to be the same device.
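
As a hedged illustration of the naming effect, assume every node serving a given set of disks and tapes uses a host allocation class of 7 (the value itself is arbitrary) in MODPARAMS.DAT:

ALLOCLASS = 7          ! disk (node) allocation class 
TAPE_ALLOCLASS = 7     ! tape allocation class 

With these settings, a served disk such as DKA100 appears as $7$DKA100: and a served tape such as MKA500 appears as $7$MKA500: from every node, regardless of which path is used.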

If you are reconfiguring disk device allocation classes, avoid the use of allocation class one ($1$) until/unless you have Fibre Channel storage configured. (Fibre Channel storage specifically requires the use of allocation class $1$; e.g., $1$DGA0:.)

15.6.2.2.1 How to configure allocation classes and Multi-Path SCSI?

The HSZ allocation class is applied to devices starting with OpenVMS V7.2. It is considered a port allocation class (PAC), and all device names with a PAC have their controller letter forced to "A". (You might infer from the text in the "Guidelines for OpenVMS Cluster Configurations" manual that this is something you have to do yourself, though OpenVMS will thoughtfully handle this renaming for you.)

You can force the device names back to DKB by setting the HSZ allocation class to zero, and setting the PKB PAC to -1. This will use the host allocation class, and will leave the controller letter alone (that is, the DK controller letter will be the same as the SCSI port (PK) controller). Note that this won't work if the HSZ is configured in multibus failover mode. In this case, OpenVMS requires that you use an allocation class for the HSZ.

When your configuration gets even moderately complex, you must pay careful attention to how you assign the three kinds of allocation class---node, port, and HSZ/HSJ---as otherwise you could wind up with device naming conflicts that can be painful to resolve.

The displayable path information is for SCSI multi-path, and permits the multi-path software to distinguish between different paths to the same device. If you have two paths to $1$DKA100---for example, two KZPBA controllers and two SCSI buses to the HSZ---you will have two UCBs in a multi-path set. The path information is used by the multi-path software to distinguish between these two UCBs.

The displayable path information describes the path; in this case, the SCSI port. If the port is PKB, that's the path name you get. The device name is no longer completely tied to the port name; the device name now depends on the various allocation class settings of the controller, SCSI port, or node.
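
A sketch of the related DCL commands follows; the path name given to /PATH is purely illustrative, so use the path strings that SHOW DEVICE reports on your own system:

$ SHOW DEVICE /MULTIPATH_SET                   ! list the multi-path sets 
$ SHOW DEVICE /FULL $1$DKA100:                 ! the full display includes the I/O paths 
$ SET DEVICE $1$DKA100: /SWITCH /PATH=PKB.5    ! manual path switch (illustrative path name) 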

The reason the device name's controller letter is forced to "A" when you use PACs is that a shared SCSI bus may be configured via different ports on the various nodes connected to the bus. The port may be PKB on one node, and PKC on another. Rather obviously, you will want the shared devices to use the same device names on all nodes. To establish this, you assign the same PAC on each node, and OpenVMS forces the controller letter to be the same on each node. Simply choosing "A" was easier and more deterministic than negotiating the controller letter between the nodes, and it also parallels the solution used for this situation with DSSI and SDI/STI storage.

To enable port allocation classes, see the SYSBOOT command SET/BOOT, and see the DEVICE_NAMING system parameter.
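
A hedged sketch of enabling this during a conversational boot follows; the parameter value shown is an assumption about the DEVICE_NAMING bit definitions, and the port allocation class assignments themselves are made as described in the cluster documentation:

SYSBOOT> SET DEVICE_NAMING 1    ! assumed: bit 0 enables port allocation class naming 
SYSBOOT> CONTINUE 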

This information is also described in the Cluster Systems and Guidelines for OpenVMS Cluster Configurations manuals.

15.6.3 Tell me about SET HOST/DUP and SET HOST/HSC

The OpenVMS DCL commands SET HOST/DUP and SET HOST/HSC are used to connect to storage controllers via the Diagnostics and Utility Protocol (DUP). These commands require that the FYDRIVER device driver be connected. This device driver connection is typically performed by adding the following command(s) into the system startup command procedure:

On OpenVMS Alpha:


$ RUN SYS$SYSTEM:SYSMAN 
SYSMAN> IO CONNECT FYA0/NOADAPTER/DRIVER=SYS$FYDRIVER 

On OpenVMS VAX:


$ RUN SYS$SYSTEM:SYSGEN 
SYSGEN> CONNECT FYA0/NOADAPTER 

Alternatives to the DCL SET HOST/DUP command include the console SET HOST command available on various mid- to recent-vintage VAX consoles:

Access to Parameters on an Embedded DSSI controller:


SET HOST/DUP/DSSI[/BUS:{0:1}] dssi_node_number PARAMS 

Access to Directory of tools on an Embedded DSSI controller:


SET HOST/DUP/DSSI[/BUS:{0:1}] dssi_node_number DIRECT 

Access to Parameters on a KFQSA DSSI controller:


SHOW UQSSP ! to get port_controller_number PARAMS 
SET HOST/DUP/UQSSP port_controller_number PARAMS 

These console commands are available on most MicroVAX and VAXstation 3xxx series systems, and most (all?) VAX 4xxx series systems. For further information, see the system documentation and---on most VAX systems---see the console HELP text.

EK-410AB-MG, _DSSI VAXcluster Installation and Troubleshooting_, is a good resource for setting up a DSSI VMScluster on OpenVMS VAX nodes. (This manual predates coverage of OpenVMS Alpha systems, but gives good coverage to all hardware and software aspects of setting up a DSSI-based VMScluster---and most of the concepts covered are directly applicable to OpenVMS Alpha systems. This manual specifically covers the hardware, which is something not covered by the standard OpenVMS VMScluster documentation.)

Also see Section 15.3.3, and for the SCS name of the OpenVMS host see Section 5.7.

15.6.4 How do I rename a DSSI disk (or tape)?

If you want to renumber or rename DSSI disks or DSSI tapes, it's easy---if you know the secret incantation...

From OpenVMS:


$ RUN SYS$SYSTEM:SYSGEN 
SYSGEN> CONNECT FYA0/NOADAPTER 
SYSGEN> ^Z 
$ SET HOST/DUP/SERV=MSCP$DUP/TASK=PARAMS <DSSI-NODE-NAME> 
... 
PARAMS> STAT CONF 
<The software version is normally near the top of the display.> 
PARAMS> EXIT 
... 

From the console on most 3000- and 4000-class VAX systems... (Obviously, the system must be halted for these commands...)

Integrated DSSI:


SET HOST/DUP/DSSI[/BUS:[0:1]] dssi_node_number PARAMS 

KFQSA:


SET HOST/DUP/UQSSP port_controller_number PARAMS 

For information on how to get into the PARAMS subsystem, also see the HELP at the console prompt for the SET HOST syntax, or see the HELP on SET HOST/DUP (once you've connected FYDRIVER under OpenVMS).

Once you are in the PARAMS subsystem, you can use the FORCEUNI option to force the use of the UNITNUM value and then set a unique UNITNUM inside each DSSI ISE---this causes each DSSI ISE to use the specified unit number, rather than the DSSI node number, as the unit number. Other parameters of interest are NODENAME and ALLCLASS, the node name and the (disk or tape) cluster allocation class.
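
As a hedged example, a renumbering session from the PARAMS> prompt generally looks something like the following; the unit number is illustrative, and the exact parameter semantics and prompts vary with the ISE firmware:

PARAMS> SET UNITNUM 10     ! new MSCP unit number for this ISE (illustrative) 
PARAMS> SET FORCEUNI 0     ! use UNITNUM rather than the DSSI node number 
PARAMS> SHOW UNITNUM 
PARAMS> WRITE              ! save; takes effect at the next ISE initialization 
PARAMS> EXIT 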

Ensure that all disk unit numbers used within an OpenVMS Cluster disk allocation class are unique, and all tape unit numbers used within an OpenVMS Cluster tape allocation class are also unique. For details on the SCS name of the OpenVMS host, see Section 5.7. For details of SET HOST/DUP, see Section 15.6.3.

15.6.5 Where can I get Fibre Channel Storage (SAN) information?

15.6.6 Which files must be shared in an OpenVMS Cluster?

The files listed below are expected to be common across all nodes in a cluster environment. Though SYSUAF is very often common, multiple copies can instead be carefully coordinated---with matching UIC values and matching binary identifier values across all copies. (The most common reason for multiple SYSUAF files is to allow different quotas on different nodes. In any event, the binary UIC values and the binary identifier values must be coordinated across all SYSUAF files, and must match the RIGHTSLIST file.) In addition to the list of files (and directories, in some cases) shown in Table 15-1, please review the VMScluster documentation and the System Management documentation.

Table 15-1 Cluster Common Shared Files

Filename               Default Specification
SYSUAF                 SYS$SYSTEM:SYSUAF.DAT
SYSUAFALT              SYS$SYSTEM:SYSUAFALT.DAT
SYSALF                 SYS$SYSTEM:SYSALF.DAT
RIGHTSLIST             SYS$SYSTEM:RIGHTSLIST.DAT
NETPROXY               SYS$SYSTEM:NETPROXY.DAT
NET$PROXY              SYS$SYSTEM:NET$PROXY.DAT
NETOBJECT              SYS$SYSTEM:NETOBJECT.DAT
NETNODE_REMOTE         SYS$SYSTEM:NETNODE_REMOTE.DAT
QMAN$MASTER            SYS$SYSTEM:; this is a set of related files
LMF$LICENSE            SYS$SYSTEM:LMF$LICENSE.LDB
VMSMAIL_PROFILE        SYS$SYSTEM:VMSMAIL_PROFILE.DATA
VMS$OBJECTS            SYS$SYSTEM:VMS$OBJECTS.DAT
VMS$AUDIT_SERVER       SYS$MANAGER:VMS$AUDIT_SERVER.DAT
VMS$PASSWORD_HISTORY   SYS$SYSTEM:VMS$PASSWORD_HISTORY.DATA
NETNODE_UPDATE         SYS$MANAGER:NETNODE_UPDATE.COM
VMS$PASSWORD_POLICY    SYS$LIBRARY:VMS$PASSWORD_POLICY.EXE
LAN$NODE_DATABASE      SYS$SYSTEM:LAN$NODE_DATABASE.DAT
VMS$CLASS_SCHEDULE     SYS$SYSTEM:VMS$CLASS_SCHEDULE.DATA
SYS$REGISTRY           SYS$SYSTEM:; this is a set of related files

In addition to the documentation, also see the current version of the file SYS$STARTUP:SYLOGICALS.TEMPLATE---specifically, the most recent version of this file available, as shipped with OpenVMS V7.2 and later.
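
One common approach---shown here only as a sketch, with a hypothetical CLUSTER_COMMON logical name pointing at a site-chosen shared directory---is to define the file logical names in SYLOGICALS.COM so that they reference the shared copies:

$ DEFINE/SYSTEM/EXEC CLUSTER_COMMON DSA100:[CLUSTER_COMMON]    ! hypothetical shared directory 
$ DEFINE/SYSTEM/EXEC SYSUAF         CLUSTER_COMMON:SYSUAF.DAT 
$ DEFINE/SYSTEM/EXEC RIGHTSLIST     CLUSTER_COMMON:RIGHTSLIST.DAT 
$ DEFINE/SYSTEM/EXEC NETPROXY       CLUSTER_COMMON:NETPROXY.DAT 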

A failure to have common or (in the case of multiple SYSUAF files) synchronized files can cause problems with batch operations, with the SUBMIT/USER command, with general operations involving the cluster alias, and with various SYSMAN and related operations. Object protections and defaults will not necessarily be consistent, either. This can also lead to system security problems, including unintended access denials and unintended object accesses, should the files---and particularly the binary identifier values---become skewed.

15.6.7 How can I split up an OpenVMS Cluster?

Review the VMScluster documentation, and the System Management documentation. The following are the key points, but are likely not the only things you will need to change.

OpenVMS Cluster support is directly integrated into the operating system, and there is no way to remove it. You can, however, remove site-specific tailoring that was added for a particular cluster configuration.

First: Create restorable image BACKUPs of each of the current system disks. If something gets messed up, you want a way to recover, right?

Create standalone BACKUP kits for the OpenVMS VAX systems, and create or acquire bootable BACKUP kits for the OpenVMS Alpha systems.

Use CLUSTER_CONFIG or CLUSTER_CONFIG_LAN to remove the various system roots and to shut off boot services and VMScluster settings.

Create as many architecture-specific copies of the system disks as required. Realize that the new systems will all likely be booting through root SYS0---if you have any system-specific files in any other roots, save them.

Relocate the copies of the VMScluster common files onto each of the new system disks.

Reset the console parameters and boot flags on each system for use on a standalone node.

Reset the VAXCLUSTER and NISCS_LOAD_PEA0 parameters to 0 in SYSGEN and in MODPARAMS.DAT (see the sketch after these steps).

Clobber the VMScluster group ID and password using SYSMAN.

Reboot the systems separately, and run AUTOGEN on each.

Shut off MOP services via NCP or LANCP on the boot server nodes.
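
A hedged sketch of the MODPARAMS.DAT and SYSMAN steps above; the group number and password are placeholders, and the remaining steps are performed with the usual tools:

VAXCLUSTER = 0          ! in MODPARAMS.DAT: do not form or join a cluster 
NISCS_LOAD_PEA0 = 0     ! in MODPARAMS.DAT: do not load the NI cluster port driver 

$ RUN SYS$SYSTEM:SYSMAN 
SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION /GROUP_NUMBER=65535 /PASSWORD=placeholder 
SYSMAN> EXIT 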

Permanent separation also requires the duplication of shared files. For a list of the files commonly shared, please see Section 15.6.6.

Also see the topics on "cluster divorce" in the Ask The Wizard area.

For additional information on the OpenVMS Ask The Wizard (ATW) area and for a pointer to the available ATW Wizard.zip archive, please see Section 3.8. ATW has been superseded (for new questions) by the ITRC discussion forums; the area remains available for reference.

Information on changing node names is included in Section 5.7.

15.6.8 Details on Volume Shadowing?

This section contains information on host-based volume shadowing, the disk mirroring capability available within OpenVMS.

15.6.8.1 Does volume shadowing require a non-zero allocation class?

Yes, use of host-based Volume Shadowing requires that the disk(s) involved be configured in a non-zero allocation class.

Edit SYS$SYSTEM:MODPARAMS.DAT to include a declaration of a non-zero allocation class, such as setting the host allocation class to the value 7:


ALLOCLASS = 7 

Then AUTOGEN the system, and reboot.
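
For example (the FEEDBACK mode is optional):

$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT FEEDBACK 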

You should now be able to form the shadow set via a command such as the following:


$ MOUNT dsa1007: /SHADOW=($7$dkb300:,$7$dkb500:) volumelabel 

When operating in an OpenVMS Cluster, this sequence will typically change the disk names from the SCSNODE prefix (scsnode$dkannn) to the allocation-class prefix ($7$dkannn). This may provide you with the opportunity to move to a device-independent scheme using logical name constructs such as the DISK$volumelabel logical names in your startup and application environments---an opportunity to weed out physical device references.
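
A brief sketch of this device-independent approach, assuming a volume label of USERDISK and a hypothetical application logical name:

$ MOUNT/SYSTEM DSA1007: /SHADOW=($7$DKB300:,$7$DKB500:) USERDISK 
$ SHOW LOGICAL DISK$USERDISK                             ! MOUNT creates this from the volume label 
$ DEFINE/SYSTEM/EXEC APP_DATA DISK$USERDISK:[APP.DATA]   ! hypothetical application logical 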

Allocation class one is used by Fibre Channel devices; it is best to use another non-zero allocation class, even if Fibre Channel storage is neither currently configured nor currently planned.

