
Calling sequence

The external script or binary is launched with three positional arguments and at least one option (-si=<storageInfo>). Additional options may follow if so defined with the hsm set pool command. Arguments and options are separated by at least one blank character.

    Syntax :
    
        <binary> put|get <pnfsid> <localFileName>   \
               -si=<storageInfo> [more options]

The put|get argument determines the direction of the data transfer as seen from the HSM: put means that data has to be stored into the HSM, while get means it has to be fetched out of the HSM.
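A minimal wrapper, sketched below in plain sh, shows how such a script might dispatch on the put|get argument and pick out the -si option. Only the calling convention comes from dCache; the function name hsmcp and the echoed messages are illustrative stubs, not part of any real HSM script.

```shell
#!/bin/sh
# Illustrative skeleton of an HSM copy script. Only the calling convention
# (put|get, pnfsid, local file, -si=...) comes from dCache; the rest is a stub.

hsmcp() {
    op="$1"        # put or get, as seen from the HSM
    pnfsid="$2"    # dCache file identifier
    filename="$3"  # path of the replica on the pool
    shift 3

    si=""
    for opt in "$@"; do
        case "$opt" in
            -si=*) si="${opt#-si=}" ;;  # strip the -si= prefix
        esac
    done

    case "$op" in
        put) echo "store $filename (pnfsid $pnfsid) into the HSM; si=$si" ;;
        get) echo "fetch pnfsid $pnfsid from the HSM into $filename; si=$si" ;;
        *)   echo "unknown command: $op" >&2; return 1 ;;
    esac
}

# Example invocation, mirroring how a pool would call the script:
hsmcp put 000100000000000000001060 /pool/data/000100000000000000001060 \
      "-si=size=1048576000;hsm=osm;"
```

A real script would replace the echo statements with the actual HSM transfer commands and propagate their exit status (see the return-code table below).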

The <storageInfo> option is a collection of key-value pairs separated by semicolons. All these values are derived from the pnfs database. The possible keys differ slightly, depending on which HSM is addressed. The order of the key-value pairs is not fixed and may vary between calls. The -si= string must not contain blank, TAB, or newline characters.

Example:

    -si=size=1048576000;new=true;stored=false;sClass=desy:cms-sc3;cClass=-;hsm=osm;Host=desy;
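Since the pairs may arrive in any order, a script should look keys up by name rather than by position. Below is one possible sh helper using only tr and sed; the helper name si_get is made up for this sketch.

```shell
#!/bin/sh
# Sketch: extract individual values from the -si key-value list.
# The keys (size, hsm, ...) are from the tables in this chapter;
# the si_get helper itself is illustrative.

si="size=1048576000;new=true;stored=false;sClass=desy:cms-sc3;cClass=-;hsm=osm;Host=desy;"

si_get() {
    # $1: key name; prints the value, or nothing if the key is absent.
    # Split on ';' so the pair order does not matter.
    echo "$si" | tr ';' '\n' | sed -n "s/^$1=//p"
}

size=$(si_get size)
hsm=$(si_get hsm)
echo "size=$size hsm=$hsm"   # prints: size=1048576000 hsm=osm
```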

Table 8.1. Mandatory StorageInfo keys

Key     Meaning
size    Size of the file in bytes
new     False if the file is already in the dCache
stored  True if the file is already stored in the HSM
sClass  HSM dependent. Used by the PoolManager for pool attraction.
cClass  Parent directory tag (cacheClass). Used by the PoolManager for pool attraction. May be '-'.
hsm     Storage Manager name (enstore/osm). Can be overwritten by the parent directory tag (hsmType).

Table 8.2. Optional StorageInfo keys used by all HSMs

Key     Meaning
flag-l  Size of the file (if the size exceeds 2 GB)
flag-s  * if the file is defined sticky
flag-c  CRC value (currently 1:<hexAdler32>)

Table 8.3. Enstore specific

Key       Meaning
group     Storage group (e.g. cdf, cms, ...)
family    File family (e.g. sgi2test, h6nxl8, ...)
bfid      Bitfile ID (GET only) (e.g. B0MS105746894100000)
volume    Tape volume (GET only) (e.g. IA6912)
location  Location on tape (GET only) (e.g. 0000_000000000_0000117)

Table 8.4. OSM specific

Key    Meaning
store  OSM store (e.g. zeus, h1, ...)
group  OSM storage group (e.g. h1raw99, ...)
bfid   Bitfile ID (GET only) (e.g. 000451243.2542452542.25424524)

There might be more key-value pairs which are used internally by dCache and which should not affect the behaviour of the HSM copy script.

Table 8.5. Return codes

Return Code     Meaning                   Pool Behaviour
                                          Into HSM (put)                From HSM (get)
30 <= rc < 40   User defined              Deactivates request           Reports problem to PoolManager
41              No space left on device   Pool retries, disables pool   Reports problem to PoolManager
42              Disk read I/O error       Pool retries, disables pool   Reports problem to PoolManager
43              Disk write I/O error      Pool retries, disables pool   Reports problem to PoolManager
all others                                Pool retries                  Reports problem to PoolManager
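In a script, this mapping amounts to choosing ordinary shell return codes. In the sketch below, the pre-flight checks and the commented-out hsm_archive call are hypothetical placeholders; only the numeric codes come from the table above.

```shell
#!/bin/sh
# Sketch: map local failure conditions to the return codes in Table 8.5.
# The checks themselves (df parsing, hsm_archive) are illustrative only.

put_file() {
    src="$1"

    # 41: no space left on device (here: a crude pre-flight check on
    # the available space in the current directory)
    avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
    [ "$avail_kb" -gt 0 ] || return 41

    # 42: disk read I/O error (here: a simple readability probe)
    cat "$src" > /dev/null 2>&1 || return 42

    # 30..39: user defined; deactivates the request for manual intervention
    # hsm_archive "$src" || return 33   # hsm_archive is a placeholder

    return 0
}

# Demonstrate with a freshly created temporary file:
tmp=$(mktemp); echo data > "$tmp"
put_file "$tmp"; echo "rc=$?"   # prints: rc=0
rm -f "$tmp"
```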


Special Cases and exceptions


Reading versus writing HSM files

When fetching a file from an HSM, the command line contains sufficient information about the location of the dataset within the HSM to get the file. No additional interaction with pnfs is needed, so pnfs does not need to be mounted on read-only pools.

This is different for storing files into an HSM. After the actual HSM put operation, some data returned by the HSM has to be stored in pnfs. Currently this has to be done directly by the corresponding external HSM script. So, other than read pools, write pools still need to have pnfs mounted.

A future approach will be to transfer the necessary HSM information from the HSM copy script into dCache via STDOUT. dCache would then perform the necessary pnfs store operation through the PnfsManager.


Precious files are removed from pnfs

If a precious file is removed from pnfs before the hsmcopy script (osmcp.sh or real-encp.sh) is called, the copy on disk is removed and the hsmcopy script is not called.

If the file is removed while the hsmcopy script is active, the script will encounter an error when writing HSM data into the various pnfs layers. In this case it is recommended to return an error code in the 30-39 range so that the request is deactivated. Manual intervention is then needed to clean up the situation, but dCache makes no further attempt to store the corresponding dataset in the HSM.
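One possible way to handle this case is sketched below: after a successful HSM store, the script writes the location string back to a pnfs layer file, and if that write fails (as it would when the file has meanwhile been removed from pnfs), it returns a code in the 30-39 range. The paths, the store_location name, and the layer-file convention shown in the comment are illustrative assumptions.

```shell
#!/bin/sh
# Sketch: return a 30-39 code when the pnfs layer write fails because
# the file vanished from pnfs. Names and paths are illustrative.

store_location() {
    layer1="$1"    # e.g. the pnfs layer-1 file for this entry (assumed convention)
    location="$2"  # HSM location string returned by the put operation

    if ! echo "$location" 2>/dev/null > "$layer1"; then
        echo "file vanished from pnfs; deactivating request" >&2
        return 31   # 30 <= rc < 40: deactivates the request (Table 8.5)
    fi
    return 0
}

# Writing to a nonexistent directory simulates the removed pnfs entry:
store_location /nonexistent/dir/layer1 "osm://store/group/bfid"
echo "rc=$?"   # prints: rc=31
```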