Version 3.1.3 is fully backward compatible: 3.1.3 servers can run against databases created with previous server versions. However, once the hard link feature has been used within 3.1.3, previous software versions can no longer be used.
Starting with version 3.1.3, pnfs supports hard links. The external trigger, which currently simply creates a file in the Trash Directory, is fired after the last link has been removed.
The pathfinder command gets totally confused if the original filename was unlinked.
By default, the hard link capability is enabled. Setting the /usr/etc/pnfsSetup variable hardlinks to off disallows the creation of hard links. The hard link unlink rules are not affected by this variable.
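A sketch of the corresponding pnfsSetup entry, assuming the file uses the usual shell-style variable assignments:

```shell
# /usr/etc/pnfsSetup (excerpt) -- exact file syntax assumed, not taken
# from the release note itself.
# Disallow the creation of new hard links; the unlink rules for
# already existing hard links stay in effect.
hardlinks=off
```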
The pnfs.server stop procedure no longer produces inconsistent entries in the filesystem if performed on a highly busy system.

Remark :
An nfs filesystem request is received by one of the pnfs daemons and divided into several database operations. Those operations are then sent to the corresponding database server via the shared memory link. If the shutdown does not allow all of those subrequests to finish, there is a high chance of inconsistent filesystem entries. With 3.1.3 this can no longer happen, as long as the following shutdown sequence is used :

- Send SIGTERM to all pnfsd's.
- Wait until they are all finished.
- Send SIGTERM to all dbservers.

Consequently it is no longer recommended to modify the running state of the database servers with the mdb update command without stopping the pnfs daemons first. This is what pnfs.server stop does.
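The sequence above can be sketched as a small shell function. The PID lists are placeholders; unlike the real pnfs.server, the sketch assumes the daemons are children of the calling script so that wait can be used on them:

```shell
#!/bin/sh
# Sketch of the shutdown sequence used by pnfs.server stop in 3.1.3.
# Assumes the daemons were started by this script (so 'wait' works);
# the real pnfs.server keeps its own PID bookkeeping.

stop_in_order() {
    pnfsd_pids=$1      # PIDs of all pnfsd's (word splitting intended)
    dbserver_pids=$2   # PIDs of all dbservers

    # 1. Send SIGTERM to all pnfsd's.
    kill -TERM $pnfsd_pids 2>/dev/null

    # 2. Wait until they are all finished, so that every subrequest
    #    already handed to a database server can complete first.
    for pid in $pnfsd_pids; do
        wait "$pid" 2>/dev/null
    done

    # 3. Only now send SIGTERM to the database servers.
    kill -TERM $dbserver_pids 2>/dev/null
}
```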
Some methods to obtain statistical information have been added.
.(get)(database)(<dbNumber>)
Reading from .(get)(database)(<dbNumber>) returns a one line description of the database <dbNumber>. If database <dbNumber> doesn't exist, the 'file not found' error is returned.

.(get)(counters)(<dbNumber>)

Reading from .(get)(counters)(<dbNumber>) returns the access counts of database <dbNumber>. If database <dbNumber> doesn't exist, the command blocks. It is therefore wise to check the existence with the .(get)(database)(<dbNumber>) call first.

.(get)(position)
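A sketch of the recommended existence check, assuming the usual way of reading these magic names with cat from within a pnfs-mounted directory (the function name is ours):

```shell
#!/bin/sh
# Read the access counters of database $1, checking existence first so
# that a read of .(get)(counters)(n) for a nonexistent database does
# not block. Run from within a pnfs-mounted directory.

read_counters() {
    db=$1
    # .(get)(database)(n) fails fast with 'file not found' if the
    # database doesn't exist -- unlike the counters file, which blocks.
    if cat ".(get)(database)($db)" >/dev/null 2>&1; then
        cat ".(get)(counters)($db)"
    else
        echo "database $db does not exist" >&2
        return 1
    fi
}
```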
Reading from .(get)(position) returns the description of the current directory position as seen from pnfs. The position is identified by the ID and the permissions of the current directory and by the ID of the mountpoint. This information is needed by the DESY osmcp to find the mountpoint of the current path without OS specific system calls.

dirID=000000000000000000001788 dirPerm=0000001400000020 mountID=000000000000000000001040
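The sample line can be split into its three fields with plain POSIX shell; parse_position is a hypothetical helper, and the input below is the sample output quoted above (on a real system it would be read with cat ".(get)(position)"):

```shell
#!/bin/sh
# Split the one-line output of .(get)(position) into its three fields.

parse_position() {
    # Word splitting on $1 is intended: the line consists of
    # whitespace-separated key=value pairs.
    for field in $1; do
        case $field in
            dirID=*)   dirID=${field#dirID=} ;;
            dirPerm=*) dirPerm=${field#dirPerm=} ;;
            mountID=*) mountID=${field#mountID=} ;;
        esac
    done
}

parse_position "dirID=000000000000000000001788 dirPerm=0000001400000020 mountID=000000000000000000001040"
echo "mountpoint ID: $mountID"
# prints: mountpoint ID: 000000000000000000001040
```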
The file remove action handler performs the following actions if the link count of a file becomes zero. For each file level with a content size greater than zero, the content is handled as follows, where $trash is the trash variable from /usr/etc/pnfsSetup :

- The content is copied to $trash/<levelNumber>/.<pnfsID>, provided the corresponding directory path exists.
- After the copy operation the file is renamed to $trash/<levelNumber>/<pnfsID>.
This behaviour makes the creation of the trash file more or less atomic. The consequence is that files starting with a dot must be ignored in the trash directories.
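The copy-then-rename scheme can be sketched as follows; trash_level and its arguments are illustrative names, not pnfs code:

```shell
#!/bin/sh
# Sketch of the trash-file creation described above: copy the level
# content to a dot-file first, then rename it, so that readers of the
# trash directory (which ignore dot-files) never see a half-written
# file. The rename itself is atomic on a local filesystem.

trash_level() {
    trash=$1 level=$2 pnfsID=$3 srcfile=$4
    dir="$trash/$level"
    [ -d "$dir" ] || return 0          # only if the path exists
    cp "$srcfile" "$dir/.$pnfsID"      # invisible while being written
    mv "$dir/.$pnfsID" "$dir/$pnfsID"  # atomic rename makes it visible
}
```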
The export facility now supports multiple indirections, mountgroups and exporting filesystems to whole subnets. The previous scheme is part of the new model, so there is no need to change the configuration.
The following error message may show up in the dbserverLog file if the system is heavily loaded :

... - Request <ReqNum> removed caused by late arrival

At the same time a

... (-335) -> -222

will appear in the pnfsdLog. This printout simply indicates that an nfs request arrived which is older than the request previously processed. We have to throw away those requests because, under heavy load, the nfs client tends to retry nfs requests. Both requests may be queued in different pnfs daemons and we can't determine the sequence in which they will be processed. Under some very unlucky conditions we might reply to both requests with a significant time interval in between. Because other requests concerning the same filesystem object may have been processed in the meantime, the result could be very destructive.
So the error message above can be ignored.
With version 3.1.3 the group ID of a newly created file or directory is set to the effective group ID of the user, taken from the unix credentials, if the group ID was not set in the attributes. Before this change, we observed strange behaviour with Linux and HP-UX.
Currently we ignore the user's group list which is sent as part of the unix credentials. This may cause wrong behaviour in conjunction with the chgrp or chown commands.
Currently all 8 file levels are created with the same attributes (owner, group and permissions). Changing the attributes of the top layer will always change the corresponding attributes of layer ONE as well, while the other layers are not affected. Layers TWO to SEVEN can be changed through the special path .(use)(<levelNumber>)(<filename>).
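For example, level TWO of a file would be addressed as follows (a sketch; myfile is a placeholder name, and the existence guard only keeps the snippet harmless outside a pnfs directory):

```shell
#!/bin/sh
# Special path for level TWO of 'myfile'; run inside a pnfs-mounted
# directory, where this name is interpreted by the server.
level2="./.(use)(2)(myfile)"

# Change owner/group/permission attributes of level TWO only; the top
# layer and layer ONE are not affected by this.
if [ -e "$level2" ]; then
    chmod 0640 "$level2"
fi
```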