In this section we explain the format of the different files that are used by both gPlazma1 and gPlazma2 plug-ins.
In gPlazma, except for the kpwd plug-in, authorization is a two-step process. First, a username is obtained from a mapping of the user’s DN, or of his DN and role; then a mapping of the username to UID and GID, with optional additional session parameters such as the root path, is performed. For the second mapping the file called storage-authzdb is usually used.
The default location of the storage-authzdb file is /etc/grid-security. Before the mapping entries there has to be a line specifying the version of the file format used.
Example:
version 2.1
dCache supports version 2.1 and, to some extent, version 2.2.
Except for empty lines and comments (lines starting with #), the configuration lines have the following format:
authorize <username> (read-only|read-write) <UID> <GID>[,<GID>]* <homedir> <rootdir>
For legacy reasons there may be a third path entry which is ignored by dCache. The username here has to be the name the user has been mapped to in the first step (e.g., by his DN).
Example:
authorize john read-write 1001 100 / /data/experiments /
In this example user <john> will be mapped to UID 1001 and GID 100 with read-write access on the directory /data/experiments. You may choose to set the user’s root directory to /.
Example:
authorize adm read-write 1000 100 / / /
In this case the user <adm> will be granted read/write access in any path, given that the file system permissions in Chimera also allow the transfer.
The first path is nearly always left as “/”, but it may be used as a home directory in interactive sessions, as a subdirectory of the root path. Upon login, the second path is used as the user’s root, and a “cd” is performed to the first path. The first path is always defined as being relative to the second path.
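For instance, a hypothetical entry such as the following (the paths and IDs are chosen only for illustration) would set the user’s root to /data/experiments and, upon login, change into the subdirectory /john of that root:
authorize john read-write 1001 100 /john /data/experiments /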
Multiple GIDs can be assigned by using comma-separated values in the GID field, as in
Example:
authorize john read-write 1001 100,101,200 / / /
The lines of the storage-authzdb file are similar to the “login” lines of the dcache.kpwd file. If you already have a dcache.kpwd file, you can easily create storage-authzdb by taking the lines from your dcache.kpwd file that start with the word login, for example,
Example:
login john read-write 1001 100 / /data/experiments /
and replace the word login with authorize. The following command does this for you.
[root] # sed "s/^ *login/authorize/" dcache.kpwd | grep "^authorize" > storage-authzdb
The gPlazma policy file /etc/dcache/dcachesrm-gplazma.policy contains two lines for this plug-in.
# Built-in gPLAZMAlite grid VO role mapping
gridVoRolemapPath="/etc/grid-security/grid-vorolemap"
gridVoRoleStorageAuthzPath="/etc/grid-security/storage-authzdb"
The second is the storage-authzdb file used in other plug-ins. See the above documentation on storage-authzdb for how to create the file.
The file is similar in format to the grid-mapfile; however, there is an additional field following the DN (Certificate Subject), containing the FQAN (Fully Qualified Attribute Name).
"/C=DE/O=GermanGrid/OU=DESY/CN=John Doe" "/some-vo" doegroup "/C=DE/DC=GermanGrid/O=DESY/CN=John Doe" "/some-vo/Role=NULL" doegroup "/C=DE/DC=GermanGrid/O=DESY/CN=John Doe" "/some-vo/Role=NULL/Capability=NULL" doegroup
Therefore each line has three fields: the user’s DN, the user’s FQAN, and the username that the DN and FQAN combination is to be mapped to.
The FQAN is sometimes semantically referred to as the “role”. The same user can be mapped to different usernames depending on what their FQAN is. The FQAN is determined by how the user creates their proxy, for example, using voms-proxy-init. The FQAN contains the user’s Group, Role (optional), and Capability (optional). The latter two may be set to the string “NULL”, in which case they will be ignored by the plug-in. Therefore the three lines in the example above are equivalent.
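As an illustration (the VO name some-vo is taken from the example above, and the exact invocation depends on the local VOMS configuration), such FQANs are obtained when the proxy is created, for example:
[user] $ voms-proxy-init -voms some-vo
[user] $ voms-proxy-init -voms some-vo:/some-vo/Role=admin
The first command requests only the group FQAN /some-vo; the second additionally requests the admin role.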
Example:
If a user is authorized in multiple roles, for example
"/DC=org/DC=doegrids/OU=People/CN=John Doe" "/some-vo/sub-grp" vo_sub_grp_user "/DC=org/DC=doegrids/OU=People/CN=John Doe" "/some-vo/sub-grp/Role=user" vouser "/DC=org/DC=doegrids/OU=People/CN=John Doe" "/some-vo/sub-grp/Role=admin" voadmin "/DC=org/DC=doegrids/OU=People/CN=John Doe" "/some-vo/sub-grp/Role=prod" voprod
he will get the username corresponding to the FQAN found in the proxy that the user creates for use by the client software. If the user actually creates several roles in his proxy, authorization (and subsequent check of path and file system permissions) will be attempted for each role in the order that they are found in the proxy.
In a GridFTP URL, the user may also explicitly request a username, for example
gsiftp://doeprod@ftp-door.example.org:2811/testfile1
in which case other roles will be disregarded.
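As a sketch, such a transfer could be started with a standard GridFTP client such as globus-url-copy (the local file name here is only a placeholder):
[user] $ globus-url-copy file:///tmp/testfile1 gsiftp://doeprod@ftp-door.example.org:2811/testfile1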
Instead of individual DNs, it is allowed to use * or "*" as the first field, such as
Example:
"*" "/desy/Role=production/" desyprod
In that case, any DN with the corresponding role will match. It should be noted that a match is first attempted with the explicit DN. Therefore, if both DN and "*" matches can be made, the DN match will take precedence. This is true for the revocation matches as well (see below).
Thus a user with subject /C=DE/O=GermanGrid/OU=DESY/CN=John Doe and role /desy/Role=production will be mapped to username desyprod via the above grid-vorolemap line with "*" for the DN, except if there is also a line such as
"/C=DE/O=GermanGrid/OU=DESY/CN=John Doe" "/desy/Role=production" desyprod2
in which case the username will be desyprod2.
To create a revocation entry, add a line with a dash (-) as the username, such as
"/C=DE/O=GermanGrid/OU=DESY/CN=John Doe" "/desy/production" -
or modify the username of the entry if it already exists. The behaviour is undefined if there are two entries which differ only by username.
Since the DN is matched first, if a user would be authorized by his VO membership through a "*" entry, but is matched according to his DN to a revocation entry, authorization would be denied. Likewise, if a whole VO were denied in a revocation entry, but some user in that VO could be mapped to a username through his DN, then authorization would be granted.
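A minimal illustration of the first case, reusing names from the earlier examples, is the following pair of grid-vorolemap entries; the second line revokes access for John Doe even though the rest of the VO is still mapped:
"*" "/desy/Role=production" desyprod
"/C=DE/O=GermanGrid/OU=DESY/CN=John Doe" "/desy/Role=production" -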
Example:
Suppose that there are users in production roles that are expected to write into the storage system data which will be read by other users. In that case, to protect the data, the non-production users would be given read-only access. Here in /etc/grid-security/grid-vorolemap the production role maps to username cmsprod, and the role which reads the data maps to cmsuser.
"*" "/cms/uscms/Role=cmsprod" cmsprod "*" "/cms/uscms/Role=cmsuser" cmsuser
The read-write privilege is controlled by the third field in the lines of /etc/grid-security/storage-authzdb
authorize cmsprod read-write 9811 5063 / /data /
authorize cmsuser read-only 10001 6800 / /data /
Example:
Another use case is when users are to have their own directories within the storage system. This can be arranged within the gPlazma configuration files by mapping each user’s DN to a unique username and then mapping each username to a unique root path. As an example, lines from /etc/grid-security/grid-vorolemap would therefore be written
"/DC=org/DC=doegrids/OU=People/CN=Selby Booth" "/cms" cms821 "/DC=org/DC=doegrids/OU=People/CN=Kenja Kassi" "/cms" cms822 "/DC=org/DC=doegrids/OU=People/CN=Ameil Fauss" "/cms" cms823
and the corresponding lines from /etc/grid-security/storage-authzdb would be
authorize cms821 read-write 10821 7000 / /data/cms821 /
authorize cms822 read-write 10822 7000 / /data/cms822 /
authorize cms823 read-write 10823 7000 / /data/cms823 /
The section in the gPlazma policy file for the kpwd plug-in specifies the location of the dcache.kpwd file, for example
Example:
# dcache.kpwd
kpwdPath="/etc/dcache/dcache.kpwd"
To maintain only one such file, make sure that this is the same location as defined in /usr/share/dcache/defaults/dcache.properties. Use /usr/share/dcache/examples/gplazma/dcache.kpwd to create this file.
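For instance, assuming the kpwdPath shown above, the template can simply be copied into place and then edited:
[root] # cp /usr/share/dcache/examples/gplazma/dcache.kpwd /etc/dcache/dcache.kpwd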
Two file locations are defined in the policy file for this plug-in:
# grid-mapfile
gridMapFilePath="/etc/grid-security/grid-mapfile"
storageAuthzPath="/etc/grid-security/storage-authzdb"
The grid-mapfile is the same as that used in other applications. It can be created in various ways, either by connecting directly to VOMS or GUMS servers, or by hand.
Each line contains two fields: a DN (Certificate Subject) in quotes, and the username it is to be mapped to.
Example:
"/C=DE/O=GermanGrid/OU=DESY/CN=John Doe" johndoe
When using the grid-mapfile plug-in, the storage-authzdb file must also be configured. See the section called “storage-authzdb” for details.
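For instance, assuming the johndoe mapping above, a matching (purely illustrative) storage-authzdb entry could look like:
authorize johndoe read-write 1001 100 / /data/johndoe /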
There are two lines in the policy file for this plug-in.
# SAML-based grid VO role mapping
mappingServiceUrl="https://gums.oursite.edu:8443/gums/services/GUMSAuthorizationServicePort"
# Time in seconds to cache the mapping in memory
saml-vo-mapping-cache-lifetime="60"
The first line contains the URL for the GUMS web service. Replace the URL with that of the site-specific GUMS. When using the GUMSAuthorizationServicePort, the service will only provide the username mapping and it will still be necessary to have the storage-authzdb file used in other plug-ins. See the above documentation on storage-authzdb for how to create the file. If a GUMS server providing a StorageAuthorizationServicePort with correct UID, GID, and root path information for your site is available, the storage-authzdb file is not necessary.
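In that case the service URL in the policy file would point at the storage authorization endpoint instead; a hypothetical example (the host name and endpoint path are placeholders following the pattern above) might be:
mappingServiceUrl="https://gums.oursite.edu:8443/gums/services/StorageAuthorizationServicePort"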
The second line contains the value of the caching lifetime. In order to decrease the volume of requests to the SAML authorization (GUMS) service, authorizations for the saml-vo-mapping plug-in are by default cached for a period of time. To change the caching duration, modify the saml-vo-mapping-cache-lifetime value in /etc/dcache/dcachesrm-gplazma.policy
saml-vo-mapping-cache-lifetime="120"
To turn off caching, set the value to 0. The default value is 180 seconds.
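For example, to disable caching entirely:
saml-vo-mapping-cache-lifetime="0"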
gPlazma includes an authorization plug-in to support the XACML authorization schema. Using XACML with SOAP messaging allows gPlazma to acquire authorization mappings from any service which supports the obligation profile for grid interoperability. Servers presently supporting XACML mapping are the latest releases of GUMS and SCAS. Using the new plug-in is optional, and previous configuration files are still compatible with gPlazma. It is normally not necessary to change this file, but if you have customized the previous copy, transfer your changes to the new batch file.
The configuration is very similar to that for the saml-vo-mapping plug-in. There are two lines for the configuration.
# XACML-based grid VO role mapping
XACMLmappingServiceUrl="https://gums.example.org:8443/gums/services/GUMSXACMLAuthorizationServicePort"
# Time in seconds to cache the mapping in memory
xacml-vo-mapping-cache-lifetime="180"
# XACML-based grid VO role mapping
XACMLmappingServiceUrl="https://scas.europeansite.eu:8443"
# Time in seconds to cache the mapping in memory
xacml-vo-mapping-cache-lifetime="180"
As for the saml-vo-mapping plug-in, the first line contains the URL for the web service. Replace the URL with that of the site-specific GUMS or SCAS server. When using the GUMSXACMLAuthorizationServicePort (notice the difference in service name from that for the saml-vo-mapping plug-in) with a GUMS server, the service will only provide the username mapping and it will still be necessary to have the storage-authzdb file used in other plug-ins. See the above documentation on storage-authzdb for how to create the file. An SCAS server will return a UID, a primary GID, and secondary GIDs, but not a root path. A storage-authzdb file will be necessary to assign the root path. Since SCAS does not return a username, the convention in gPlazma is to use uid:gid for the username, where uid is the string representation of the UID returned by SCAS, and gid is the string representation of the primary GID returned by SCAS. Thus a line such as
Example:
authorize 13160:9767 read-write 13160 9767 / /data /
in /etc/grid-security/storage-authzdb will serve to assign the user mapped by SCAS to UID=13160 and primary GID=9767 the root path /data. It is best for consistency’s sake to fill in the UID and GID fields with the same values as in the uid:gid field. Additional secondary GIDs can be assigned by using comma-separated values in the GID field. Any GIDs there not already returned as secondary GIDs by SCAS will be added to the secondary GIDs list.
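For example, a hypothetical entry that additionally grants the secondary GIDs 9768 and 9800 to the user mapped above would be:
authorize 13160:9767 read-write 13160 9767,9768,9800 / /data /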
The second line contains the value of the caching lifetime. In order to decrease the volume of requests to the XACML authorization (GUMS or SCAS) service, authorizations for the xacml-vo-mapping plug-in are by default cached for a period of time. To change the caching duration, modify the xacml-vo-mapping-cache-lifetime value in /etc/dcache/dcachesrm-gplazma.policy
xacml-vo-mapping-cache-lifetime="120"
To turn off caching, set the value to 0. For the xacml-vo-mapping plug-in the default value is 180 seconds.