Chapter 10. dCache as xroot-Server
This chapter explains how to configure dCache for access via the xroot protocol, allowing xroot clients like ROOT's TXNetFile and xrdcp to perform file operations against a dCache instance in a transparent manner. dCache implements version 2.1.6 of the xroot protocol.
Setting up
To allow file transfers in and out of dCache using xroot, a new xrootd door must be started. This door then acts as the entry point for all xroot requests. Compared to the native xrootd server implementation (produced by SLAC), the xrootd door corresponds to the redirector node.
To enable the xrootd door, you have to change the layout file corresponding to your dCache instance. Enable the xrootd service within the domain that you want to run it in by adding the following line:
[<domainName>/xrootd]
Example:
You can just add the following lines to the layout file:
[xrootd-${host.name}Domain]
[xrootd-${host.name}Domain/xrootd]
After a restart of the domain running the xrootd door, done e.g. by executing
dcache restart xrootd-babelfishDomain
|Stopping xrootd-babelfishDomain (pid=30246) 0 1 2 3 4 5 6 7 done
|Starting xrootd-babelfishDomain done
the xrootd door should be running. A few minutes later it should appear at the web monitoring interface under “Cell Services” (see the section called “The Web Interface for Monitoring dCache”).
Parameters
The default port the xrootd door listens on is 1094. This can be changed in two ways:
- Per door: Edit your instance's layout file, for example /etc/dcache/layouts/example.conf, and add the desired port for the xrootd door on a separate line (a restart of the domain(s) running the xrootd door is required):
[xrootd-${host.name}Domain]
[xrootd-${host.name}Domain/xrootd]
port = 1095
- Globally: Edit /etc/dcache/dcache.conf and add the variable xrootd.net.port with the desired value (a restart of the domain(s) running the xrootd door is required):
xrootd.net.port=1095
To control the TCP port range within which xrootd movers will start listening, add the properties dcache.net.lan.port.min and dcache.net.lan.port.max to /etc/dcache/dcache.conf and adapt them according to your preferences. The default values can be viewed in /usr/share/dcache/defaults/dcache.properties.
dcache.net.lan.port.min=30100
dcache.net.lan.port.max=30200
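As a side note, the size of this range roughly bounds how many xrootd movers can listen concurrently on a single pool host. A minimal sketch using the default values above:

```shell
# Port range from the defaults above; each listening mover occupies one port.
min=30100
max=30200
range=$((max - min + 1))
echo "$range"   # usable mover ports per host
```
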
QUICK TESTS
The following paragraphs provide a quick guide on how to test xroot using the xrdcp and ROOT clients.
Copying files with xrdcp
A simple way to get files in and out of dCache via xroot is the command xrdcp. It is included in every xrootd and ROOT distribution.
To transfer a single file in and out of dCache, just issue
xrdcp /bin/sh root://<xrootd-door.example.org>/pnfs/<example.org>/data/xrd_test
xrdcp root://<xrootd-door.example.org>/pnfs/<example.org>/data/xrd_test /dev/null
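The URLs used above follow a fixed structure: the door's hostname, its port (1094 unless changed as described earlier), and the namespace path. A small sketch assembling such a URL (hostname and path are placeholders, as in the example commands):

```shell
# Placeholder door hostname and namespace path, as in the xrdcp examples.
door=xrootd-door.example.org
port=1094
path=/pnfs/example.org/data/xrd_test
url="root://${door}:${port}${path}"
echo "$url"
```
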
Accessing files from within ROOT
This simple ROOT example shows how to write a randomly filled histogram to a file in dCache:
root [0] TH1F h("testhisto", "test", 100, -4, 4);
root [1] h->FillRandom("gaus", 10000);
root [2] TFile *f = new TXNetFile("root://<door_hostname>//pnfs/<example.org>/data/test.root","new");
061024 12:03:52 001 Xrd: Create: (C) 2004 SLAC INFN XrdClient 0.3
root [3] h->Write();
root [4] f->Write();
root [5] f->Close();
root [6] 061101 15:57:42 14991 Xrd: XrdClientSock::RecvRaw: Error reading from socket: Success
061101 15:57:42 14991 Xrd: XrdClientMessage::ReadRaw: Error reading header (8 bytes)
Closing remote xroot files that live in dCache produces this warning, but it has absolutely no effect on subsequent ROOT commands. It happens because dCache closes all TCP connections after finishing a file transfer, while the SLAC xroot client expects to keep them open for later reuse.
To read it back into ROOT from dCache:
root [7] TFile *reopen = TXNetFile ("root://<door_hostname>//pnfs/<example.org>/data/test.root","read");
root [8] reopen->ls();
TXNetFile** //pnfs/<example.org>/data/test.root
TXNetFile* //pnfs/<example.org>/data/test.root
KEY: TH1F testhisto;1 test
Pool memory requirements
In general, each xroot connection to the pool will require approximately 8 MiB of Java direct memory. This is a consequence of several factors. First, the default XRD_CPCHUNKSIZE is 8 MiB, and the xrootd client requires the server to read off the entire frame + body of a message on the connection, which dCache currently holds in memory as a single request. Second, our Netty implementations of both the xroot framework and the mover channel use the default preference for Java NIO ("new I/O" or "non-blocking I/O"), which avoids buffer-to-buffer copying from user to kernel space and back, so the direct memory requirements are greater.
This would mean that to sustain 1000 concurrent connections, you would need a minimum of 8 GiB of direct memory, e.g.:
[${host.name}-5Domain]
dcache.java.memory.heap=...
dcache.java.memory.direct=8192m
If these are all write requests, the requirement is actually pushed up to around 12 GiB.
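The arithmetic behind the 8 GiB figure can be checked quickly (a sketch using only the numbers quoted above):

```shell
# ~8 MiB of direct memory per xroot connection, 1000 concurrent connections.
connections=1000
mib_per_connection=8
total_mib=$((connections * mib_per_connection))
echo "$total_mib"   # ~8 GiB, matching dcache.java.memory.direct=8192m
```
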
There are several possible approaches to mitigating the allocation of this much memory on each pool. The first would be to lower the XRD_CPCHUNKSIZE so that the client sends smaller frames. This would allow more concurrent sharing of direct memory. Obviously, this is not uniformly enforceable on the connecting clients, so in essence it is not a real solution.
The second possibility is to try to lower the corresponding dCache max frame size. By default, this is also 8 MiB (to match the xrootd native default).
Going from 8 MiB to 128 KiB, for instance, by doing
pool.mover.xrootd.frame-size=131072
will also cut down individual connection consumption; this, however, is mostly useful for reads, since writes are currently implemented to read off the entire xroot frame (and thus the entire chunk sent by the client).
For reads, the following comparison should serve to illustrate what the lower buffer sizes can accomplish:
70 clients/connections
8M frame/buffer size
PEAK DIRECT MEMORY USAGE = 720 MiB
vs.
70 clients/connections
128K frame/buffer size
PEAK DIRECT MEMORY USAGE = 16 MiB
So the savings here are quite significant.
As mentioned above, however, writes profit less from manipulation of the frame size. Writing 100 MB files in parallel, with 1 GiB of direct memory allocated to the JVM, for instance:
8 MiB: out of memory at 55 concurrent transfers
vs.
128 KiB: out of memory at 82 concurrent transfers
In either case, it does not appear that individual bandwidth is greatly affected:
         8 MiB         128 KiB
read:    111.1 MB/s    111.1 MB/s
write:   70.42 MB/s    69.93 MB/s
Highly concurrent transfers, however, may show a somewhat more pronounced effect.
The third and final approach to handling connection concurrency is to limit the number of active movers on the pool by creating protocol-specific I/O queues.
As an example, the following would configure an xroot-specific queue limited to 1000 movers (be sure to run save to write these settings to the setup file):
\s <pools> mover queue create XRootD -order=LIFO
\s <pools> mover set max active -queue=XRootD 1000
\s <pools> jtm set timeout -queue=XRootD -lastAccess=14400 -total=432000
\s <pools> save
One would also need to add the following corresponding property to the dCache configuration on the door(s):
xrootd.mover.queue=XRootD
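In the door's layout file, that property might be placed as follows (a sketch, with the domain name taken from the earlier examples):

```
[xrootd-${host.name}Domain/xrootd]
xrootd.mover.queue=XRootD
```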
It is suggested that the first approach to protecting pools from out-of-memory errors be some combination of increased allocation and throttling via I/O queues; decreasing the pool.mover.xrootd.frame-size should be reserved as a last resort.
XROOT security
Read-Write access
By default, xroot access to dCache is restricted to read-only, because plain xroot is completely unauthenticated. A typical error message on the client side, if the server is read-only, looks like:
xrdcp -d 1 /bin/sh root://ford.desy.de//pnfs/desy.de/data/xrd_test2
|Setting debug level 1
|061024 18:43:05 001 Xrd: main: (C) 2004 SLAC INFN xrdcp 0.2 beta
|061024 18:43:05 001 Xrd: Create: (C) 2004 SLAC INFN XrdClient kXR_ver002+kXR_asyncap
|061024 18:43:05 001 Xrd: ShowUrls: The converted URLs count is 1
|061024 18:43:05 001 Xrd: ShowUrls: URL n.1: root://ford.desy.de:1094//pnfs/desy.de/data/asdfas.
|061024 18:43:05 001 Xrd: Open: Access to server granted.
|061024 18:43:05 001 Xrd: Open: Opening the remote file /pnfs/desy.de/data/asdfas
|061024 18:43:05 001 Xrd: XrdClient::TryOpen: doitparallel=1
|061024 18:43:05 001 Xrd: Open: File open in progress.
|061024 18:43:06 5819 Xrd: SendGenCommand: Server declared: Permission denied. Access is read only.(error code: 3003)
|061024 18:43:06 001 Xrd: Close: File not opened.
|Error accessing path/file for root://ford//pnfs/desy.de/data/asdfas
To enable read-write access, add the following line to ${dCacheHome}/etc/dcache.conf
xrootdIsReadOnly=false
and restart any domain(s) running an xrootd door.
Please note that due to the unauthenticated nature of this access mode, files can be written and read to/from any subdirectory in the pnfs namespace (including the automatic creation of parent directories). If there is no user information at the time of the request, new files and subdirectories created through xroot will inherit the UID/GID of their parent directory. The user used for this can be configured via the xrootd.authz.user property.
Permitting read/write access on selected directories
To overcome the security issue of uncontrolled xroot read and write access mentioned in the previous section, it is possible to restrict read and write access on a per-directory basis (including subdirectories).
To activate this feature, a colon-separated list containing the full paths of authorized directories must be added to /etc/dcache/dcache.conf. You will need to specify the read and write permissions separately.
xrootd.authz.read-paths=/pnfs/<example.org>/rpath1:/pnfs/<example.org>/rpath2
xrootd.authz.write-paths=/pnfs/<example.org>/wpath1:/pnfs/<example.org>/wpath2
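To illustrate the colon-separated list format, here is a small sketch (placeholder paths as above) splitting such a value into its individual directory prefixes:

```shell
# Placeholder value in the same format as xrootd.authz.read-paths.
paths="/pnfs/example.org/rpath1:/pnfs/example.org/rpath2"
# Each colon-separated entry is one allowed directory prefix.
first=$(echo "$paths" | tr ':' '\n' | head -n 1)
echo "$first"
```
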
A restart of the xrootd door is required for the changes to take effect. As soon as any of the above properties are set, all read or write requests to directories not matching the allowed path lists will be refused. Symlinks, however, are not restricted to these prefixes.
TLS
As of 6.2, dCache supports TLS according to the protocol requirements specified by the xroot Protocol 5.0.
The xroot protocol allows a negotiation between the client and server as to when to initiate the TLS handshake. The server-side options are explained in the xrootd.properties file. Currently supported is the ability to require TLS on all connections to the door and pool, or to make TLS optional, depending on the client. For the former, one can also specify whether to begin TLS before login or after. The “after” option is useful in the case of TLS being used with a strong authentication protocol such as GSI, in which case it would make sense not to protect the login as GSI already requires a Diffie-Hellman handshake to protect the passing of credential information.
For third-party transfers, the dCache embedded client (on the destination server) will initiate TLS if (a) TLS is available on the destination pool (not turned off), and (b) the source server supports or requires it. In the case that the source does not support TLS, but the triggering client has expressed 'tls.tpc=1' (requiring TLS on TPC), the connection will fail.
As of 6.2, dCache has not yet implemented the GP file or data channel options; stay tuned for further developments in those areas.
A note on TLS configuration for the pools
Given that pools may need to service clients that do not support TLS (they may, for instance, be using a non-xroot protocol), it is probably not practical to make the pools require TLS by setting pool.mover.xrootd.security.tls.mode=STRICT
.
Token-based authorization
The dCache xroot implementation includes a generic mechanism to plug in different authorization handlers.
SciTokens
As of 6.2, xroot authorization has been integrated with gPlazma SciToken support.
Add
auth sufficient scitoken
to the gplazma.conf configuration file in order to enable authorization.
The token for xroot is passed as an ‘authz’ query element on paths. For example,
xrdcp -f xroots:///my-xroot-door.example.org:1095///pnfs/fs/usr/scratch/testdata?authz=eyJ0eXAiOiJKV1QiLCJhb... /dev/null
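The structure of such a URL, with the token appended as the 'authz' CGI element, can be sketched as follows (hostname, path, and token are placeholders):

```shell
# Placeholder values; the real token is a JWT bearer token.
door=my-xroot-door.example.org
port=1095
path=/pnfs/fs/usr/scratch/testdata
token="<bearer-token>"
url="xroots://${door}:${port}/${path}?authz=${token}"
echo "$url"
```
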
dCache will support different tokens during the same client session, as well as different tokens on source and destination endpoints in a third-party transfer.
To enable scitoken authorization on an xroot door, use “authz:scitokens” to load the authorization plugin.
Here is an example layout configuration:
##
# 1095: TLS LOGIN, TLS mode=STRICT, SCITOKEN AUTHZ
##
[xrootd-${host.name}Domain]
[xrootd-${host.name}Domain/xrootd]
xrootd.cell.name=xrootd-${host.name}
xrootd.net.port=1095
xrootd.authz.write-paths=/pnfs/fs/usr/test
xrootd.authz.read-paths=/pnfs/fs/usr/test
xrootd.security.tls.mode=STRICT
xrootd.security.tls.require-login=true
xrootd.plugins=gplazma:none,authz:scitokens
xrootd.plugin!scitokens.strict=true
Note that the above configuration enforces TLS (STRICT); this is highly recommended with SciToken authorization, as the token hash is not secure unless encrypted. While it is not strictly required to start TLS at login, since the actual token is not passed until a request involving a path (in this case, 'open'), and xrootd.security.tls.require-session=true would have been sufficient, the extra protection at login will of course not hurt.
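A sketch of the weaker variant mentioned in the original, protecting the session but not the login, would replace the TLS lines in the layout with:

```
xrootd.security.tls.mode=STRICT
xrootd.security.tls.require-session=true
```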
The xroot protocol states that the server can specify supporting different authentication protocols via a list which the client should try in order. While our library code allows for the chaining of multiple such handlers, dCache currently only supports one protocol, either GSI or none, at a time.
Authorization, on the other hand, takes place after the authentication phase; the current library code assumes that the authorization module it loads is the only procedure allowed, and there is no provision for passing a failed authorization on to a successive handler on the pipeline.
We thus make provision here for failing over to "standard" behavior via xrootd.plugin!scitokens.strict. If it is true, the presence of a scitoken is required. If it is false and the token is missing, whatever restrictions are already in force from the login apply.
A Note on Pool configuration with Scitokens
On the pools, TLS remains optional by default. What this means, however, is that unless the client requests TLS, it will not be turned on.
SECURITY CONSIDERATION
In order to protect the bearer token, the client should always require TLS by using 'xroots' as the URL scheme. This is because the xrootd clients continue to pass the token in the open request's path query regardless of whether the server supports TLS or has indicated that it should be turned on.
By using 'xroots', the client guarantees TLS will be on at login, or the connection will fail.
Scitokens (JWT) and the ZTN protocol
Scitokens are for authorization; however, the XrootD protocol also defines an authentication equivalent, ZTN, where a token is passed as a credential at authentication (just after login).
Originally, this was a countermeasure taken to prevent stray clients from accessing the vanilla server via methods where there was no path (and thus no CGI authz element). However, recent changes to the vanilla client and server have allowed a ZTN token to be used as a fallback authorization token as well, without further need to express a base64-encoded token as part of the path query.
dCache now supports this strategy. To illustrate, here are two different door configurations.
This one:
[xrootd-1095-${host.name}Domain]
dcache.java.memory.heap=2048m
dcache.java.memory.direct=2048m
[xrootd-1095-${host.name}Domain/xrootd]
xrootd.cell.name=xrootd-1095-${host.name}
xrootd.net.port=1095
xrootd.authz.write-paths=/
xrootd.authz.read-paths=/
xrootd.plugins=gplazma:none,authz:scitokens
xrootd.security.tls.mode=STRICT
xrootd.security.tls.require-login=true
xrootd.plugin!scitokens.strict=true
indicates that any client will be allowed through with anonymous credentials (NOBODY) at authentication time, but ultimately will need a token on the path in order to be authorized, with the subject and claim being converted into dCache user and restrictions at the time of the request containing the path.
This configuration:
[xrootd-1095-${host.name}Domain]
dcache.java.memory.heap=2048m
dcache.java.memory.direct=2048m
[xrootd-1095-${host.name}Domain/xrootd]
xrootd.cell.name=xrootd-1095-${host.name}
xrootd.net.port=1095
xrootd.authz.write-paths=/
xrootd.authz.read-paths=/
xrootd.plugins=gplazma:ztn,authz:scitokens
xrootd.security.tls.mode=STRICT
xrootd.security.tls.require-login=true
xrootd.plugin!scitokens.strict=false
xrootd.authz.anonymous-operations=FULL
on the other hand, turns on ZTN in the door. For seamless functioning, this should be coupled with a loosening of the strict requirement on the CGI/path token, and FULL anonymous access (which just means that an anonymous user will be allowed to try all operations, with underlying acls determining whether these succeed or not). With this configuration, the client will need to be provided a ZTN token via an environment variable, e.g.,
XDG_RUNTIME_DIR=/run/user/8773
The client will look for a file named 'bt_<uid>' in that directory. With that token in hand, authorization will also take place. A second token can still be passed as the path query CGI element (authz=Bearer%20), and will override the original if present, but this is treated as optional, not required.
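A small sketch of where the client looks for the bearer token file, assuming the runtime directory from the example above and a matching uid:

```shell
# Values assumed from the example above; 'bt_<uid>' is the token file name.
XDG_RUNTIME_DIR=/run/user/8773
uid=8773
token_file="${XDG_RUNTIME_DIR}/bt_${uid}"
echo "$token_file"
```
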
dCache also supports token-based authorization as suggested in http://people.web.psi.ch/feichtinger/doc/authz.pdf.
The first thing to do is to set up the keystore. The keystore file specifies all RSA keypairs used within the authorization process and has exactly the same syntax as in the native xrootd token-authorization implementation. In this file, each line beginning with the keyword KEY corresponds to a certain Virtual Organisation (VO) and specifies the remote public key (owned by the file catalogue) and the local private key belonging to that VO. A line containing the statement "KEY VO:*" defines a default keypair that is used as a fallback if no VO is specified in token-enhanced xrootd requests. Lines not starting with the KEY keyword are ignored. A template can be found in /usr/share/dcache/examples/xrootd/keystore.
The keys themselves have to be converted into a certain format in order to be loaded into the authorization plugin. dCache expects both keys to be binary DER-encoded (Distinguished Encoding Rules for ASN.1). Furthermore, the private key must be PKCS #8-compliant and the public key must follow the X.509 standard.
The following example demonstrates how to create and convert a keypair using OpenSSL:
Generate new RSA private key
openssl genrsa -rand 12938467 -out key.pem 1024
Create certificate request
openssl req -new -inform PEM -key key.pem -outform PEM -out certreq.pem
Create certificate by self-signing certificate request
openssl x509 -days 3650 -signkey key.pem -in certreq.pem -req -out cert.pem
Extract public key from certificate
openssl x509 -pubkey -in cert.pem -out pkey.pem
openssl pkcs8 -in key.pem -topk8 -nocrypt -outform DER -out <new_private_key>
openssl enc -base64 -d -in pkey.pem -out <new_public_key>
Only the last two lines perform the actual conversion; you can therefore skip the previous steps if you already have a keypair. Make sure that your keystore file correctly points to the converted keys.
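As a runnable sketch of the conversion, the following produces both DER-encoded keys from a throwaway keypair (2048-bit instead of the legacy 1024; the public key is extracted directly from the private key with `openssl rsa -pubout` rather than via a self-signed certificate, which yields the same X.509 SubjectPublicKeyInfo format):

```shell
# Work in a temporary directory with a throwaway key.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/key.pem" 2048 2>/dev/null
# PKCS #8, binary DER private key -- the format dCache expects.
openssl pkcs8 -topk8 -nocrypt -in "$tmp/key.pem" -outform DER -out "$tmp/private.der"
# X.509 (SubjectPublicKeyInfo), binary DER public key.
openssl rsa -in "$tmp/key.pem" -pubout -outform DER -out "$tmp/public.der" 2>/dev/null
ls "$tmp"
```
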
To enable the plugin, it is necessary to add the following two lines to the file /etc/dcache/dcache.conf
, so that it looks like
xrootdAuthzPlugin=org.dcache.xrootd.security.plugins.tokenauthz.TokenAuthorizationFactory
xrootdAuthzKeystore=<Path_to_your_Keystore>
After doing a restart of dCache, any requests without an appropriate token should result in an error saying “authorization check failed: No authorization token found in open request, access denied.(error code: 3010)”.
If both token-based authorization and read-only access are activated, the read-only restriction will dominate (local settings take precedence over remote file-catalogue permissions).
Strong authentication
The xroot implementation in dCache includes a pluggable authentication framework. To control which authentication mechanism is used by xroot, add the xrootdAuthNPlugin option to your dCache configuration and set it to the desired value.
Example:
For instance, to enable GSI authentication in xroot, add the following line to /etc/dcache/dcache.conf:
xrootdAuthNPlugin=gsi
When using GSI authentication, depending on your setup, you may or may not want dCache to fail if the host certificate chain cannot be verified against trusted certificate authorities. Whether dCache performs this check can be controlled by setting the option dcache.authn.hostcert.verify:
dcache.authn.hostcert.verify=true
Authorization of the user information obtained by strong authentication is performed by contacting the gPlazma service. Please refer to Chapter 10, Authorization in dCache for instructions about how to configure gPlazma.
SECURITY CONSIDERATION
In general, GSI on xroot is not secure. It does not provide confidentiality and integrity guarantees and hence does not protect against man-in-the-middle attacks.
Precedence of security mechanisms
The previously explained methods to restrict access via xroot can also be used together. The precedence applied in that case is as follows:
NOTE
The xrootd door can be configured to use either token authorization or strong authentication with gPlazma authorization. A combination of both is currently not possible.
The permission check executed by the authorization plugin (if one is installed) is given the lowest priority, because it can be controlled by a remote party. E.g., in the case of token-based authorization, access control is determined by the file catalogue (global namespace).
The same argument holds for many strong authentication mechanisms: for example, both the GSI protocol and the Kerberos protocols require trust in remote authorities. However, this only affects user authentication, while authorization decisions can be adjusted by local site administrators by adapting the gPlazma configuration.
To allow the local site's administrators to override remote security settings, write access can be further restricted to a few directories (based on the local namespace, the pnfs). Setting xroot access to read-only has the highest priority, overriding all other settings.
Tried-hosts
Xrootd uses the path URL CGI elements "tried" and "triedrc" as hints to the redirector/manager not to reselect a data source because of some error condition or preference. dCache provides limited support for this attribute. In particular, it will honor it in the case that the indicated cause is a previously encountered error that suggests an I/O malfunction on the node.
The property xrootd.enable.tried-hosts is true by default. When it is off, the 'tried' element on the path is simply ignored. dCache also ignores the tried hosts when 'triedrc' is not provided, or when it is not 'enoent' or 'ioerr'. In the latter two cases, the xrootd door will forward the list of previously tried hosts to the Pool Manager and ask that they be excluded from pool selection.
See xrootd.properties
for further information.
Other configuration options
The xrootd door has several other configuration properties. You can configure various timeout parameters, the thread pool sizes on pools, queue buffer sizes on pools, the xroot root path, the xroot user and the xroot I/O queue. Full descriptions of their effects can be found in /usr/share/dcache/defaults/xrootd.properties.
XROOTD Third-party Transfer
Starting with dCache 4.2, native third-party transfers between dCache and another xroot server (including another dCache door) are possible. These can be done either in unauthenticated mode, or with GSI (X509) authentication, using the client provided by SLAC (xrdcp or xrdcopy).
To enforce a third-party copy, one must execute the transfer using:
xrdcp --tpc only <source> <destination>
One can also try third-party copy and fail over to a one-hop two-party transfer (through the client) by using:
xrdcp --tpc first <source> <destination>
TPC from dCache to another xroot server
Very few changes in the dCache door were needed to accomplish this. If dCache is merely to serve as file source, then all that is needed is to update to version 4.2+ on the nodes running the xrootd doors.
TPC from another xroot server to dCache, or between dCache instances
As per the protocol, the destination pulls/reads the file from the source and writes it locally to a selected pool. This is achieved by an embedded third-party client which runs on the pool. Hence, using dCache as destination means the pools must also be running dCache 4.2+.
Pools without the additional functionality provided by 4.2+ will not be able to act as the destination in a third-party transfer, and a "tpc not supported" error will be reported if --tpc only is specified.
Changes to dCache configuration for authenticated (GSI) transfers
For dCache as source, gPlazma configuration is identical to that needed for normal two-party reads and writes, with the caveat that the necessary destination DNs must be mapped on the dCache end. This will depend upon the nature of the proxy credential being used by the source.
To use dCache as TPC destination, some additional steps need to be taken.
First, for all pools that will receive files through xroot TPC, the GSI service provider plugin must be loaded by including this in the configuration or layout:
pool.mover.xrootd.tpc-authn-plugins=gsi
Credential (proxy) delegation
With the 5.2.0 release, full GSI (X509) credential delegation is available in dCache. This means that the dCache door, when it acts as destination, will ask the client to sign a delegation request.
If both endpoints support delegation (dCache 5.2+, XrootD 4.9+), nothing further need be done by way of configuration. dCache keeps the proxy in memory and discards it when the session is disconnected.
To indicate that you wish delegation, the xroot client requires:
xrdcp --tpc delegate only <source> <destination>
or
xrdcp --tpc delegate first <source> <destination>
Like the xrootd server and client, dCache can determine whether the endpoint with which it is communicating supports delegation, and fail over to the pre-delegation protocol if not.
In the case of communication with pre-4.9 xrootd or pre-5.2 dCache instances, or when using a pre-4.9 xroot client, one can still make use of third-party copy with a few extra configuration steps.
There are two ways of providing authentication capability to the pools in this case:
- Generate a proxy from a credential that will be recognized by the source, and arrange to have it placed (and periodically refreshed) on each pool that may be the recipient of files transferred via xrootd TPC. The proxy path must be indicated to dCache by setting this property:
xrootd.gsi.tpc.proxy.path={path-to-proxy}
- If this property is left undefined, dCache will auto-generate a proxy from the hostcert.pem/hostkey.pem of the node on which the pool is running. While this solution means no cron job is necessary to keep the proxy up to date, it is also rather clunky in that it requires the hostcert DNs of all the pools to be mapped on the source server end.
Note: For reading the file in dCache (dCache as TPC source), the third-party server needs only a valid certificate issued by a recognized CA; anonymous read access is granted to files (even privately owned) on the basis of the rendezvous token submitted with the request.
Proxy delegation and host aliasing
A feature of the xrootd client is that it will refuse to delegate a proxy to a server endpoint if the hostname of the host credential is unverified.
This can occur if hostname aliasing is used but the host certificate was not issued with the proper SAN extensions. This is because the xrootd client by default does not trust the DNS service to resolve the alias.
In the case where dCache is the destination of a third-party transfer and the client does not delegate a proxy to the door, one may thus see an error on the pool due to the missing proxy. It is possible to configure dCache to attempt to generate a proxy from the pool host certificate in this case, but one may similarly see an error response from the source if the host DN is not mapped there.
Short of having the host certificate reissued with a SAN extension for the alias, DNS lookup can be forced in the client by setting the environment variable XrdSecGSITRUSTDNS:
0 - do not use DNS to aid in certificate hostname validation.
1 - use DNS, if needed, to validate certificate hostnames.
The default is 0.
WARNING: this is considered to be a security hole. The recommended solution is to issue the certificate with SAN extensions.
Please consult the xrootd.org documentation for further information; this policy may be subject to change in the future.
https://xrootd.slac.stanford.edu/doc/dev50/sec_config.htm
Client timeout control
The Third-party embedded client has a timer which will interrupt and return an error if the response from the server does not arrive after a given amount of time.
The default values for this can be controlled by the properties pool.mover.xrootd.tpc-server-response-timeout and pool.mover.xrootd.tpc-server-response-timeout.unit.
These are set to 2 seconds to match the aggressive behavior of the SLAC implementation. However, dCache also allows you to control this dynamically, using the admin command:
\s <xrootd-door> xrootd set server response timeout
This could conceivably be necessary under heavier load.
Signed hash verification support
The embedded third-party client will honor signed hash verification if the source server indicates it must be observed.
Starting with dCache 5.0, the dCache door/server also provides the option to enable signed hash verification.
However, there is a caveat here. Since dCache redirects reads from the door to a selected pool, and since the subsequent connection to the pool is unauthenticated (this has always been the case; the connection fails if the opaque id token dCache gives back to the client is missing), the only way to get signed hash verification on the destination-to-pool connection is to set the kXR_secOFrce flag. This means that the pool will then require unix authentication from the destination and that it will expect unencrypted hashes.
While the usefulness of unencrypted signed hash verification is disputable, the specification nevertheless provides for it, and this was the only way, short of encumbering our pool interactions with yet another GSI handshake, to allow for sigver on the dCache end at all, since the main subsequent requests (open, read, etc.) are made to the pool, not the door.
dCache 5.0 will provide the following properties to control security level and force unencrypted signing:
dcache.xrootd.security.level={0-4}
dcache.xrootd.security.force-signing={true,false}
In the case that the latter is set to true, and one anticipates xroot TPC transfers between two dCache instances or two dCache doors, one would also need to include the unix service provider plugin in all the relevant pool configurations:
pool.mover.xrootd.tpc-authn-plugins=gsi,unix