Reading and writing data to and from a dCache instance can be done with a number of protocols. After a standard installation, these protocols are dCap, GSIdCap, and GridFTP. In addition, dCache comes with an implementation of the SRM protocol, which negotiates the actual data transfer protocol.
Create the root of the Chimera namespace and a world-writable directory by

[root] # /usr/bin/chimera-cli mkdir /data
[root] # /usr/bin/chimera-cli mkdir /data/world-writable
[root] # /usr/bin/chimera-cli chmod /data/world-writable 777
To use WebDAV you need to define a WebDAV service in your layout file. You can define this service in an extra domain, e.g. [webdavDomain], or add it to another domain. Add

[webdavDomain]
[webdavDomain/webdav]
webdavAnonymousAccess=FULL

to the file /etc/dcache/layouts/mylayout.conf.
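The layout lines above can be appended from the command line; a minimal sketch (LAYOUT defaults to a scratch path here so the snippet is safe to try; on a real dCache host it would be /etc/dcache/layouts/mylayout.conf):

```shell
# Append the WebDAV domain definition to the layout file.
# LAYOUT uses a scratch path so this is safe to run anywhere.
LAYOUT=${LAYOUT:-/tmp/mylayout.conf}
cat >> "$LAYOUT" <<'EOF'
[webdavDomain]
[webdavDomain/webdav]
webdavAnonymousAccess=FULL
EOF
grep '^\[webdavDomain\]' "$LAYOUT"
```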
Note

Depending on the client you might need to set webdav.redirect.on-read=false and/or webdav.redirect.on-write=false.
# ---- Whether to redirect GET requests to a pool
#
# If true, WebDAV doors will respond with a 302 redirect pointing to
# a pool holding the file. This requires that a pool can accept
# incoming TCP connections and that the client follows the
# redirect. If false, data is relayed through the door. The door
# will establish a TCP connection to the pool.
#
(one-of?true|false)webdav.redirect.on-read=true

# ---- Whether to redirect PUT requests to a pool
#
# If true, WebDAV doors will respond with a 307 redirect pointing to
# a pool to which to upload the file. This requires that a pool can
# accept incoming TCP connections and that the client follows the
# redirect. If false, data is relayed through the door. The door
# will establish a TCP connection to the pool. Only clients that send
# a Expect: 100-Continue header will be redirected - other requests
# will always be proxied through the door.
#
(one-of?true|false)webdav.redirect.on-write=true
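If your client cannot follow redirects, the two defaults can be overridden; a minimal sketch of appending the overrides (CONF points at a scratch path here so it is safe to run; on a real host the overrides would go into /etc/dcache/dcache.conf, followed by a restart of the WebDAV domain):

```shell
# Disable pool redirection so all data is proxied through the door.
# CONF uses a scratch path so this is safe to run anywhere.
CONF=${CONF:-/tmp/dcache.conf}
cat >> "$CONF" <<'EOF'
webdav.redirect.on-read=false
webdav.redirect.on-write=false
EOF
grep 'webdav\.redirect' "$CONF"
```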
Now you can start the WebDAV domain

[root] # dcache start webdavDomain

and access your files via http://<webdav-door.example.org>:2880 with your browser.
You can also connect your file manager to the WebDAV server and copy a file into your dCache.
To use curl to copy a file into your dCache you will need to set webdav.redirect.on-write=false.
Example:
Write the file test.txt

[root] # curl -T test.txt http://webdav-door.example.org:2880/data/world-writable/curl-testfile.txt

and read it

[root] # curl http://webdav-door.example.org:2880/data/world-writable/curl-testfile.txt
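The round trip can be checked end to end by comparing the uploaded and downloaded files. A minimal sketch: the upload/download helpers for a real door would wrap curl against your WebDAV URL (the commented lines, using the example hostname from above, are assumptions); here plain local copies stand in for them so the check itself can be run anywhere:

```shell
# Check a copy-in/copy-out round trip by comparing the files byte for byte.
# For a real WebDAV door the helpers would be something like:
#   upload()   { curl -sf -T "$1" "http://webdav-door.example.org:2880$2"; }
#   download() { curl -sf -o "$2" "http://webdav-door.example.org:2880$1"; }
roundtrip() {
    src=$1 remote=$2 back=$3
    upload "$src" "$remote" || return 1
    download "$remote" "$back" || return 1
    cmp -s "$src" "$back" && echo "round trip OK" || echo "MISMATCH"
}
# Stand-in transfers using a local scratch directory:
upload()   { cp "$1" "/tmp/fake-pool$2"; }
download() { cp "/tmp/fake-pool$1" "$2"; }
mkdir -p /tmp/fake-pool/data/world-writable
echo "hello dcache" > /tmp/in.txt
roundtrip /tmp/in.txt /data/world-writable/t.txt /tmp/out.txt   # prints "round trip OK"
```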
dCache can also be used with a mounted file system. Before mounting the name space you need to edit the /etc/exports file. Add the lines

/ localhost(rw)
/data

stop the portmapper

[root] # /etc/init.d/portmap stop
Stopping portmap: portmap

and restart dCache.

[root] # dcache restart

Now you can mount Chimera.

[root] # mount localhost:/ /mnt

With the root of the namespace mounted you can establish wormhole files so dCap clients can discover the dCap doors.

[root] # mkdir /mnt/admin/etc/config/dCache
[root] # touch /mnt/admin/etc/config/dCache/dcache.conf
[root] # touch /mnt/admin/etc/config/dCache/'.(fset)(dcache.conf)(io)(on)'
[root] # echo "<dcache.example.org>:22125" > /mnt/admin/etc/config/dCache/dcache.conf

Create the directory in which the users are going to store their data and change to this directory.

[root] # mkdir -p /mnt/data
[root] # cd /mnt/data
To be able to use dCap you need to have the dCap door running in a domain.

Example:

[dCacheDomain]
[dCacheDomain/dcap]

For this tutorial install dCap on your worker node. This can be the machine where your dCache is running. Get the gLite repository (which contains dCap) and install dCap using yum.

[root] # cd /etc/yum.repos.d/
[root] # wget http://grid-deployment.web.cern.ch/grid-deployment/glite/repos/3.2/glite-UI.repo
[root] # yum install dcap
If you have not already done so above, create the root of the Chimera namespace and a world-writable directory for dCap to write into:

[root] # /usr/bin/chimera-cli mkdir /data
[root] # /usr/bin/chimera-cli mkdir /data/world-writable
[root] # /usr/bin/chimera-cli chmod /data/world-writable 777
Copy the data (here /bin/sh is used as example data) using the dccp command and the dCap protocol, giving the location of the file as a URL, where <dcache.example.org> is the host on which the dCache is running

[root] # dccp -H /bin/sh dcap://<dcache.example.org>/data/world-writable/my-test-file-1
[##########################################################################################] 100% 718 kiB
735004 bytes (718 kiB) in 0 seconds

and copy the file back.

[root] # dccp -H dcap://<dcache.example.org>/data/world-writable/my-test-file-1 /tmp/mytestfile1
[##########################################################################################] 100% 718 kiB
735004 bytes (718 kiB) in 0 seconds
To remove the file you will need to mount the namespace.
dCap can also be used with a mounted file system. The /etc/exports entries, the portmapper stop, the dCache restart, the Chimera mount, and the wormhole files have already been set up above, so simply change to the directory in which the users store their data.
[root] # cd /mnt/data
Now you can copy a file into your dCache

[root] # dccp /bin/sh my-test-file-2
735004 bytes (718 kiB) in 0 seconds

and copy the data back using the dccp command.

[root] # dccp my-test-file-2 /tmp/mytestfile2
735004 bytes (718 kiB) in 0 seconds

The file has been transferred successfully.

Now remove the file from the dCache.

[root] # rm my-test-file-2