In order to enable the SRM SpaceManager you need to add the spacemanager service to your layout file
      
[<srm-${host.name}Domain>]
[<srm-${host.name}Domain>/srm]
[<srm-${host.name}Domain>/spacemanager]
and add (uncomment) the following definition in the file /etc/dcache/dcache.conf
      
srmSpaceManagerEnabled=true
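The change only takes effect once the affected domain has been restarted. A minimal sketch using the standard dcache script, restarting all domains (you can also pass just the name of the SRM domain from your layout):

[root] # dcache restart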
Each SRM space reservation is made against the total available disk space of a particular link group. If dCache is configured correctly, each byte of disk space that can be reserved belongs to one and only one link group. See the section called “SpaceManager configuration for Explicit Space Reservations” for a detailed description.
	  
Important
Make sure that no pool belongs to more than one pool group, no pool group belongs to more than one link and no link belongs to more than one link group.
If a space reservation is specified, the file will be stored in it (assuming the user has permission to do so in the name space).
Files written into a space made within a particular link group will end up on one of the pools belonging to this link group. The difference between the link group’s available space and the sum of all the current space reservation sizes is the available space in the link group.
The total space in dCache that can be reserved is the sum of the available spaces of all link groups.
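As a hedged illustration of this hierarchy, a PoolManager setup in which a single pool feeds exactly one pool group, one link and one link group could look roughly like the sketch below. The names pool1, spacemanager_poolGroup, spacemanager_writeLink and spacemanager_writeLinkGroup are invented for this example, and the unit groups any-store and world-net are assumed to exist as in the default setup; see the section called “SpaceManager configuration for Explicit Space Reservations” for the authoritative configuration.

# one pool, member of exactly one pool group
psu create pool pool1
psu create pgroup spacemanager_poolGroup
psu addto pgroup spacemanager_poolGroup pool1
# one link that points to that pool group only
psu create link spacemanager_writeLink any-store world-net
psu add link spacemanager_writeLink spacemanager_poolGroup
psu set link spacemanager_writeLink -readpref=10 -writepref=10 -cachepref=0 -p2ppref=-1
# one link group containing that link only; allow the file types you intend to store
psu create linkGroup spacemanager_writeLinkGroup
psu addto linkGroup spacemanager_writeLinkGroup spacemanager_writeLink
psu set linkGroup custodialAllowed spacemanager_writeLinkGroup true
psu set linkGroup nearlineAllowed spacemanager_writeLinkGroup true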
dCache can perform implicit space reservations for non-SRM transfers, SRM Version 1 transfers and for SRM Version 2.2 data transfers that are not given the space token explicitly. The parameter that enables this behavior is srmImplicitSpaceManagerEnabled, which is described in the section called “SRM configuration for experts”. If no implicit space reservation can be made, the transfer will fail.
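A sketch of the corresponding fragment in /etc/dcache/dcache.conf, assuming implicit reservations are wanted (set the property to false to disable the behavior):

srmSpaceManagerEnabled=true
srmImplicitSpaceManagerEnabled=true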
	  
In case of SRM version 1.1 data transfers, when the access latency and retention policy cannot be specified, and in case of SRM V2.2 clients, when the access latency and retention policy are not specified, default values will be used. First SRM will attempt to use the values of the AccessLatency and RetentionPolicy tags of the directory to which a file is being written. If the tags are present, their values will be used. If the tags are not present, the AccessLatency and RetentionPolicy will be set on the basis of the system-wide defaults, which are controlled by the DefaultRetentionPolicy and DefaultAccessLatencyForSpaceReservation variables in /etc/dcache/dcache.conf.
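For illustration, such system-wide defaults could be set in /etc/dcache/dcache.conf as follows; the values shown are examples only and have to match your site's storage policy:

DefaultRetentionPolicy=CUSTODIAL
DefaultAccessLatencyForSpaceReservation=NEARLINE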
	  
You can check whether the AccessLatency and RetentionPolicy tags are present by using the following command:
	  
[root] # /usr/bin/chimera-cli lstag /path/to/directory
Total: numberOfTags
tag1
tag2
..
AccessLatency
RetentionPolicy
If the output contains the lines AccessLatency and RetentionPolicy, then the tags are already present and you can get their actual values by executing the following commands, which are shown together with example outputs:
	  
Example:
[root] # /usr/bin/chimera-cli readtag /data/experiment-a AccessLatency
ONLINE
[root] # /usr/bin/chimera-cli readtag /data/experiment-a RetentionPolicy
CUSTODIAL
The valid AccessLatency values are ONLINE and NEARLINE; the valid RetentionPolicy values are REPLICA and CUSTODIAL.
	  
To create or change the values of the tags, please execute:
[root] # /usr/bin/chimera-cli writetag /path/to/directory AccessLatency "<New AccessLatency>"
[root] # /usr/bin/chimera-cli writetag /path/to/directory RetentionPolicy "<New RetentionPolicy>"
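For example, to tag the example directory from above for disk-only, replica-quality data (adjust the path and values to your needs):

[root] # /usr/bin/chimera-cli writetag /data/experiment-a AccessLatency "ONLINE"
[root] # /usr/bin/chimera-cli writetag /data/experiment-a RetentionPolicy "REPLICA"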
Note
Some clients also have default values, which are used when not explicitly specified by the user. In this case the server-side defaults will have no effect.
Note
If implicit space reservation is not enabled, the pools in the link groups will be excluded from consideration; only the remaining pools will be considered for storing the incoming data, and the classical pool selection mechanism will be used.