Usually you do not want to "hardcode" an unconditional promotion attempt of DRBD resources, because you usually cannot know a priori whether this instance of DRBD will (yet) have access to good data.
Typical setups use a cluster manager like Pacemaker, or the less feature-rich but also less complex drbd-reactor, to coordinate promotion attempts and service starts.
But in situations where you "know" that you always want this node to promote and use DRBD, and the peer(s) are never going to take over but are only used for disaster recovery ("DR") purposes, this target unit may be useful.
It is intended to be used as a dependency of any mount or other use of the specific DRBD resource. The implicit dependency on drbd@RESNAME.service will configure DRBD; an optional drbd-lvchange@RESNAME.service can be used to attempt to activate the backing logical volumes first. The optional (but in this scenario necessary) drbd-wait-promotable@RESNAME.service is then used to wait for DRBD to connect to its peer(s) and establish access to good data.
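The resulting ordering can be sketched as a systemd drop-in on the consuming mount unit. The drop-in path, the mnt-point.mount unit name, and the drbd-services@RESNAME.target naming are illustrative assumptions; adjust them to your actual mount point and this page's unit name:

```ini
# /etc/systemd/system/mnt-point.mount.d/drbd.conf
# Hypothetical drop-in: only attempt the mount once the DRBD target
# (and, through it, drbd@RESNAME.service and
# drbd-wait-promotable@RESNAME.service) has come up.
[Unit]
Requires=drbd-services@RESNAME.target
After=drbd-services@RESNAME.target
```

In practice the same effect is usually achieved directly from /etc/fstab via the x-systemd.requires= mount option, which makes systemd generate an equivalent dependency for the mount unit.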
Assuming you have a DRBD resource named webdata, its backing devices being LVM logical volumes, with an XFS file system on one volume showing up as /dev/drbd0, the following should make your boot sequence successfully mount that DRBD device to /mnt/point (unless DRBD really finds no access to good data in time, or some peer is already Primary):
systemctl enable drbd-wait-promotable@webdata.service
systemctl enable drbd-services@webdata.target
echo "/dev/drbd0 /mnt/point xfs defaults,nofail,x-systemd.requires=drbd-services@webdata.target 0 0" >> /etc/fstab
LINBIT HA Solutions GmbH https://linbit.com
drbd-lvchange@.service(7), drbd.service(7), drbd@.service(7), drbd-wait-promotable@.service(7).