In this section, you will update the cluster created in Lab I and mount the Amazon FSx for Lustre filesystem created earlier in section a. of this lab.
source ~/environment/env_vars
export filesystem_id=${FSX_ID}
export filesystem_dns=$(aws fsx --region ${AWS_REGION} describe-file-systems --file-system-ids $filesystem_id --query "FileSystems[0].DNSName" --output text)
export filesystem_mountname=$(aws fsx --region ${AWS_REGION} describe-file-systems --file-system-ids $filesystem_id --query "FileSystems[0].LustreConfiguration.MountName" --output text)
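Before generating the mount script, it is worth confirming that all three values were retrieved. The helper below is not part of the lab; the function name is my own, and it simply checks that each variable is non-empty and prints the mount target that will be used in the next step:

```shell
# Hypothetical sanity check: verify the FSx variables are set before
# writing the mount script. Uses bash indirect expansion (${!v}).
check_fsx_vars() {
  local v
  for v in filesystem_id filesystem_dns filesystem_mountname; do
    if [ -z "${!v}" ]; then
      echo "ERROR: $v is empty -- re-run the export commands above" >&2
      return 1
    fi
  done
  # Print the Lustre mount target assembled from the two DNS/mount-name values.
  echo "Mount target: ${filesystem_dns}@tcp:/${filesystem_mountname}"
}

# Usage:
# check_fsx_vars
```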
cat > mount-fsx.sh << EOF
#!/bin/bash
sudo mkdir -p /fsx
sudo mount -t lustre -o noatime,flock ${filesystem_dns}@tcp:/${filesystem_mountname} /fsx
EOF
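Because the heredoc delimiter EOF is unquoted, ${filesystem_dns} and ${filesystem_mountname} are expanded when the file is written, so the uploaded script contains literal values rather than variable references. With hypothetical example values, the generated mount-fsx.sh would look like:

```shell
#!/bin/bash
# (values below are hypothetical examples)
sudo mkdir -p /fsx
sudo mount -t lustre -o noatime,flock fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com@tcp:/abcdefgh /fsx
```

The noatime option avoids updating access-time metadata on every read, and flock enables POSIX file locking, which some HPC applications require.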
aws s3 cp mount-fsx.sh s3://${BUCKET_NAME_DATA}
export S3PATH=s3://${BUCKET_NAME_DATA}/mount-fsx.sh
yq -i '(.Scheduling.SlurmQueues[0].CustomActions.OnNodeConfigured.Script=env(S3PATH)) |
(.Scheduling.SlurmQueues[0].Iam.AdditionalIamPolicies[0]={"Policy": "arn:aws:iam::aws:policy/AmazonFSxFullAccess"}) |
(.Scheduling.SlurmQueues[0].Iam.AdditionalIamPolicies[1]={"Policy": "arn:aws:iam::aws:policy/AmazonS3FullAccess"})' ~/environment/my-cluster-config.yaml
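After the yq edit, the first queue in my-cluster-config.yaml should contain a fragment like the following (the queue name and bucket name here are illustrative placeholders, not values from your environment):

```yaml
Scheduling:
  SlurmQueues:
    - Name: queue1                          # existing queue from Lab I (illustrative)
      CustomActions:
        OnNodeConfigured:
          Script: s3://<your-data-bucket>/mount-fsx.sh
      Iam:
        AdditionalIamPolicies:
          - Policy: arn:aws:iam::aws:policy/AmazonFSxFullAccess
          - Policy: arn:aws:iam::aws:policy/AmazonS3FullAccess
```

The OnNodeConfigured script runs on each compute node after it boots, which is what mounts /fsx fleet-wide; the IAM policies let the nodes fetch the script from S3 and interact with FSx.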
Changes under the Scheduling section require a stopped compute fleet, so stop it before applying the update:
pcluster update-compute-fleet -n hpc-cluster-lab --status STOP_REQUESTED --region ${AWS_REGION}
Once the fleet has stopped, update the cluster:
pcluster update-cluster -n hpc-cluster-lab -c ~/environment/my-cluster-config.yaml --region ${AWS_REGION} --suppress-validators ALL
pcluster describe-cluster -n hpc-cluster-lab --query clusterStatus --region ${AWS_REGION}
Once the cluster is updated, you will see an UPDATE_COMPLETE status.
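The describe-cluster command returns immediately, so you may need to run it several times while the update is in progress. A small helper like the one below (not part of the lab; the function name and the WAIT_ATTEMPTS/WAIT_INTERVAL knobs are my own) re-runs any command until its output contains a target string:

```shell
# Hypothetical polling helper: re-run a command until its output contains
# the target string, or give up after WAIT_ATTEMPTS tries.
wait_for_status() {
  local target="$1"; shift
  local attempts="${WAIT_ATTEMPTS:-60}" interval="${WAIT_INTERVAL:-10}"
  local i out
  for ((i = 0; i < attempts; i++)); do
    out="$("$@")"
    if [[ "$out" == *"$target"* ]]; then
      echo "$out"
      return 0
    fi
    sleep "$interval"
  done
  echo "Timed out waiting for $target" >&2
  return 1
}

# Example usage against the cluster from this lab:
# wait_for_status UPDATE_COMPLETE \
#   pcluster describe-cluster -n hpc-cluster-lab --query clusterStatus --region ${AWS_REGION}
```

The same helper can be reused below to wait for the compute fleet to reach RUNNING.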
pcluster update-compute-fleet -n hpc-cluster-lab --status START_REQUESTED --region ${AWS_REGION}
pcluster describe-compute-fleet -n hpc-cluster-lab --query status --region ${AWS_REGION}
You should see a RUNNING status after restarting the compute fleet.
You have now successfully mounted the Lustre filesystem on the cluster. In the next section, you will monitor the filesystem and learn more about the HSM (Hierarchical Storage Management) capabilities between Amazon FSx for Lustre and Amazon S3.
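If you want to confirm the mount before moving on, you can run a quick check from a compute node. This is a sketch, not part of the lab: it assumes Slurm's srun is available on the head node and that the queue from Lab I can launch a node.

```shell
# Verify the Lustre mount from a compute node (illustrative commands):
srun -N 1 df -h -t lustre      # should list /fsx with the FSx capacity
srun -N 1 mount -t lustre      # shows the mount source and options (noatime, flock)
```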