chore: add more content to use cases and troubleshooting
This commit is contained in:
parent
9c11e3883a
commit
b7614eeea9
9 changed files with 282 additions and 297 deletions
77
.obsidian/workspace.json
vendored
(Auto-generated Obsidian workspace state: pane dimensions, open tabs, the active tab, the backlink pane target, and the lastOpenFiles list were updated by the editor.)
0
Deployments/Deploying with Docker Compose.md
Normal file
@ -1,10 +1,5 @@
OpenCHAMI offers deploying the microservices in several ways. This document covers the supported ways to deploy them.

## Podman Quadlets

### Discovering Nodes

#### Static Discovery
#### Dynamic Discovery

## Docker Compose

- [[Deploying with Podman Quadlets]]
- [[Deploying with Docker Compose]]

@ -4,16 +4,18 @@ Sometimes, things don't always work out as we would expect them to when trying t

### Certificate and TLS Errors

### Cannot Make Request to Service

#### Access Token Errors

When making a request, if you receive errors related to the access, there are a few things you may want to check.
Errors that deny you access usually give a clear indication that the appropriate variable is not set or that your access token has expired. If you receive these kinds of errors, there are a few things you may want to check.

1. If you're making requests using the `ochami` CLI to services like SMD, make sure that the `ACCESS_TOKEN` environment variable is set.
2. If you're
1. If you're making requests using the `ochami` CLI to services like SMD, make sure that the `<name>_ACCESS_TOKEN` environment variable is set.
2. If you're making requests using `curl` to services like SMD, make sure that you are including the `Authorization` header in your request.

```bash
curl https://demo.openchami.cluster:8443/hsm/v2/Inventory/EthernetInterfaces -H "Authorization: Bearer $ACCESS_TOKEN"
```

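For the first case, if the cluster is named `demo` in your `ochami` CLI configuration, setting the variable would look something like the following (the exact variable name depends on your cluster name, and the token value here is a truncated placeholder):

```bash
# Hypothetical example for a cluster named "demo"; substitute your cluster name and a real token.
export DEMO_ACCESS_TOKEN="eyJhbGciOiJSUzI1NiIs..."
```
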
### Cannot Discover Nodes

@ -0,0 +1,228 @@
After getting our nodes to boot using our compute images, let's try running a test MPI job. We need to install and configure both SLURM and MPI to do so. We can do this in at least two ways:

- Create a new `compute-mpi` image similar to the `compute-debug` image, using the `compute-base` image as a base. You do not have to rebuild the parent images unless you want to make changes to them, but keep in mind that you will also have to rebuild any derivative images.
- Install the SLURM and OpenMPI packages after the node boots by adding them to the node's cloud-init config (see "Installing via Cloud-Init" below).

### Building Into the Image

We can use the `image-builder` tool to include the SLURM and OpenMPI packages directly in the image. Since we're building a new image for our compute node, we'll base our new image on the compute image definition from the tutorial.

You should already have a directory at `/opt/workdir/images`. Make sure you already have a base compute image in S3 with `s3cmd ls`.

```bash
s3cmd ls s3://boot-images/
# TODO: put the output of `s3cmd ls` here with the compute-base image
```

If you do not have the image, go back to [this step](https://github.com/OpenCHAMI/tutorial-2025/blob/main/Phase%202/Readme.md#243-configure-the-base-compute-image) in the tutorial, build the image, and push it to S3. Once you have done that, proceed to the next step.

Now, create a new file at `/opt/workdir/images/compute-slurm-rocky9.yaml` and copy in the contents below.

```yaml
options:
  layer_type: 'base'
  name: 'compute-slurm'
  publish_tags:
    - 'rocky9'
  pkg_manager: 'dnf'
  parent: 'demo.openchami.cluster:5000/demo/rocky-base:9'
  registry_opts_pull:
    - '--tls-verify=false'

  # Publish SquashFS image to local S3
  publish_s3: 'http://demo.openchami.cluster:9000'
  s3_prefix: 'compute/base/'
  s3_bucket: 'boot-images'

  # Publish OCI image to container registry
  #
  # This is the only way to be able to re-use this image as
  # a parent for another image layer.
  publish_registry: 'demo.openchami.cluster:5000/demo'
  registry_opts_push:
    - '--tls-verify=false'

repos:
  - alias: 'Epel9'
    url: 'https://dl.fedoraproject.org/pub/epel/9/Everything/x86_64/'
    gpg: 'https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-9'

packages:
  - slurm
  - openmpi
```

Notice that the only changes to the new image definition were to `options.name` and `packages`. Since we're basing this image on another image, we only need to list the packages we want to add on top of it. We can build the image and push it to S3 now.

```bash
podman run --rm --device /dev/fuse --network host -e S3_ACCESS=admin -e S3_SECRET=admin123 -v /opt/workdir/images/compute-slurm-rocky9.yaml:/home/builder/config.yaml ghcr.io/openchami/image-build:latest image-build --config config.yaml --log-level DEBUG
```

Wait until the build finishes, then check the S3 bucket with `s3cmd ls` again to confirm that the new image is there. Next, add a new boot configuration at `/opt/workdir/boot/boot-compute-slurm.yaml`, which we will use to boot our compute nodes.

```yaml
kernel: 'http://172.16.0.254:9000/boot-images/efi-images/compute/debug/vmlinuz-5.14.0-570.21.1.el9_6.x86_64'
initrd: 'http://172.16.0.254:9000/boot-images/efi-images/compute/debug/initramfs-5.14.0-570.21.1.el9_6.x86_64.img'
params: 'nomodeset ro root=live:http://172.16.0.254:9000/boot-images/compute/debug/rocky9.6-compute-slurm-rocky9 ip=dhcp overlayroot=tmpfs overlayroot_cfgdisk=disabled apparmor=0 selinux=0 console=ttyS0,115200 ip6=off cloud-init=enabled ds=nocloud-net;s=http://172.16.0.254:8081/cloud-init'
macs:
- 52:54:00:be:ef:01
- 52:54:00:be:ef:02
- 52:54:00:be:ef:03
- 52:54:00:be:ef:04
- 52:54:00:be:ef:05
```

Set and confirm that the boot parameters have been set correctly.

```bash
ochami bss boot params set -f yaml -d @/opt/workdir/boot/boot-compute-slurm.yaml
ochami bss boot params get -F yaml
```

Finally, boot the compute node.

```bash
sudo virt-install \
  --name compute1 \
  --memory 4096 \
  --vcpus 1 \
  --disk none \
  --pxe \
  --os-variant centos-stream9 \
  --network network=openchami-net,model=virtio,mac=52:54:00:be:ef:01 \
  --graphics none \
  --console pty,target_type=serial \
  --boot network,hd \
  --boot loader=/usr/share/OVMF/OVMF_CODE.secboot.fd,loader.readonly=yes,loader.type=pflash,nvram.template=/usr/share/OVMF/OVMF_VARS.fd,loader_secure=no \
  --virt-type kvm
```

Your compute node should start up with iPXE output. If your node does not boot, check the [troubleshooting](Troubleshooting.md) page for common issues.

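If you detach from the serial console, or want to get back to it later, you can reattach with `virsh`; `compute1` is the domain name given in the `virt-install` command above.

```bash
# Reattach to the compute VM's serial console (exit with Ctrl+]).
sudo virsh console compute1
```
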
### Installing via Cloud-Init

Alternatively, we can install the necessary SLURM and MPI packages after booting by adding them to our cloud-init config and using the `cmds` section for configuration.

Let's start by making changes to the cloud-init config file at `/opt/workdir/cloud-init/computes.yaml` that we used previously. Note that we are using pre-built RPMs to install SLURM and OpenMPI from the Rocky 9 repos.

```yaml
- name: compute
  description: "compute config"
  file:
    encoding: plain
    content: |
      ## template: jinja
      #cloud-config
      merge_how:
        - name: list
          settings: [append]
        - name: dict
          settings: [no_replace, recurse_list]
      users:
        - name: root
          ssh_authorized_keys: {{ ds.meta_data.instance_data.v1.public_keys }}
      disable_root: false
      packages:
        - slurm
        - openmpi
      cmds:
        - systemctl enable slurmctld
        - systemctl enable slurmdbd
```

We added the `packages` section to tell cloud-init to install the `slurm` and `openmpi` packages after booting the compute node, and the `cmds` section to enable the SLURM services once they are installed.

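Once a node has booted with this config, a quick way to confirm that cloud-init finished and that the packages and units are in place is to run a few checks on the node itself (a small sketch; the unit names simply mirror the `cmds` entries above):

```bash
# On the booted compute node:
cloud-init status --wait                   # wait for cloud-init to finish
rpm -q slurm openmpi                       # confirm the packages were installed
systemctl is-enabled slurmctld slurmdbd    # confirm the units enabled by the cmds section
```
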
### Prepare SLURM on Head Node

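As a rough sketch of what this involves (assuming the head node is the SLURM controller, the compute VM is `compute1` with 1 vCPU as created above, and the stock Rocky 9/EPEL packages are used), the head node needs the controller daemon, a munge key shared with the computes, and a minimal `slurm.conf`:

```bash
# Hypothetical minimal head-node setup; adjust hostnames, CPU counts, and partitions for your cluster.
sudo dnf install -y slurm slurm-slurmctld munge
sudo /usr/sbin/create-munge-key            # copy /etc/munge/munge.key to the compute nodes as well
sudo tee /etc/slurm/slurm.conf > /dev/null <<'EOF'
ClusterName=demo
SlurmctldHost=demo
NodeName=compute1 CPUs=1 State=UNKNOWN
PartitionName=compute Nodes=compute1 Default=YES MaxTime=INFINITE State=UP
EOF
# The same slurm.conf (and munge key) must also be present on the compute nodes.
sudo systemctl enable --now munge slurmctld
```
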
### Run a Sample MPI Job Across Two VMs

After we have installed both SLURM and OpenMPI on the compute node, let's try to launch a "hello world" MPI job. To do so, we will need three things:

1. Source code for the MPI program
2. A compiled MPI executable binary
3. A SLURM job script

We create the MPI program in C. First, create a new directory to store our source code. Then, edit the `/opt/workdir/apps/mpi/hello/hello.c` file.

```bash
mkdir -p /opt/workdir/apps/mpi/hello
# edit /opt/workdir/apps/mpi/hello/hello.c
```

Now copy the contents below into the `hello.c` file.

```c
/* The Parallel Hello World Program */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int node;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);

    printf("Hello World from Node %d\n", node);

    MPI_Finalize();
    return 0;
}
```

Compile the program.

```bash
cd /opt/workdir/apps/mpi/hello
# On Rocky, the OpenMPI compiler wrapper may first need: module load mpi/openmpi-x86_64
mpicc hello.c -o hello
```

You should now have a `hello` executable in the `/opt/workdir/apps/mpi/hello` directory. We can use this binary with SLURM to launch processes in parallel.

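Before involving SLURM, you can sanity-check the binary directly with `mpirun` on the node where it was built (a quick local smoke test, not the real distributed run):

```bash
# Two ranks on the local node; each should print its own rank number.
# (Add --allow-run-as-root if you are testing as the root user.)
mpirun -np 2 /opt/workdir/apps/mpi/hello/hello
```
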
Let's create a job script to launch the executable we just created. Create a new directory to hold our SLURM job script. Then, edit a new file called `launch-hello.sh` in the new `/opt/workdir/jobscripts` directory.

```bash
mkdir -p /opt/workdir/jobscripts
cd /opt/workdir/jobscripts
# edit launch-hello.sh
```

Copy the contents below into the `launch-hello.sh` job script.

> [!NOTE]
> The contents of your job script may vary significantly depending on your cluster. Refer to the documentation for your institution and adjust the script according to your needs.

```bash
#!/bin/bash

#SBATCH --job-name=hello
#SBATCH --account=account_name
#SBATCH --partition=partition_name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:00:30

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Launch the MPI program; if srun cannot bootstrap MPI on your install, use mpirun instead.
srun /opt/workdir/apps/mpi/hello/hello
```

We should now have everything we need to test our MPI job with our compute node(s). Launch the job with the `sbatch` command.

```bash
sbatch /opt/workdir/jobscripts/launch-hello.sh
```

We can confirm the job is running with the `squeue` command.

```bash
squeue
```

You should see a listing that includes the job named `hello` given in the `launch-hello.sh` job script.

```bash
# TODO: add output of squeue above
```

If you saw the output above, you should now be able to inspect the output of the job when it completes.

```bash
# TODO: add output of MPI job (should be something like hello.o and/or hello.e)
```

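If the job script does not set `--output` or `--error` explicitly (ours does not), SLURM's default is usually a single `slurm-<jobid>.out` file in the directory you submitted from, so the result can be checked with something like:

```bash
# Replace 42 with the job ID reported by sbatch or squeue.
cat slurm-42.out
```
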
And that's it! You have successfully launched an MPI job with SLURM from an OpenCHAMI-deployed system.

@ -11,233 +11,4 @@ Some of the use cases includes:
5. [Using `kexec` to Reboot Nodes For a Kernel Upgrade or Specialized Kernel](Using%20`kexec`%20to%20Reboot%20Nodes%20For%20an%20Upgrade%20or%20Specialized%20Kernel.md)
6. [Discovering Nodes Dynamically with Redfish](Discovering%20Nodes%20Dynamically%20with%20Redfish.md)

## Adding SLURM and MPI to the Compute Node

(The remainder of this removed section is identical to the new "Adding SLURM and MPI to the Compute Node" note shown above.)

@ -7,7 +7,7 @@ For this demonstration, we have two prerequisites:

The `magellan` repository has an emulator included in the project that we can use for quick and dirty testing. This is useful if we want to try out the capabilities of the tool without having to put too much time and effort into setting up an environment. However, we want to use multiple BMCs to show how `magellan` can distinguish between Redfish and non-Redfish services.

TODO: Add content setting up multiple emulated BMCs with Redfish services (the quickstart in the deployment-recipes has this already).
**TODO: Add content setting up multiple emulated BMCs with Redfish services (the quickstart in the deployment-recipes has this already).**

### Performing a Scan

@ -0,0 +1 @@
When nodes boot in OpenCHAMI, they make a request out to the `cloud-init-server` to retrieve a cloud-init config. The request is not encrypted and can be intercepted and modified.

@ -1 +1,20 @@
For this tutorial, we served images via HTTP using a local S3 bucket (MinIO) and an OCI registry. We could instead serve our images using NFS by setting up and running an NFS server on the head node, including NFS tools in our base image, and configuring our nodes to work with NFS.
For the [tutorial](https://github.com/OpenCHAMI/tutorial-2025), we served images via HTTP with a local S3 bucket using MinIO and an OCI registry. We could instead serve our images by network-mounting the directories that hold them with NFS. We can spin up an NFS server on the head node, include NFS tools in our base image, and configure our nodes to mount the images.

Configure NFS to serve your SquashFS `nfsroot` with as much performance as possible.

```bash
sudo mkdir -p /opt/nfsroot && sudo chown rocky /opt/nfsroot
```

Create a file at path `/etc/exports` and copy in the following contents to export the `/opt/nfsroot` directory for use by our compute nodes.

```bash
/opt/nfsroot *(ro,no_root_squash,no_subtree_check,noatime,async,fsid=0)
```

Reload the NFS daemon to apply the changes.

```bash
modprobe -r nfsd && modprobe nfsd
```
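Two assumptions behind the steps above are worth calling out: the head node needs the NFS server installed and running before the export is visible, and changes to `/etc/exports` can usually be applied without reloading the kernel module. A sketch of both, using the stock Rocky 9 package and unit names:

```bash
# One-time setup on the head node, if nfs-utils is not already installed.
sudo dnf install -y nfs-utils
sudo systemctl enable --now nfs-server

# After editing /etc/exports, re-export without restarting anything.
sudo exportfs -ra
sudo exportfs -v   # verify that /opt/nfsroot is exported as expected
```
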