Deploying SCONE LAS
SCONE LAS is the SCONE Local Attestation Service. It must run on each platform, i.e., on each server, on which confidential applications are supposed to run. We explain how to start SCONE LAS with the help of Helm. Note that SCONE LAS is implicitly attested by SCONE CAS and hence does not need to be attested explicitly.
Prerequisites
- A Kubernetes cluster
- Helm is installed and configured (see the quick check below)
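As a quick sanity check, you can verify both prerequisites before continuing. This is only a minimal sketch; it assumes that kubectl is configured for your cluster and that Helm 3 is installed:
# kubectl must be able to reach the cluster
kubectl get nodes
# Helm 3 (or later) must be installed
helm version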
Deploying SCONE LAS
The sconeapps/las chart deploys SCONE LAS:
helm install las sconeapps/las
This starts SCONE LAS in the default Kubernetes namespace under the release name las. As an alternative, you can deploy SCONE LAS with Kubeapps.
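To verify that the LAS pod came up, you can query Kubernetes for the labels the chart assigns (the same selector is used by the script further below). This sketch assumes the default namespace and the release name las:
kubectl get pods --selector=app.kubernetes.io/name=las,app.kubernetes.io/instance=las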
Configuration Options
You can learn about the configuration options by executing:
helm install las sconeapps/las --dry-run
One option to set is the image to be deployed. If you want to run in release mode, you need to specify a different SCONE LAS image: first define the environment variable SCONE_LAS_IMAGE accordingly, then pass it to the chart as follows:
helm install las sconeapps/las --set image=$SCONE_LAS_IMAGE
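If you prefer not to pass settings on the command line, you can put them into a Helm values file instead. This is standard Helm usage; the file name las-values.yaml is just an example:
# write the image setting into a (hypothetical) values file
cat > las-values.yaml <<VALUES
image: $SCONE_LAS_IMAGE
VALUES
# install the chart using the values file
helm install las sconeapps/las -f las-values.yaml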
You can check the status of this release, i.e., the instance that you just deployed, by executing:
helm status las
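Note that you can later change any of these settings in place with a standard Helm upgrade, for example to switch to a different image; this is generic Helm behavior, shown here only as an example:
helm upgrade las sconeapps/las --set image=$SCONE_LAS_IMAGE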
Determining the Platform IDs
SCONE policies can limit the execution of services to certain platforms, i.e., to certain computers. For example, you might need to ensure that your data can only be processed in a certain datacenter. Each platform is identified by a unique public key: the public key of its local attestation service.
When a las instance is restarted on the same platform, it will get the same public key. The public key might, however, change after a microcode update or an update of the las code base. In these cases, you will have to update the list of permitted platforms in your policies.
In a Kubernetes cluster, you can determine the platform IDs with the help of the following bash function, which queries the public keys of all las instances:
# set INSTANCE to the name of your LAS instance (defaults to las)
# set NAMESPACE in case you do not use the default namespace
function get_platform_ids {
    NAME=${INSTANCE:-las}
    NS=${NAMESPACE:-default}
    # list all LAS pods and extract the public key that each instance writes to its log
    kubectl get pod --namespace "$NS" --selector="app.kubernetes.io/name=$NAME,app.kubernetes.io/instance=$NAME" -o json \
    | python3 <(cat <<EOF
import sys, json, subprocess
j = json.load(sys.stdin)
print("platforms: [", end="")
comma = " "
for i in j['items']:
    # the public key is the last field of the most recent 'public key' log line
    pubkey = subprocess.check_output("kubectl logs --namespace $NS pod/%s | grep 'public key' | tail -n 1 | awk '{print \$8}'" % i['metadata']['name'], shell=True, encoding='ascii').rstrip("\n")
    print(comma + pubkey, end="")
    comma = ", "
print("]")
EOF
)
}
In case you do not run las in the default Kubernetes namespace, set the environment variable NAMESPACE. If you did not name your release las, set the environment variable INSTANCE accordingly. Then just execute:
get_platform_ids
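If your LAS release runs in another namespace or under another name, you can also set both variables just for this call; the values below are purely illustrative:
NAMESPACE=scone INSTANCE=my-las get_platform_ids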
The output might look like this:
platforms: [ 126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34, BCA7B05F55BCA38EA7A0BDEBDC402942BE77BEC7F0D3F37C72F6C1898047B312]
You can copy and paste this line into your policy to ensure that your application(s) can only run in a certain Kubernetes cluster.
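If you keep your policies as templates, you can also splice the generated line in with standard shell tooling. The sketch below is purely illustrative: the file policy.yaml.template and the placeholder __PLATFORMS__ are hypothetical names, not part of the SCONE tooling:
# capture the "platforms: [...]" line
PLATFORMS="$(get_platform_ids)"
# replace the placeholder line __PLATFORMS__ in a (hypothetical) policy template
sed "s|__PLATFORMS__|$PLATFORMS|" policy.yaml.template > policy.yaml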
If you do not even trust your Kubernetes cluster to correctly determine the public keys of the LAS instances, let us know and we can suggest more secure alternatives.