Deploying SCONE LAS
SCONE LAS is the SCONE Local Attestation Service. It has to run on each platform, i.e., on each server, on which confidential applications should run. We explain how to start SCONE LAS with the help of Helm. Note that SCONE LAS is implicitly attested by SCONE CAS; hence, we do not need to attest SCONE LAS explicitly.
Prerequisites
- A Kubernetes cluster
- The Helm setup was performed (see the repository sketch after this list)
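In case the sconeapps chart repository is not registered yet, the following is a minimal sketch of the setup; <sconeapps-repo-url> is a placeholder for the repository URL that you obtain during your SCONE setup:
# <sconeapps-repo-url> is a placeholder - use the repository URL from your SCONE setup
helm repo add sconeapps <sconeapps-repo-url>
helm repo update
# verify that the las chart is now visible
helm search repo sconeapps/las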
Deploying SCONE LAS
The sconeapps/las chart will deploy SCONE LAS:
helm install las sconeapps/las
This starts SCONE LAS in the default Kubernetes namespace. The application is called las. As an alternative, you can deploy SCONE LAS with Kubeapps.
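To verify that the deployment succeeded, you can list the las pods. This is a minimal sketch assuming the default release name las and the standard labels set by the chart (the same selector is used by the platform ID script further below):
# list all pods belonging to the las release
kubectl get pods --selector=app.kubernetes.io/name=las,app.kubernetes.io/instance=las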
Configuration Options
You can learn about the configuration options by executing:
helm install las sconeapps/las --dry-run
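Alternatively, you can inspect the default values of the chart directly:
helm show values sconeapps/las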
One option to set is the image to be deployed. If you want to run in release mode, you need to specify a different SCONE LAS image by first defining the environment variable SCONE_LAS_IMAGE appropriately. You can then install it as follows:
helm install las sconeapps/las --set image=$SCONE_LAS_IMAGE
You can check the status of this release, i.e., the instance that you just deployed, by executing:
helm status las
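You can also list the release together with all other installed releases:
helm list --filter las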
Running on Azure Icelake machines
To run LAS on Azure Icelake machines (standalone VMs or AKS node pools) and perform the local attestation of SCONE applications, some Azure-specific libraries are required. For this reason, we provide a special image crafted for Azure, which must be used when running on Icelakes (families DCsv3 and DCdsv3):
registry.scontain.com:5050/sconecuratedimages/kubernetes:las.microsoft-azure
You can install this version of LAS as follows:
helm install las sconeapps/las --set useSGXDevPlugin=azure --set image=registry.scontain.com:5050/sconecuratedimages/kubernetes:las.microsoft-azure
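Before or after installing, you can run a quick sanity check to see whether your nodes advertise SGX capabilities; this simple sketch just greps the node descriptions for SGX-related entries:
# look for SGX-related resources advertised by the nodes
kubectl describe nodes | grep -i sgx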
Compatibility
Right now we maintain two LAS images:
Image | EPID | DCAP | Icelakes (not Azure) | Azure DCsv3 / DCdsv3
---|---|---|---|---
...:las | ✓ | ✓ | ✓ |
...:las.microsoft-azure | | | | ✓
In the next SCONE release, we will merge these two images into a single las image.
Determining the Platform IDs
SCONE policies can limit the execution of services to certain platforms, i.e., to certain computers. For example, you might need to ensure that your data can only be processed in a certain datacenter. Each platform is identified by a unique public key: this is the public key of its local attestation service.
When a las instance is restarted on the same platform, it will get the same public key. The public key might, however, change after a microcode update or an update of the las code base. In these cases, you will have to update the list of permitted platforms in your policies.
In a Kubernetes cluster, you can determine the platform IDs with the help of the following bash function, which queries the public keys of all las instances:
# set INSTANCE to the name of your LAS instance
# set NAMESPACE in case you do not use the default namespace
function get_platform_ids {
  NAME=${INSTANCE:-las}
  NS=${NAMESPACE:-default}
  # query all las pods in the given namespace and extract the LAS public key
  # from the log of each pod
  kubectl get pod --namespace "$NS" --selector="app.kubernetes.io/name=$NAME,app.kubernetes.io/instance=$NAME" -o json \
  | python3 <(cat <<EOF
import sys, json, subprocess
j = json.load(sys.stdin)
print("platforms: [", end="")
comma = ""
for i in j['items']:
    # read the most recent 'public key' line from the pod's log
    pubkey = subprocess.check_output("kubectl logs pod/%s --namespace $NS | grep 'public key' | tail -n 1 | awk '{print \$8}'" % i['metadata']['name'], shell=True, encoding='ascii').rstrip("\n")
    print(comma, pubkey, end="")
    comma = ", "
print("]")
EOF
)
}
In case you do not execute las in the default Kubernetes namespace, set the environment variable NAMESPACE. If you do not name your service las, set the environment variable INSTANCE accordingly. Then just execute:
get_platform_ids
The output might look like this:
platforms: [ 126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34, BCA7B05F55BCA38EA7A0BDEBDC402942BE77BEC7F0D3F37C72F6C1898047B312]
You can copy and paste this into your policy to ensure that your application(s) can only run in a certain Kubernetes cluster.
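For illustration, a minimal sketch of a session policy fragment; the session name and version are hypothetical placeholders, and only the platforms line is taken verbatim from the output above:
# hypothetical policy fragment - name and version are placeholders
name: platform-restricted-session
version: "0.3"
# limit execution to the platforms determined above
platforms: [ 126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34, BCA7B05F55BCA38EA7A0BDEBDC402942BE77BEC7F0D3F37C72F6C1898047B312 ]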
If you do not even trust your Kubernetes cluster to correctly determine the public keys of the LAS instances, let us know and we can suggest more secure alternatives.
Troubleshooting
As mentioned above, you can check whether the Helm release las is installed by executing:
helm status las
You can retrieve the manifest for las as follows:
helm get manifest las
You can determine the spawned las pods as follows:
kubectl get pods | grep las
This might result in output like the following:
las-fbhgn 1/1 Running 0 16m
We can now look at the logs as follows:
kubectl logs las-fbhgn
This will print a log that might look as follows:
/var/run/aesmd/aesm.socket not found - Starting local in-container AESM service
The path of system bundle: System Bundle
ecdsa_quote_service_bundle_name:2.0.0
epid_quote_service_bundle_name:2.0.0
le_launch_service_bundle_name:2.0.0
linux_network_service_bundle_name:2.0.0
pce_service_bundle_name:2.0.0
quote_ex_service_bundle_name:2.0.0
system_bundle:4.0.0
...
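Further down, the log should also contain the line with the LAS public key, i.e., the line that the get_platform_ids function above extracts. You can check for it directly:
kubectl logs las-fbhgn | grep 'public key'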