Deploying SCONE LAS

SCONE LAS is the SCONE Local Attestation Service. It has to run on each platform (i.e., on each server) on which confidential applications should run. Note that SCONE LAS is implicitly attested by SCONE CAS; hence, we do not need to attest SCONE LAS explicitly.

This section explains how to start SCONE LAS with the help of helm. The easiest way to deploy SCONE LAS is to type:

sconectl scone_init --image-pull-secret --secret-email "$SECRET_EMAIL"  --secret-token "$SECRET_TOKEN" --secret-username "$SECRET_USERNAME"

Note that scone_init will use the LAS Kubernetes Operator to keep SCONE LAS running in the cluster. In case you do not want to deploy an operator, you can use the helm chart instead.

Deploying SCONE LAS with helm


The sconeapps/las chart will deploy SCONE LAS:

helm install las sconeapps/las

This starts SCONE LAS in the default Kubernetes namespace. The application is called las. As an alternative, you can deploy SCONE LAS with Kubeapps.

Configuration Options

You can learn about the configuration options by executing:

helm install las sconeapps/las --dry-run

One option to set is the image to be deployed. If you want to run in release mode, you need to specify a different SCONE LAS image by first defining the environment variable SCONE_LAS_IMAGE appropriately. You can then install it as follows:

helm install las sconeapps/las --set image=$SCONE_LAS_IMAGE

You can check the status of this release, i.e., the instance that you just deployed, by executing:

helm status las

Running on Azure Icelake machines

To run LAS on Azure Icelake machines (standalone VMs or AKS node pools) and perform the local attestation of SCONE applications, some Azure-specific libraries are required. For that reason, we provide a special image crafted for Azure that must be used when running on Icelakes (families DCsv3 and DCdsv3).

You can install this version of LAS as follows:

helm install las sconeapps/las --set useSGXDevPlugin=azure --set image=$SCONE_LAS_IMAGE


Right now we maintain two LAS images:

| Image   | EPID | DCAP | Icelakes (not Azure) | Azure DCsv3 / DCdsv3 |
| ------- | ---- | ---- | -------------------- | -------------------- |
| ...:las | ✔️   | ✔️   | ✔️                   | ❌                   |
| …       | ❌   | ✔️   | ✔️                   | ✔️                   |

In the next SCONE release, we will merge these two images into a single las image.

Determining the Platform IDs

SCONE policies can limit the execution of services to certain platforms, i.e., to certain computers. For example, you might need to ensure that your data can only be processed in a certain datacenter. Each platform is identified by a unique public key: this is the public key of its local attestation service.

When a las instance is restarted on the same platform, it will get the same public key. The public key might, however, change after a microcode update or an update of the las code base. In these cases, you will have to update the list of permitted platforms in your policies.

In a Kubernetes cluster, you can determine the platform IDs with the help of the following bash function, which queries the public keys of all las instances:

# set INSTANCE to the name of your LAS instance
# set NAMESPACE in case you do not use the default namespace
function get_platform_ids {
    NS="${NAMESPACE:-default}"
    INSTANCE="${INSTANCE:-las}"
    # adjust the label selector if your deployment uses different labels
    kubectl get pod --namespace "$NS" --selector="app.kubernetes.io/instance=$INSTANCE" -o json \
    | python3 <(cat <<EOF
import sys, json, subprocess
j = json.load(sys.stdin)
print("platforms: [", end="")
comma = ""
for i in j['items']:
    pubkey = subprocess.check_output("kubectl logs --namespace $NS pod/%s | grep 'public key' | tail -n 1 | awk '{print \$8}'" % i['metadata']['name'], shell=True, encoding='ascii').rstrip("\n")
    print(comma, pubkey, end="")
    comma = ", "
print("]")
EOF
)
}

In case you do not execute las in the default Kubernetes namespace, set environment variable NAMESPACE. If you do not name your service las, set environment variable INSTANCE accordingly. Then just execute:

get_platform_ids

The output might look like this:

platforms: [ 126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34,  BCA7B05F55BCA38EA7A0BDEBDC402942BE77BEC7F0D3F37C72F6C1898047B312]
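If you want to post-process this output programmatically (for example, to template a policy file), here is a minimal parsing sketch in Python. It assumes the exact `platforms: [...]` output format shown above; `parse_platform_ids` is a hypothetical helper, not part of SCONE:

```python
import re

def parse_platform_ids(output):
    """Extract platform IDs from a 'platforms: [...]' line.

    Each platform ID is a 64-character uppercase hex string
    (the public key of a LAS instance).
    """
    return re.findall(r"\b[0-9A-F]{64}\b", output)

line = ("platforms: [ 126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34, "
        "BCA7B05F55BCA38EA7A0BDEBDC402942BE77BEC7F0D3F37C72F6C1898047B312]")
print(parse_platform_ids(line))
```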

You can copy and paste this to your policy to ensure that your application(s) can only run in a certain Kubernetes cluster.
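For illustration, the pasted line becomes a `platforms` entry in your session policy. The following is a hedged sketch, not a complete session: the `name` and `version` values are placeholders, and only the `platforms` line comes from the output above:

```yaml
name: my-session      # placeholder session name
version: "0.3"        # placeholder policy version
# only platforms whose LAS public key is listed here may run the services:
platforms: [126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34, BCA7B05F55BCA38EA7A0BDEBDC402942BE77BEC7F0D3F37C72F6C1898047B312]
```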

If you do not even trust your Kubernetes cluster to correctly determine the public keys of the LAS instances, let us know and we can suggest more secure alternatives.


As we mentioned above, one can check if the helm release las is installed by executing:

helm status las

You can retrieve the manifest for las as follows:

helm get manifest las

One can determine the spawned las pods as follows:

kubectl get pods | grep las

This might result in output like the following:

las-fbhgn   1/1     Running   0          16m

We can now look at the logs as follows:

kubectl logs las-fbhgn

and this will print a log which might look as follows:

/var/run/aesmd/aesm.socket not found - Starting local in-container AESM service
The path of system bundle: System Bundle