# Deploying SCONE LAS
SCONE LAS is the SCONE Local Attestation Service. It must run on each platform, i.e., on each server, on which confidential applications are supposed to run. Note that SCONE LAS is implicitly attested by SCONE CAS; hence, there is no need to attest SCONE LAS explicitly.

In this section, we explain how to start SCONE LAS with the help of Helm.
The recommended way to deploy SCONE LAS is to use the SCONE Operator:

```bash
kubectl create -f https://raw.githubusercontent.com/scontain/operator-samples/main/base_v1beta1_sgxplugin.yaml
```

You can use the `las` helm chart if you do not want to deploy the SCONE Operator. For local development, you could deploy `las` with the help of `docker compose`.
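For local development without Kubernetes, a `docker compose` setup can look roughly like the following sketch. The service layout, the SGX device paths, and the exposed port 18766 are assumptions that you may need to adapt to your platform, driver version, and image tag:

```yaml
# Hypothetical docker-compose.yml sketch - adapt image tag, devices, and port.
services:
  las:
    image: registry.scontain.com/sconecuratedimages/kubernetes:las
    devices:
      - /dev/sgx_enclave    # in-kernel SGX driver; older drivers expose /dev/isgx
      - /dev/sgx_provision
    ports:
      - "18766:18766"       # assumed LAS port
```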
## Deploying SCONE LAS with Helm
### Prerequisites

- A Kubernetes cluster
- Helm is set up
The `sconeapps/las` chart will deploy SCONE LAS:

```bash
helm install las sconeapps/las
```

This starts SCONE LAS in the default Kubernetes namespace. The application is called `las`. As an alternative, you can deploy SCONE LAS with Kubeapps.
### Configuration Options
You can learn about the configuration options by executing the following:

```bash
helm install las sconeapps/las --dry-run
```
One option to set is the `image` to be deployed. To run in release mode, you need to specify a different SCONE LAS image by defining the environment variable `SCONE_LAS_IMAGE` appropriately. You can then pass it as follows:

```bash
helm install las sconeapps/las --set image=$SCONE_LAS_IMAGE
```
You can check the status of this release, i.e., the instance that you just deployed, by executing:

```bash
helm status las
```
## Running on Azure Icelake machines
To run LAS on Azure Icelake machines (standalone VMs or AKS node pools, VM families DCsv3 and DCdsv3) and perform the local attestation of SCONE applications, some Azure-specific libraries are required. We provide a dedicated image for Azure that must be used when running on these Icelake machines: `registry.scontain.com/sconecuratedimages/kubernetes:las.microsoft-azure`.

You can install this version of LAS as follows:

```bash
helm install las sconeapps/las \
  --set useSGXDevPlugin=azure \
  --set image=registry.scontain.com/sconecuratedimages/kubernetes:las.microsoft-azure
```
## Compatibility

Up to and including SCONE 5.7, we provide two LAS images:

| Image | EPID | DCAP | Icelakes (not Azure) | Azure DCsv3 / DCdsv3 |
|---|---|---|---|---|
| `...:las` | ✓ | ✓ | ✓ | |
| `...:las.microsoft-azure` | | | | ✓ |

Starting with SCONE release 5.8, there will be only a single `las` image: `registry.scontain.com/scone.cloud/las`.
## Determining the Platform IDs
SCONE policies can limit the execution of services to specific platforms, i.e., to specific computers. For example, you might need to ensure that your data can only be processed in a particular data center. Each platform is identified by a unique public key: the public key of its local attestation service.
When a `las` instance is restarted on the same platform, it keeps the same public key. The public key might, however, change after a microcode update or an update of the `las` code base. In these cases, you have to update the list of permitted platforms in your policies.

In a Kubernetes cluster, you can determine the platform IDs with the help of the following bash function, which queries the public keys of all `las` instances:
```bash
# set INSTANCE to the name of your LAS instance (default: las)
# set NAMESPACE in case you do not use the default namespace
function get_platform_ids {
    INSTANCE="${INSTANCE:-las}"
    NAMESPACE="${NAMESPACE:-default}"
    LAS=$(kubectl get pods -n "$NAMESPACE" | grep "$INSTANCE-" | awk '{print $1}')
    for l in $LAS ; do
        kubectl logs -n "$NAMESPACE" "$l" | grep public | tail -n 1 | awk '{print $8}'
    done
}
```
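To illustrate the extraction step in isolation, the following sketch runs the same `grep`/`awk` pipeline over a hypothetical log line. The exact log format, with the public key as the eighth whitespace-separated field, is an assumption mirrored from the function above:

```shell
# Hypothetical LAS log line - the real log format may differ.
line="[INFO] las: local attestation service public key: 126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34"

# The same extraction used in get_platform_ids: keep the 8th field.
echo "$line" | grep public | tail -n 1 | awk '{print $8}'
# prints: 126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34
```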
If you do not run `las` in the default Kubernetes namespace, set the environment variable `NAMESPACE`. If you do not name your service `las`, set the environment variable `INSTANCE` accordingly. Then execute the following:

```bash
get_platform_ids | sort | uniq | tr '\n' ',' | sed 's/,$//' | awk '{ print "platforms: [" $1 "]" }'
```
The output might look like this:

```
platforms: [126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34,BCA7B05F55BCA38EA7A0BDEBDC402942BE77BEC7F0D3F37C72F6C1898047B312]
```
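If you want to check the formatting step without a cluster, you can feed sample IDs through the same pipeline; the two IDs below are simply the sample values from the output above:

```shell
# Run two sample platform IDs through the formatting pipeline used above.
printf '%s\n' \
  BCA7B05F55BCA38EA7A0BDEBDC402942BE77BEC7F0D3F37C72F6C1898047B312 \
  126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34 \
  | sort | uniq | tr '\n' ',' | sed 's/,$//' \
  | awk '{ print "platforms: [" $1 "]" }'
# prints: platforms: [126C8098408FEB2002F5EB8B9E6C2AE26197E3617875633C5BD4EB4454278C34,BCA7B05F55BCA38EA7A0BDEBDC402942BE77BEC7F0D3F37C72F6C1898047B312]
```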
You can copy and paste this into your policy to ensure that your application(s) can only run in a specific Kubernetes cluster.
If you do not even trust your Kubernetes cluster to determine the public keys of the LAS instances correctly, let us know, and we can suggest more secure alternatives.
## Troubleshooting
As we mentioned above, one can check if the helm release `las` is installed by executing this:

```bash
helm status las
```
You can retrieve the manifest for `las` as follows:

```bash
helm get manifest las
```
You can determine the spawned `las` pods as follows:

```bash
kubectl get pods | grep las
```
This might result in output like the following:

```
las-fbhgn 1/1 Running 0 16m
```
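For scripting, the pod name in the first column of such output can be extracted with `awk`; a small sketch using the sample line above:

```shell
# Extract the pod name (first column) from a 'kubectl get pods'-style line.
sample="las-fbhgn 1/1 Running 0 16m"
pod=$(echo "$sample" | grep las | awk '{print $1}')
echo "$pod"   # prints: las-fbhgn
```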
We can now look at the logs as follows:

```bash
kubectl logs las-fbhgn
```
This prints a log that might look as follows:

```
/var/run/aesmd/aesm.socket not found - Starting local in-container AESM service
The path of system bundle: System Bundle
ecdsa_quote_service_bundle_name:2.0.0
epid_quote_service_bundle_name:2.0.0
le_launch_service_bundle_name:2.0.0
linux_network_service_bundle_name:2.0.0
pce_service_bundle_name:2.0.0
quote_ex_service_bundle_name:2.0.0
system_bundle:4.0.0
...
```