Deploying SCONE CAS with Helm

The Configuration and Attestation Service, or CAS, is a key component in the SCONE framework. It is responsible for managing the security policies used to attest services and platforms, and for securely generating and managing secrets, keys, and certificates.

Production CAS using Kubernetes

For running CAS in production in a Kubernetes cluster, we recommend using the SCONE Operator instead of the CAS Helm chart.

Platform requirements

To run CAS in production, the following platform requirements must be met:

For DCAP systems

At least one of:

  • Hardware registered with Intel and a valid subscription key; OR
  • A Quote Provider Library (QPL) able to fetch PCK certificates for the platform (a configuration sketch follows this list).

For EPID systems

  • A valid set of IAS credentials.
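
For the QPL option above, the quote provider library on DCAP systems is commonly pointed at a PCCS through /etc/sgx_default_qcnl.conf. A minimal sketch in the classic key=value format (newer DCAP releases use a JSON variant instead; the PCCS address is a placeholder):

# /etc/sgx_default_qcnl.conf
# PCCS endpoint the QPL should fetch PCK certificates from.
PCCS_URL=https://<pccs-host>:8081/sgx/certification/v3/
# Set to FALSE only if the PCCS uses a self-signed certificate.
USE_SECURE_CERT=TRUE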

CAS Helm Chart

To install the SCONE CAS Helm chart to a Kubernetes cluster, first add the sconeapps Helm repository. Check the Helm chart documentation to learn more about the supported parameters.
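
If the repository is not configured yet, adding it looks like this (the URL placeholder stands for the repository address provided by Scontain):

# Add the sconeapps Helm repository.
helm repo add sconeapps <sconeapps repo URL>
helm repo update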

Example (Azure):

# Get latest chart versions.
helm repo update

# Create an imagePullSecret named "sconeapps" so you can pull private images from Scontain's registry.
export SCONE_HUB_USERNAME="<GitLab username>"
export SCONE_HUB_ACCESS_TOKEN="<GitLab token>"
export SCONE_HUB_EMAIL="<GitLab email>"
kubectl create secret docker-registry sconeapps \
    --docker-server=registry.scontain.com \
    --docker-username=$SCONE_HUB_USERNAME \
    --docker-password=$SCONE_HUB_ACCESS_TOKEN \
    --docker-email=$SCONE_HUB_EMAIL

# Install LAS for Azure.
helm install las sconeapps/las \
    --set image="registry.scontain.com/sconecuratedimages/kubernetes:las.microsoft-azure-scone5.7.0" \
    --set useSGXDevPlugin="azure"

# Set the CAS image.
CAS_PROD_IMAGE="<production CAS image>"

# Install CAS into the cluster: use the production image, enable persistence with the
# Azure "managed" storage class, enable the Azure SGX device plugin, and expose this
# CAS to the outside world through a load balancer.
helm install cas sconeapps/cas \
    --set image="$CAS_PROD_IMAGE" \
    --set persistence.enabled=true \
    --set persistence.storageClass="managed" \
    --set useSGXDevPlugin="azure" \
    --set service.type="LoadBalancer"
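
Before provisioning, wait until the CAS pod is running and the load balancer has received an external address:

# The CAS pod (e.g. cas-0) should reach the Running state.
kubectl get pods
# Repeat until EXTERNAL-IP is no longer <pending>.
kubectl get svc cas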

Operating CAS

This section shows how to provision a production CAS. Provisioning allows the CAS owner to claim ownership of the CAS and to supply its initial configuration. Check our documentation to learn more about CAS configuration and how to operate CAS.

Provisioning

  • Specify your DCAP subscription key.

💡 This sample uses the LAS image crafted for Azure. This image ships with a QPL that is able to fetch the required PCK certificates correctly. In this scenario, a valid DCAP subscription key is not required, and you can specify a dummy one:

DCAP_SUBSCRIPTION_KEY="00000000000000000000000000000000"
  • Retrieve the relevant CAS information for remote attestation: MrEnclave, MrSigner, ISVSVN and ISVPRODID.
CAS_MRENCLAVE="$(docker run -t --rm -e SCONE_HASH=1 $CAS_PROD_IMAGE | grep -oe "[0-9a-f]\{64\}")"
CAS_MRSIGNER="$(docker run -t --rm --entrypoint bash $CAS_PROD_IMAGE -c "scone-signer info /usr/local/bin/cas | grep MRSIGNER" | grep -oe "[0-9a-f]\{64\}")"
CAS_ISVPRODID="$(docker run -t --rm --entrypoint bash $CAS_PROD_IMAGE -c "scone-signer info /usr/local/bin/cas" | grep "ISV Product ID" | awk ' { print $4 } ' | grep -oe "[0-9]\+" -m 1 )"
CAS_ISVSVN="$(docker run -t --rm --entrypoint bash $CAS_PROD_IMAGE -c "scone-signer info /usr/local/bin/cas" | grep "ISV SVN" | awk ' { print $3 } ' | grep -oe "[0-9]\+" -m 1)"
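
Before continuing, it is worth verifying that all four values were extracted; empty output means a grep pattern did not match your image:

# Each variable should print a non-empty value.
echo "CAS_MRENCLAVE=$CAS_MRENCLAVE"
echo "CAS_MRSIGNER=$CAS_MRSIGNER"
echo "CAS_ISVPRODID=$CAS_ISVPRODID"
echo "CAS_ISVSVN=$CAS_ISVSVN"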
  • Retrieve the CAS key hash and provisioning token from the CAS pod logs.

⚠️ The following commands assume that the CAS pod is named cas-0.

export CAS_KEY_HASH=$(kubectl logs cas-0 | grep "CAS key hash" | awk ' { print $7 } ')
export CAS_PROVISIONING_TOKEN=$(kubectl logs cas-0 | grep "CAS provisioning token" | awk ' { print $7 } ')
export SCONE_CAS_ADDR=$(kubectl get svc --namespace default cas --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
mkdir -p owner-config
cat > owner-config/config.toml <<EOF
[api_identity]
common_name = "mycas"
alt_names = ["mycas", "cas", "cas.default", "localhost", "$SCONE_CAS_ADDR"]

[dcap]
subscription_key = "$DCAP_SUBSCRIPTION_KEY"
EOF
  • The CAS owner is identified by a PKCS#8 private key. This identity grants the owner the power to upgrade CAS and to perform backups. The key must be protected (see the hardening note after the commands) and can be generated as follows:
mkdir -p identity
openssl genrsa -out identity/keypair.pem 2048
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in identity/keypair.pem -out identity/pkcs8.key
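
Since this key grants full owner powers, keep it away from other users of the machine; at a minimum, tighten the file permissions:

# Restrict the owner identity to the current user.
chmod 600 identity/keypair.pem identity/pkcs8.key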
  • Start a SCONE CLI container locally to provision CAS and create policies.
docker run -it --rm \
    --network=host \
    -v $PWD/identity:/identity \
    -v $PWD/owner-config:/owner-config \
    -e SCONE_CLI_CONFIG="/identity/config.json" \
    -e CAS_KEY_HASH="$CAS_KEY_HASH" \
    -e CAS_PROVISIONING_TOKEN="$CAS_PROVISIONING_TOKEN" \
    -e CAS_MRENCLAVE="$CAS_MRENCLAVE" \
    -e CAS_MRSIGNER="$CAS_MRSIGNER" \
    -e CAS_ISVSVN="$CAS_ISVSVN" \
    -e CAS_ISVPRODID="$CAS_ISVPRODID" \
    -e SCONE_CAS_ADDR="$SCONE_CAS_ADDR" \
    registry.scontain.com/sconecuratedimages/sconecli:alpine3.10-scone5
  • Inside the SCONE CLI container, provision CAS. The owner identity and CLI configuration will be persisted to your local machine (at ./identity/config.json).
$ scone cas provision $SCONE_CAS_ADDR \
    -c $CAS_KEY_HASH \
    --token $CAS_PROVISIONING_TOKEN \
    --config-file /owner-config/config.toml \
    with-attestation -CS --mrenclave $CAS_MRENCLAVE --mrsigner $CAS_MRSIGNER --isvprodid $CAS_ISVPRODID --isvsvn $CAS_ISVSVN

The output should look like this:

This command will provision and claim ownership of a CAS.
    The ownership will be bound to the cryptographic identity used during this provisioning.
    Holding the owner identity is necessary to change CAS settings, register backups and upgrade to new CAS versions.
    Please be sure to store the owner identity securely.
    Note: If no identity is specified on the command line, the identity stored in the CLI configuration file is used.
This tool seems to run inside of a container.
    Please be extra careful that the owner identity is not lost when the container is removed!
CAS owner configuration contains DCAP API credentials. Going to attempt to attest CAS using DCAP quote.
CAS localhost at https://localhost:8081/ is trustworthy
Done, CAS configuration was successfully provisioned. You are now owner of the CAS.
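
As a final sanity check, you can re-attest the freshly provisioned CAS with the same attestation values (this is the same attest command used in the examples below):

scone cas attest -CS $SCONE_CAS_ADDR \
    --mrenclave $CAS_MRENCLAVE --mrsigner $CAS_MRSIGNER \
    --isvprodid $CAS_ISVPRODID --isvsvn $CAS_ISVSVN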

Using CAS

This section shows how to use CAS with sample applications. We submit policies to CAS and then run the applications on Kubernetes with remote attestation.

Examples:

Example 1: Hello World in Python (debug enclave)

Creating policies

  • Retrieve the MrEnclave of the application.
IMAGE="registry.scontain.com/examples/quickstart/hello-world-python:scone5.7.0"
MRENCLAVE="$(docker run -it --rm -e SCONE_HASH=1 $IMAGE | grep -oe "[0-9a-f]\{64\}")"
  • Create a policy file.
mkdir -p policies client
USER_ID="$RANDOM$RANDOM"
cat > policies/hello-world.yaml <<EOF
name: hello-world-python-$USER_ID
version: "0.3"

security:
  attestation:
    tolerate: [debug-mode, hyperthreading, outdated-tcb, software-hardening-needed]
    ignore_advisories: "*"

services:
   - name: application
     mrenclaves: [$MRENCLAVE]
     command: python3 /app/hello-world.py
     pwd: /
     environment:
        SECRET: "s3cr3t-from-policy"
        SCONE_LOG: "error"
EOF
  • Submit a policy to CAS.
docker run -it --rm \
    --network=host \
    -v $PWD/policies:/policies \
    -v $PWD/client:/root/.cas \
    -e SCONE_CAS_ADDR="$SCONE_CAS_ADDR" \
    -e CAS_MRENCLAVE="$CAS_MRENCLAVE" \
    -e CAS_MRSIGNER="$CAS_MRSIGNER" \
    -e CAS_ISVPRODID="$CAS_ISVPRODID" \
    -e CAS_ISVSVN="$CAS_ISVSVN" \
    registry.scontain.com/sconecuratedimages/sconecli:alpine3.10-scone5

Inside the SCONE CLI container, run:

# Attest CAS.
scone cas attest -CS $SCONE_CAS_ADDR --mrenclave $CAS_MRENCLAVE --mrsigner $CAS_MRSIGNER --isvprodid $CAS_ISVPRODID --isvsvn $CAS_ISVSVN
# Create policy.
scone session create /policies/hello-world.yaml
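# Optionally, read the policy back to confirm it was stored
# (the session read subcommand is assumed to be available in your CLI version).
scone session read hello-world-python-$USER_ID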

Exit the SCONE CLI container.

Deploy and attest the application

  • Run the application.
cat > hello-world.yaml <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world
spec:
  template:
    spec:
      restartPolicy: Never
      imagePullSecrets:
      - name: sconeapps
      containers:
      - name: hello-world
        image: $IMAGE
        imagePullPolicy: Always
        env:
        - name: SECRET
          value: s3cr3t-from-untrusted-k8s
        - name: SCONE_CAS_ADDR
          value: $SCONE_CAS_ADDR
        - name: SCONE_CONFIG_ID
          value: hello-world-python-$USER_ID/application
        - name: SCONE_LAS_ADDR
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        resources:
          limits:
            sgx.intel.com/enclave: 1
EOF

# Deploy to Kubernetes.
kubectl apply -f hello-world.yaml

# Check application logs.
kubectl logs job/hello-world

You should expect the following logs:

Hello Confidential World with SCONE!
The value of the env. var. SECRET is: 's3cr3t-from-policy'

Notice that the value of SECRET is s3cr3t-from-policy, as defined in the policy. The value defined in the Kubernetes manifest, s3cr3t-from-untrusted-k8s, is ignored, since the Kubernetes cluster is potentially untrusted.
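
When you are done with this example, remove the Job again:

kubectl delete -f hello-world.yaml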

Example 2: Redis (production enclave)

Creating policies

  • Retrieve the MrEnclave of the application.
IMAGE="registry.scontain.com/cicd/redis:latest"
MRENCLAVE="$(docker run -it --rm -e SCONE_HASH=1 $IMAGE | grep -oe "[0-9a-f]\{64\}")"
  • Create a policy file.
mkdir -p policies
USER_ID="$RANDOM$RANDOM"
cat > policies/redis.yaml <<EOF
name: redis-$USER_ID
version: "0.3"

security:
  attestation:
    tolerate: [hyperthreading, outdated-tcb, software-hardening-needed]

services:
   - name: server
     mrenclaves: [$MRENCLAVE]
     command: redis-server
     pwd: /
     environment:
        SCONE_LOG: "error"
EOF
  • Submit a policy to CAS.
docker run -it --rm \
    --network=host \
    -v $PWD/policies:/policies \
    -v $PWD/client:/root/.cas \
    -e SCONE_CAS_ADDR="$SCONE_CAS_ADDR" \
    -e CAS_MRENCLAVE="$CAS_MRENCLAVE" \
    -e CAS_MRSIGNER="$CAS_MRSIGNER" \
    -e CAS_ISVPRODID="$CAS_ISVPRODID" \
    -e CAS_ISVSVN="$CAS_ISVSVN" \
    registry.scontain.com/sconecuratedimages/sconecli:alpine3.10-scone5

Inside the SCONE CLI container, run:

# Attest CAS.
scone cas attest -CS $SCONE_CAS_ADDR --mrenclave $CAS_MRENCLAVE --mrsigner $CAS_MRSIGNER --isvprodid $CAS_ISVPRODID --isvsvn $CAS_ISVSVN
# Create policy.
scone session create /policies/redis.yaml

Exit the SCONE CLI container.

Deploy and attest Redis

  • Run the application.
cat > redis.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      run: redis
  replicas: 1
  template:
    metadata:
      labels:
        run: redis
    spec:
      imagePullSecrets:
        - name: sconeapps
      containers:
      - name: redis
        image: $IMAGE
        imagePullPolicy: Always
        ports:
        - containerPort: 6379
        env:
        - name: SCONE_LOG
          value: error
        - name: SCONE_CAS_ADDR
          value: $SCONE_CAS_ADDR
        - name: SCONE_CONFIG_ID
          value: redis-$USER_ID/server
        - name: SCONE_LAS_ADDR
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        resources:
          limits:
            sgx.intel.com/enclave: 1
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    run: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    run: redis

EOF

# Deploy to Kubernetes.
kubectl apply -f redis.yaml

# Check application logs.
kubectl logs deploy/redis -f

You should expect the Redis server to be up:

1:C 03 Aug 2022 16:43:35.565 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 03 Aug 2022 16:43:35.565 # Redis version=6.2.7, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 03 Aug 2022 16:43:35.565 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 03 Aug 2022 16:43:35.566 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
1:M 03 Aug 2022 16:43:35.566 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
1:M 03 Aug 2022 16:43:35.566 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
1:M 03 Aug 2022 16:43:35.566 * monotonic clock: POSIX clock_gettime
1:M 03 Aug 2022 16:43:35.568 # A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.
1:M 03 Aug 2022 16:43:35.568 * Running mode=standalone, port=6379.
1:M 03 Aug 2022 16:43:35.569 # Server initialized
1:M 03 Aug 2022 16:43:35.606 * Ready to accept connections
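
To smoke-test the deployment from inside the cluster, you can ping the Redis service from a throwaway client pod (the client image tag here is an assumption; any image that ships redis-cli works):

kubectl run redis-client --rm -it --restart=Never \
    --image=redis:6.2 -- redis-cli -h redis ping

The expected reply is PONG.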