Confidential Document Manager Application
This demo is a confidential document management web application. The service enables users to upload and download documents and ensures that the documents are always encrypted. Users can create accounts; we use simple password-based authentication. For production, one should add two-factor authentication. The application consists of the following components:
- a Python FastAPI service, serving as the application's REST API,
- a MariaDB service, storing the documents and user authentication data,
- a memcached service, serving as a rate limiter for the application, and
- an nginx instance, serving as a proxy server for the application and performing TLS termination and forwarding.
All of these components run securely inside enclaves using the SCONE framework. The services are also integrity protected, and they transparently attest each other using TLS in conjunction with the SCONE Configuration and Attestation Service (CAS). Furthermore, the application protects the confidentiality and integrity of all data it receives. We deploy this application using Helm.
An overview of how these different components interact with one another is as follows:
Prerequisites
- A Kubernetes cluster
- Helm is set up
- Either the SCONE SGX Plugin or the Azure SGX Plugin is installed.
Azure Kubernetes Services (AKS)
You can run this demo on AKS. To set up the necessary infrastructure on AKS, you need to start a local attestation service (you may have to change the values according to your AKS nodes):
helm install las sconeapps/las \
--set useSGXDevPlugin=azure \
--set sgxEpcMem=8 \
--set image=registry.scontain.com/sconecuratedimages/kubernetes:las \
--set nodeSelector.agentpool="confcompool1"
TL;DR
- Register an account with the SCONE registry.
- Ask us for access to SconeApps via email.
- Pull this repository.
- Create the corresponding Kubernetes secret for the registry and then `export IMAGE_PULL_SECRET=<your_secret>`.
- Set target images that you can push to for `MEMCACHED_TARGET_IMAGE`, `NGINX_TARGET_IMAGE`, and `MARIADB_TARGET_IMAGE` (again with `export ...`).
- Execute `./setup-secure-doc-management.sh` to upload the policies to SCONE CAS.
- Issue client requests:
  kubectl exec --stdin --tty $(kubectl get pods --selector=app.kubernetes.io/name=client-scone -o jsonpath='{.items[*].metadata.name}') -- /bin/bash
- The available commands for the REST API are documented in `./secure-doc-management/templates/NOTES.txt` (displayed after the script has terminated).
Secure FastAPI service
We implement a simple FastAPI-based service. The Python code implements a REST API:
- We create a new account by sending a request to `/users/create_account` with the flag `-u <username>:<password>`.
- All subsequent requests containing the `-u <username>:<password>` flag authenticate this user.
- We upload documents by sending a `POST` to `/documents/create_document` with a JSON body containing the names `record_id`, with the value of the ID of the record, and `content`, with the value of the document's contents.
- We get a document by sending a `GET` request to `/documents/<record_id>`, retrieving the document with the corresponding record ID.
We execute the Python service within an enclave, thus ensuring that even privileged adversaries with root access cannot read the contents of the documents from the Python process.
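The core logic behind these endpoints can be sketched framework-independently as follows. This is a minimal in-memory sketch: the real service uses FastAPI backed by MariaDB, and names such as `DocumentStore` are illustrative, not taken from the actual code.

```python
# Minimal, framework-independent sketch of the REST API's core logic.
# The deployed service runs FastAPI backed by MariaDB; this in-memory
# version (all names illustrative) only mirrors the semantics.

class DocumentStore:
    def __init__(self):
        self.users = {}      # username -> password
        self.documents = {}  # record_id -> content

    def create_account(self, username, password):
        # corresponds to a request to /users/create_account
        if username in self.users:
            raise ValueError("user already exists")
        self.users[username] = password

    def authenticate(self, username, password):
        # corresponds to checking the -u <username>:<password> flag
        return self.users.get(username) == password

    def create_document(self, username, password, record_id, content):
        # corresponds to POST /documents/create_document
        if not self.authenticate(username, password):
            raise PermissionError("authentication failed")
        self.documents[record_id] = content

    def get_document(self, username, password, record_id):
        # corresponds to GET /documents/<record_id>
        if not self.authenticate(username, password):
            raise PermissionError("authentication failed")
        return self.documents[record_id]
```

In the deployed service, the state lives in MariaDB rather than in process memory, so it survives restarts and stays protected by the database's encryption.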
Binary File system
This FastAPI service employs the new SCONE feature `binary-fs`. We thereby bind the file system of the service to the binary. Among other security advantages, the MRENCLAVE of the service reflects this inclusion of the file system. Thus, we can verify that the Python service is indeed running the code we expect it to run. For more information, please refer to https://sconedocs.github.io/binary_fs/.
Secure MariaDB
We store the documents and user authentication information in a secure MariaDB, which we also run inside an enclave, ensuring that its memory remains encrypted. As such, the documents and login data users store inside the database remain secure. We further elaborate on the database below.
Secure Memcached Rate Limiter
To ensure that our MariaDB is not overloaded by excess requests, we limit requests per user using a Memcached service: a user can make at most 10 requests per minute. If the user's requests exceed that limit, we do not serve requests from the corresponding IP address for the remainder of the minute. We implement this limiter in our Python program by storing the user's IP and their number of requests as a key/value pair in the Memcached service: `mc_client.set(key=client_ip, value=1, expire=60)`. Upon each subsequent request, we increment the counter using Memcached's `incr` command: `mc_client.incr(key=client_ip, value=1)`. If this value exceeds 10, we do not serve the request. The key/value pair expires after 60 seconds.
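The rate-limiting logic can be sketched as follows. This is a minimal sketch against a pymemcache-style `set`/`incr` interface; `FakeMemcached` is an illustrative in-memory stand-in for the real Memcached connection, and key expiry is omitted.

```python
# Sketch of the per-IP rate limiter. In the real deployment, `mc` is a
# pymemcache-style client talking to the enclave-protected Memcached;
# FakeMemcached is an illustrative stand-in (60s expiry not modeled).

class FakeMemcached:
    def __init__(self):
        self.data = {}

    def set(self, key, value, expire=0):
        self.data[key] = int(value)  # expiry is omitted in this sketch

    def incr(self, key, value):
        if key not in self.data:
            return None  # like Memcached, incr fails on a missing key
        self.data[key] += value
        return self.data[key]

REQUEST_LIMIT = 10  # at most 10 requests per minute and IP

def allow_request(mc, client_ip):
    count = mc.incr(client_ip, 1)
    if count is None:
        # first request in this window: create the counter, expire in 60s
        mc.set(client_ip, 1, expire=60)
        return True
    return count <= REQUEST_LIMIT
```

Once the key expires after 60 seconds, `incr` fails again, the counter is recreated, and the window restarts.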
As we consider the IP address of a user to be Personally Identifiable Information (PII), we have to protect this data. We ensure this protection by running the Memcached inside an enclave.
Secure NGINX Proxy Server
We employ NGINX as the main entry point of the service and to ensure secure communication over the wire with TLS. NGINX requires users to send their requests over TLS, as we specify in its configuration. NGINX then forwards such requests to the FastAPI service, also using TLS. It thereby targets the correct FastAPI port, enabling users to send requests to the service without having to specify the port themselves.
As NGINX thereby handles the sensitive contents of the documents, we run the NGINX proxy server in an enclave as well.
TLS Certificates
We must secure the communication (1) between the application's services and (2) between the user and the application. Therefore, we need to issue and provision multiple certificates:
- MariaDB: requires server certificates
- memcached: requires server certificates
- FastAPI: requires server certificates and client certificates for the Memcached and MariaDB for client verification
- nginx: requires server certificates, whereby the certification authority (CA) certificate must be accessible to users, and client certificates for the FastAPI for client verification.
An overview of how these certificates secure the communication is as follows:
We provision these certificates using SCONE policies. We now illustrate the use of the policies for certificate provisioning by inspecting the NGINX policy.
secrets:
  # nginx - fastapi tls
  - name: FASTAPI_CLIENT_CERT
    import:
      session: $FASTAPI_SESSION
      secret: FASTAPI_CLIENT_CERT
  - name: FASTAPI_CA_CERT
    import:
      session: $FASTAPI_SESSION
      secret: FASTAPI_CA_CERT
  # specific for nginx - client tls
  - name: server-key # automatically generate SERVER server certificate
    kind: private-key
  - name: server # automatically generate SERVER server certificate
    private_key: server-key
    issuer: SERVER_CA_CERT
    kind: x509
    dns:
      - $NGINX_HOST
      - secure-doc-management
  - name: SERVER_CA_KEY # export session CA certificate as SERVER CA certificate
    kind: private-key
  - name: SERVER_CA_CERT # export session CA certificate as SERVER CA certificate
    kind: x509-ca
    common_name: SERVER_CA
    private_key: SERVER_CA_KEY
    export_public: true
In the first section, `# nginx - fastapi tls`, the NGINX session imports the client certificate and the CA certificate from the FastAPI session. It uses the CA certificate to identify the FastAPI service, and identifies itself to the service using the client certificate.
In the second section, `# specific for nginx - client tls`, we issue server certificates for the NGINX. For the NGINX server CA certificate, we set `export_public: true`. This setting allows users to extract the CA certificate corresponding to this policy, thus enabling them to verify the NGINX server.
To make the certificates available to the NGINX service, we inject the certificates into the NGINX file system. We specify this injection in the NGINX policy using the previous SCONE policy secrets as follows:
images:
  - name: nginx_image
    injection_files:
      # nginx - fastapi tls
      - path: /etc/nginx/fastapi-ca.crt
        content: $$SCONE::FASTAPI_CA_CERT.chain$$
      - path: /etc/nginx/fastapi-client.crt
        content: $$SCONE::FASTAPI_CLIENT_CERT.crt$$
      - path: /etc/nginx/fastapi-client.key
        content: $$SCONE::FASTAPI_CLIENT_CERT.key$$
      # specific for nginx - client tls
      - path: /etc/nginx/server.crt
        content: $$SCONE::server.crt$$
      - path: /etc/nginx/server.key
        content: $$SCONE::server.key$$
The NGINX can then access these injected files as normal files, e.g., in its configuration:
events {}

http {
  server {
    listen 443 ssl;
    server_name secure-doc-management-nginx-scone;

    ssl_certificate /etc/nginx/server.crt;
    ssl_certificate_key /etc/nginx/server.key;

    location / {
      proxy_pass https://secure-doc-management-fastapi-scone:8000;
      proxy_ssl_certificate /etc/nginx/fastapi-client.crt;
      proxy_ssl_certificate_key /etc/nginx/fastapi-client.key;
      proxy_ssl_trusted_certificate /etc/nginx/fastapi-ca.crt;
      proxy_ssl_verify on;
      proxy_ssl_session_reuse on;
    }
  }
}
TLS-Based Mutual Attestation
As we observed in the previous section, we encrypt the communication between the NGINX and the FastAPI using TLS, enabled by SCONE policies. The NGINX and FastAPI policies export the necessary certificates to each other by referencing each other. In doing so, they also attest each other: each policy verifies the policy it references, and additionally checks that the corresponding service satisfies all requirements specified in that service's policy.
We can easily enforce mutual attestation using TLS client authentication. We illustrate this attestation with the FastAPI's policy, which ensures TLS-based attestation between itself and the NGINX service. This FastAPI policy generates a FastAPI CA certificate (`FASTAPI_CA_CERT`) and a FastAPI server certificate (`fastapi`), as well as a corresponding FastAPI client certificate (`FASTAPI_CLIENT_CERT`) and client key (`FASTAPI_CLIENT_KEY`). The policy exports the client certificate and private key to the NGINX policy.
secrets:
  ...
  # specific for fastapi - nginx tls
  - name: fastapi-key # automatically generate FASTAPI server certificate
    kind: private-key
  - name: fastapi # automatically generate FASTAPI server certificate
    private_key: fastapi-key
    issuer: FASTAPI_CA_CERT
    kind: x509
    dns:
      - $FASTAPI_HOST
  - name: FASTAPI_CLIENT_KEY
    kind: private-key
    export:
      - session: $NGINX_SESSION
  - name: FASTAPI_CLIENT_CERT # automatically generate client certificate
    private_key: FASTAPI_CLIENT_KEY
    issuer: FASTAPI_CA_CERT
    common_name: FASTAPI_CLIENT_CERT
    kind: x509
    export:
      - session: $NGINX_SESSION # export client cert/key to upload session
  - name: FASTAPI_CA_KEY # export session CA certificate as FASTAPI CA certificate
    kind: private-key
  - name: FASTAPI_CA_CERT # export session CA certificate as FASTAPI CA certificate
    kind: x509-ca
    common_name: FASTAPI_CA
    private_key: FASTAPI_CA_KEY
    export:
      - session: $NGINX_SESSION # export the session CA certificate to upload session
We thereby replace `$NGINX_SESSION` with the policy name of the NGINX. For increased security, we can also specify the corresponding session hash. In this scenario, both policies reside on the same SCONE CAS; in more complex cases, the policies can be stored on different SCONE CAS instances.
The NGINX can then import the FastAPI CA certificate, client certificate and private key, as we saw before:
secrets:
  # nginx - fastapi tls
  - name: FASTAPI_CLIENT_CERT
    import:
      session: $FASTAPI_SESSION
      secret: FASTAPI_CLIENT_CERT
  - name: FASTAPI_CA_CERT
    import:
      session: $FASTAPI_SESSION
      secret: FASTAPI_CA_CERT
  # specific for nginx - client tls
  - name: server-key # automatically generate SERVER server certificate
    kind: private-key
  - name: server # automatically generate SERVER server certificate
    private_key: server-key
    issuer: SERVER_CA_CERT
    kind: x509
    dns:
      - $NGINX_HOST
      - secure-doc-management
  - name: SERVER_CA_KEY # export session CA certificate as SERVER CA certificate
    kind: private-key
  - name: SERVER_CA_CERT # export session CA certificate as SERVER CA certificate
    kind: x509-ca
    common_name: SERVER_CA
    private_key: SERVER_CA_KEY
    export_public: true
`$FASTAPI_SESSION` is the name of the exporting FastAPI policy. The NGINX can then use these exported secrets, as we also saw before:
images:
  - name: nginx_image
    injection_files:
      # nginx - fastapi tls
      - path: /etc/nginx/fastapi-ca.crt
        content: $$SCONE::FASTAPI_CA_CERT.chain$$
      - path: /etc/nginx/fastapi-client.crt
        content: $$SCONE::FASTAPI_CLIENT_CERT.crt$$
      - path: /etc/nginx/fastapi-client.key
        content: $$SCONE::FASTAPI_CLIENT_CERT.key$$
To ensure that the SCONE CAS supplying these policies is authentic, we typically attest the SCONE CAS using the SCONE CAS CLI before uploading any policies.
Binary FS for FastAPI
As we mentioned before, we secure our FastAPI server using `binary-fs`. We use the tool while building the initial Docker image. We separate this process into three stages. In the first stage, we apply the `binaryfs` command to create a file system `.c` file. In the second stage, we compile the `.c` file into an `.so` file using SCONE's `gcc`. In the third stage, we link the generated `.so` file into the binary using `patchelf`. We apply these stages in the Dockerfile as follows:
# First stage: apply the binary-fs
FROM sconecuratedimages/apps:python-3.7.3-alpine3.10 AS binary-fs
COPY rest_api.py /.
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

# here we apply the binary-fs command to create the file system .c file
RUN rm /usr/lib/python3.7/config-3.7m-x86_64-linux-gnu/libpython3.7m.a && \
    SCONE_MODE=auto scone binaryfs / /binary-fs.c -v \
    --include '/usr/lib/python3.7/*' \
    --include /lib/libssl.so.1.1 \
    --include /lib/libcrypto.so.1.1 \
    --include '/lib/libz.so.1*' \
    --include '/usr/lib/libbz2.so.1*' \
    --include '/usr/lib/libsqlite3.so.0*' \
    --include '/usr/lib/libev.so.4*' \
    --include '/usr/lib/libffi.so.6*' \
    --include '/usr/lib/libexpat.so.1*' \
    --include /rest_api.py

# Second stage: compile the binary fs
FROM registry.scontain.com/sconecuratedimages/crosscompilers:alpine-scone5 as crosscompiler
COPY --from=binary-fs /binary-fs.c /.
RUN scone gcc /binary-fs.c -O0 -shared -o /libbinary-fs.so

# Third stage: patch the binary-fs into the enclave executable
FROM registry.scontain.com/sconecuratedimages/apps:python-3.7.3-alpine3.10
COPY --from=crosscompiler /libbinary-fs.so /.
RUN apk add --no-cache patchelf && \
    patchelf --add-needed libbinary-fs.so `which python3` && \
    apk del patchelf
ENV SCONE_HEAP=512M
ENV SCONE_LOG=debug
ENV LD_LIBRARY_PATH="/"
CMD sh -c "python3 /rest_api.py"
The binary also requires certain `/etc/` files for networking within the cluster, e.g., `/etc/resolv.conf`. Currently, we simply inject these files into the confidential service using the FastAPI's policy.
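Such an injection follows the same `injection_files` pattern shown earlier for the NGINX. The fragment below is an illustrative sketch only: the image name `fastapi_image` and the literal file contents are assumptions, not taken from the actual deployment.

```yaml
images:
  - name: fastapi_image   # illustrative name, not from the actual policy
    injection_files:
      # networking files the enclave's file system needs inside the cluster;
      # the nameserver/search values below are placeholders
      - path: /etc/resolv.conf
        content: |
          nameserver 10.0.0.10
          search default.svc.cluster.local svc.cluster.local cluster.local
```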
The source code of this example is available in our repository. Ask us for permission, and then continue with the following commands:
git clone https://gitlab.scontain.com/community/secure-doc-management.git
cd secure-doc-management
Execution on a Kubernetes Cluster
This example is intended to be run on a Kubernetes cluster.
Install SCONE Services
Get access to SconeApps (see https://sconedocs.github.io/helm/):
helm repo add sconeapps https://${GH_TOKEN}@raw.githubusercontent.com/scontain/SconeApps/master/
helm repo update
Give SconeApps access to the private docker images (see SconeApps):
export SCONE_HUB_USERNAME=...
export SCONE_HUB_ACCESS_TOKEN=...
export SCONE_HUB_EMAIL=...
kubectl create secret docker-registry sconeapps --docker-server=registry.scontain.com --docker-username=$SCONE_HUB_USERNAME --docker-password=$SCONE_HUB_ACCESS_TOKEN --docker-email=$SCONE_HUB_EMAIL
Start Local Attestation Service (LAS) on Azure (we use a remote CAS in this case):
# nodeSelector may have to be adjusted according to your nodes
helm install las sconeapps/las \
--set useSGXDevPlugin=azure \
--set sgxEpcMem=8 \
--set image=registry.scontain.com/sconecuratedimages/kubernetes:las-scone5 \
--set nodeSelector.agentpool="confcompool1"
To set up on a vanilla Kubernetes cluster instead:
helm install las sconeapps/las --set service.hostPort=true
helm install sgxdevplugin sconeapps/sgxdevplugin
Run the Application
Start by specifying the Docker image repositories to which you may push:
export MARIADB_TARGET_IMAGE=your/repo:mariadb-protected
export MEMCACHED_TARGET_IMAGE=your/repo:memcached-tls-protected
export NGINX_TARGET_IMAGE=your/repo:nginx-proxy-server-protected
Then use the Helm chart in `./secure-doc-management` to deploy the application to a Kubernetes cluster. We strongly recommend using the script:
export NAMESPACE=<your_kubernetes_namespace> # e.g. default
./setup-secure-doc-management.sh
# without Azure:
# export USE_AZURE=false; ./setup-secure-doc-management.sh
Test the Application
After all resources are `Running`, you can test the API via Helm:
helm test secure-doc-management
Helm will run a pod with a couple of pre-set queries to check if the API is working properly.
Access the Application
For ease of use, we access the application from within the cluster through a client pod, which the Helm chart also deploys.
kubectl exec --stdin --tty $(kubectl get pods --selector=app.kubernetes.io/name=client-scone -o jsonpath='{.items[*].metadata.name}') -- /bin/bash
The IP of the host is stored in `$NGINX_HOST` in the pod. The pod has also retrieved the certificate of the application beforehand, storing it in `/tmp/nginx-ca.crt`. The exact commands used to establish the secure communication with the application are:
echo "Attest SCONE CAS"
scone cas attest --accept-configuration-needed --accept-group-out-of-date --only_for_testing-debug --only_for_testing-ignore-signer $SCONE_CAS_ADDR $CAS_MRENCLAVE
echo "Get SCONE CAS CACERT"
scone cas show-certificate $SCONE_CAS_ADDR > /tmp/scone-ca.crt
echo "Get nginx public CACERT from CAS REST API"
echo -e $(curl --cacert /tmp/scone-ca.crt https://$SCONE_CAS_ADDR:8081/v1/values/session=$NGINX_CONFIG_ID,secret=SERVER_CA_CERT | jq '.value') | head -c -2 | tail -c +2 > /tmp/nginx-ca.crt
Now, from within the pod, you can perform queries such as:
curl --cacert /tmp/nginx-ca.crt -ubobross:password123 -X GET https://$NGINX_HOST/users/create_account
curl --cacert /tmp/nginx-ca.crt -ubobross:password123 -X POST -H "Content-Type: application/json" -d '{"record_id":"31","content":"Ever make mistakes in life? Lets make them birds. Yeah, theyre birds now."}' https://$NGINX_HOST/documents/create_document
curl --cacert /tmp/nginx-ca.crt -ubobross:password123 -X GET https://secure-doc-management-nginx-scone/documents/31
Clean Up
To uninstall the charts we installed during this demo, execute:
helm uninstall las
helm uninstall secure-doc-management
# if you are not using Azure:
# helm uninstall sgxdevplugin