Turbolift is a distribution interface for Rust programs made by Dominic Burkart. Today, it provides a simple interface for distributing programs over a Kubernetes cluster. Support for other distribution managers is planned.
How It Works
With Turbolift, the programmer tags specific functions to be automatically converted into services, which are then distributed; Turbolift handles communication between the main program (which is not distributed) and each generated service. Distribution can be activated with a feature defined in the project's Cargo.toml (cargo doc, example).
Here is an example of a program that distributes a function as a Kubernetes service using Turbolift:
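A minimal sketch of such a program is shown below. The API names used here (the on macro, K8s, with_max_replicas, and the lazy_static global) are assumptions and may differ from the current Turbolift API; treat this as an illustration of the shape of the code, not a copy-paste recipe.

```rust
// Hedged sketch of a Turbolift program; exact API names (`on`,
// `K8s`, `with_max_replicas`) are assumptions and may vary by version.
use lazy_static::lazy_static;
use std::sync::Mutex;
use turbolift::kubernetes::K8s;
use turbolift::on;

// Global state manager: one Kubernetes interface shared by all
// tagged functions.
lazy_static! {
    static ref K8S: Mutex<K8s> = Mutex::new(K8s::with_max_replicas(2));
}

// Tagging `square` turns it into a Kubernetes service at compile time.
#[on(K8S)]
fn square(u: u64) -> u64 {
    u * u
}

fn main() {
    // The call is now async and returns a Result, so we block on the
    // future and unwrap the distribution result.
    let result = futures::executor::block_on(square(5)).unwrap();
    println!("square(5) = {}", result);
}
```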
If the tagged function is not already async (as above), Turbolift makes it async. Aside from returning a future, Turbolift wraps the output type T in a result type, Result<T, turbolift::DistributionError>.
There are no additional configuration files; the orchestration is part of the code. There isn't much configuration at all: Turbolift doesn't provide the customizability of a more complex distribution interface. But the exciting thing about Turbolift is that, for programs that do fit the pattern, the code change needed to enable distribution is extremely minor.
When a programmer writes a program that uses Turbolift, they declare a global state manager and tag functions to be distributed as services using a macro.
At compile time, these macros perform two important tasks. First, the macros extract the necessary dependencies to run the function and rewrite the function as an HTTP service. The source code for these derived services is then stored in the program binary, to be accessed at runtime. Second, the macros rewrite the function in the main program to instantiate and then call the newly generated service, using the global state manager.
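Conceptually, the compile-time rewrite can be pictured as follows. Every name below (ensure_deployed, SQUARE_SOURCE, the service client) is hypothetical and exists only for illustration; this is not Turbolift's real expansion.

```rust
// As written by the programmer:
#[on(K8S)]
fn square(u: u64) -> u64 {
    u * u
}

// Conceptual expansion (hypothetical names, not the real internals):
// the extracted service source is embedded in the binary, and the
// local function becomes an async client that instantiates the
// service via the global state manager, then calls it over HTTP.
async fn square(u: u64) -> Result<u64, turbolift::DistributionError> {
    // SQUARE_SOURCE: the derived service's source, stored at compile time.
    let service = K8S.lock()?.ensure_deployed("square", SQUARE_SOURCE).await?;
    let response = service.get(&format!("square/{}", u)).await?;
    Ok(response.parse()?)
}
```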
At runtime, the source code for each derived service is transformed into a format readable by the cluster manager. For Kubernetes, this means putting the source code into a Docker image, and requesting the necessary pod, deployment, service, autoscaler, and ingress.
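As a rough picture of that packaging step, a minimal Dockerfile for a derived service might look like the following. This is an illustration only, not Turbolift's actual image template.

```dockerfile
# Hypothetical image for a derived service (illustration only).
FROM rust:1.70
COPY . /service
WORKDIR /service
RUN cargo build --release
# The built binary serves HTTP requests, including the health probes.
CMD ["./target/release/service"]
```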
When the main program completes and Turbolift's global state is dropped, cleanup code is executed to remove the distributed services for the specific session. This means that if your code panics or your network fails, your services may not be deleted. Make sure to avoid panics in your code, and to clean out old services from your cluster if you experience network problems. Images are not cleaned up by Turbolift.
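The cleanup-on-drop pattern described above can be sketched in plain Rust. The Session type here is a stand-in for Turbolift's global state, not its real implementation: cleanup runs when the value is dropped at normal exit, but an abort or a failure partway through deletion skips it, which is exactly why crashed runs can leave services behind.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether cleanup ran, so the effect of Drop is observable.
static CLEANED_UP: AtomicBool = AtomicBool::new(false);

struct Session {
    name: String,
}

impl Drop for Session {
    fn drop(&mut self) {
        // In Turbolift, this is where the session's Kubernetes services
        // would be deleted; here we just record that cleanup ran.
        CLEANED_UP.store(true, Ordering::SeqCst);
        println!("cleaned up services for session {}", self.name);
    }
}

fn run_program() {
    let session = Session { name: "demo".to_string() };
    println!("running main program as session {}", session.name);
    // `session` is dropped here, so cleanup runs on a normal return.
    // std::process::abort() before this point would skip Drop entirely,
    // which is why crashes can leave services on the cluster.
}

fn main() {
    run_program();
    assert!(CLEANED_UP.load(Ordering::SeqCst));
}
```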
Each time Turbolift is executed, a new version of each service is generated with a unique name, to avoid collisions in case multiple programs are distributing functions with the same name on the same cluster at the same time.
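One way to build such collision-free names, sketched below with the standard library only. Turbolift's actual naming scheme is not shown in this post; the idea is simply to suffix the function name with values unique to this run.

```rust
use std::process;
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

// Counter guarantees uniqueness within one process even if the clock
// is coarse enough to return the same timestamp twice.
static NAME_COUNTER: AtomicU64 = AtomicU64::new(0);

// Sketch of per-execution unique naming (not Turbolift's real scheme):
// two programs deploying a function named "square" to the same cluster
// at the same time get distinct service names.
fn unique_service_name(function_name: &str) -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before the unix epoch")
        .as_nanos();
    let n = NAME_COUNTER.fetch_add(1, Ordering::SeqCst);
    format!("{}-{}-{}-{}", function_name, process::id(), nanos, n)
}

fn main() {
    // Two deployments of the same function get different names.
    println!("{}", unique_service_name("square"));
    println!("{}", unique_service_name("square"));
}
```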
Turbolift sets up the following while targeting Kubernetes:
- Async/.await service instantiation and calls.
- Autoscaling based on CPU pressure.
- Startup, readiness, and liveness probes.
Turbolift extracts the code for a given function, transforms it into a simple server program with the relevant probes, and generates a generic microservice with the same architecture for each distributed function. These are the Kubernetes components generated by Turbolift:
- Pod
- Deployment
- Service
- Horizontal Pod Autoscaler
- Ingress
Turbolift requires the user to:
- Set the scaling limit N. The number of pods backing a service scales from 1 to N replicas depending on traffic. This is done when instantiating the Turbolift k8s interface.
- Provide a function that handles copying a local docker image to a registry accessible by the cluster. We may provide helper functions for common approaches in the future.
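One common shape for that image-copy step is to tag the local image for the registry and push it. The sketch below is hypothetical (the exact callback signature Turbolift expects is not shown here); it builds the docker commands as data so the logic is easy to inspect and test, rather than executing them.

```rust
// Hypothetical sketch of the user-provided image-copy step: move a
// locally built image to a registry the cluster can pull from, by
// tagging it for the registry and pushing. Commands are returned as
// data; a real callback would run them and check exit statuses.
fn registry_commands(local_image: &str, registry: &str) -> Vec<Vec<String>> {
    let remote = format!("{}/{}", registry, local_image);
    vec![
        vec![
            "docker".to_string(),
            "tag".to_string(),
            local_image.to_string(),
            remote.clone(),
        ],
        vec!["docker".to_string(), "push".to_string(), remote],
    ]
}

fn main() {
    // In a real callback, each command would be run with
    // std::process::Command and its exit status checked.
    for cmd in registry_commands("square:latest", "registry.local:5000") {
        println!("{}", cmd.join(" "));
    }
}
```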
The Kubernetes implementation assumes that ingresses can be generated and that they will not be exposed to public traffic. Here are the requirements and technologies used in the CI:
- Kubernetes version 1.19+.
- An ingress that works with the networking.k8s.io/v1 API. The project CI is built with ingress-nginx. Other ingresses may work, but are not continuously tested.
- Access to the target cluster via kubectl in the program's runtime environment.
- In-cluster environment variables or a local kubeconfig compatible with kube-rs's try_default(). This should already be configured if you have a working kubectl.
- Ingress accessible on localhost:80 at runtime.
- Access to Docker at runtime.
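A preflight check for the runtime requirements above can be written in a few lines. Turbolift itself does not necessarily perform this check; it is simply a useful pattern before a program tries to distribute anything.

```rust
use std::process::Command;

// Returns true if the named binary can be spawned. `output` returns
// Err only when the process can't be started at all, e.g. because
// the tool isn't on PATH; a nonzero exit status still counts as Ok.
fn tool_available(tool: &str, version_flag: &str) -> bool {
    Command::new(tool).arg(version_flag).output().is_ok()
}

fn main() {
    // Check the CLI tools the runtime environment must provide.
    for (tool, flag) in [("kubectl", "version"), ("docker", "--version")] {
        println!("{} available: {}", tool, tool_available(tool, flag));
    }
}
```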
Turbolift itself can be installed using Cargo and works in any environment that meets the requirements. If you want to set up a local cluster that can work with Turbolift, see the install guide for a Turbolift-compatible Raspberry Pi cluster. If you have a Turbolift-ready cluster and want to adapt a current project, see this PR for an example of using Turbolift in a Docker-based project.