OttoScalr is an autoscaling solution for Kubernetes workloads. It continuously monitors workload resource utilization patterns and autonomously configures and tunes horizontal pod autoscaling policies (HPAs, ScaledObjects), ensuring optimal resource utilization and cost efficiency. This relieves developers of having to continuously retune their HPAs as the traffic/load patterns and performance profile of a workload change. OttoScalr is designed to work with multiple Kubernetes workload types, such as Deployments and Argo Rollouts. With its pluggable design for policy recommenders, it can be extended with custom policy generation algorithms to suit the needs of workload and cluster administrators.
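For context, the objects OttoScalr manages on your behalf are standard horizontal pod autoscaling policies. The sketch below is a plain `autoscaling/v2` HorizontalPodAutoscaler for a hypothetical Deployment (all names are placeholders); OttoScalr's recommenders derive and keep tuning values such as the replica bounds and target utilization from the workload's observed usage patterns.

```yaml
# Illustrative only: a standard autoscaling/v2 HPA for a hypothetical Deployment.
# OttoScalr's recommenders derive and continuously tune values like these
# (min/max replicas, target utilization) from observed utilization patterns.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa        # hypothetical name
  namespace: my-namespace     # hypothetical namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service          # hypothetical workload
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```

For workloads scaled through KEDA, the equivalent object is a ScaledObject; for Argo Rollouts, the `scaleTargetRef` points at a Rollout instead of a Deployment.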
To get started with OttoScalr, you'll need a Kubernetes cluster up and running where OttoScalr can be installed. To understand the concepts and constructs that drive OttoScalr, please go through the OttoScalr wiki.
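As a quick prerequisite check (assuming `kubectl` is already configured against the target cluster), you can confirm the cluster is reachable before proceeding:

```sh
# Verify that kubectl can reach the cluster where OttoScalr will be installed
kubectl cluster-info
kubectl version
```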
OttoScalr is easy to install and configure in any Kubernetes cluster. The installation process involves setting up the necessary Kubernetes resources for OttoScalr to run; detailed instructions are provided in the installation and configuration guide.
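As a rough sketch only (the manifest paths, chart reference, and namespace below are placeholders, not the documented procedure; refer to the installation and configuration guide for the actual steps), installation typically comes down to applying OttoScalr's CRDs and controller resources to the cluster:

```sh
# Hypothetical commands; paths, chart reference, and namespace are placeholders.
# Option 1: apply manifests from a checkout of the OttoScalr repository
kubectl apply -f config/crd/      # CRDs (path is illustrative)
kubectl apply -f config/manager/  # controller Deployment, RBAC, etc. (path is illustrative)

# Option 2: if a Helm chart is available, install it into its own namespace
helm install ottoscalr <chart-reference> --namespace ottoscalr --create-namespace
```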
Contributions to OttoScalr are welcome! Please read our contributing guide to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes to OttoScalr.
If you encounter any problems or have any questions about OttoScalr, please open an issue on our GitHub repository.