Our operator has four custom resources: Resources, Resource Monitors, Bookings, and Booking Schedulers. The following is general information on how they represent running instances on a cloud provider.
Their example manifests can be found in the config/samples directory. Once we modify their details, we can directly apply them to the cluster.
Prerequisites
This section assumes that you've set up your cloud instances to be manageable by the operator. To make sure you've done that, check out tagging instances.
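For reference, tagging an instance from the AWS CLI looks roughly like the sketch below; the instance ID and the tag key/value are placeholders, so use the key and value described in the tagging instances section.
# Hypothetical example of tagging an EC2 instance so the operator can discover it.
# The tag key "resource-booking/tag" is a placeholder, not the operator's actual key.
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=resource-booking/tag,Value=analytics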
Adding Resources
We start with the resources. They represent one or more instances grouped by a tag name. Once we have tagged our resources per the information here, we are ready to create the resources on the cluster.
With Resource Monitor
A quick and easy way to represent tagged cloud instances as resources on the cluster is to use a resource monitor.
Resource monitors continuously scan the cloud provider for changes to the instances and apply those changes to the cluster. At the moment, only newly created resources that are not yet present on the cluster will trigger a change.
As seen in the sample manifest, they require just the type of a supported cloud resource.
apiVersion: manager.kotaico.de/v1
kind: ResourceMonitor
metadata:
  labels:
    app.kubernetes.io/name: resourcemonitor
    app.kubernetes.io/instance: ec2
    app.kubernetes.io/part-of: resource-booking-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: resource-booking-operator
  name: ec2
spec:
  type: ec2
We create the resource monitor on the cluster with kubectl:
kubectl apply -f config/samples/manager_v1_resourcemonitor.yaml
Once created, a resource monitor will populate the cluster with initial resources of the given type and will continue to scan for newly tagged instances until it is removed.
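To verify what the monitor discovered, we can list the Resource objects it created. The fully qualified name below assumes the CRD's plural is resources; qualifying it with the API group avoids clashing with other kinds of the same name.
# List the Resource objects created by the monitor
# (assumes the CRD plural is "resources" under the manager.kotaico.de group).
kubectl get resources.manager.kotaico.de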
Manually
A more involved way of creating resources is applying their manifests directly to the cluster.
Initially, we can reuse the sample manifest in config/samples/manager_v1_resource.yaml that looks like this:
apiVersion: manager.kotaico.de/v1
kind: Resource
metadata:
  labels:
    app.kubernetes.io/name: resource
    app.kubernetes.io/instance: analytics
    app.kubernetes.io/part-of: resource-booking-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: resource-booking-operator
  name: ec2.analytics
spec:
  booked_by: ""
  booked_until: ""
  tag: analytics
  type: ec2
Note that the spec.booked_by and spec.booked_until fields need to stay empty, as this is our initially desired state; they are filled in by the controller only when there is an active booking.
We create the resource on the cluster with kubectl:
kubectl apply -f config/samples/manager_v1_resource.yaml
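We can then inspect the object to confirm that the controller picked it up; as before, the fully qualified plural name is an assumption.
# Inspect the resource; the controller fills in its status on the next reconcile.
kubectl get resources.manager.kotaico.de ec2.analytics -o yaml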
Creating Bookings
After we make sure that we have created a resource on the cluster, we can create a booking for it.
The spec of a booking requires a resource name (the tag we used for the instances) and the start and end times of the booking. The default date-time format we use is RFC3339.
The chosen time slot can be happening now, which marks the booking status as IN PROGRESS and the resource as booked, or it can be at some point in the future, which sets the booking status to SCHEDULED.
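If you are composing the timestamps by hand, the date command is a quick way to produce RFC3339 values in UTC. The relative-offset syntax below is GNU date; on BSD/macOS the equivalent flag is -v.
# Current time in RFC3339 (UTC), usable for start_at
date -u +"%Y-%m-%dT%H:%M:%SZ"
# One hour from now, usable for end_at (GNU date syntax)
date -u -d "+1 hour" +"%Y-%m-%dT%H:%M:%SZ"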
apiVersion: manager.kotaico.de/v1
kind: Booking
metadata:
  labels:
    app.kubernetes.io/name: booking
    app.kubernetes.io/instance: backup-jan10
    app.kubernetes.io/part-of: resource-booking-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: resource-booking-operator
  name: backup-jan10
spec:
  resource_name: ec2.analytics
  start_at: 2023-01-10T22:35:00Z
  end_at: 2023-01-10T22:45:00Z
  user_id: cd39ad8bc3
We create the booking on the cluster with kubectl:
kubectl apply -f config/samples/manager_v1_booking.yaml
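To check whether the booking was picked up, we can list the bookings or inspect one directly. The plural and singular names below assume the default naming for the Booking CRD, and the exact columns shown depend on the CRD's printer columns.
# List bookings and their current state
kubectl get bookings
# Inspect the full spec and status of a single booking
kubectl get booking backup-jan10 -o yaml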
Creating bookings on a schedule
For some purposes, creating bookings manually becomes cumbersome. This is where BookingSchedulers come in: they are a way to create bookings on a schedule. They require three fields:
- spec.schedule - a cron expression that defines when the booking should be created (e.g. 0 0 * * * for every day at midnight)
- spec.duration - the duration of the booking in minutes
- spec.bookingTemplate - a template for the booking that will be created on the schedule, using the same fields that the booking resource expects.
apiVersion: manager.kotaico.de/v1
kind: BookingScheduler
metadata:
  labels:
    app.kubernetes.io/name: bookingscheduler
    app.kubernetes.io/instance: bookingscheduler-sample
    app.kubernetes.io/part-of: resource-booking-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: resource-booking-operator
  name: bookingscheduler-sample
spec:
  schedule: "0 0 * * *"
  duration: 20
  bookingTemplate:
    resource_name: ec2.analytics
    user_id: cd39ad8bc3
Under the hood, a scheduler creates regular Booking resources; schedulers are just a scaffold for bookings, with extra capabilities for automation. The best way to debug a scheduler is to check the bookings it has created. Note that:
- Schedulers don't create bookings in the future upon creation. A single booking is created every time the schedule is triggered.
- Deleting a scheduler won't remove the bookings it created. It will only be prevented from creating new ones.
- Modifying a scheduler at any time will immediately affect its next execution, which will use the newly given values.
We create the scheduler on the cluster with kubectl:
kubectl apply -f config/samples/manager_v1_bookingscheduler.yaml
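Following the debugging tip above, we can check which bookings the scheduler has produced so far, for example by sorting them by creation time; how the created bookings are named is up to the controller, so we simply look at the most recent entries.
# Show bookings ordered by creation time; the newest entries at the bottom
# should be the ones created by the scheduler.
kubectl get bookings --sort-by=.metadata.creationTimestamp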
How we watch for changes
Every change to a custom resource triggers its Reconcile controller function, which is responsible for updating the spec and status of the resource.
Resource’s Reconcile function runs every 30 seconds, which is needed to provide up-to-date information about the instances it watches over.
Booking’s Reconcile runs every minute, so that we can let the Resource know that there is an active booking, or that the currently active booking has finished. Finished bookings are not constantly checked: once a Booking is marked as FINISHED, its Reconcile is never called again.
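To observe these periodic updates, we can watch the custom resources; kubectl's -w flag streams changes as the controllers reconcile them. As before, the fully qualified plural for the Resource kind is an assumption.
# Stream updates as the controllers reconcile; press Ctrl+C to stop.
kubectl get resources.manager.kotaico.de -w
kubectl get bookings -w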