Getting Started¶
Prerequisites¶
- Make
- Go version 1.22 or later
- Docker (for building/pushing controller images)
- An available test cluster. A local kind or minikube cluster will work just fine in many cases.
- Operator Builder installed.
- kubectl installed.
- A set of static Kubernetes manifests that can be used to deploy your workload. It is highly recommended that you apply these manifests to a test cluster and verify the resulting resources work as expected. If you don't have a workload of your own to use, you can use the examples provided in this guide.
Guide¶
This guide will walk you through the creation of a Kubernetes operator for a single workload. This workload can consist of any number of Kubernetes resources and will be configured with a single custom resource. Please review the prerequisites prior to attempting to follow this guide.
This guide consists of the following steps:
- Create a repository.
- Determine what fields in your static manifests will need to be configurable for deployment into different environments. Add commented markers to the manifests. These will serve as instructions to Operator Builder.
- Create a workload configuration for your project.
- Use the Operator Builder CLI to generate the source code for your operator.
- Test the operator against your test cluster.
- Build and install your operator's controller manager in your test cluster.
- Build and test the operator's companion CLI.
Step 1: Create a Repo¶
Create a new directory for your operator's source code. We recommend you follow the standard code organization guidelines. In that directory initialize a new git repo.
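From inside that new directory:

```bash
git init
```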
Next, initialize a new Go module. The module should be the import path for your project, usually something like github.com/user-account/project-name. Use the command go help importpath for more info.
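Substituting your own import path:

```bash
go mod init github.com/user-account/project-name
```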
Lastly create a directory for your static manifests. Operator Builder will use these as a source for defining resources in your operator's codebase. It must be a hidden directory so as not to interfere with source code generation.
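For example:

```bash
mkdir .source-manifests
```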
Put your static manifests in this .source-manifests directory. In the next step we will add commented markers to them. Note that these static manifests can be in one or more files, and you can have one or more manifests (separated by ---) in each file. Just organize them in a way that makes sense to you.
Step 2: Add Manifest Markers¶
Look through your static manifests and determine which fields will need to be configurable for deployment into different environments. Let's look at a simple example to illustrate. Following is a Deployment, Ingress and Service that may be used to deploy a workload.
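The manifests below are a representative sketch of such a workload; the names, image and ports are illustrative and will be reused through the rest of this guide. The comments call out the values that need to change between environments.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webstore-deploy
  labels:
    app: webstore
spec:
  replicas: 2                  # this value will vary between environments
  selector:
    matchLabels:
      app: webstore
  template:
    metadata:
      labels:
        app: webstore
    spec:
      containers:
      - name: webstore-container
        image: nginx:1.17      # this value will vary between environments
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webstore-ing
spec:
  rules:
  - host: app.acme.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webstore-svc
            port:
              number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webstore-svc
spec:
  selector:
    app: webstore
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```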
There are two fields in the Deployment manifest that will need to be configurable. They are noted with comments. The Deployment's replicas and the Pod's container image will change between different environments. For example, in a dev environment the number of replicas will be low and a development version of the app will be run. In production, there will be more replicas and a stable release of the app will be used. In this example we don't have any configurable fields in the Ingress or Service.
Next we need to use +operator-builder:field markers in comments to inform Operator Builder that the operator will need to support configuration of these elements. Following is the Deployment manifest with these markers in place.
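A sketch of what this looks like; the field names webAppReplicas and webAppImage are illustrative, and the full set of marker attributes is documented on the Markers page:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webstore-deploy
  labels:
    app: webstore
spec:
  replicas: 2  # +operator-builder:field:name=webAppReplicas,default=2,type=int
  selector:
    matchLabels:
      app: webstore
  template:
    metadata:
      labels:
        app: webstore
    spec:
      containers:
      - name: webstore-container
        image: nginx:1.17  # +operator-builder:field:name=webAppImage,type=string
        ports:
        - containerPort: 8080
```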
These markers should always be provided as an in-line comment or as a head comment. The marker always begins with +operator-builder:field: or +operator-builder:collection:field:. See Markers to learn more.
Step 3: Create a Workload Config¶
Operator Builder uses a workload configuration to provide important details for your operator project. This guide uses a standalone workload. Save a workload config to your .source-manifests directory; the Operator Builder CLI can generate a starter config for you, or you can write the YAML by hand.
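A sketch of the config used for this guide's example; the domain, group, kind and CLI name are illustrative, and the field names (described below) should match what your version of Operator Builder generates:

```yaml
name: webstore
kind: StandaloneWorkload
spec:
  api:
    domain: acme.com
    group: apps
    version: v1alpha1
    kind: WebStore
    clusterScoped: false
  companionCliRootcmd:
    name: webstorectl
    description: Manage the webstore application
  resources:
    - app.yaml
```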
The name is arbitrary and can be whatever you like.
In the spec, the following fields are required:
- api.domain: This must be a globally unique name that will not be used by other organizations or groups. It will contain groups of API types.
- api.group: This is a logical group of API types used as a namespacing mechanism for your APIs.
- api.version: Provide the initial version for your API.
- api.kind: The name of the API type that will represent the workload you are managing with this operator.
- resources: An array of filenames where your static manifests live. List the relative path from the workload manifest to all the files that contain the static manifests we talked about in step 2.
For more info about API groups, versions and kinds, check out the Kubebuilder docs.
The following fields in the spec are optional:
- api.clusterScoped: If your workload includes cluster-scoped resources like namespaces, this will need to be true. The default is false.
- companionCliRootcmd: If you wish to generate source code for a companion CLI for your operator, include this field. We recommend you do; your end users will appreciate it.
- companionCliRootcmd.name: The root command your end users will type when using the companion CLI.
- companionCliRootcmd.description: The general information your end users will get if they use the help subcommand of your companion CLI.
At this point in our example, our .source-manifests directory holds the workload config alongside the static manifests.
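Assuming the file names used in this guide, the layout looks like this:

```
.source-manifests
├── app.yaml
└── workload.yaml
```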
Our StandaloneWorkload config is in workload.yaml, and the Deployment, Ingress and Service manifests are in app.yaml, which is referenced under spec.resources in our StandaloneWorkload config.
We are now ready to generate our project's source code.
Step 4: Generate Operator Source Code¶
We first use the init command to create the general scaffolding. We run this command from the root of our repo and provide a single argument with the path to our workload config.
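Assuming the config saved above, the command looks something like this (run operator-builder init --help to confirm the flag name for your version):

```bash
operator-builder init \
    --workload-config .source-manifests/workload.yaml
```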
With the basic project set up, we can now run the create api command to create a new custom API for our workload.
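Again assuming the config above:

```bash
operator-builder create api \
    --workload-config .source-manifests/workload.yaml \
    --controller \
    --resource
```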
We again provide the same workload config file. Here we also added the --controller and --resource arguments, which indicate that we want both a new controller and a new custom resource created. Note that neither flag is strictly necessary in this example; both options are set by default and are included only for clarity.
You now have a new working Kubernetes Operator! Next, we will test it out.
Step 5: Run & Test the Operator¶
Assuming you have a kubeconfig in place that allows you to interact with your cluster with kubectl, you are ready to go.
First, install the new custom resource definition (CRD).
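The generated project includes the usual kubebuilder-style Make targets, so installing the CRD is:

```bash
make install
```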
Now we can run the controller locally to test it out.
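This runs the controller against your current kubeconfig context:

```bash
make run
```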
Operator Builder created a sample manifest in the config/samples directory.
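With the illustrative group, version, kind and marker field names used in this guide, it looks something like this:

```yaml
apiVersion: apps.acme.com/v1alpha1
kind: WebStore
metadata:
  name: webstore-sample
spec:
  webAppReplicas: 2
  webAppImage: nginx:1.17
```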
You will notice the fields and values in the spec were derived from the markers you added to your static manifests.
Next, in another terminal, create a new instance of your workload with the provided sample manifest.
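The sample file name follows the usual group_version_kind pattern; adjust it to match what was generated for you:

```bash
kubectl apply -f config/samples/apps_v1alpha1_webstore.yaml
```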
You should see your custom resource sample get created. Now use kubectl to inspect your cluster and confirm the workload's resources got created. You should find all the resources that were defined in your static manifests.
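For the example workload above, something like:

```bash
kubectl get deployments,services,ingresses
```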
Clean up by stopping your controller with ctrl-c in that terminal, then remove all the resources you just created.
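Assuming the same sample file name as above:

```bash
kubectl delete -f config/samples/apps_v1alpha1_webstore.yaml
```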
Step 6: Build & Deploy the Controller Manager¶
Now let's deploy your controller into the cluster.
First export an environment variable for your container image.
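The image reference below is a placeholder; use a registry you can push to and that your test cluster can pull from:

```bash
export IMG=registry.example.com/acme/webstore-operator:v0.1.0
```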
Run the rest of the commands in this step in the same terminal, since most of them need this IMG env var.
In order to run the controller in-cluster (as opposed to running locally with make run) we will need to build a container image for it.
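Using the IMG value exported above:

```bash
make docker-build
```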
Now we can push it to a registry that is accessible from the test cluster.
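Again using the same IMG value:

```bash
make docker-push
```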
Finally, we can deploy it to our test cluster.
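This applies the manager manifests (including the CRD) to the cluster and points the controller Deployment at your image:

```bash
make deploy
```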
Next, perform the same tests from step 5 to ensure proper operation of our operator.
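As in step 5, create an instance from the sample manifest:

```bash
kubectl apply -f config/samples/apps_v1alpha1_webstore.yaml
```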
Again, verify that all the resources you expect are created.
Once satisfied, remove the instance of your workload.
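Using the same sample manifest:

```bash
kubectl delete -f config/samples/apps_v1alpha1_webstore.yaml
```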
For now, leave the controller running in your test cluster. We'll use it in Step 7.
Step 7: Build & Test Companion CLI¶
Now let's build and test the companion CLI.
You will have a make target that includes the name of your CLI.
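Assuming the webstorectl root command from the workload config above, the target will look something like this (check the generated Makefile for the exact name):

```bash
make build-webstorectl
```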
We can view the help info as follows.
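The built binary lands in the project's bin directory (again assuming the webstorectl name):

```bash
./bin/webstorectl help
```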
Your end users can use it to create a new custom resource manifest.
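Assuming the CLI's init subcommand is used for this (see its help output), redirect the result to a file of your choosing:

```bash
./bin/webstorectl init > /tmp/webstore.yaml
```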
If you would like to change any of the default values, edit the file.
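Using whatever editor you prefer:

```bash
vi /tmp/webstore.yaml
```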
Then you can apply it to the cluster.
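Apply it like any other manifest:

```bash
kubectl apply -f /tmp/webstore.yaml
```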
If your end users find they wish to make changes to the resources that aren't supported by the operator, they can generate the resources from the custom resource.
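Assuming a generate subcommand that accepts the custom resource manifest (the exact flag may differ between versions; see ./bin/webstorectl generate --help):

```bash
./bin/webstorectl generate --workload-manifest /tmp/webstore.yaml
```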
This will print the resources to stdout. These may be piped into an overlay tool or written to disk and modified before applying to a cluster.
That's it! You have a working operator without manually writing a single line of code. If you'd like to make any changes to your workload's API, you'll find the code in the apis directory. The controller's source code is in the controllers directory, and the companion CLI code is in cmd.
Don't forget to clean up. Remove the controller, CRD and the workload's resources as follows.
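A sketch of the cleanup, assuming the standard kubebuilder-style targets and the manifest created above:

```bash
# remove the workload instance created with the companion CLI
kubectl delete -f /tmp/webstore.yaml

# remove the controller and the CRD that make deploy installed
make undeploy
```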
Next Step¶
Learn about Workloads.