This guide walks you through creating a local experimental environment for the Gateway API on kind (Kubernetes in Docker), a streamlined tool for running local Kubernetes clusters. The goal isn't production-level deployment; it's learning and hands-on testing of the fundamental Gateway API concepts without the added layers of complexity often present in a live environment.
### Important Caution
Keep in mind that the setup detailed here is strictly for experimentation. None of the components are designed for production use. So once you’re ready to transition your work to a production level, you’ll want to explore a suitable [implementation](https://gateway-api.sigs.k8s.io/implementations/) that can effectively meet the demands of a production environment.
### What You’ll Learn
Throughout this guide, you’ll accomplish several fundamental tasks to help you grasp how the Gateway API operates:
- Initiate a local Kubernetes cluster using kind.
- Deploy [cloud-provider-kind](https://github.com/kubernetes-sigs/cloud-provider-kind), a provider that simplifies LoadBalancer Services and implements a Gateway API controller.
- Create a Gateway and corresponding HTTPRoute to direct traffic toward a demonstration application.
- Conduct local testing of your Gateway API configuration.
This setup serves as an excellent foundation for anyone looking to understand or experiment with the capabilities of Gateway API without needing a production-grade infrastructure.
### Getting Started: Prerequisites
Before diving into the setup, ensure you have the following installed on your machine:
- **[Docker](https://docs.docker.com/get-docker/)**: Essential for running both kind and cloud-provider-kind.
- **[kubectl](https://kubernetes.io/docs/tasks/tools/)**: The command-line interface for interacting with Kubernetes clusters.
- **[kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation)**: This tool allows you to run Kubernetes clusters in Docker containers.
- **[curl](https://curl.se/)**: Useful for testing the routes you’ll create.
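A quick way to confirm the tools are installed before you start (the exact version output will vary by machine):

```bash
docker version --format '{{.Server.Version}}'
kubectl version --client
kind version
curl --version | head -n 1
```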
### Setting Up Your Cluster
To create a new kind cluster, you simply execute the following command:
```bash
kind create cluster
```
This command will spin up a single-node Kubernetes cluster residing within a Docker container, streamlining your testing and experimentation process.
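You can confirm the cluster is up before moving on. With the default cluster name, kind sets your kubectl context to `kind-kind`:

```bash
kubectl cluster-info --context kind-kind
kubectl get nodes
```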
### Installing cloud-provider-kind
The next step involves installing [cloud-provider-kind](https://github.com/kubernetes-sigs/cloud-provider-kind/). This component brings two key functionalities:
1. A LoadBalancer controller for managing address assignments to LoadBalancer-type Services.
2. A Gateway API controller that brings the Gateway API specification to life.
Deploy it as follows, on the same host that’s running your kind cluster:
```bash
VERSION="$(basename $(curl -s -L -o /dev/null -w '%{url_effective}' https://github.com/kubernetes-sigs/cloud-provider-kind/releases/latest))"
docker run -d --name cloud-provider-kind --rm --network host -v /var/run/docker.sock:/var/run/docker.sock registry.k8s.io/cloud-provider-kind/cloud-controller-manager:${VERSION}
```
Keep in mind you'll likely need elevated privileges on some systems to access the Docker socket.
To confirm it's running, check with:
```bash
docker ps --filter name=cloud-provider-kind
```
And you can dive deeper into the logs with:
```bash
docker logs cloud-provider-kind
```
### Exploring Gateway API Resources
With your cluster primed and ready, it's time to delve into the Gateway API. The installation of cloud-provider-kind seamlessly provisions a GatewayClass named `cloud-provider-kind`, which will play a central role in your configurations.
Notably, the term "cloud-provider" in this context is a bit of a misnomer. kind itself isn't a cloud service, but cloud-provider-kind implements the same cloud-provider integration points a real cloud would, which makes it an effective local stand-in for testing cloud-native behavior such as LoadBalancer provisioning.
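You can verify that the GatewayClass is present before building on it (the resource name matches the setup above):

```bash
kubectl get gatewayclass cloud-provider-kind
```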
### Deploying Your Gateway
To bring your Gateway online, you’ll use a manifest that does several important tasks:
- It creates a new namespace called `gateway-infra`.
- It sets up a Gateway that listens on port 80.
- It accepts HTTPRoutes that conform to the `*.exampledomain.example` hostname pattern.
- It permits routes from any namespace to connect to the Gateway, though in a production scenario, you'll want to limit this to maintain security.
Here’s how you can apply this manifest:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: gateway-infra
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway
  namespace: gateway-infra
spec:
  gatewayClassName: cloud-provider-kind
  listeners:
  - name: default
    hostname: "*.exampledomain.example"
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
```
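Assuming you've saved the manifest above to a file (the name `gateway.yaml` is arbitrary), applying it is a single command:

```bash
kubectl apply -f gateway.yaml
```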
After deploying, verify that your Gateway has been correctly configured and check for an assigned address:
```bash
kubectl get gateway -n gateway-infra gateway
```
The output should confirm that the `PROGRAMMED` column displays `True` and the `ADDRESS` is populated.
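If you'd rather block until the Gateway is ready than poll by hand, `kubectl wait` can watch the `Programmed` condition (the timeout value here is arbitrary):

```bash
kubectl wait --for=condition=Programmed --timeout=120s -n gateway-infra gateway/gateway
```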
### Launching a Demo Application
Now that your Gateway is set up, the final step is to deploy a basic echo application to test your Gateway configuration. This application will:
- Listen on port 3000.
- Echo back details of requests, such as paths, headers, and environment variables.
- Operate within a dedicated `demo` namespace.
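No manifest for the echo application is shown here, so the sketch below fills that gap. The image is an assumption: the Gateway API project publishes an `echo-basic` image that listens on port 3000 and returns request details as JSON, matching the behavior described above — pin a current tag from the project's registry before applying.

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        # Assumed image; check the Gateway API project for a current tag.
        image: gcr.io/k8s-staging-gateway-api/echo-basic
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: demo
spec:
  selector:
    app: echo
  ports:
  - port: 3000
    targetPort: 3000
```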
You’re now set to begin your testing, launching into the world of Gateway API with confidence.
### Establishing an HTTPRoute
Setting up an HTTPRoute is key for directing traffic from your Gateway to the echo application. This route will be structured to:
- Handle requests sent to the hostname `some.exampledomain.example`.
- Direct that traffic to the designated echo application.
- Be associated with the Gateway situated in the `gateway-infra` namespace.
To implement this, apply the following manifest:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo
  namespace: demo
spec:
  parentRefs:
  - name: gateway
    namespace: gateway-infra
  hostnames: ["some.exampledomain.example"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: echo
      port: 3000
```
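As with the Gateway, save the manifest to a file (say, `httproute.yaml`; the name is arbitrary), apply it, and confirm the route exists:

```bash
kubectl apply -f httproute.yaml
kubectl get httproute -n demo echo
```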
### Verifying Your Route
Testing your setup is crucial. You'll need to send a request to the Gateway's IP address, using the hostname you've set, `some.exampledomain.example`. If you're in a POSIX shell environment, employ the commands below, but you might need to tweak them based on your specific setup:
```bash
GW_ADDR=$(kubectl get gateway -n gateway-infra gateway -o jsonpath='{.status.addresses[0].value}')
curl --resolve some.exampledomain.example:80:${GW_ADDR} http://some.exampledomain.example
```
Upon doing this, you should receive a JSON response resembling the following:
```json
{
  "path": "/",
  "host": "some.exampledomain.example",
  "method": "GET",
  "proto": "HTTP/1.1",
  "headers": {
    "Accept": [
      "*/*"
    ],
    "User-Agent": [
      "curl/8.15.0"
    ]
  },
  "namespace": "demo",
  "ingress": "",
  "service": "",
  "pod": "echo-dc48d7cf8-vs2df"
}
```
If you get this response, you've succeeded — your Gateway API setup is functioning properly.
### Navigating Troubleshooting Steps
It’s common for things not to work the way we expect. When that happens, a quick check on your resources can reveal what's gone awry.
### Checking Gateway Status
Start by reviewing the status of your Gateway resource:
```bash
kubectl get gateway -n gateway-infra gateway -o yaml
```
In the `status` section, ensure your Gateway shows the following:
- `Accepted: True`: confirms the Gateway was accepted by the controller.
- `Programmed: True`: indicates successful configuration of the Gateway.
- A relevant IP address shown under `.status.addresses`.
### Assessing the HTTPRoute Status
Next, examine your HTTPRoute:
```bash
kubectl get httproute -n demo echo -o yaml
```
Check the `status.parents` section for any irregularities. Common errors might include:
- A `ResolvedRefs` status set to `False` with a `BackendNotFound` reason, which signals that your backend Service doesn't exist or its name doesn't match.
- An `Accepted` status of `False`, which indicates that the route couldn't attach to the Gateway, a situation often resulting from namespace permissions or hostname mismatches.
Understanding and addressing these issues is essential for smooth operation.
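When the status conditions alone don't explain the problem, two more places to look (resource names match the setup above):

```bash
# Events often spell out why a condition is False
kubectl describe gateway -n gateway-infra gateway
kubectl describe httproute -n demo echo

# The controller itself runs in the cloud-provider-kind container
docker logs cloud-provider-kind
```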
### Final Thoughts on Your Gateway API Journey
Having worked through your local experiments with the Gateway API, you're now poised on the edge of something much bigger. The hands-on experience you've just acquired isn't merely academic; it's a crucial step toward implementing this technology in more demanding environments. You'll want to leverage that knowledge as you consider taking the next steps.
### Moving from Test to Production
Here’s the essence: while local setups using kind (Kubernetes in Docker) are great for getting acquainted with Gateway API functionalities, they fall short in production scenarios. Ensure you select a production-grade implementation tailored to your specific use-case. To begin that journey, dive into the array of [Gateway API implementations](https://gateway-api.sigs.k8s.io/implementations/) available. Each comes with distinct features that might better serve your organizational needs.
### Deepening Your Understanding
If you're keen to grasp the finer points of the Gateway API—like TLS management, traffic routing, and header adjustments—make sure to explore the comprehensive [Gateway API documentation](https://gateway-api.sigs.k8s.io/). This resource is not just an overview; it’s your guide to mastering advanced functionalities that could significantly enhance your service deployment capabilities.
### Push the Envelope
Consider pushing your experiments further by testing out advanced routing techniques. Features like path-based routing and request mirroring are potent tools that can offer remarkable flexibility and control in traffic management. The [Gateway API user guides](https://gateway-api.sigs.k8s.io/guides/getting-started/) provide valuable insights to help you incorporate these capabilities effectively.
### Proceed with Caution
Just a word of caution: despite the excitement that comes with experimenting, remember that the kind setup is intended solely for development purposes. Don’t venture into production with it. Transitioning to a strong, reliable Gateway API setup is not just recommended; it’s necessary for handling real-world workloads effectively.
Understanding this distinction is vital. As you shift from learning to deploying, ensure your strategies and implementations are solidified in a context that can support your growth and the demands of the production environment.