# Kubernetes Guide

This guide explains the main sections available in the Academy. For a deeper understanding, it's important to explore the external resources and references mentioned.

## Containers

This section covers containers: what they are, how they work, and why they're useful in modern software development.

### Containers vs. Virtual Machines (VMs)

It’s helpful to understand how containers differ from virtual machines (VMs) to see why containers are a better fit for certain tasks.

#### What is a VM?

A virtual machine (VM) is like a computer within a computer. It acts as if it’s a complete machine on its own, allowing different operating systems to run on one physical computer.

Benefits of VMs include:
- **Isolation:** Each VM runs separately, so they don’t interfere with each other. This is sometimes referred to as "sandboxing", because each VM has its own isolated environment.
- **Portability:** VMs can easily be moved from one machine to another.
- **Flexible Resources:** VMs can adapt to the needs of different applications.

#### What is a Container?

A container is a lighter, faster way to run software. It packages an app and everything it needs to run, but it shares the operating system with other containers, making it more efficient than VMs.

Key features of containers:
- **Lightweight:** They use fewer resources than VMs.
- **Fast:** Containers start and stop quickly.
- **Isolated:** Containers keep apps from conflicting with each other.

Containers are faster than VMs because they carry far less overhead:

- **VMs** need to run a full operating system (OS) inside each VM. Each VM includes its own set of libraries, software, and the OS itself. This means the computer has to run multiple full OSs at the same time, which takes up a lot of memory and processing power.

- **Containers**, on the other hand, share the host computer's OS. Instead of each container having its own OS, they only include the code and files needed for the application to run. This makes containers much smaller and faster because they don’t need all the extra overhead that VMs do.

## Docker

### What is Docker?

Docker is a popular tool that helps developers easily create and manage containers. It bundles everything you need to work with containers, such as an image build system, a registry client for pushing and pulling images, and a container runtime. Although containers can be used without Docker, it simplifies the process.

You can learn more about Docker [here](https://docs.docker.com/get-started/).

#### Important Docker Components

##### Images

A Docker image is a pre-packaged "snapshot" that includes everything needed to run an application in a container, such as code, system libraries, and tools.

##### Dockerfile

Docker images are created using a special file called a `Dockerfile`. The Dockerfile contains a set of instructions that define the environment and dependencies needed to run your application. For example, you might specify a base image (like a specific version of Linux or Node.js) and then add layers for copying code or installing packages. You can read more about Dockerfiles [here](https://docs.docker.com/reference/dockerfile/).

**Example of a simple Dockerfile:**
```dockerfile
# Use an official Node.js runtime as a base image
FROM node:14

# Set the working directory in the container
WORKDIR /app

# Copy the package.json file and install dependencies
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the application port
EXPOSE 8080

# Start the app
CMD ["node", "server.js"]
```

##### Layers

Docker images are built in layers. Each instruction in a Dockerfile creates a new layer in the image. Layers are reused (cached) to improve efficiency. For example, if you change your application code but not the dependencies, Docker will only rebuild the layer with the new code, making the process faster.

##### Containers

Containers are instances of Docker images. They can be started, stopped, and destroyed as needed.

### What is Docker Compose?

Docker Compose is a tool that helps manage multiple containers at once, making it easier to run complex applications without the need for long commands. It uses a YAML configuration file (`docker-compose.yml`) to define all the services your application requires, such as databases, web servers, or background workers.

An example `docker-compose.yml` file might look like this:
```yaml
version: '3'
services:
  web:
    image: my-app
    ports:
      - "8080:8080"
    volumes:
      - .:/app
  redis:
    image: "redis:alpine"
```
In this example, two services are defined: `web`, which uses a custom image (`my-app`), and `redis`, which uses the official Redis image. Docker Compose helps you start all these services with a single command: `docker-compose up`.

Learn more about Docker Compose [here](https://docs.docker.com/compose/) and [getting started](https://docs.docker.com/compose/gettingstarted/). You can also refer to the [Compose file reference](https://docs.docker.com/reference/compose-file/).

## Docker Hub

Docker Hub is an online service where developers can share and download container images. For example, the Frank!Framework has its own section on Docker Hub where you can find its official images: [Frank!Framework Docker Hub](https://hub.docker.com/u/frankframework).

## Kubernetes

### What is Kubernetes?

Kubernetes (K8s) is a system that automatically manages and scales apps that run in containers, ensuring smooth operation even as demand increases. It provides features like self-healing, automated rollouts, and service discovery.

You can learn more about Kubernetes from the [official overview](https://kubernetes.io/docs/concepts/overview/).

### How is Kubernetes Different from Docker Compose?

While Docker Compose works well for running multiple containers on one machine, Kubernetes is designed for managing large-scale apps across many machines (nodes). It also provides extra features like automatic scaling, load balancing, and failover handling. With Docker Compose, you configure apps in a single file, but Kubernetes typically requires more complex configurations due to its distributed nature.

For example, when you want to mount a file in Docker Compose, you simply declare a volume, which maps a host path into the container. In Kubernetes, you need to create a `ConfigMap`, define it as a `volume` in the `Deployment`, and then add a `volumeMount` to the container. This more detailed configuration allows for flexibility in choosing between various storage systems such as Secrets, Persistent Volume Claims, dynamically provisioned volumes, NFS, local storage, and so on.
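
As a minimal sketch of that Kubernetes pattern (the names, file contents, and image below are illustrative, not taken from this guide), mounting a file from a `ConfigMap` could look like this:

```yaml
# ConfigMap holding the file contents
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  application.properties: |
    greeting=hello
---
# Deployment that mounts the ConfigMap as a file inside the container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          volumeMounts:
            - name: config
              mountPath: /app/config   # file appears at /app/config/application.properties
      volumes:
        - name: config
          configMap:
            name: my-config
```

The extra indirection is what lets you later swap the `configMap` volume for a Secret, a Persistent Volume Claim, or another storage backend without touching the container itself.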

### Pods

Whereas Docker Compose uses a single container as its smallest deployable unit, Kubernetes uses Pods. Pods are the smallest deployable units that you can create and manage in Kubernetes.

A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers.

Pods are the most fundamental Kubernetes objects, and you can read more about them [here](https://kubernetes.io/docs/concepts/workloads/pods/pod/).
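
A minimal Pod manifest might look like the sketch below (the name and image are placeholders); in practice, Pods are usually created indirectly through higher-level resources such as Deployments:

```yaml
# A minimal Pod running a single nginx container
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:alpine
      ports:
        - containerPort: 80
```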

### Networking in Kubernetes

When working with Kubernetes, networking plays a significant role. You are often working across multiple machines, and sometimes even across different zones or sites. Even when running Kubernetes on a single machine, the same networking principles apply, creating an emulated internal network that can feel cumbersome at times. Networking concepts like service discovery, DNS resolution, and load balancing are handled within Kubernetes, but you still need to manage how pods communicate with each other and external systems.
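
For example, a `Service` is the Kubernetes object that gives a set of Pods a stable DNS name and load-balances traffic across them. A minimal sketch (the names and ports are illustrative) could look like this:

```yaml
# Exposes Pods labeled app=my-app inside the cluster under the DNS name "my-app-service"
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app        # matches the Pods to route traffic to
  ports:
    - port: 80         # port other Pods use to reach the Service
      targetPort: 8080 # port the container actually listens on
```

Other Pods in the same namespace can then reach the application at `http://my-app-service:80` without knowing which node its Pods run on.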

To dive deeper into Kubernetes networking concepts, you can visit the [Networking in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/) documentation.

### What is a Kubernetes Cluster?

A Kubernetes cluster is a group of machines (real or virtual) that work together to run your applications inside containers. These machines, or nodes, provide the necessary computing power. The cluster also includes networking, storage, and other components needed to run applications efficiently across different nodes.

### How Kubernetes Works with Programs

Kubernetes doesn't perform many functions by itself; it mainly manages and monitors the lifecycle of resources. For example, when you create an Ingress to make a service accessible via a URL, you are actually creating a resource with an `ingressClass` specification. This `ingressClass` tells Kubernetes which ingress controller should handle the Ingress. The ingress controller is itself an application, often running within the cluster (for example, NGINX), that performs the actual work.
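
As an illustrative sketch (the resource names, host, and `nginx` class name are assumptions, not taken from a specific setup), such an Ingress could look like this:

```yaml
# Handled by whichever ingress controller registered the "nginx" IngressClass
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```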

In some cases, Kubernetes resources can also use infrastructure from a cloud platform. For example, when using Azure’s integrated gateway, the ingress isn't handled by an application inside the cluster but rather through Azure’s infrastructure. This shows how Kubernetes acts as a management tool for various external services and applications.

### Differences Between AKS, EKS, and On-Prem

- **AKS (Azure Kubernetes Service):** Microsoft’s managed Kubernetes service that simplifies Kubernetes operations in the cloud.
- **EKS (Elastic Kubernetes Service):** Amazon’s managed Kubernetes service that integrates seamlessly with AWS.
- **On-Prem:** Running Kubernetes on your own servers, giving you full control but requiring more management, particularly around hardware and networking.

### Working with Kubernetes Resources

Kubernetes operates using YAML configuration files that describe how your containers should behave. This could include scaling rules, storage solutions, and networking configurations. Everything you define in a Kubernetes file becomes a resource within the cluster. Resources like Deployments, Services, Ingresses, ConfigMaps, and Secrets are used to manage different aspects of containerized applications.
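
As a small illustration (the names, image, and resource values are made up), a Deployment that asks the cluster to keep three replicas running with bounded CPU and memory could look like this:

```yaml
# Deployment asking Kubernetes to keep three replicas of the app running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```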

For an in-depth overview of Kubernetes resources, you can explore the [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/).

#### Viewing Resources

Kubernetes can sometimes make it challenging to get an overview of all resources, as `kubectl` lists resources separately. Tools like the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) or [Lens](https://k8slens.dev/) provide a more user-friendly overview of resources, making it easier to manage and visualize the entire cluster. Lens even allows you to create resources directly from its interface.

### Helm

Helm is a tool that simplifies the process of installing and managing Kubernetes applications by packaging them into charts. Helm Charts contain all the necessary configuration to deploy a given application and can be customized to fit your environment.

You can read more about Helm [here](https://helm.sh/docs/).

#### Helm Charts

Helm Charts use Go templates, which let developers parameterize manifests and generate configuration dynamically. Helm supports sub-charts and library charts, which help modularize large applications, making it easier to include additional services or reuse common configurations.
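
A rough sketch of what this templating looks like (the chart layout and value names below are assumptions, not taken from any specific chart):

```yaml
# values.yaml (illustrative defaults that users of the chart can override)
replicaCount: 2
image:
  repository: my-app
  tag: "1.0"
```

```yaml
# templates/deployment.yaml (excerpt) -- the Go template expressions are filled in
# from values.yaml and the release metadata when the chart is rendered
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-my-app
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-my-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-my-app
    spec:
      containers:
        - name: my-app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Running `helm install my-release ./my-chart` (both names are placeholders) would render these templates with the values and apply the resulting manifests to the cluster.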

Learn about creating Helm charts in the [Helm Chart documentation](https://helm.sh/docs/topics/charts/).

##### Complexity of Helm Charts

Although Helm simplifies many aspects of Kubernetes deployment, it still requires careful configuration, especially when dealing with storage solutions or ingress controllers. Customization is often necessary based on the specific cluster's setup, which requires a good understanding of the cluster's architecture.

For example, charts like those for the Frank!Framework allow for customization by letting you add resources directly, avoiding excessive abstraction while still simplifying deployment. This ensures that if you know how to deploy an application manually, you can apply that same knowledge to the Helm chart without any hidden magic or unfamiliar properties.

### Kustomize

Kustomize is another tool for managing Kubernetes apps. Instead of packaging apps like Helm, it focuses on making small changes to existing configurations. You can explore more about Kustomize [here](https://kustomize.io/).
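
As a sketch (the file names, label, and image override are illustrative), a `kustomization.yaml` that layers small changes on top of existing manifests might look like this:

```yaml
# kustomization.yaml -- reuses existing manifests and overlays small changes
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
commonLabels:
  environment: staging   # added to every resource listed above
images:
  - name: my-app
    newTag: "1.1"        # overrides the image tag without editing deployment.yaml
```

Running `kubectl apply -k .` (or `kustomize build .`) produces and applies the patched manifests.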
