Fix capitalization of Kubernetes in the documentation.
a-robinson committed Jul 20, 2015
1 parent 7536db6 commit acd1bed
Showing 61 changed files with 149 additions and 149 deletions.
4 changes: 2 additions & 2 deletions docs/README.md
@@ -40,8 +40,8 @@ Documentation for other releases can be found at
 a Kubernetes cluster or administering it.
 
 * The [Developer guide](devel/README.md) is for anyone wanting to write
-  programs that access the kubernetes API, write plugins or extensions, or
-  modify the core code of kubernetes.
+  programs that access the Kubernetes API, write plugins or extensions, or
+  modify the core code of Kubernetes.
 
 * The [Kubectl Command Line Interface](user-guide/kubectl/kubectl.md) is a detailed reference on
   the `kubectl` CLI.
4 changes: 2 additions & 2 deletions docs/admin/accessing-the-api.md
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 
 # Configuring APIserver ports
 
-This document describes what ports the kubernetes apiserver
+This document describes what ports the Kubernetes apiserver
 may serve on and how to reach them. The audience is
 cluster administrators who want to customize their cluster
 or understand the details.
@@ -44,7 +44,7 @@ in [Accessing the cluster](../user-guide/accessing-the-cluster.md).
 
 ## Ports and IPs Served On
 
-The Kubernetes API is served by the Kubernetes APIServer process. Typically,
+The Kubernetes API is served by the Kubernetes apiserver process. Typically,
 there is one of these running on a single kubernetes-master node.
 
 By default the Kubernetes APIserver serves HTTP on 2 ports:
2 changes: 1 addition & 1 deletion docs/admin/authentication.md
@@ -69,7 +69,7 @@ with a value of `Basic BASE64ENCODEDUSER:PASSWORD`.
 We plan for the Kubernetes API server to issue tokens
 after the user has been (re)authenticated by a *bedrock* authentication
 provider external to Kubernetes. We plan to make it easy to develop modules
-that interface between kubernetes and a bedrock authentication provider (e.g.
+that interface between Kubernetes and a bedrock authentication provider (e.g.
 github.com, google.com, enterprise directory, kerberos, etc.)
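The `Basic BASE64ENCODEDUSER:PASSWORD` header mentioned in the hunk above is standard HTTP basic authentication; a minimal sketch of how such a header is built (the credentials here are made-up examples, not anything from the commit):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # HTTP Basic auth: base64-encode "user:password" and prefix with "Basic "
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("admin", "secret"))  # Basic YWRtaW46c2VjcmV0
```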
4 changes: 2 additions & 2 deletions docs/admin/cluster-troubleshooting.md
@@ -75,7 +75,7 @@ Root causes:
 - Network partition within cluster, or between cluster and users
 - Crashes in Kubernetes software
 - Data loss or unavailability of persistent storage (e.g. GCE PD or AWS EBS volume)
-- Operator error, e.g. misconfigured kubernetes software or application software
+- Operator error, e.g. misconfigured Kubernetes software or application software
 
 Specific scenarios:
 - Apiserver VM shutdown or apiserver crashing
@@ -127,7 +127,7 @@ Mitigations:
 - Action: Snapshot apiserver PDs/EBS-volumes periodically
 - Mitigates: Apiserver backing storage lost
 - Mitigates: Some cases of operator error
-- Mitigates: Some cases of kubernetes software fault
+- Mitigates: Some cases of Kubernetes software fault
 
 - Action: use replication controller and services in front of pods
 - Mitigates: Node shutdown
12 changes: 6 additions & 6 deletions docs/admin/dns.md
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 
 # DNS Integration with Kubernetes
 
-As of kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
+As of Kubernetes 0.8, DNS is offered as a [cluster add-on](http://releases.k8s.io/HEAD/cluster/addons/README.md).
 If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
 configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
 
@@ -42,7 +42,7 @@ assigned a DNS name. By default, a client Pod's DNS search list will
 include the Pod's own namespace and the cluster's default domain. This is best
 illustrated by example:
 
-Assume a Service named `foo` in the kubernetes namespace `bar`. A Pod running
+Assume a Service named `foo` in the Kubernetes namespace `bar`. A Pod running
 in namespace `bar` can look up this service by simply doing a DNS query for
 `foo`. A Pod running in namespace `quux` can look up this service by doing a
 DNS query for `foo.bar`.
@@ -53,14 +53,14 @@ supports forward lookups (A records) and service lookups (SRV records).
 ## How it Works
 
 The running DNS pod holds 3 containers - skydns, etcd (a private instance which skydns uses),
-and a kubernetes-to-skydns bridge called kube2sky. The kube2sky process
-watches the kubernetes master for changes in Services, and then writes the
+and a Kubernetes-to-skydns bridge called kube2sky. The kube2sky process
+watches the Kubernetes master for changes in Services, and then writes the
 information to etcd, which skydns reads. This etcd instance is not linked to
-any other etcd clusters that might exist, including the kubernetes master.
+any other etcd clusters that might exist, including the Kubernetes master.
 
 ## Issues
 
-The skydns service is reachable directly from kubernetes nodes (outside
+The skydns service is reachable directly from Kubernetes nodes (outside
 of any container) and DNS resolution works if the skydns service is targeted
 explicitly. However, nodes are not configured to use the cluster DNS service or
 to search the cluster's DNS domain by default. This may be resolved at a later
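The naming scheme in the DNS hunk above (`foo` within namespace `bar`, `foo.bar` from elsewhere) can be sketched as a tiny helper; the `cluster.local` default domain here is an illustrative assumption, since the cluster domain is configurable:

```python
def service_dns_name(service: str, namespace: str,
                     cluster_domain: str = "cluster.local") -> str:
    # A Pod in the same namespace can query just `service`; from another
    # namespace the qualified form is needed, e.g. "foo.bar".
    return f"{service}.{namespace}.{cluster_domain}"

print(service_dns_name("foo", "bar"))  # foo.bar.cluster.local
```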
4 changes: 2 additions & 2 deletions docs/admin/kube-apiserver.md
@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
 
 
-The kubernetes API server validates and configures data
+The Kubernetes API server validates and configures data
 for the api objects which include pods, services, replicationcontrollers, and
 others. The API Server services REST operations and provides the frontend to the
 cluster's shared state through which all other components interact.
@@ -80,7 +80,7 @@ cluster's shared state through which all other components interact.
 --kubelet_port=0: Kubelet port
 --kubelet_timeout=0: Timeout for kubelet operations
 --long-running-request-regexp="(/|^)((watch|proxy)(/|$)|(logs|portforward|exec)/?$)": A regular expression matching long running requests which should be excluded from maximum inflight request handling.
---master-service-namespace="": The namespace from which the kubernetes master services should be injected into pods
+--master-service-namespace="": The namespace from which the Kubernetes master services should be injected into pods
 --max-requests-inflight=400: The maximum number of requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit.
 --min-request-timeout=1800: An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
 --old-etcd-prefix="": The previous prefix for all resource paths in etcd, if any.
2 changes: 1 addition & 1 deletion docs/admin/kube-controller-manager.md
@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
 
 
-The kubernetes controller manager is a daemon that embeds
+The Kubernetes controller manager is a daemon that embeds
 the core control loops shipped with Kubernetes. In applications of robotics and
 automation, a control loop is a non-terminating loop that regulates the state of
 the system. In Kubernetes, a controller is a control loop that watches the shared
2 changes: 1 addition & 1 deletion docs/admin/kube-proxy.md
@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
 
 
-The kubernetes network proxy runs on each node. This
+The Kubernetes network proxy runs on each node. This
 reflects services as defined in the Kubernetes API on each node and can do simple
 TCP,UDP stream forwarding or round robin TCP,UDP forwarding across a set of backends.
 Service cluster ips and ports are currently found through Docker-links-compatible
2 changes: 1 addition & 1 deletion docs/admin/kube-scheduler.md
@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ### Synopsis
 
 
-The kubernetes scheduler is a policy-rich, topology-aware,
+The Kubernetes scheduler is a policy-rich, topology-aware,
 workload-specific function that significantly impacts availability, performance,
 and capacity. The scheduler needs to take into account individual and collective
 resource requirements, quality of service requirements, hardware/software/policy
2 changes: 1 addition & 1 deletion docs/admin/kubelet.md
@@ -91,7 +91,7 @@ HTTP server: The kubelet can also listen for HTTP and respond to a simple API
 --kubeconfig=: Path to a kubeconfig file, specifying how to authenticate to API server (the master location is set by the api-servers flag).
 --low-diskspace-threshold-mb=0: The absolute free disk space, in MB, to maintain. When disk space falls below this threshold, new pods would be rejected. Default: 256
 --manifest-url="": URL for accessing the container manifest
---master-service-namespace="": The namespace from which the kubernetes master services should be injected into pods
+--master-service-namespace="": The namespace from which the Kubernetes master services should be injected into pods
 --max-pods=40: Number of Pods that can run on this Kubelet.
 --maximum-dead-containers=0: Maximum number of old instances of a containers to retain globally. Each container takes up some disk space. Default: 100.
 --maximum-dead-containers-per-container=0: Maximum number of old instances of a container to retain per container. Each container takes up some disk space. Default: 2.
4 changes: 2 additions & 2 deletions docs/admin/multi-cluster.md
@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 
 # Considerations for running multiple Kubernetes clusters
 
-You may want to set up multiple kubernetes clusters, both to
+You may want to set up multiple Kubernetes clusters, both to
 have clusters in different regions to be nearer to your users, and to tolerate failures and/or invasive maintenance.
 This document describes some of the issues to consider when making a decision about doing so.
 
@@ -67,7 +67,7 @@ Reasons to have multiple clusters include:
 
 ## Selecting the right number of clusters
 
-The selection of the number of kubernetes clusters may be a relatively static choice, only revisited occasionally.
+The selection of the number of Kubernetes clusters may be a relatively static choice, only revisited occasionally.
 By contrast, the number of nodes in a cluster and the number of pods in a service may be change frequently according to
 load and growth.
4 changes: 2 additions & 2 deletions docs/admin/node.md
@@ -125,7 +125,7 @@ number of pods that can be scheduled onto the node.
 
 ### Node Info
 
-General information about the node, for instance kernel version, kubernetes version
+General information about the node, for instance kernel version, Kubernetes version
 (kubelet version, kube-proxy version), docker version (if used), OS name.
 The information is gathered by Kubelet from the node.
 
@@ -231,7 +231,7 @@ Normally, nodes register themselves and report their capacity when creating the
 you are doing [manual node administration](#manual-node-administration), then you need to set node
 capacity when adding a node.
 
-The kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
+The Kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
 checks that the sum of the limits of containers on the node is no greater than than the node capacity. It
 includes all containers started by kubelet, but not containers started directly by docker, nor
 processes not in containers.
2 changes: 1 addition & 1 deletion docs/admin/resource-quota.md
@@ -63,7 +63,7 @@ Neither contention nor changes to quota will affect already-running pods.
 
 ## Enabling Resource Quota
 
-Resource Quota support is enabled by default for many kubernetes distributions. It is
+Resource Quota support is enabled by default for many Kubernetes distributions. It is
 enabled when the apiserver `--admission_control=` flag has `ResourceQuota` as
 one of its arguments.
4 changes: 2 additions & 2 deletions docs/admin/salt.md
@@ -95,15 +95,15 @@ Key | Value
 ------------- | -------------
 `api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
 `cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
-`cloud` | (Optional) Which IaaS platform is used to host kubernetes, *gce*, *azure*, *aws*, *vagrant*
+`cloud` | (Optional) Which IaaS platform is used to host Kubernetes, *gce*, *azure*, *aws*, *vagrant*
 `etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
 `hostnamef` | (Optional) The full host name of the machine, i.e. uname -n
 `node_ip` | (Optional) The IP address to use to address this node
 `hostname_override` | (Optional) Mapped to the kubelet hostname_override
 `network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
 `networkInterfaceName` | (Optional) Networking interface to use to bind addresses, default value *eth0*
 `publicAddressOverride` | (Optional) The IP address the kube-apiserver should use to bind against for external read-only access
-`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-minion. Depending on the role, the Salt scripts will provision different resources on the machine.
+`roles` | (Required) 1. `kubernetes-master` means this machine is the master in the Kubernetes cluster. 2. `kubernetes-pool` means this machine is a kubernetes-minion. Depending on the role, the Salt scripts will provision different resources on the machine.
 
 These keys may be leveraged by the Salt sls files to branch behavior.
2 changes: 1 addition & 1 deletion docs/design/access.md
@@ -200,7 +200,7 @@ Namespaces versus userAccount vs Labels:
 
 Goals for K8s authentication:
 - Include a built-in authentication system with no configuration required to use in single-user mode, and little configuration required to add several user accounts, and no https proxy required.
-- Allow for authentication to be handled by a system external to Kubernetes, to allow integration with existing to enterprise authorization systems. The kubernetes namespace itself should avoid taking contributions of multiple authorization schemes. Instead, a trusted proxy in front of the apiserver can be used to authenticate users.
+- Allow for authentication to be handled by a system external to Kubernetes, to allow integration with existing to enterprise authorization systems. The Kubernetes namespace itself should avoid taking contributions of multiple authorization schemes. Instead, a trusted proxy in front of the apiserver can be used to authenticate users.
 - For organizations whose security requirements only allow FIPS compliant implementations (e.g. apache) for authentication.
 - So the proxy can terminate SSL, and isolate the CA-signed certificate from less trusted, higher-touch APIserver.
 - For organizations that already have existing SaaS web services (e.g. storage, VMs) and want a common authentication portal.
2 changes: 1 addition & 1 deletion docs/design/clustering.md
@@ -36,7 +36,7 @@ Documentation for other releases can be found at
 
 ## Overview
 
-The term "clustering" refers to the process of having all members of the kubernetes cluster find and trust each other. There are multiple different ways to achieve clustering with different security and usability profiles. This document attempts to lay out the user experiences for clustering that Kubernetes aims to address.
+The term "clustering" refers to the process of having all members of the Kubernetes cluster find and trust each other. There are multiple different ways to achieve clustering with different security and usability profiles. This document attempts to lay out the user experiences for clustering that Kubernetes aims to address.
 
 Once a cluster is established, the following is true:
2 changes: 1 addition & 1 deletion docs/design/expansion.md
@@ -94,7 +94,7 @@ script that sets up the environment and runs the command. This has a number of
 
 1. Solutions that require a shell are unfriendly to images that do not contain a shell
 2. Wrapper scripts make it harder to use images as base images
-3. Wrapper scripts increase coupling to kubernetes
+3. Wrapper scripts increase coupling to Kubernetes
 
 Users should be able to do the 80% case of variable expansion in command without writing a wrapper
 script or adding a shell invocation to their containers' commands.
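The expansion design discussed in the hunk above avoids shell wrappers by substituting variable references directly in the command. A minimal sketch of that kind of `$(VAR)` substitution, purely illustrative and not the actual Kubernetes implementation:

```python
import re

def expand(command: str, env: dict) -> str:
    # Replace each $(VAR) with its value from env; leave unknown
    # references untouched so no shell is ever needed.
    return re.sub(r"\$\((\w+)\)",
                  lambda m: env.get(m.group(1), m.group(0)),
                  command)

print(expand("echo $(GREETING) world", {"GREETING": "hello"}))  # echo hello world
```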
4 changes: 2 additions & 2 deletions docs/design/secrets.md
@@ -81,7 +81,7 @@ Goals of this design:
    the kubelet implement some reserved behaviors based on the types of secrets the service account
    consumes:
    1. Use credentials for a docker registry to pull the pod's docker image
-   2. Present kubernetes auth token to the pod or transparently decorate traffic between the pod
+   2. Present Kubernetes auth token to the pod or transparently decorate traffic between the pod
    and master service
 4. As a user, I want to be able to indicate that a secret expires and for that secret's value to
    be rotated once it expires, so that the system can help me follow good practices
@@ -112,7 +112,7 @@ other system components to take action based on the secret's type.
 #### Example: service account consumes auth token secret
 
 As an example, the service account proposal discusses service accounts consuming secrets which
-contain kubernetes auth tokens. When a Kubelet starts a pod associated with a service account
+contain Kubernetes auth tokens. When a Kubelet starts a pod associated with a service account
 which consumes this type of secret, the Kubelet may take a number of actions:
 
 1. Expose the secret in a `.kubernetes_auth` file in a well-known location in the container's
6 changes: 3 additions & 3 deletions docs/design/security.md
@@ -55,14 +55,14 @@ While Kubernetes today is not primarily a multi-tenant system, the long term evo
 
 We define "user" as a unique identity accessing the Kubernetes API server, which may be a human or an automated process. Human users fall into the following categories:
 
-1. k8s admin - administers a kubernetes cluster and has access to the underlying components of the system
+1. k8s admin - administers a Kubernetes cluster and has access to the underlying components of the system
 2. k8s project administrator - administrates the security of a small subset of the cluster
-3. k8s developer - launches pods on a kubernetes cluster and consumes cluster resources
+3. k8s developer - launches pods on a Kubernetes cluster and consumes cluster resources
 
 Automated process users fall into the following categories:
 
 1. k8s container user - a user that processes running inside a container (on the cluster) can use to access other cluster resources independent of the human users attached to a project
-2. k8s infrastructure user - the user that kubernetes infrastructure components use to perform cluster functions with clearly defined roles
+2. k8s infrastructure user - the user that Kubernetes infrastructure components use to perform cluster functions with clearly defined roles
 
 
 ### Description of roles