oc-observe - Man Page

Observe changes to resources and react to them (experimental)


Synopsis

oc observe [Options]


Description

Observe changes to resources and take action on them

This command assists in building scripted reactions to changes that occur in Kubernetes or OpenShift resources. This is frequently referred to as a 'controller' in Kubernetes and acts to ensure particular conditions are maintained. On startup, observe will list all of the resources of a particular type and execute the provided script on each one. Observe watches the server for changes, and will reexecute the script for each update.

Observe works best for problems of the form "for every resource X, make sure Y is true". Some examples of ways observe can be used include:

· Ensure every namespace has a quota or limit range object

· Ensure every service is registered in DNS by making calls to a DNS API

· Send an email alert whenever a node reports 'NotReady'

· Watch for the 'FailedScheduling' event and write an IRC message

· Dynamically provision persistent volumes when a new PVC is created

· Delete pods that have reached successful completion after a period of time

The simplest pattern is maintaining an invariant on an object - for instance, "every namespace should have an annotation that indicates its owner". If the object is deleted no reaction is necessary. A variation on that pattern is creating another object: "every namespace should have a quota object based on the resources allowed for an owner".

$ cat set_owner.sh
 if [[ "$(oc get namespace "$1" --template='{{ .metadata.annotations.owner }}')" == "" ]]; then
   oc annotate namespace "$1" owner=bob
 fi

$ oc observe namespaces -- ./set_owner.sh

The set_owner.sh script is invoked with a single argument (the namespace name) for each namespace. This simple script ensures that any namespace without the "owner" annotation gets one set, but preserves any existing value.

The next common controller pattern is provisioning - making changes in an external system to match the state of a Kubernetes resource. These scripts need to account for deletions that may take place while the observe command is not running. You can provide the list of known objects via the --names command, which should return a newline-delimited list of names or namespace/name pairs. Your command will be invoked whenever observe checks the latest state on the server - any resources returned by --names that are not found on the server will be passed to your --delete command.

For example, you may wish to ensure that every node that is added to Kubernetes is added to your cluster inventory along with its IP:

$ cat add_to_inventory.sh
 echo "$1 $2" >> inventory
 sort -u inventory -o inventory

$ cat remove_from_inventory.sh
 grep -vE "^$1 " inventory > /tmp/newinventory
 mv -f /tmp/newinventory inventory

$ cat known_nodes.sh
 touch inventory
 cut -f 1-1 -d ' ' inventory

$ oc observe nodes -a '{ .status.addresses[0].address }' \
   --names ./known_nodes.sh \
   --delete ./remove_from_inventory.sh \
   -- ./add_to_inventory.sh

If you stop the observe command and then delete a node, when you launch observe again the contents of inventory will be compared to the list of nodes from the server, and any node in the inventory file that no longer exists will trigger a call to remove_from_inventory.sh with the name of the node.
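That startup comparison can be pictured with plain shell tools: any name known locally but absent from the server's list is exactly what gets handed to the --delete command. A self-contained sketch (the file names and node names are made up):

```shell
# Simulate observe's startup delete detection. 'known.txt' stands in for
# the output of the --names command; 'server.txt' for the names the
# server currently reports.
printf 'node-a\nnode-b\nnode-c\n' | sort > known.txt
printf 'node-a\nnode-c\n' | sort > server.txt

# comm -23 prints lines only in the first file - the stale entries that
# observe would pass, one at a time, to the --delete command.
comm -23 known.txt server.txt   # prints: node-b
```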

Important: when handling deletes, the previous state of the object may not be available and only the name/namespace of the object will be passed to your --delete command as arguments (all custom arguments are omitted).

More complicated interactions build on the two examples above - your inventory script could make a call to allocate storage on your infrastructure as a service, or register node names in DNS, or set complex firewalls. The more complex your integration, the more important it is to record enough data in the remote system that you can identify when resources on either side are deleted.
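As a sketch of that record-keeping, the reaction command can store whatever identifier the external system returns next to the Kubernetes name, so the --delete handler can find and release it later. The disk- identifiers, the stand-in API calls, and the prov-inventory file name are all illustrative:

```shell
# provision: the reaction command; records the external identifier
# returned by the (stand-in) allocation call alongside the resource name.
provision() {
  remote_id="disk-$1"                      # stand-in for a real API call
  printf '%s %s\n' "$1" "$remote_id" >> prov-inventory
}

# deprovision: the --delete command; looks up the recorded identifier,
# releases it, then drops the line from the inventory.
deprovision() {
  remote_id=$(awk -v n="$1" '$1 == n { print $2 }' prov-inventory)
  echo "releasing $remote_id"              # stand-in for the teardown call
  awk -v n="$1" '$1 != n' prov-inventory > prov-inventory.new &&
    mv prov-inventory.new prov-inventory
}

provision node-a
deprovision node-a                         # prints: releasing disk-node-a
```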



Options

If true, list the requested object(s) across all projects. Project in current context is ignored.

-a,  --argument=""

Template for the arguments to be passed to each command in the format defined by --output.
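Based on the examples on this page (the set_owner.sh and inventory scripts above), the script receives the object's name arguments first, followed by one value per template; for a namespaced resource we assume that means namespace, then name. A stub makes the calling convention visible:

```shell
# Stub reaction command for a namespaced resource, assuming arguments
# arrive as: namespace, name, then one value per -a template.
handle_service() {
  echo "service $1/$2 has clusterIP $3"
}

# What 'oc observe services -a "{ .spec.clusterIP }" -- ./handle' might
# invoke for one service (the values here are illustrative):
handle_service default kubernetes 10.0.0.1
# prints: service default/kubernetes has clusterIP 10.0.0.1
```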

-d,  --delete=""

A command to run when resources are deleted. Specify multiple times to add arguments.


Exit with status code 0 after the provided duration, optional.


The name of an interface to listen on to expose metrics and health checking.


Exit after this many errors have been detected. May be set to -1 for no maximum.


A command that will list all of the currently known names, optional. Specify multiple times to add arguments. Use to get notifications when objects are deleted.


If true, skip printing information about each event prior to executing the command.


The name of an env var to serialize the object to when calling the command, optional.
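A sketch of consuming that variable inside a reaction script, assuming the flag was set to OBJECT; the variable name, the sample JSON, and the sed extraction are illustrative stand-ins:

```shell
# Inside the reaction script, $OBJECT would hold the serialized object.
# A stand-in value is assigned here so the snippet runs on its own.
OBJECT='{"metadata":{"name":"example","namespace":"default"}}'

# Pull a field out without another API round-trip (jq would be more
# robust; sed keeps the sketch dependency-free).
name=$(printf '%s' "$OBJECT" | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
echo "object name: $name"   # prints: object name: example
```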


If true, exit with a status code 0 after all current objects have been processed.


Controls the template type used for the --argument flags. Supported values are gotemplate and jsonpath.


If true, on exit write all metrics to stdout.


When non-zero, periodically reprocess every item from the server as a Sync event. Use to ensure external systems are kept up to date.


The number of times to retry a failing command before continuing.


If any command returns this exit code, retry up to --retry-count times.
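A reaction script can reserve a dedicated exit code for failures worth retrying; the code 2 and the readiness-marker file below are arbitrary choices for this sketch:

```shell
# Returns 2 (a transient failure worth retrying) until the stand-in
# readiness marker exists; observe configured to retry on code 2 would
# re-run the command up to its retry limit before giving up.
process() {
  if [ ! -e ready.marker ]; then
    echo "not ready, requesting retry" >&2
    return 2
  fi
  echo "processed $1"
}

process item-1 || echo "exit=$?"   # prints: exit=2
touch ready.marker
process item-1                     # prints: processed item-1
```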


If true, return an error on any field or map key referenced in a template that is missing from the object.


The name of an env var to set with the type of event received ('Sync', 'Updated', 'Deleted', 'Added') to the reaction command or --delete.
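For example, if this flag were set to EVENT, a single script could serve as both the reaction and --delete command by branching on the event type (the variable name and the echoed actions are illustrative):

```shell
# react: one script for both the reaction and --delete commands,
# branching on the event type observe exports (assumed to be in $EVENT).
react() {
  case "$EVENT" in
    Added|Sync) echo "ensure $1 exists downstream" ;;
    Updated)    echo "refresh $1 downstream" ;;
    Deleted)    echo "remove $1 downstream" ;;
    *)          echo "unknown event '$EVENT' for $1" >&2; return 1 ;;
  esac
}

EVENT=Deleted react my-namespace   # prints: remove my-namespace downstream
```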

Options Inherited from Parent Commands


Allow a SignatureVerifier to use keys which are technically non-compliant with RFC6962.


log to standard error as well as files


Max number of application metrics to store (per container)


Username to impersonate for the operation


Group to impersonate for the operation; this flag can be repeated to specify multiple groups.


Path to the file containing Azure container registry configuration information.


Comma-separated list of files to check for boot-id. Use the first one that exists.


Default HTTP cache directory


Path to a cert file for the certificate authority


Path to a client certificate file for TLS


Path to a client key file for TLS


CIDRs opened in GCE firewall for LB traffic proxy health checks


The name of the kubeconfig cluster to use


location of the container hints file


containerd endpoint


The name of the kubeconfig context to use


Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.


Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.


docker endpoint


use TLS to connect to docker


path to trusted CA


path to client certificate


path to private key


a comma-separated list of environment variable keys that needs to be collected for docker containers


Only report docker containers in addition to root stats


DEPRECATED: docker root is read from docker info (this is a fallback, default: /var/lib/docker)


Whether to enable cpu load reader


Max length of time for which to store events (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is a duration. Default is applied to all non-specified event types


Max number of events to store (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is an integer. Default is applied to all non-specified event types


Interval between global housekeepings


Interval between container housekeepings


If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure


Path to the kubeconfig file to use for CLI requests.


Maximum number of seconds between log flushes


when logging hits line file:N, emit a stack trace


Whether to log the usage of the cAdvisor container


If non-empty, write log files in this directory


log to standard error instead of files


Comma-separated list of files to check for machine-id. Use the first one that exists.


Require server version to match client version

-n,  --namespace=""

If present, the namespace scope for this CLI request


The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.

-s,  --server=""

The address and port of the Kubernetes API server


logs at or above this threshold go to stderr


Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction


database name


database host:port


database password


use secure connection with database


table name


database username


Bearer token for authentication to the API server


The name of the kubeconfig user to use

-v,  --v=0

log level for V logs


Print version information and quit


comma-separated list of pattern=N settings for file-filtered logging


Examples

  # Observe changes to services
  oc observe services

  # Observe changes to services, including the clusterIP, and invoke a script for each
  oc observe services -a '{ .spec.clusterIP }' -- register_dns.sh

See Also



History

June 2016, Ported from the Kubernetes man-doc generator

Referenced By


Openshift CLI User Manuals June 2016