This document helps you determine the cause of an error in the Operator Service for Jenkins®, which is the first step toward fixing it.

Operator logs

Operator Service for Jenkins® provides some useful logs. If you prefer using a CLI, run the following command:

$ kubectl logs <controller-manager-pod-name> -f 

In the OpenShift console you can get logs from the UI. Go to the ‘Pods’ section, choose the controller-manager Pod and check the ‘Logs’ tab.

In the logs, look for the WARNING, ERROR and SEVERE keywords.
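As a quick filter, you can pipe the logs through grep. A minimal sketch (the pod name is a placeholder, and the sample log lines in the here-document are illustrative, not actual Operator output):

```shell
# Against a live cluster you would pipe the Operator logs directly:
#   kubectl logs <controller-manager-pod-name> | grep -E 'WARNING|ERROR|SEVERE'
# The same filter applied to two illustrative log lines:
grep -E 'WARNING|ERROR|SEVERE' <<'EOF'
INFO  reconciling Jenkins
ERROR failed to reconcile
EOF
```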

Jenkins logs

If the container is in a CrashLoopBackOff, the fault lies in Jenkins itself. If the container is in an ImagePullBackOff, the Operator can’t pull the image: check the ‘name’ value in the Jenkins Custom Resource and make sure you have access to the repository. If the Operator keeps terminating your pod with ‘missing-plugins’ messages, the plugins have lost compatibility with the Jenkins image and their versions need to be updated. To learn more about the possible error, check the state of the pod:

$ kubectl -n <namespace-name> get po <name-of-the-jenkins-pod> -w


$ kubectl -n <namespace-name> describe po <name-of-the-jenkins-pod>

and check the logs from the Jenkins container:

$ kubectl -n <namespace-name> logs <jenkins-pod> jenkins-controller -f 

The same can be done through the OpenShift console. Go to the ‘Pods’ section, choose the Jenkins Pod and check the ‘Logs’ tab.
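The ‘missing-plugins’ case above is usually resolved by bumping plugin versions in the Jenkins Custom Resource. A minimal sketch, assuming the jenkins.io/v1 CRD layout used by the Operator (the plugin names and versions here are illustrative only):

```yaml
apiVersion: jenkins.io/v1
kind: Jenkins
metadata:
  name: example
spec:
  master:
    plugins:
      # Bump these versions until they are compatible with the Jenkins image.
      - name: workflow-aggregator
        version: "2.6"
      - name: git
        version: "4.11.0"
```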

Kubernetes Events

Sometimes Events provide a wealth of information, especially when some Kubernetes resource fails to become Ready. To list the Events in your cluster, run:

$ kubectl -n <namespace> get events --sort-by='{.lastTimestamp}'
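Warnings are usually the interesting entries. With a live cluster you can narrow the list with a field selector; a sketch (the sample output in the here-document is illustrative, not real cluster output):

```shell
# Against a live cluster you would run:
#   kubectl -n <namespace> get events --field-selector type=Warning
# The same narrowing shown with awk on a captured sample (column 2 is TYPE):
awk '$2 == "Warning"' <<'EOF'
5m Normal Scheduled pod/jenkins-example Successfully assigned
2m Warning BackOff pod/jenkins-example Back-off restarting failed container
EOF
```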

Quick soft reset

You can always kill the Jenkins pod and wait for it to come up again. All version-controlled configuration will be downloaded again and the rest will be discarded, so chances are the faulty state will be gone with it.

$ kubectl delete pod <jenkins-pod>