# Monitoring & Logging
You can monitor the vcluster either from the host cluster or directly from within the vcluster.
:::info
In order to get node metrics from within the vcluster, vcluster needs RBAC permissions to access them. These permissions are granted to vcluster when synchronization of the real nodes is enabled. See the Nodes documentation page for more details.
:::
## Enabling the metrics server proxy (Recommended)

:::info
This feature requires a working installation of the metrics server on the host cluster.
:::
It's possible to proxy the metrics server of the underlying host cluster and get pod metrics, node metrics, or both, according to the use case. This can be enabled with the following values:
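A minimal sketch of what these values could look like, assuming the `proxy.metricsServer` keys of the vcluster Helm chart (verify the exact keys against the chart version you are using):

```yaml
# Hypothetical values.yaml sketch: enable the metrics server proxy for both
# nodes and pods (keys assumed; verify against your vcluster chart version).
proxy:
  metricsServer:
    nodes:
      enabled: true
    pods:
      enabled: true
```

You can pass such a values file when creating or upgrading the vcluster, for example with `vcluster create my-vcluster -f values.yaml`.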
## Installing metrics server (inside vcluster)

In case the recommended metrics server proxy described above does not fulfil your requirements and you need a dedicated metrics server installation in the vcluster, you can follow this section. Make sure the vcluster has access to the host cluster's nodes. Enabling real nodes synchronization will create the required RBAC permissions.
Install the metrics server into the vcluster via the official installation method.
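For example, the upstream manifest can be applied directly (make sure kubectl points at the vcluster context; pin a release version instead of `latest` for production use):

```bash
# Install metrics-server from the official release manifest (inside the vcluster).
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```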
Wait until the metrics server has started. You should now be able to use `kubectl top pods` and `kubectl top nodes` within the vcluster:
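```bash
# Run both commands against the vcluster context; once the metrics server is
# ready, they list the resource usage of pods and nodes.
kubectl top pods
kubectl top nodes
```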
If you see the error below after installing metrics-server (check k3s#5334 for more information):

Create a file named `metrics_patch.yaml` with the following contents:

and apply the patch with kubectl:
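The patch contents from the linked issue are not reproduced here, but applying such a patch file generally looks like this (the `metrics-server` deployment name and the `kube-system` namespace are assumptions; adjust them to your installation):

```bash
# Apply the patch file to the metrics-server deployment inside the vcluster.
kubectl patch deployment metrics-server -n kube-system --patch-file metrics_patch.yaml
```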
## How does it work?

By default, vcluster will create a service for each node, which redirects traffic that is sent from within the vcluster to the node's kubelet to vcluster itself. This means that if workloads within the vcluster try to scrape node metrics, the traffic reaches vcluster first. vcluster will redirect the incoming request to the host cluster, rewrite the response (pod names, pod namespaces, etc.) and return it to the requester.
## Monitoring the vcluster

vcluster is able to rewrite node stats and metrics, so monitoring a vcluster works similarly to monitoring a regular Kubernetes cluster.
:::info
You need to make sure that vcluster has access to the host cluster's nodes. Enabling real nodes synchronization will create the required RBAC permissions.
:::
Please follow the official Kubernetes documentation on how to monitor a Kubernetes cluster.
## Monitoring the vcluster StatefulSet

vcluster exposes metrics endpoints on `https://0.0.0.0:8443/metrics` (syncer metrics) and `https://0.0.0.0:6444/metrics` (k3s metrics). In order to scrape those metrics, you will need to send an `Authorization` header with a valid virtual cluster service account token that has permissions to access the `/metrics` endpoint within the vcluster.
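A minimal sketch of such a scrape, assuming a service account inside the vcluster that has been granted `get` on the `/metrics` non-resource URL, and assuming the syncer port has been made reachable locally (for example via a port-forward):

```bash
# Create a short-lived token for a service account inside the vcluster
# (Kubernetes 1.24+). The "default" service account is an assumption; it must
# be bound to a role allowing nonResourceURLs ["/metrics"], verbs ["get"].
TOKEN=$(kubectl create token default)

# Scrape the syncer metrics endpoint, reached here via a local port-forward
# of the vcluster pod's port 8443.
curl -k -H "Authorization: Bearer $TOKEN" https://localhost:8443/metrics
```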
## Logging

You can enable logging for vcluster pods from the host cluster or from within each vcluster.
## Enabling the Hostpath Mapper

vcluster's internal logging relies on enabling a vcluster component called the Hostpath Mapper, which makes sure the virtual pod and container names are resolved to their physical counterparts.

To enable this component, simply create a new vcluster or upgrade an existing one with the following values:
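A sketch of these values, assuming a `hostpathMapper` key in the vcluster Helm chart (verify against your chart version):

```yaml
# Hypothetical values.yaml sketch: enable the Hostpath Mapper component
# (key assumed; verify against your vcluster chart version).
hostpathMapper:
  enabled: true
```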
Once deployed successfully, a new `DaemonSet` component of vcluster will start running on every node. You can now install your desired logging stack and start collecting the logs.
## Logging with ELK and fluentd inside vcluster

Install the ELK stack:
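As an example, Elasticsearch and Kibana can be installed from the Elastic Helm charts (the `logging` namespace is an arbitrary choice; run these against the vcluster context):

```bash
# Add the Elastic Helm repository and install Elasticsearch and Kibana
# inside the vcluster; "logging" is an example namespace.
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch -n logging --create-namespace
helm install kibana elastic/kibana -n logging
```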
Next, install the fluentd DaemonSet, which can be found on GitHub. Alternatively, you can also deploy via the Helm charts provided by Fluent Bit.
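The Fluent Bit route might look like this, using the official fluent Helm charts (release name and namespace are example choices; the output configuration pointing at Elasticsearch is not shown):

```bash
# Install Fluent Bit from the fluent Helm charts; configure its Elasticsearch
# output via chart values for your setup (not shown here).
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit -n logging
```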
- Check for available indices: `port-forward` the `elasticsearch-master` service on port `9200` (example commands are sketched after this list) and visit http://localhost:9200/_cat/indices; you should see `logstash-*` indices available.
- Next, `port-forward` the Kibana dashboard on its default port `5601` and navigate to http://localhost:5601/app/management or choose "Stack Management" from the left menu sidebar.
- Choose "Index Patterns" and click on "Create index pattern".
- Type `logstash*` as the Name and `@timestamp` for the Timestamp field, then click on "Create index pattern".
- Now you can navigate to http://localhost:5601/app/discover or click on "Discover" from the left sidebar menu, and you should start seeing your logs.
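The `port-forward` commands referenced above might look like this (service names follow the Elastic Helm charts; adjust the namespace and names to your deployment):

```bash
# Forward Elasticsearch and Kibana to localhost; run each in its own terminal.
kubectl port-forward -n logging service/elasticsearch-master 9200:9200
kubectl port-forward -n logging service/kibana-kibana 5601:5601
```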
## Logging with Grafana and Loki

- Install the Prometheus stack (example install commands for this and the next step are sketched after this list).
- Install Loki.
- Open the Grafana dashboard:
  - Port-forward the Grafana dashboard: `kubectl port-forward -n monitor service/prometheus-grafana 3000:80`
  - Get the Grafana credentials: `kubectl get secrets -n monitor prometheus-grafana -o jsonpath='{.data.admin-password}' | base64 -D` (use `base64 -d` on Linux)
  - Navigate to http://localhost:3000
- Add a data source by navigating to http://localhost:3000/datasources or click "Data Sources" under the ⚙️ icon in the left menu.
- Click on "Add data source" and select "Loki" from the list.
- Enter the Loki endpoint in the `URL` field as `http://loki.monitoring:3100`, or the corresponding `<name>.<namespace>:<port>` value according to your deployment, and click on "Save & test".
- Next, click on "Explore" or navigate to http://localhost:3000/explore and select "Loki" from the dropdown menu. Select the desired labels and click on "Run query". Your logs should now start appearing.
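The first two steps could, for example, use the community Helm charts. The chart choice is an assumption; the release names and namespaces are picked so that the resulting service names match the commands above (`prometheus-grafana` in `monitor`, Loki at `loki.monitoring:3100`):

```bash
# Install kube-prometheus-stack (which bundles Grafana) into "monitor";
# the release name "prometheus" yields a "prometheus-grafana" service.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack -n monitor --create-namespace

# Install Loki into "monitoring" so it is reachable at http://loki.monitoring:3100.
helm install loki grafana/loki-stack -n monitoring --create-namespace
```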