Problem description
I'm trying to enhance my monitoring and want to expand the number of metrics pulled into Prometheus from our Kube estate. We already have a standalone Prom implementation with a hard-coded config file monitoring some bare-metal servers, and it hooks into cAdvisor for generic Pod metrics.
What I would like to do is configure Kube to monitor the apache_exporter metrics from a webserver deployed in the cluster, and also dynamically add a second, third, etc. webserver as the instances are scaled up.
I've looked at the kube-prometheus project, but it seems geared more toward instances where there is no established Prometheus deployed. Is there a simple way to get Prometheus to scrape the Kube API or etcd for the current list of pods matching certain criteria (e.g. a label like deploymentType=webserver), scrape the apache_exporter metrics for those pods, and likewise scrape the mysqld_exporter metrics where deploymentType=mysql?
Recommended answer
There's a project called kube-prometheus-stack (formerly prometheus-operator): https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
It has the concepts of ServiceMonitor and PodMonitor:
- https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#servicemonitor
- https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#podmonitor
Basically, this is a selector that points your Prometheus instance at scrape targets. In the case of a service selector, it discovers all the pods behind the service. In the case of a pod selector, it discovers pods directly. In both cases the Prometheus scrape config is updated and reloaded automatically.
An example PodMonitor:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example
  namespace: monitoring
spec:
  podMetricsEndpoints:
    - interval: 30s
      path: /metrics
      port: http
  namespaceSelector:
    matchNames:
      - app
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
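To show how this selector lines up with a workload, here is a minimal sketch of a Deployment that the PodMonitor above would discover. The names, the sidecar pattern, and the exporter image are illustrative assumptions, not part of the original answer; 9117 is apache_exporter's default port:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app    # matched by the PodMonitor's selector.matchLabels
    spec:
      containers:
        - name: webserver
          image: httpd:2.4
        - name: apache-exporter            # hypothetical sidecar exposing Apache metrics
          image: apache-exporter:latest    # placeholder image; substitute a real exporter build
          ports:
            - name: http                   # matched by the PodMonitor's "port: http"
              containerPort: 9117          # apache_exporter's default listen port
Scaling the Deployment up creates new Pods with the same labels, so Prometheus picks up the second, third, etc. instances automatically, which is exactly the dynamic behaviour asked about in the question.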
Note that this PodMonitor object itself must be discovered by the controller. To achieve this, you write a PodMonitorSelector (link). This additional explicit linkage is intentional: this way, if you have two Prometheus instances on your cluster (say, Infra and Product), you can separate which Prometheus gets which Pods in its scrape config.
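As a sketch of that linkage (the field names follow the Prometheus CRD; the instance name and the prometheus: infra label are assumptions), the Prometheus resource selects monitors by label:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: infra
  namespace: monitoring
spec:
  # Only PodMonitor/ServiceMonitor objects carrying this label are added
  # to this instance's scrape config.
  podMonitorSelector:
    matchLabels:
      prometheus: infra
  serviceMonitorSelector:
    matchLabels:
      prometheus: infra
The PodMonitor above would then need prometheus: infra in its metadata.labels to be picked up by this instance.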
The same applies to ServiceMonitor.
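For completeness, a ServiceMonitor equivalent of the PodMonitor above could look like this (a sketch; note it uses endpoints rather than podMetricsEndpoints, and its selector and port refer to the Service rather than the Pod):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example
  namespace: monitoring
spec:
  endpoints:
    - interval: 30s
      path: /metrics
      port: http                         # the named port on the Service
  namespaceSelector:
    matchNames:
      - app
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app     # matches labels on the Service object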