I am trying to make the OpenTelemetry exporter work with the OpenTelemetry collector.
I found this OpenTelemetry collector demo.
So I copied these four config files
- docker-compose.yml (in my app, I removed the generators part and Prometheus, which I currently have issues running)
- otel-agent-config.yaml
- otel-collector-config.yaml
- .env
to my app.
Also, based on these two demos in the open-telemetry/opentelemetry-js repo:
I came up with my version (sorry it's a bit long; it's really hard to set up a minimum working version due to the lack of docs):
.env
OTELCOL_IMG=otel/opentelemetry-collector-dev:latest
OTELCOL_ARGS=
docker-compose.yml
version: '3.7'
services:
  # Jaeger
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"
  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
  # Collector
  otel-collector:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "1888:1888"   # pprof extension
      - "8888:8888"   # Prometheus metrics exposed by the collector
      - "8889:8889"   # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "55678"       # OpenCensus receiver
      - "55680:55679" # zpages extension
    depends_on:
      - jaeger-all-in-one
      - zipkin-all-in-one
  # Agent
  otel-agent:
    image: ${OTELCOL_IMG}
    command: ["--config=/etc/otel-agent-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-agent-config.yaml:/etc/otel-agent-config.yaml
    ports:
      - "1777:1777"   # pprof extension
      - "8887:8888"   # Prometheus metrics exposed by the agent
      - "14268"       # Jaeger receiver
      - "55678"       # OpenCensus receiver
      - "55679:55679" # zpages extension
      - "13133"       # health_check
    depends_on:
      - otel-collector
otel-agent-config.yaml
receivers:
  opencensus:
  zipkin:
    endpoint: :9411
  jaeger:
    protocols:
      thrift_http:
exporters:
  opencensus:
    endpoint: "otel-collector:55678"
    insecure: true
  logging:
    loglevel: debug
processors:
  batch:
  queued_retry:
extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [opencensus, jaeger, zipkin]
      processors: [batch, queued_retry]
      exporters: [opencensus, logging]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging, opencensus]
otel-collector-config.yaml
receivers:
  opencensus:
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: promexample
    const_labels:
      label1: value1
  logging:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
    format: proto
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
processors:
  batch:
  queued_retry:
extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679
service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [opencensus]
      processors: [batch, queued_retry]
      exporters: [logging, zipkin, jaeger]
    metrics:
      receivers: [opencensus]
      processors: [batch]
      exporters: [logging]
After running docker-compose up -d, I can open Jaeger (http://localhost:16686) and the Zipkin UI (http://localhost:9411).
And my ConsoleSpanExporter works in both the web client and the Express.js server.
However, when I tried this OpenTelemetry exporter code in both client and server, I still had issues connecting to the OpenTelemetry collector.
Please see my comments about the URLs inside the code:
import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
// ...
tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
tracerProvider.addSpanProcessor(
  new SimpleSpanProcessor(
    new CollectorTraceExporter({
      serviceName: 'my-service',
      // url: 'http://localhost:55680/v1/trace', // Returns error 404.
      // url: 'http://localhost:55681/v1/trace', // No response; does not exist.
      // url: 'http://localhost:14268/v1/trace', // No response; does not exist.
    })
  )
);
Any ideas? Thanks.
The demo you tried uses an older configuration and OpenCensus, which should be replaced with the OTLP receiver. That said, here is a working example: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node/docker
So I'm copying the files from there:
docker-compose.yaml
version: "3"
services:
  # Collector
  collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
    volumes:
      - ./collector-config.yaml:/conf/collector-config.yaml
    ports:
      - "9464:9464"
      - "55680:55680"
      - "55681:55681"
    depends_on:
      - zipkin-all-in-one
  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
  # Prometheus
  prometheus:
    container_name: prometheus
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  zipkin:
    endpoint: "http://zipkin-all-in-one:9411/api/v2/spans"
  prometheus:
    endpoint: "0.0.0.0:9464"
processors:
  batch:
  queued_retry:
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
      processors: [batch, queued_retry]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
      processors: [batch, queued_retry]
prometheus.yaml
global:
  scrape_interval: 15s # Default is every 1 minute.
scrape_configs:
  - job_name: 'collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['collector:9464']
This should work fine with opentelemetry-js ver. 0.10.2.
The default port for traces is 55680, and for metrics it is 55681.
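To match those defaults, the exporter from the question just needs to target the OTLP receiver ports exposed by the docker-compose above. A minimal sketch of the option objects (serviceName and url are the same CollectorTraceExporter options used in the question; localhost is an assumption for a local docker-compose setup):

```javascript
// Sketch only: option objects for the question's CollectorTraceExporter.
// The gRPC exporter targets 55680 (no scheme or path for gRPC), while the
// HTTP/JSON exporter targets 55681 with the /v1/trace path used by
// opentelemetry-js 0.10.x.
const grpcExporterOptions = {
  serviceName: 'my-service',
  url: 'localhost:55680', // OTLP gRPC receiver
};

const httpExporterOptions = {
  serviceName: 'my-service',
  url: 'http://localhost:55681/v1/trace', // OTLP HTTP receiver
};

console.log(grpcExporterOptions.url, httpExporterOptions.url);
```

Either object can then be passed to new CollectorTraceExporter(...) exactly as in the question's snippet.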
At the link I posted previously you will always find the latest up-to-date working example: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/collector-exporter-node
And for a web example you can use the same docker setup and see all working examples here: https://github.com/open-telemetry/opentelemetry-js/tree/master/examples/tracer-web/