Question
I'm looking to use Serilog to write structured log data to an Amazon S3 bucket, then analyze using Databricks. I assumed there would be an S3 sink for Serilog but I found I was wrong. I think perhaps using the File sink along with something else might be the ticket, but I'm unsure what that might look like. I suppose I could mount the S3 bucket on my EC2 instance and write to it, but I'm told that's problematic. Could one of you fine folks point me in the right direction?
Answer
As of this writing, there are no sinks that write to Amazon S3, so you'd have to write your own.
I'd start by taking a look at the Serilog.Sinks.AzureBlobStorage sink, as it probably can serve as a base for you to write a sink for Amazon S3.
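To make that concrete, below is a minimal sketch of what a hand-rolled S3 sink could look like, assuming the Serilog and AWSSDK.S3 NuGet packages. The `S3Sink` class name, the batch size, and the object-key scheme are illustrative choices for this example, not an existing Serilog or AWS API. It formats events as newline-delimited JSON, which Databricks can read directly with `spark.read.json`, and uploads a new S3 object per batch:

```csharp
// Minimal sketch of a custom Serilog sink that batches events and uploads
// each batch to S3 as a newline-delimited JSON object. S3Sink is a
// hypothetical class written for this example.
using System;
using System.IO;
using System.Text;
using Amazon.S3;
using Amazon.S3.Model;
using Serilog.Core;
using Serilog.Events;
using Serilog.Formatting;
using Serilog.Formatting.Json;

public sealed class S3Sink : ILogEventSink, IDisposable
{
    private readonly IAmazonS3 _s3;
    private readonly string _bucketName;
    private readonly string _keyPrefix;
    private readonly int _batchSize;
    private readonly ITextFormatter _formatter = new JsonFormatter(renderMessage: true);
    private readonly StringBuilder _buffer = new StringBuilder();
    private readonly object _syncRoot = new object();
    private int _eventCount;

    public S3Sink(IAmazonS3 s3, string bucketName, string keyPrefix, int batchSize = 100)
    {
        _s3 = s3;
        _bucketName = bucketName;
        _keyPrefix = keyPrefix;
        _batchSize = batchSize;
    }

    public void Emit(LogEvent logEvent)
    {
        lock (_syncRoot)
        {
            // JsonFormatter appends a newline after each event, so the
            // buffer accumulates JSON-lines output.
            using (var writer = new StringWriter())
            {
                _formatter.Format(logEvent, writer);
                _buffer.Append(writer.ToString());
            }

            if (++_eventCount >= _batchSize)
                Flush();
        }
    }

    public void Dispose()
    {
        lock (_syncRoot)
        {
            Flush(); // upload any remaining events on shutdown
        }
    }

    // Uploads the buffered batch as a new, uniquely keyed S3 object.
    // Called only while holding _syncRoot.
    private void Flush()
    {
        if (_eventCount == 0)
            return;

        var key = $"{_keyPrefix}/{DateTime.UtcNow:yyyy/MM/dd/HHmmssfff}-{Guid.NewGuid():N}.json";
        var request = new PutObjectRequest
        {
            BucketName = _bucketName,
            Key = key,
            ContentBody = _buffer.ToString()
        };

        // Blocking on the async call keeps the sketch simple; a production
        // sink would hand uploads off to a background worker instead.
        _s3.PutObjectAsync(request).GetAwaiter().GetResult();

        _buffer.Clear();
        _eventCount = 0;
    }
}
```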
Links to the source code for several other sinks are available in the wiki and can give you some more ideas too: https://github.com/serilog/serilog/wiki/Provided-Sinks
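Once you have a sink along the lines of the sketch above, registering it is just a matter of `WriteTo.Sink()`; the bucket name and key prefix below are placeholder values:

```csharp
using Amazon.S3;
using Serilog;

// Register the custom sink; AmazonS3Client picks up credentials and region
// from the usual AWS SDK configuration sources.
Log.Logger = new LoggerConfiguration()
    .WriteTo.Sink(new S3Sink(new AmazonS3Client(), "my-log-bucket", "serilog"))
    .CreateLogger();

Log.Information("Processed {Count} records", 42);
Log.CloseAndFlush(); // disposes the sink, flushing any buffered events
```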