You can use the OpenTelemetry Collector to forward telemetry data.
To configure forwarding of traces to a TempoStack instance, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode with the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in _Additional resources_.
You have installed the Red Hat build of OpenTelemetry Operator.
You have installed the Tempo Operator.
You have deployed a TempoStack instance on the cluster.
Create a service account for the OpenTelemetry Collector.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
Create a cluster role for the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules: (1) (2)
- apiGroups: ["", "config.openshift.io"]
  resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]
1 | The k8sattributesprocessor requires permissions for the pods and namespaces resources. |
2 | The resourcedetectionprocessor requires permissions for infrastructures and their status. |
Bind the cluster role to the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: otel-collector-example
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io
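After saving the objects above, they can be applied with the OpenShift CLI. A minimal sketch, assuming the manifests are saved as otel-collector-sa.yaml, otel-collector-clusterrole.yaml, and otel-collector-clusterrolebinding.yaml (hypothetical file names) and that the otel-collector-example project already exists:

```shell
# Hypothetical file names; adjust them to match where you saved each manifest.
oc apply -f otel-collector-sa.yaml -n otel-collector-example
oc apply -f otel-collector-clusterrole.yaml
oc apply -f otel-collector-clusterrolebinding.yaml

# Optionally confirm the binding by impersonating the service account.
oc auth can-i list pods \
  --as=system:serviceaccount:otel-collector-example:otel-collector-deployment
```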
Create a YAML file to define the OpenTelemetryCollector custom resource (CR).
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config: |
    receivers:
      jaeger:
        protocols:
          grpc: {}
          thrift_binary: {}
          thrift_compact: {}
          thrift_http: {}
      opencensus: {}
      otlp:
        protocols:
          grpc: {}
          http: {}
      zipkin: {}
    processors:
      batch: {}
      k8sattributes: {}
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
    exporters:
      otlp:
        endpoint: "tempo-simplest-distributor:4317" (1)
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [jaeger, opencensus, otlp, zipkin] (2)
          processors: [memory_limiter, k8sattributes, resourcedetection, batch]
          exporters: [otlp]
1 | The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example, which is already created. |
2 | The Collector is configured with receivers for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol. |
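A common cause of Collector startup failures is a wiring mistake in the config block above: a pipeline referencing a receiver, processor, or exporter that was never defined. A minimal sketch of that consistency rule, with the traces pipeline from the example modeled as plain Python dictionaries (an illustration, not an official validation API):

```python
# The Collector config above, modeled as plain dictionaries.
config = {
    "receivers": {"jaeger": {}, "opencensus": {}, "otlp": {}, "zipkin": {}},
    "processors": {"batch": {}, "k8sattributes": {}, "memory_limiter": {}, "resourcedetection": {}},
    "exporters": {"otlp": {}},
    "pipelines": {
        "traces": {
            "receivers": ["jaeger", "opencensus", "otlp", "zipkin"],
            "processors": ["memory_limiter", "k8sattributes", "resourcedetection", "batch"],
            "exporters": ["otlp"],
        }
    },
}

def undefined_components(config):
    """Return pipeline entries that reference a component with no definition."""
    missing = []
    for name, pipeline in config["pipelines"].items():
        for section in ("receivers", "processors", "exporters"):
            for component in pipeline[section]:
                if component not in config[section]:
                    missing.append((name, section, component))
    return missing

print(undefined_components(config))  # [] -- every referenced component is defined
```

If a pipeline names a component that is missing from the corresponding top-level section, this check pinpoints it before you look through the Collector pod logs.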
You can use an OpenTelemetry Collector that contains the required components to forward logs to a LokiStack instance.
This use of the Loki Exporter is a temporary Technology Preview feature that is planned to be replaced when an improved solution is released, at which point the Loki Exporter will be replaced by the OTLP HTTP Exporter.
The Loki Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You have installed the Red Hat build of OpenTelemetry Operator.
You have installed the Loki Operator.
You have a supported LokiStack instance deployed on the cluster.
Create a service account for the OpenTelemetry Collector.
Example ServiceAccount object

apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: openshift-logging
Create a cluster role that grants the Collector's service account the permissions to push logs to the LokiStack application tenant.
Example ClusterRole object

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-logs-writer
rules:
- apiGroups: ["loki.grafana.com"]
  resourceNames: ["logs"]
  resources: ["application"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods", "namespaces", "nodes"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
Bind the cluster role to the service account.
Example ClusterRoleBinding object

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-logs-writer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector-logs-writer
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: openshift-logging
Create an OpenTelemetryCollector custom resource (CR) object.
Example OpenTelemetryCollector CR object

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: openshift-logging
spec:
  serviceAccount: otel-collector-deployment
  config:
    extensions:
      bearertokenauth:
        filename: "/var/run/secrets/kubernetes.io/serviceaccount/token"
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      k8sattributes:
        auth_type: "serviceAccount"
        passthrough: false
        extract:
          metadata:
            - k8s.pod.name
            - k8s.container.name
            - k8s.namespace.name
          labels:
            - tag_name: app.label.component
              key: app.kubernetes.io/component
              from: pod
        pod_association:
          - sources:
              - from: resource_attribute
                name: k8s.pod.name
              - from: resource_attribute
                name: k8s.container.name
              - from: resource_attribute
                name: k8s.namespace.name
          - sources:
              - from: connection
      resource:
        attributes: (1)
          - key: loki.format (2)
            action: insert
            value: json
          - key: kubernetes_namespace_name
            from_attribute: k8s.namespace.name
            action: upsert
          - key: kubernetes_pod_name
            from_attribute: k8s.pod.name
            action: upsert
          - key: kubernetes_container_name
            from_attribute: k8s.container.name
            action: upsert
          - key: log_type
            value: application
            action: upsert
          - key: loki.resource.labels (3)
            value: log_type, kubernetes_namespace_name, kubernetes_pod_name, kubernetes_container_name
            action: insert
      transform:
        log_statements:
          - context: log
            statements:
              - set(attributes["level"], ConvertCase(severity_text, "lower"))
    exporters:
      loki:
        endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/loki/api/v1/push (4)
        tls:
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
        auth:
          authenticator: bearertokenauth
      debug:
        verbosity: detailed
    service:
      extensions: [bearertokenauth] (5)
      pipelines:
        logs:
          receivers: [otlp]
          processors: [k8sattributes, transform, resource]
          exporters: [loki] (6)
        logs/test:
          receivers: [otlp]
          processors: []
          exporters: [debug]
1 | Provides the following resource attributes to be used by the web console: kubernetes_namespace_name, kubernetes_pod_name, kubernetes_container_name, and log_type. If you specify them as values of the loki.resource.labels attribute, the Loki Exporter processes them as labels. |
2 | Configures the format of the Loki logs. Supported values are json, logfmt, and raw. |
3 | Configures which resource attributes are processed as Loki labels. |
4 | Points the Loki Exporter to the gateway of the LokiStack logging-loki instance and uses the application tenant. |
5 | Enables the BearerTokenAuth extension that is required by the Loki Exporter. |
6 | Enables the Loki Exporter to export logs from the Collector. |
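The resource processor configuration above mixes two actions: insert sets an attribute only if it is not already present, while upsert always sets it, overwriting any existing value. A minimal sketch of that distinction on a plain attribute map (an illustration only, not the processor's implementation):

```python
def apply_action(attributes, key, value, action):
    """Mimic the resource processor's insert/upsert actions on an attribute map.

    insert: set the key only if it is not present yet.
    upsert: set the key, overwriting any existing value.
    """
    if action == "insert" and key in attributes:
        return attributes
    attributes[key] = value
    return attributes

attrs = {"k8s.namespace.name": "my-app"}
apply_action(attrs, "loki.format", "json", "insert")    # added: key was absent
apply_action(attrs, "kubernetes_namespace_name", attrs["k8s.namespace.name"], "upsert")
apply_action(attrs, "loki.format", "logfmt", "insert")  # ignored: key already present
print(attrs["loki.format"])                # json
print(attrs["kubernetes_namespace_name"])  # my-app
```

This is why the example uses upsert for the kubernetes_* attributes it copies from the k8sattributes processor, but insert for loki.format and loki.resource.labels, which should not clobber values the sender may have set.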