This repository was archived by the owner on Mar 15, 2021. It is now read-only.
Currently at Spotify, an ffwd container is injected into each pod by an admission controller.
Rolling out new versions of ffwd has been slow and cumbersome, since it requires recreating all the pods.
An alternative to the sidecar approach is to run ffwd as a DaemonSet. Fluentd, which ships logs off the GKE nodes, is deployed in a similar way. However, Fluentd derives metadata about the logs from the filename (this was the case in 2018; it might be different now).
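A node-level deployment might look roughly like the sketch below. The namespace, image tag, and UDP port are illustrative assumptions, not ffwd's actual configuration; the point is that `hostPort` (or alternatively `hostNetwork`) is what would let every pod on the node reach the single shared ffwd instance.

```yaml
# Hypothetical DaemonSet manifest -- names, image, and port are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ffwd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ffwd
  template:
    metadata:
      labels:
        app: ffwd
    spec:
      containers:
        - name: ffwd
          image: spotify/ffwd:latest   # placeholder image reference
          ports:
            - containerPort: 19000     # assumed ffwd UDP ingest port
              hostPort: 19000
              protocol: UDP
```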
This approach comes with its own set of challenges, some of which are outlined below.
We would need to map the incoming source IP address to a pod to get metadata such as the pod name. IP addresses can churn quickly, so this mapping would need to be kept fresh. We could watch for pod change events and use them as a cache buster.
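A minimal sketch of such a cache, kept fresh by pod change events. In a real deployment the events would come from the Kubernetes watch API (e.g. the official Python client's `Watch` over the pod list); here the event dicts are hand-rolled so the example is self-contained, and the field names are assumptions for illustration.

```python
# Sketch: map a datagram's source IP to pod metadata, invalidated by
# pod ADDED/MODIFIED/DELETED events (the "cache buster" idea above).

class PodIpCache:
    def __init__(self):
        self._by_ip = {}

    def handle_event(self, event):
        """Apply one pod change event to the cache."""
        pod = event["pod"]
        ip = pod.get("podIP")
        if not ip:
            return  # pod has no IP yet (e.g. still Pending)
        if event["type"] == "DELETED":
            # Only evict if the IP still belongs to this pod; a new pod
            # may already have reused the address.
            if self._by_ip.get(ip, {}).get("name") == pod["name"]:
                del self._by_ip[ip]
        else:  # ADDED / MODIFIED
            self._by_ip[ip] = {"name": pod["name"],
                               "namespace": pod["namespace"]}

    def lookup(self, ip):
        """Resolve a source IP to pod metadata, or None if unknown."""
        return self._by_ip.get(ip)


cache = PodIpCache()
cache.handle_event({"type": "ADDED",
                    "pod": {"name": "web-abc", "namespace": "prod",
                            "podIP": "10.4.0.7"}})
print(cache.lookup("10.4.0.7"))  # {'name': 'web-abc', 'namespace': 'prod'}
cache.handle_event({"type": "DELETED",
                    "pod": {"name": "web-abc", "namespace": "prod",
                            "podIP": "10.4.0.7"}})
print(cache.lookup("10.4.0.7"))  # None
```

The guard on delete matters: without it, a delete event arriving after a new pod has claimed the same IP would evict fresh metadata.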
Does the UDP receive buffer need to be sized even higher? Currently each pod on a node gets its own ffwd/UDP buffer.
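To give a sense of what sizing the buffer higher involves: the kernel silently caps per-socket receive buffers at `net.core.rmem_max`, so a single node-wide socket absorbing every pod's traffic would likely need both `SO_RCVBUF` raised and that ceiling lifted. A sketch of inspecting and requesting a larger buffer (the 8 MiB figure is arbitrary, purely for illustration):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
default_size = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("default receive buffer:", default_size)

# Request a larger buffer; the kernel caps the request at
# net.core.rmem_max, so the effective value must be read back.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("effective receive buffer:", effective)
sock.close()
```

Note that on Linux the value read back is double the requested size (the kernel doubles it to account for bookkeeping overhead), and that exceeding `rmem_max` requires either a sysctl change on the node or `SO_RCVBUFFORCE` with the right capability.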
Part of this issue should be the discovery work to assess how feasible this would be.