Expose job latency metric via ActiveSupport Notifications job middleware #366
mnbbrown wants to merge 4 commits into que-rb:master
Conversation
That PR's upstream is @stephenbinns's personal GitHub. We've forked it onto the @gocardless org so we can all contribute. Sorry for the confusion! I knew this would happen :(
As mentioned previously, our fork implements "first-class" support for Prometheus. We thought that instead of upstreaming all of that, we'd make a generic way for users to implement their own metric collection, irrespective of whether they're using Prometheus or not. We've been testing this internally, and for it to work we need to run
What would be your preferred approach @ZimbiX?
Ah ha. Alright, I'll close #362 then, but the feedback left there is still valuable.
Hmm. It's been a while since I skimmed your Prometheus work, so I don't recall what you had. But yeah, given Que's aim was to be as minimal/focused as possible, maybe Prometheus support should be its own gem, e.g.
That makes sense.
Ah yeah, moving
Even if Prometheus support is not added to Que itself, I would prefer using something other than ActiveSupport for the hooks. Another alternative could be to make use of the
Would you even need middleware to be able to run an HTTP server from the Que process, though? I imagine the Ruby file you provide to Que could spawn a thread to run the HTTP server? Lots of options! =P I'm not yet sure what my preferred approach would be. Something that others can benefit the most from would be best.
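For example, the Ruby file Que loads on boot could do something like the minimal sketch below, which starts a WEBrick server in a background thread. The port, path, and payload are placeholders for illustration, not anything from this PR:

```ruby
require 'webrick'
require 'json'

# Start a tiny metrics HTTP server in a background thread alongside the Que
# process. The endpoint and payload here are illustrative only.
Thread.new do
  server = WEBrick::HTTPServer.new(
    Port:      9090,
    AccessLog: [],
    Logger:    WEBrick::Log.new(File::NULL)
  )

  server.mount_proc '/metrics' do |_req, res|
    res['Content-Type'] = 'application/json'
    # A real implementation would render whatever metrics your collection
    # code has accumulated; this is a placeholder payload.
    res.body = JSON.generate(jobs_worked: 0)
  end

  server.start
end
```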
As requested :) #378
Bit of a deep dive into options... listen on
Wow, great work putting that gem together - that was fast! I really like that approach.
Yeah, that was deprecated perhaps prematurely.
Fair enough!
Hmm, I can't see a clean way to access the locker either. This could be a good reason to add an accessor for it, like:

```ruby
module Que
  class << self
    attr_accessor :locker
  end

  class Locker
    def initialize(**options)
      Que.locker = self
      super(**options)
    end
  end
end
```

Then, as you've probably seen, you can access the worker threads with
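For illustration only, assuming the locker keeps a `workers` collection and each worker wraps a thread (neither accessor is confirmed in this thread), the metrics code could then reach the threads via the new accessor:

```ruby
# Hypothetical usage of the accessor above; `workers` and `thread` are
# assumed names, not necessarily Que's real API.
worker_threads = Que.locker.workers.map(&:thread)
```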
This works really well: #380. See the latest
Slowly making progress towards being able to close this PR... #382 is another PR we need in order to support que-prometheus properly.
...picking up from @stephenbinns's good work in #362 from a fork in the gocardless GH org.
This adds ActiveSupport Notifications as job middleware. There is also a change to expose how long a job waited before being picked up, which is useful for working out how close to capacity the workers are.
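Roughly, the shape of the idea looks like the sketch below. The event name and the exact middleware signature are assumptions for illustration, not necessarily what this PR ships:

```ruby
require 'active_support/notifications'

# Wrap job execution in an ActiveSupport::Notifications event so any
# subscriber (Prometheus or otherwise) can record duration and queue latency.
Que.job_middleware.push(
  lambda do |job, &block|
    ActiveSupport::Notifications.instrument('que_job.worked', job: job) do
      block.call
    end
    nil
  end
)

# A consumer can then turn the event into whatever metric it likes.
ActiveSupport::Notifications.subscribe('que_job.worked') do |_name, started, finished, _id, payload|
  duration_ms = (finished - started) * 1000
  # Record duration_ms (and the job's wait time, derived from payload[:job])
  # in your metrics backend of choice.
end
```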