jeff

Missed last week's #caturday, so here is my son pulling on the cat's ears and tail. And the cat just doesn't mind :ablobcatbouncefast: 😂 #catsofmasto #catsofmastodon

jeff

Happy #Caturday everyone 🍻 🎉 My feline is so patient when the kids carry her around the house 😂 🏡 #catsofmasto #catsofmastodon

jeff

Happy belated #Caturday everyone! 🍻 🎉 🐈‍⬛

My children love cats as much as I do 😍 #catsofmasto #catsofmastodon

jeff

@Gargron OK, so I have some encouraging news. Changing the Ruby version seems to fix the DNS lookups. When I test it now, I get the following:
pastebin.com/k2LNCfKz

^^ On the second-to-last line, it says the table size is empty, which is good news.

Unfortunately the workers are still "stuck", but the TID# is much longer than before. To try this manually in the Ruby console, would it be correct to test it with:

ActivityPub::ProcessingWorker.new.perform(106...)
?? (or no)
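
A minimal sketch of that manual test from a Rails console (bin/rails console), assuming the single-argument call above is correct; the ID below is a placeholder for the truncated 106..., and the Timeout wrapper is only there to surface a hang rather than wait forever:

require 'timeout'

Timeout.timeout(60) do
  # Replace the placeholder with the real ID from the Sidekiq dump
  ActivityPub::ProcessingWorker.new.perform(106_000_000_000_000_000) # placeholder ID
end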

Eugen Rochko

@jeff :AAAAAA:

I don't really know where to go from here. I suppose you could keep trying to run those things manually until one of them gets stuck in the console, in which case we might learn where it gets stuck.

Or maybe the server is cursed

To answer your question, yes

jeff

@Gargron Hey, so sorry to bother you again. If you are busy and don't feel like responding, I totally get it.

Looks like in less than a day there are 9 stuck busy workers. Here is the full dump:
pastebin.com/DMFBZXgf

What is interesting is that it still has the same warnings as before.

Is there a way to verify the patch was installed correctly? I've attached a screenshot of me applying the git command.

I don't know if this helps, but here are the TIDs that were stuck:
pastebin.com/8Nhxfe4f
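
A sketch of how those busy workers and their TIDs could be listed from the Rails console, assuming Sidekiq's standard Workers API with hash-style access to each in-progress job:

require 'sidekiq/api'

# Each entry is [process_id, thread_id, work]; anything running for a long time is a stuck worker
Sidekiq::Workers.new.each do |process_id, tid, work|
  started_at = Time.at(work['run_at'])
  puts "#{process_id} TID-#{tid} #{work['queue']} running since #{started_at}"
end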

Eugen Rochko

@jeff Seems like it's running my patch and still getting stuck in it

Is there anything blocking ffmpeg on the system somehow?

jeff

@Gargron OMG, just restarting the service made it work haha. Beers or dinner is on me, do you have PayPal?

jeff

@Gargron I can't believe it was that easy. I am such a moron. my bad, seriously

Eugen Rochko

@jeff Dealing with PayPal donations is a bother in terms of taxes, but I have a Patreon you can donate to and then cancel immediately

jeff

@Gargron
Hi Eugen. Check this out. All 25 Sidekiq workers were "busy", so my timeline stopped. I was able to grab a better capture this time.

Here is the full dump:
pastebin.com/4dbc4qGw

And here are just the errors related to the busy workers:
pastebin.com/WyRt2z3A

Can you make any sense out of this?

Eugen Rochko

@jeff Looks like they were all stuck reading the pipe output from an ffmpeg command

What's your Ruby version, by the way? And this is the "main" branch or 3.3.0?
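
A generic Ruby illustration of the pipe problem described above, not Mastodon's actual code, and the input path is a placeholder: if a child process such as ffmpeg fills one pipe buffer while the parent only reads the other, both sides block; draining stdout and stderr on separate threads avoids that.

require 'open3'

# ffmpeg writes its progress and errors to stderr, so both streams must be drained
Open3.popen3('ffmpeg', '-i', 'input.mp4', '-f', 'null', '-') do |stdin, stdout, stderr, wait_thr|
  stdin.close
  out_reader = Thread.new { stdout.read }
  err_reader = Thread.new { stderr.read }
  status = wait_thr.value
  puts "ffmpeg exited with #{status.exitstatus}; last stderr line: #{err_reader.value.lines.last}"
  out_reader.value # join the stdout reader too
end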

jeff

Why do I have to restart my masto services after just one or two days? It's like everything is frozen and requests default to pending, even though I have plenty of resources to spare and I'm on the current version.

I just don't get it.
@Gargron

Eugen Rochko

@jeff I don't know what you mean by pending requests and what that has to do with restarting.

spla
@jeff seems related to what's happening to mine:
my Mastodon instance was running great on CentOS 8, but I migrated it to a new server running Ubuntu 20.04 LTS.
I don't know why, but after several hours running fine on the new server, Sidekiq/Redis start throwing lines like this:
'WARN: Your Redis network connection is performing extremely poorly. Last RTT readings were [100632, 99845, 100015, 99969, 100037], ideally these should be < 1000.'
And the federated timeline gets frozen.
I have to restart Sidekiq and Redis to get back to normal.
This has happened three times in two days. For some reason Redis is performing really badly.

@Gargron
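
For reference, a sketch of checking that latency by hand from a Ruby console, assuming the redis-rb client and that the RTT readings quoted above are in microseconds (so ~100,000 means roughly 100 ms per round trip):

require 'redis'
require 'benchmark'

redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0'))

# Time five PINGs, similar to the samples behind the warning, and print them in microseconds
rtts = Array.new(5) { (Benchmark.realtime { redis.ping } * 1_000_000).round }
puts "RTT readings (µs): #{rtts.inspect}"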