FBXL Social

oh hey site is running like shit for no reason again

Have you tried AI?

Slow queries?

@cjd still too busy to really get in there and find out why, just want to complain

psql -c "ALTER SYSTEM SET log_min_duration_statement TO '5s'"

tail -F /var/log/postgresql/postgresql-13-main.log

Paste results here and crowdsource the solution
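Note that `ALTER SYSTEM` only writes the new value to `postgresql.auto.conf`; it doesn't take effect until the configuration is reloaded. A minimal follow-up, using the same psql invocation as above:

```
# pick up the pending ALTER SYSTEM change without restarting Postgres
psql -c "SELECT pg_reload_conf()"
```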

@cjd I am getting trashed right now and it's all SELECT statements. Not even the same statement.

tail -n 10000 /var/log/postgresql/postgresql-13-main.log | grep FROM | sort | uniq -c

@cjd let me disable the part that shows the arguments and I'll try

@cjd it's basically every query, but here's all the ones that showed up over 100 times

```
106 execute <unnamed>: SELECT TRUE FROM "thread_mutes" AS t0 WHERE (t0."user_id" = $1) AND (t0."context" = $2) LIMIT 1
110 execute <unnamed>: SELECT b0."id", b0."user_id", b0."activity_id", b0."inserted_at", b0."updated_at" FROM "bookmarks" AS b0 WHERE (b0."user_id" = $1) AND (b0."activity_id" = $2)
115 execute <unnamed>: SELECT u0."id", u0."bio", u0."raw_bio", u0."email", u0."name", u0."nickname", u0."password_hash", u0."keys", u0."public_key", u0."ap_id", u0."avatar", u0."local", u0."follower_address", u0."following_address", u0."featured_address", u0."tags", u0."last_refreshed_at", u0."last_digest_emailed_at", u0."banner", u0."background", u0."note_count", u0."follower_count", u0."following_count", u0."is_locked", u0."is_confirmed", u0."password_reset_pending", u0."is_approved", u0."registration_reason", u0."confirmation_token", u0."default_scope", u0."domain_blocks", u0."is_active", u0."no_rich_text", u0."is_moderator", u0."is_admin", u0."show_role", u0."uri", u0."hide_followers_count", u0."hide_follows_count", u0."hide_followers", u0."hide_follows", u0."hide_favorites", u0."email_notifications", u0."mascot", u0."emoji", u0."pleroma_settings_store", u0."fields", u0."raw_fields", u0."is_discoverable", u0."invisible", u0."allow_following_move", u0."skip_thread_containment", u0."actor_type", u0."also_known_as", u0."inbox", u0."shared_inbox", u0."accepts_chat_messages", u0."last_active_at", u0."disclose_client", u0."pinned_objects", u0."is_suggested", u0."last_status_at", u0."birthday", u0."show_birthday", u0."language", u0."notification_settings", u0."blocks", u0."mutes", u0."muted_reblogs", u0."muted_notifications", u0."subscribers", u0."multi_factor_authentication_settings", u0."inserted_at", u0."updated_at" FROM "users" AS u0 WHERE (u0."ap_id" = $1)
127 execute <unnamed>: SELECT TRUE FROM "user_relationships" AS u0 WHERE (((u0."relationship_type" = $1) AND (u0."source_id" = $2)) AND (u0."target_id" = $3)) LIMIT 1
128 execute <unnamed>: SELECT u0."relationship_type", array_agg(u1."ap_id") FROM "user_relationships" AS u0 INNER JOIN "users" AS u1 ON u1."id" = u0."target_id" WHERE (u0."source_id" = $1) AND (u0."relationship_type" = ANY($2)) GROUP BY u0."relationship_type"
145 execute <unnamed>: UPDATE "public"."oban_jobs" AS o0 SET "state" = $1, "attempted_at" = $2, "attempted_by" = $3, "attempt" = o0."attempt" + $4 WHERE (o0."id" IN (SELECT so0."id" FROM "public"."oban_jobs" AS so0 WHERE (so0."state" = 'available') AND (so0."queue" = $5) AND (so0."attempt" < so0."max_attempts") ORDER BY so0."priority", so0."scheduled_at", so0."id" LIMIT $6 FOR UPDATE SKIP LOCKED)) RETURNING o0."id", o0."state", o0."queue", o0."worker", o0."args", o0."meta", o0."tags", o0."errors", o0."attempt", o0."attempted_by", o0."max_attempts", o0."priority", o0."attempted_at", o0."cancelled_at", o0."completed_at", o0."discarded_at", o0."inserted_at", o0."scheduled_at"
182 execute <unnamed>: SELECT u1."follower_address" FROM "following_relationships" AS f0 INNER JOIN "users" AS u1 ON f0."following_id" = u1."id" WHERE (f0."follower_id" = $1) AND (f0."state" = $2)
186 execute <unnamed>: SELECT f0."id", f0."user_id", f0."filter_id", f0."hide", f0."whole_word", f0."phrase", f0."context", f0."expires_at", f0."inserted_at", f0."updated_at" FROM "filters" AS f0 WHERE ((f0."expires_at" IS NULL) OR (f0."expires_at" > $1)) AND (f0."hide") AND (f0."user_id" = $2) ORDER BY f0."id" DESC
197 execute <unnamed>: SELECT o0."id", o0."data", o0."inserted_at", o0."updated_at", o0."id" FROM "objects" AS o0 WHERE (o0."id" = $1)
222 execute <unnamed>: SELECT count(*) FROM "chat_message_references" AS c0 WHERE (c0."chat_id" = $1) AND (c0."unread" = TRUE)
232 execute <unnamed>: SELECT c0."id", c0."object_id", c0."chat_id", c0."unread", c0."inserted_at", c0."updated_at" FROM "chat_message_references" AS c0 WHERE (c0."chat_id" = $1) ORDER BY c0."id" DESC LIMIT 1
247 execute <unnamed>: SELECT a0."id", a0."data", a0."local", a0."actor", a0."recipients", a0."inserted_at", a0."updated_at" FROM "activities" AS a0 INNER JOIN "users" AS u1 ON (a0."actor" = u1."ap_id") AND (u1."is_active" = TRUE) WHERE (associated_object_id((a0."data")) = $1) AND ((a0."data")->>'type' = $2)
276 execute <unnamed>: SELECT o0."id", o0."token", o0."refresh_token", o0."scopes", o0."valid_until", o0."user_id", o0."app_id", o0."inserted_at", o0."updated_at" FROM "oauth_tokens" AS o0 WHERE (o0."token" = $1)
```

I'd post this as markdown but pleroma's option for formatting posts disappeared.

What threshold are you using? 5s? Set it to 20s then re-run; it will weed out the worst queries.
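Concretely, that's the same command as before with a higher threshold, plus a reload so it takes effect:

```
psql -c "ALTER SYSTEM SET log_min_duration_statement TO '20s'"
psql -c "SELECT pg_reload_conf()"
```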

@cjd I am logging all queries. I recorded a video of the SELECT statements going by and it's a super fast stream, but I can't post it because it has private data in it. I'm getting hit by thousands of statements

@cjd trying

@cjd based on the slow queries, I should disable remote deletes again

Aha, that explains it. I remember Pete saying it's implemented in the worst way possible, so they should always be rejected. BTW how do you disable them? It's not a perf issue for me but I wouldn't mind getting in front of it.

@Moon @cjd >slow que(e)r...
sexist

@cjd this MRF:

```
Pleroma.Web.ActivityPub.MRF.VocabularyPolicy,
```

with this config:

```
config :pleroma, :mrf_vocabulary,
  reject: ["Delete"]
```
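For the policy to actually run, it also has to be listed in the MRF policy list, not just configured; depending on the Pleroma version that list lives under `config :pleroma, :mrf, policies: [...]` or the older `config :pleroma, :instance, rewrite_policy: [...]`. A sketch of the combined config, assuming a current release, a `prod.secret.exs` config file, and no other policies already enabled:

```
# prod.secret.exs -- sketch only; merge into whatever policies you already have
config :pleroma, :mrf,
  policies: [Pleroma.Web.ActivityPub.MRF.VocabularyPolicy]

config :pleroma, :mrf_vocabulary,
  reject: ["Delete"]
```

Edits to the config file only apply after a Pleroma restart; settings changed through the admin panel go into the ConfigDB instead.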

Right, now I remember my instance is a docker container which has worked just fine and dandy for at least a year and I have no intention of touching it.

@cjd @Moon >docker container which has worked
That won't be for much longer

Oh yes cool, I can do that from the admin panel.

I have docker containers that have run happily for *years*.

> Professional sysadmins hate him.
> Get super high uptime with near zero effort using this one weird trick.

@cjd @dcc I think something is happening to the company that makes docker? something like that

@cjd @Moon >> Professional sysadmins hate him.
Hate is the wrong word, laugh is more correct
>> Get super high uptime with near zero effort using this one weird trick.
They call me 99 for a reason (no i dont use shitdocker)

@cjd I am actually not sure if the MRF is working or if I configured it correctly. I don't see anything in my logs saying the activities were rejected, and I'm seeing a few go through

@cjd
How the fuck is it anything other than looking up a single record and erasing it?

It shouldn't even require rebuilding anything until the next vacuum. It should be no more intense than viewing a status.
@Moon

It could be a cascading delete, but I think the issue is that when a user gets banned on Mastodon, it sends a shower of delete requests out for everything they ever posted.

@cjd @r000t I am conferring with the Pleroma team on this to try to track down performance issues that primarily affect SPC. They did find that deleting activities causes a subsequent delete of notifications related to that activity; that delete is one of the slow queries in my logs. We're already in discussion on how to improve this and in general make Mastodon mass-deleting activities not destroy Pleroma servers.

@Moon
Maybe statusid <> notification needs an index; then the lookup is more straightforward (binary search)

I'm very sure moon would be willing to trade some disk space for less core raep
@cjd
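A sketch of what that index could look like, assuming the notifications table references activities through an activity_id column and that no equivalent index exists yet (worth checking with `\d notifications` first):

```
-- hypothetical index so "delete the notifications for this activity" is an
-- index lookup instead of a sequential scan; CONCURRENTLY avoids locking
-- the table while it builds
CREATE INDEX CONCURRENTLY IF NOT EXISTS notifications_activity_id_index
    ON notifications (activity_id);
```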