

Exactly that, yeah. Thank you for the link.
I am also ‘Andrew’, the admin of this server. I’ll try to remember to only use this account for posting stuff.


It’s straightforward enough to do in back-end code, to just reject a query if parameters are missing, but I don’t think there’s a way to define a schema that then gets used to auto-generate the documentation and validate the requests. If the request isn’t validated, then the back-end never sees it.
For something like https://freamon.github.io/piefed-api/#/Misc/get_api_alpha_search, the docs show that ‘q’ and ‘type_’ are required, and everything else is optional. The schema definition looks like:
/api/alpha/search:
  get:
    parameters:
      - in: query
        name: q
        schema:
          type: string
        required: true
      - in: query
        name: type_
        schema:
          type: string
          enum:
            - Communities
            - Posts
            - Users
            - Url
        required: true
      - in: query
        name: limit
        schema:
          type: integer
        required: false
required is a simple boolean for each individual field - you can say every field is required, or no fields are required, but I haven’t come across a way to say that at least one field is required.
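Since the schema can’t express “at least one of these”, the check has to live in the handler itself. A minimal sketch in plain Python (not PieFed’s actual code; the parameter names are just examples):

```python
def require_at_least_one(params, candidates):
    """Return True if at least one of the candidate keys is present
    and non-empty in the query parameters."""
    return any(params.get(key) for key in candidates)

# In a handler, before doing any work (Flask-style pseudocode):
# if not require_at_least_one(request.args, ('q', 'type_')):
#     return error_response(400, 'at least one of q/type_ is required')
```

The downside, of course, is that this rule is invisible to anyone reading the auto-generated docs, which was the original complaint.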


It’s in use by a Feed: https://piefed.social/f/3dprinting
Users, Communities and Feeds are all actors as far as remote fediverse instances are concerned, and they don’t necessarily have the means to distinguish between the types, so it’s easier if there’s no overlap.


I don’t know why All would stop like that, it shouldn’t do.
ip.address - - [24/Sep/2025 23:05:16] "GET /api/alpha/post/list?show_hidden=false&disliked_only=false&limit=25&page=1&saved_only=false&sort=Active&liked_only=false&type_=All HTTP/1.0" 200 -
ip.address - - [24/Sep/2025 23:05:41] "GET /api/alpha/post/list?page_cursor=2&show_hidden=false&disliked_only=false&limit=25&saved_only=false&sort=Active&liked_only=false&type_=All HTTP/1.0" 200 -
Boost starts using page_cursor after page 1, which got nixed


PieFed has a similar API endpoint. It used to be scoped, but was changed at the request of app developers. It’s how people browse sites by ‘New Comments’, and - for a GET request - it’s not really possible to document and validate that an endpoint needs to have at least one of something (i.e. that none of ‘post_id’, ‘user_id’, or ‘community_id’ are individually required, but there needs to be at least one of them).
It’s unlikely that these crawlers will discover PieFed’s API, but I guess it’s no surprise that they’ve moved on from basic HTML crawling to probing APIs. In the meantime, I’ve added some basic protection to the back-end for anonymous, unscoped requests to PieFed’s endpoint.
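That “basic protection” could be as simple as refusing unscoped listing to callers that don’t authenticate - a sketch of the idea, not PieFed’s actual code (the parameter names are illustrative):

```python
SCOPE_PARAMS = ('post_id', 'user_id', 'community_id')

def allow_request(params, authenticated):
    """Anonymous callers must scope the query to a post, user, or
    community; logged-in users may list everything."""
    if authenticated:
        return True
    return any(params.get(key) for key in SCOPE_PARAMS)
```

A crawler blindly probing the endpoint sends no credentials and no scoping parameters, so it gets rejected, while legitimate app traffic is unaffected.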


This is the kind of thing that apps handle well - I viewed your post from Voyager, and just had to click the sopuli.xyz link to get it resolved to my instance.
For the web browser experience: that link used to be a bit more visible (you can currently also get it from community sidebars, but it used to be in post sidebars too). Someone complained though, and it was removed from post sidebars, so I assume they’d have the same complaint if it was re-surfaced. You could just bookmark it, of course.
The page itself shouldn’t be slow to load (it’s a very lightweight page that’s not doing anything until you click ‘Retrieve’). It doesn’t immediately redirect you to the post because the assumption was that you might want to retrieve more than one post at a time.
That said, if you’re already viewing a page on the ‘wrong’ instance, then being able to change ‘https’ to ‘web+pf’ and have it work sounds cool (although it looks like Chrome makes highlighting ‘https’ into a 2-click experience).


There’s definitely something about the experience. I have a projector at home, and it’s not the latest model, and it’s far from the ideal set-up, but I was watching The Martian recently, and found myself wondering if it was the greatest movie ever made, and then had to remind myself that no, it’s just that I was projecting it.
There are some API rate limits (look for RateLimitExceeded in routes), but the settings are generous enough that a normal user (and not a bot) isn’t going to get caught by them.


It’s also available from the Options drop-down.



No, I was suggesting that peertube.wtf should have asked piefed.zip for the details of the comment. That would be the most authoritative place to ask, and that’s what PieFed, MBIN, and Friendica do.
For the comment that you made, piefed.zip would’ve signed it with your private key, and sent out 2 copies - one to technics.de and one to tilvids.com. After receiving it, technics.de is no longer involved, but tilvids.com would’ve sent the comment out to all the subscribers of ‘The Linux Experiment’. We can tell they did in fact do that, because the comment you made on piefed.zip is visible on piefed.social.
tilvids.com doesn’t have your private key though, and it doesn’t sign the comment with the channel’s private key either, so the question is then not ‘was the data sent out?’, but rather ‘how do remote instances know to trust that this comment was actually made by this person?’. If the author was also on tilvids.com, then it has access to the private key, so it can be signed when it’s sent out. If the author was from Mastodon, their comments include a cryptographic signature inside the JSON, so that can be used. For all other authors, the best thing to do - I would think - is grab it from the source.
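‘Grab it from the source’ falls out of how ActivityPub works: an object’s id is a URL on its home instance, so the instance that minted the id is the authoritative place to dereference it (with an Accept: application/activity+json header). A sketch of picking that host (the comment URL below is just an example):

```python
from urllib.parse import urlparse

def authoritative_host(object_id):
    """The instance that minted the object's id is the one to ask for
    a trustworthy copy, regardless of who relayed it."""
    return urlparse(object_id).hostname

# A comment relayed by tilvids.com still gets fetched from where it was created:
# authoritative_host('https://piefed.zip/comment/12345')  ->  'piefed.zip'
```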
I don’t actually know what other PeerTube instances do in this circumstance though. Comparing the number of comments on the host instance, vs. other PeerTube instances, vs. PieFed, reveals no discernible pattern. For ‘The Linux Experiment’, piefed.social has comments from misskey, from piefed, and from mbin that are absent from remote PeerTube instances. Hopefully, someone who’s familiar with their code can shed more light on their internal federation - if there’s something we can do to guarantee comment visibility on remote PeerTube instances, then we’ll do it if it’s feasible.
EDIT: just been digging through my server logs for requests of comments I made from PeerTube instances, and discovered tube.alphonso.fr - they have your comment: https://tube.alphonso.fr/w/eSYuduJSbZ9s7K4pFT3Ncd - so how fully PeerTube instances federate comments might be a policy decision that admins set, or it might just be buggy behaviour.


It appears to be specific to replies to replies - this video on peertube.wtf has a top-level comment from PieFed.
PeerTube’s federation model is different from Lemmy’s - they don’t sign remote comments when they federate them out again, so it’s often up to other instances to fetch them from the source. It might be that PieFed has to do something to help the likes of peertube.wtf successfully retrieve a comment when it’s a reply to another reply.


Nah, I don’t think they’re real Groups (in the ActivityPub sense). They’re just accounts running on a Mastodon instance by the looks of it, which doesn’t support Group creation (a.gup.pe was its own software).
You can replicate what happens when trying to join a fedigroups.social group from PieFed or Lemmy. Running:
curl 'https://fedigroups.social/.well-known/webfinger?resource=acct%3Aknitting%40fedigroups.social' | jq .
reveals that the application/activity+json link is for https://fedigroups.social/users/knitting
and if you then do curl --header 'accept: application/activity+json' https://fedigroups.social/users/knitting | jq . it reveals the ‘type’ to be a Service, not a Group. (a Service is a bot, so pretty much the same as a Person, but with automated activities)
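The same two-step check can be scripted: pull the actor URL out of the webfinger document, fetch it, and inspect ‘type’. A sketch, where the sample data just mirrors the curl output described above:

```python
def actor_link(webfinger_doc):
    """Pull the ActivityPub actor URL out of a webfinger response."""
    for link in webfinger_doc.get('links', []):
        if link.get('type') == 'application/activity+json':
            return link.get('href')
    return None

def is_group(actor_doc):
    """Only actors of type 'Group' behave like Lemmy/PieFed communities."""
    return actor_doc.get('type') == 'Group'

# Sample data matching the curl responses above:
webfinger = {'links': [{'rel': 'self',
                        'type': 'application/activity+json',
                        'href': 'https://fedigroups.social/users/knitting'}]}
actor = {'id': 'https://fedigroups.social/users/knitting', 'type': 'Service'}
```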


Bah, I knew I’d think of one after submitting my list: “It’s a sin”, of course. Oh well, too late now.




For this particular case, it’s more an instance of the software not interfering (in the sense of not changing things it doesn’t understand).
If Lemmy doesn’t implement flairs, then community updates from them won’t overwrite flairs set on PieFed’s copy of those comms. Also, when a PieFed user sends a comment to a Lemmy community, the Lemmy instance will just wrap it in an ‘Announce’ activity and send it out to all followers. It would be against their own spec to change the content of anything they’re Announcing, so followers who receive the comment and happen to be on PieFed instances will interpret it fully, whereas Lemmy will just ignore any fields in the JSON that it doesn’t have a use for.
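A sketch of that passthrough, with deliberately simplified JSON shapes (real activities carry many more fields, and ‘flair’ here just stands in for any PieFed-only field): the Announce wraps the object untouched, and a Lemmy-style receiver keeps only the keys it understands.

```python
def announce(community_id, obj):
    """Wrap an incoming object in an Announce without modifying it."""
    return {'type': 'Announce', 'actor': community_id, 'object': obj}

def lemmy_style_receive(obj, known_fields=('id', 'type', 'content')):
    """A receiver that silently drops fields it has no use for."""
    return {k: v for k, v in obj.items() if k in known_fields}

comment = {'id': 'https://piefed.social/comment/1', 'type': 'Note',
           'content': 'hi', 'flair': 'meta'}
wrapped = announce('https://lemmy.ml/c/example', comment)
```

Because the Announce carries the object verbatim, a PieFed instance downstream sees ‘flair’ intact, while a Lemmy instance just never looks at it.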


Maybe it was from a Mastodon server that requires ‘authorized fetch’ or whatever they call it? Last time I was tinkering with something related, Lemmy wasn’t doing the required signed GET request for the user, so couldn’t show the post.



There are some other papers that are trying to paint this trial as some kind of “PC gone mad” thing, and even they have clearly struggled to find a photo of Linehan where he doesn’t look at least a bit unhinged. I’m not trying to make this about his appearance, I just mean that there are some clear manifestations of poor mental health apparent, and he should probably try to direct his energies towards them a bit more instead.
Is there a use-case for pinging yourself?
If not, it seems better to be able to say “message me at @freamon@preferred.social” without actually generating a notification.
I suspect Netflix used Covid as an excuse to drop the bitrate, and then never actually put it back up again.
If you’re a pirate, you can tell how rubbish their 4K content is just from the file sizes.
Speaking of being needlessly destructive with stupid bots, these duplicates of other users’ posts don’t even register as cross-posts anymore (due to image proxying).