Do we need algorithms that predict what we’re interested in though? At what point do we go “ah this is actually causing more trouble than it’s worth?”
I’d be perfectly fine browsing content by category rather than having it fed to me by some sort of black-box weighting system with no clear way for me to correct it. I mean, it works great here on Lemmy.
Lemmy literally has an algorithm to rank posts.
Or do you sort your posts by new?
What would you propose for YouTube?
Do you ever sort posts by “hot”, “active” or even “top 6 hours”? They’re all algorithms that predict what you’re interested in. Less complex than something like YouTube or Instagram, but the same core principle.
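To make that concrete, here’s a rough Python sketch of what a “hot”-style sort and a “top 6 hours”-style sort boil down to. The function names and constants (the +2 offsets, the 1.8 gravity) are illustrative guesses, not Lemmy’s exact implementation:

```python
import math
from datetime import datetime, timezone

def hot_rank(score: int, published: datetime, gravity: float = 1.8) -> float:
    """Time-decayed popularity, loosely in the spirit of a 'hot' sort:
    net votes push a post up, age steadily pulls it back down."""
    hours_old = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    return math.log(max(2, score + 2)) / math.pow(hours_old + 2, gravity)

def top_since(posts: list[dict], hours: float) -> list[dict]:
    """'Top 6 hours' style sort: keep posts inside the time window,
    then order them by raw score."""
    cutoff = datetime.now(timezone.utc).timestamp() - hours * 3600
    recent = [p for p in posts if p["published"].timestamp() >= cutoff]
    return sorted(recent, key=lambda p: p["score"], reverse=True)
```

Neither of these knows anything about you personally, but both are still predicting what you’re likely to want to see.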
The amount of content published on the internet each day makes some kind of sorting necessary. Browsing YouTube by “new” would be a cluttered mess, even with fairly narrow categories. Over 11,000 hours of new video are posted every hour, so we need some way to automatically sort the wheat from the chaff, and that means some sort of algorithm.
So how do we build an algorithm that delivers what we want, without giving people too much of what they want if they want something potentially harmful? As far as I know, nobody has found a good answer to that.
Well, I mean, obviously I’m not against algorithms in general. They’re just mathematical functions to achieve a goal. Each HTTP request generally uses both encryption and compression algorithms, and that’s highly useful.
I’m questioning the usefulness of profiling users and targeting them with specific content. The Lemmy algorithm isn’t that complex: it doesn’t build a profile on you, it just goes by general user engagement. That’s fine. Further, by virtue of being open source, Lemmy couldn’t have a “black box” anyway; the ranking code is open for anyone to view and analyse.
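To put that distinction concretely, here’s a rough sketch (the Post fields, topic names and weights are made up purely for illustration): an engagement-only sort gives everyone the same feed, while a per-user profile silently reweights the same posts.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    topic: str
    score: int  # aggregate up/down votes, identical for every viewer

def global_feed(posts: list[Post]) -> list[Post]:
    """One ranking for everyone, driven only by aggregate engagement."""
    return sorted(posts, key=lambda p: p.score, reverse=True)

def personalised_feed(posts: list[Post], profile: dict[str, float]) -> list[Post]:
    """The same posts, reweighted by how much this particular user engaged
    with each topic before; the per-user profile is the 'black box' part."""
    return sorted(posts, key=lambda p: p.score * profile.get(p.topic, 1.0), reverse=True)

posts = [Post("Kernel release notes", "linux", 120),
         Post("Cat picture", "cats", 300),
         Post("Outrage bait", "politics", 90)]

print([p.title for p in global_feed(posts)])                          # same order for everyone
print([p.title for p in personalised_feed(posts, {"politics": 5.0})]) # boosted for whoever keeps clicking it
```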
So comparing Lemmy to YouTube/Instagram/Facebook/Twitter and the like is a rather poor comparison.
Lemmy’s simpler algorithm still has the same problem though. That’s been seen time and time again on Reddit: humans will actively curate a feed of content they find engaging and avoid content they disagree with. That leads down exactly the same rabbit holes as letting an algorithm curate a personalised feed for them.