- cross-posted to:
- apple@lemdro.id
- hardware@lemmy.world
2010: We want bigger batteries, they give us colorful phones
2015: We want bigger batteries, they give us 1mm thinner phones
2020: We want bigger batteries, they give us 5 cameras
2025: We want bigger batteries, they give us AI
Phones are a great example of the utter failure of capitalism to address what people actually need and want.
They also keep taking away features, like removable storage (microSD) and headphone jacks. There are a few phones that still have them, but they get harder to find as time goes on.
Create a problem, sell a solution. It’s so annoying.
headphone jack -> sell bluetooth headphones
microSD -> sell cloud storage
Late stage capitalism
Create a problem, sell a solution
Under the guise of “innovation”
The Sony Xperia flagship has both. I think it’s the only Snapdragon 8 Gen 3 phone that does.
@Zerthax They’ll say they are responding to consumers who “want cloud storage and wireless headphones,” but options are always better, in my opinion.
*folds phone in half*
That’s two batteries for the price of one.
More like two batteries for the price of two phones; foldables are still expensive AF
Two half-size batteries for the price of three full-size phones coming right up
I would like colourful phones back though, they were so much more fun compared to the sea of black/white/grey + ONE option in the blue-purple spectrum we have today.
Can we get that AND bigger batteries? Bigger colourful batteries, even?
And cameras! Don’t replace 12mp 2x telephotos with 48mp 1x digital zoom cameras pls
Steve Jobs proved that consumers don’t actually know what they want until you tell them. And it’s the manufacturer’s job to tell them what they want and deliver it.
Since Apple doesn’t want a bigger battery that means no one gets a bigger battery.
jobs was an ass. and smelled like it
Nah man, everyone around him must have been smelling something else. He was on the all fruit diet.
fruit. that explains it
It feels like yesterday some guy was arguing against me here on Lemmy about my personal choice of wanting a longer battery life.
WELL LOOK AT ME NOW BRO
No bro, it’s totally better to get 5-6 hours of battery and AI cause like it’s so incredible bro
Thank you for the verification can daddy
Your battery’s still short of 4 hours. 😁
Battery usage:
- 86%
- 3 days ago (last charge)
- 18 days left
That’s what I currently have with close to no usage. With usage it’s around 10 days in total. When using GPS it depends.
Wait, your phone battery lasts 18 days?! Is it a Nokia flip phone?
It’s an Oukitel.
Oukitel
Wow, that’s nuts! I have never heard of them, but that is super cool, and I’m glad I know about them now. Thank you!
There’s a lot of rebrands of these phones. If you get a rugged phone from an unknown brand it’s very possible it’s an Oukitel rebrand.
I’ve had a few but I mostly take one that doesn’t look too rugged. Enjoyed every one of them. They are also pretty easy to repair. (If you are able to remove the screen)
Oh wow, shocking, people actually cared more about usability than trashy features? That’s unheard of
Can we please stop calling LLMs artificial intelligence?
No. Strictly and technically speaking, LLMs absolutely fall under the category of AI. You’re thinking of AGI, which is a subset of AI, and which LLMs will be a necessary but insufficient component of.
I’m an AI Engineer; I’ve taken to, in my circles, calling AI “Algorithmic Intelligence” rather than “Artificial Intelligence.” It’s a far more fitting term for what is happening. But until the Yanns and Ngs and Hintons of the field start calling it that, we’re stuck with it.
Where’s the intelligence in spitting out samples from big data vaguely related to the prompt?
Let’s just settle on SI, Schrödinger intelligence.
I like your definition. Algorithmic intelligence fits much better. And thanks for giving me a rabbit hole (AGI) to dive into.
Approximate Intelligence fits just as well me thinks
I know more humans that fit that description than language models.
I see that
If you don’t think this counts as AI, can you give us an example of some function or behavior that you would consider AI?
Reasoning, sentience, and the ability, over time, to improve. There’s more, but that’s the top three.
I like AI and my phone to be separate. Chatgpt is just an app, it shouldn’t be a core feature
How about making a phone that’s a whole millimeter thicker just to make the glass thick and strong enough that it won’t break if you drop it?
Great idea! Unless of course the replacement of parts and broken phones is a core part of the business model.
Rubbish. If my phone isn’t so thin that it can double as a knife, it’s not worth buying.
There are a few ruggedized phones out there. I bought some cheap Oukitel phones to use as order pads in a restaurant I used to run, because I was fed up with waitresses dropping and breaking pads. When I sold the business, I kept one. I use it mainly on my boat, as GPS, plotter, speedometer, weather…
The thing drops, gets wet, handled without care.
These phones exist. They are not top performance dogs, but they can be quite decent. Why aren’t they in the front line? Because of demand.
Even if it were thicker I’d still slap on a sacrificial glass screen protector atop it. I’ve dropped my phone only a handful of times, and so far have only ever broken the protector.
Just slap a shield on it, there’s your added thickness and better drop resistance all in one!
Smartphone buyers care more about the thing they’ve been begging for, for years? You don’t say… And mobile phone manufacturers are again and still going to ignore what people actually want in favor of expensive and non-functional vaporware, like they always do?
You don’t say!
yeah but you can’t inflate your stock value based on hype about battery life.
people forget that these features aren’t for users. it’s for idiots who invest in ridiculous shit hoping it’ll be the next big thing.
I think the battery system that’s best for everyone would be user-replaceable batteries. That way you can have an extra battery on hand to swap in as needed, or even extra-capacity batteries that make your phone a little thicker for people who are okay with that.
Those of us who do actually prefer thinner, lighter phones can still have them (maybe with a slight increase in thickness to accommodate the attachment mechanisms). Plus bigger batteries are a huge waste of resources if the capacity isn’t going to be used.
that was a thing in the early days. most clamshells had em and a few flat panels (called candybars)
The first few Galaxy phones. Pretty much all of the first few generations of smartphones, except Apple’s.
yep. first one i had with a non removable battery was the lg v30. battery was removable but you voided the warranty to do it and it required opening the entire case with a knife edge
In fairness the removable battery came with a pretty significant tradeoff.
Water resistance.
Many would happily take a reduction in water resistance for replaceable batteries, the problem is no one gives us the choice
EDIT: inaccurate statement. Fairphone offers removable batteries
There are phones that give you this choice. The Fairphones, for example. The back cover is easily removable and you can pop out the battery like in the ol’ days. It has an IP55 rating as far as I know.
That sounds sweet, I’ll consider Fairphone once my current android dies its not so noble death
At that point I think many would just get a decent powerbank. I’d prefer a larger capacity battery, 7000-10000mah even if the phone is slightly heavier and bigger. Especially for travel.
I disagree, swappable battery > power bank.
Used to have a swappable battery. It was great, you could have like 3 of em and instantly be able to get back to 100% without having to be attached to a cord. I wish I could do the same for my SteamDeck now, it would be great :'(
yeah and with a swappable system with a couple battery sizes you could do that. and I could choose a slimmer battery.
I used to have a power bank case for an old phone that had a weak battery. Battery got low I would just turn on the power bank in the case and charge the phone. It doubled the thickness of the phone but I don’t think it really bothered me at the time. This was the Amazon fire phone from 2014? You could get them for $100 and get a free year of prime. I rooted it and installed some custom os on it.
yeah I agree those are a good option too, but that doesn’t solve the issue of replacing a worn out battery. that’s why I think we need swappable batteries.
They’re pushing AI so hard but most people just see it as a gimmicky thing. The only people who care are the investors.
Also I want my OS to be an actual OS with root access. I want Linux in particular.
Like you know, you can set up a file share to back up files. You can back up your phone and get a new one easily. If you lose a phone you can bring it back. Your files organized the way you want, and not some things here and some things there like the apps want.
No! I don’t care about battery! I want to become more dependent on advertising companies to arrange my daily life!
Do people here actually use AI? And if so…for what?
I use AI for what Google used to be able to do: Finding answers to simple questions. Usually about tech but sometimes movies or music. Like how do I add a physical volume to LVM, or what are the specs of this little fan model? Or who was that actress in a movie about kids buried in a collapsed building? Things like that…
But does it actually link the source or does it just say a basic answer and you have to take a leap of faith?
It links to the original article it found so you can check its work, which is nice. It’s perplexity.ai if you’re curious. I find it quite useful. And as much as AI makes shit up I wouldn’t trust it otherwise.
Cool. Yeah, I think the best use case of AI is just gonna be better search of unorganized data. Having said that though, it would never be as good as a good search engine with organized data.
Summarizing, drafting things, understanding complex things that are filled with jargon, etc.
People are treating AI like crypto, and on some level I don’t blame them because a lot of hype-bros moved from crypto to AI. You can blame the silicon valley hype machine + Wall Street rewarding and punishing companies for going all in or not doing enough, respectively, for the Lemmy anti-new-tech tenor.
That and lemmy seems full of angsty asshats and curmudgeons that love to dogpile things. They feel like they have to counterbalance the hype. Sure, that’s fair.
But with AI there is something there.
I use all sorts of AI on a daily basis. I’d venture to say most everyone reading this uses it without even knowing.
I set up my server to transcribe and diarize my favorite podcasts that I’ve been listening to for 20 years. Whisper transcribes, pyannote diarizes, gpt4o uses context clues to find and replace “speaker01” with “Leo”, and then it saves those transcripts so that I can easily search them. It’s a fun hobby thing, but this type of thing is hugely useful and applicable to large companies and individuals alike.
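The “speaker01 → Leo” step described above can be sketched in a few lines of plain Python (no models involved; the tag format and the `relabel` helper are made up for illustration, though `SPEAKER_00`-style tags are what diarizers typically emit):

```python
def relabel(transcript_lines, name_map):
    """Replace diarizer speaker tags with human names in 'TAG: text' lines.

    Tags not present in name_map are left untouched, since the LLM step
    may only identify some of the speakers.
    """
    out = []
    for line in transcript_lines:
        tag, _, text = line.partition(": ")
        out.append(f"{name_map.get(tag, tag)}: {text}")
    return out

lines = [
    "SPEAKER_00: Welcome back to the show.",
    "SPEAKER_01: Thanks, Leo.",
]
renamed = relabel(lines, {"SPEAKER_00": "Leo"})
```

In the setup described, the `name_map` itself would come from the gpt4o context-clue pass rather than being hand-written.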
I use kagi’s assistant (which basically lets you access all the big models) on a daily basis for searching stuff, drafting boilerplate for emails, recipes, etc.
I have a local LLM with RAG that I use for more personal stuff. Like, I had it do the BS work for my performance plan using notes I’d taken from the year. I’ve had it help me reword my resume.
I have it parse huge policy memos into things I actually might give a shit about.
I’ve used it to run though a bunch of semi-structured data on documents and pull relevant data. It’s not necessarily precise but it’s accurate enough for my use case.
There is a tool we use that uses CV to do sentiment analysis of users (as they use websites/apps) so we can improve our UX/CX. There’s some ML tooling that can also tell if someone’s getting frustrated by the way they’re moving their mouse, if they’re thrashing it or whatnot.
There’s also a couple of use cases that I think we’re looking at at work to help eliminate bias, for things like parsing through a bunch of resumes. There’s always a human bias when you’re doing that, and there’s evidence that shows LLMs can do it with less bias than a human, and maybe it’ll lead to better results or selections.
So I guess all that to say is I find myself using AI or ML/LLMs on a pretty frequent basis, and I see a lot of value in what they can provide. I don’t think it’s going to take people’s jobs. I don’t think it’s going to solve world hunger. I don’t think it’s going to do much of what the hype bros say. I don’t think we’re anywhere near AGI, but I do think that there is something there, and I think it’s going to change the way we interact with our technology moving forward, and I think that’s a great thing.
So here’s the path that you’re envisioning:
- Someone wants to send you a communication of some sort. They draft a series of bullet points or a short version.
- They have an LLM elaborate it into a long-form email or report.
- They send the long-form to you.
- You receive it and have an LLM summarize the long-form into a short-form.
- You read the short form.
Do you realize how stupid this whole process is? The LLM in step (2) cannot create new useful information from nothing. It is simply elaborating on the bullet points or short version of whatever was fed to it. It’s extrapolating and elaborating, and it is doing so in a lossy manner. Then in step (4), you go through ANOTHER lossy process. The LLM in step (4) is summarizing things, and it might be removing some of the original real information the human created in step (1), rather than the useless fluff the LLM in step (2) added.
WHY NOT JUST HAVE THE PERSON DIRECTLY SEND YOU THE BULLET POINTS FROM STEP (1)???!!
This is idiocy. Pure and simple idiocy. We start with a series of bullet points, and we end with a series of bullet points, and it’s translated through two separate lossy translation matrices. And we pointlessly burn huge amounts of electricity in the process.
This is fucking stupid. If no one is actually going to read the long-form communications, the long-form communications SHOULDN’T EXIST.
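The round trip being ridiculed can be parodied in a few lines of Python. The filler and truncation functions below are toy stand-ins for the two LLM steps (purely illustrative, deterministic ones, so the loss is easy to see):

```python
def elaborate(bullets):
    """Toy stand-in for step 2: pad each bullet into corpo-speak prose."""
    return " ".join(
        f"It is worth noting that {b}, which matters greatly." for b in bullets
    )

def summarize(text, keep=2):
    """Toy stand-in for step 4: keep only the first `keep` sentences."""
    return ". ".join(text.split(". ")[:keep])

bullets = ["the budget is frozen", "the ship date slips to May", "Ann owns the rollout"]
long_form = elaborate(bullets)    # step 2: lossy expansion
recovered = summarize(long_form)  # step 4: lossy compression
# The third bullet never survives the round trip.
```

Even with two trivially simple, deterministic steps, a point gets dropped; real LLM steps are noisier still, and can add fabrications on top of the deletions.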
Also, neither side necessarily knows the other’s filter chain. Generational loss could grow exponentially. Not only loss but addition by fabrication, each side trading back and forth indeterminate deletions/additions. It’s worse than traditional generational loss. It’s generational noise, which can resemble signal too.
So if I receive a long form then how do I know if the substantial text is worth reading for the nuance from an actual human being. I can’t tell that apart from generated filler. If a human wrote the long form then maybe they’ve elaborated some nuance that deserved long form.
On the flip side of the same coin. If I receive a short form either generated by me or them. Then to what degree can I trust the indeterminate noisy summary. I just have to trust that the LLM picked out precisely the key points that the author wanted to convey. And trust that nuance was not lost, skewed, or fabricated.
It would be inevitable that two sides end up in a shooting war. Proverbial or otherwise. Because two communiques were playing a fancy game of telephone. Information that was lost or fabricated resulted in an incident but neither side knows which shot first because nobody realized the miscommunication started happening several generations ago.
Yep, pretty much every single “good” use case of AI I’ve seen is basically a band aid solution to enshitification.
You know what’s a good solution to that? Removing the profit motive.
That’s not what I am envisioning at all. That would be absurd.
Ironically, gpt4o understood my post better than you :P
" Overall, your perspective appreciates the real-world applications and benefits of AI while maintaining a critical eye on the surrounding hype and skepticism. You see AI as a transformative tool that, when used appropriately, can enhance both individual and organizational capabilities."
if you believe that ai summary, i have a bridge that i’d like to sell to you.
As the author of the post it summarized, I agree with the summary.
Now, tell me more about this bridge.
do look up the “forer effect” and then read that ai summary again.
Haha, yea, I’m familiar with it (always heard it called the Barnum effect, though it sounds like they are the same thing), but this isn’t a fortune-cookie-esque, Myers-Briggs response.
In this case it actually summarized my post (I guess you could make the case that my post is an opinion that’s shared by many people, so forer-y in that sense), and to my other point, it didn’t misunderstand and tell me I was envisioning LLMs sending emails back and forth to each other.
Either way, there is this general tenor of negativity on Lemmy about AI (usually conflated to mean just LLMs). I think it’s a little misplaced. People are lumping the tech in with the hype bros: Altman, Musk, etc. The tech is transformative and there are plenty of valuable uses for it. It can solve real problems now. It doesn’t need to be AGI to do that. It doesn’t need to be perfect to do that.
The problem is basically this: if you’re a knowledge worker, then yes, your ass is at risk.
If your job is to summarize policy documents and write corpo-speak documents and then sit in meetings for hours to talk about what you’ve been doing, and you’re using the AI to do it, then your employer doesn’t really need you. They could just use the AI to do that and save the money they’re paying you.
Right now they probably won’t be replacing anyone other than the bottom of the ladder support types, but 5 years? 10? 15?
If your job is typing on a keyboard and then talking to someone else about all the typing you’ve done, you’re directly at risk, eventually.
Mostly stupid stuff involving Sailor Moon for me. Using the lie machine for anything but funny pictures seems like maybe a bad idea at the moment.
I use it to summarize things for me. Or rewrite something I’ve written a bit better. I usually need to spot check it, but it’s still nice to have.
rewrite something I’ve written a bit better
Woah, that’s the biggest bummer of a reason I’ve seen for it. If you read good stuff and write stuff you’d get better at it.
It’s just like any tool.
I use photoshop for instance to edit photos rather than editing them in paint.
Sure I might be able to do the same thing without it, but it makes the process much faster.
I really hope the AI hype dies off
deleted by creator
I don’t get what those companies try to achieve by automating writing (by spewing statistically probable prose), reading (by badly summarizing text cobbled from excerpts without the ability to make any sense of it), art, photography, music, all standardized to the lowest common denominator.
I’m not buying a new device that will try to impose any of this hype. For now, Apple has decided to “punish” the users in the European Union by holding the Apple Intelligence features hostage. FINE BY ME!
edit: typo/phrasing
Yea. There are very few machine learning driven features that would actually improve my life in a meaningful way. I feel much more „punished“ by the omission of iPhone mirroring on mac than any Apple Intelligence feature.
Serious question: What would you do with iPhone mirroring? Because I have it, and I have no idea what to do with it.
Having it open on my mac while I’m working on it so I can access message apps that don’t work on the desktop without having to take out my phone.
In all fairness, it’s not really necessary, but it‘d make my life a little easier for a use case I actually have.
I’m guessing you are using 3rd party message apps? If so, that makes perfect sense. Work smarter, not harder.
Yea. Although I do use iMessage with a few people, it’s not really a big thing here in Germany, so I also do use different apps. The main app, that requires me to get out my phone, is Snapchat, as there’s no desktop app and the webapp sucks.