I’m learning a language, and I speak it in public with other people who do. I don’t research the language online, because I have some old textbooks for it. My partner doesn’t speak it and doesn’t research it on their devices. I don’t normally have my phone on me in public, but my partner does. It took about four months of speaking the language in public before they started getting ads for it.

What do you think this means?

::edit::

It was a Reddit ad, and my city has embraced those AI smart cameras, so I assume some of them are Google-owned, which would fit with Reddit and Google’s recent alliance. This is assuming our devices aren’t listening to us without our permission and that AI cameras are mining data on passersby.

Another theory is that, since cellphones are involved, it doesn’t matter whether I or my partner ever searched for the language: at some point my phone or my partner’s phone was near someone who spoke it, and the data brokers/ad sellers inferred it from there.

The consensus seems to be that I must have posted in the language on some social media, used Google to research it, or made some new friends who speak it, and that’s why.

  • otp@sh.itjust.works · 2 months ago

    I think “the microphones are listening when they’re off” is still a conspiracy theory at this point. It isn’t needed for them to get enough information.

    Are there any ways that Google could find out that you’re interacting together?

    • Do you share an IP address/router?
    • Do you watch YouTube videos in that language?
    • Do you use any messaging apps where you speak that language with other people, but also speak with your partner?
    • Do you access any Google services (with a Keyboard for that language installed) that your partner also accesses?
    • Do you use location services that could pinpoint both of your locations to the same street address?
    • Does your partner interact with any of the people you’re learning that language with? (E.g., social media friends, contacts, people living in the same geographic region)
    • Is your device on the same network as your partner’s? (Wi-Fi or Bluetooth)

    I’m not saying these are all ways that Google uses, but I believe each of them is a way Google could associate that language with your partner.
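
    To make the household-linking idea concrete, here’s a rough sketch of how an ad platform could cluster devices that share an IP or home location and let an interest tag spread between them. The signals, IDs, and heuristic are all made up for illustration; this isn’t anything Google documents.

    ```python
    # Hypothetical sketch: link devices into a "household" from shared signals,
    # then let interest tags spread across the link. All values are invented.
    from collections import defaultdict
    from itertools import combinations

    # Per-device signals a platform could plausibly observe
    observations = {
        "device_A": {"ip": "203.0.113.7", "home_geohash": "u4pruyd", "interests": {"language_X"}},
        "device_B": {"ip": "203.0.113.7", "home_geohash": "u4pruyd", "interests": set()},
        "device_C": {"ip": "198.51.100.9", "home_geohash": "u4pruyk", "interests": set()},
    }

    def same_household(a, b):
        """Crude heuristic: same public IP or same coarse home location."""
        return a["ip"] == b["ip"] or a["home_geohash"] == b["home_geohash"]

    # Build household links, then propagate interests across them
    links = defaultdict(set)
    for (id_a, a), (id_b, b) in combinations(observations.items(), 2):
        if same_household(a, b):
            links[id_a].add(id_b)
            links[id_b].add(id_a)

    for dev_id, neighbours in links.items():
        for other in neighbours:
            observations[dev_id]["interests"] |= observations[other]["interests"]

    print(observations["device_B"]["interests"])  # {'language_X'} -- the partner now gets the ads
    ```

    None of this needs a microphone; a shared router or a shared home address is enough signal on its own.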

    • EngineerGaming@feddit.nl · 2 months ago

      Yea, that is my problem with the “always listening” theory. I am sure they’re capable of it, but I don’t think they’re doing it, simply because they can get more data at a fraction of the cost through more “traditional” tracking.

      In a way, it is scarier than listening - because listening is far easier to understand than the multitude of ways the data is collected and combined.

      • otp@sh.itjust.works · 2 months ago

        Exactly. They definitely could, but there’d also be potential legal issues, and it’d just be much more expensive to analyze sound data.

        If it’s done on each device, then battery life would suffer and performance would decline. Sure, they could do that, but I imagine most phone manufacturers would rather sell more phones and make money from app companies (Meta, Google) that pay to have their apps pre-installed on the phone. Or they’re Samsung and Apple, who have their own ecosystems for mining data the way Google does.

        If they were instead just uploading audio to central servers (which could mitigate legal issues due to “anonymizing” the data), then they’d be paying for the computational power to analyze all that data.

        Again, completely possible, and likely in use with things like Alexa and Google Home. But on our phones (and laptops for that matter), they have so many other cheaper ways to get probably the same quality of information.
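
        Just to put rough numbers on “much more expensive” (back-of-the-envelope estimates only, nothing measured from a real device):

        ```python
        # Rough cost comparison: continuous audio vs. ordinary tracking events.
        SECONDS_PER_DAY = 24 * 60 * 60

        # Raw microphone audio: 16 kHz, 16-bit, mono
        raw_bytes_per_sec = 16_000 * 2                      # 32 KB/s
        raw_gb_per_day = raw_bytes_per_sec * SECONDS_PER_DAY / 1e9
        print(f"raw audio:        ~{raw_gb_per_day:.1f} GB/day per device")    # ~2.8 GB

        # Heavily compressed speech, e.g. a ~24 kbps voice codec
        codec_bits_per_sec = 24_000
        codec_mb_per_day = codec_bits_per_sec / 8 * SECONDS_PER_DAY / 1e6
        print(f"compressed audio: ~{codec_mb_per_day:.0f} MB/day per device")  # ~259 MB

        # Ordinary tracking: a few hundred ~1 KB metadata events per day
        events_kb_per_day = 300 * 1_000 / 1e3
        print(f"tracking events:  ~{events_kb_per_day:.0f} KB/day per device") # ~300 KB
        ```

        Multiply the audio rows by a couple of billion devices, then add the compute to transcribe it all, and the “cheaper ways” argument pretty much makes itself.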

        • EngineerGaming@feddit.nl · 2 months ago

          It is not about “too risky”. It is about “costs much more in processing power while providing a fraction of the info”.

          • ArcaneSlime@lemmy.dbzer0.com · 1 month ago

            Tbh, my theory wouldn’t cost too much more.

            My theory is not that they’re always listening and sending a live audio stream back to some dingus in headphones. The device has to be always listening for the trigger words, right? That’s how it works: you say “yo siri” and she hears you, so it must be listening at least for the trigger word. With that in mind, what’s to stop them from using other, secret trigger words, which might even behave differently than the advertised ones? Say I’m Joe Bridgestone, and I pay Google to add “tires” or “new tires” as silent trigger words, and instead of “activating” the google bitch it sends ads for Bridgestone Tires. Why wouldn’t that be possible? It’d also be harder to catch than “literally a 24/7 hot mic.” Tbh I find it hard to believe the NSA hasn’t at least tried to set this up for words like “bomb” or whatever, too.
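
            To be clear about the mechanics: wake-word detection already works by running a tiny always-on model over short audio windows on the device, and nothing leaves the phone until a keyword matches. Here’s a toy version of that loop; the scoring “model,” the names, and the ad callback are invented, and the SILENT_TRIGGERS part is the speculative half of my theory, not something anyone has demonstrated:

            ```python
            # Toy on-device keyword-spotting loop. NOT any vendor's real pipeline;
            # the scoring "model" is a stub and the silent-trigger list is hypothetical.
            import random

            PUBLIC_WAKE_WORDS = {"yo siri"}
            SILENT_TRIGGERS = {"new tires": "bridgestone_tires_campaign"}  # the speculative part

            def score_window(audio_window: list[float], keyword: str) -> float:
                """Stand-in for a tiny keyword-spotting model (random score here)."""
                return random.random()

            def on_wake_word() -> None:
                print("assistant activates and starts streaming audio to the cloud")

            def on_silent_trigger(campaign: str) -> None:
                print(f"no visible activation; interest '{campaign}' quietly added to the ad profile")

            def process(audio_window: list[float], threshold: float = 0.95) -> None:
                # The "always listening" part: every ~1 s window is scored against every keyword
                for kw in PUBLIC_WAKE_WORDS:
                    if score_window(audio_window, kw) > threshold:
                        on_wake_word()
                for kw, campaign in SILENT_TRIGGERS.items():
                    if score_window(audio_window, kw) > threshold:
                        on_silent_trigger(campaign)

            process([0.0] * 16_000)  # one second of "audio" at 16 kHz
            ```

            Only the public wake-word loop is known to exist; the second list is exactly the part nobody has caught anyone doing.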

    • aviation_hydrated@infosec.pubOP · 2 months ago

      Yeah, the only one that would match is geolocation, and I’m a pretty offline-first person, so there’s no overlap in internet history; my footprint this summer is pretty much just Lemmy.

      Which makes me think it could be all the AI smart cameras recording interactions.