Users don't have to wait 24 hours because the Google Play Store already requires developers to register. Scammers can be held liable when Google knows who the developer of the malicious app is.
Really though? Who is in jail right now for Play Store malware offenses? Or are we just talking about some random person in China or Russia who signed up with a prepaid card and fake information, and eventually had their Google account shut off?
I'll give you that: enforcement of the rules can sometimes fail. But scamming and malware are a global industry, definitely not limited to state-funded actors in those two countries (which is what I think you're referring to).
> Allow a toggle with no waiting period during initial device setup
I like this idea in principle but I think it could become a workaround that the same malicious entities would be willing to exploit, by just coercing their victims to "reset" their phones to access that toggle.
Aren't app data, photos, etc. usually synced with the Google account? Besides, Google claims that the scammers are using social engineering to create a feeling of panic and urgency, so I think a victim in that frame of mind would be willing to reset the device and log back in to their accounts.
I'm sure there's a hypothetical scenario where someone successfully runs a scam that way, but there's also a hypothetical scenario where a 24 hour wait doesn't succeed at interrupting the scam.
None of this is stopping a malicious entity. We keep trying to use tech (poorly thought out tech at that) to solve issues of social engineering. And no one is asking for a solution, either; it's being jammed in for control.
Such a silly statement. Of course tech can solve social engineering problems; we do so every day, starting with UX design. This is a good solution for killing urgency.
UX is made for humans, and humans can learn to exploit UX. This is as useless a battle as fighting piracy: you will destroy your product before you solve the problem.
It's a little inconvenient for someone setting up a new phone to have to wait a full day to install unregistered apps. But while I can't speak for others, it's a price I'm personally willing to pay to make the types of scams they mention much less effective. The perfect is the enemy of the good.
How would you feel about needing to wait 24 hours to visit an "unapproved" website on your phone? You would pay Google/Apple $25 to get whitelisted so people can browse to your personal website without getting a scary security message.
This is the same thing since it applies to all apps, not just apps that need special permissions.
I don't think it's fair to extend the analogy to what amounts to censorship of websites, since that's not the system they're proposing. Also, isn't the owner of a website already identifying themselves when they register their domain name and/or rent a server? That's not the same as downloading an app by an unknown developer.
From the article I understood this to be a one-time delay, as opposed to having to go through the same waiting process for every single "unlicensed" app I want to install (which I would not accept). I'm just waiting 24 hours once to permanently change my device into a mode where I can install any app I like without any restrictions/delays whatsoever.
On what basis do you believe that it will meaningfully reduce the dollars lost or people harmed by fraud, as opposed to simply shuffling around the exact means used?
Well, maybe nothing ultimately changes. Maybe we end up in a world where Android users have to wait 24 hours to change a setting, after which their devices will install any apps they want with no further delays. But this seems to me like a relatively low cost for a potentially huge benefit for victims.
Give me a break bro. Google are among the biggest crooks in the game and knowingly allow all kinds of fraudsters to use their ad platform. This is all about ensuring their cut.
Google has a library of millions of scanned books from their Google Books project that started in 2004. I think we have reason to believe that there are more than a few books about effectively playing different traditional card games in there, and that an LLM trained with that dataset could generalize to understand how to play Balatro from a text description.
Nonetheless I still think it's impressive that we have LLMs that can just do this now.
Winning in Balatro has very little to do with understanding how to play traditional poker. Yes, you do need a basic knowledge of different types of poker hands, but the strategy for succeeding in the game is almost entirely unrelated to poker strategy.
I think I weakly disagree. Poker players have an intuitive sense of the statistics of various hand types showing up, for instance, and that can be a useful clue as to which build types are promising.
>Poker players have an intuitive sense of the statistics of various hand types showing up, for instance, and that can be a useful clue as to which build types are promising.
Maybe in the early rounds, but deck fixing (e.g. Hanged Man, Immolate, Trading Card, DNA, etc) quickly changes that. Especially when pushing for "secret" hands like the 5 of a kind, flush 5, or flush house.
I would also like to add that as language models improve (in the sense of decreasing loss on the training set), they in fact become better at compressing their training data ("the Internet"), so a model that is "half a terabyte" could represent many times more concepts in the same amount of space. Simply comparing the size of the internet against the size of a model may obscure this.
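To make the loss-to-compression link concrete: by the standard Shannon/arithmetic-coding argument, a model with average cross-entropy L nats per token could in principle compress its training text to L/ln(2) bits per token. A back-of-envelope sketch, with purely hypothetical corpus sizes and loss values:

```python
import math

def compressed_size_bytes(num_tokens: float, loss_nats_per_token: float) -> float:
    """Shannon bound: a model with average cross-entropy `loss` (nats/token)
    can, in principle, compress its training text to loss/ln(2) bits/token."""
    bits_per_token = loss_nats_per_token / math.log(2)
    return num_tokens * bits_per_token / 8

# Hypothetical numbers for illustration only: a 10-trillion-token corpus.
tokens = 10e12
for loss in (3.0, 2.0, 1.0):  # lower training loss => tighter compression
    size_tb = compressed_size_bytes(tokens, loss) / 1e12
    print(f"loss {loss:.1f} nats/token -> ~{size_tb:.2f} TB")
```

So halving the loss roughly halves the space the same corpus "costs", which is why a fixed-size model effectively holds more as training improves.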
Why? A sample covering 28% of the population is positively huge compared to what most statistical studies have to work with. The accuracy of an extrapolation is determined mostly by the underlying sampling bias, not the amount of data. If you have some basis to suggest that capturing "only bugs with fix tags" creates a skewed sample, that would be grounds to distrust the extrapolation, but simply claiming "it's only 28%" does not make it worth noting.
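A quick simulation of the point, with entirely made-up numbers: a large but biased sample misestimates a rate badly, while an unbiased sample of the same order lands right on it.

```python
import random

random.seed(0)
# Hypothetical population: 100k bugs, 30% of which are "severe".
population = [random.random() < 0.30 for _ in range(100_000)]
true_rate = sum(population) / len(population)

# Unbiased 28% sample: the estimate tracks the truth.
unbiased = random.sample(population, 28_000)

# Biased sample of similar size: severe bugs are twice as likely to be
# included (e.g. if severe bugs are more likely to get a "fixed" tag).
biased = [x for x in population if random.random() < (0.56 if x else 0.28)]

print(f"true rate       : {true_rate:.3f}")
print(f"unbiased sample : {sum(unbiased) / len(unbiased):.3f}")
print(f"biased sample   : {sum(biased) / len(biased):.3f}")
```

The biased sample here is even larger than the unbiased one, yet its estimate is far off; more data does nothing to fix a skewed inclusion rule.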