Hacker News | silver_sun's comments

Users don't have to wait 24 hours, because the Google Play Store already has registered developers. Scammers can be held liable when Google knows who the developer of a malicious app is.


Really though? Who is in jail right now for Play Store malware offenses? Or are we just talking about some random person in China or Russia who signed up with a prepaid card and fake information and eventually had their Google account shut off?


I'll give you that: enforcement of the rules can sometimes fail. But scamming and malware are a global industry, definitely not limited to state-funded actors in those two countries (which is what I think you're referring to).


> Allow a toggle with no waiting period during initial device setup

I like this idea in principle but I think it could become a workaround that the same malicious entities would be willing to exploit, by just coercing their victims to "reset" their phones to access that toggle.


That wipes all the data on the device and requires logging back in to accounts. It seems to me that's high enough friction to resist most coercion.


Aren't app data, photos, etc. usually synced with the Google account? Besides, Google claims that the scammers use social engineering to create a feeling of panic and urgency, so I think a victim in such a frame of mind would be willing to reset the device and log back in to their accounts.


Some is, some is optional, some isn't.

I'm sure there's a hypothetical scenario where someone successfully runs a scam that way, but there's also a hypothetical scenario where a 24 hour wait doesn't succeed at interrupting the scam.


The perfect is the enemy of the good.


Which applies just the same to the hypothetical option during initial device setup.


I don't think it does because of the workaround I mentioned upthread.


The victim also can't be on the phone with the scammer using that device during the setup process. We're talking about a very high-friction scenario.


None of this is stopping a malicious entity. We keep trying to use tech (poorly thought out tech at that) to solve issues of social engineering. And no one is asking for a solution, either; it's being jammed in for control.


Such a silly statement. Of course tech can solve social engineering problems; we do so every day, starting with UX design. This is a good solution for killing urgency.


UX is made for humans. Humans can learn to exploit UX. This is as useless a battle as fighting piracy: you will destroy your product before you solve the problem.


Social engineering is destroyed with education, not with restriction and control.

Trading freedom for safety eliminates both.


It's a little inconvenient for someone setting up a new phone to have to wait a full day to install unregistered apps. But while I can't speak for others, it's a price I'm personally willing to pay to make the types of scams they mention much less effective. The perfect is the enemy of the good.


How would you feel about needing to wait 24 hours to visit an "unapproved" website on your phone? You would pay Google/Apple $25 to get whitelisted so people can browse to your personal website without getting a scary security message.

This is the same thing since it applies to all apps, not just apps that need special permissions.


I don't think it's fair to extend the analogy to what amounts to censorship of websites since that's not the system they're proposing. Also isn't the owner of a website already identifying themselves when they register their domain name and/or rent a server? I think this is not the same as downloading an app by an unknown developer.

From the article I understood this to be a one-time delay, as opposed to having to go through the same waiting process for every single "unlicensed" app I want to install (which I would not accept). I'm just waiting 24 hours once to permanently change my device into a mode where I can install any app I like without any restrictions/delays whatsoever.


On what basis do you believe that it will meaningfully reduce the dollars lost or the number of people harmed by fraud, as opposed to simply shuffling around the exact means used?


Well maybe nothing ultimately changes. Maybe we end up in a world where Android users have to wait 24 hours to change a setting so that their devices will install any apps they want, from then on with no further delays. But this seems to me like a relatively low cost for a potentially huge benefit for victims.


Give me a break bro. Google are among the biggest crooks in the game and knowingly allow all kinds of fraudsters to use their ad platform. This is all about ensuring their cut.


Google has a library of millions of scanned books from their Google Books project that started in 2004. I think we have reason to believe that there are more than a few books about effectively playing different traditional card games in there, and that an LLM trained with that dataset could generalize to understand how to play Balatro from a text description.

Nonetheless I still think it's impressive that we have LLMs that can just do this now.


Winning in Balatro has very little to do with understanding how to play traditional poker. Yes, you do need a basic knowledge of different types of poker hands, but the strategy for succeeding in the game is almost entirely unrelated to poker strategy.


If it tried to play Balatro using knowledge of, e.g., poker, it would lose badly rather than win. Have you played?


I think I weakly disagree. Poker players have an intuitive sense of the statistics of various hand types showing up, for instance, and that can be a useful clue as to which build types are promising.


> Poker players have an intuitive sense of the statistics of various hand types showing up, for instance, and that can be a useful clue as to which build types are promising.

Maybe in the early rounds, but deck fixing (e.g. Hanged Man, Immolate, Trading Card, DNA) quickly changes that. Especially when pushing for "secret" hands like Five of a Kind, Flush Five, or Flush House.
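For reference, the base-deck frequencies being argued about here can be sketched with a quick Monte Carlo simulation (my own illustration, not anything from the thread; the 8-card hand size and the hand-type checks are simplifying assumptions):

```python
import random
from collections import Counter
from itertools import product

# Rough sketch: how often do common hand types appear somewhere in a
# random 8-card draw (Balatro's default hand size) from a standard
# 52-card deck? These are Monte Carlo estimates, not exact probabilities,
# and they ignore any deck fixing.
RANKS = list(range(13))
SUITS = "SHDC"
DECK = list(product(RANKS, SUITS))

def hand_types(cards):
    """Return the set of hand types present somewhere in the draw."""
    rank_counts = Counter(r for r, _ in cards)
    suit_counts = Counter(s for _, s in cards)
    types = set()
    if max(rank_counts.values()) >= 2:
        types.add("pair")
    if max(rank_counts.values()) >= 3:
        types.add("three of a kind")
    if sum(1 for c in rank_counts.values() if c >= 2) >= 2:
        types.add("two pair")
    if max(suit_counts.values()) >= 5:
        types.add("flush")
    return types

def estimate(trials=20000, hand_size=8, seed=0):
    rng = random.Random(seed)
    totals = Counter()
    for _ in range(trials):
        totals.update(hand_types(rng.sample(DECK, hand_size)))
    return {t: totals[t] / trials for t in totals}

if __name__ == "__main__":
    for t, p in sorted(estimate().items(), key=lambda kv: -kv[1]):
        print(f"{t:16s} ~{p:.1%}")
```

In an unmodified deck a pair shows up in the large majority of 8-card draws, while a natural flush is far rarer; that gap is exactly the kind of prior that deck fixing then distorts.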


They said it builds Linux 6.9; maybe you are trying to compile a newer version there?


    git switch --detach v6.9

The riscv build succeeded. For the x86-64 build I ran into

    % make O=/tmp/linux/x86 ARCH=x86_64 CC=/tmp/p/claudes-c-compiler/target/release/ccc-x86 HOSTCC=/tmp/p/claudes-c-compiler/target/release/ccc-x86 LDFLAGS=-fuse-ld=bfd LD=ld.bfd -j30 vmlinux -k
    make[1]: Entering directory '/tmp/linux/x86'
    ...
      CC      arch/x86/platform/intel/iosf_mbi.o
    ccc: error: lgdtl requires memory operand
      AR      arch/x86/platform/intel-mid/built-in.a
    make[6]: *** [/home/ray/Dev/linux/scripts/Makefile.build:362: arch/x86/realmode/rm/wakeup_asm.o] Error 1
    ld.bfd: arch/x86/entry/vdso/vdso32/sigreturn.o: warning: relocation in read-only section `.eh_frame'
    ld.bfd: error in arch/x86/entry/vdso/vdso32/sigreturn.o(.eh_frame); no .eh_frame_hdr table will be created
    ld.bfd: warning: creating DT_TEXTREL in a shared object
    ccc: error: unsupported pushw operand
There are many other errors.

tinyconfig and allnoconfig have fewer errors.

    RELOCS  arch/x86/realmode/rm/realmode.relocs
    Invalid absolute R_386_32 relocation: real_mode_seg
Still very impressive.


They said that it wasn't able to support 16-bit real mode; it needs to call gcc for that.


I feel like I could have done this in a much shorter time, with far fewer tokens, but it's still very impressive!


> this isn't just a matter of teasing out the compressed C compiler written in Rust that's already on the internet (where?)

A quick search brings up several C compilers written in Rust. I'm not claiming they are necessarily in Claude's training data, but they do exist.

https://github.com/PhilippRados/wrecc (unfinished)

https://github.com/ClementTsang/rustcc

https://codeberg.org/notgull/dozer (unfinished)

https://github.com/jyn514/saltwater

I would also like to add that as language models improve (in the sense of decreasing loss on the training set), they in fact become better at compressing their training data ("the Internet"), so a model that is "half a terabyte" could represent many more concepts in the same amount of space. Comparing only the relative sizes of the internet and the model may not make this clear.


Their section on "Dataset limitations" says that the study "Only captures bugs with Fixes: tags (~28% of fix commits)."

Just worth noting that it is a significant extrapolation from only "28%" of fix commits to assume that the average is 2 years.


Why? A sample size of 28% is positively huge compared to what most statistical studies have to work with. The accuracy of an extrapolation is mostly determined by underlying sampling bias, not the amount of data. If you have any basis to suggest that capturing "only bugs with fixes tags" creates a skewed sample, that would be grounds to distrust the extrapolation, but simply claiming "it's only 28%" does not make it worth noting.
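The size-vs-bias point can be made concrete with a toy simulation (my own numbers, not the study's; the exponential latency distribution and the "biased sample = fastest fixes" scenario are hypothetical):

```python
import random
import statistics

# Toy illustration: with a fixed population of bug-fix latencies, a
# uniform 28% sample estimates the population mean well, while a biased
# 28% sample (here, hypothetically over-representing the quickest fixes)
# badly underestimates it. Sampling bias, not sample size, drives the error.
rng = random.Random(42)
population = [rng.expovariate(1 / 2.0) for _ in range(100_000)]  # mean ~2 years

uniform = rng.sample(population, 28_000)   # 28%, chosen uniformly at random
biased = sorted(population)[:28_000]       # 28%, skewed toward the fastest fixes

print(f"population mean: {statistics.mean(population):.2f} years")
print(f"uniform 28%:     {statistics.mean(uniform):.2f} years")
print(f"biased 28%:      {statistics.mean(biased):.2f} years")
```

The uniform 28% sample lands within a few hundredths of the true mean; the equally large biased sample misses it by a wide margin. So the question that matters is whether Fixes:-tagged commits differ systematically from untagged ones, not whether 28% is "enough" data.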


Interesting. Could you elaborate? Is there a specific reason that it doesn't do 100% of the work already?

