Hacker News — temp123789246's comments

OpenAI initially claimed that GPT-2 was too dangerous to release in 2019.

How many times will labs repeat the same absurd propaganda?


The claim I remember was that releasing it would start an arms race for AGI, which I think it clearly did.


Anthropic and OpenAI have very different cultures and ethos. Point to other times Anthropic has gone in for cheap marketing tricks, then look at OpenAI. Not even close.


Anthropic has done plenty of cheap marketing tricks of late; see their recent non-functional C compiler, which relied on a harness built around GCC's entire test suite.


It is functional. You can try it yourself or find third-party tests of it, even. Why do you think that it's a "cheap marketing trick" to test it on the GCC test suites?


Not surprising, given that they don't even know why claude-code works or doesn't work as it did before [1]; i.e., there is no known theory of operation. That explains why they are afraid of it.

[1] https://news.ycombinator.com/item?id=47660925


I think Boris will come and say there is no issue with claude code.


Alternative view: GPT2 was indeed a risk to society, but we just keep raising the bar and "accepting" the risks.


GPT2 was definitely a risk, just not of the same magnitude. It would have (and did!) make social media bot farms way more convincing and widespread. There was specific worry about that being used to sway elections, which is why they held back the model.


OpenAI did not make the strong specific claims about GPT2's abilities that Anthropic is making about Claude Mythos.


One requirement for a programming language to be “good” is that doing this, with sufficient specificity to get all the behavior you want, will be more verbose than the code itself.


Theory: Any system, legal or otherwise, that denies the Axioms of Reality, will eventually fail.

Axiom of Reality: “Intellectual Property” does not exist.


I lol’d


“Two gin-scented tears trickled down the sides of his nose. But it was all right, everything was all right, the struggle was finished. He had won the victory over himself. He loved Big Brother.”


In the same way that people struggle to comprehend exponential growth, they seem to also struggle to comprehend the cost of inaction, compounded over time.

Imagine if the steam engine had not been allowed by regulators during the time of the Industrial Revolution.

If that happened and we were all still working on farms today, I bet half the people would be telling us how much safer the government was making us with all its regulations. In blissful ignorance.


Indeed. Thank you for writing this and speaking up in public.

Many of the comments here that essentially reply to your article by saying “regulation is good, stop criticizing it”, are deeply depressing. That is a regulatory mind virus that must be destroyed before it kills us.


Does anyone know why, anecdotally, the slowness of type inference seems to be more of a pain point in Swift than in OCaml, ReScript, PureScript, Haskell, etc.?


Is it that Haskell, at least, doesn't support overloading in the same way as Swift? I don't know either of them well enough to be sure.

It seems like there's a combinatorial explosion of possible overloads in Swift, whereas if you implement a function with the same ergonomics in Haskell (e.g. a printf-like function), the only thing the compiler has to do is ask "Does type X have an implementation for typeclass Show? Yes? Done."

Essentially Haskell solved this overload inference problem in the same way that iterators solve the M*N problem for basic algorithms: convert all these disparate types to a single type, and run your algorithm on that.
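As a rough sketch of that "single type" idea: GHC compiles typeclass constraints into dictionary passing, which can be written out by hand in OCaml (the record type and names below are made up for illustration; OCaml itself has no typeclasses):

```ocaml
(* A "Show dictionary": one record type standing in for many concrete types *)
type 'a show = { show : 'a -> string }

(* One value per type, analogous to Haskell's `instance Show Int` etc. *)
let show_int  = { show = string_of_int }
let show_bool = { show = string_of_bool }

(* Overloaded code takes the dictionary explicitly; resolving a call means
   picking the right record, not searching a combinatorial set of overloads *)
let display d x = d.show x
```

So `display show_int 42` yields `"42"`, with the "overload" resolved by a single dictionary lookup.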


"Does type X have an implementation for typeclass Y" isn't always easy to answer.

https://aphyr.com/posts/342-typing-the-technical-interview


That post, while awesome (as is the rest of aphyr's stuff), is a lot to wade through to get to the point you're trying to convey. Can you spell it out for me?


That typeclass resolution can encode some heavy computation, the example being n-queens in the article.


That's only the case when you turn on the "enable arbitrary computation in typeclasses" flag, so I'd say it's not much of a worry.


I'm not an expert on the theory, but OCaml has a very fast compiler, and while it is (almost) capable of fully reconstructing the types of a program with no annotations, it doesn't have to deal with ad-hoc polymorphism, and it takes shortcuts like weak polymorphism when inference gets too hard: https://www.ocaml.org/manual/5.2/polymorphism.html
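A minimal example of that shortcut, per the linked manual chapter: a `ref []` is given a weak type variable rather than a truly polymorphic one, and its first use fixes the type.

```ocaml
(* r gets the non-generalized type '_weak1 list ref, not 'a list ref *)
let r = ref []

(* the first use pins the weak variable: r is now an int list ref *)
let () = r := [1]

(* r := [true] would be rejected here, since r's type is already fixed *)
```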


Try this:

    let f0 = fun x -> (x, x) in
    let f1 = fun y -> f0 (f0 y) in
    let f2 = fun y -> f1 (f1 y) in
    let f3 = fun y -> f2 (f2 y) in
    let f4 = fun y -> f3 (f3 y) in
    let f5 = fun y -> f4 (f4 y) in
    f5 (fun z -> z)

Lifted from https://dl.acm.org/doi/pdf/10.1145/96709.96748 via Pierce, Types and Programming Languages.


But that's just a type that is huge. I didn't want to wait for the full thing to finish, but if I drop f5, I get a type that is 1.6 megabytes long when printed without spaces.

It's still very fast for "normal-size" types: that reduced version compiles in 151 milliseconds.
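The blowup is visible already in the first two definitions from the snippet above: `f0` pairs its argument with itself, and each `fn` composes the previous level with itself, squaring the number of components in the inferred type (names reused from that snippet):

```ocaml
let f0 = fun x -> (x, x)       (* 'a -> 'a * 'a : 2 leaves *)
let f1 = fun y -> f0 (f0 y)    (* 'a -> ('a * 'a) * ('a * 'a) : 4 leaves *)

(* self-composition squares the leaf count: 4, 16, 256, 65536, ...
   so f4's result type already has 2^16 leaves, megabytes when printed,
   which matches the 1.6 MB observation above *)
```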


Wait what? In Haskell the types are usually directly inferrable from the arguments they're being used as, and when you put a type annotation it's usually not explicit types (Num a => a -> b -> c).

I almost never bother putting types in Haskell, unless I want to guarantee some constraint, in which case I typically use typeclasses. Maybe I'm just weird but I don't think so. One of the very few things I actually like about Haskell is how good the type inference is.


My guess is the different extensions to the Hindley–Milner type system, which is EXPTIME-complete in the worst case.

Plain HM isn't bidirectional, and with universal quantifiers only at the top level its pathological cases rarely show up in practice, so it's probably the features they added on top of it.


Congrats!

I’ve been watching HVM for a while and think it’s extremely cool.

My intuition is that this will eventually be a really big deal.


TIL what “fnord” is. I think that is a perfect way to describe much of this article.


_The Illuminatus! Trilogy_, where I learned this word, is a fantastic read.

