Just an FYI... Claude is actually really good at building LSP servers [1].
If you want a better Julia LSP, you might just be able to get Claude or Codex to build one for you. I've been impressed with the TLA+ bindings it generated.
Claude has certainly been getting better with TLA+. It's not perfect yet but for laughs I got it to model the rules of Monopoly last night [1]. I haven't done any exhaustive checking on it yet, but it certainly looks passable.
It's pretty impressive how good it's gotten at this, in a relatively short amount of time no less. I still usually write my specs by hand, but who knows how much longer I'll be doing that.
It looks quite complicated and I have no idea what it is doing. Obviously, since I don't know about TLA+. But what about someone who knows TLA+? It still seems hard to make sure it is valid. And it's just for a relatively simple game.
I've played a bit with having a Claude generate a TLA+ model, I review it, Claude reviews it, and then Claude checks for alignment with the actual system. The code in the system can be annotated with comments to link back to the model, and vice versa, to make human and AI review easier.
I think it's highly promising. I might even call it a success. For example, I was able to express a bug in the system, and Claude was able to find me a property that would reveal it in the model; it then revised the model to eliminate the bug, and revised the code accordingly.
Additionally (with OpenAI), I have a small guest management system for which I had the LLM generate a TLA+ model of the API: which endpoints a client can request, which tokens it learns from them (there are session IDs, event IDs, guest IDs, and invitation IDs that interact in interesting ways, since guests are not required to have an invitation to the system and can have only a magic-link-protected account, resulting in different visibility of other people's contact details), and what data it can or cannot access with those tokens.
I still haven't manually verified, but it seems rather promising as well.
> Decline to buy: property stays with bank (auction abstracted out)
Ignoring an entire game mechanic is really stretching the definition of “abstracted out”…
Also, at the bottom it defines a “Liveness: someone eventually wins” property which I believe cannot be proven. Monopoly doesn’t have any rules forcing the game to end eventually. There is only a probabilistic guarantee, and even that only applies if the players are trying to win; if the players are conspiring to prevent the game from ending then they’re unlikely to fail.
From the link above, I also found Leslie Lamport's home page [1]. Leslie is the original developer of TLA+, and his home page contains an incredible amount of info about TLA+: there's a published book with a PDF available, plus video courses and talks.
There are also git repos of examples [2] and tools [3].
What's the advantage of provable correctness if it's apparently not easy to prove even for people who understand TLA+? I'm not trying to be a party pooper, just curious.
Isn't logical incorrectness less of a problem in software than failures of imagination or conscientiousness in modeling the domain?
My thought (as someone interested in formal verification but unable to grok the math) is it exists as a canary in any sufficiently-complex codebase I let AI create. Even if it's wrong, knowing that something "we" changed breaks an agreement things currently assume is valuable.
When I was eighteen, I went to Something Awful, Newgrounds, ThatGuyWithTheGlasses, GameTrailers, Cinemassacre, YouTube, and SpoonyExperiment daily. Nowadays it's basically just YouTube for all that stuff (though I haven't watched Spoony for quite a while).
Newgrounds is still around, I probably should make more of an effort to go there, and I do have stairs in my house, but I definitely don't go on as many different sites as I used to.
I certainly miss the days when everyone had their own web page.
I'm not sure how I feel about this; wasn't part of the point of core.async to push all this stuff into channels? I'm not convinced that having a JS-style async keyword is an upgrade.
This is about using a JS feature without bringing in any additional dependencies like core.async. You don't have to use it and you can still use core.async. This was the most asked for feature in the recent ClojureScript survey.
I have some very smart friends who think it's the perfect language, but I kind of prefer almost every language that has come out after C++. I feel like the language adds some very strange semantics in some very strange places that can be hard to reason about until you've spent a lot of time with the language. This wouldn't necessarily be so bad if not for the fact that most people who write C++ have not spent sufficient time to understand it (and I consider myself in that group, though I don't write C++).
I have mixed feelings on D, but I'm very grateful that Rust came along. Rust is arguably even more complicated than C++, but the good thing is that getting a lot of these complications wrong will simply not allow your code to compile. It's a huge pain in the ass at first but I've become grateful for it.
I still write C very occasionally, but Rust has supplanted like 95% of jobs that were formerly C. I still really need to play with Zig.
Personally I use C, or really anything other than C++, if I need something more ergonomic or provably correct. Many excuse C++’s design history with “they didn’t know better”, but the oft-forgotten history explained in that video shows otherwise.
I haven't played with Zig yet, but it does look like what I want C++ to be: something unapologetically low-level but given some high-level constructs for safety and ergonomics. I haven't had a huge impetus to learn it because Rust has been happily occupying that space for me, but it certainly can't hurt to play with Zig.
I should write an FFmpeg codec with it or something this weekend to try it out.
I've been having fun trying out Zig recently; would recommend. FYI, they released a big batch of API changes in April (version 0.16.0), so be mindful of learning from old resources.
Would you say Rust is more complicated than C++, or just harder to write? I think that complication a la C++ where there are too many features for the language's own good is a different problem than being a clean language but with higher order features that require a bit more, I don't know, focus?
I make a strong distinction between complexity, the inherent amount of moving parts in a thing, versus complication, the amount of frippery involved in dealing with the complexity.
Rust is complex. It's solving complex problems. It's not complicated, though. There's not much you could remove without creating leaky abstractions.
In my opinion, C++ is equally complex. It's solving the same kinds of problems as Rust (although it'd be fairer to say "Rust is solving the same problems as C++"). However, it's hella complicated. There's a vast number of twists and turns to keep in mind if you want to use it, and most of them are things you could not have anticipated by reasoning about it from first principles.
If you took the design goals of Rust, and reinvented it from scratch, it'd probably end up looking a lot like Rust. If you were to reinvent C++ from scratch, I bet it wouldn't remotely resemble modern C++. If anything, I bet it would also end up looking like Rust, and the fractal of powerful footguns would be left on the cutting room floor because "that's insane, there's no way anyone would want that".
I think there are certainly different complications in Rust than C++. The RAII can be roughly the same in the naive sense, but there's having to worry about Send and Sync and Pin and fighting with the borrow checker and all that fun stuff.
Gauging how "complicated" a language is can be somewhat subjective, so it's kind of hard for me to give a straightforward answer. I think it's certainly easier to be (some definition of) productive with C++ than with Rust. I feel like to do anything even remotely non-trivial in Rust, you kind of have to understand everything, because if you don't do it the "Rust way", it often won't compile. I think this is a good thing, but it does make it harder to get started.
C++ has a lot less consistency and (kind of) more features, and lots of strange semantics to go with those features, and so if people actually use them it can get confusing and hard to read pretty quickly.
My knowledge is a bit out of date, to be clear; previously, whenever I needed something in the C++ domain, I could fairly easily just reach for C and use that instead. Now Rust is available and I think overall better (though I do sometimes miss how utterly simple and dumb C is).
> having to worry about Send and Sync and Pin and fighting with the borrow checker and all that fun stuff.
To be fair, the alternative to having to worry about Send/Sync/Pin is not "not worrying about Send/Sync/Pin". It's having to worry about correctly enforcing the constraints they describe on your own, without any kind of mechanical help. E.g., not moving data to another thread that shouldn't be and not accessing data from multiple threads that shouldn't be. This stuff is intrinsic.
In this sense the Rust mental model is simpler, because failing to uphold these constraints is no longer "your fault", it's Rust's fault.
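For example, a minimal sketch (mine, not from the thread) of what that mechanical help looks like: Rc uses non-atomic reference counts, so it is !Send and the compiler refuses to let it cross a thread boundary, while Arc is fine.

    use std::rc::Rc;
    use std::sync::Arc;
    use std::thread;

    fn main() {
        // Rc<T> is !Send; uncommenting these lines is a compile error:
        // let rc = Rc::new(5);
        // thread::spawn(move || println!("{}", rc)); // `Rc<i32>` cannot be sent between threads safely

        // Arc<T> uses atomic reference counts, so it is Send and this compiles:
        let arc = Arc::new(5);
        let handle = thread::spawn(move || println!("{}", arc));
        handle.join().unwrap();
    }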
I don't disagree; I was coming from Java most recently, and a lot of the equivalents of "Sync" and "Send" were just mental notes I was making myself, and I wasn't really used to it being encoded into the type system.
Yeah, and writing thread-safe Java is fraught with hidden shotguns and footguns, because whether something is thread-safe depends on a bunch of notes about this class or method not being thread-safe.
And relying on people to check them. Versus a compiler.
> have to understand everything, because if you don't do it in the "Rust way", it often won't compile
I confess I haven't dug into it much yet, but this reminds me of how Haskell was. By the time you got a program to compile your project was more or less done.
In some senses I actually find Rust a bit harder than Haskell. With Haskell, data is immutable and as such it can be happily shared everywhere. That required basically no rewiring in my brain.
With Rust, I had to get used to single ownership or explicit cloning. There's an argument that this is "better", but I found it a bit harder to learn.
You can choose to make all your things immutable and have the same "no rewiring" as Haskell. Sometimes that's an excellent idea, sometimes it's obviously a terrible idea, many times you'd have to measure to find out and IMHO in Rust you should write the easiest thing in those cases and only measure if your perf isn't acceptable.
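A rough sketch of that choice, using nothing beyond std: immutable data shares as freely as Haskell's, and it's only mutation that forces the move/clone decisions.

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // Immutable data can be shared freely, Haskell-style:
        let config = Arc::new(vec!["a".to_string(), "b".to_string()]);
        let c2 = Arc::clone(&config); // cheap pointer copy, not a deep clone
        thread::spawn(move || println!("{:?}", c2)).join().unwrap();
        println!("{:?}", config); // the original is still usable here

        // Mutation is where single ownership bites:
        let mut owned = vec![1, 2, 3];
        let snapshot = owned.clone(); // explicit deep clone to keep a copy
        owned.push(4);
        println!("{:?} {:?}", snapshot, owned);
    }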
Rust is similar. I don't have a link handy, but I've read a few articles describing the experience of programming in Rust, where a program that compiles works. Provided the logic is correct, of course.
> I do sometimes miss how utterly simple and dumb C is
You can make an argument that K&R C "was" an utterly simple and dumb language, but if so, that language is long gone, and it's also irrelevant for modern hardware.
Today, because C has only a single kind of reference, the raw pointer, if you want references at all (which you do) you need pointers, and to get decent performance from this sort of language you need pointer provenance. So now all your reference types involve understanding compiler-internals minutiae. Bad luck though: those aren't specified in the C language standard; that's a TODO item from the turn of the century. The committee agrees that C pointers do have provenance but declines to explain how that could possibly work.
Rust's provenance model itself stabilized some time ago, as basically the Tower of Weakenings. If you noticed that the raw-pointer provenance APIs are all marked stable since Rust 1.84, that's because the underpinning model is also stable. However, provenance rules need to care about aliasing, and Rust's aliasing rules are still to some extent a WIP. So there is definitely a creaky floorboard in the specific case where you want to do pointer twiddling, your pointers might arguably alias, and you'd like clarity about exactly where the line is.
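For the curious, here's a small illustrative sketch of those strict-provenance APIs (map_addr/addr, stable since 1.84), doing the classic low-bit pointer tagging they're meant to legitimize:

    fn main() {
        let x: u32 = 42;
        let p: *const u32 = &x;

        // map_addr changes the address while keeping the original pointer's
        // provenance, unlike a round-trip cast through usize.
        // u32 is 4-byte aligned, so the low two bits are free for a tag:
        let tagged = p.map_addr(|a| a | 0b01);
        assert_eq!(tagged.addr() & 0b11, 0b01);

        // Strip the tag before dereferencing; provenance was never lost,
        // so this read is sound:
        let untagged = tagged.map_addr(|a| a & !0b11);
        unsafe { assert_eq!(*untagged, 42) };
    }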
I haven't seen places where they wanted this, but they definitely can exist. In the cases I'm thinking of any valid pointers are definitely unique (so no aliasing), or they're definitely pointing at something immutable (so aliasing is fine) or both and so there's no problem as I understand it.
There is an outstanding issue with LLVM - for any language including Rust - that it has unsound optimizations for pointers and this has implications for provenance tricks, but as I said that's not Rust specific and I think worse there are signs the same illness afflicts the GCC backend so maybe it's worse than "LLVM is buggy" and is a wider problem in how compiler developers have thought about this vague unspecified problem.
I didn't grow up with the PC Engine or TurboGrafx, but I did start getting into it after I got the MiSTer.
I find that the unsuccessful [1] consoles are generally pretty bad. The 3DO and the Jaguar and CD-i are mostly pretty crappy, and while it can be fun to play for a novelty and you might even find one or two games that make it worth it, the vast majority of the time there's a reason that people don't seriously revisit these consoles. To be clear, I did grow up with a 3DO and Jaguar (well, I got both when I was thirteen), so I'm not speaking out of my ass here.
So I was very surprised that the TurboGrafx games were actually quite good. Like, I kept going through random games and was shocked to find that a lot of them were very well made: decent graphics, tight controls, and fun gameplay.
There are terrible games on there, but I was kind of shocked to find that they appear to be outliers.
Now I kind of wish I had grown up with the TurboGrafx.
It was actually a big success in Japan. NEC America completely botched everything in the US market.
In Japan it came out in 1987, two years before the Sega Genesis. In the US it launched a few weeks after the Genesis. The pack in game was based on an anime that didn't come out in the US.
Its big weakness was probably that putting button batteries on HuCards was awkward, so it didn't really have many longer, slower-paced games with save systems.
I always thought the 3DO was good hardware, for 1993, let down by a high price (especially by 1993 standards) and a poor library of games: there just wasn’t a good incentive to buy the thing.
Fast forward a couple of years and the PlayStation and Saturn are both out, more technically capable, cheaper, and with better game libraries.
However, I really do think the main thing that killed the 3DO was the price - it was wildly and ridiculously expensive compared with the alternatives - and the lack of games sort of followed naturally from that because who’s going to develop for a console hardly anyone bought?
The inter-generation timing of its launch probably didn’t help either. Who’s forking over that kind of money when you know Sega, Nintendo and - it turns out - Sony are all going to release something better in a year or two?
I agree; if the console had been half the price it might have fared better.
Another issue is that pretty much every good game on there came out on other platforms, and often the other platform's version was actually better anyway. Return Fire, for example: great game, a lot of fun, but the PlayStation version is better in pretty much every way. Same with Road Rash, or The Horde (in a cheesy way, on the Saturn), Alone in the Dark (on the PC), etc. Despite being called the 3DO, it pretty much always had the absolute worst version of 3D games.
There are a few exclusives, and even a couple that are ok, but certainly not enough to justify the price tag. Even compared to the Saturn (a system that also tends to have pretty mediocre 3D experiences), it's pretty obvious that the 3DO is a bad deal.
> the main thing that killed the 3DO was the price
I believe this was because 3DO didn't make the consoles themselves; they licensed the design to others who made them. The console had to be profitable on its own, rather than being sold at a loss and subsidized by game sales.
I always wondered if the better approach would have been to have the 3DO company subsidize the development of the consoles in proportion to how many games were sold, and maybe to how many of those specific consoles were sold. E.g., if they dedicated half the licensing proceeds to paying back the console makers, and 80% of the 3DO consoles sold were Panasonic, then they'd give 40% of the licensing fees back to Panasonic.
I'm sure there are issues with this kind of scheme, but I think it's possible that by doing that it could have brought the console price down.
I wouldn't call the PC Engine unsuccessful. Sure it did poorly in the US, but it did well in Japan, even outsold the Mega Drive (Genesis). That's definitely part of the reason it had quite a few good games by Japanese companies.
Yeah, that's why I specified that it's relative. I honestly hadn't even heard of the Turbografx until I was a teenager, but I had a Sega Genesis and Super Nintendo.
It's a real tragedy for North American gamers because it seems like we were robbed of a lot of truly amazing games for the PC Engine/TurboGrafx due to mistakes on the business side of things.
The PC Engine, released in 1987, was an incredible success in Japan, rivalling the Famicom, as its graphical effects blew the socks off Famicom players. I imagine it would have done the same in North America if it had released in 1987 or 1988 but sadly it somehow took until late August 1989 for it to be released! Years late!!!! By this point the NES was firmly established and the same month, the truly 16 bit Genesis would be released. Turbografx didn't stand a chance.
The Jaguar catalog was mostly meh, but there were a few gems that I really enjoyed (Alien vs Predator, Doom, Wolf 3D, Iron Soldier, and Tempest 2000). AvP alone was worth the price of the system.
I was a proud Turbo DUO owner. It had some fantastic multiplayer games, especially 5-player Bomberman. That got a lot of play, including tournaments at work. Dungeon Explorer is another one we played through multiple times.
I played through both the Space Marine and Alien campaigns of Alien vs Predator in BigPEmu, and I really think it's a mediocre-at-best Wolf 3D clone. If it had been released on anything besides the Jag, I think it would be, at best, relegated to the annals of mediocrity like the dozens of other Wolfenstein 3D knock-offs. I wanted to like it because I love the first two Alien movies, but I really do not think it holds up compared to most FPS games, even for the time.
I haven't fully gone through the Turbo CD catalog yet, but I did play through It Came From Outer Space and that was a blast in a "so bad it's good" kind of way.
Tons of the games were simple shmups, so I can understand people who might have the wrong idea about the library. Also, the library in Japan was way more impressive than what made it to the west. Gekisha Boy in particular still stands out as a fun and unique game (though you can play a translated version on your MiSTer). Fire Pro 3 is an incredible wrestling game that I still play today (no Japanese required). There was another game, Zero4 Champ, which had unique drag-racing gameplay (it was all about timing your gear shifts). It was hard to play without knowing Japanese, except for the two-player head-to-head races. For CD games, of course, Castlevania: Rondo of Blood is well known.
I actually really liked Jackie Chan's Action Kung Fu. The NES version is pretty ok too, but I think the TurboGrafx version is a bit better. There's also the Bonk games, Air Zonk, Alien Crush, and Legendary Axe: all pretty fun games with good art style and decent controls.
Rondo of Blood was the only PC Engine game I had actually played before the MiSTer, and of course it's a classic for a reason; it's great.
Something I find charming in its utter incompetence is the Addams Family game on the CD. I have no idea how the hell it was even released, let alone made, because it is absurdly bad. You play as the lawyer from the movie (the absolute least interesting character in it), and apparently all the Addamses want you actively fucking dead. It's so unbelievably terrible that I sort of got obsessed with it and managed to actually beat it.
Maybe they should have sold the TurboGrafx-16 in the US only with the CD add-on already built in, and one or two CD games included to offset the hardware price somewhat. That would have made them stand out versus the SNES/Genesis.
CD drives were still expensive and a bit flaky at that point. It might have worked by packing in extra games.
Sega of America was just really good at understanding the US market.
Sega did a US launch (August 29, 1989) of the Genesis 9 months after the JP launch. They had EA releasing sports games on the console in 1990.
The TG-16 came out in 1987 in JP but they waited 1 year 10 months to do a US release, so it came out after the Genesis. They didn't get any US developers on board.
A 1988 release of the TG-16 combined with some decent sports games probably would have been a success.
On a recent project I needed to use exFAT. exFAT is terrible for a number of reasons, but in my case the thing I had to deal with was the lack of journaling, which meant files could be corrupted if there were a power interruption or something.
I initially was writing a series of files and doing some quasi-append-only things with new files and compacting the old one to sort of reinvent journaling. What I did more or less worked but it was very ad hoc and bad and was probably hiding a lot of bugs I would eventually have to fix later.
And then I remembered SQLite. I realized that ACID was probably safe enough for my needs, and all the hard parts I was reinventing would be faster and less likely to break if I used something thoroughly audited and tested, so I reworked everything to use SQLite and it worked fine.
I wish exFAT would die in a fire and a journaling filesystem would replace it as the "one filesystem you can use everywhere", but until it does I'm grateful SQLite exists.
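For a sense of the shape this took, here's a rough sketch using the rusqlite crate (the table and names are illustrative, not my actual code); the transaction gives you the all-or-nothing behavior I had been hand-rolling with append-only files:

    use rusqlite::{params, Connection};

    fn main() -> rusqlite::Result<()> {
        let mut conn = Connection::open("data.db")?;
        conn.execute(
            "CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, payload TEXT NOT NULL)",
            [],
        )?;

        // Either both inserts land or neither does, even across a power
        // cut; SQLite's journal handles recovery on the next open.
        let tx = conn.transaction()?;
        tx.execute("INSERT INTO records (payload) VALUES (?1)", params!["first"])?;
        tx.execute("INSERT INTO records (payload) VALUES (?1)", params!["second"])?;
        tx.commit()?;
        Ok(())
    }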
The problem with it is that you didn't solve your biggest actual problem; you just haven't had it bite you in the ass yet, so you think your problem is solved.
It's totally solvable, and SQLite solves it (or claims to, anyway). The real question is whether it works. To test this sort of thing properly you really need what is now called DST (deterministic simulation testing), and I'm not sure SQLite does that. It is pretty well tested, though, so they've probably done at least some testing of it.
I guess some context: I'm not 100% sure it's solvable for the actual domain I'm working in, which is microSD cards; they have a tendency to lie about write success.
I think that is at too low a level for me to realistically solve, but with SQLite it will at least do what little I can: the fact that it's been around for twenty years with extremely thorough testing and frequent updates means it's more likely to be correct than some ad hoc thing I come up with. I think I'm pretty clever sometimes, and I could probably make something as good as SQLite if I really wanted to, but I don't think I'd surpass it, and at that point why not just use SQLite?
As long as they lie in order, or alternatively you have a way of verifying the write (e.g. by reading it back) then you should be able to make it work fairly easily.
If they just completely lie - the data is just cached but never actually written - then you're screwed. There's obviously no way to make a persistent storage device out of something that doesn't persist your data.
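If reads do hit the medium, verify-after-write is straightforward; a hypothetical sketch using only std (write_verified is my name for it, not a real API):

    use std::fs::{File, OpenOptions};
    use std::io::{Read, Seek, SeekFrom, Write};

    // Write a block, flush, then read it back and compare. Only meaningful
    // if the device serves reads from the medium rather than echoing its
    // own write cache.
    fn write_verified(file: &mut File, offset: u64, data: &[u8]) -> std::io::Result<bool> {
        file.seek(SeekFrom::Start(offset))?;
        file.write_all(data)?;
        file.sync_all()?; // ask the OS, and hopefully the device, to flush
        file.seek(SeekFrom::Start(offset))?;
        let mut back = vec![0u8; data.len()];
        file.read_exact(&mut back)?;
        Ok(back == data)
    }

    fn main() -> std::io::Result<()> {
        let path = std::env::temp_dir().join("verify-demo.bin");
        let mut f = OpenOptions::new().read(true).write(true).create(true).open(&path)?;
        println!("verified: {}", write_verified(&mut f, 0, b"hello")?);
        Ok(())
    }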
In my experience it's the latter, as far as I can tell. The data has actually been written like 99.99% of the time, but roughly 1 in 10,000 writes silently isn't.
exFAT has the lovely feature of potentially not only corrupting the file, but also corrupting the metadata for the surrounding system as well. It's terrible.
Everywhere in the sense of "I have a USB stick/SD card, what do I format it to so that every major device I'm using can read it".
In practice, every OS has its preferred system and the rest has varying levels of "I guess this works", with FAT32 and exFAT being the only real cross-platform options.
To wit:
* NTFS is only really properly and fully supported on Windows. Apple mounts it read-only. Linux can certainly mount NTFS and do basic reads and writes. Unfortunately, for whatever reason, the Linux fsck tools for NTFS are absolutely terrible, poorly designed, and generally can't fix even the most basic of issues. At the same time, mount refuses to work with a partially corrupted filesystem, so if you're dealing with dirty unmounts (where the worst case is usually some unclosed file handle rather than data loss, but this also happens if you try to mount a suspended Windows partition, which isn't uncommon since Windows hibernates by default and calls it fast boot), that's a boot into Windows just to fix it.
* Apple filesystems basically only work on Apple devices. It's technically possible to mount them on Linux, but you end up digging into the guts of a bunch of stuff that Apple usually just masks for you.
* ext4 is only properly read/write under Linux and requires external drivers under Windows (which may not work properly either, as corruption issues are common).
FAT32 is reliable in that any OS can fsck/chkdsk it and properly mount it without needing special drivers, but is hindered by ancient filesize limitations. exFAT, at least for most cases, is the only filesystem you can plug into most devices and expect more or less the same capabilities as FAT32 (read/write support, can fix filesystem corruption.)
Out of the OS-specific ones, NTFS seems like it has the most potential to be the one filesystem that works everywhere: it's modern and works good-ish on most devices; it's just that the fsck/chkdsk tooling is awful outside of Windows.
Presumably Microsoft fear making it easy to swap OSes and access the same data.
"I can use Linux because if I get stuck I can just switch to Windows and still access my data" is a comfort that probably keeps people from even trying Linux (or other OSes)?
Why else would MS not support BTRFS/ZFS/Ext or whatever?
".. there is no way I can merge any of the ZFS efforts until I get an official letter from Oracle that is signed by their main legal counsel or preferably by Larry Ellison himself .. Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel, and the licensing issues just make it a non-starter for me. .. The benchmarks I've seen do not make ZFS look all that great. And as far as I can tell, it has no real maintenance behind it either any more, so from a long-term stability standpoint, why would you ever want to use it in the first place?"
I definitely understand why he doesn't want to merge it in and risk potential litigation from Oracle, but I think he's kind of wrong about the rest of what he says.
I don't know what people on Solaris use, but I'm pretty sure everyone in the Linux and BSD communities is running OpenZFS, which does get frequent updates and has been pretty stable as a kernel module for quite a while. My main server at home has been running a RAID-Z2 on Linux for more than six years, and I haven't really had any issues. I run scrubs regularly and things seem to work just fine.
I do wish that Oracle would give written permission to let Linux include it into the kernel, since I think it would make it easier to run ZFS on root (which I don't bother with, I just use btrfs on root and that's fine for single-drive systems, like a laptop).
> Why else would MS not support BTRFS/ZFS/Ext or whatever?
You seriously can’t think of another reason? Filesystems are complex, maintenance is a huge burden, and getting them wrong is a liability. That's reason enough to only support the bare minimum. And 99% of their users don’t care about any of this; NTFS is good enough.
In my mind, in the year 2026, I don't really see the point in using a non-CoW filesystem; it would be nice if the Windows System Restore tool actually worked, and that could be achieved much simpler if there were filesystem-level snapshots.
I believe that's actually how the virtual threads in newer Java work. The runtime is smart enough to notice blocking IO, park the virtual thread, and move the carrier thread on to other work.
I think it's still basically doing epoll behind the scenes [1], but you have straightforward sequential code in the process and the actual implementation is invisible to the user, and you can use old boring blocking code with an object that is a drop-in replacement for Thread.
I personally still kind of prefer the explicit async stuff with Futures and Vert.x since I kind of like the idea that async is encoded into the type itself so you're more directly aware of it, but I'm definitely an outlier for that.
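To illustrate what "encoded into the type" buys you, here's a tiny Rust analogy (not the Java/Vert.x API; block_on comes from the futures crate): the signature itself advertises the asynchrony, and blocking becomes an explicit, visible act.

    use std::future::Future;

    // A caller can't mistake this for a plain u64; the asynchrony is
    // right there in the return type.
    fn fetch_answer() -> impl Future<Output = u64> {
        async { 42 }
    }

    fn main() {
        let fut = fetch_answer(); // nothing runs yet; the future is just a value
        let answer = futures::executor::block_on(fut); // blocking is explicit and local
        println!("{}", answer);
    }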
[1] Genuinely, please correct me if I'm wrong, it's very possible that I am.
You are not. I prefer the same, and that's how my product works right now. My HTTP API is Vert.x-only with futures. My particular use case is thousands of devices sending small payloads to the API at undefined intervals or in bursts, so I find Vert.x's event-loop performance quite a good match. In fact, customer feedback has been very positive thus far.
Background tasks in my app are processed in a different module, which uses a plain old ScheduledExecutorService-based thread pool to poll. The tasks are visible in the UI as well. I still haven't switched to VTs, because I don't know what load implications that may have on the database pool. The JEP says `Do not pool virtual threads` [0]. I assume that if a db connection is not available in the pool, the VT will get parked, but I feel this isn't quite what a background scheduler should look like, e.g., hundreds of "in-process" tasks blocked while waiting for a db connection to free up. Testing this has been on my todo list for some time now.
The JEP doesn't mention epoll, but there is a write-up about it on GitHub: `On Linux the poller uses epoll, and on Windows wepoll (which provides an epoll-like API on the Ancillary Function Driver for Winsock)` [1]
Glad I'm not alone! I find that having the actual asynchrony itself as an object I can play with allows for some nice fine-grained concurrency and lets me be very explicit about when blocking happens.
It makes sense that they would use epoll under the covers; I would have been surprised if they weren't using epoll or io_uring/kqueue.
[1] https://github.com/Tombert/TLA-Language-Server-Protocol