I've always been a bit mystified by the popularity of TUIs. To me, the power of the terminal is the streaming model. Composable utilities are something that is much less common in GUIs.
I get that the constraints of terminals may force the design of TUIs to be more focused on the tool's purpose than on polish, but that's not a very compelling point to me.
For some basic stuff like vim it works fine. But for almost everything else I'd rather have a regular CLI tool or a web interface. I suspect a lot of the popularity comes from people who want to feel like a hacker using 10 terminal windows, but actually want a GUI-like experience.
For me, TUIs compensate for the fact that I can't get good remote GUI rendering on Linux. Yes, X11 tunneling exists, but the experience has always been abysmal for me for anything not hosted on a machine that sits on the same LAN as the client. For Wayland I don't even know if such a thing is possible since I don't think the architecture supports it.
But the terminal is just fundamentally the wrong basic abstraction on which to build a structured GUI; it just happens to require few enough bits to be sent over the wire that it actually works reasonably well over SSH, as opposed to pushing graphics.
I had never tried it until now, and hadn't looked into it. But I just tried `waypipe ssh` to a remote server I have for doing asynchronous Claude work in VMs, and it actually works pretty great! Maybe I'll switch to that for my emacs/magit setup; the lack of clipboard integration when running emacs in a terminal over ssh is enough of an argument for me.
Edit: yikes, pressing M-w caused emacs-pgtk to crash with a Wayland protocol error, so it isn't trivial and requires some configuring I guess.
Edit 2: Apparently I have to install wl-clipboard and write a bunch of emacs lisp to work around this. I don't think I have the patience for that, and I fear that such problems will be even harder to solve for applications that are not as flexible and programmable as emacs. For now I'll conclude that remote Wayland is not ready and stick to TUIs.
Edit 3: No, the problem is probably mismatched waypipe versions on client and server. Still not fun.
> Yes, X11 tunneling exists, but the experience has always been abysmal for me for anything not hosted on a machine that sits on the same LAN as the client.
I have used X11 tunnelling to machines on the other side of Europe and it was OK. I did prefer ssh for responsiveness. What happened to NX? What about other remote desktops?
Did you just invent a non-existent core Wayland developer and attack them for things they didn't say? I am all up for shitting on modern software stacks and praising the Unix philosophy, but I am really turned off by posts trying to make this a question of gender identity and generation, as if that has anything to do with it. Old cis-dudes come up with bad software architectures too.
Sorry, but it is kind of a generational thing. If you grew up in an environment where one set of assumptions always held, you are more likely to think that these assumptions are universals and just not even bother with the possibility that they might not be the case. This goes double if you're young; even bright young devs often have not had the breadth or depth of experience to consider that that crusty old protocol they're trying to get rid of due to "legacy cruft" is the way it is for a reason, one which is still reflected in the use cases of real users working today. This has direct bearing on the Wayland situation: one of the reasons why Wayland is having trouble getting traction against X11 is because the Wayland devs have lost sight of the fact that Wayland is designed to solve one set of problems and X11 another.
And real Wayland developers have made the exact argument I attributed to my fictional one, just with fewer anime emoticons (and "watch this talk by Daniel Stone" as a final mic drop).
Obviously people want GUIs. That's why TUIs should be compared to GUIs, not to CLIs. TUIs are nice since you get a lot of the benefits of a GUI, without having to leave the context of the terminal.
I feel like the better solution here (than trying to shoehorn a GUI into an interface meant for text) is to make terminal windows graphically-aware, like how things work in Plan 9.
I already do use tiling window managers and they don't really accomplish the “if you launch a graphical app in a terminal window it takes over that terminal window” flow. Closest I've found is Niri's support for tabbed windows, but even that's just sticking the graphical app window on top of the terminal window instead of the terminal window itself becoming the app window.
Depends; I'm building a markdown editor that just previews in the web UI. However, I can use the web UI to do tasks like uploading files, viewing git commits, etc. Different interfaces for different purposes. The CLI gives me focused mode, and the visual stuff that steals my time goes in the web UI.
I dunno, pre-LLM TUIs at least tended to be okay, and keyboard navigation was a first-class citizen. Besides, if you were using a TUI instead of a GUI then you basically always ended up saving memory/battery life, and TUI programs are generally more portable than trying to run some ancient GUI program.
I typically prefer CLI myself but having a TUI to manage torrents for instance was much more ergonomic.
A lot of the complaints in this thread seem like they're aimed more at recent vibecoded UIs than the concept of a TUI.
Like, okay, they are a big step back with accessibility, but they're flickering garbage because they were vibecoded in a weekend and the TS or Python library they're built on was similarly forced upon this world.
For almost every TUI, a web UI works better IMO. Most torrent clients offer a web management UI, and it's always going to be easier and more feature-filled using a platform that was actually designed for it rather than hacking a GUI into the terminal.
Well, "better" is relative. Web UIs definitely have the advantage you are mentioning. However, they are also much slower to use, so it is a tradeoff.
I am blind and do rely on accessibility. But I would choose a TUI over a web UI every day, simply because the web is slow as hell. Feels like running away from a "monster" in a dream. You have rich details, but you are being drowned in unnecessary verbosity and an inherently slow stack of tools working together.
The web might be accessible, but it is barely usable in practice.
It's more or less the same with every screen reader and browser I know. If there were a simple fix for this inherent issue, I wouldn't have to talk about it. You might get a bit of stuff out of the way if you configure the screen reader to be less verbose, but it doesn't really help with slow interactions. Problems include:
* Virtual buffer: In most systems, the DOM is rendered to a "virtual buffer" and the screen reader lets you navigate that, because cursor (caret) support was on the map for accessibility, but was never really sufficiently implemented on the browser side. So screen readers had to solve it in a separate step.
* Using the keyboard to do screen reading and navigation on a site conflicts with keyboard shortcuts from that site. So most screen readers have two modes: you switch between entering text in a field and navigating/reading the site. It's basically like vi insert mode, but less simple.
Those are the major issues from my POV. The rest of the slowness likely comes from having to go through the accessibility API of your OS.
Vim is special because 99% of what we do is editing text, and it is the text editor—the importance of that task overcomes the poor discoverability of a TUI. Most other programs should be CLI, so they can fit in the conventional command line toolbox.
For the Claude Code / OpenCode / Crush / etc new wave TUIs, it's not about composability or text streaming. It's basically a combination of a few tailwinds:
1. There's already a large-ish community of engineers who live in the terminal e.g. Vim/Neovim/tmux/zellij/etc users. Lots of engineering tasks are accomplished by running scripts in a terminal, so it makes sense for some people to just move as much of their work there as possible. This means there's a set of users you can address with dev tools that run in a terminal.
2. Cross-platform distribution among the platforms most of those people care about — macOS and Linux — is largely a solved problem via package managers. Distributing cross-platform native apps is fragmented at best.
3. Building modern TUIs has become a lot easier thanks to the demand+distribution wins above: there's a lot of appetite for building blocks, and so lots of good options have flourished like Ink for React, Bubble Tea for Go, etc.
4. General developer distaste for the most straightforward analogue to all of this for desktop GUIs: Electron. Deservedly or not, it's associated with slow, bloated applications. And if you don't use Electron, doing cross-platform anything is going to be a much harder problem than just pushing out a quick TUI app.
Successful products seem to eventually jump the gap, like Claude Code spawning Claude Cowork and OpenCode adding OpenCode Web. But it's easier and faster to test product-market fit for dev tools with a TUI. And plenty of your users will stay there, even after you launch something else.
While I agree in principle (slack works just fine as a firefox tab, thank you very much), if the application needs filesystem access, it's not going to be viable.
These were using 66GB, compared to what, a few KB/MB in ncurses? I can run NetHack/SLASH'EM on a 30-year-old computer. React is a joke, and there are ports of ncurses to almost any OS.
Totally out of fashion today, but think of TN3270. Rather than "streaming", those interfaces were forms-based and heavily keyboard-driven.
This could easily be mimicked by a GUI, but keyboard shortcuts have become an afterthought.
I still today meet users missing those old workflows. But they express it as "old text interface" aka TUI. If you listen to them you realize they mean blazing fast and shortcut driven. When you work with data entry you care about speed - not animations.
Any beginner likes eye candy. The veteran has stopped caring.
2) Constraints imposed by the terminal make all the apps look and work approximately the same - in the outside world the standards developed for UX are ignored as a matter of routine just because they can be. TUIs are in an optimum of least surprise, so to speak.
The command line shell has that benefit of piping text between programs, and TUIs are runnable from the command line shell. So you can get many of the benefits of a GUI (e.g. discoverability) while sticking close to the terminal where you're doing things.
If you're going to "run command, edit command, run command", performing the edits from the terminal you're running the commands in seems reasonable/intuitive. (In contrast, for tools like VSCode, I think it's more common for terminals to take up a fraction of the screen space rather than switching it to full screen. And then developers will say they need a huge monitor).
It also seems to be that keyboard-driven programs are more commonly TUI than GUI. e.g. magit or lazygit. Or lazydocker. Or k9s.
>I've always been a bit mystified by the popularity of TUIs. To me, the power of the terminal is the streaming model.
Ever used Emacs? Or Vim? Or Mutt? Or Borland's old IDEs?
The power of the terminal is also in its ubiquity, trivial connection to a remote system, and lack of mountains of GUI cruft; a TUI app can have all of that as well.
* Lower resource load
* Less reliance on a mouse
* Related: key bindings for more activities that fit my vi muscle memory
* Deeper organization.
Thanks to TUI tools I've been able to roll my own IDE that's invisible when I need it to be (thanks guake & yakuake) and organized by project and tab (thanks zellij). For the mixed role I'm in that works perfectly.
I don't think anybody would look at what I have and call it polished though.
For me it’s mostly
- the convenience of being in the terminal, where I live
- you can use em over ssh
- they’re typically made with keyboard usage in mind, which is often an afterthought in a typical browser based UI
- other GUI options are browser (sandboxes, obvi, not good for lil personal tools), native (not dead simple, compared to TUI/browser/electron), or something like electron (no way lmao)
I don’t seek out TUIs instead of other solutions. But it’s so dang easy to pop open a new pane and run lazygit. And it makes you look really cool when people walk behind you.
I think there's a pretty good chance Adam Back is Satoshi, but I don't think this is a great article. Perhaps he's rendering a careful scientific process in a way that makes for a readable narrative, but as written, it sounds like a lot of gut feel and confirmation bias.
The biggest new contribution to the Satoshi question seems to be ad hoc stylometry. To have faith in his methodology, he should be testing it on identifying other people. If he were to show me a repeatable methodology that doesn't require hand tuning and can identify other people with a low error rate, and it said Back=Satoshi, that would be much more convincing.
Like so much tech writing done by non-engineers, there are many places where mundane things are made to sound remarkable (e.g. Back's thesis used C++, the "heated debate").
It seems very unlikely to me. I've had personal correspondence with Satoshi, and met Adam Back in person, and I can't see it.
Actually I don't see how anyone involved with Blockstream could be identified as Satoshi. They never believed in what Satoshi was doing and built their whole company around the claim that Satoshi had screwed up the core of the system's design, despite that nothing about the design or its assumptions had changed. They spent years raising investor capital (why would you do that if you were rich?) specifically to build a system designed to replace Bitcoin for end users.
The last time I met Adam he was trying to convince me to not continue working on Satoshi's original design, and none of his arguments were technical. Satoshi had a totally different approach.
Satoshi can't spend any of his bitcoins without tanking bitcoin's price. So Satoshi needs to find some other way to support himself. Creating bitcoin related companies is one way.
Nobody knows which coins Satoshi owns, it's just a guess for the very early coins and that guess gets progressively less accurate as time goes by. And this was a long time ago. There was no particular reason to think back then that Satoshi spending his coins would tank the price. Everyone back then was spending Bitcoins because that was the only way to build the economy. The idea that if his coins move everyone would panic is a post-2015 idea when Blockstream killed Bitcoin as a genuine means of exchange and it became all about sitting on them as a speculative "investment".
But if he did want to spend he could just start from his last coins backwards.
This [1][2][3] seems to have a methodology for identifying Satoshi's coins, mined from 2009 to May 2010. But yes, for coins mined after May 2010, he likely can spend without scrutiny.
>The idea that if his coins move everyone would panic is a post-2015 idea
Here are 2 people in 2013 expressing that idea: [4][5].
There's ambiguity here. When people talk about crashing the market they mean if he attempted to sell every last coin he owns for dollars all at once. Of course that would be a signal of lost confidence. What I mean is the more likely scenario of spending coins to achieve some specific goal or project.
1. Someone so purely interested in the tech and not money they'd give up the wealth
2. Governments, specifically the ones that don't consider a few billion to be a lot
3. Someone who's dead
It would also seem likely that if he at any point was alive and realised he wouldn't be able to touch his original wallets, he'd still get in early enough to be rich from subsequently created wallets nobody would suspect.
I find it likely that someone who realised that if they were to touch that paper fortune the Bitcoin price will totally crater would have made additional billions they could actually access by mining more bitcoin at a point where the difficulty was still ridiculously low.
The value of the bitcoins in those early wallets isn't real, because they are the most watched bitcoin wallets in existence, and any movement there would send shockwaves through the crypto space.
First, they could just liquidate all their other BTC before moving the Satoshi coins.
Second, even with a significant price impact, the net worth of an additional 1M BTC would surely outweigh any realistic price slippage.
Third, the price probably wouldn't even crash that much. We're talking about a 5% increase in supply... which, rationally, should result in a ~5% dip in price. I know the market would overreact, but I doubt it would be by much more than, say, 20%.
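For what it's worth, that back-of-envelope math can be sketched in a few lines of Python. The supply, holdings, and price figures below are rough illustrative assumptions, not market data:

```python
# Naive proportional price-impact model (illustrative assumptions only).
total_supply = 19_900_000      # approx. circulating BTC (assumption)
satoshi_coins = 1_000_000      # commonly cited estimate of Satoshi's holdings
price = 100_000                # assumed BTC price in USD

supply_increase = satoshi_coins / total_supply          # ~5%
naive_price = price * (1 - supply_increase)             # proportional ~5% dip
overreaction_price = price * (1 - 0.20)                 # pessimistic 20% dip

print(f"supply increase: {supply_increase:.1%}")
print(f"naive post-sale price: ${naive_price:,.0f}")
print(f"with 20% overreaction: ${overreaction_price:,.0f}")
```

Of course this ignores order-book depth and the small actual float, which is exactly the objection raised in the replies.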
The volume would involve a crazy increase in the actual float, but the bigger issue is that everyone watching knows this is Satoshi, and any sudden move to sell would be a massive shock to the market because the current presumption is that those coins are off the market for the long term. If they are not, it changes a lot of calculations.
And assuming they have lots of other bitcoin to liquidate first, the reasons to liquidate the early coins are basically gone. If they believe the price will continue to rise, and have enough wealth from later coins, they'd have little reason to rock the boat.
This 5% increase is actually huge, because the amount of bitcoins in actual free float is very small. The same coins get traded over and over while the vast majority sits in cold storage. A 5% supply sell off would eat up all of the order books, and that's before everyone else starts selling off.
Do note that the selloff would be automatic, huge wallets are monitored 24/7.
If I were Satoshi, I would go to a big bank and sell the private key directly at a huge discount. Still a billionaire, no trace.
Why is being dead a convincing idea? How old do you think he is/was, and why would it be likely that he would die? When do you think he died? The idea that he died doesn't explain how he came out of hiding twice.
Losing access by intentionally deleting the keys? That agrees with my point that he knows it would cause problems to spend them, and decided not to spend them.
Losing access by accidentally deleting the keys? Would Satoshi really be that careless?
He lost access to the wallet either by mistake (never even saved the key) or because he willingly destroyed the key for philosophical reasons. Or he is just dead.
I have no idea about any of this stuff - but if I were trying to hide my identity I would go out of my way to misalign my real self with my hidden identity.
e.g. Pick a name that puts people on a false trail.
Depends on what you mean by "win". It would be possible to go in, topple the regime, and secure the nuclear material. But only at astronomical cost and years of blowback.
"Regime Change" has become a modern term for vassalization. We should not be surprised that countries with no reason to be a US vassal, and no long-term ties to the US refuse to remain vassals.
So then what would we achieve? Nuclear material is cheap (tens of billions) relative to a multi-decade occupation (single-digit trillions). It's undoubtedly true that Iran would revert to its preferred form of government, geopolitical orientation, and nuclear capability once the US left.
Winning a war means achieving your political goals while preventing the enemy from achieving theirs. Most of the time, you've won the war when the enemy effectively admits they lost.
The lack of will to use sufficient force to win a war is fundamentally no different from not having that force in the first place. Both are equally real constraints on your ability to win the war.
How’d that plan work out in Iraq or Afghanistan, both much smaller, less armed countries? Decades and trillions spent, and what exactly did the US “win”?
Probably a risk worth taking; defending a pipeline is much easier than escorting huge, slow-moving ships through a 24km-wide Strait laced with mines and peppered by artillery and missiles.
Pipelines can be protected. Just put them in the ground, for example. Or build a bomb-proof shelter over them; Iran's missiles are not bunker busters, we know how powerful they are and can design for that. Air defense systems are getting better too.
I did one of these experiments around 2011, and because it was so obvious that the experiment was contrived, there was a lot of misdirection around the actual experiment, which was testing something totally different from the pretense. Like different responses to font color or something like that.