I'm looking for monthly/yearly "no-strings" sponsors, not employment. If any individuals, companies (or bitcoin millionaires) would like to help a long-time OpenBSD slacker unslack: I'd really like to focus more of my time on open source development (and advocacy) rather than on making rent. Feel free to contact me (see HN bio).
On the topic of bitcoin millionaires... I'm getting some sponsorship for my FreeBSD release engineering work from https://opensats.org/ . Have you asked them?
I haven't; honestly, the application processes, paperwork, project reporting expectations and eligibility criteria for these kinds of funds just give me too much anxiety. I never know how to navigate it as a Canadian either, since most of them are US- or Europe-based.
I guess I'm naive to hold out for the anonymous bitcoin millionaires to donate "no strings" until I find something a bit more frictionless.
Thanks anyway for the suggestion, glad to hear you're getting sponsored for your FreeBSD work.
>.. macOS only ever programs CS42L84 to operate at either 48 or 96 kHz, we could only add support for those two sample rates to the Linux driver ..
> However, CS42L42 supports all the other common sample rates, and while the register layout and programming sequence is different, the actual values programmed in for 48 and 96 kHz are the same across both chips. What would happen if we simply took the values for all other sample rates from the CS42L42 datasheet and added those to the CS42L84 driver? As it turns out, you get support for those sample rates!
> The patch to enable hardware support for 44.1, 88.2, 176.4 and 192 kHz sample rates on both the input and output of the headphone jack was submitted directly upstream, and has been merged for 7.1. We also backported this to Asahi kernel 6.19.9, allowing users to take advantage of this immediately.
Nice bit of chip sleuthing and reverse engineering from the Asahi team!
The following is actually the most surprising part to me.
> This is quite limiting, as it forces PipeWire to waste CPU cycles (and therefore battery life) on resampling audio streams that are not either 48 or 96 kHz.
So the Asahi team thinks that only supporting 48 or 96 kHz wastes battery life by forcing the software to resample audio streams. But why does Apple still do this? Presumably Apple has a very high commitment to saving power and increasing battery life.
Always possible that it's the standard commercial software company reason: They do know about it and have a P2 bug tracking it, but the team that maintains that code has 5000 other things to do, and it never gets fixed.
More likely it’s that 48 kHz is a more sensible default, since the majority of non-music digital audio is sampled at 48 kHz, almost anyone who cares about potential audio artifacts introduced by resampling is going to be using an external DAC, and (from an Apple-centric viewpoint) almost anyone concerned about the energy consumption of music playback on their MacBook is going to listen to music on their iPhone instead.
Maybe this is a little pedantic, but we're not talking about a default among the many other available options supported by the chip. We're talking about 48 or 96 kHz being intentionally (or unintentionally) made the only allowed options.
So either someone said "we must disallow the other options" or they didn't and it's a bug.
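For context on what the resampling complaint means in practice: PipeWire can be told which graph rates are allowed, and it will switch the graph rate to match a stream instead of resampling when the hardware supports that rate. A hedged sketch of the relevant config stanza (the drop-in path is illustrative; the property names are PipeWire's documented `default.clock.*` settings):

```
# /etc/pipewire/pipewire.conf.d/rates.conf (illustrative path)
# With only 48/96 kHz available in hardware, PipeWire must resample
# everything else. Once the driver exposes more rates, listing them
# here lets the graph follow the stream's native rate instead.
context.properties = {
    default.clock.rate          = 48000
    default.clock.allowed-rates = [ 44100 48000 88200 96000 176400 192000 ]
}
```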
do we ever get apple engineers rolling thru here or on mastodon? wish stuff like this wasn't such a black box behind the scenes.
I think the only time I've ever run into an Apple engineer was on Mastodon, related to GPTK. It was interesting to see that they actually are quite tuned into what is possible on these devices and what that could mean for gaming. Despite it being a developer toolkit to help studios get a read on the work needed to optimize a game for a Metal port, they were well aware that such tools showed a lot of promise for getting games going on Mac. Not much of a gamer myself, but I thought it was an interesting slice into engineering there, and that they weren't as hostile as HN would believe them to be, and broadly aligned with many of us. I'd be mega curious what Apple engineers think about Asahi.
> do we ever get apple engineers rolling thru here or on mastodon? wish stuff like this wasn't such a black box behind the scenes.
Very rarely. I believe that's because Apple has a culture of secrecy and contractually forbids employees from sharing details about their work in most circumstances (and actually enforces this).
I used to work there and can confirm this. They beat it into you during training when you’re first hired that anything you say can become viral news or be attributed as an official statement from Apple and they are strict about enforcement.
There’s also extreme secrecy both between teams on different projects and even between teams within the same project just working on different parts of it. At least that was my experience.
I did enjoy my time there, but it was a very unique/strange development culture.
I would not agree with your analogy. [0] Turns out the movie industry actually uses infrasound in horror movies. [1] It might also explain why people think old buildings are haunted. [2]
If Apple only focuses on audible sounds, their devices lack the ability to maximize the thrill of horror movies.
Practical reasons vanished ~20 years ago, around the time Microsoft dropped hardware audio acceleration and switched to software audio mixing. Resampling is transparent to the ear and eats less than 1% of even the slowest available CPU core.
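To make the cost claim concrete, here is a minimal linear-interpolation resampler in pure Python, illustrating the kind of work an audio server does when a 44.1 kHz stream must be played on a device fixed at 48 kHz. This is only a shape/cost illustration, not what PipeWire actually does (real resamplers use windowed-sinc filters for quality):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Resample a mono signal by linear interpolation between samples."""
    ratio = src_rate / dst_rate
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(out_len):
        pos = i * ratio          # fractional position in the source
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)
    return out

# One second of 44.1 kHz audio becomes 48 000 output samples.
one_second = [0.0] * 44100
print(len(resample_linear(one_second, 44100, 48000)))  # 48000
```

A couple of multiply-adds per output sample is why this is cheap on a modern CPU, yet still a steady per-sample cost that a battery-conscious vendor might prefer to avoid entirely.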
Just like the mouse that has to be polled 100 times per second or display that is composited from all window buffers at least 60 times per second. It might really be negligible in the grand scheme of things and not worth optimizing for.
You can't imagine how much strain a fast-moving mouse cursor puts on the display server of almost any OS when there are a lot of windows open simultaneously. Also, non-standard cursor sizes/formats are still such a mess everywhere, since even in 2026 there are semi-hardware and software cursors...
This is presumably what Apple does. You kind of have to anyway or you have the stupid situation Linux used to have where only one app could play audio at a time.
> you have the stupid situation Linux used to have where only one app could play audio at a time
When was that? I think my first Linux distribution was Ubuntu 8.04, and I'm fairly sure it shipped with PulseAudio, which in my mind has always been able to play audio from multiple sources at the same time. Maybe I misremember?
As I recall, it was rarely enabled by default and was a pain to set up, so in practice it wasn't really used.
The most common solution at the time was PulseAudio, which was so bad it usually was better to just use direct ALSA and live with the idiotic one-at-a-time limitation.
Thankfully Pipewire seems to actually work reliably so I guess that's at least one thing ticked off the Year of the Linux Desktop checklist.
It was the default on Gentoo long before PulseAudio was the default anywhere. Whether other distros messed up their config I cannot say, but fixing that would have been a lot easier than moving to an entirely different system with an incompatible application interface.
Most distributions shipped ALSA preconfigured with dmix, which means multiple applications could play sound at the same time just fine.
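For reference, a minimal sketch of the kind of dmix routing distributions shipped (file path and device name are illustrative; `plug` and `dmix` are standard ALSA plugins):

```
# ~/.asoundrc — route the default PCM through dmix so several
# applications can write audio at once; dmix performs software
# mixing in shared memory before handing one stream to the card.
pcm.!default {
    type plug        # auto-convert sample format/rate as needed
    slave.pcm "dmix" # share the device via the dmix plugin
}
```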
Which is why the whole "we must use PulseAudio even if it's terrible, or multiple streams won't work!" argument was so weird… everybody who tried knew that after removing PulseAudio, multiple streams kept working :)
So only those who never applied the scientific method kept insisting that without PA it was not possible to do that.
I think PA allows for setting per-application volumes and has a modular design. But it's kind of the poster child of overengineering (challenged by systemd now). Something like sndiod is more sensible for most desktop distros. People that need a more complex setup can bring in the big guns like PipeWire.
I don't think the problem was over-engineering. I think the problem was that if you plugged in headphones it would instantly set the volume to 100% from whatever value it was before.
Plus of course, initially you had to regularly run killall -9 pulseaudio to fix the sound. All at a time when ALSA with dmix worked just fine.
Sometimes I think fedora and ubuntu are trying to hinder linux as mainstream desktop.
Even back then, it could play more than one stream. You had to have a sound card or kernel drivers that supported it (and all non-obsolete ones did by the time pulse audio came out).
I still don’t know what purpose pulseaudio serves, other than adding latency and making stuff less reliable.
PipeWire is better, but it turns out you can just use OSS under FreeBSD these days, and everything just works, with lower latency.
If you have some sort of potato sound card that can't mix output channels in hardware, note that OSS added software mixing by 2007 (with support for 16 channels by default).
Sure, sure. I remember a time when I didn't have a sound card that supported it and couldn't play multiple streams at a time; this is a thing that really happened. I did eventually go out and buy a soundcard to enable multiple streams.
Nonsense - HDA systems were overwhelmingly the majority of Linux systems at that point, and didn't have any hardware support for multiple streams. OSS with software mixing was a commercial product that wasn't upstream. ALSA had userspace mixing but it was very much not an out of the box experience, and didn't take advantage of hardware capabilities in the way Pulseaudio did to reduce wakeups and power consumption.
Even so, surely it would have been easier and better to just fix or replace dmix (in kernel, in the existing data path) than introduce a userspace daemon, break API compatibility, and so on.
It’s been 20 years, and PulseAudio is still flaky, high-latency, and incomprehensible. Professional workflows that care use stuff like JACK.
TBH PipeWire works much better than Pulse, up to the point of replacing JACK itself. But dmix worked fine for non-professional user needs, with very low CPU usage. Yes, professionals used jackd, but Windows had ASIO drivers too.
PipeWire might work better than Pulse, but it's still an overcomplicated mess compared to ALSA, which is itself an overcomplicated mess compared to OSS, which could easily have been made to support concurrent clients to /dev/dsp without all the API breakages and flaky daemons we had to suffer through.
Doing audio mixing well is something that is, for a number of reasons, hard to do in kernel. And if you're still using pulseaudio, why? The rest of the world's moved to pipewire, which also provides a jack-compatible interface.
PipeWire replaced Pulse like five years ago; who is using Pulse at this point to make statements like "20 years" meaningful? It isn't really an ongoing concern.
This is the era where I was the lead on Ubuntu laptop support, and I promise you that dmix was not a trivial option to make things work out of the box.
I always had some Knoppix live CD/DVD which had better defaults than Ubuntu itself on hardware autodetection and setup. I think they used kudzu from RH for a good while plus custom patches.
Bear in mind that the Knoppix creator had a blind wife, up to the point of creating ADRIANE, one of the best distros for the blind (it was merged into mainline KNOPPIX, making the distro one of the most accessible ones out there). Thus, proper audio mixing was mandatory.
With the bundled installer you could turn it into a Debian Testing install on the spot. As I didn't have internet at home, I remember using Knoppix before Debian Sarge because it had a huge amount of things to play with and test, without worrying about odd hardware setups.
Some of the context here is that that at the time, Ubuntu was aiming to work on as close to 100% of existing PCs as possible to make it available to the largest number of users. Knoppix had a lot of great features and also was very opinionated, and that had an influence on the set of hardware it worked well on by default. I evaluated basically every decision made there in terms of whether Ubuntu should adopt the same ones, and there were several that were just not good choices in terms of supporting the widest set of hardware possible.
If you have two audio streams, you can't play them as-is on the audio device; you have to mix them together. I believe at one point with ALSA, when an application took control of the audio device, no one else could play through it. Now ALSA ships with dmix (a software mixing plugin) enabled in its default configuration, so two applications can play at the same time. And we have PulseAudio, JACK, and PipeWire on top of ALSA to add more features.
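In the digital domain, "mixing" really is just sample-wise addition, plus clipping so the sum stays in range. A minimal sketch (real mixers typically also apply per-stream gain and dithering):

```python
def mix(a, b):
    """Mix two equal-length streams of float samples in [-1.0, 1.0]
    by sample-wise addition, hard-clipping the sum to the valid range."""
    return [max(-1.0, min(1.0, x + y)) for x, y in zip(a, b)]

print(mix([0.5], [0.25]))  # [0.75]
print(mix([0.75], [0.75]))  # [1.0] — the sum 1.5 is clipped
```

This is the whole job dmix (or a sound server) does per sample before handing a single stream to the hardware, which is why software mixing is so cheap.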
OpenBSD still presents raw audio devices, but it has sndio, which provides a more helpful interface for applications, including resampling (not with the best algorithms, according to them).
EDIT: Article seems to have been updated to remove mention of Chromium.
This article contains a lot of errors. For example, Chromium on FreeBSD does NOT use Capsicum; it never has. That was experimental and invasive work done 17 years ago that was NEVER committed to the official ports repository. In fact, not a single browser on FreeBSD uses Capsicum or any form of sandboxing _at all_.
Contrast that with OpenBSD, where the Chromium port has used pledge(2) since January 2016 and unveil(2) since 2018. Both are enabled by default. The Mozilla Firefox ports have also used both pledge and unveil since 2018-2019, with refinements over the years.
OpenBSD's fork of tcpdump has been privilege-separated for ~22 years, and its packet parser runs with no privileges. It's pledged tightly to "stdio", has no network/filesystem access, and uses OpenBSD-specific innovations like bpf descriptor locking (BIOCLOCK), missing from both FreeBSD and Linux tcpdump today (despite FreeBSD adding the ioctl in 2005).
In the years since it was added, Capsicum has only been applied to a handful of utilities, because the tree is barren of decades' worth of incremental privilege-separation work and security research.
AFAIK Loongson is dead and isn't made anymore, and unlike OpenBSD/sgi, Loongson was a little-endian arch. OpenBSD/octeon is a closer match, but also discontinued as Cavium switched to making ARM CPUs.
LoongArch is a new ISA and isn't MIPS compatible, and OpenBSD doesn't support it.
I didn't say Loongson the company was dead, or that LoongArch was either. I said the predecessor Loongson/Godson CPUs are, like the 2E and 2F, which were MIPS-compatible. They're not manufactured anymore, and were practically unobtainium even when they were.
LoongArch is not MIPS, despite it having similarities. It's a new platform/ISA and requires a completely different toolchain and new OS port.
It is not at all "new MIPS-family hardware is being made today" like you originally wrote, and it has little to no relevance to SGI hardware.
You are angrily arguing against things I didn't say and am not saying. I suspect you're downvoting me as well.
I never claimed it was entirely compatible, because it wasn't. Nobody ever said it was.
I'm saying that there are MIPS-like architectures still being made today, and I stand by it. You seem to think they don't count. You have not coherently explained why. Maybe they are not close enough for you, maybe the endianness is not the one you want. I don't know, and TBH I don't care.
It's close. It's related. There is new hardware in the greater MIPS-like family. If you or Theo de Raadt don't like it, that is not my problem.
You said, although now you're backtracking, that it's dead. That is not true.
I called you on saying things that are not true and ISTM that now you are trying to quibble.
Shamelessly trying to attract new monthly sponsors and people willing to buy me the occasional pizza with my crap HTML skills.
https://brynet.ca/wallofpizza.html