When I was a research physicist I spent a lot of time looking at the effects of ionizing radiation in images, mostly digital ones. The mechanisms are a bit different for photo emulsions, but the reason I'd discount radiation is that they're specifically filtering for features that exhibit the expected point spread function (which is a geometric property of the telescope's optical assembly itself). I guess you could test this by exposing emulsion plates to ionizing radiation and seeing how often you get PSF-like images by chance. Also, their search window is +/- 1 day around nuclear tests, which seems weird; radiation from fallout certainly wouldn't make sense on the day before a test. It would have been useful to see +1 day and -1 day separately, or 0-2 days. The way the window is chosen makes me suspect they couldn't find a signal in the more sensible windows, and that it's probably just statistical noise they've massaged out of the data.
But to me the biggest flag is that these images are from 50 minute exposures. The objects don't appear as streaks, so they are either very, very short flashes (much shorter than 50 min), or they are very far away. The authors interpret this to mean the objects should be in geosynchronous orbit, which doesn't make sense; objects in geosync would still appear to move relative to the star background over the course of 50 min. Yet this is the entire basis for their "shadow deficit" window calculation. You could constrain the duration vs distance by looking at the effect it would have on smearing the PSF, which would be interesting.
Overall it seems pretty unscientific. If you go looking through enough statistically noisy data for signals in enough places, you'll eventually find it.
Yes, 50-minute exposures would certainly rule out geosynchronous; I've used image stacking to look at geo objects and you get visible movement relative to the star background after even a few seconds. Fifty minutes would be about 12.5 degrees of movement relative to the background! This isn't even accounting for the fact that you would need to be looking in a narrow region above the equator to catch something geosynchronous to begin with.
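If you want to sanity-check that number: a geosynchronous object is fixed relative to the rotating Earth, so against the star background it drifts at the sidereal rotation rate. A quick back-of-the-envelope in Python:

```python
# Drift of a geosynchronous object against the stars during one exposure.
# Relative to the background, the object moves at the sidereal rate.
SIDEREAL_DAY_MIN = 23 * 60 + 56.1   # ~1436.1 minutes
EXPOSURE_MIN = 50

drift_deg = 360.0 / SIDEREAL_DAY_MIN * EXPOSURE_MIN
print(f"{drift_deg:.1f} degrees of drift over a {EXPOSURE_MIN}-minute exposure")
# roughly 12.5 degrees, i.e. an enormous streak, nothing like a point source
```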
There are other, more likely possibilities: upper-atmosphere tests producing transient luminous phenomena. These would be more likely under conditions where the sun can reflect off of specular matter (e.g., bits of metal). You would most likely see this within 1-2 hours of sunset or 1-2 hours of sunrise (source: I've used optical equipment to spot satellites professionally).
I'd note that their pipeline for removing "plate defects" is not based on the PSF but on some vaguely defined "expert review" training. This can, and should, be a quantifiable step.
I'd agree with all your points and add some things to help people better "sniff-test" these kinds of papers.
1) The paper is suggesting aliens... your suspicion hat should always go on
- Carl Sagan said: "Extraordinary claims require extraordinary evidence". Is the evidence extra-ordinary?
2) The authors aren't experts
- Stephen Bruehl: A doctor of Anesthesiology
- Brian Doherty: "Independent Researcher"[0]
- Alina Streblyanska: Actually maybe an astrophysics researcher?[1]
- Beatriz Villarroel: The top Google hit for her is a UFO Wikipedia page[2]
3) Authors don't share affiliations
- Corresponding author has no domain expertise and no clear affiliation to the others.
4) Authors have hints of metric hacking
- Villarroel has 8 self-citations in a paper with only 18 references[3]
5) The GitHub repo is dead: https://github.com/dca-doherty/VASCO-ML
None of these things are enough to conclude that the paper is wrong, but they are red flags and don't require actually understanding any of the details of the paper.
If you do understand statistics there are clearly more red flags. The +/- windowing is a pretty big one, since there are much better tools for this (errors don't need to be symmetric! Nor do they need to be uniform!). There's also a pretty big assumption that cshimmin didn't mention: the paper assumes all nuclear tests are in the public record. But I also assume that if you have a strong statistics background, there's a high probability you didn't upvote the post.
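To make the "better tools" point concrete, here's a toy sketch in Python (the counts are invented, not the paper's). Instead of lumping everything into one symmetric +/- 1 day window, count transients in the day-before and day-after windows separately and ask whether the asymmetry is real, e.g. with an exact binomial test:

```python
# Toy illustration of asymmetric windowing: if fallout is the cause,
# transients should cluster *after* tests, so compare the before/after
# counts directly rather than pooling them into one +/- 1 day window.
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum all outcomes as extreme as k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(pr for pr in probs if pr <= probs[k] + 1e-12)

before, after = 7, 18            # hypothetical counts in each one-day window
p = binom_two_sided_p(after, before + after)
print(f"two-sided p = {p:.4f}")  # small p would indicate real asymmetry
```

A pooled +/- 1 day count (25 events either way) throws this directional information away entirely, which is exactly why symmetric windows look like a choice made after peeking at the data.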
[0] The man has effectively no online presence. Google searching his email yields effectively nothing except people posting about this paper in UFO groups (https://www.google.com/search?q=%22briandohertyresearch%40gm...). His linked GitHub also makes him anonymous (https://github.com/dca-doherty/) and the website he links is just about finding day care in Texas. He has one more paper on ArXiv, but it is from a few weeks prior
[1] Found their LinkedIn (https://www.linkedin.com/in/alina-streblyanska-95b2375b/). Their most recent paper is also on UAPs, along with Villarroel. But also, they work for the "Society of UAP Studies", which should be a big red flag. Also, they were working as a postdoc for 12 years, which is a bit insane
[3] I looked at some other papers of hers and they show a similar pattern. This explains her citation count (which is rather low) and h-index (it's better to just click on the references and you'll see it's predominantly her referencing herself):
- 2602.15171: 9 citations total, 8 are hers
- "A cost-effective search for extraterrestrial probes in the Solar system" has many more, but still 6 to herself (and 3 to Loeb)
- Transients in the Palomar Observatory Sky Survey (Yes, this is in "Nature"): 20 citations, 5 hers
- Aligned, Multiple-transient Events in the First Palomar Sky Survey: 11/36
- On the Image Profiles of Transients in the Palomar Sky Survey: 5/5
- A Civilian Astronomer's Guide to UAP Research: 7/98 (actually not a red flag, but the title sure is...)
- and so on
Not gonna lie, the first thing I noticed was that the first author was in an anesthesiology department. Your guidelines for sniff-testing are not unreasonable, and can definitely be helpful to people who are unfamiliar with the research area. But I quite intentionally did not appeal to any of those. As a (somewhat) subject matter expert, it's important to _ignore_ things like ad hominem judgement, and instead address the paper on its self-contained merits. And more importantly, to share my assessment of those with the lay public.
I'm glad you did it that way. I hope my comment works well as an addendum to your type of comment. I don't think it would have worked well on its own, nor prior to yours. Especially since nothing I said is an absolute rule that lets you reject a work. But this paper sure does smell suspicious. I think it's good to have the stronger reasons to be suspicious, and then some softer flags for navigating unfamiliar territory.
It kinda sounds like a post-doc, in that it provides an on-ramp to working in the industry/institution. But without having to waste your time getting a PhD.
Incidentally, I recently learned the origin of the term. Cyber, short for cybernetic, is from the Greek κυβερνήτης (kybernetes), meaning helmsman. The original use of cybernetics was in the context of automated control systems, so steering a rudder was a good analogy. It is also the origin of the name k8s.
In my headcanon, I still read k8s as "network of cubes", as in Borg cubes, since Kubernetes itself is a poor man's Borg (as in the thing that Google runs on, named after the Star Trek Borg, known for cube-shaped ships referred to as "Borg cubes"). The whole kybernetes thing sounds like an explanation after the fact, to detach from the Collective legacy.
Yeah perhaps a better term for Loser is Abstainer. Because the Sociopaths also can certainly lose at the game of maximum capitalist profit. Loser/Abstainer just chooses not to play the game.
The problem with these theories is that they fall apart as soon as you start adding or modifying the types. Because they aren't actually correct, just simple and flattering.
Fully agreed. I think "Loser" is a misnomer. And indeed, going by the essay, the Sociopaths can also lose big... they are willing to risk it all for personal gain, but it can end very badly for them if they miss their window, their manipulations get exposed, or they do illegal things to get ahead (high-profile cases in my mind: Enron, Epstein, etc.).
The names come from a cartoon that predates Rao's essay. He simply reused them because they mostly work. Just like the Sociopaths are not all literal sociopaths, the Losers are not all literal losers.
Yes, I understand this. I was simply making it explicit; it seemed worth clarifying that neither Losers nor Sociopaths match the common definitions of those terms.
It's basically just a way for the LLM to lazy-load curated information, tools, and scripts into context. The benefit of making it a "standard" is that future generations of LLMs will be trained on this pattern specifically, and will get quite good at it.
This question doesn’t have anything to do with skills per se, this is just about how different agents handle context. I think right now the main way they cull context is by culling noisy tool call output. Skills are basically saved prompts and shouldn’t be that long, so they would probably not be near the top of the list of things to cull.
Claude Code subagents keep their context windows separate from the main agent, sending back only the most relevant context based on the main agent's request.
Was this after you and two zany friends made a scheme to divert a fraction of a penny from each of your employer's transactions into a bank account that you control? And then you gave it all back but the building burned down and Milton made off with the cash?
So… your argument is that it’s not counterculture unless it’s mainstream culture? And that one should only credit derivative works once they become mainstream, rather than the original inspiring works because they were too obscure?
I don’t think anyone is trying to “gotcha” you. You’ve just got a bad take.
I think it is actually pretty difficult to look at countries and say which ones have successful countercultures. I mean to some extent if a counterculture is successful it becomes not a counterculture, just part of the mainstream culture. On the other hand, a maximally out-of-mainstream counterculture is a totally unknown thing that we’ve never heard of as outsiders.
Counterculture is a culture that is counter to the mainstream culture. If a culture is happy on its own, it is more of just a subculture. Cyberpunk itself features counterculture not just subculture, but is also inspired by the counterculture at the time.
Cyberpunk doesn't contain megacorporations, harsh environments, and loneliness at random; it reflects the worst-case scenario for the ideals of the time. The grey skies and rain are there because pollution has destroyed the environment, echoing contemporary concerns over acid rain and the oil crisis. It is literally in the name with "punk". Japan doesn't have that much counterculture, so it could never be that influential in cyberpunk. Just like it could never be that influential in music.
Something can be obscure and influential, but there is a limit to how defining it can be. Akira and Ghost in the Shell (and some video games) have been influential and are frequently credited for that, but that is about it. Everything else including similar media before and at the same time as them comes from mixing in other things [0]. Just like in music.
Korea is currently having success with K-pop. But that is nothing in terms of influence compared to TikTok.
tl;dr: Cyberpunk is counterculture. Japan doesn't really do counterculture. Therefore it isn't very influential in cyberpunk, despite having had some influence.
i dunno. some of the most influential d-beat/crust bands of all time are from japan (d-clone, disclose, gauze, gallhammer, gism, death side... that's barely scratching the surface of bands that are/were actively countercultural).
it may not always take the same form, but anywhere you find big cities, you'll find some form of countercultural punk movement because the economy is big enough to support people at the fringes (even if you just work as a bartender or whatever).
By the early 90s, "cyberpunk" had largely become self-parody, meaning that the counterculture was already rejecting cyberpunk as too mainstream. Search around for the Usenet reactions to Billy Idol's album of the same name.
Or take a look at the opening sequence of Snow Crash, where the deliverator is clearly making fun of ubiquitous cyberpunk tropes. At the time it was considered a tombstone for cyberpunk, rather than some sort of positive signal milestone.
These are only two data points to demonstrate that the “counterculture” era had already expired in the US by the early 90s, as members of that counterculture felt that it had already stopped being counter to any part of American culture.
The claim that there is "not much Japanese counterculture" is too bizarre for me to wrap my head around. The more traditional a society is, the more "counter" any underground culture is, by definition.
American counterculture hasn’t really properly existed outside of capitalist smother and capture since the early 90s either by the way. Give No Logo a read for more on that.
If I understood correctly, the global lock is so that notify events are emitted in order. Would it make sense to have a variant that doesn't make this ordering guarantee if you don't care about it, so that you can "notify" within transactions without locking the whole thing?
possibly, but i think at that point it would make more sense to move the business logic outside of the database (you can wait for a successful commit before triggering an external process via the originating app, or monitor the WAL with an external pub/sub system, or something else more clever than i can think of).
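The "wait for a successful commit before triggering an external process" idea can be sketched in a few lines. This is a toy in Python with sqlite3 and an in-process queue standing in for the real database and pub/sub layer (all names here are mine, not from any particular library):

```python
# Toy "commit, then publish" pattern: the write happens in a transaction,
# and the notification is only sent after the commit succeeds, so consumers
# never see events for rows that were rolled back. Cross-writer ordering is
# whatever the queue provides; no global lock inside the database.
import sqlite3
import queue

events = queue.Queue()  # stand-in for an external pub/sub system

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT)")

def insert_and_notify(payload):
    # The connection context manager commits on clean exit and
    # rolls back on exception.
    with db:
        db.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))
    # Only reached after the commit succeeded.
    events.put(("job_created", payload))

insert_and_notify("hello")
evt = events.get_nowait()
print(evt)
```

The trade-off versus in-database NOTIFY is the usual one: you lose the transactional ordering guarantee but also lose the serialization point, and a crash between commit and publish can drop an event (which is why people bolt on outbox tables or WAL tailing for stronger delivery guarantees).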
P-1 AI | Hybrid (SF) and Remote (US) roles | Full time
P-1 is trying to do for physical engineering (mechanical, aerospace, electrical) what Devin, Cursor, et al. are doing for the software world. Founded by ex-Airbus execs and DARPA program managers, we've just raised a $23 million seed round. Our goal is to solve engineering AGI to enable humans to scale engineering complexity by orders of magnitude.
We have several roles open including:
* Research Scientist - Applied AI
Looking for creative people to innovate on agentic AI systems for specialized engineering domains.
* ML Engineer - Training and AI Infra (ad should post tomorrow)
We're doing a lot of LLM fine-tuning and reinforcement learning. We need someone who can make our GPU cluster sing!
* Software Engineer - AI Evals and Test
We believe it's essential to develop rigorous, domain-specific evaluations in order to ensure we're making progress on real-world issues -- not just hill-climbing on general-purpose benchmarks.