vunderba's comments | Hacker News

Me as a kid realizing that the rate of fire on the shotgun was directly tied to the number of animation frames in the original Doom. Cue mecha super-extreme gatling shotgun and also mecha super-extreme choppy frame rate.

Hitscan weapons for the win.
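The mechanic above falls out of how Doom times weapons: the game logic runs at 35 tics per second, and a weapon can't refire until its animation state sequence finishes, so cutting frames directly raises the rate of fire. A rough sketch, using made-up per-state tic counts rather than the real shotgun state table:

```python
# Rough sketch: weapon refire rate derived from animation state durations.
# Doom logic runs at 35 tics/second; the per-state tic counts below are
# hypothetical, NOT the actual shotgun state table.
TICRATE = 35

def shots_per_second(state_tics):
    """One full animation cycle must play out before the next shot."""
    return TICRATE / sum(state_tics)

stock = [3, 7, 5, 5, 4, 5, 5, 3]   # hypothetical full pump animation
chopped = [3, 7]                   # most frames deleted

print(shots_per_second(stock))     # slow, stately pump-action
print(shots_per_second(chopped))   # mecha super-extreme gatling shotgun
```

The frame rate chop comes along for free: the renderer has fewer states to show, so the animation turns into a strobe.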


CON files were great. One of the first enemies I made as a kid was a "basilisk"-type creature that, if you looked at it, had an RNG chance that it would

  wackplayer
If you know, you know.

Neat. The website looks the same (in a good way) as I remember it from over a decade ago - are you the creator of the original Java port from back then?

https://web.archive.org/web/20140210122645/http://www.scorch...


Yup, this is the original website. The domain kept auto-renewing, I guess :)

I’ll have to give it a look. The last BFL Krea collab was released last summer and, unfortunately, scored very poorly in my Gen AI Showdown comparison - just 2 out of 15. For reference, even the stock Flux model (Flux.1 dev) scored 4 out of 15, despite having been released a full year before the Krea version.

https://genai-showdown.specr.net/?models=fd,kd,f2d


Nice job, but it could use some clarification:

> The tool is free, but you'll have to BYOK.

I'm a bit confused by this statement, considering the repo also says it has to communicate with a third-party API, nova3d.xyz.

So is it actually self-hostable or is it only that the frontend client is open sourced and the backend responsible for building the model is proprietary?


It's not a fully open-source tool: the front-end is open and self-hosted, while the backend is currently closed. You can fund a ring-fenced API key with, say, $5 or $10 and use it with the front-end.

It’s similar to what tools like Automatic1111 and ComfyUI do, which embed the workflow and pipeline in the image metadata (PNG text chunks) so the image can be recreated later.

The biggest issue I see with this is that, depending on the graphics editor, export formats, etc, this information can easily be lost and/or mangled.
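To make the fragility concrete, here's a stdlib-only sketch of the round trip: write a tEXt metadata chunk into a PNG and parse it back out. The "parameters" key mirrors Automatic1111's convention, but treat the key name (and the minimal one-pixel image) as assumptions for illustration:

```python
# Sketch: append a tEXt metadata chunk to a PNG and read it back, stdlib only.
# The "parameters" key name mirrors A1111's convention (assumed here).
import struct, zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def tiny_png_with_text(key: str, value: str) -> bytes:
    sig = b"\x89PNG\r\n\x1a\n"
    # 1x1 grayscale, 8-bit: width, height, depth, color, compr, filter, interlace
    ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
    text = chunk(b"tEXt", key.encode() + b"\x00" + value.encode("latin-1"))
    return sig + ihdr + text + idat + chunk(b"IEND", b"")

def read_text_chunks(png: bytes) -> dict:
    out, pos = {}, 8                       # skip the 8-byte signature
    while pos < len(png):
        length = struct.unpack(">I", png[pos:pos + 4])[0]
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            k, _, v = png[pos + 8:pos + 8 + length].partition(b"\x00")
            out[k.decode()] = v.decode("latin-1")
        pos += 12 + length                 # length + type + data + crc
    return out

png = tiny_png_with_text("parameters", "prompt: pixel knight, seed: 1234")
print(read_text_chunks(png))  # {'parameters': 'prompt: pixel knight, seed: 1234'}
```

Any editor that decodes the pixels and re-encodes a fresh PNG without copying the text chunks silently drops everything `read_text_chunks` would have found.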


It's true, editors can cause a problem if they strip the metadata.

The idea is to only add the metadata as a final step when you export the spritesheet, so at that point there's no need to edit further.


Yeah. One possibility is developing this concept as a plug-in for something like Aseprite [1]. That's the one I personally use when doing sprite work, and it exports a spritesheet with accompanying JSON offset metadata.

[1] - https://www.aseprite.org/
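For a feel of what that JSON sidecar looks like, here's a sketch of pulling per-frame rects out of an Aseprite-style export. The shape below assumes the default "hash" format (frames keyed by name, each with a `frame` rect); verify against a real export before relying on it:

```python
# Sketch: read per-frame rects from an Aseprite-style JSON spritesheet export.
# Assumes the default "hash" format; the frame names and sizes are made up.
import json

sample = json.loads("""
{
  "frames": {
    "hero 0.png": {"frame": {"x": 0,  "y": 0, "w": 32, "h": 32}},
    "hero 1.png": {"frame": {"x": 32, "y": 0, "w": 32, "h": 32}}
  },
  "meta": {"size": {"w": 64, "h": 32}}
}
""")

def frame_rects(doc):
    """Map frame name -> (x, y, w, h) for blitting out of the sheet."""
    return {name: (f["frame"]["x"], f["frame"]["y"],
                   f["frame"]["w"], f["frame"]["h"])
            for name, f in doc["frames"].items()}

print(frame_rects(sample)["hero 1.png"])  # (32, 0, 32, 32)
```

A plug-in could stuff this same structure into the exported PNG's metadata instead of a separate file, which is essentially the idea being discussed.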


There are actually a few capable VL models out there that can run on even modest hardware. If you want to keep things simple and process everything locally, I’d recommend something like Qwen3 VL [1]. It’s not the fastest model, but you can just let it chew through the docs over a weekend.

In my experience, it takes about 15 to 30 seconds per image, but the quality of the results is quite good if a bit verbose [2].

[1] - https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct-FP8

[2] - https://mordenstar.com/other/vlm-xkcd
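Back-of-envelope for "let it chew through the docs over a weekend," using the 15-30 seconds-per-image range quoted above:

```python
# Back-of-envelope throughput for an unattended local VLM run,
# using the 15-30 seconds-per-image range quoted above.
WEEKEND_HOURS = 48

def images_per_weekend(seconds_per_image):
    return WEEKEND_HOURS * 3600 // seconds_per_image

print(images_per_weekend(30))  # 5760  -- worst case
print(images_per_weekend(15))  # 11520 -- best case
```

So even at the slow end, a few thousand documents is comfortably inside one weekend on modest hardware.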


Now now. He was clearly being depicted as a doctor - couldn't you tell from the glowing orbs of light in his hands? /s

You say electronic music, but you didn’t really specify whether you have any background in music at all. Can you sing and/or play an instrument? Can you read sheet music, or are you familiar with music theory? Do you already make simple little melodies? If you’re starting completely from scratch, it’s going to be tougher.

You can, of course, grab a horizontal tracker or a DAW like FL Studio and just start scattering notes down. Set a key signature, throw some notes in, and curate until you have some kind of a four-bar loop that sounds good, then build on that.

Electronic music tends to be more pattern‑based, which is why some people prefer using programs like Ableton or FL Studio for that kind of workflow.

Side note, but you can watch endless videos that teach you how to use a DAW (quantizing, setting up your mixer, sidechaining, etc.), but I don't think I've ever seen a tutorial that genuinely teaches you how to come up with melodies. Most of my musician friends and I usually come up with melodies while playing our respective instruments - or in the occasional dream.

That's why I recommend learning an instrument, or at least getting a MIDI keyboard so you can "plink" in a way that rewards discovery.
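The "set a key signature, throw notes in, curate" loop from above can even be sketched as code. Scale choice, note range, grid resolution, and note density below are all arbitrary picks for illustration:

```python
# Sketch of "set a key, scatter notes, keep what sounds good":
# random MIDI notes constrained to C major over a four-bar, 16-step grid.
# Scale, range, grid, and density are arbitrary choices for illustration.
import random

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}   # pitch classes of the C major scale

def random_bar(rng, low=60, high=84, steps=16):
    """One bar: 16 steps, each either a rest (None) or an in-key MIDI note."""
    pool = [n for n in range(low, high) if n % 12 in C_MAJOR]
    return [rng.choice(pool) if rng.random() < 0.6 else None
            for _ in range(steps)]

rng = random.Random(42)
loop = [random_bar(rng) for _ in range(4)]   # a four-bar loop to curate
assert all(n % 12 in C_MAJOR for bar in loop for n in bar if n is not None)
```

The constraint to one scale is what makes random scattering usable: everything is at least in key, and the human ear does the curation pass.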


Building Blocks by https://audiblegenius.com/ is excellent, as is DMP (see my other comment).

Nice! I'll have to check out Building Blocks, since Syntorial (which they also make) is fantastic at teaching you how to almost intuitively get a feel for manipulating ADSR envelopes, oscillators, etc. to approximate a given sound.

That's exactly what this is - relying on sheer nostalgia-porn to overcome the fact that this is a low-effort, vibe-coded mess. Hey, remember Pogs? Remember Double Bubble? Remember Trapper Keeper?

To make up for it, here's an actually great collage of weird 90s web:

https://www.cameronsworld.net

