
At least if you have a test suite that doesn't have to be migrated. I too would like to migrate some services from Python to Rust, but my test suite is written in Python, so I would have to manually check that the test suite migration was correct (I can't even compile it!) before doing the rewrite.

Curious how the test suite was applied. Was it ported from Zig to Rust beforehand?

Almost all of Bun's tests are written in JavaScript and run in Bun itself.

Works great. No wonder China is eating our lunch.

Can you share that document?


Crackpot "Universal Theory of Everything" physics rooted in numerology:

https://meta-r0ze.github.io/Informational-Energetics/Informa...


also interested


It's in the file metadata:

- LuaTeX-1.17.0

- LaTeX via pandoc


Yup. The only major changes here are fonts and twocolumn. https://gist.github.com/aphyr/6f0cd6910ccfe2cd7828d1ade2eac5...


Thanks - appreciate it!

I don't even understand what approach 3 is doing. They ended up hashing the random part of the API key with a hash function that produces a small hash, and stored that in the metashard server, is that it?


yea... sorry, I'm still not the best explainer, but that is the approach: I just wanted to have a shorter hash in the meta shard, that's it. Approach 3 is an attempt by me to write my own base62/base70 encoder ;-;
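
Roughly like this, as a sketch of the idea only (not the actual code; the names, the digest choice, and the 8-byte truncation are all made up):

    import hashlib

    ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

    def base62(n: int) -> str:
        # Encode a non-negative integer in base62.
        if n == 0:
            return ALPHABET[0]
        out = []
        while n:
            n, r = divmod(n, 62)
            out.append(ALPHABET[r])
        return "".join(reversed(out))

    def short_hash(random_part: str, size_bytes: int = 8) -> str:
        # Hash the random part of the API key, keep only the first
        # `size_bytes` bytes of the digest, and base62-encode them.
        # This short value is what gets stored in the meta shard.
        digest = hashlib.sha256(random_part.encode()).digest()[:size_bytes]
        return base62(int.from_bytes(digest, "big"))

    print(short_hash("kg3xv9q2m7"))  # hypothetical key material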


Apple's accidental moat now is to let the AI-driven rise in hardware prices eat into their margins and just expand the Mac user base.


Do you know why it is a toy? Because in a real prod environment, after inserting 240k rows per second for a while, you have to deal with the fact that schema evolution is required. Good luck migrating those huge tables with SQLite's ALTER TABLE implementation.
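
For reference: anything SQLite's ALTER TABLE can't express (changing a column's type, adding a NOT NULL constraint, and so on) requires the documented full-table rebuild. A minimal sketch in Python, with invented table and column names; note that writers are blocked while the copy runs:

    import sqlite3

    conn = sqlite3.connect("events.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events"
                 " (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.executescript("""
        BEGIN;
        -- 1. Create a new table with the desired schema.
        CREATE TABLE events_new (id INTEGER PRIMARY KEY, payload TEXT NOT NULL);
        -- 2. Copy every row across (this rewrites the whole table).
        INSERT INTO events_new (id, payload) SELECT id, payload FROM events;
        -- 3. Swap the tables.
        DROP TABLE events;
        ALTER TABLE events_new RENAME TO events;
        COMMIT;
    """)
    conn.close()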


Try doing that on a “real” DB with hundreds of millions of rows too. Anything more than adding a column is a massive risk, especially once you’ve started sharding.


Yes, it might be risky. But most schema evolution changes can be done with no or minimal downtime, even if you have to do them in multiple steps. When is a simple ALTER going to be totally unacceptable if you are using SQLite?


This doesn't seem like a toy, but you know... realizing different systems will have different constraints.

Not everyone needs monopolistic tech to do their work. There are probably fewer than 10,000 companies on earth that truly need to write 240k rows/second. For everyone else, we can focus on better things.


> realizing different systems will have different constraints.

I realize that. There are a few comments already that present use cases where I can totally see using SQLite as a good option.

> Not everyone needs monopolistic tech to do their work

We are talking about localhost Postgres vs SQLite here. Both are open source.


It gets proper backups if you back it up the right way: https://sqlite.org/backup.html
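
For what it's worth, Python's sqlite3 module exposes that same Online Backup API; a minimal sketch (file names are placeholders):

    import sqlite3

    src = sqlite3.connect("app.db")
    dst = sqlite3.connect("app-backup.db")
    # Copies the database page by page via sqlite3_backup_*;
    # the source stays readable and writable while the backup runs.
    src.backup(dst)
    dst.close()
    src.close()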


I bet that takes more time than the 5 extra minutes it takes to set up Postgres on the same box upfront.


To export a database? Probably even faster. And that's ignoring the difference in performance.
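
A plain SQL dump (the moral equivalent of the sqlite3 CLI's .dump command) is a few lines with Python's sqlite3 module; file names are placeholders:

    import sqlite3

    conn = sqlite3.connect("app.db")
    with open("app-dump.sql", "w") as f:
        # iterdump() yields the SQL statements needed to recreate
        # the schema and the data.
        for statement in conn.iterdump():
            f.write(statement + "\n")
    conn.close()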


So you are migrating from SQLite to Postgres because you need to. What is the state of your product when you need to do this migration? Is your product non-trivial? Are you now dependent on particular performance characteristics of SQLite? Do you now need to keep your service running 24/7? Accounting for all of that takes way more than 5 minutes. The only way to beat that is if you still have a toy product and can just export the database, import it, and pray that it all works as a migration strategy.

