Personally, while I don't think what Bambu's doing is great, it's not like there aren't a dozen manufacturers that can generally match their printers at similar price points and don't have these issues
I disagree - teleoperated robot arms with human-like kinematics make it possible for human operators to perform relatively delicate but heavy tasks that used to require heavy machinery.
Imagine one of these things working on a powerline or stacking bricks like legos to build a house.
Teledildonics was the future that never really materialized at the wider scale. If porn can't make it work, then no one (other than the military, for obvious reasons) will.
I've had bad experiences with USB irons; they generally don't have stellar compatibility with USB power banks, and sometimes your 60W iron can only draw 20W from your 100W power bank or PSU (and sometimes it works fine).
They even come with these compatibility wikis of what PSU or bank to buy.
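The mismatch usually comes down to USB-PD profile negotiation: the iron asks for a high-voltage profile (typically 20V), and if the bank doesn't advertise that profile on that port, the iron falls back to whatever lower one they share. Here's a toy sketch of that matching logic - all the capability lists are made up for illustration, this is not a real PD stack:

```python
# Toy model of USB-PD profile negotiation (not a real PD implementation).
# A 60 W iron wants 20 V; a bank that tops out at 9 V on a given port
# forces it down to ~20 W, regardless of the wattage on the box.

def negotiate(iron_profiles, bank_profiles):
    """Pick the highest-wattage (volts, amps) profile both sides support."""
    common = set(iron_profiles) & set(bank_profiles)
    if not common:
        return None
    return max(common, key=lambda p: p[0] * p[1])

# Hypothetical capability lists:
iron = [(20, 3.0), (12, 3.0), (9, 2.2), (5, 3.0)]        # wants up to 60 W
weak_bank = [(5, 3.0), (9, 2.2)]                          # "100 W" marketing, 9 V max here
good_bank = [(5, 3.0), (9, 3.0), (12, 3.0), (20, 3.0)]

v, a = negotiate(iron, weak_bank)
print(f"weak bank: {v} V x {a} A = {v * a:.0f} W")        # ~20 W
v, a = negotiate(iron, good_bank)
print(f"good bank: {v} V x {a} A = {v * a:.0f} W")        # 60 W
```

That's why the compatibility wikis exist: two "100 W" banks can offer completely different profile sets per port.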
I simply cannot recommend a Pinecil + compatible 20A battery pack enough. Not being tied to a socket is amazing and the device is good to go in literally seconds!
Dunno, I had a TS80P, bought a supposedly compatible Xiaomi power bank, and had a lot of issues. What sort of worked for me: they make adapters for Makita batteries with Weller connectors; those have a lot of juice and are far less fussy.
I bought the TC22 after going on r/Soldering and can only stand by the recommendation - it's an amazing iron for hobbyists. Its ability to put tons of power on a tiny area quickly makes it far easier to work with than a ton of cheap irons, and it has a much lower chance of killing components than dicking around with less powerful ones, sitting on the pin for ages trying to heat it up while it wicks heat away into sensitive electronics. Doubly so when I mess up or the solder isn't fully melted. Another nice thing: with powerful irons you don't have to overshoot the melting temp of solder as much, and you can generally get away with tips that have less thermal mass.
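The dwell-time argument is easy to see with back-of-the-envelope math: the time to bring a joint up to temperature is roughly its thermal mass times the temperature rise, divided by the power you can actually deliver into it. A rough sketch with made-up numbers (the efficiency factor and joint mass are pure illustration):

```python
# Back-of-the-envelope: seconds to bring a solder joint up to melting temp.
# All numbers are illustrative, not measured.

def heatup_time(power_w, mass_g, specific_heat, delta_t, efficiency=0.5):
    """Seconds to raise mass_g grams by delta_t degrees C at the given power.

    specific_heat is in J/(g*C); efficiency crudely accounts for heat
    lost into the board and air instead of going into the joint.
    """
    energy_j = mass_g * specific_heat * delta_t
    return energy_j / (power_w * efficiency)

# Heating ~1 g of copper pad + solder by ~200 C (copper c ~ 0.385 J/(g*C)):
print(f"15 W iron: {heatup_time(15, 1.0, 0.385, 200):.1f} s")
print(f"60 W iron: {heatup_time(60, 1.0, 0.385, 200):.1f} s")
```

Seconds versus ten-plus seconds on the pin is exactly the difference between the joint heating up and the heat soaking into the component next to it.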
I'm a rank amateur, so take what I said with a grain of salt. With that said, I have made several cool things in my life that many people have said I could charge money for. I guess you can't really see the mess I made when you can't look inside the housing :)
I've purchased it from the GEEBOON Store on Aliexpress (no affiliate or anything just looked up my order history):
Yeah, I remember reading that what killed the space industry in the 90s-2000s, other than the collapse of the USSR and the cessation of great power competition, was the massive move to digital communications, particularly satellite TV - which meant that a smaller number of satellites could serve the expected demand.
and correctness too - I guess there aren't that many hardcore electrical engineers/physicists/mathematicians who can make sure the results it produces are correct and sound, and debug weird issues stemming from numerical stability.
The sort of people who can do this are very rare, and it's not likely they will just randomly decide to donate their time to rewrite the codebase.
Imo lambdas are super cool, and the best way to have a no-headache fast-iteration time deployment service.
What most people don't realize is that you don't have to go microservice or fragment your code into a billion little repos - you can take a standard webserver and move it to Lambda, as long as you don't expect requests to share on-server state.
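To make that concrete, here's a minimal sketch of a whole "webserver" living in one Lambda handler behind API Gateway or a function URL - the route names are hypothetical, and `rawPath` assumes the API Gateway HTTP API (v2) event shape. The key property is that every request is served from the event alone, with no server-side session:

```python
# Minimal sketch: a routed "webserver" as a single stateless Lambda handler.
import json

ROUTES = {}

def route(path):
    """Register a function as the handler for a given path."""
    def register(fn):
        ROUTES[path] = fn
        return fn
    return register

@route("/health")
def health(event):
    return {"status": "ok"}

@route("/echo")
def echo(event):
    # All request data arrives in the event - no on-server state to share.
    return {"you_sent": event.get("queryStringParameters") or {}}

def handler(event, context):
    # API Gateway HTTP API (v2) puts the path in rawPath; other triggers
    # use different event shapes.
    path = event.get("rawPath", "/")
    fn = ROUTES.get(path)
    if fn is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(fn(event))}
```

In practice, you don't even need to hand-roll the routing: adapters like Mangum (for ASGI apps) let you mount an existing framework app behind a Lambda handler without rewriting routes.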
I agree. Web service hosted on Lambda that, for long running async tasks, uses FIFO SQS (optionally by way of FIFO SNS) connected to the task runner Lambda. Easy. It's not hard to deploy like OP claims. Build a Docker image, toss it in ECR, and use AWS CDK to do infra. Done.
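A sketch of the hand-off half of that pattern - the web Lambda enqueues long-running work for the task-runner Lambda via FIFO SQS. The queue URL and task shape are made up; the helper just builds the kwargs for boto3's `send_message`, which on FIFO queues requires `MessageGroupId` and (unless content-based dedup is enabled) `MessageDeduplicationId`:

```python
# Sketch: web Lambda handing long-running work to a task-runner Lambda
# via FIFO SQS. Queue URL and task payload are hypothetical.
import hashlib
import json

def build_task_message(queue_url, task_type, payload, group_id):
    """Build kwargs for sqs_client.send_message on a FIFO queue.

    MessageGroupId defines the ordering scope; the deduplication ID
    is derived from the body so retries of the same task collapse.
    """
    body = json.dumps({"type": task_type, "payload": payload})
    return {
        "QueueUrl": queue_url,
        "MessageBody": body,
        "MessageGroupId": group_id,
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
    }

msg = build_task_message(
    "https://sqs.us-east-1.amazonaws.com/123456789012/tasks.fifo",  # hypothetical
    "resize-image",
    {"key": "uploads/cat.png"},
    group_id="user-42",  # tasks for one user stay ordered relative to each other
)
# In the web Lambda: boto3.client("sqs").send_message(**msg)
```

The task-runner Lambda is then just wired to the queue as an event source and processes each message independently.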
Well, complexity and iteration time, for example. For Docker, you need something that runs it (like your own cluster of EC2 instances, or ECS), a private registry you push to, and a separate user that provisions the actual server. Meanwhile, iteration (code changes, or something as simple as changing an env var) involves the cycle of uploading a new image, shutting down the old containers, and trying to start up the new ones, with all sorts of weird failure cases. For example, if your container depends on a Docker Hub image like alpine, you can run into a rate-limit scenario with Docker Hub - AWS is too cheap to pay for Docker Hub access and doesn't have its own mirror, so your containers may fail to start unless you explicitly mirror your base image.
Then you have to take care to update your images etc.
All this stuff - ECS, for example, also lives in a subnet, so you have to manage routing, public accessibility, and so on - is legit a crazy amount of work compared to either Lambda or just running stuff on a virtual machine.
You're describing incidental complexity of running a container on AWS, though - which you can also hit with Lambda, because there are 15 different ways to do it.
Personally I think the UI flow is geared towards the idea that engineers don't really see the costs, they just build stuff and then management pays at the end of the month.
Often I see something that's supposed to be leaner - like, Fargate is leaner than renting a whole server to run Docker, right?
So it's cheaper as well? - Well, no.
Also, if you reach any appreciable level of complexity, you should move to IaC - configuring all that stuff in the UI and getting it right is torture.
Right - nobody who’s had a formal education in engineering would think that way, because cost considerations are part of the curriculum from the start.
I don't think a lot of formal education places teach AWS's resource pricing structure, which can be incredibly confusing, but can be boiled down to: if you want to be as cheap as possible, just use EC2 for everything and maybe S3 for storage.
I'm very surprised you expect any formal education to teach any specific pricing structure. You teach how to evaluate solutions for their price impact. No one was claiming any curriculum includes AWS's resource pricing structure.
I can't recall cost ever coming up as a consideration during my years of formal computer science studies in school. Big-O efficiency, sure, but the cost of compute, storage, bandwidth, nope, not once.
It was absolutely hammered into me in the years of working for startups that followed, though.
They're not, but if they don't talk to the pricing team - and most devs don't want to talk to business people - they'd never coordinate on where it makes sense to show pricing to customers.
You didn't read the comment I replied to, did you? The premise was:
> the UI flow is geared towards the idea that engineers don't really see the costs, they just build stuff and then management pays at the end of the month.
So this is about the engineers consuming AWS, not the ones who designed and implemented AWS.