
Hey,

No, we haven't looked into formal tools like that, good suggestion! TBD on whether these approaches bode well for real-world robotics (due to model complexity and model/real-world fit problems), but at least there seem to be some applications to path planning and controls: https://ieeexplore.ieee.org/document/5509686

I'll pass it on to the controls team!


Hey!

We don't train in simulation (if you are talking about ML training). Our ML training is done purely using real world data.

The simulator is to allow developers to "test drive" our autonomy core and API for free.

We have real vehicles as well, but you can imagine it's more expensive, time consuming, and generally slower to let people test on real hardware. Hence, if people like what they see in sim, they can get in touch with us to deploy their code to real machines on a case-by-case basis.


To add to what Ilia said, re: "The machine works good enough that it can perform the task until it can't anymore and then someone remotes-in to fix it?"

The machine will work well enough for basic autonomy, but then the real application-specific work begins. That work might be you modifying the behaviors you're having us do, you routing those systems to humans to help out, or you complaining that our stuff sucks (and us trying to rapidly improve it).


Do you see the platform then providing, for now, the "reptile" autonomy (mobility, navigation, localisation) to get started fast, so customers can focus on their "neo-cortex" application needs?

(Please let me emphasize that this "reptile" autonomy is today perhaps the hardest, most time-consuming part, killing many startups, as you nicely explain in the OP: Moravec's paradox.)


That's a great way to think about it; you are describing it exactly right.


Thanks for the kind words, and awesome that you were on a DGC team!

Absolutely, there is a ton of really interesting controls, ML and general software quality work for us to do.


Hello,

You are asking about the FAQs here: https://www.polymathrobotics.com/product

We are actually a hardware and real robots company first! Our simulator efforts are for two purposes:

1. Allow more people to try out our autonomy core, and build on top of our API.

2. Allow our own developers to run testing and tuning in sim.

To your point, we don't expect tuning to happen purely in sim. However, our senior controls engineer recently tuned up a controller in sim (Caladan) and later deployed practically the same thing to the real vehicle, leading to much smoother steering commands. We'll write about that in detail in the future.

Our work is to ensure that API commands sent in sim behave as close to identically as possible on real vehicles, thereby allowing people who build on top of us to focus on the higher-level software stack (business logic layer, etc). Hence, users of our API don't need to have any robotics experience, they simply command the vehicle to do something, and we ensure it gets done. (and our team is comprised of perception, controls, ML, etc engineers)

The details of how this is done will be a future writeup, but the summary is that we pass sensors through a Hardware Abstraction Layer (HAL), along with kinematic/dynamic configuration and data about the vehicle. This allows the global and local planners to plan for a more generic vehicle (e.g., Ackermann steering vs differential steering), while the HAL ensures that the planners don't generate infeasible or unsafe commands.
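
To give a rough flavor of that idea (the config fields and function names below are just illustrative, not our actual HAL interface), the pattern looks something like:

    from dataclasses import dataclass

    @dataclass
    class VehicleConfig:
        steering: str          # "ackermann" or "differential"
        wheelbase_m: float     # used for turning-radius limits on Ackermann platforms
        max_speed_mps: float
        max_steer_rad: float

    def clamp_command(cfg: VehicleConfig, v_mps: float, steer_rad: float):
        """Clamp a planner command to what this particular vehicle can safely do."""
        v = max(-cfg.max_speed_mps, min(cfg.max_speed_mps, v_mps))
        steer = max(-cfg.max_steer_rad, min(cfg.max_steer_rad, steer_rad))
        return v, steer

    # Planners reason about a generic vehicle; the HAL applies vehicle-specific
    # limits before anything reaches the actuators.
    tractor = VehicleConfig("ackermann", wheelbase_m=2.9, max_speed_mps=4.0, max_steer_rad=0.5)
    print(clamp_command(tractor, 6.0, 0.8))  # -> (4.0, 0.5)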

Let me know if the above answers your questions!


When you say this:

  they simply command the vehicle to do something, and we ensure it gets done. (and our team is comprised of perception, controls, ML, etc engineers)
Do you mean you're offering control and perception engineering as a service?


No, sorry, let me be more clear.

Users of our API can send high level commands (ex: go to GPS coordinate X), and our software (on vehicle) will ensure it gets done.
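
Purely as a hypothetical illustration (the endpoint, fields, and vehicle ID below are made up for this comment, not our real API), a high-level command could look something like:

    import json
    import urllib.request

    # Hypothetical command payload: "go to GPS coordinate X"
    command = {
        "command": "goto_waypoint",
        "latitude": 37.7793,
        "longitude": -122.4193,
        "speed_limit_mps": 2.0,
    }

    req = urllib.request.Request(
        "https://api.example.com/v1/vehicles/tractor-01/commands",  # placeholder URL
        data=json.dumps(command).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # the on-vehicle autonomy core handles the rest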

Our software is built by a team of roboticists, including ML, controls, etc. The software we write ensures that the real world vehicle responds almost identically to the simulated one.

There are of course limitations, hence we do a commissioning step where we ensure that the work we've done in simulation for a particular vehicle (sensor locations, localization fusion of sensors, controls tuning due to kinematics/dynamics, etc.) is tested and tuned on the real vehicle. This is done before the real vehicle is put into service and remotely monitored (collecting metrics on performance, status of sensors/actuators, etc.).

Cheers!


To add a somewhat simpler sentence here - our actual product is delivered as SaaS (and it's not at Gmail pricing, it's more enterprise). Were you to become a customer, we'd do a lot of handholding with you to get the thing actually working.

There is a perception stack and a controls tuning stack within that SaaS that we'd be delivering.


Okay thanks this clears up a lot! The … in the process that I noted in my first post sounds like the handholding you’re talking about.


Hey, Ilia here :)

This could be built, but unsure it'd make sense in terms of required compute on each small robot. What would be the use case you are interested in?


To add to this - a neat thing about swarm robots is splitting the compute load between robots and figuring out how different robots move in conjunction with each other. We're just focusing on the software that sits on the robot itself, so any of that coordination would be above our layer.


I can imagine several autonomous trucks sharing large segments of roads, and splitting off to a number of locations at the ends. Such vehicles could benefit from coordination on the shared segments.

I suppose that trucks serving most open mines work that way: they have to share the common spiral part.


Ish. To _probably_ annoy you and everyone on this thread, I'll tell you how this works for a "large publicly traded company that has autonomous vehicles in mines."

Essentially they have a portion of the mine site designated as the "autonomous pit," which is the only place where autonomous vehicles can operate. Not all vehicles in the autonomous pit are autonomous (basically no one can automate the vast majority of industrial vehicles, which is part of what we want to solve). In order to see these non-autonomous vehicles, this PubCo makes the mine buy a $20-50k transceiver to put on every non-autonomous vehicle (in addition to the $1m auton hw upgrade and $250k/yr in autonomy SaaS).

These autonomous vehicles do have sensing, but they rely more on those transceivers to tell where the vehicles they shouldn't hit are.

So ya, sure, it kind of works like how you think it should. But in a worse way.


Hey, Ilia here :)

Not sure if the question is about the simulator Caladan, or about our autonomy core.

For Caladan, we don't support playback of data currently. Do you have a use-case need for this?

For the autonomy core, the localization, costmap generation, and path planner are all probabilistic to a greater or lesser degree, so we can't guarantee strict determinism. However, if the same data is played back over and over, the vehicle would react in basically the same way (perhaps not navigating to the exact same spot, but within a small tolerance). Same question here: what is your need/use case for guaranteed determinism? Is it a safety question? (We can have a much longer chat there.)


Thanks, I was talking more about the autonomy core side of things.

So to narrow it down a bit more, my question is a bit tied to safety, but also to general development. Given a set of logged inputs from a machine that had a field issue X, am I able to reliably reproduce (hopefully deterministically) what happened on that machine? (and therefore what went wrong)


Ah ya, debugging in real environments is a very difficult problem, particularly in robotics, where the system is complex and doesn't always have the best internet connectivity.

The approach we are taking to this is twofold:

1) Robust self-monitoring and metrics, ensuring we measure system performance (we are using Prometheus to allow us to scale to, hopefully, millions of devices).

2) Tying those metrics to a rolling buffer of the last 90 seconds (or so) of data. When an issue occurs, we automatically save it to disk and allow remote operators to pull the raw sensor data as needed.
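
A minimal sketch of that pattern (metric names, buffer size, and the sample format below are assumptions for illustration, not our production code), using the prometheus_client library:

    from collections import deque
    from prometheus_client import Counter, Gauge, start_http_server

    # Metric names here are illustrative only.
    cmd_latency = Gauge("controller_cmd_latency_seconds", "Latency of the last control command")
    fault_count = Counter("autonomy_faults_total", "Autonomy faults detected on this vehicle")

    SENSOR_BUFFER = deque(maxlen=900)  # ~90 s of samples at 10 Hz

    def on_sensor_sample(sample: dict) -> None:
        """Record a sample in the rolling buffer and update metrics."""
        SENSOR_BUFFER.append(sample)
        cmd_latency.set(sample.get("cmd_latency_s", 0.0))

    def on_fault(snapshot_path: str) -> None:
        """Persist the last ~90 s of data so a remote operator can pull it later."""
        fault_count.inc()
        with open(snapshot_path, "w") as f:
            for sample in SENSOR_BUFFER:
                f.write(f"{sample}\n")

    start_http_server(9100)  # expose metrics for Prometheus to scrape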

With the raw data, we can (and do!) play this back through our stack and step through the behavior tree and control outputs, comparing them to what actually happened on device. The system is certainly deterministic enough to allow this sort of debugging and testing!
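
A rough sketch of that replay-and-compare loop (stack_step() and the log record format here are assumptions, not our actual interfaces):

    def replay_and_compare(logged_samples, stack_step, tolerance=0.05):
        """Re-run the stack over logged inputs and flag steps whose control
        output diverges from what was recorded on the vehicle."""
        divergences = []
        for i, sample in enumerate(logged_samples):
            replayed_cmd = stack_step(sample["inputs"])   # e.g. a steering command
            recorded_cmd = sample["command_on_vehicle"]   # what actually ran on device
            if abs(replayed_cmd - recorded_cmd) > tolerance:
                divergences.append((i, replayed_cmd, recorded_cmd))
        return divergences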

We will have a set of articles on this approach in the future!

