This is a really cool idea. I think you could get a long way by finding calculations whose results depend on the chip being used.
Even more interesting would be a generalized version that sought to discover properties of an unknown calculator. For example, you could work out the internal precision by looking at rounding errors, or time calculations on numbers of different sizes and make inferences about the size of available memory. This would be cool because you can then apply it to more general sorts of things, like human brains.
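The precision-probing part can be sketched pretty directly: treat the calculator as a black box and halve a perturbation until adding it to 1 no longer changes the result. Here `black_box_add` is a hypothetical stand-in for the unknown device (this sketch just uses Python's own float arithmetic):

```python
import math

def black_box_add(a, b):
    # Hypothetical stand-in for the unknown calculator; swap in
    # whatever interface the real device exposes.
    return a + b

def probe_precision(add):
    """Estimate the black box's machine epsilon: the smallest power
    of two that, when added to 1, still yields a result above 1."""
    eps = 1.0
    while add(1.0, eps / 2) > 1.0:
        eps /= 2
    return eps

eps = probe_precision(black_box_add)
# Mantissa width in bits, counting the implicit leading bit.
bits = int(-math.log2(eps)) + 1
```

Run against IEEE 754 double-precision floats this recovers 53 bits of mantissa; against an unknown calculator, the recovered epsilon tells you its working precision without ever opening the case.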
> This would be cool because you can then apply it to more general sorts of things, like human brains.
I would expect that to fall apart because the underlying mechanics are different; we know what errors converting between base 2 and 10 look like, but I think it's pretty far from obvious that the same principles extend to whatever everyone's favorite biological neural network uses to execute mathematics. You could do the same kinds of analysis, and it would probably tell you something interesting; I just wouldn't trust inferences made on the basis of lessons learned from digital computers.