Actually… I don’t think this is solvable in theory, unless you have some way of ensuring that you can trust the client. You can probably make it difficult enough to discourage most people, but if you hand someone the code and the keys necessary to perform the encryption, and let them run it on their own hardware, they can reverse engineer it and write code that returns the correct responses. That’s what “trusted computing” is supposed to solve: the private key is stored on a dedicated chip, and certain operations are performed only on that chip so the key never leaves it. Consoles typically use a similar mechanism, but in both cases everything depends on keeping the key secure. If someone does manage to extract the key, they can create responses indistinguishable from those produced by the original code.
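To make that last point concrete, here’s a minimal sketch of a keyed challenge–response check (the key name and functions are hypothetical, and the key is shown inline only for illustration). Once an attacker extracts the key from the shipped client, their reimplementation produces byte-identical responses, so the server has no way to tell them apart:

```python
import hmac
import hashlib
import os

# Hypothetical key embedded in the client binary — in a real client this
# would be hidden or obfuscated, but it must be present for the code to run.
SHARED_KEY = b"key-shipped-inside-the-client"

def server_challenge():
    # Server sends a random nonce and expects a keyed MAC back
    return os.urandom(16)

def genuine_client_response(challenge):
    # The official client proves itself by computing an HMAC over the nonce
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).hexdigest()

def reverse_engineered_response(challenge):
    # An attacker who extracted the key from the binary runs the same
    # computation in their own code — the output is indistinguishable
    extracted_key = SHARED_KEY  # obtained by reverse engineering
    return hmac.new(extracted_key, challenge, hashlib.sha256).hexdigest()

challenge = server_challenge()
print(genuine_client_response(challenge) ==
      reverse_engineered_response(challenge))  # True
```

The server only ever sees the response, and both paths compute the same function with the same key, which is why the whole scheme reduces to keeping the key out of the attacker’s hands.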
Obfuscation can make it harder for a human to analyze the code, or the program running in memory, but it can’t prevent analysis outright. If your device can run the program, then you already have everything you need to duplicate its output.