I am doing a project that uses the NCS2 on an Arm64 board.
Besides cross-compile issues, I am also confused about the relation between the Model Optimizer and the Inference Engine.
Since there is no Model Optimizer in the release for the Pi, does that mean the Model Optimizer output is platform-independent,
and I can do the model optimization on an Intel PC and save the result for the embedded system?
Yes, the Model Optimizer output is platform-independent: you can convert your model to IR (an .xml topology file plus a .bin weights file) using the Model Optimizer on a separate system, then transfer the resulting .xml and .bin files to your Raspberry Pi and run inference there. Just make sure that the Model Optimizer and Inference Engine are the same version.
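As a rough sketch of that two-machine workflow (the Model Optimizer install path, model file names, and output directory below are placeholders; adjust them for your setup and OpenVINO version):

```shell
# --- On the Intel PC: convert the trained model to IR ---
# FP16 is the precision the NCS2 (MYRIAD plugin) expects.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model frozen_model.pb \
    --data_type FP16 \
    --output_dir ./ir

# --- Copy ir/frozen_model.xml and ir/frozen_model.bin to the Pi, then on the Pi: ---
python3 - <<'EOF'
from openvino.inference_engine import IECore  # Inference Engine Python API

ie = IECore()
net = ie.read_network(model="frozen_model.xml", weights="frozen_model.bin")
# "MYRIAD" selects the NCS2 device plugin.
exec_net = ie.load_network(network=net, device_name="MYRIAD")
EOF
```

The .xml/.bin pair itself is just data, so it transfers between architectures without modification; only the Inference Engine build on the Pi needs to be Arm-compatible.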
Please let me know if you have any further questions!