I'm not asking for a how-to, but I have some conceptual questions about using a re-id network.
In the samples, we have the person re-identification networks.
If I took a different person re-identification network and ran it through the optimizer, could I change the input shape in the process?
Rather than a vertical rectangle, I'd like to run re-identification on a horizontal rectangle.
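To make the question concrete, here is roughly what I have in mind, assuming the OpenVINO Model Optimizer's `--input_shape` override and a hypothetical model file name; I am also assuming the original model expects a vertical 256x128 crop, which may not match the actual Intel model:

```shell
# Hypothetical sketch: convert a re-id model while overriding the input
# shape from a vertical crop (H=256, W=128) to a horizontal one (H=128, W=256).
# "person-reid.onnx" is a made-up file name for illustration.
mo.py --input_model person-reid.onnx --input_shape [1,3,128,256]
```

My uncertainty is whether a shape override like this is even meaningful for a network whose weights were trained on vertically oriented crops.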
Which leads to my other question about re-id networks: are they (like the Intel model) trained to detect faces (or whatever the target is), or is the network trained and weighted to process an input of a specific shape (in this case, a vertical rectangle)? In other words, is the network inferring a face and then outputting that face's feature vector on its output layer, for cosine-similarity post-processing?
Or does such a network simply translate the input into vectors and supply those?
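For reference, this is the cosine-similarity post-processing step I mean, sketched in plain NumPy with made-up 256-dimensional embeddings (the actual embedding size and comparison logic of the Intel model may differ):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings, standing in for re-id network outputs.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=256)                       # person A, first sighting
emb_b = emb_a + rng.normal(scale=0.1, size=256)    # person A again (small perturbation)
emb_c = rng.normal(size=256)                       # a different person

print(cosine_similarity(emb_a, emb_b))  # close to 1.0 -> same identity
print(cosine_similarity(emb_a, emb_c))  # near 0.0 -> different identity
```

So my question boils down to: does the network itself do any detection, or does it only produce these vectors for a comparison step like the one above?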