When I ran the face-detection model, I noticed that only 3 faces were recognized in a 1280x720 picture, but when I magnified the picture and used only part of it as input, more faces were recognized. I expected to be able to recognize more faces on a high-resolution picture.
Can anyone offer some advice on how to solve this problem?
Can you give a bit more detail? Which model are you using, since the package includes many face detection models? What command lines are you using to convert the model with Model Optimizer, etc.?
From your description, the behavior sounds quite normal. If you refer to the documentation of the face-detection-adas-0001 model, you'll see that the average precision (i.e. detection accuracy) degrades significantly when face sizes become smaller than 64x64 pixels.
The first step before detection is always a downscale to a fixed resolution (384x672 for the aforementioned model), so as you increase the size of the original image, the faces become smaller and smaller at the actual input resolution of the network. For example, if your original image is 384x672 and you "downscale" it to 384x672, a 64x64 face in the original image stays 64x64. But if your input image is 720x1280, then after downscaling it to 384x672, faces that were 64x64 in the original image become around 32x32, which makes them harder to detect.
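To make the arithmetic above concrete, here is a small sketch that estimates the face size the network actually sees after the fixed resize. The 672x384 input resolution comes from the model description above; the helper function itself is just an illustration, not part of any OpenVINO API.

```python
def effective_face_size(face_px, image_wh, net_wh=(672, 384)):
    """Approximate face side length (pixels) after resizing the image
    to the network input resolution.

    face_px  -- face side length in the original image, e.g. 64
    image_wh -- (width, height) of the original image
    net_wh   -- (width, height) of the network input (672x384 assumed here)
    """
    scale_w = net_wh[0] / image_wh[0]
    scale_h = net_wh[1] / image_wh[1]
    # A plain resize scales each axis independently; use the mean of the
    # two scale factors as a rough single-number estimate.
    return face_px * (scale_w + scale_h) / 2.0

print(effective_face_size(64, (672, 384)))   # no downscale: stays 64.0
print(effective_face_size(64, (1280, 720)))  # roughly halved, below 64
```

Running this shows why a 1280x720 input hurts: a 64x64 face shrinks to roughly 34 pixels per side, well under the 64x64 threshold mentioned above.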
Your approach of using part of the image is indeed a correct way to increase accuracy. But if you need to run the detector over the whole image, you will most likely need to run it several times - i.e. you increase the accuracy by sacrificing performance.
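As a sketch of that "run it several times" idea: split the image into overlapping tiles at roughly the network's input resolution, run the detector on each tile, and shift the resulting boxes back to full-image coordinates. `detect_faces` below is a hypothetical stand-in for your actual inference call, not a real OpenVINO function, and the tile size/overlap values are assumptions you would tune.

```python
def make_tiles(img_w, img_h, tile_w, tile_h, overlap=0.2):
    """Return (x, y) top-left offsets of tiles covering the whole image."""
    step_x = max(1, int(tile_w * (1 - overlap)))
    step_y = max(1, int(tile_h * (1 - overlap)))
    xs = list(range(0, max(img_w - tile_w, 0) + 1, step_x)) or [0]
    ys = list(range(0, max(img_h - tile_h, 0) + 1, step_y)) or [0]
    # Make sure the right and bottom edges are covered.
    if xs[-1] + tile_w < img_w:
        xs.append(img_w - tile_w)
    if ys[-1] + tile_h < img_h:
        ys.append(img_h - tile_h)
    return [(x, y) for y in ys for x in xs]

def detect_tiled(detect_faces, img_w, img_h, tile_w=672, tile_h=384):
    """Run a per-tile detector over the whole image.

    detect_faces(x, y, w, h) -> list of (bx, by, bw, bh) boxes in
    tile-local coordinates; it is a placeholder for your inference call
    (crop the tile, infer, parse the detections).
    """
    boxes = []
    for x, y in make_tiles(img_w, img_h, tile_w, tile_h):
        for bx, by, bw, bh in detect_faces(x, y, tile_w, tile_h):
            # Shift the box back to full-image coordinates.
            boxes.append((bx + x, by + y, bw, bh))
    # Overlapping tiles can report the same face twice; in practice you
    # would deduplicate with non-maximum suppression here.
    return boxes
```

For a 1280x720 image with 672x384 tiles and 20% overlap this produces 9 tiles, i.e. 9 inference calls instead of 1 - exactly the accuracy-for-performance trade-off described above.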