Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application shall fall within the protection scope of the embodiments of the present application.
As described in the background section, existing binocular cameras suffer from an increased misrecognition rate caused by the imaging deviation between the color camera and the infrared camera. In view of this, embodiments of the present disclosure provide an imaging deviation analysis technique, which can dynamically measure and calculate the imaging deviations of a binocular camera corresponding to different imaging distances and calibrate the binocular camera based on the measurement and calculation results, so as to improve the accuracy of the detection results of the binocular camera. The imaging deviation analysis method, the living body detection method, the corresponding apparatuses, and the computer storage media of the present disclosure are described in detail below with reference to the accompanying drawings.
First embodiment
Fig. 1 shows a schematic flow chart of an imaging deviation analysis method according to a first embodiment of the present application. As shown in the figure, the imaging deviation analysis method of the present embodiment mainly includes the following steps:
Step S102: providing a binocular camera to shoot a test object so as to determine an imaging distance of the binocular camera, and acquiring a first imaging result and a second imaging result of the binocular camera corresponding to the imaging distance.
In this embodiment, the binocular camera includes a color camera and an infrared camera, the first imaging result of the binocular camera is a visible light image, and the second imaging result is an infrared image.
Optionally, a size parameter and/or a position parameter of the region of interest of the binocular camera may be adjusted based on the imaging distance, so that the test object falls into the respective regions of interest of the first imaging result and the second imaging result of the binocular camera.
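As an illustration only (not part of the claimed method), the following is a minimal Python sketch of one way such an adjustment could be made, scaling the region of interest around its centre in inverse proportion to the imaging distance; the scaling rule, function names, and parameters are assumptions.

```python
# Minimal sketch (assumed): scale a region of interest so that a test object of
# known physical size still fills it as the imaging distance changes.
from dataclasses import dataclass

@dataclass
class ROI:
    x: int  # top-left x (pixels)
    y: int  # top-left y (pixels)
    w: int  # width (pixels)
    h: int  # height (pixels)

def adjust_roi(base_roi: ROI, base_distance_mm: float, new_distance_mm: float,
               image_w: int, image_h: int) -> ROI:
    """Shrink or grow the ROI around its centre in inverse proportion to distance."""
    scale = base_distance_mm / new_distance_mm
    w = max(1, int(round(base_roi.w * scale)))
    h = max(1, int(round(base_roi.h * scale)))
    cx = base_roi.x + base_roi.w // 2
    cy = base_roi.y + base_roi.h // 2
    x = min(max(cx - w // 2, 0), image_w - w)
    y = min(max(cy - h // 2, 0), image_h - h)
    return ROI(x, y, w, h)

# Example: ROI defined at 500 mm, camera now used at 750 mm.
print(adjust_roi(ROI(200, 140, 240, 200), 500.0, 750.0, 640, 480))
```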
Step S104: identifying the test object in the first imaging result and the second imaging result, and obtaining first position information of the test object in the first imaging result and second position information of the test object in the second imaging result.
Optionally, the first position information of the test object in the first imaging result may be obtained by identifying the region of interest in the first imaging result, and the second position information of the test object in the second imaging result may be obtained by identifying the region of interest in the second imaging result.
Optionally, the first position information and the second position information of the test object may be obtained by identifying the coordinate position of at least one feature point of the test object in the first imaging result (region of interest) and in the second imaging result (region of interest), respectively.
For example, the first and second position information of the test object may be obtained by identifying coordinate positions of at least one vertex of the test object in the first and second imaging results, respectively.
Optionally, the first position information and the second position information of the test object may be obtained by identifying the imaging areas of the test object as a whole in the first imaging result and the second imaging result, respectively.
In this embodiment, the imaging area includes coordinate information of each feature point of the test object and size information of the test object.
For example, the imaging area of the test object in the first imaging result/the second imaging result may be determined by identifying coordinate information of respective vertices of the test object.
For another example, the imaging area of the test object in the first imaging result/the second imaging result may be determined by identifying coordinate information of one feature vertex of the test object and identifying width and height information of the test object.
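Purely as an illustration (the specification defines no API), the sketch below shows how the imaging-area representation just described, one feature vertex plus width and height information, could be derived from identified vertex coordinates; the names and example coordinates are assumptions.

```python
# Minimal sketch (assumed): build the imaging-area representation of the test object
# (top-left vertex plus width/height) from a set of identified vertex coordinates.
from typing import Dict, List, Tuple

Point = Tuple[int, int]

def position_as_imaging_area(vertices: List[Point]) -> Dict[str, int]:
    """Reduce identified vertices to one feature vertex plus size information."""
    xs = [p[0] for p in vertices]
    ys = [p[1] for p in vertices]
    return {"x": min(xs), "y": min(ys),
            "w": max(xs) - min(xs), "h": max(ys) - min(ys)}

corners = [(120, 80), (360, 80), (360, 260), (120, 260)]
print(position_as_imaging_area(corners))  # {'x': 120, 'y': 80, 'w': 240, 'h': 180}
```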
Step S106: obtaining an imaging deviation analysis result of the binocular camera corresponding to the imaging distance according to the first position information and the second position information.
In this embodiment, a difference calculation may be performed on the first position information and the second position information to obtain an imaging deviation value of the binocular camera corresponding to the imaging distance.
For example, when the first position information and the second position information consist of the coordinate positions of at least one feature point of the test object in the first imaging result and the second imaging result, respectively, a difference calculation may be performed on the coordinate positions of the same feature point of the test object in the two imaging results to obtain the imaging deviation analysis result of the binocular camera corresponding to the imaging distance.
For another example, when the first position information and the second position information consist of the imaging areas of the test object as a whole in the first imaging result and the second imaging result, respectively, a difference calculation may be performed on the imaging areas of the test object in the two imaging results to obtain the imaging deviation analysis result of the binocular camera corresponding to the imaging distance.
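As a hedged illustration of the difference calculation (the averaging and the use of a Euclidean magnitude are assumptions, not requirements of the method), one could compute per-axis offsets between matching feature points in the two imaging results:

```python
# Minimal sketch (assumed): per-axis and overall imaging deviation between the
# positions of the same feature points in the visible-light and infrared results.
import math
from typing import Dict, Tuple

Point = Tuple[int, int]

def imaging_deviation(first: Dict[str, Point], second: Dict[str, Point]) -> Dict[str, float]:
    """Average the coordinate differences over all feature points present in both results."""
    common = first.keys() & second.keys()
    dxs = [second[k][0] - first[k][0] for k in common]
    dys = [second[k][1] - first[k][1] for k in common]
    dx = sum(dxs) / len(dxs)
    dy = sum(dys) / len(dys)
    return {"dx": dx, "dy": dy, "magnitude": math.hypot(dx, dy)}

rgb_pts = {"top_left": (118, 82), "top_right": (358, 81)}
ir_pts  = {"top_left": (127, 90), "top_right": (369, 88)}
print(imaging_deviation(rgb_pts, ir_pts))  # dx=10.0, dy=7.5, magnitude=12.5
```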
As can be seen from the above, the imaging deviation analysis method provided in this embodiment can dynamically measure and calculate the imaging deviation of the binocular camera at different imaging distances, so as to analyze whether the binocular camera is abnormal, thereby improving the accuracy of the subsequent detection result of the binocular camera.
Second embodiment
Fig. 2 shows a schematic flow chart of an imaging deviation analysis method according to a second embodiment of the present application. As shown in the figure, the imaging deviation analysis method of the present embodiment mainly includes the following steps:
Step S202: acquiring a test object having a target area and a peripheral area surrounding the target area.
In this embodiment, the target area of the test object may be rectangular and have a first color, and the peripheral area of the test object may have a second color.
In this embodiment, the first color may be the inverse of the second color, that is, the color of the target area is the inverse of the color of the peripheral area.
Preferably, the first color (i.e., the color of the target area) may be black, and the second color (i.e., the color of the peripheral area) may be white.
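For illustration only, such a test pattern, a black rectangular target area on a white peripheral area, could be rendered as follows; the image dimensions and file name are placeholders, not values given in the specification.

```python
# Minimal sketch (assumed): render a black rectangular target area on a white
# peripheral area so it can be printed or displayed for the deviation measurement.
import numpy as np
import cv2  # OpenCV, used here only to save the pattern to disk

pattern = np.full((600, 800), 255, dtype=np.uint8)   # white peripheral area
pattern[200:400, 280:520] = 0                        # black rectangular target area
cv2.imwrite("test_pattern.png", pattern)
```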
Step S204: adjusting the distance between the binocular camera and the test object so that the target area of the test object completely falls within the respective regions of interest of the first imaging result and the second imaging result of the binocular camera, and the peripheral area of the test object completely covers those regions of interest, so as to determine the imaging distance of the binocular camera.
Optionally, the size parameter and/or the position parameter of the region of interest of the binocular camera may be adjusted based on the desired imaging distance so that the finally determined imaging distance meets expectations.
Step S206: shooting the test object with the binocular camera at the determined imaging distance, and acquiring a first imaging result and a second imaging result of the binocular camera corresponding to the imaging distance.
In this embodiment, the binocular camera includes a color camera and an infrared camera, the first imaging result is a visible light image, and the second imaging result is an infrared image.
Step S208: identifying the target area in the first imaging result and the second imaging result, and obtaining first position information of the target area in the first imaging result and second position information of the target area in the second imaging result.
Optionally, the coordinate position of at least one vertex of the target area of the test object in the first imaging result and in the second imaging result, respectively, may be identified to obtain the first position information and the second position information.
For example, the coordinate positions of the top-left vertex of the target area in the first imaging result and the second imaging result, respectively, may be identified to obtain the first position information and the second position information.
Optionally, the imaging areas of the target area in the first imaging result and the second imaging result, respectively, may be identified to obtain the first position information and the second position information.
In this embodiment, the imaging area of the target area may include the coordinate information of each vertex of the target area, together with the length information and the width information of the target area.
For example, the imaging area of the target area in the first imaging result/the second imaging result may be determined by identifying coordinate information of each vertex of the target area (e.g., rectangle).
As another example, the imaging area of the target area in the first imaging result/the second imaging result may be determined by identifying coordinate information of one vertex (e.g., a top left vertex of the target area) of the target area (e.g., a rectangle), and identifying length information and width information of the target area.
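Purely as an illustrative sketch (thresholding parameters, the single-contour assumption, and all names are ours, not the specification's), the dark rectangular target area could be located within the region of interest of one imaging result as follows:

```python
# Minimal sketch (assumed): locate the dark rectangular target area inside the region
# of interest of one imaging result and return its top-left vertex plus width/height.
from typing import Optional

import cv2
import numpy as np

def locate_target_area(image_gray: np.ndarray, roi: tuple) -> Optional[dict]:
    x, y, w, h = roi
    patch = image_gray[y:y + h, x:x + w]
    # Dark target on a light background: invert after Otsu thresholding.
    _, mask = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    rx, ry, rw, rh = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # Report coordinates in the full-image frame.
    return {"x": x + rx, "y": y + ry, "w": rw, "h": rh}
```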
Optionally, before this step is performed, preprocessing may also be performed on the first imaging result and the second imaging result based on a preset preprocessing rule.
In this embodiment, the preprocessing performed on the first imaging result and the second imaging result may include at least one of a picture mirror rotation process, a picture pixel conversion process, and a picture compression process, so as to improve the accuracy of the subsequent imaging deviation analysis.
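For illustration, the three kinds of preprocessing mentioned above might be applied with OpenCV as sketched below; whether mirroring is needed, the target pixel format, and the output size depend on the specific camera module and are placeholders here.

```python
# Minimal sketch (assumed): mirror, pixel-format conversion, and resize preprocessing.
import cv2
import numpy as np

def preprocess(image: np.ndarray, mirror: bool = True,
               to_gray: bool = True, size: tuple = (640, 480)) -> np.ndarray:
    if mirror:
        image = cv2.flip(image, 1)                       # picture mirror (horizontal flip)
    if to_gray and image.ndim == 3:
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # picture pixel conversion
    return cv2.resize(image, size)                       # picture compression / resize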
Step S210: obtaining an imaging deviation value of the binocular camera corresponding to the imaging distance according to the first position information and the second position information.
In this embodiment, a difference calculation may be performed on the first position information and the second position information of the target area to obtain an imaging deviation value of the binocular camera corresponding to the current imaging distance.
Step S212: determining whether the imaging deviation value of the binocular camera corresponding to the imaging distance is smaller than a preset deviation threshold; if so, step S214 is performed, and if not, step S216 is performed.
Optionally, the preset deviation threshold may be set based on the resolutions of the first imaging result and the second imaging result.
In this embodiment, when the resolution of the first imaging result and the second imaging result is 640 × 480, the preset deviation threshold may be set to 15 pixels.
Step S214: if the imaging deviation value of the binocular camera corresponding to the current imaging distance is smaller than the preset deviation threshold, outputting an analysis result indicating that the imaging deviation of the binocular camera corresponding to the imaging distance is normal.
Step S216: if the imaging deviation value of the binocular camera corresponding to the current imaging distance is not smaller than the preset deviation threshold, outputting an analysis result indicating that the imaging deviation of the binocular camera corresponding to the imaging distance is abnormal.
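The decision logic of steps S212 to S216 can be summarized by the short sketch below; the 15-pixel default is the value given above for a 640 × 480 resolution, and other resolutions would need their own threshold.

```python
# Minimal sketch (assumed): threshold comparison of steps S212-S216.
def analyze_deviation(deviation_px: float, threshold_px: float = 15.0) -> str:
    if deviation_px < threshold_px:
        return "imaging deviation normal for this imaging distance"
    return "imaging deviation abnormal for this imaging distance"

print(analyze_deviation(9.4))    # normal
print(analyze_deviation(21.0))   # abnormal
```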
In summary, the imaging deviation analysis method of this embodiment uses, as the test object, an object having a target area and a peripheral area surrounding the target area to dynamically measure and calculate the imaging deviation values of the binocular camera corresponding to different imaging distances, so that the accuracy of the imaging deviation measurement results can be improved.
Third embodiment
A third embodiment of the present application provides a computer storage medium having stored therein instructions for executing the steps of the imaging deviation analysis method according to the first or second embodiment.
Fourth embodiment
Fig. 3 shows a schematic flow chart of a living body detection method according to a fourth embodiment of the present application. The living body detection method of this embodiment is suitable for various application scenarios that require identity authentication, such as access control systems and gates.
As shown in the figure, the living body detection method of the present embodiment mainly includes the following steps:
Step S302: obtaining, according to the imaging distance of the binocular camera, an imaging deviation analysis result of the binocular camera corresponding to the imaging distance by using the imaging deviation analysis method, and calibrating the binocular camera according to the imaging deviation analysis result.
In this embodiment, the imaging distance of the binocular camera may be determined according to the separation distance between the installation position of the binocular camera and the target detection position.
In this embodiment, the imaging deviation analysis method according to the first embodiment or the second embodiment may be used to obtain the imaging deviation value of the binocular camera corresponding to the imaging distance, and the binocular camera may be calibrated based on the imaging deviation value. For example, the imaging deviation value may be set directly in the binocular camera for calibration.
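As one hedged illustration of how a stored deviation value might be applied in software (the warp-based compensation shown here is our assumption, not necessarily how a given camera firmware applies calibration), the infrared frame could be translated by the measured offset before the two frames are compared:

```python
# Minimal sketch (assumed): compensate the infrared frame by the measured (dx, dy) offset.
import cv2
import numpy as np

def compensate_ir_frame(ir_frame: np.ndarray, dx: float, dy: float) -> np.ndarray:
    h, w = ir_frame.shape[:2]
    shift = np.float32([[1, 0, -dx], [0, 1, -dy]])   # undo the measured offset
    return cv2.warpAffine(ir_frame, shift, (w, h))
```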
Step S304: shooting, with the calibrated binocular camera, a target object that satisfies the imaging distance, obtaining an imaging result of the target object, and identifying the target object according to the imaging result.
In the present embodiment, the binocular camera may be used to photograph a target object located at the target detection position, so as to obtain an imaging result of the target object that satisfies the imaging distance.
Optionally, whether the target object in the imaging result matches a preset standard object may be identified.
In this embodiment, the standard imaging information corresponding to each preset standard object may be collected in advance, and the imaging result obtained by the binocular camera is compared with each piece of standard imaging information, so as to obtain an identification result indicating whether the target object matches a preset standard object, for example, whether the target object is a person pre-registered in the base library of the identification terminal device.
Optionally, the standard imaging information of the preset standard object may include facial feature information and/or posture feature information of the preset standard object.
In this embodiment, when the target object in the imaging result is identified as matching the preset standard object, living body detection is further performed on the target object (i.e., whether the target object is a real living body is verified) by using the infrared camera of the binocular camera.
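Purely for illustration, the comparison against pre-collected standard imaging information could be sketched as below, reducing both the imaging result and each registered entry to fixed-length feature vectors matched by cosine similarity; the feature extractor, the threshold, and all names are assumptions outside the specification.

```python
# Minimal sketch (assumed): match a query feature vector against registered entries.
import numpy as np

def best_match(query: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return (name, score) of the best-matching registered object, or None if no match."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cos(query, feat) for name, feat in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else None
```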
In summary, the living body detection method according to the embodiment of the present application detects and identifies the target object by using a binocular camera calibrated based on the imaging deviation analysis result, and can therefore effectively improve the accuracy of the identification result of the binocular camera.
Fifth embodiment
A fifth embodiment of the present application provides a computer storage medium having stored therein instructions for executing the steps of the living body detection method according to the fourth embodiment.
Sixth embodiment
Fig. 4 is a schematic structural diagram of an imaging deviation analysis apparatus according to a sixth embodiment of the present application. As shown in the drawing, the imaging deviation analysis apparatus 400 of the present embodiment mainly includes: an imaging module 402, an identification module 404, and an analysis module 406.
The imaging module 402 is configured to provide a binocular camera 40 to shoot a test object, to determine an imaging distance of the binocular camera 40, and to acquire a first imaging result and a second imaging result of the binocular camera 40 corresponding to the imaging distance.
Optionally, the binocular camera 40 includes a color camera and an infrared camera, the first imaging result is a visible light image, and the second imaging result is an infrared image.
Optionally, the imaging module 402 is further configured to adjust a size parameter and/or a position parameter of the region of interest of the binocular camera 40 based on the imaging distance.
Optionally, the imaging module 402 is further configured to provide the binocular camera 40 to shoot the test object so that the test object falls into the region of interest of each of the first imaging result and the second imaging result of the binocular camera 40.
Optionally, the test object has a target area and a peripheral area surrounding the target area, wherein the target area is rectangular and has a first color, the peripheral area has a second color, and the first color is the inverse of the second color; preferably, the first color is black and the second color is white. The imaging module 402 is further configured to provide the binocular camera 40 to photograph the test object, so that the target area of the test object completely falls within the respective regions of interest of the first imaging result and the second imaging result of the binocular camera 40, and the peripheral area of the test object completely covers those regions of interest.
The identification module 404 is configured to identify the test object in the first imaging result and the second imaging result, and obtain first position information of the test object in the first imaging result and second position information of the test object in the second imaging result.
Optionally, the identification module 404 is further configured to identify the region of interest in the first imaging result to obtain the first position information of the test object in the first imaging result, and to identify the region of interest in the second imaging result to obtain the second position information of the test object in the second imaging result.
Optionally, the identification module 404 is further configured to identify a coordinate position of at least one feature point of the test object in the first imaging result and the second imaging result, respectively, so as to obtain the first position information and the second position information of the test object; or identifying imaging areas of the whole test object in the first imaging result and the second imaging result respectively to obtain the first position information and the second position information of the test object; the imaging area includes coordinate information of each feature point of the test object and size information of the test object.
Optionally, the identification module 404 is further configured to identify the coordinate position of at least one vertex of the target area in the first imaging result and the second imaging result, respectively, so as to obtain the first position information and the second position information of the test object; or to identify the imaging areas of the target area in the first imaging result and the second imaging result, respectively, so as to obtain the first position information and the second position information of the test object; wherein the imaging area includes the vertex coordinate information, length information, and width information of the target area.
Optionally, the identification module 404 is further configured to perform preprocessing on the first imaging result and the second imaging result based on a preset preprocessing rule; wherein the preset preprocessing rule comprises: at least one of a picture mirror rotation process, a picture pixel conversion process, and a picture compression process.
The analysis module 406 is configured to obtain an imaging deviation analysis result of the binocular camera 40 corresponding to the imaging distance according to the first position information and the second position information.
Optionally, the analysis module 406 is further configured to perform a difference calculation according to the first position information and the second position information, so as to obtain an imaging deviation value of the binocular camera 40 corresponding to the imaging distance.
Optionally, the analysis module 406 is further configured to compare the imaging deviation value of the binocular camera 40 corresponding to the imaging distance with a preset deviation threshold, output an analysis result indicating that the imaging deviation of the binocular camera 40 corresponding to the imaging distance is normal if the imaging deviation value is smaller than the preset deviation threshold, and output an analysis result indicating that the imaging deviation of the binocular camera 40 corresponding to the imaging distance is abnormal if the imaging deviation value is not smaller than the preset deviation threshold.
Optionally, the analysis module 406 is further configured to set the preset deviation threshold based on the resolution of the first imaging result and the second imaging result.
Optionally, the preset deviation threshold comprises 15 pixels corresponding to a resolution of 640 x 480.
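As a final illustration of the apparatus structure described above (the specification defines only the modules' responsibilities, not an API, so all class and method names here are assumptions), the three modules could be composed as follows:

```python
# Minimal sketch (assumed): composition of the imaging deviation analysis apparatus 400.
class ImagingDeviationAnalysisApparatus:
    def __init__(self, imaging_module, identification_module, analysis_module):
        self.imaging = imaging_module            # shoots the test object, returns both results
        self.identification = identification_module  # extracts first/second position information
        self.analysis = analysis_module          # turns position information into a verdict

    def run(self, imaging_distance):
        first, second = self.imaging.capture(imaging_distance)
        pos1 = self.identification.locate(first)
        pos2 = self.identification.locate(second)
        return self.analysis.evaluate(pos1, pos2, imaging_distance)
```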
Seventh embodiment
Fig. 5 shows a schematic structural diagram of a living body detection apparatus according to a seventh embodiment of the present application. As shown in the figure, the living body detection apparatus 500 of the present embodiment mainly includes a calibration module 502 and a detection module 504.
The calibration module 502 is configured to obtain an imaging deviation analysis result of the binocular camera 40 corresponding to the imaging distance by using the imaging deviation analysis apparatus 400 according to the imaging distance of the binocular camera 40, and calibrate the binocular camera 40 according to the imaging deviation analysis result.
The detection module 504 is configured to capture a target object meeting the imaging distance by using the calibrated binocular camera 40, obtain an imaging result of the target object, and identify the target object according to the imaging result.
Optionally, the detection module 504 is further configured to, if it is identified that the target object in the imaging result matches a preset standard object, further perform living body detection on the target object by using an infrared camera in the binocular camera 40.
Optionally, the detection module 504 is further configured to acquire the standard imaging information corresponding to each preset standard object, and to compare the imaging result with each piece of standard imaging information so as to obtain an identification result indicating whether the target object matches the preset standard object.
Optionally, the standard imaging information includes facial feature information and/or posture feature information of the preset standard object.
In summary, the imaging deviation analysis method, the imaging deviation analysis device and the computer storage medium according to the embodiments of the present application can dynamically measure and calculate the imaging deviation of the binocular camera corresponding to different imaging distances.
In addition, the accuracy of the imaging deviation analysis result can be improved by configuring the test object to have a target area and a peripheral area surrounding the target area, wherein the target area is rectangular and has a first color, the peripheral area has a second color, and the first color is the inverse of the second color.
Furthermore, the living body detection method, the living body detection apparatus, and the computer storage medium provided by the present application can improve the accuracy of the detection results of the binocular camera by calibrating the binocular camera according to the imaging deviation analysis result.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the embodiments of the present application, rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.