CN112153300A - Multi-view camera exposure method, device, equipment and medium - Google Patents


Info

Publication number
CN112153300A
CN112153300A (application CN202011016060.4A)
Authority
CN
China
Prior art keywords
face
view camera
image
exposure
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011016060.4A
Other languages
Chinese (zh)
Inventor
姚志强
周曦
钮锋
缑钢锋
张奕
万金学
吴凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yunconghonghuang Intelligent Technology Co Ltd
Original Assignee
Guangzhou Yunconghonghuang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yunconghonghuang Intelligent Technology Co Ltd filed Critical Guangzhou Yunconghonghuang Intelligent Technology Co Ltd
Priority to CN202011016060.4A
Publication of CN112153300A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

The invention provides a multi-view camera exposure method, device, equipment and medium. The method comprises the following steps: acquiring, with a multi-view camera, at least two frames of images of the same target object collected under the same or different illumination conditions; and when it is detected that the face of the target object cannot be acquired in any one image, re-determining the face region in that image according to the face positions acquired from the other images, and re-exposing the face region to obtain a face image. By using the face positions for cross-reference, the invention ensures that all cameras can be exposed normally to obtain face images; at the same time, the exposure effect of the camera in strong-light, backlight and dark environments is greatly improved; when applied to living-body detection, the recognition rate of living-body detection is also greatly improved.

Description

Multi-view camera exposure method, device, equipment and medium
Technical Field
The invention relates to the technical field of image processing, in particular to a multi-view camera exposure method, device, equipment and medium.
Background
With the continuous advance of security technologies, face recognition is applied ever more widely in daily life, particularly in government departments, face-based access control, gate machines, attendance systems and the financial industry, where it serves an irreplaceable intelligent safety-monitoring role.
However, when a binocular or multi-view camera acquires a face in an image, scene limitations on the camera's dynamic range and exposure algorithm may prevent it from capturing a clear face image, yielding a poor output: for example, the face is too dark in a strong backlight scene and overexposed in a strong frontlight scene. A multi-view camera exposure method, device, equipment and medium are therefore urgently needed.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, an object of the present invention is to provide a method, an apparatus, a device and a medium for exposing a multi-view camera, which are used to solve the problem of poor exposure effect of the multi-view camera in the prior art.
In order to achieve the above and other related objects, the present invention provides an exposure method for a multi-view camera, comprising the steps of:
acquiring, with a multi-view camera, at least two frames of images of the same target object collected under the same or different illumination conditions;
and when it is detected that the face of the target object cannot be acquired in any one image, re-determining the face region in that image according to the face positions acquired from the other images, and re-exposing the face region to obtain a face image.
The present invention also provides an exposure apparatus for a multi-view camera, comprising:
the acquisition module, which acquires, with the multi-view camera, at least two frames of images of the same target object collected under the same or different illumination conditions;
and the exposure module, which, when it is detected that the face of the target object cannot be acquired in any one image, re-determines the face region in that image according to the face positions acquired from the other images and re-exposes the face region to obtain a face image.
The present invention also provides an apparatus comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform a method as described in one or more of the above.
The present invention also provides one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the methods as described in one or more of the above.
As described above, the multi-view camera exposure method, device, equipment and medium provided by the present invention have the following advantages:
A multi-view camera is used to detect the face; when it is detected that the face of the target object cannot be acquired in any one image, the face region in that image is determined according to the face positions acquired from the other images, and the region is re-exposed to obtain a face image. By using the face positions for cross-reference, the invention ensures that all cameras can be exposed normally to obtain face images; at the same time, the exposure effect of the camera in strong-light, backlight and dark environments is greatly improved; when applied to living-body detection, the recognition rate of living-body detection is also greatly improved.
Drawings
Fig. 1 is a schematic flow chart of an exposure method for a multi-view camera according to an embodiment;
fig. 2 is a schematic flow chart of a multi-view camera exposure method according to another embodiment;
fig. 3 is a schematic flowchart of an exposure method for a multi-view camera according to another embodiment;
fig. 4 is a schematic flowchart of an exposure method for a multi-view camera according to another embodiment;
fig. 5 is a schematic flowchart of an exposure method for a multi-view camera according to another embodiment;
fig. 6 is a schematic flowchart of a binocular camera exposure live body detection provided in an embodiment;
fig. 7 is a schematic diagram of a hardware structure of the multi-view camera exposure apparatus according to an embodiment;
fig. 8 is a schematic diagram of a hardware structure of a multi-view camera exposure apparatus according to another embodiment;
fig. 9 is a schematic diagram of a hardware structure of a multi-view camera exposure apparatus according to another embodiment;
fig. 10 is a schematic diagram of a hardware structure of a multi-view camera exposure apparatus according to another embodiment;
fig. 11 is a schematic diagram of a hardware structure of a multi-view camera exposure apparatus according to another embodiment;
fig. 12 is a schematic hardware structure diagram of a terminal device according to an embodiment;
fig. 13 is a schematic hardware structure diagram of a terminal device according to another embodiment;
fig. 14 is a diagram illustrating an exposure effect of a conventional multi-view camera in a backlight scene according to an embodiment;
fig. 15 is a diagram illustrating an exposure effect of the multi-view camera in a backlight scene according to an embodiment of the present invention;
fig. 16 is a diagram illustrating an exposure effect of the conventional multi-view camera in a strong front light scene according to an embodiment;
fig. 17 is a diagram illustrating an exposure effect of the multi-view camera in a strong front light scene according to an embodiment of the present invention.
Element number description:
0 correction module
1 acquisition Module
2 Exposure module
3 exposure parameter adjusting module
4 first abnormal exposure module
5 second abnormal exposure module
1100 input device
1101 first processor
1102 output device
1103 first memory
1104 communication bus
1200 processing assembly
1201 second processor
1202 second memory
1203 communication assembly
1204 Power supply Assembly
1205 multimedia assembly
1206 voice assembly
1207 input/output interface
1208 sensor assembly
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the related technologies in this field, with the continuous advance of security technologies, face recognition is applied ever more widely in daily life, particularly in government departments, face-based access control, gate machines and the financial industry, where it provides an irreplaceable intelligent safety-monitoring function. However, when a binocular or multi-view camera acquires a face in an image, scene limitations on the camera's dynamic range and exposure algorithm may prevent it from capturing a clear face image, yielding a poor output: for example, the face is too dark in a strong backlight scene and overexposed in a strong frontlight scene.
To address these problems, the invention provides a multi-view camera exposure method, a multi-view camera exposure device, an electronic device and a storage medium.
A near-infrared image is an image formed by a sensor receiving the near-infrared spectrum reflected or radiated by a target object.
Living-body detection (liveness detection) is a method of verifying an object's real physiological characteristics in identity-verification scenarios. In face recognition applications, liveness detection verifies whether the user is a real living person operating in person, using actions such as blinking, opening the mouth, shaking the head and nodding, together with facial landmark localization and face tracking. It can effectively resist common attacks such as photos, face swapping, masks, occlusion and screen re-shooting, helping to discriminate fraudulent behavior and safeguard the user's interests.
Referring to fig. 1, the present invention provides an exposure method for a multi-view camera, which includes the following steps:
Step S1, acquiring, with a multi-view camera, at least two frames of images of the same target object collected under the same or different illumination conditions;
For example, when the multi-view camera is a binocular camera, two frames of images are collected: one frame is a color image generated by an RGB camera under natural-light or white-light conditions, and the other is a near-infrared image generated by an NIR camera under near-infrared illumination. This binocular exposure method can be applied to living-body detection to judge whether the acquired target object is a living body. As another example, both cameras of the binocular pair may be RGB cameras used for ordinary photographing.
It should be noted that when the multi-view camera shoots the target object under the same illumination condition, at least two frames of images are obtained, and the number of images grows with the number of cameras in the camera module. For example, to improve photographing quality, terminals such as smartphones and smart tablets are now fitted with three, four or five rear cameras, so the present application can be widely applied to photographing a target object (a person).
Step S2, when it is detected that the face of the target object cannot be acquired in any one image, re-determining the face region in that image according to the face positions acquired from the other images, and re-exposing the face region to obtain a face image.
The face region within that image is re-determined according to the face position acquired from the other images; when the face region is re-exposed, the exposure parameters include one or more of exposure time, sensitivity and aperture value.
The exposure adjustment itself is prior art and is not described again here.
In this embodiment, a multi-view camera is used to detect the face; when it is detected that the face of the target object cannot be obtained in one image, the face region in that image is determined according to the face positions obtained from the other images, and the region is re-exposed to obtain a face image. By using the face positions for cross-reference, the invention ensures that all cameras can be exposed normally to obtain face images; at the same time, the exposure effect of the camera in strong-light, backlight and dark environments is greatly improved; when applied to living-body detection, the recognition rate of living-body detection can also be greatly improved.
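The re-exposure step can be sketched as follows. This is a hedged illustration, not the patent's implementation: it assumes the cameras have already been corrected (step S0) so a face rectangle maps between views with only a small fixed pixel offset, and it adjusts only exposure time using a simple ROI-mean target; the target brightness value and the helper names are made up for the example.

```python
TARGET_MEAN = 120  # assumed target mean face brightness on a 0-255 scale

def map_face_rect(rect_b, offset=(0, 0)):
    """Map a face rectangle (x, y, w, h) detected by camera B into
    camera A's image, assuming calibration leaves only a pixel offset."""
    x, y, w, h = rect_b
    dx, dy = offset
    return (x + dx, y + dy, w, h)

def roi_mean(image, rect):
    """Mean brightness of a rectangular region in a grayscale image,
    where `image` is a list of rows of pixel values."""
    x, y, w, h = rect
    pixels = [image[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    return sum(pixels) / len(pixels)

def reexpose(exposure_time, image, rect):
    """Scale the exposure time so the face ROI approaches TARGET_MEAN:
    the simplest form of ROI-weighted auto-exposure."""
    mean = roi_mean(image, rect)
    return exposure_time * TARGET_MEAN / max(mean, 1.0)
```

For a face region metering a mean of 30 (a backlit face), the exposure time is scaled by 4; a real controller would also clamp the result and iterate over several frames, as the text's "until a face image of the required quality is obtained" suggests.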
In another embodiment, the adaboost algorithm is used to detect the face position of the target object within any of the images.
Specifically, face-region detection may adopt a feature-based face detection method, a template matching method, or the adaboost algorithm. Feature-based methods include the overall-contour method, skin-color detection and the organ-distribution method; template matching methods include the mosaic method, predefined-template matching and the deformable-template method.
Skin-color-based face detection achieves a high detection rate, but it easily misjudges non-face skin regions (hands, feet, necks and so on) and skin-colored regions in the background. Moreover, because it relies on a single fixed threshold, the algorithm is not robust and is unsuitable for environments with changing illumination, shadows and other external factors; it is therefore limited to skin detection against simple backgrounds.
A face detection algorithm based on Gabor features with a BP neural network is simple to understand, has strong learning ability, a small computational load and good noise resistance, and its accuracy can be improved by enlarging the training sample set.
Compared with other face detection algorithms, adaboost-based detection is more robust, with a high detection rate and a low false-detection rate across different skin colors, faces rotated by moderate angles, and illumination changes. The adaboost algorithm is therefore preferred for face-region detection in the present application.
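The core of an adaboost detector is a weighted vote of weak classifiers, as in the Viola-Jones cascade that practical detectors (for example OpenCV's `cv2.CascadeClassifier`) are built on. The sketch below shows only that voting step; the feature values, thresholds and weights are invented for illustration, whereas a trained detector uses thousands of Haar-feature stumps.

```python
def stump(feature_value, threshold, polarity):
    """Weak classifier: vote +1 (face) or -1 (non-face) based on a
    single feature value crossing a learned threshold."""
    return polarity if feature_value >= threshold else -polarity

def adaboost_predict(features, stumps):
    """Strong classifier: sign of the alpha-weighted vote of the weak
    stumps. `stumps` is a list of (feature_index, threshold, polarity,
    alpha) tuples produced by adaboost training."""
    score = sum(alpha * stump(features[i], t, p)
                for i, t, p, alpha in stumps)
    return 1 if score >= 0 else -1
```

With two hypothetical stumps weighted 0.7 and 0.3, a window whose first feature fires strongly is classified as a face even if the second stump disagrees; this weighting of many weak, cheap tests is what gives the algorithm its robustness to skin color and illumination changes noted above.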
In an exemplary embodiment, referring to fig. 2, a flowchart of an exposure method for a multi-view camera according to another embodiment of the present invention, the difference from fig. 1 is that, before step S1, the method further includes:
Step S0, correcting the internal and external parameters configured for the multi-view camera.
Specifically, the multi-view camera is corrected by configuring the camera's internal and external parameters, which may include one or more of sharpness, resolution and brightness: sharpness refers to the clarity of each detail and its boundary in the image; resolution refers to the precision of the screen image; brightness is the luminance of the picture.
As another example, when images are acquired under the same illumination condition, the camera's internal parameters may further cover level correction, automatic white balance, color restoration, automatic exposure, gamma correction, demosaicing, RGB-to-YUV conversion, noise reduction and the like, so that the Bayer RAW image signal from the RGB camera can be restored to a rendered image closer to the current real environment.
In this embodiment, step S0 makes the error between the collected images extremely small, which reduces the error in the subsequent mapping of the face position between images; even between the color image generated under natural-light or white-light conditions and the near-infrared image generated under near-infrared illumination, only a slight pixel error remains.
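One of the image-pipeline stages listed above, gamma correction, is simple enough to show concretely. The sketch below is illustrative only; the gamma value of 2.2 is a common assumption, not a value stated in the patent.

```python
def gamma_correct(value, gamma=2.2):
    """Encode a linear-light pixel value in [0, 1] with a 1/gamma
    power curve, the standard gamma-correction stage of an ISP."""
    return value ** (1.0 / gamma)
```

Mid-gray linear values are brightened (0.5 maps to roughly 0.73), matching the perceptual encoding that makes the restored image look closer to the real scene.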
In an exemplary embodiment, referring to fig. 3, a flowchart of an exposure method for a multi-view camera according to another embodiment of the present invention, before step S2 the method further includes:
Step S11, when a face position is detected in each of the other images, selecting the exposure parameters of the camera whose face image has the highest quality as the exposure parameters for the camera in which no face was detected;
Each image corresponds to its own face position, size and angle. A face-quality scoring algorithm computes a quality score for each image; the image with the highest score is selected as the highest-quality face image, and its exposure parameters are applied to the camera in which no face was detected. The exposure parameters include one or more of exposure time, sensitivity and aperture value, and may include other parameter values as well; this embodiment does not limit them.
In this embodiment, in a difficult light-source scene, replacing the exposure parameters of the camera that failed to detect a face with the best-quality parameters yields a corresponding face image and markedly improves the shooting quality.
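Step S11 reduces to an argmax over the cameras that did detect a face. A minimal sketch, assuming the face-quality scoring algorithm is available as a precomputed `quality` value per camera (the dictionary keys here are invented for the example):

```python
def best_exposure(detections):
    """`detections`: one dict per camera that detected a face, with a
    'quality' score and an 'exposure' parameter set. Returns the
    exposure parameters of the highest-scoring face image, to be
    applied to the camera in which no face was detected."""
    best = max(detections, key=lambda d: d["quality"])
    return best["exposure"]
```

For instance, given two detections scored 0.4 and 0.9, the failing camera inherits the exposure time, sensitivity and aperture of the 0.9-scored camera.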
In other embodiments, please refer to fig. 6, which is a schematic flowchart illustrating a binocular camera exposure live inspection according to an embodiment of the present invention, including:
the binocular camera is composed of a camera a (RGB camera) and a camera B (NIR camera). Firstly, detecting face positions in images respectively collected by a camera A and a camera B, and determining whether the face positions need to be output to the opposite side or not according to the face detection results of the two cameras; for example, when the camera a does not detect a face, the position of the face acquired by the camera B may be referred to; or when the camera B does not detect the face, the position of the face obtained by the camera A can be referred to; if the two cameras respectively correspond to the detected faces, the faces cannot be mutually output to the opposite side.
Further, if the camera a and the camera B detect/extract faces in their respective pictures, the respective faces are exposed to the respective face regions, the required face images are output and output to a back-end algorithm (live body detection algorithm, comparison algorithm), and the results are output.
Scheme one
If camera A cannot detect a face in its own picture but camera B does, the face position detected by camera B is output to camera A; camera A re-estimates the face region from that position and re-exposes the region until a face image of the required quality is obtained. Alternatively,
scheme two
If camera B cannot detect a face in its own picture but camera A can, the face position from camera A is output to camera B; camera B re-estimates the face region from that position and re-exposes the region until a face image of the required quality is obtained.
In this embodiment, if both cameras are in a backlight scene, camera B can still accurately obtain the face position, because it has no color requirement and can determine the face from a monochrome picture; camera A, however, cannot detect the face in its captured image, so scheme two is executed (see figs. 14 and 15 for details). Through scheme two both cameras can accurately detect the face, solving the industrial problem of inaccurate liveness detection by binocular cameras in backlight scenes.
Specifically, if both cameras are in a strong frontlight scene, camera A can still accurately acquire the face position: compared with camera B, camera A's color sensitivity ensures the face is detected, whereas camera B cannot detect it. Scheme one is then executed (see figs. 16 and 17 for details), so both cameras can accurately detect the face, solving the industrial problem of inaccurate liveness detection by binocular cameras in strong frontlight scenes.
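The decision flow of fig. 6, covering both schemes, can be written as a small dispatch function. This is a sketch of the control logic only, with face positions represented as rectangles or `None`; the function name is our own.

```python
def resolve_face_positions(face_a, face_b):
    """face_a / face_b: detected face rectangle or None for camera A
    (RGB) and camera B (NIR). Returns the position each camera should
    expose on, borrowing from the other camera when its own detection
    failed, or (None, None) if neither camera found a face."""
    if face_a is None and face_b is not None:
        return face_b, face_b   # scheme two: backlight scene
    if face_b is None and face_a is not None:
        return face_a, face_a   # scheme one: strong frontlight scene
    return face_a, face_b       # both detected, or neither
```

Each camera then runs its own region re-exposure on the position it receives, so the cross-reference never forces one camera to adopt the other's exposure directly.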
In other embodiments, the mobile terminal may instead include two or three cameras, which this embodiment does not limit. For example, the mobile terminal may include three cameras: a main camera, a wide-angle camera and a telephoto camera.
Optionally, when the mobile terminal includes a plurality of cameras, the plurality of cameras may be all front-mounted, or all rear-mounted, or a part of the cameras may be front-mounted, and another part of the cameras may be rear-mounted, which is not limited in this embodiment of the application.
Because the brightness distribution of pixels in an image under a light-source scene spans a wide range, overexposure or underexposure can occur; through the above approach, the mobile terminal obtains a clear face image even when the face is in a strong frontlight or strong backlight scene.
In other embodiments, please refer to fig. 4, a schematic flowchart of an exposure method for a multi-view camera according to another embodiment of the present invention; on the basis of the above embodiments, the method further includes:
Step S21, when the multi-view camera cannot detect the face of the target object, determining the face region of the target object in the current image acquired by the multi-view camera by using the time difference between the ultrasonic signal transmitted and the echo received by an ultrasonic sensor, and adjusting the exposure strategy according to the face region to obtain a face image.
When the multi-view camera is an RGB camera, a face image cannot be acquired after exposure in a backlight or strong frontlight scene; that is, the face of the target object cannot be detected.
Specifically, when the multi-view camera cannot detect the face of the target object, the distance between the emission point of the ultrasonic signal and the face is measured from the time difference between the transmitted ultrasonic signal and the received ultrasonic echo; this distance determines the face region of the target object in the current image acquired by the multi-view camera, and the exposure strategy is adjusted according to that region to obtain a face image.
In this embodiment, the ultrasonic transmitting and receiving modules measure the time difference between the signal sent toward the face and its echo, yielding the distance from the emission point to the face; that distance determines the face region of the target object in the current image acquired by the multi-view camera, and adjusting the exposure strategy according to the face region yields a face image. This improves the multi-view camera's ability to acquire face images and widens its range of applications, such as intelligent terminals and liveness detection.
It should be noted that the ultrasonic transmitter's emission frequency is 25 kHz to 100 kHz, the length of the swept signal is 100 ms, the transmission interval is 150 ms, and the A/D converter samples the echo signal at 500 kHz.
In another embodiment, the ultrasonic transmitting and receiving modules measure the time difference between the signal sent toward the face and its echo to obtain the distance from the emission point to the face, from which three-dimensional model data of the detected face's surface topography is generated, realizing ultrasonic volumetric modeling of the face. The face-recognition binocular camera then captures two-dimensional image data of the user's face, and verifying the three-dimensional model data against the two-dimensional image data helps improve verification security.
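The distance measurement behind step S21 is a standard time-of-flight calculation: the face distance is half the round-trip time multiplied by the speed of sound. A minimal sketch, assuming sound travels at about 343 m/s (air at roughly 20 °C; the patent does not state a value):

```python
SPEED_OF_SOUND = 343.0  # m/s in air, assumed ambient conditions

def echo_distance(t_transmit, t_receive):
    """Distance (in meters) from the ultrasonic emitter to the face,
    given transmit and echo-arrival timestamps in seconds. The factor
    of 2 accounts for the signal traveling out and back."""
    return SPEED_OF_SOUND * (t_receive - t_transmit) / 2.0
```

A 4 ms round trip corresponds to a face about 0.69 m from the camera, a plausible working distance for an access-control terminal; the distance then scales the expected face size and position in the image.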
In other embodiments, please refer to fig. 5, a schematic flowchart of an exposure method for a multi-view camera according to another embodiment of the present invention; on the basis of the above embodiments, the method further includes:
Step S22, when the multi-view camera cannot detect the face of the target object, fitting the human body contour of the target object in the image acquired by the multi-view camera, determining from that contour the face region of the target object in the current image, and adjusting the exposure strategy according to the face region to obtain a face image.
When the multi-view camera is an RGB camera, a face image cannot be acquired after exposure in a backlight or strong frontlight scene; that is, the face of the target object cannot be detected.
In this embodiment, the human body contour of the target object in the image is obtained by fitting, the face region of the target object is estimated from the fitted contour, and the exposure strategy is adjusted according to that region to obtain a face image.
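One simple way to estimate a face region from a fitted body contour, sketched under assumptions not stated in the patent: take the contour's bounding box and place the face in its top portion, using the rough anthropometric heuristic that the head is about one seventh of standing body height.

```python
def face_from_contour(contour_points):
    """`contour_points`: list of (x, y) pixels on the fitted body
    contour, with y growing downward. Returns an estimated face
    rectangle (x, y, w, h) covering roughly the top seventh of the
    body's bounding box, centered horizontally."""
    xs = [p[0] for p in contour_points]
    ys = [p[1] for p in contour_points]
    x0, y0 = min(xs), min(ys)
    w, h = max(xs) - x0, max(ys) - y0
    head_h = h // 7                       # assumed head-to-height ratio
    return (x0 + w // 4, y0, w // 2, head_h)
```

The estimated rectangle then serves as the metering region for the exposure adjustment, exactly as the re-exposure step does with a detected face position.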
Referring to fig. 7, an exposure apparatus for a multi-view camera according to the present invention includes:
the acquisition module 1, which acquires, with the multi-view camera, at least two frames of images of the same target object collected under the same or different illumination conditions;
and the exposure module 2, which, when it is detected that the face of the target object cannot be acquired in any one image, re-determines the face region in that image according to the face positions acquired from the other images and re-exposes the face region to obtain a face image.
Any one of the images is a color image generated under natural-light or white-light conditions; the other image is a near-infrared image generated under near-infrared illumination.
The images are detected with the adaboost algorithm to acquire the face images within them.
In an exemplary embodiment, the image in question is a color image generated under natural-light or white-light conditions, and the other image is a near-infrared image generated under near-infrared illumination.
In an exemplary embodiment, please refer to fig. 8, a schematic diagram of the hardware structure of a multi-view camera exposure apparatus according to another embodiment of the present invention; on the basis of fig. 7, the exposure apparatus further includes:
the correction module 0, which corrects the internal and external parameters configured for the multi-view camera.
In an exemplary embodiment, please refer to fig. 9, a schematic diagram of the hardware structure of the adjusting module in a multi-view camera exposure apparatus according to an embodiment of the present invention, including:
the exposure parameter adjusting module 3, configured to, when a face position is detected in each of the other images, select the exposure parameters of the camera whose face image has the highest quality as the exposure parameters for the camera in which no face was detected.
In another implementation, please refer to fig. 10, a schematic diagram of the hardware structure of the adjusting module in a multi-view camera exposure apparatus according to an embodiment of the present invention, including:
the first abnormal-exposure module 4, configured to, when the multi-view camera cannot detect the face of the target object, determine the face region of the target object in the current image acquired by the multi-view camera by using the time difference between the ultrasonic signal transmitted and the echo received by the ultrasonic sensor, and adjust the exposure strategy according to the face region to obtain a face image.
In another implementation, please refer to fig. 11, which is a schematic diagram of a hardware structure of an adjustment module in an exposure apparatus of a multi-view camera according to an embodiment of the present invention, including:
and the second abnormal exposure module 5 is configured to, when the multi-view camera cannot detect the face of the target object, fit the human body contour of the target object in the image acquired by the multi-view camera, determine the face region of the target object in the current image, and adjust the exposure strategy according to the face region to obtain a face image.
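One plausible way to realize the contour fitting of module 5 is to bound the detected body silhouette and take its top slice as the head region. The head-to-body proportions below are illustrative assumptions, not values given by the patent:

```python
def head_region_from_contour(contour_points, head_fraction=0.18):
    """contour_points: (x, y) pixel coordinates of the body silhouette,
    with y growing downwards. Fit an axis-aligned bounding box and take
    the top slice as the head, narrowed toward the horizontal centre.
    Returns (x, y, w, h) in pixels."""
    xs = [p[0] for p in contour_points]
    ys = [p[1] for p in contour_points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    body_h = y1 - y0
    head_h = head_fraction * body_h      # head is roughly the top ~1/6
    head_w = 0.75 * head_h               # heads are narrower than tall
    cx = (x0 + x1) / 2.0                 # assume the head sits centrally
    return (round(cx - head_w / 2), round(y0),
            round(head_w), round(head_h))
```

As with the ultrasonic variant, the exposure strategy is then metered on this estimated region instead of the full frame.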
In this embodiment, the multi-view camera exposure apparatus corresponds one-to-one with the multi-view camera exposure method; for technical details, functions, and effects, refer to the above embodiments, which are not repeated here.
In summary, the present invention uses a multi-view camera to detect a face; when it is detected that no face can be obtained in one of the images, the face region in that image is re-estimated from the face position obtained in another image, and that region is re-exposed to obtain a face image. By cross-referencing face positions in this way, all cameras can be guaranteed to expose normally and obtain face images; the exposure quality under strong light, backlight, and dark environments is greatly improved; and when the method is applied to liveness detection, the recognition rate of liveness detection can be greatly improved.
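Re-estimating a face region in one camera from the face position in another, as summarized above, relies on the calibrated intrinsic and extrinsic parameters. A minimal sketch for a rectified two-camera rig with a purely horizontal baseline (a simplifying assumption; a general rig would use a full rotation and translation between the cameras) is:

```python
def transfer_face_center(u, v, depth_m, K_a, K_b, baseline_m):
    """Map a face-centre pixel (u, v) seen by camera A at a known
    depth into camera B's image, for a rectified rig where B is
    displaced horizontally by baseline_m. K = (fx, fy, cx, cy)."""
    fx_a, fy_a, cx_a, cy_a = K_a
    # back-project the pixel to a 3-D point in camera A coordinates
    X = (u - cx_a) * depth_m / fx_a
    Y = (v - cy_a) * depth_m / fy_a
    Z = depth_m
    # translate into camera B coordinates (horizontal shift only)
    Xb = X - baseline_m
    fx_b, fy_b, cx_b, cy_b = K_b
    # re-project with camera B's intrinsics
    return (fx_b * Xb / Z + cx_b, fy_b * Y / Z + cy_b)
```

The transferred centre, together with an estimated face size, defines the region that the previously failing camera re-exposes.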
An embodiment of the present application further provides an apparatus, which may include: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the method of fig. 1. In practical applications, the apparatus may serve as a terminal device or as a server. Examples of terminal devices include: smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptops, vehicle-mounted computers, desktop computers, set-top boxes, smart televisions, wearable devices, and the like.
Embodiments of the present application also provide a non-transitory readable storage medium in which one or more modules (programs) are stored; when the one or more modules are applied to a device, they cause the device to execute the instructions of the method of fig. 1 according to the embodiments of the present application.
Fig. 12 is a schematic hardware structure diagram of a terminal device according to an embodiment of the present application. As shown in fig. 12, the terminal device may include: an input device 1100, a first processor 1101, an output device 1102, a first memory 1103, and at least one communication bus 1104. The communication bus 1104 is used to implement communication connections between the elements. The first memory 1103 may include a high-speed RAM memory, and may also include a non-volatile storage NVM, such as at least one disk memory, and the first memory 1103 may store various programs for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the first processor 1101 may be, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the first processor 1101 is coupled to the input device 1100 and the output device 1102 through a wired or wireless connection.
Optionally, the input device 1100 may include a variety of input devices, such as at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; the output devices 1102 may include output devices such as a display, audio, and the like.
In this embodiment, the processor of the terminal device includes functions for executing each module of the exposure apparatus described above; for specific functions and technical effects, refer to the above embodiments, which are not repeated here.
Fig. 13 is a schematic hardware structure diagram of a terminal device according to an embodiment of the present application. FIG. 13 is a specific embodiment of the implementation of FIG. 12. As shown, the terminal device of the present embodiment may include a second processor 1201 and a second memory 1202.
The second processor 1201 executes the computer program code stored in the second memory 1202 to implement the method described in fig. 1 in the above embodiment.
The second memory 1202 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The second memory 1202 may include a Random Access Memory (RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, a second processor 1201 is provided in the processing assembly 1200. The terminal device may further include: communication component 1203, power component 1204, multimedia component 1205, speech component 1206, input/output interfaces 1207, and/or sensor component 1208. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 1200 generally controls the overall operation of the terminal device. The processing assembly 1200 may include one or more second processors 1201 to execute instructions to perform all or part of the steps of the data processing method described above. Further, the processing component 1200 can include one or more modules that facilitate interaction between the processing component 1200 and other components. For example, the processing component 1200 can include a multimedia module to facilitate interaction between the multimedia component 1205 and the processing component 1200.
The power supply component 1204 provides power to the various components of the terminal device. The power components 1204 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia components 1205 include a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The voice component 1206 is configured to output and/or input voice signals. For example, the voice component 1206 includes a Microphone (MIC) configured to receive external voice signals when the terminal device is in an operational mode, such as a voice recognition mode. The received speech signal may further be stored in the second memory 1202 or transmitted via the communication component 1203. In some embodiments, the speech component 1206 further comprises a speaker for outputting speech signals.
The input/output interface 1207 provides an interface between the processing component 1200 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor component 1208 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor component 1208 may detect an open/closed state of the terminal device, relative positioning of the components, presence or absence of user contact with the terminal device. The sensor assembly 1208 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 1208 may also include a camera or the like.
The communication component 1203 is configured to facilitate communications between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot therein for inserting a SIM card therein, so that the terminal device may log onto a GPRS network to establish communication with the server via the internet.
From the above, the communication component 1203, the voice component 1206, the input/output interface 1207 and the sensor component 1208 involved in the embodiment of fig. 13 may be implemented as input devices in the embodiment of fig. 12.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (16)

1. A multi-view camera exposure method, comprising the following steps:
acquiring, with a multi-view camera, at least two frames of images of the same target object collected under the same or different illumination conditions;
and when it is detected that the face of the target object cannot be acquired in one of the images, re-determining the face region in that image according to the face position acquired from the other images, and re-exposing the face region to obtain a face image.
2. The multi-view camera exposure method according to claim 1, wherein the one image is a color image generated under natural-light or white-light conditions, and the other image is a near-infrared image generated under near-infrared illumination.
3. The multi-view camera exposure method according to claim 1, wherein before the step of acquiring at least two images of the same target object acquired under the same or different illumination conditions by using the multi-view camera, the method further comprises: and correcting the internal and external parameters configured by the multi-view camera.
4. The multi-view camera exposure method according to claim 1 or 3, wherein, when the multi-view camera cannot detect the face of the target object, the face region of the target object in the current image acquired by the multi-view camera is determined using the time difference between the ultrasonic signal transmitted and the echo received by an ultrasonic sensor, and the exposure strategy is adjusted according to the face region to obtain a face image.
5. The multi-view camera exposure method according to claim 4, wherein when the multi-view camera cannot detect the face of the target object, the distance between the emission point of the ultrasonic signal and the face of the target object is measured by using the time difference between the emitted ultrasonic signal and the received ultrasonic echo signal, the face area of the target object in the current image acquired by the multi-view camera is determined by using the distance, and the exposure strategy is adjusted according to the face area to obtain the face image.
6. The multi-view camera exposure method according to claim 1 or 3, wherein when the multi-view camera cannot detect the face of the target object, fitting is performed according to the human body contour of the target object in the image acquired by the multi-view camera, a face area of the target object in the current image acquired by the multi-view camera is determined, and the exposure strategy is adjusted according to the face area to obtain the face image.
7. The multi-view camera exposure method according to claim 1 or 2, further comprising: when a face position is detected in each of the other images, selecting the exposure parameters of the camera matching the face position of the highest-quality face image as the exposure parameters for the camera in which no face was detected.
8. The exposure method for the multi-view camera according to claim 1, wherein when the face region is exposed, the face region is exposed according to the exposure parameters corresponding to the other images to obtain a face image, and the exposure parameters include one or more of exposure duration, sensitivity, and aperture value.
9. A multi-view camera exposure apparatus, comprising:
an acquisition module, configured to acquire, with the multi-view camera, at least two frames of images of the same target object collected under the same or different illumination conditions;
and an exposure module, configured to, when it is detected that the face of the target object cannot be acquired in one of the images, re-determine the face region in that image according to the face position acquired from the other images, and re-expose the face region to obtain a face image.
10. The multi-view camera exposure apparatus according to claim 9, wherein the one image is a color image generated under natural-light or white-light conditions, and the other image is a near-infrared image generated under near-infrared illumination.
11. The multi-view camera exposure apparatus according to claim 9, further comprising:
a correction module, configured to correct the intrinsic and extrinsic parameters configured for the multi-view camera.
12. The multi-view camera exposure apparatus according to claim 9, further comprising: an exposure parameter adjusting module, configured to, when a face position is detected in each of the other images, select the exposure parameters of the camera matching the face position of the highest-quality face image as the exposure parameters for the camera in which no face was detected.
13. The multi-view camera exposure apparatus according to claim 9, further comprising: a first abnormal exposure module, configured to, when the multi-view camera cannot detect the face of the target object, determine the face region of the target object in the current image acquired by the multi-view camera using the time difference between the ultrasonic signal transmitted and the echo received by an ultrasonic sensor, and adjust the exposure strategy according to the face region to obtain a face image.
14. The multi-view camera exposure apparatus according to claim 9, further comprising: and the second abnormal exposure module is used for fitting according to the human body contour of the target object in the image acquired by the multi-view camera when the multi-view camera cannot detect the human face of the target object, determining the human face area of the target object in the current image acquired by the multi-view camera, and adjusting the exposure strategy according to the human face area to obtain the human face image.
15. An apparatus, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform the method recited in one or more of claims 1-8.
16. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the method recited by one or more of claims 1-8.
CN202011016060.4A 2020-09-24 2020-09-24 Multi-view camera exposure method, device, equipment and medium Pending CN112153300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011016060.4A CN112153300A (en) 2020-09-24 2020-09-24 Multi-view camera exposure method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN112153300A true CN112153300A (en) 2020-12-29

Family

ID=73896576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011016060.4A Pending CN112153300A (en) 2020-09-24 2020-09-24 Multi-view camera exposure method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN112153300A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101478641A (en) * 2008-01-04 2009-07-08 三星Techwin株式会社 Digital photographing apparatus and method of controlling the same
CN105100605A (en) * 2015-06-18 2015-11-25 惠州Tcl移动通信有限公司 Mobile terminal and quick focusing method for photographing with the same
CN105407276A (en) * 2015-11-03 2016-03-16 北京旷视科技有限公司 Photographing method and equipment
CN107872613A (en) * 2016-09-23 2018-04-03 中兴通讯股份有限公司 A kind of method, device and mobile terminal that recognition of face is carried out using dual camera
CN110213480A (en) * 2019-04-30 2019-09-06 华为技术有限公司 A kind of focusing method and electronic equipment
CN110278378A (en) * 2019-07-12 2019-09-24 易诚高科(大连)科技有限公司 A kind of multi-cam camera system based on infrared photography adjustment
CN111654643A (en) * 2020-07-22 2020-09-11 苏州臻迪智能科技有限公司 Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449663A (en) * 2021-07-06 2021-09-28 深圳中智明科智能科技有限公司 Collaborative intelligent security method and device based on polymorphic fitting
CN113449663B (en) * 2021-07-06 2022-06-03 深圳中智明科智能科技有限公司 Collaborative intelligent security method and device based on polymorphic fitting
CN115174138A (en) * 2022-05-25 2022-10-11 北京旷视科技有限公司 Camera attack detection method, system, device, storage medium and program product

Similar Documents

Publication Publication Date Title
CN108764091B (en) Living body detection method and apparatus, electronic device, and storage medium
US10956714B2 (en) Method and apparatus for detecting living body, electronic device, and storage medium
CN108566516B (en) Image processing method, device, storage medium and mobile terminal
JP7195422B2 (en) Face recognition method and electronic device
CN111079576B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
US11321575B2 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
WO2021027537A1 (en) Method and apparatus for taking identification photo, device and storage medium
KR102566998B1 (en) Apparatus and method for determining image sharpness
CN108551552B (en) Image processing method, device, storage medium and mobile terminal
WO2019149099A1 (en) Electronic device, human face recognition method, and relevant product
US20200312022A1 (en) Method and device for processing image, and storage medium
CN108494996B (en) Image processing method, device, storage medium and mobile terminal
WO2021037157A1 (en) Image recognition method and electronic device
CN112434546A (en) Face living body detection method and device, equipment and storage medium
CN112153300A (en) Multi-view camera exposure method, device, equipment and medium
CN108718388B (en) Photographing method and mobile terminal
KR20200117695A (en) Electronic device and method for controlling camera using external electronic device
CN108765380A (en) Image processing method, device, storage medium and mobile terminal
CN108683845B (en) Image processing method, device, storage medium and mobile terminal
WO2019218879A1 (en) Photographing interaction method and apparatus, storage medium and terminal device
CN109726613B (en) Method and device for detection
WO2019218878A1 (en) Photography restoration method and apparatus, storage medium and terminal device
WO2021046773A1 (en) Facial anti-counterfeiting detection method and apparatus, chip, electronic device and computer-readable medium
CN110770786A (en) Shielding detection and repair device based on camera equipment and shielding detection and repair method thereof
CN116129526A (en) Method and device for controlling photographing, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201229