CN108197560B - Face image recognition method, mobile terminal and computer-readable storage medium - Google Patents


Publication number
CN108197560B
CN108197560B (application CN201711471122.9A)
Authority
CN
China
Prior art keywords
shot
image
face image
mobile terminal
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711471122.9A
Other languages
Chinese (zh)
Other versions
CN108197560A (en)
Inventor
魏宇虹
苗雷
栗嘉灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201711471122.9A
Publication of CN108197560A
Application granted
Publication of CN108197560B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

The invention discloses a face image recognition method, a mobile terminal and a computer-readable storage medium, wherein the method comprises the following steps: when the mobile terminal detects a shooting instruction for shooting a person to be shot, shooting the person to be shot through a lens group in the mobile terminal according to the shooting instruction, so as to obtain an image to be shot corresponding to the person to be shot; extracting light spots in the image to be shot; and identifying a face image in the image to be shot according to the light spots; wherein the lens group is arranged between the diffractive optical element and the photosensitive component of the mobile terminal camera. The invention converges and diverges, through the lens group, the laser used in shooting the image, so as to increase the distance at which an object can be shot and thereby improve the quality of images shot at long range; moreover, the lens group emits structured light, and the face image in the image to be shot is identified through the structured light, improving the accuracy of face recognition in the image to be shot.

Description

Face image recognition method, mobile terminal and computer-readable storage medium
Technical Field
The invention relates to the technical field of mobile terminals, in particular to a face image recognition method, a mobile terminal and a computer readable storage medium.
Background
Existing camera imaging technology mainly uses, in the camera, a VCSEL (Vertical-Cavity Surface-Emitting Laser), a wafer-level optical device (WLO), a DOE (Diffractive Optical Element) and a CMOS (Complementary Metal Oxide Semiconductor) sensor for imaging. That is, the existing mobile terminal photographs an image by having the VCSEL transmit light through the WLO and the DOE to the CMOS. The light emitted by the WLO is parallel light; after it is received by the DOE, the light is emitted to the CMOS in a divergent manner. Specifically, referring to fig. 3, fig. 3 is a schematic structural diagram of a camera of a mobile terminal in the prior art. However, because the light received by the camera's CMOS sensor is divergent, when the image pickup function of the mobile terminal is used for monitoring and an image of a distant person is photographed, the quality of the photographed image is low.
Disclosure of Invention
The invention mainly aims to provide a face image recognition method, a mobile terminal and a computer readable storage medium, and aims to solve the technical problem that the quality of a shot image is low when the existing mobile terminal shoots a long-distance person.
In order to achieve the above object, the present invention provides a face image recognition method, including:
when the mobile terminal detects a shooting instruction for shooting a person to be shot, shooting the person to be shot through a lens group in the mobile terminal according to the shooting instruction so as to obtain an image to be shot corresponding to the person to be shot;
extracting light spots in the image to be shot;
recognizing a face image in the image to be shot according to the light spots;
the lens group is arranged between the diffractive optical element and the photosensitive component of the mobile terminal camera.
Optionally, the step of identifying the face image in the image to be photographed according to the light spot includes:
determining the size of each light spot, and comparing the size of each light spot with the size of a light spot of a plane object stored in advance to obtain a comparison result;
and identifying the three-dimensional face image in the image to be shot according to the comparison result.
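The spot-size comparison above can be sketched as follows. This is a hypothetical illustration, not code from the patent: the reference spot size, the tolerance and the helper names are all assumed, resting only on the premise stated above that projected spot sizes differ from the flat-plane reference when they land on a three-dimensional surface.

```python
# Hypothetical sketch of the spot-size comparison step.
# PLANE_SPOT_SIZE is an assumed pre-stored reference: the spot diameter
# (in pixels) observed when the structured light hits a flat plane.
PLANE_SPOT_SIZE = 4.0

def relative_depth(spot_sizes, plane_size=PLANE_SPOT_SIZE):
    """Compare each spot's size to the flat-plane reference; ratios != 1
    indicate the spot landed nearer or farther than the reference plane."""
    return [size / plane_size for size in spot_sizes]

def looks_three_dimensional(spot_sizes, tolerance=0.1):
    """Judge the surface 3D if spot sizes vary beyond a flatness tolerance;
    a printed photograph would yield near-uniform spot sizes."""
    ratios = relative_depth(spot_sizes)
    return max(ratios) - min(ratios) > tolerance
```

Spots sampled across a real face (e.g. sizes 3.2, 4.0 and 4.9 px) would exceed the tolerance, while a flat photo (all sizes near 4.0 px) would not.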
Optionally, the step of identifying the face image in the image to be photographed according to the light spot includes:
determining the receiving time corresponding to each light spot;
and identifying the three-dimensional face image in the image to be shot according to the receiving time.
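The receive-time variant above can likewise be sketched as a time-of-flight style calculation. This is a hypothetical illustration with assumed function names; the patent only states that per-spot receiving times are used, so the conversion of round-trip time to distance below is a standard relation supplied for clarity, not the patent's own formula.

```python
# Hypothetical sketch of the receive-time step: each spot's round-trip
# time converts to a distance, and the spread of distances across spots
# indicates surface relief (a flat photo yields near-zero relief).
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def spot_distances(receive_times_ns):
    """Convert per-spot round-trip times (in nanoseconds) to distances
    (in metres); divide by two for the out-and-back path."""
    return [SPEED_OF_LIGHT * (t * 1e-9) / 2 for t in receive_times_ns]

def depth_relief(receive_times_ns):
    """Depth spread across all spots in the image."""
    distances = spot_distances(receive_times_ns)
    return max(distances) - min(distances)
```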
Optionally, after the step of identifying the face image in the image to be photographed according to the light spot, the method further includes:
determining the face features of the preset area of the face image according to the light spots, and searching preset features corresponding to the face features in a preset feature library;
if the preset features are found in the preset feature library, determining a preset face image corresponding to the preset features;
calculating the similarity between the preset face image and the recognized face image;
and if the similarity is smaller than the preset similarity, determining that the person to be shot has camouflage.
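The camouflage check above can be sketched as a similarity comparison against a preset threshold. This is a minimal hypothetical illustration: the patent does not specify how similarity is computed, so cosine similarity over feature vectors is an assumed stand-in, and the threshold value is invented for the example; feature extraction itself is out of scope.

```python
import math

def cosine_similarity(a, b):
    """Assumed similarity measure between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

PRESET_SIMILARITY = 0.9  # assumed preset threshold, not from the patent

def is_camouflaged(preset_features, recognised_features,
                   threshold=PRESET_SIMILARITY):
    """Per the logic above: similarity below the preset threshold means
    the person to be shot is judged to be camouflaged."""
    return cosine_similarity(preset_features, recognised_features) < threshold
```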
Optionally, after the step of calculating the similarity between the preset face image and the identified face image, the method further includes:
judging whether the similarity is smaller than the preset similarity or not;
and if the similarity is greater than or equal to the preset similarity, determining that the person to be shot is not camouflaged.
Optionally, after the step of identifying the face image in the image to be photographed according to the light spot, the method further includes:
determining whether the angle between the person to be shot and the mobile terminal is changed or not through the face images identified at different time points;
and if the angle is changed, controlling the camera of the mobile terminal to rotate along with the person to be shot.
Optionally, after the step of controlling the camera of the mobile terminal to rotate along with the person to be photographed if the angle is changed, the method further includes:
detecting whether the face image exists in the image shot by the mobile terminal;
and if the face image does not exist in the shot image, controlling the camera to return to the original position.
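The follow-and-return behaviour of the last two optional steps can be sketched as a small state machine. This is a hypothetical illustration; the class, the home-angle convention and the degree values are assumptions, not details from the patent.

```python
# Hypothetical sketch of the camera-tracking logic: rotate to follow the
# person when the detected angle changes, and return to the original
# position once no face is found in a captured frame.
class TrackingCamera:
    HOME_ANGLE = 0.0  # assumed original (home) position, in degrees

    def __init__(self):
        self.angle = self.HOME_ANGLE

    def on_face_angle(self, new_angle):
        """Rotate the camera to follow the person when the angle between
        the person and the terminal has changed."""
        if new_angle != self.angle:
            self.angle = new_angle

    def on_frame(self, face_detected):
        """Return the camera to its original position when the face image
        no longer appears in the shot image."""
        if not face_detected:
            self.angle = self.HOME_ANGLE
```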
Optionally, the lens group is made of glass or plastic.
In addition, in order to achieve the above object, the present invention further provides a mobile terminal, which includes a memory, a processor and a facial image recognition program stored on the memory and operable on the processor, wherein the facial image recognition program, when executed by the processor, implements the steps of the facial image recognition method as described above.
Furthermore, to achieve the above object, the present invention also provides a computer readable storage medium having stored thereon a face image recognition program, which when executed by a processor, implements the steps of the face image recognition method as described above.
According to the method, after the mobile terminal detects a shooting instruction for shooting a person to be shot, the person to be shot is shot through a lens group in the mobile terminal according to the shooting instruction, so as to obtain an image to be shot corresponding to the person to be shot; light spots in the image to be shot are extracted; and a face image in the image to be shot is recognized according to the light spots; wherein the lens group is arranged between the diffractive optical element and the photosensitive component of the mobile terminal camera. The laser used in shooting the image is converged and diverged through the lens group, so that the distance at which an object can be shot is increased and the quality of images shot at long range is improved; in addition, the lens group emits structured light, and the face image in the image to be shot is identified through the structured light, improving the accuracy of face recognition in the image to be shot.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention;
fig. 2 is a diagram of a communication network system architecture according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a camera of a mobile terminal in the prior art;
FIG. 4 is a flowchart illustrating a first embodiment of a face image recognition method according to the present invention;
fig. 5 is a schematic structural diagram of a camera of the mobile terminal in the embodiment of the present invention;
FIG. 6 is a flowchart illustrating a second embodiment of a face image recognition method according to the present invention;
FIG. 7 is a flowchart illustrating a face image recognition method according to a third embodiment of the present invention;
fig. 8 is a flowchart illustrating a face image recognition method according to a fourth embodiment of the present invention.
The implementation, functional features and advantages of the present invention will be described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
The radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call; specifically, it receives downlink information from a base station and delivers it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex Long Term Evolution), and TDD-LTE (Time Division Duplex Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help a user receive and send e-mails, browse web pages and access streaming media, providing the user with wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert voice data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sound into voice data. In a phone call mode, the processed voice data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as voice data, a phonebook, etc.) created according to the use of the cellular phone, etc. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
Further, in the mobile terminal 100 shown in fig. 1, the processor 110 is configured to call the face image recognition program stored in the memory 109, and perform the following operations:
when the mobile terminal 100 detects a shooting instruction for shooting a person to be shot, shooting the person to be shot through a lens group in the mobile terminal 100 according to the shooting instruction so as to obtain an image to be shot corresponding to the person to be shot;
extracting light spots in the image to be shot;
recognizing a face image in the image to be shot according to the light spots;
wherein the lens group is disposed between the diffractive optical element and the photosensitive component of the camera of the mobile terminal 100.
Further, the step of identifying the face image in the image to be shot according to the light spots comprises:
determining the size of each light spot, and comparing the size of each light spot with the size of a light spot of a plane object stored in advance to obtain a comparison result;
and identifying the three-dimensional face image in the image to be shot according to the comparison result.
Further, the step of identifying the face image in the image to be shot according to the light spots comprises:
determining the receiving time corresponding to each light spot;
and identifying the three-dimensional face image in the image to be shot according to the receiving time.
Further, after the step of identifying the face image in the image to be captured according to the light spot, the processor 110 is further configured to call a face image identification program stored in the memory 109, and perform the following operations:
determining the face features of the preset area of the face image according to the light spots, and searching preset features corresponding to the face features in a preset feature library;
if the preset features are found in the preset feature library, determining a preset face image corresponding to the preset features;
calculating the similarity between the preset face image and the recognized face image;
and if the similarity is smaller than the preset similarity, determining that the person to be shot has camouflage.
Further, after the step of calculating the similarity between the preset face image and the recognized face image, the processor 110 is further configured to call a face image recognition program stored in the memory 109, and perform the following operations:
judging whether the similarity is smaller than the preset similarity or not;
and if the similarity is greater than or equal to the preset similarity, determining that the person to be shot is not camouflaged.
Further, after the step of identifying the face image in the image to be captured according to the light spot, the processor 110 is further configured to call a face image identification program stored in the memory 109, and perform the following operations:
determining whether an angle between the person to be photographed and the mobile terminal 100 is changing through the face images recognized at different time points;
and if the angle is changed, controlling the camera of the mobile terminal 100 to rotate along with the person to be shot.
Further, after the step of controlling the camera of the mobile terminal 100 to rotate along with the person to be photographed if the angle is changed, the processor 110 is further configured to call a face image recognition program stored in the memory 109, and perform the following operations:
detecting whether the face image exists in the image shot by the mobile terminal 100;
and if the face image does not exist in the shot image, controlling the camera to return to the original position.
Further, the lens group is made of glass or plastic.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present invention, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the mobile terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203, and provides bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the face image recognition method.
The invention provides a face image recognition method.
Referring to fig. 4, fig. 4 is a flowchart illustrating a first embodiment of a face image recognition method according to the present invention.
In the present embodiment, an embodiment of a face image recognition method is provided, and it should be noted that although a logical order is shown in the flowchart, in some cases, the steps shown or described may be performed in an order different from that here.
In this embodiment, the face image recognition method may be optionally applied to a mobile terminal, and the face image recognition method includes:
step S10, when the mobile terminal detects a shooting instruction for shooting a person to be shot, shooting the person to be shot through the lens set in the mobile terminal according to the shooting instruction, so as to obtain an image to be shot corresponding to the person to be shot.
When the mobile terminal detects a shooting instruction for shooting a person to be shot, the mobile terminal shoots the person to be shot through a built-in lens group according to the shooting instruction, so as to obtain an image to be shot corresponding to the person to be shot. The lens group is arranged between the diffractive optical element (DOE) and the photosensitive component of the mobile terminal camera, and the photosensitive component is a CMOS sensor. Specifically, referring to fig. 5, fig. 5 is a schematic structural diagram of a camera of a mobile terminal in an embodiment of the present invention. In this embodiment, the user may trigger the shooting instruction by clicking a shooting button on the mobile terminal. The shooting instruction includes a photographing instruction and a video recording instruction, and the user can trigger either as needed. Two buttons may be provided on the mobile terminal, one a photographing button and the other a video recording button: the user triggers a photographing instruction by clicking the photographing button, and a video recording instruction by clicking the video recording button. Alternatively, only one shooting button may be provided: if the mobile terminal detects, while in the photographing state, a shooting instruction triggered by the user clicking the shooting button, the mobile terminal enters the video recording state; if the mobile terminal detects such an instruction while in the video recording state, it enters the photographing state. When the mobile terminal is provided with both a front camera and a rear camera, the user can trigger photographing and video recording instructions with either camera.
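The single-button behaviour described above amounts to a simple state toggle, which can be sketched as follows. This is a minimal hypothetical illustration; the mode names are assumed, not taken from the patent.

```python
# Hypothetical sketch of the single shooting button: an instruction in
# the photographing state switches the terminal to the video recording
# state, and vice versa.
def toggle_mode(current_mode):
    """Switch between the 'photo' and 'video' shooting states."""
    return "video" if current_mode == "photo" else "photo"
```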
If the shooting instruction in this embodiment is a video recording instruction, the mobile terminal may capture an image to be shot containing a person to be shot from the recorded image.
In this embodiment, after the VCSEL in the mobile terminal camera emits laser light, the lens group forms the laser into structured light. Structured light here is laser light that has been converged into a narrow light band after passing through a cylindrical lens. Specifically, after the structured light projects specific light information onto the face of the person to be photographed, the camera of the mobile terminal collects the light information returned after it encounters the face, and the distance between the person to be photographed and the camera, the depth information of the person to be photographed, and the like are determined according to the changes in the light signal caused by the face, so that the face image in the image to be shot is recognized.
Further, the lens group is made of glass or plastic.
Further, in embodiments of the present invention, the lens group may be made of thin sheets of glass or plastic, each sheet thin enough to be installed between the diffractive optical element and the photosensitive component of the mobile terminal camera. To ensure the effect of structured-light reflection and scattering, a preset number of such sheets may be provided in the mobile terminal. The preset number can be set according to specific needs, for example, 5, 6, or 8.
Step S20: extract the light spots in the image to be shot.
After the mobile terminal shoots the image to be shot corresponding to the person to be shot through the camera, the mobile terminal extracts the light spots in the image to be shot. The process is as follows: (1) denoise the image to be shot to obtain a denoised image to be shot; (2) convert the denoised image to grayscale; (3) perform iterative threshold segmentation on the grayscale image to obtain a binary image corresponding to the image to be shot; (4) perform edge tracking on the binary image and extract the edges; (5) find the pixel points surrounding the centre of each light spot and compute the mean of their positions, so as to locate the centre of the light spot. The noise reduction method includes, but is not limited to, a median filtering algorithm, a mean filtering algorithm, and a thresholded neighborhood averaging method. In the neighborhood averaging method, the gray value of a point in the spot image is replaced by the average of all points in a selected neighborhood, so that noise is smoothed while the image to be shot is kept from blurring. After the spot centres are located, the mobile terminal segments the image to be shot with an adaptive threshold segmentation method or a fuzzy threshold method to extract the light spots in the image to be shot, i.e., to determine the regions of the image to be shot in which the light spots lie.
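The iterative threshold segmentation and the locate-the-spot-centre-by-mean steps described above can be sketched in Python as follows. This is a minimal illustration rather than the patent's implementation; the synthetic test image and the function names are assumptions:

```python
import numpy as np

def iterative_threshold(gray):
    """Iterative threshold selection: split pixels at t, move t to the
    midpoint of the two class means, repeat until t stabilizes."""
    t = gray.mean()
    while True:
        low = gray[gray <= t]
        high = gray[gray > t]
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < 0.5:
            return new_t
        t = new_t

def spot_centre(binary):
    """The mean position of the foreground pixels locates the spot centre."""
    ys, xs = np.nonzero(binary)
    return ys.mean(), xs.mean()

# Synthetic "image to be shot": dark background with one bright spot.
img = np.zeros((32, 32))
img[10:14, 20:24] = 200.0

t = iterative_threshold(img)
binary = img > t               # binary image from threshold segmentation
cy, cx = spot_centre(binary)   # centre of the light spot
```

A real pipeline would first denoise (e.g. median filter) and grayscale the frame, then run edge tracking per spot; the sketch keeps only the thresholding and centroid steps.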
Step S30: recognize the face image in the image to be shot according to the light spots.
After the light spots are extracted from the image to be shot, the mobile terminal identifies the face image in the image to be shot according to the extracted light spots. Specifically, the face image in the image to be shot is identified according to the characteristic parameters corresponding to the light spots. It can be understood that, because different people to be photographed have different appearances, the characteristic parameters of the light spots formed in the image by the laser returned from the person to be photographed also differ; the face image in the image to be shot can therefore be identified through the light spots.
Further, the mobile terminal determines the number of light spots per unit area in the image to be shot and recognizes the face image in the image to be shot according to that number of light spots, wherein the size of the unit area may be set according to specific needs, which is not limited in this embodiment. It can be understood that the greater the number of light spots per unit area, the higher the accuracy of recognizing the face image in the image to be shot. If the number of light spots is in the range a–b, the mobile terminal can only identify whether the object in the image to be shot is a person; if the number of light spots is in the range c–d, the mobile terminal can identify the specific appearance of the person to be shot in the image to be shot, where a < b < c < d.
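The density-to-capability mapping above might be sketched as follows; the band values a, b, c, d are purely hypothetical, since the embodiment leaves the concrete values open:

```python
# Hypothetical spot-density bands (a < b < c < d); the embodiment does not
# fix concrete values, so these are illustrative only.
A, B, C, D = 5, 20, 50, 200

def recognition_capability(spots_per_unit_area):
    """What the terminal can recognize at a given light-spot density."""
    if C <= spots_per_unit_area <= D:
        return "specific appearance"   # dense spots: full face recognition
    if A <= spots_per_unit_area <= B:
        return "person-or-not only"    # sparse spots: coarse detection only
    return "undetermined"
```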
Further, step S30 includes:
Step a: determine the size of each light spot, and compare the size of each light spot with the pre-stored light-spot size of a planar object to obtain a comparison result.
Step b: identify the three-dimensional face image in the image to be shot according to the comparison result.
Further, the process by which the mobile terminal identifies the face image in the image to be shot according to the light spots is as follows: after extracting the light spots of the image to be shot, the mobile terminal determines the size of each light spot in the image to be shot, i.e., the area of each light spot, compares the size of each light spot with the pre-stored light-spot size of a planar object to obtain a comparison result, and identifies the three-dimensional face image in the image to be shot according to the comparison result. In the mobile terminal, the light spots of planar objects at different distances from the camera are stored in advance; once the mobile terminal determines the distance between the person to be shot and the mobile terminal, it determines which planar-object light-spot size to compare against. Because the face of the person to be shot is three-dimensional, different parts of the face are at different distances from the mobile terminal, and the light spots they form differ in size. Therefore, the three-dimensional face image in the image to be shot can be identified by comparing the size of each light spot in the image to be shot with the pre-stored light-spot size of the planar object.
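A minimal sketch of this size-comparison idea. The planar reference area, tolerance, and decision fraction are hypothetical placeholders; the patent stores real per-distance calibration data:

```python
# Hypothetical calibration: spot area (in pixels) of a planar object at the
# distance determined for the person to be shot.
PLANAR_SPOT_AREA = 25.0
TOLERANCE = 2.0  # areas within this band are treated as planar

def is_three_dimensional(spot_areas, min_fraction=0.3):
    """If enough spots deviate from the planar reference, the surface has
    depth relief, i.e. a three-dimensional face rather than a flat photo."""
    deviating = sum(1 for a in spot_areas
                    if abs(a - PLANAR_SPOT_AREA) > TOLERANCE)
    return deviating / len(spot_areas) >= min_fraction
```

A flat photograph held up to the camera would produce spot areas close to the planar reference, so this check doubles as a simple liveness cue.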
Further, step S30 further includes:
Step c: determine the receiving time corresponding to each light spot.
Step d: identify the three-dimensional face image in the image to be shot according to the receiving time.
Further, the process by which the mobile terminal identifies the face image in the image to be shot according to the light spots may also be as follows: after extracting the light spots of the image to be shot, the mobile terminal determines the receiving time corresponding to each light spot in the image to be shot. It can be understood that, because the face of the person to be photographed is three-dimensional, different parts of the face receive the laser emitted by the mobile terminal at different times, and therefore the laser returned after the emitted laser meets the face reaches the camera of the mobile terminal at different times. After determining the receiving time corresponding to each light spot in the image to be shot, the mobile terminal determines the distance from each part of the face of the person to be shot to the camera according to the differences in those receiving times, and thereby identifies the three-dimensional face image in the image to be shot.
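Since the receiving-time differences encode round-trip distance, the depth of each spot can be recovered as d = c·t/2. A small sketch with hypothetical receiving times (the spot names and nanosecond values are assumptions for illustration):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_from_receiving_time(t_round_trip_s):
    """Later receiving time -> longer round trip -> farther face region."""
    return SPEED_OF_LIGHT * t_round_trip_s / 2.0

# Hypothetical receiving times (ns) of three spots on the face; the nose,
# being closest to the camera, returns the laser first.
times_ns = {"nose": 3.32, "cheek": 3.34, "ear": 3.36}
depths_m = {part: depth_from_receiving_time(t * 1e-9)
            for part, t in times_ns.items()}
```

Picosecond-scale differences correspond to millimetre-scale relief, which is why this scheme can resolve the contours of a face.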
According to this embodiment, after the mobile terminal detects a shooting instruction for shooting a person to be shot, the person to be shot is shot through the lens group in the mobile terminal according to the shooting instruction, so as to obtain an image to be shot corresponding to the person to be shot; the light spots in the image to be shot are extracted; and the face image in the image to be shot is identified according to the light spots. The lens group is arranged between the diffractive optical element and the photosensitive component of the mobile terminal camera. The laser used in shooting the image is converged and diverged by the lens group, which increases the shooting distance for the shot object and thus improves the quality of images obtained by long-range shooting; the laser passing through the lens group forms structured light, the face image in the image to be shot is identified through the structured light, and the accuracy of face recognition in the image to be shot is thereby improved.
Further, a second embodiment of the face image recognition method of the present invention is proposed based on the first embodiment of the face image recognition method. The second embodiment of the face image recognition method is different from the first embodiment of the face image recognition method in that, referring to fig. 6, the face image recognition method further includes:
Step S40: determine the face features of a preset area of the face image according to the light spots, and search a preset feature library for the preset features corresponding to the face features.
When the mobile terminal identifies the face image in the image to be shot through the light spots in the image to be shot, it determines the face features of the preset area of the face image according to the light spots. After determining the face features of the preset area of the face image, the mobile terminal searches the preset feature library for the preset features corresponding to them. The preset area is an area of a specific part of the face, such as the eyes, ears, nose, or mouth. In the preset feature library, the features of the face images of different users are stored in advance.
Further, in other embodiments, the mobile terminal may also send the face feature determined according to the light spot to another mobile terminal, and the another mobile terminal performs an operation of searching for a preset feature corresponding to the face feature in its preset feature library.
Step S50: if the preset features are found in the preset feature library, determine the preset face image corresponding to the preset features.
Step S60: calculate the similarity between the preset face image and the recognized face image.
Step S70: if the similarity is smaller than the preset similarity, determine that the person to be shot has camouflage.
If the mobile terminal finds, in the preset feature library, the preset features corresponding to the determined face features, the mobile terminal determines the preset face image corresponding to those preset features and calculates the similarity between the preset face image and the recognized face image. It should be noted that the face image used in calculating the similarity is an image in which the specific appearance of the person to be photographed is visible, that is, the unprocessed image to be shot or an image to be shot subjected only to noise reduction. If the calculated similarity is smaller than the preset similarity, the mobile terminal determines that the person to be shot has camouflage, i.e., the current appearance of the person to be shot differs greatly from the appearance recorded in the preset feature library. The preset similarity may be set according to specific needs, for example to 60%, 70%, or 85%.
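A sketch of the camouflage decision. Cosine similarity is used here only as an example measure (the patent does not prescribe one), and the 70% preset similarity is one of the example values mentioned above:

```python
import math

PRESET_SIMILARITY = 0.70  # e.g. 60%, 70% or 85% per the embodiment

def cosine_similarity(a, b):
    # One common similarity measure for two feature vectors; chosen here
    # purely for illustration.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def has_camouflage(preset_features, recognized_features):
    """True when the similarity falls below the preset similarity."""
    return cosine_similarity(preset_features, recognized_features) < PRESET_SIMILARITY
```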
Further, when the mobile terminal determines that the person to be photographed has the camouflage, the mobile terminal can output prompt information to prompt the corresponding manager that the abnormal person exists through the prompt information. Specifically, the prompt message may be a voice message, a text message, or a warning sound. The mobile terminal may be connected to an alarm system provided with a speaker and/or a display screen. And after the mobile terminal generates the prompt information, the prompt information is sent to the alarm system so that the alarm system can correspondingly output the prompt information through a loudspeaker and/or a display screen of the alarm system. If the prompt message 'the person to be shot is abnormal' is transmitted to the alarm system, the alarm system outputs a prompt tone of 'the person to be shot is abnormal' and/or displays the character message corresponding to 'the person to be shot is abnormal' on the display screen of the alarm system.
Further, the face image recognition method further comprises the following steps:
Step e: judge whether the similarity is smaller than the preset similarity.
Step f: if the similarity is greater than or equal to the preset similarity, determine that the person to be shot is not camouflaged.
After the mobile terminal calculates the similarity between the preset face image and the recognized face image, the mobile terminal judges whether the calculated similarity is smaller than the preset similarity. And if the mobile terminal determines that the calculated similarity is greater than or equal to the preset similarity, the mobile terminal determines that the person to be shot is not camouflaged.
In this embodiment, the face features of the preset area of the face image are determined through the light spots, the corresponding preset features are searched for in the preset feature library according to the determined face features, the similarity between the preset face image corresponding to the preset features and the recognized face image is calculated, and whether the person to be shot has camouflage is determined according to that similarity, thereby improving the monitoring effect when monitoring is performed through the mobile terminal.
Further, a third embodiment of the face image recognition method of the present invention is proposed based on the first or second embodiment of the face image recognition method. The difference between the third embodiment of the face image recognition method and the first or second embodiment of the face image recognition method is that, referring to fig. 7, if the camera of the mobile terminal can rotate, the face image recognition method further includes:
Step S80: determine, through the face images recognized at different time points, whether the angle between the person to be photographed and the mobile terminal is changing.
Step S90: if the angle is changing, control the camera of the mobile terminal to rotate along with the person to be shot.
In this embodiment, the camera of the mobile terminal is a rotatable camera. The mobile terminal shoots an image to be shot corresponding to a person to be shot in real time or at regular time, and identifies a face image in the image to be shot in real time or at regular time to obtain the face images identified at different time points. It should be noted that, if the mobile terminal regularly captures the image to be captured corresponding to the person to be captured, the time duration corresponding to the timing should not be too long, and the time duration corresponding to the timing should be capable of determining the movement trajectory of the person to be captured, for example, the time duration corresponding to the timing may be set to 0.5 second, 1 second, or 3 seconds, etc.
After the mobile terminal obtains the face images recognized at different time points, it determines the angles between the person to be shot and the camera at those time points, and determines from them whether the angle between the person to be shot and the camera is changing. In this embodiment, the angle between the person to be shot and the camera at the previous time point is recorded as a first angle, and the angle between the person to be shot and the camera at the current time point is recorded as a second angle. The mobile terminal judges whether the first angle and the second angle are the same: if the first angle is different from the second angle, the mobile terminal determines that the angle between the person to be shot and the camera has changed; if the first angle is the same as the second angle, the mobile terminal determines that the angle between the person to be shot and the camera has not changed.
When the mobile terminal determines that the angle between the person to be shot and the camera has changed, it controls the camera to rotate along with the person to be shot, so that the mobile terminal tracks the person to be shot.
Further, the process by which the mobile terminal determines, through the face images recognized at different time points, whether the angle between the person to be shot and the mobile terminal is changing may be as follows: the mobile terminal calculates the angle difference between the first angle and the second angle, and judges whether the calculated angle difference is greater than a preset angle difference. If the calculated angle difference is greater than the preset angle difference, the mobile terminal determines that the angle between the person to be shot and the camera has changed; if it is smaller than or equal to the preset angle difference, the mobile terminal determines that the angle has not changed. The preset angle difference can be set according to specific needs, such as 3 degrees, 5 degrees, or 10 degrees. It can be understood that judging by an angle difference better matches real situations: it avoids the case in which the person to be shot moves only very slightly yet the mobile terminal still concludes that the person has moved and controls the camera to rotate along with the person.
Further, the process that the mobile terminal determines whether the angle between the person to be photographed and the mobile terminal is changing through the face images recognized at different time points may also be: the mobile terminal obtains an angle between a character to be shot in the previous time period and the camera, records the angle between the character to be shot in the previous time period and the camera as a third angle, and calculates a first angle average value corresponding to the third angle. The mobile terminal obtains an angle between a person to be shot and the camera in the current time period, records the angle between the person to be shot and the camera in the current time period as a fourth angle, and calculates a second angle average value corresponding to the fourth angle. It can be understood that the duration corresponding to the previous time period is equal to the duration corresponding to the current time period, wherein the duration corresponding to the previous time period and the duration corresponding to the current time period may be set according to specific needs, for example, may be set to 3 seconds, or 5 seconds, etc. And after the mobile terminal obtains the first angle average value and the second angle average value, the mobile terminal calculates the average difference between the first angle average value and the second angle average value and judges whether the average difference is greater than a preset angle difference. If the calculated mean difference is larger than the preset angle difference, the mobile terminal determines that the angle between the person to be shot and the camera changes; and if the calculated mean difference is smaller than or equal to the preset angle difference, the mobile terminal determines that the angle between the person to be shot and the camera does not change. 
It is understood that by the scheme of calculating the average difference, the amount of calculation for determining whether the angle between the person to be photographed and the camera is changed can be reduced.
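Both angle-comparison schemes can be sketched as follows. The 5-degree preset angle difference is one of the example values above, and the function names are illustrative:

```python
PRESET_ANGLE_DIFF = 5.0  # degrees; e.g. 3, 5 or 10 per the embodiment

def angle_changed(first_angle, second_angle):
    """Threshold scheme: only a difference above the preset value counts."""
    return abs(second_angle - first_angle) > PRESET_ANGLE_DIFF

def angle_changed_by_mean(prev_period_angles, cur_period_angles):
    """Mean-difference scheme: compare per-period averages, which smooths
    jitter and reduces how often the decision must be made."""
    mean_prev = sum(prev_period_angles) / len(prev_period_angles)
    mean_cur = sum(cur_period_angles) / len(cur_period_angles)
    return abs(mean_cur - mean_prev) > PRESET_ANGLE_DIFF
```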
The embodiment determines whether the angle between the person to be shot and the mobile terminal is changed or not through the face images recognized at different time points; and if the angle is changed, controlling the camera of the mobile terminal to rotate along with the person to be shot. Therefore, the mobile terminal can monitor the people to be shot in the whole process and improve the monitoring effect of monitoring the people to be shot.
Further, a fourth embodiment of the face image recognition method of the present invention is proposed based on the third embodiment of the face image recognition method. The fourth embodiment of the face image recognition method is different from the third embodiment of the face image recognition method in that, referring to fig. 8, the face image recognition method further includes:
Step S110: detect whether a face image exists in the image shot by the mobile terminal.
Step S120: if the face image does not exist in the shot image, control the camera to return to its original position.
When the mobile terminal controls the camera to rotate along with the person to be shot, it detects whether the face image of the person to be shot still exists in the shot image. If the face image of the person to be shot does not exist in the shot image, the mobile terminal controls the camera to return to its original position, i.e., the default position the camera occupies when it is not rotating along with the person to be shot. Further, if the face image corresponding to the person to be shot does exist in the shot image, the mobile terminal continues to determine, through the face images recognized at different time points, whether the angle between the person to be shot and the camera is changing. It can be understood that when the person to be photographed is not within the monitoring range of the mobile terminal, the face image corresponding to the person to be photographed does not exist in the image shot by the mobile terminal.
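The follow-or-return behavior can be sketched as a small state holder; the class and attribute names are assumptions made for illustration:

```python
DEFAULT_ANGLE = 0.0  # the camera's original (non-rotated) position

class TrackingCamera:
    """Follows the person while a face is detected; otherwise returns home."""

    def __init__(self):
        self.angle = DEFAULT_ANGLE

    def update(self, face_detected, person_angle):
        if face_detected:
            self.angle = person_angle   # keep rotating with the person
        else:
            self.angle = DEFAULT_ANGLE  # no face: return to original position
```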
In the embodiment, after the camera of the mobile terminal rotates along with the person to be shot, the mobile terminal detects whether the face image of the person to be shot still exists in the shot image; and if the face image of the person to be shot does not exist in the shot image, controlling the camera of the mobile terminal to return to the original position. Therefore, when the mobile terminal cannot shoot the person to be shot, the mobile terminal automatically returns to the original position.
In addition, the embodiment of the invention also provides a computer readable storage medium.
The computer-readable storage medium has a face image recognition program stored thereon, and when the face image recognition program is executed by a processor, the following steps are implemented:
when the mobile terminal detects a shooting instruction for shooting a person to be shot, shooting the person to be shot through a lens group in the mobile terminal according to the shooting instruction so as to obtain an image to be shot corresponding to the person to be shot;
extracting light spots in the image to be shot;
recognizing a face image in the image to be shot according to the light spots;
the lens group is arranged between the diffractive optical element and the photosensitive component of the mobile terminal camera.
After the mobile terminal detects a shooting instruction for shooting a person to be shot, the person to be shot is shot through the lens group in the mobile terminal according to the shooting instruction, so as to obtain an image to be shot corresponding to the person to be shot; the light spots in the image to be shot are extracted; and the face image in the image to be shot is recognized according to the light spots. The lens group is arranged between the diffractive optical element and the photosensitive component of the mobile terminal camera. The laser used in shooting the image is converged and diverged by the lens group, which increases the shooting distance for the shot object and thus improves the quality of images obtained by long-range shooting; the laser passing through the lens group forms structured light, the face image in the image to be shot is identified through the structured light, and the accuracy of face recognition in the image to be shot is thereby improved.
Further, the step of identifying the face image in the image to be shot according to the light spots comprises the following steps:
determining the size of each light spot, and comparing the size of each light spot with the size of a light spot of a plane object stored in advance to obtain a comparison result;
and identifying the three-dimensional face image in the image to be shot according to the comparison result.
Further, the step of identifying the face image in the image to be shot according to the light spots comprises:
determining the receiving time corresponding to each light spot;
and identifying the three-dimensional face image in the image to be shot according to the receiving time.
Further, after the step of identifying the face image in the image to be photographed according to the light spot, the face image identification program when executed by the processor implements the steps of:
determining the face features of the preset area of the face image according to the light spots, and searching preset features corresponding to the face features in a preset feature library;
if the preset features are found in the preset feature library, determining a preset face image corresponding to the preset features;
calculating the similarity between the preset face image and the recognized face image;
and if the similarity is smaller than the preset similarity, determining that the person to be shot has camouflage.
Further, after the step of calculating the similarity between the preset face image and the recognized face image, the face image recognition program when executed by the processor implements the steps of:
judging whether the similarity is smaller than the preset similarity or not;
and if the similarity is greater than or equal to the preset similarity, determining that the person to be shot is not camouflaged.
Further, after the step of identifying the face image in the image to be photographed according to the light spot, the face image identification program when executed by the processor implements the steps of:
determining whether the angle between the person to be shot and the mobile terminal is changed or not through the face images identified at different time points;
and if the angle is changed, controlling the camera of the mobile terminal to rotate along with the person to be shot.
Further, after the step of controlling the camera of the mobile terminal to rotate along with the person to be photographed if the angle is changed, the face image recognition program is executed by the processor to implement the following steps:
detecting whether the face image exists in the image shot by the mobile terminal;
and if the face image does not exist in the shot image, controlling the camera to return to the original position.
Further, the lens group is made of glass or plastic.
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the face image recognition method described above, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a mobile terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A face image recognition method is characterized by comprising the following steps:
when the mobile terminal detects a shooting instruction for shooting a person to be shot, shooting the person to be shot through a lens group in the mobile terminal according to the shooting instruction so as to obtain an image to be shot corresponding to the person to be shot;
extracting light spots in the image to be shot;
recognizing a face image in the image to be shot according to the light spots;
the lens group is arranged between the diffractive optical element and the photosensitive component of the mobile terminal camera, so that laser used in the image shooting process is converged and diffused through the lens group, and the shooting distance of a shot object is increased.
2. The face image recognition method as claimed in claim 1, wherein the step of recognizing the face image in the image to be shot according to the light spots comprises:
determining the size of each light spot, and comparing the size of each light spot with the size of a light spot of a plane object stored in advance to obtain a comparison result;
and identifying the three-dimensional face image in the image to be shot according to the comparison result.
3. The face image recognition method as claimed in claim 1, wherein the step of recognizing the face image in the image to be shot according to the light spots comprises:
determining the receiving time corresponding to each light spot;
and identifying the three-dimensional face image in the image to be shot according to the receiving time.
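Claim 3's receiving-time criterion is essentially a per-spot time-of-flight measurement. A minimal sketch (Python; the nanosecond units, zero emit time, and relief threshold are assumptions, not values from the patent):

```python
SPEED_OF_LIGHT_M_PER_NS = 0.2998  # metres travelled per nanosecond

def spot_depths(receive_times_ns, emit_time_ns=0.0):
    """Convert each spot's round-trip receiving time into a distance in metres."""
    return [(t - emit_time_ns) * SPEED_OF_LIGHT_M_PER_NS / 2.0
            for t in receive_times_ns]

def has_facial_relief(receive_times_ns, min_relief_m=0.01):
    """A real face yields varying depths across spots; a flat surface
    (photo, screen) returns nearly identical receiving times for every spot."""
    depths = spot_depths(receive_times_ns)
    return max(depths) - min(depths) > min_relief_m
```

For example, spots received after 2.0 ns and 2.2 ns lie roughly 0.30 m and 0.33 m away, about 3 cm of relief, which clears the (hypothetical) 1 cm threshold.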
4. The face image recognition method according to claim 1, wherein after the step of recognizing the face image in the image to be shot according to the light spots, the method further comprises:
determining the face features of the preset area of the face image according to the light spots, and searching preset features corresponding to the face features in a preset feature library;
if the preset features are found in the preset feature library, determining a preset face image corresponding to the preset features;
calculating the similarity between the preset face image and the recognized face image;
and if the similarity is smaller than the preset similarity, determining that the person to be shot is disguised.
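Claims 4 and 5 reduce to a threshold test on a similarity score. A sketch using cosine similarity over face-feature vectors (the patent names neither a similarity measure nor a threshold; both are assumptions here):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_disguised(preset_features, recognized_features, preset_similarity=0.8):
    """Claim 4: similarity below the preset value -> disguise suspected;
    claim 5: similarity at or above the preset value -> no disguise."""
    return cosine_similarity(preset_features, recognized_features) < preset_similarity
```

The preset feature vector would come from the feature library the claim mentions; the 0.8 threshold is purely illustrative.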
5. The face image recognition method according to claim 4, wherein after the step of calculating the similarity between the preset face image and the recognized face image, the method further comprises:
judging whether the similarity is smaller than the preset similarity or not;
and if the similarity is greater than or equal to the preset similarity, determining that the person to be shot is not disguised.
6. The face image recognition method according to claim 1, wherein after the step of recognizing the face image in the image to be shot according to the light spots, the method further comprises:
determining, from the face images recognized at different time points, whether the angle between the person to be shot and the mobile terminal has changed;
and if the angle has changed, controlling the camera of the mobile terminal to rotate along with the person to be shot.
7. The face image recognition method according to claim 6, wherein after the step of controlling the camera of the mobile terminal to rotate along with the person to be shot if the angle has changed, the method further comprises:
detecting whether the face image exists in the image shot by the mobile terminal;
and if the face image does not exist in the shot image, controlling the camera to return to the original position.
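Claims 6 and 7 together describe a simple tracking loop: rotate the camera toward the subject while a face is recognized, and return it to the original position once the face leaves the frame. One control step of that loop might look as follows (the deadband, home angle, and degree units are hypothetical):

```python
def next_camera_angle(current_angle, subject_angle, face_detected,
                      home_angle=0.0, deadband_deg=1.0):
    """One step of the tracking described in claims 6-7 (angles in degrees)."""
    if not face_detected:
        return home_angle          # claim 7: face lost -> return to origin
    if abs(subject_angle - current_angle) > deadband_deg:
        return subject_angle       # claim 6: follow the person to be shot
    return current_angle           # within tolerance: hold position
```

The deadband merely avoids jittering the camera on tiny angle changes; the claims themselves only require rotating when the angle changes and returning home when no face is detected.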
8. The face image recognition method according to any one of claims 1 to 7, wherein the lens group is made of glass or plastic.
9. A mobile terminal, characterized in that it comprises a memory, a processor, and a face image recognition program stored on the memory and executable on the processor, wherein the face image recognition program, when executed by the processor, implements the steps of the face image recognition method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a face image recognition program which, when executed by a processor, implements the steps of the face image recognition method according to any one of claims 1 to 8.
CN201711471122.9A 2017-12-28 2017-12-28 Face image recognition method, mobile terminal and computer-readable storage medium Active CN108197560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711471122.9A CN108197560B (en) 2017-12-28 2017-12-28 Face image recognition method, mobile terminal and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN108197560A (en) 2018-06-22
CN108197560B true CN108197560B (en) 2022-06-07

Family

ID=62586273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711471122.9A Active CN108197560B (en) 2017-12-28 2017-12-28 Face image recognition method, mobile terminal and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN108197560B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214897B (en) * 2018-10-08 2019-11-29 百度在线网络技术(北京)有限公司 Determine the method, apparatus, equipment and computer-readable medium of laying for goods position
CN110602272A (en) * 2019-07-22 2019-12-20 珠海格力电器股份有限公司 Face scanning method, face scanning device, mobile terminal and storage medium
CN112449111A (en) * 2020-11-13 2021-03-05 珠海大横琴科技发展有限公司 Monitoring equipment processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577789A (en) * 2012-07-26 2014-02-12 中兴通讯股份有限公司 Detection method and device
CN105915784A (en) * 2016-04-01 2016-08-31 纳恩博(北京)科技有限公司 Information processing method and information processing device
CN107169483A (en) * 2017-07-12 2017-09-15 深圳奥比中光科技有限公司 Tasks carrying based on recognition of face
CN107231470A (en) * 2017-05-15 2017-10-03 努比亚技术有限公司 Image processing method, mobile terminal and computer-readable recording medium
CN107464280A (en) * 2017-07-31 2017-12-12 广东欧珀移动通信有限公司 The matching process and device of user's 3D modeling

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPO798697A0 (en) * 1997-07-15 1997-08-07 Silverbrook Research Pty Ltd Data processing method and apparatus (ART51)
KR100941062B1 (en) * 2001-07-06 2010-02-05 팔란티르 리서치, 엘엘씨 Imaging system and methodology employing reciprocal space optical design
CN101356493A (en) * 2006-09-06 2009-01-28 苹果公司 Portable electronic device for photo management
US7408718B2 (en) * 2006-09-07 2008-08-05 Avago Technologies General Pte Ltd Lens array imaging with cross-talk inhibiting optical stop structure
CN102104709B (en) * 2009-12-21 2013-01-30 展讯通信(上海)有限公司 Method for processing image shot by camera and camera
CN103813087A (en) * 2012-11-13 2014-05-21 无锡华御信息技术有限公司 Remote control image acquisition module
CN103399414B (en) * 2013-07-22 2016-04-13 中国科学院上海光学精密机械研究所 Eliminate the method for diffraction optical element zero-order terms and twin-image
CN104814712A (en) * 2013-11-07 2015-08-05 南京三维视嘉科技发展有限公司 Three-dimensional endoscope and three-dimensional imaging method
JP6075644B2 (en) * 2014-01-14 2017-02-08 ソニー株式会社 Information processing apparatus and method
WO2015183699A1 (en) * 2014-05-30 2015-12-03 Apple Inc. Predictive messaging method
CN105451017A (en) * 2015-12-29 2016-03-30 努比亚技术有限公司 Camera module photosensitive quality detection method and device
CN106954020B (en) * 2017-02-28 2019-10-15 努比亚技术有限公司 A kind of image processing method and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
3D Face Recognition Based on Depth Information; Xu Hao; China Master's Theses Full-text Database, Information Science and Technology; 2016-01-15 (No. 01, 2016); I138-857 *

Also Published As

Publication number Publication date
CN108197560A (en) 2018-06-22

Similar Documents

Publication Publication Date Title
CN108566510B (en) Flexible screen control method, mobile terminal and readable storage medium
CN108566479B (en) Screen state control method, mobile terminal and computer readable storage medium
CN107231470B (en) Image processing method, mobile terminal and computer readable storage medium
CN110035176B (en) Brightness adjusting method of mobile terminal, mobile terminal and storage medium
CN108089791B (en) Screen resolution adjusting method, mobile terminal and computer readable storage medium
CN107707821B (en) Distortion parameter modeling method and device, correction method, terminal and storage medium
CN107730460B (en) Image processing method and mobile terminal
CN108062162B (en) Flexible screen terminal, placement form control method thereof and computer-readable storage medium
CN108198150B (en) Method for eliminating image dead pixel, terminal and storage medium
CN107832032B (en) Screen locking display method and mobile terminal
CN107465873B (en) Image information processing method, equipment and storage medium
CN108197560B (en) Face image recognition method, mobile terminal and computer-readable storage medium
CN109146463B (en) Mobile payment method, mobile terminal and computer readable storage medium
CN107241504B (en) Image processing method, mobile terminal and computer readable storage medium
CN111885307A (en) Depth-of-field shooting method and device and computer readable storage medium
CN107896304B (en) Image shooting method and device and computer readable storage medium
CN107422956B (en) Mobile terminal operation response method, mobile terminal and readable storage medium
CN110086993B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108279822B (en) Display method of camera application in flexible screen and mobile terminal
CN111443818A (en) Screen brightness regulation and control method and device and computer readable storage medium
CN112532838B (en) Image processing method, mobile terminal and computer storage medium
CN114900613A (en) Control method, intelligent terminal and storage medium
CN109215004B (en) Image synthesis method, mobile terminal and computer readable storage medium
CN110399780B (en) Face detection method and device and computer readable storage medium
CN107613204B (en) Focusing area adjusting method, terminal and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant