CN107729857B - Face recognition method and device, storage medium and electronic equipment - Google Patents

Info

Publication number
CN107729857B
CN107729857B
Authority
CN
China
Prior art keywords
face recognition
performance mode
frame
key
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711014406.5A
Other languages
Chinese (zh)
Other versions
CN107729857A (en)
Inventor
何新兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711014406.5A priority Critical patent/CN107729857B/en
Publication of CN107729857A publication Critical patent/CN107729857A/en
Application granted granted Critical
Publication of CN107729857B publication Critical patent/CN107729857B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a face recognition method, a face recognition device, a storage medium and electronic equipment. The method comprises the following steps: receiving a face recognition instruction; detecting a motion state of the device in a first performance mode; switching the device to a second performance mode when the motion state is a stationary state; performing face recognition on the scanned frame image in the second performance mode; and restoring the first performance mode when a recognition result is obtained. The device's resource occupancy while performing face recognition in the first performance mode is lower than in the second performance mode. The face recognition method, device, storage medium and electronic equipment can balance improved face recognition efficiency against resource occupancy.

Description

Face recognition method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a face recognition method, an apparatus, a storage medium, and an electronic device.
Background
With the development of technology, face recognition is applied in more and more scenarios; for example, mobile payment and mobile phone unlocking can be performed by means of face recognition. In the traditional face recognition process, recognizing a face quickly requires occupying more resources such as memory, that is, the device is in a high-power-consumption state; conversely, when the device is in a relatively low-power-consumption state, face recognition takes longer. It is therefore difficult for the conventional technology to reduce the power consumption of the device while still recognizing a face rapidly.
Disclosure of Invention
The application provides a face recognition method, a face recognition device, a storage medium and electronic equipment, which can reduce the power consumption of the equipment in the process of rapidly recognizing faces.
A method of face recognition, the method comprising:
receiving a face recognition instruction;
detecting a motion state of the device according to the first performance mode;
switching the device to a second performance mode when the motion state is a stationary state;
performing face recognition on the scanned frame image in a second performance mode;
when the identification result is obtained, restoring the first performance mode;
wherein the device's resource occupancy when performing face recognition in the first performance mode is lower than in the second performance mode.
An apparatus for face recognition, the apparatus comprising:
the face recognition instruction receiving module is used for receiving a face recognition instruction;
the motion state detection module is used for detecting the motion state of the equipment according to the first performance mode;
a performance switching module for switching the device to a second performance mode when the motion state changes to a stationary state;
the face recognition module is used for carrying out face recognition on the scanned frame image in a second performance mode;
the performance switching module is further used for restoring the first performance mode when a recognition result is obtained; wherein the device's resource occupancy when performing face recognition in the first performance mode is lower than in the second performance mode.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the face recognition method provided by the above embodiments.
An electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the steps of the face recognition method provided in the foregoing embodiments.
In the face recognition method, apparatus, storage medium and electronic device provided in this embodiment, the device's resource occupancy when performing face recognition in the first performance mode is lower than in the second performance mode. The second performance mode is started for face recognition only when the device is detected to be in the static state; that is, the device performs face recognition in the static state using the relatively high performance mode, which increases the speed at which the face recognition result is obtained. The first performance mode is restored after the recognition result is obtained, so the time spent in the second performance mode is reduced. This balances face recognition efficiency against resource occupancy, reducing resource occupancy as much as possible while still meeting the requirement of rapidly recognizing a face.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment of a face recognition method in one embodiment;
FIG. 2 is a schematic diagram showing an internal configuration of an electronic apparatus according to an embodiment;
FIG. 3 is a flow diagram of a face recognition method in one embodiment;
FIG. 4 is a flow chart of detecting a motion state of a device according to a first performance mode in another embodiment;
FIG. 5 is a flow diagram of face recognition based on key frames in one embodiment;
FIG. 6 is a diagram of region partitioning in one embodiment;
FIG. 7 is a block diagram of a face recognition apparatus according to an embodiment;
FIG. 8 is a block diagram of a portion of the structure of a handset associated with an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first performance mode may be referred to as a second performance mode, and similarly, a second performance mode may be referred to as a first performance mode, without departing from the scope of the present application. The first performance mode and the second performance mode are both performance modes, but are not the same performance mode.
Fig. 1 is a schematic diagram of an application environment of a face recognition method in an embodiment. As shown in fig. 1, the electronic device 110 may scan an object 120 in the environment using a camera thereon, and present the frame images obtained by the scanning in real time. When the motion state of the device, detected in the first performance mode, changes to a stationary state, the device switches to a second performance mode, performs face recognition on the scanned images in the second performance mode, and restores the first performance mode after a face recognition result is obtained. The device's resource occupancy when performing face recognition in the first performance mode is lower than in the second performance mode.
Fig. 2 is a schematic diagram of an internal structure of an electronic device in one embodiment. As shown in fig. 2, the electronic device includes a processor, a memory, a display screen, and a camera connected by a system bus. The processor provides computation and control capability and supports the operation of the whole electronic device. The memory is used for storing data, programs and the like; at least one computer program is stored on the memory and can be executed by the processor to implement the face recognition method suitable for the electronic device provided in the embodiments of the application. The memory may include a non-volatile storage medium, such as a magnetic disk, an optical disc or a read-only memory (ROM), as well as a random access memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the face recognition method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium; for example, frame images obtained by real-time scanning can be cached there. The camera may include a first camera module and a second camera module, both of which can be used to generate frame images. The display screen may be a touch screen, such as a capacitive screen or an electronic screen, used for displaying frame images, the recognition result or other visual information, and may also be used for detecting touch operations applied to the display screen and generating corresponding instructions. Those skilled in the art will appreciate that the architecture shown in fig. 2 is a block diagram of only a portion of the architecture related to the present application and does not limit the electronic devices to which the present application may be applied; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment, as shown in fig. 3, a face recognition method is provided, and this embodiment is mainly explained by applying the method to the electronic device shown in fig. 1, where the method includes:
step 302, receiving a face recognition instruction.
In one embodiment, the electronic device may generate the face recognition instruction by detecting an operation acting on the interface. The face recognition instruction can be triggered by a detected touch operation, a press of a physical key, a voice control operation, a shaking operation on the device, or the like. The touch operation may be a touch click, a touch long press, a touch slide, a multi-point touch operation, and the like. The electronic device may provide an open button for triggering face recognition; when a click on the open button is detected, the face recognition instruction is triggered. The electronic device can also preset starting voice information for triggering the face recognition instruction: it receives voice information through a voice receiving device, analyzes it, and triggers the face recognition instruction when the received voice information matches the preset starting voice information.
In one embodiment, the electronic device may allow face recognition to be selected as the way to complete a certain function, such as mobile payment or screen unlocking. When selection of the face recognition mode is detected, the face recognition instruction can be triggered.
In step 304, the motion state of the device is detected according to the first performance mode.
In one embodiment, the electronic device processes information at different speeds and occupies different amounts of resources in different performance modes. Taking a mobile phone as an example, the phone may generally provide several fixed or customizable performance modes, such as a "low battery mode", a "normal mode", and a "high performance mode", for the user to select. The first performance mode is the performance mode in which the mobile terminal is currently operating.
In one embodiment, after receiving the face recognition instruction, the camera is used for scanning information in a scene in the first performance mode, a frame image is generated, and the frame image generated in real time is displayed on the interface. The frame image is a real-time frame image formed in a shooting state through a camera. After the electronic equipment receives the face recognition instruction, the camera can be called for scanning, and the electronic equipment enters a scanning state. Optionally, the camera includes a first camera module and a second camera module. The first camera module and/or the second camera module can be used for scanning objects in the shooting environment to form the frame image.
In one embodiment, the electronic device may detect a motion state of the device itself, including a stationary state and a moving state. The electronic device can detect the motion state of the electronic device according to the variation of the scanned scene or a built-in motion detection element.
And step 306, switching the device to a second performance mode when the motion state is a static state.
In this embodiment, the device's resource occupancy when performing face recognition in the first performance mode is lower than in the second performance mode; that is, the second performance mode is a high performance mode relative to the first performance mode. The electronic device can switch to the second performance mode as soon as it is determined to be in the static state. The device may initially be in a moving state after receiving the face recognition instruction and switch to the second performance mode as soon as it detects that the motion state has changed to a static state, or it may already be in a static state, in which case the switch to the second performance mode can be made immediately.
And 308, performing face recognition on the scanned frame image in the second performance mode.
In this embodiment, the electronic device may perform face recognition on one or more frame images obtained by scanning in the second performance mode to recognize the face feature information therein. Optionally, the electronic device may scan the whole of each frame or only a local area, and synthesize the face feature information from the scans of the one or more frames.
The local area can be a fixed area: the electronic device can display a face position prompt box on the interface so that the user can adjust the device until the face in the displayed frame image falls inside the prompt box, and the area where the prompt box is located is the fixed area. The electronic device can select the image information in the fixed area of each frame image for face recognition, and combine multiple frames to obtain the face feature information.
In one embodiment, the second performance mode is a high performance mode preset for use in rapid face recognition, relative to the first performance mode. Under the second performance mode, more resources can be occupied for face recognition processing, and the face recognition efficiency is improved.
In step 310, when the recognition result is obtained, the first performance mode is restored.
In this embodiment, the recognition result may be a determination of the identity of the recognized face: the electronic device compares the extracted face feature information with preset face feature information, determines whether they match, and obtains the recognition result once a match or non-match decision is reached. At this point, the performance mode of the electronic device can be restored to the first performance mode.
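The match/non-match decision can be illustrated with a minimal sketch. The cosine-similarity measure, the feature-vector representation and the 0.8 threshold are illustrative assumptions only; the patent does not specify how the feature information is compared.

```python
import math

def features_match(extracted, preset, threshold=0.8):
    """Hypothetical comparison of an extracted face-feature vector against the
    preset one; returns True for a match, False for a non-match."""
    dot = sum(a * b for a, b in zip(extracted, preset))
    norm = math.sqrt(sum(a * a for a in extracted)) * math.sqrt(sum(b * b for b in preset))
    if norm == 0:
        return False
    return dot / norm >= threshold  # either outcome counts as "recognition result obtained"
```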
According to the face recognition method, the device's resource occupancy when performing face recognition in the first performance mode is lower than in the second performance mode, and the second performance mode is started for face recognition only when the device is detected to be in a static state; that is, the device performs face recognition in the static state using the relatively high performance mode, which increases the speed at which the face recognition result is obtained. The first performance mode is restored after the recognition result is obtained, so the time the device spends in the second performance mode is reduced. This balances face recognition efficiency against resource occupancy, reducing resource occupancy as much as possible while still meeting the requirement of rapidly recognizing a face.
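The sequence described above can be summarized in a short sketch. The `device` object and its methods (`detect_motion_state`, `set_performance_mode`, `scan_frame`, `recognize_face`) are hypothetical placeholders introduced only to show the mode-switching logic around recognition; they are not APIs defined by the patent.

```python
import time

LOW_POWER_MODE = "first_performance_mode"    # lower resource occupancy
HIGH_PERF_MODE = "second_performance_mode"   # higher resource occupancy, faster recognition

def handle_face_recognition_instruction(device):
    """Illustrative flow: recognize a face in the high-performance mode only
    while the device is stationary, then restore the original mode."""
    original_mode = LOW_POWER_MODE
    device.set_performance_mode(original_mode)

    # Wait (in the first performance mode) until the device is stationary.
    while device.detect_motion_state() != "stationary":
        time.sleep(0.05)

    # Stationary: switch to the second (high) performance mode and recognize.
    device.set_performance_mode(HIGH_PERF_MODE)
    try:
        result = None
        while result is None:
            frame = device.scan_frame()
            result = device.recognize_face(frame)   # None until a match/non-match decision
    finally:
        # Restore the first performance mode as soon as a result is obtained.
        device.set_performance_mode(original_mode)
    return result
```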
In one embodiment, detecting a motion state of a device according to a first performance mode includes: the motion detection element is invoked to detect a motion state of the device in accordance with a first performance mode.
The motion detection element is an element suitable for detecting the motion state of the device, and may include, but is not limited to, a gyroscope, a gravity sensor, an acceleration sensor, and the like. The electronic device can call the built-in motion detection element, calculate its real-time moving speed, and judge that it is in a static state when the moving speed stays at 0, or close to 0, for a preset duration. When the holding duration is shorter than the preset duration or the moving speed is not close to 0, the device is judged to be in a moving state. Here, "close to 0" means that the real-time moving speed is smaller than a small preset speed threshold, and the preset duration may be short, such as 1 second or 0.5 second. Invoking the motion detection element to detect the motion state of the device can improve the efficiency of motion state detection.
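A rough illustration of the speed-and-hold check follows. It assumes a hypothetical `read_speed()` callable that returns the real-time moving speed derived from the motion detection element; the threshold, hold duration and timeout are example values, not values fixed by the patent.

```python
import time

def is_stationary(read_speed, speed_threshold=0.02, hold_seconds=0.5, poll_interval=0.02):
    """Return True once the reported speed stays below the threshold for the
    required hold duration; return False if motion persists until the timeout."""
    start = None
    deadline = time.monotonic() + 5.0          # give up after a few seconds of motion
    while time.monotonic() < deadline:
        speed = read_speed()
        if speed <= speed_threshold:           # "close to 0"
            if start is None:
                start = time.monotonic()
            elif time.monotonic() - start >= hold_seconds:
                return True                    # held near zero long enough: stationary
        else:
            start = None                       # motion resumed: restart the hold timer
        time.sleep(poll_interval)
    return False
```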
In one embodiment, as shown in FIG. 4, step 304 comprises:
step 402, acquiring scanned continuous multi-frame images according to a first performance mode.
In this embodiment, the electronic device may generate continuous frame images in real time according to a preset scanning frequency, and display the generated frame images on the interface, so as to improve the smoothness of the screen display.
And step 404, detecting the similarity between the multi-frame images.
The electronic device can detect the similarity between the extracted adjacent frame images. Every generated frame may be extracted, or the frame images generated in real time may be sampled at a certain frequency, for example 1 frame image extracted every n frames, where n may be any suitable natural number such as 1, 2 or 3.
And 406, when the similarity is greater than a preset similarity threshold, judging that the motion state of the equipment is a static state.
The similarity threshold is a preset threshold for judging whether frame images are similar, and may be a suitably high similarity value such as 80% or 90%. Optionally, the electronic device may check a preset number of consecutive detected similarities and, if all of them are greater than the preset similarity threshold, determine that the device is in a stationary state. The preset number may be any suitable number, such as 3 or 5.
In one embodiment, the preset number may be determined from a holding time length: when the interval between the first and last of m consecutively extracted frame images reaches the holding time length, the preset number may be m-1. For example, suppose the holding time is 0.5 seconds and the device generates frame images in real time at 50 frames per second, extracting frames at an interval of n frames and detecting the similarity between adjacent extracted frames. The larger n is, the smaller the preset number can be. When n is 1, the static state is determined once 12 consecutive similarities exceed the preset similarity threshold, that is, the preset number may be 12; when n is 2, the static state is determined once 8 consecutive similarities exceed the threshold, that is, the preset number may be 8.
Judging the motion state of the device from the similarity between frame images can improve the accuracy of the judgment, that is, it can be determined more reliably that the scanned picture is stable. Sampling frames at the preset sampling frequency reduces the number of similarity detections and improves the smoothness of the picture display.
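The frame-similarity variant can be sketched as follows. `frame_similarity` stands in for whatever similarity measure the device uses (the patent does not fix one), and the frame rate, holding time and interval n mirror the worked example above; with these defaults the required streak comes out to 12 for n = 1 and 8 for n = 2.

```python
def required_consecutive_count(frame_rate=50, hold_seconds=0.5, n=1):
    """Number of consecutive above-threshold similarities that covers the
    holding time when 1 frame is sampled every (n + 1) generated frames."""
    sampled_frames = int(frame_rate * hold_seconds / (n + 1)) + 1
    return sampled_frames - 1          # adjacent pairs among the sampled frames

def is_stationary_by_similarity(frames, frame_similarity, threshold=0.9,
                                frame_rate=50, hold_seconds=0.5, n=1):
    """frames: list of frame images in scan order."""
    needed = required_consecutive_count(frame_rate, hold_seconds, n)
    sampled = frames[::n + 1]          # sample 1 frame every n frames
    streak = 0
    for prev, cur in zip(sampled, sampled[1:]):
        if frame_similarity(prev, cur) > threshold:
            streak += 1
            if streak >= needed:
                return True            # e.g. 12 consecutive similarities when n = 1
        else:
            streak = 0                 # the picture changed: not stationary yet
    return False
```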
In one embodiment, step 304 includes: selecting one or more frames of images from the scanned multi-frame images as key frames; and under the first performance mode, performing face recognition according to the key frame.
A key frame is a frame image selected for face recognition and serves as a reference frame for other frames. When a frame image is the key frame of another frame image, face recognition on the other frame image depends on the face recognition information of the key frame.
The electronic equipment can select one or more frames of images from frame images obtained by real-time scanning in a static state as key frames, and performs face recognition according to the key frames in a first performance mode. When multiple key frames are included, the electronic device may select key frames from the frame images generated in real time at preset intervals. Optionally, the face feature information may be obtained by performing recognition only according to the key frame. Or the key frame and other frame images depending on the key frame can be combined for recognition to obtain the face feature information. By setting the key frame, the face recognition speed can be further improved.
In one embodiment, as shown in fig. 5, the face recognition based on the key frame includes:
step 502, global detection is performed on the key frame, and a target identification area in the key frame is identified, wherein the target identification area is an area where the face is located in the frame image.
Global detection means detecting the image information in all regions of the key frame. The electronic device can perform global detection on the key frame, detect the area of the frame image whose image information matches face feature information, and take that area as the target recognition area. That is, the key frame can be used only to locate the target recognition area, which improves detection efficiency.
Step 504, performing face recognition on the target recognition area in the first number of frame images after the key frame.
The first number is determined adaptively: once the recognition result is obtained, the number of frame images that participated in recognition, excluding the key frame, is the first number. The electronic device may use every frame image after the key frame for recognition, or sample part of them in a manner similar to the sampling described above. For each selected frame image, only the target recognition area needs to be processed, and the face feature information in that area is extracted for recognition.
Step 506, obtaining a face recognition result according to the key frame and the first number of frame images.
In one embodiment, the electronic device may select the image information of a corresponding local area within the target recognition area of each frame image after the key frame, and extract facial feature information from that local area; together, the local areas of several frames cover the whole target recognition area.
The electronic device may preset the number of divisions of the target recognition area and divide it into that number of sub-areas. The image data of one sub-area is selected in turn from each subsequently selected frame image, and the sub-areas selected from that number of frame images together cover the target recognition area. Optionally, the target recognition area is divided equally. As shown in fig. 6, the target recognition area may be divided into 6 equal sub-areas, namely the 1st, 2nd, ..., 6th sub-areas from top to bottom. From each of the 1st to 6th subsequently selected frame images, the 1st to 6th sub-areas are extracted in turn, and the sub-areas selected from these 6 frame images make up the target recognition area. The image information in each extracted sub-area can then be combined for face recognition. If no recognition result is obtained, the 7th to 12th frame images are further extracted and processed in the same way as the 1st to 6th frame images, combining the earlier image information, until a recognition result is obtained.
In this embodiment, a key frame is selected, the target recognition area is determined from the key frame, and only that target recognition area needs to be processed for face recognition in subsequent frame images, which further improves face recognition efficiency.
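To make the sub-area rotation concrete, here is a small sketch. It assumes numpy-style frame indexing and a horizontal split of the target recognition area into 6 equal strips, as in the FIG. 6 example; `detect_face_region`, `extract_features` and `match` are hypothetical stand-ins for the unspecified detection and recognition primitives.

```python
def recognize_with_subregions(key_frame, later_frames, detect_face_region,
                              extract_features, match, num_subregions=6):
    """Global detection on the key frame locates the target recognition area;
    each later frame contributes only one horizontal strip of that area."""
    x, y, w, h = detect_face_region(key_frame)        # target recognition area
    strip_h = h // num_subregions
    collected = []
    for i, frame in enumerate(later_frames):
        k = i % num_subregions                        # 1st, 2nd, ..., 6th sub-area in turn
        strip = frame[y + k * strip_h : y + (k + 1) * strip_h, x : x + w]
        collected.append(extract_features(strip))
        if (i + 1) % num_subregions == 0:             # one full target area assembled
            result = match(collected[-num_subregions:])
            if result is not None:
                return result                         # recognition result obtained
    return None                                       # no result yet: keep scanning
```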
In one embodiment, selecting one or more frames of images from the scanned multiple frames of images as key frames comprises: and selecting a second number of frame images from the scanned multi-frame images as key frames.
The second number may be any number, such as 3 or 5. The electronic device may sample a second number of frame images from the scanned frame images as key frames according to the sampling pattern described above, or use a second number of consecutively scanned frame images as key frames.
Carrying out face recognition according to the key frame, comprising the following steps: performing region detection on a preset region in each key frame, forming global detection according to the region detection on a second number of key frames, and identifying a target identification region where the face is located in a frame image; performing face recognition on target recognition areas in a first number of frame images after the last key frame; and obtaining a face recognition result according to the second number of key frames and the first number of frame images.
The electronic device may divide each key frame in a manner similar to the region division described above, detect a different region in each key frame, and assemble the regions selected from the second number of key frames into a whole; that is, the region detections over the second number of key frames together form a global detection. The target recognition area is then determined from the detection over these key frames.
After the target recognition area is determined, only that area is processed for face recognition in the subsequently selected frame images, that is, face recognition is performed on the target recognition area in the first number of frame images after the last key frame. The face recognition result is obtained from the recognition information of every frame image participating in recognition, that is, from the second number of key frames and the first number of frame images.
In this embodiment, by performing region detection on the key frames and assembling the region detections over the second number of key frames into a global detection, the detection time spent on each selected key frame can be further reduced, which improves the fluency of the picture display.
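A brief sketch of this variant, under the same strip-division assumption: each of the second-number key frames is searched only in its own preset region, and the partial detections are merged so that, taken together, the key frames amount to one global detection. `detect_face_in_region` and the numpy-style `frame.shape` access are assumptions for illustration.

```python
def locate_target_area_across_keyframes(key_frames, detect_face_in_region):
    """Each key frame is searched only within its own horizontal strip; the
    union of the per-strip detections plays the role of one global detection."""
    hits = []
    for k, frame in enumerate(key_frames):            # second number of key frames
        h = frame.shape[0]
        strip = (k * h // len(key_frames), (k + 1) * h // len(key_frames))
        box = detect_face_in_region(frame, strip)     # region detection on a preset strip
        if box is not None:
            hits.append(box)                          # (x, y, w, h) partial detection
    if not hits:
        return None
    # Merge the partial detections into a single target recognition area.
    x0 = min(b[0] for b in hits); y0 = min(b[1] for b in hits)
    x1 = max(b[0] + b[2] for b in hits); y1 = max(b[1] + b[3] for b in hits)
    return (x0, y0, x1 - x0, y1 - y0)
```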
In one embodiment, selecting one or more frames of images from the scanned multiple frames of images as key frames comprises: and selecting a third number of frame images as key frames at intervals from the scanned multi-frame images. Carrying out face recognition according to the key frame, comprising the following steps: carrying out global detection on each key frame, and identifying a target identification area in each key frame, wherein the target identification area is an area where the face is located in the current key frame; performing face recognition on a region which is the same as a target recognition region of a current frame image in a frame image between the current key frame and a next key frame of the current key frame; performing face recognition on the same area as the target recognition area of the current frame image in the frame images of the fourth number after the last key frame; and obtaining a face recognition result according to the recognition information of each frame image participating in recognition.
The third number is determined adaptively: once the recognition result is obtained, the number of frames that were selected as key frames is the third number. The electronic device may select key frames at intervals, and the interval between adjacent key frames may be the same or different; for example, 1 frame may be selected as a key frame every n frames. The frame images participating in recognition comprise the selected third number of key frames, the frame images between adjacent key frames, and the fourth number of frame images after the last key frame.
Global detection is performed on each key frame, and the target recognition area of the current key frame is taken as the current target recognition area. For the frame images between the current key frame and the next key frame, face detection is performed only on the current target recognition area. The detection may proceed by sub-areas, as described above, or by detecting the whole target recognition area in each frame image, until a face recognition result is obtained. In this way, the target recognition area is updated each time a new key frame is selected and face detection on subsequent frame images uses the updated area, which keeps the target recognition area accurate and further improves face recognition efficiency.
In one embodiment, the interval number n between adjacent key frames may be chosen as a trade-off between the power consumption of the device and the speed of face detection. One round of detection consists of the current key frame and the frame images before the next key frame; in the second performance mode, a smaller interval number n can be used for the first few rounds of detection to further accelerate face detection.
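The trade-off mentioned here can be expressed as a simple schedule: earlier rounds use a smaller interval n (key frames come more often, so detection converges faster), later rounds a larger one to save power. All the concrete numbers below are illustrative assumptions, not values from the patent.

```python
def keyframe_intervals(num_rounds, fast_rounds=3, fast_n=2, slow_n=6):
    """Yield the interval n to use between key frames for each detection round."""
    for r in range(num_rounds):
        yield fast_n if r < fast_rounds else slow_n

def select_keyframe_indices(total_frames, intervals):
    """Turn a sequence of per-round intervals into frame indices of key frames."""
    idx, out = 0, []
    for n in intervals:
        out.append(idx)                 # this frame is a key frame
        idx += n + 1                    # skip n frames before the next key frame
        if idx >= total_frames:
            break
    return out

# Example: faster key-frame cadence for the first rounds in the second performance mode.
# select_keyframe_indices(40, keyframe_intervals(8)) -> [0, 3, 6, 9, 16, 23, 30, 37]
```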
In one embodiment, as shown in fig. 7, there is provided a face recognition apparatus, the apparatus including:
a face recognition instruction receiving module 702, configured to receive a face recognition instruction.
A motion state detection module 704 for detecting a motion state of the device according to the first performance mode.
A performance switching module 706 for switching the device to the second performance mode when the motion state changes to the stationary state.
And a face recognition module 708, configured to perform face recognition on the scanned frame image in the second performance mode.
The performance switching module 706 is further configured to restore the first performance mode when the recognition result is obtained. The device's resource occupancy when performing face recognition in the first performance mode is lower than in the second performance mode.
In one embodiment, the performance switching module 706 is further configured to invoke a motion detection element to detect a motion state of the device according to the first performance mode.
In one embodiment, the performance switching module 706 is further configured to acquire the scanned consecutive multi-frame images according to a first performance mode; detecting the similarity between multi-frame images; and when the similarity is greater than a preset similarity threshold, judging that the motion state of the equipment is a static state.
In one embodiment, the face recognition module 708 is further configured to select one or more frames of images from the scanned multiple frames of images as key frames; and under the first performance mode, performing face recognition according to the key frame.
In one embodiment, the face recognition module 708 is further configured to perform global detection on the key frame, and identify a target recognition area in the key frame, where the target recognition area is an area where a face is located in the frame image; performing face recognition on target recognition areas in a first number of frame images after the key frame; and obtaining a face recognition result according to the key frame and the first number of frame images.
In one embodiment, the face recognition module 708 is further configured to select a second number of frame images from the scanned multiple frame images as key frames; performing region detection on a preset region in each key frame, forming global detection according to the region detection on a second number of key frames, and identifying a target identification region where the face is located in a frame image; performing face recognition on target recognition areas in a first number of frame images after the last key frame; and obtaining a face recognition result according to the second number of key frames and the first number of frame images.
In one embodiment, the face recognition module 708 is further configured to select a third number of frame images as key frames at intervals from the scanned multiple frame images; carrying out global detection on each key frame, and identifying a target identification area in each key frame, wherein the target identification area is an area where the face is located in the current key frame; performing face recognition on a region which is the same as a target recognition region of a current frame image in a frame image between the current key frame and a next key frame of the current key frame; performing face recognition on the same area as the target recognition area of the current frame image in the frame images of the fourth number after the last key frame; and obtaining a face recognition result according to the recognition information of each frame image participating in recognition.
The division of each module in the face recognition device is only used for illustration, and in other embodiments, the face recognition device may be divided into different modules as needed to complete all or part of the functions of the face recognition device.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the face recognition method provided by the above embodiments.
An electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the face recognition method provided in the foregoing embodiments.
An embodiment of the application also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the face recognition method provided by the above embodiments.
An embodiment of the application also provides an electronic device. As shown in fig. 8, for convenience of explanation only the parts related to the embodiments of the present application are shown; for technical details that are not disclosed, please refer to the method part of the embodiments of the present application. The electronic device may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer, a wearable device, and the like. The following description takes a mobile phone as an example.
fig. 8 is a block diagram of a partial structure of a mobile phone related to an electronic device provided in an embodiment of the present application. Referring to fig. 8, the handset includes: radio Frequency (RF) circuitry 810, memory 820, input unit 830, display unit 840, sensor 850, audio circuitry 860, wireless fidelity (WiFi) module 870, processor 880, and power supply 890. Those skilled in the art will appreciate that the handset configuration shown in fig. 8 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The RF circuit 810 may be used for receiving and transmitting signals during information transmission and reception or during a call; it may receive downlink information from a base station and pass it to the processor 880 for processing, and may also transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 810 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 820 may be used to store software programs and modules, and the processor 880 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 820. The memory 820 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as an application program for a sound playing function, an application program for an image playing function, and the like), and the like; the data storage area may store data (such as audio data, an address book, etc.) created according to the use of the mobile phone, and the like. Further, the memory 820 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The input unit 830 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 800. Specifically, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, which may also be referred to as a touch screen, may collect touch operations performed by a user on or near the touch panel 831 (e.g., operations performed by the user on the touch panel 831 or near the touch panel 831 using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a preset program. In one embodiment, the touch panel 831 can include two portions, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts it to touch point coordinates, and sends the touch point coordinates to the processor 880, and can receive and execute commands from the processor 880. In addition, the touch panel 831 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 830 may include other input devices 832 in addition to the touch panel 831. In particular, other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), and the like.
The display unit 840 may be used to display information input by the user or information provided to the user and various menus of the cellular phone. The display unit 840 may include a display panel 841. In one embodiment, the Display panel 841 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. In one embodiment, touch panel 831 can overlay display panel 841, and when touch panel 831 detects a touch operation thereon or nearby, communicate to processor 880 to determine the type of touch event, and processor 880 can then provide a corresponding visual output on display panel 841 based on the type of touch event. Although in fig. 8, the touch panel 831 and the display panel 841 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 831 and the display panel 841 may be integrated to implement the input and output functions of the mobile phone.
The cell phone 800 may also include at least one sensor 850, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 841 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 841 and/or the backlight when the mobile phone is moved to the ear. The motion sensor can comprise an acceleration sensor, the acceleration sensor can detect the magnitude of acceleration in each direction, the magnitude and the direction of gravity can be detected when the mobile phone is static, and the motion sensor can be used for identifying the application of the gesture of the mobile phone (such as horizontal and vertical screen switching), the vibration identification related functions (such as pedometer and knocking) and the like; the mobile phone may be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuitry 860, speaker 861 and microphone 862 may provide an audio interface between the user and the handset. The audio circuit 860 can transmit the electrical signal converted from received audio data to the speaker 861, which converts it into a sound signal and outputs it; on the other hand, the microphone 862 converts a collected sound signal into an electrical signal, which is received by the audio circuit 860 and converted into audio data; the audio data is then output to the processor 880 for processing and transmitted to another mobile phone via the RF circuit 810, or output to the memory 820 for subsequent processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to send and receive e-mails, browse webpages, access streaming media and the like through the WiFi module 870, and provides wireless broadband Internet access for the user. Although fig. 8 shows WiFi module 870, it is understood that it is not an essential component of cell phone 800 and may be omitted as desired.
The processor 880 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 820 and calling data stored in the memory 820, thereby integrally monitoring the mobile phone. In one embodiment, processor 880 may include one or more processing units. In one embodiment, the processor 880 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, and the like; the modem processor handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 880.
The cell phone 800 also includes a power supply 890 (e.g., a battery) for powering the various components, which may be logically coupled to the processor 880 via a power management system that may be used to manage charging, discharging, and power consumption.
In one embodiment, the cell phone 800 may also include a camera, a bluetooth module, and the like.
In the embodiment of the present application, the processor 880 included in the mobile terminal implements the steps of the above-described face recognition method when executing the computer program stored in the memory.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The above-mentioned embodiments express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A face recognition method, comprising:
receiving a face recognition instruction;
acquiring scanned continuous multi-frame images according to a first performance mode;
detecting the similarity between the multi-frame images;
when the similarity is larger than a preset similarity threshold, judging that the motion state of the equipment is a static state;
when the motion state is a moving state, the equipment performs face recognition in a first performance mode;
switching the device to a second performance mode when the motion state is a stationary state;
performing face recognition on the scanned frame image in a second performance mode;
when the identification result is obtained, restoring the first performance mode;
the occupancy rate of the device on resources when the face recognition is carried out in the first performance mode is smaller than that in the second performance mode, and the efficiency of face recognition in the second performance mode is higher than that in the first performance mode.
2. The method of claim 1, further comprising:
the motion detection element is invoked to detect a motion state of the device in accordance with a first performance mode.
3. The method according to claim 1, wherein the performing face recognition on the scanned frame image in the second performance mode comprises:
selecting one or more frames of images from the scanned multi-frame images as key frames;
and under a second performance mode, carrying out face recognition according to the key frame.
4. The method of claim 3, wherein the performing face recognition based on the key frame comprises:
carrying out global detection on the key frame, and identifying a target identification area in the key frame, wherein the target identification area is an area where a human face is located in a frame image;
performing face recognition on target recognition areas in a first number of frame images after the key frame;
and obtaining a face recognition result according to the key frame and the first number of frame images.
5. The method according to claim 3, wherein the selecting one or more frames of images from the scanned multiple frames of images as key frames comprises:
selecting a second number of frame images from the scanned multi-frame images as key frames;
the face recognition according to the key frame comprises the following steps:
performing region detection on a preset region in each key frame, forming global detection according to the region detection on a second number of key frames, and identifying a target identification region where the face is located in a frame image;
performing face recognition on target recognition areas in a first number of frame images after the last key frame;
and obtaining a face recognition result according to the second number of key frames and the first number of frame images.
6. The method according to claim 3, wherein the selecting one or more frames of images from the scanned multiple frames of images as key frames comprises:
selecting a third number of frame images at intervals from the scanned multi-frame images as key frames;
the face recognition according to the key frame comprises the following steps:
carrying out global detection on each key frame, and identifying a target identification area in each key frame, wherein the target identification area is an area where a human face is located in the current key frame;
performing face recognition on a region which is the same as a target recognition region of a current frame image in a frame image between the current key frame and a key frame next to the current key frame;
performing face recognition on the same area as the target recognition area of the current frame image in the frame images of the fourth number after the last key frame;
and obtaining a face recognition result according to the recognition information of each frame image participating in recognition.
7. An apparatus for face recognition, the apparatus comprising:
the face recognition instruction receiving module is used for receiving a face recognition instruction;
the motion state detection module is used for acquiring scanned continuous multi-frame images according to a first performance mode; detecting the similarity between the multi-frame images; when the similarity is larger than a preset similarity threshold, judging that the motion state of the equipment is a static state;
a performance switching module for switching the device to a second performance mode when the motion state changes to a stationary state;
the face recognition module is used for carrying out face recognition on the scanned frame image in a second performance mode;
the performance switching module is further used for restoring the first performance mode when an identification result is obtained; when the device performs face recognition in the first performance mode, the occupancy rate of resources is smaller than that in the second performance mode, and the efficiency of performing face recognition in the second performance mode is higher than that in the first performance mode;
the face recognition module is further configured to perform face recognition in a first performance mode when the motion state is a moving state.
8. The apparatus of claim 7, wherein the motion state detection module is configured to invoke a motion detection element to detect the motion state of the device in the first performance mode.
9. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 6 are implemented when the computer program is executed by the processor.
CN201711014406.5A 2017-10-26 2017-10-26 Face recognition method and device, storage medium and electronic equipment Active CN107729857B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711014406.5A CN107729857B (en) 2017-10-26 2017-10-26 Face recognition method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN107729857A CN107729857A (en) 2018-02-23
CN107729857B (en) 2021-05-28

Family

ID=61213839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711014406.5A Active CN107729857B (en) 2017-10-26 2017-10-26 Face recognition method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN107729857B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163041A (en) * 2018-04-04 2019-08-23 Tencent Technology (Shenzhen) Co., Ltd. Video pedestrian re-identification method, device and storage medium
CN110197107B (en) * 2018-08-17 2024-05-28 Ping An Technology (Shenzhen) Co., Ltd. Micro-expression recognition method, micro-expression recognition device, computer equipment and storage medium
CN110705497A (en) * 2019-10-11 2020-01-17 OPPO Guangdong Mobile Telecommunications Corp., Ltd. Image frame processing method and device, terminal equipment and computer readable storage medium
CN118116117A (en) * 2022-11-30 2024-05-31 Honor Device Co., Ltd. Crowd identification method, device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200705163A (en) * 2005-07-29 2007-02-01 Holtek Semiconductor Inc Power-saving device and method of a wireless optical mouse
CN103957327A (en) * 2014-05-21 2014-07-30 Shanghai Huaqin Telecom Technology Co., Ltd. Automatic contextual-model switching method for a mobile terminal, and mobile terminal
CN104092822A (en) * 2014-07-01 2014-10-08 Huizhou TCL Mobile Communication Co., Ltd. Mobile phone state switching method and system based on face detection and eyeball tracking
CN104508605A (en) * 2012-07-12 2015-04-08 Swiftpoint Limited Improvements in devices for use with computers
CN105279813A (en) * 2015-10-23 2016-01-27 Beijing Qihoo Technology Co., Ltd. Electronic device, and method and device for switching power supply modes of the electronic device
CN106196509A (en) * 2016-08-19 2016-12-07 Gree Electric Appliances, Inc. of Zhuhai Air conditioner sleep mode control method and system
CN106851407A (en) * 2017-01-24 2017-06-13 Vivo Mobile Communication Co., Ltd. Control method and terminal for video playback progress
CN106951316A (en) * 2017-03-20 2017-07-14 Beijing Qihoo Technology Co., Ltd. Method and device for switching between virtual mode and realistic mode, and virtual reality device
CN108156345A (en) * 2016-12-06 2018-06-12 Konica Minolta, Inc. Image forming apparatus capable of reducing power consumption

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092345B (en) * 2013-01-11 2015-12-23 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Mode switching method and device for a mobile terminal
CN104935812B (en) * 2015-05-29 2017-11-03 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Method and device for controlling enabling of a self-timer mode
CN105678250B (en) * 2015-12-31 2019-10-11 Beijing Megvii Technology Co., Ltd. Face recognition method and device in video
CN105740675B (en) * 2016-02-02 2018-08-28 Shenzhen Zhidian Information Technology Co., Ltd. Method and system for triggering authorization management based on dynamic person recognition
CN106817653B (en) * 2017-02-17 2020-01-14 OPPO Guangdong Mobile Telecommunications Corp., Ltd. Audio setting method and device
CN107066983B (en) * 2017-04-20 2022-08-09 Tencent Technology (Shanghai) Co., Ltd. Identity verification method and device
CN107220620A (en) * 2017-05-27 2017-09-29 Beijing Xiaomi Mobile Software Co., Ltd. Face recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.

GR01 Patent grant