CN110263680B - Image processing method, device and system and storage medium - Google Patents

Image processing method, device and system and storage medium

Info

Publication number
CN110263680B
CN110263680B · CN201910477796.2A · CN201910477796A
Authority
CN
China
Prior art keywords
face
image
optimal
face image
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910477796.2A
Other languages
Chinese (zh)
Other versions
CN110263680A (en)
Inventor
卢龙飞 (Lu Longfei)
王宁 (Wang Ning)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd
Priority to CN201910477796.2A
Publication of CN110263680A
Application granted
Publication of CN110263680B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention provide an image processing method, an image processing apparatus, an image processing system, and a storage medium. The method includes the following steps: acquiring an image sequence in real time; performing face detection on each image in the image sequence in real time, acquiring a face image corresponding to each detected face, and analyzing the face quality of each face image; sending, among the face images belonging to the same target object, the earliest face image whose face quality meets a preset requirement to a face recognition module for face recognition; caching, in an optimal frame queue, the optimal face image that has the best face quality among the face images belonging to the target object and whose face quality exceeds that of the earliest face image; and, at a predetermined time, checking the image caching condition of the optimal frame queue, and if an optimal face image belonging to a specific object is cached in the optimal frame queue and the face of that specific object has not yet been recognized, sending the optimal face image belonging to the specific object to the face recognition module for face recognition. Useless computation can thereby be effectively reduced.

Description

Image processing method, device and system and storage medium
Technical Field
The present invention relates to the field of face recognition, and more particularly, to an image processing method, apparatus and system, and a storage medium.
Background
In the field of face recognition, an image acquisition device (e.g., a camera) may be used to acquire face images of a person, and face recognition may then be performed using the acquired images. From the moment a person enters the camera's field of view to the moment the person leaves it, the camera may capture many face images of that person. If every one of these face images were sent to the face recognition module for face recognition, the face recognition system would bear a huge computational load, easily causing problems such as device overheating, reduced computing capacity, increased power consumption, and shortened battery life. Most importantly, it would add useless computation, because once a person has been successfully recognized, there is no need to recognize that person repeatedly.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides an image processing method, an image processing device, an image processing system and a storage medium.
According to an aspect of the present invention, there is provided an image processing method. The method comprises the following steps: acquiring an image sequence in real time; performing face detection on each image in the image sequence in real time, acquiring a face image corresponding to each detected face, and analyzing the face quality of each face image; sending, among the face images belonging to the same target object, the earliest face image whose face quality meets a preset requirement to a face recognition module for face recognition; caching, in an optimal frame queue, the optimal face image that has the best face quality among the face images belonging to the target object and whose face quality exceeds that of the earliest face image; and, at a predetermined time, checking the image caching condition of the optimal frame queue, and if an optimal face image belonging to a specific object is cached in the optimal frame queue and the face of that specific object has not yet been recognized, sending the optimal face image belonging to the specific object to the face recognition module for face recognition.
Illustratively, the sending, to the face recognition module, an earliest face image whose face quality earliest meets a preset requirement among face images belonging to the same target object for face recognition includes: judging whether the face quality of each face image in the face images belonging to the target object meets a preset requirement or not in real time until the earliest face image appears; and sending the earliest face image to a face recognition module for face recognition.
Illustratively, the sending, to the face recognition module, an earliest face image whose face quality earliest meets a preset requirement among face images belonging to the same target object for face recognition includes: judging whether the face quality of each face image in the face images belonging to the target object meets a preset requirement or not in real time; under the condition that the face quality of the current face image in the face images belonging to the target object meets a preset requirement, judging whether any previous face image exists in the face images belonging to the target object and is sent to a face recognition module, and if not, determining that the current face image is the earliest face image; and sending the earliest face image to a face recognition module for face recognition.
Illustratively, caching, in the optimal frame queue, the optimal face image that has the best face quality among the face images belonging to the target object and whose face quality exceeds that of the earliest face image includes: caching, in the optimal frame queue, the first face image among the face images belonging to the target object whose face quality exceeds that of the earliest face image, as the optimal face image; each time a current face image belonging to the target object is obtained, comparing the face quality of the current face image with the face quality of the optimal face image; and, when the face quality of the current face image exceeds that of the optimal face image, updating the optimal face image cached in the optimal frame queue to the current face image.
Illustratively, the target object and the specific object are the same object, and checking the image buffer condition of the optimal frame queue at the predetermined time includes: in the case where face recognition based on the earliest face image fails, it is checked whether a face image belonging to the target object is cached in the optimal frame queue, wherein the predetermined timing is any timing after face recognition based on the earliest face image fails.
Illustratively, checking the image buffering condition of the optimal frame queue at a predetermined time includes: and checking whether the face image belonging to any object is cached in the optimal frame queue or not at intervals of a preset time period, wherein the specific object is any object of which the face image is cached in the optimal frame queue.
Illustratively, after checking the image buffering condition of the optimal frame queue at a predetermined time, the method further comprises: if the optimal face image belonging to the specific object is cached in the optimal frame queue and the face of the specific object is recognized, the optimal face image belonging to the specific object is removed from the optimal frame queue and/or the optimal face image belonging to the specific object is stored in a memory.
Illustratively, when performing face detection on each image in the image sequence in real time, the method further comprises: for two faces respectively located in two different images of an image sequence, whether the two faces belong to the same target object is judged based on the difference between position information in face detection results of the two faces.
According to another aspect of the present invention, there is provided an image processing apparatus comprising: an acquisition module for acquiring an image sequence in real time; a detection and analysis module for performing face detection on each image in the image sequence in real time, acquiring a face image corresponding to each detected face, and analyzing the face quality of each face image; a sending module for sending, among the face images belonging to the same target object, the earliest face image whose face quality meets a preset requirement to a face recognition module for face recognition; a caching module for caching, in an optimal frame queue, the optimal face image that has the best face quality among the face images belonging to the target object and whose face quality exceeds that of the earliest face image; and a checking module for checking the image caching condition of the optimal frame queue at a predetermined time and, if an optimal face image belonging to a specific object is cached in the optimal frame queue and the face of that specific object has not yet been recognized, sending the optimal face image belonging to the specific object to the face recognition module for face recognition.
According to another aspect of the present invention, there is provided an image processing system comprising a processor and a memory, wherein the memory has stored therein computer program instructions for executing the above image processing method when executed by the processor.
According to another aspect of the present invention, there is provided a storage medium having stored thereon program instructions for executing the above-described image processing method when executed.
According to the image processing method, apparatus, and system and the storage medium of the embodiments of the invention, a face recognition strategy combining the earliest face image and the optimal face image is adopted, which can effectively reduce useless computation, recognize a face as early as possible, and at the same time increase the probability of the face being recognized as much as possible.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail embodiments of the present invention with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 shows a schematic block diagram of an example electronic device for implementing an image processing method and apparatus in accordance with embodiments of the present invention;
FIG. 2 shows a schematic flow diagram of an image processing method according to an embodiment of the invention;
FIG. 3 shows a schematic diagram of an image processing flow according to one example of the invention;
FIG. 4 shows a schematic block diagram of an image processing apparatus according to an embodiment of the present invention; and
FIG. 5 shows a schematic block diagram of an image processing system according to one embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
The embodiment of the invention provides an image processing method, device and system and a storage medium. According to the image processing method provided by the embodiment of the invention, the face recognition strategy combining the earliest face image and the optimal face image is adopted, the strategy can effectively reduce useless calculation, recognize the face as soon as possible and simultaneously improve the face recognition probability as much as possible. The image processing method and the image processing device can be applied to any field needing face recognition, such as the fields of security entrance guard, electronic commerce, banking business and the like.
First, an exemplary electronic device 100 for implementing an image processing method and apparatus according to an embodiment of the present invention is described with reference to fig. 1.
As shown in fig. 1, the electronic device 100 includes one or more processors 102 and one or more storage devices 104. Optionally, the electronic device 100 may also include an input device 106, an output device 108, and an image capture device 110, which may be interconnected via a bus system 112 and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form such as a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), or a microprocessor. The processor 102 may be one of, or a combination of several of, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application Specific Integrated Circuit (ASIC), or other forms of processing units having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images and/or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, etc. Alternatively, the input device 106 and the output device 108 may be integrated together, implemented using the same interactive device (e.g., a touch screen).
The image capture device 110 may capture images and store the captured images in the storage device 104 for use by other components. The image capture device 110 may be a separate camera or a camera in a mobile terminal, etc. It should be understood that the image capture device 110 is merely an example, and the electronic device 100 may not include the image capture device 110. In this case, other devices having image capturing capabilities may be used to capture an image and transmit the captured image to the electronic device 100.
Exemplary electronic devices for implementing the image processing method and apparatus according to embodiments of the present invention may be implemented on devices such as personal computers or remote servers, for example.
Next, an image processing method according to an embodiment of the present invention will be described with reference to fig. 2. FIG. 2 shows a schematic flow diagram of an image processing method 200 according to one embodiment of the invention. As shown in fig. 2, the image processing method 200 includes the following steps S210, S220, S230, S240, and S250.
In step S210, a sequence of images is acquired in real time.
The sequence of images may comprise at least one image. Each image in the image sequence may be a still image or a video frame in a video segment, in which case the image sequence is the video segment. Each image in the image sequence may be an original image acquired by the image acquisition device, or may be an image obtained after preprocessing (such as digitizing, normalizing, smoothing, etc.) the original image.
The image sequence may come from an external device, which transmits it to the electronic device 100 for image processing, or for image processing and subsequent face recognition. Alternatively, the image sequence may be acquired by the electronic device 100 itself. For example, the electronic device 100 may acquire at least one image with the image acquisition device 110 (e.g., a standalone camera) to obtain the image sequence. The image acquisition device 110 may transmit the acquired image sequence to the processor 102, which performs the image processing, or the image processing and subsequent face recognition.
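As a minimal sketch of how the image sequence might be acquired frame by frame in real time (an illustration only; the use of OpenCV and the helper name are assumptions, not part of this disclosure):

```python
import cv2  # OpenCV is assumed here purely for illustration


def acquire_image_sequence(camera_index=0):
    """Yield camera frames one by one in real time (hypothetical helper)."""
    capture = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break  # stream ended or camera was closed
            yield frame  # each frame is one image of the image sequence
    finally:
        capture.release()
```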
Step S220, performing face detection on each image in the image sequence in real time, acquiring a face image corresponding to each face when the face is detected, and analyzing the face quality of each face image.
The images in the image sequence can be acquired in sequence and in real time, when each current image is acquired, the current image is subjected to face detection, a face image corresponding to each face in the current image is acquired under the condition that the face is detected, and the face quality of each face image is analyzed.
Any existing or future face detection method may be used for face detection. Those skilled in the art can understand the implementation of various face detection methods, which are not described herein.
By face detection, a face detection result for each face contained in each image can be obtained. Illustratively, the face detection result may include position information indicating where the corresponding face is located. For example, a bounding box (abbreviated as bbox) may be used to represent the position of the face. Illustratively, the bounding box may be a rectangular box. For example, the position information of the bounding box, i.e., the position information of the corresponding face, may be represented by four numerical values: the horizontal coordinate x of the top-left corner of the bounding box corresponding to the face, the vertical coordinate y of the top-left corner, the width w of the bounding box, and the height h of the bounding box. As another example, the position information of any face may be represented by the coordinates of the four vertices of the bounding box corresponding to that face.
The manner in which the bounding box represents the location of a face is by way of example only and not by way of limitation. For example, the face position may be represented by a preset number of key points on the face and/or a face contour line, and accordingly, the position information of any face may include coordinates of the preset number of key points and/or coordinates of contour points on the face contour line. The preset number of key points may include, for example, a center point of a human face.
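The position encodings described above (top-left corner plus width and height, the four vertices, or a face center key point) could be represented as in the following sketch; the class and method names are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class BoundingBox:
    # first encoding: top-left corner (x, y) plus width w and height h
    x: float
    y: float
    w: float
    h: float

    def corners(self):
        """Return the four vertices, i.e. the alternative encoding above."""
        return [(self.x, self.y),
                (self.x + self.w, self.y),
                (self.x + self.w, self.y + self.h),
                (self.x, self.y + self.h)]

    def center(self):
        """Center point of the box, usable as a face key point."""
        return (self.x + self.w / 2.0, self.y + self.h / 2.0)
```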
After the face detection is performed on the current image, face images respectively including faces in the image can be obtained from the current image, each face image can mainly include a corresponding face, and other faces and other types of interference (such as buildings and the like) are eliminated as much as possible. In one example, each face image may be an image block extracted from a corresponding image in the image sequence and including a face corresponding to the face image, or an image obtained by performing processing such as scaling on the extracted image block. For example, assuming that one image contains two faces, two image blocks respectively containing two faces may be extracted from the image. The extracted image blocks can be used for subsequent operations such as face recognition. When extracting an image block including any face, the same size image block may be extracted for each face, or different sizes of image blocks may be extracted as needed and then normalized by scaling or the like. In another example, each face image may be a corresponding image in the image sequence, or an image obtained by processing the corresponding image, such as scaling, noise reduction, and the like. The latter example is particularly applicable in case each image in the sequence of images contains at most only one face, in which case it may not be necessary to extract a separate image block for each face.
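A minimal sketch of extracting and normalizing a face image block from a bounding box follows; the crop-and-scale approach, the output size, and the helper name are assumptions for illustration.

```python
import cv2  # OpenCV, assumed for cropping and scaling


def extract_face_image(image, bbox, output_size=(112, 112)):
    """Crop the image block covered by bbox = (x, y, w, h) and scale it
    to a common size (the size itself is an assumption)."""
    x, y, w, h = [int(v) for v in bbox]
    img_h, img_w = image.shape[:2]
    # clamp the box to the image borders before cropping
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(img_w, x + w), min(img_h, y + h)
    face_block = image[y0:y1, x0:x1]
    return cv2.resize(face_block, output_size)
```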
For a face image, there may be various indexes to measure the quality of the face. For example, the quality of the face can be measured according to one or more of the indexes of face pose (which can be expressed by face angle, that is, the angle of the face deflected to some direction), image blurring degree, face shielding state, illumination condition and the like in the face image.
Specifically, for example, if the yaw (side-face) angle or the pitch angle of the face exceeds an angle threshold, the face quality may be considered unqualified, i.e., insufficient to meet the accuracy requirement of face recognition; conversely, if neither the yaw angle nor the pitch angle exceeds the angle threshold, the face quality may be considered qualified. As another example, if the blur degree of the face image exceeds a blur threshold, the face quality may likewise be considered unqualified; conversely, if the blur degree does not exceed the blur threshold, the face quality may be considered qualified. As another example, if certain key parts of the face (e.g., the eyes and/or mouth) are occluded, the face quality is considered unqualified; otherwise, if the key parts are not occluded, the face quality is qualified. As another example, if the illumination brightness of the face image is lower than a brightness threshold, the face quality is considered unqualified; otherwise, if it is not lower than the brightness threshold, the face quality is qualified. As yet another example, multiple indexes may be considered together: for instance, the face quality may be considered unqualified when the blur degree of the face image exceeds the blur threshold or the image brightness is lower than the brightness threshold, and qualified when the blur degree does not exceed the blur threshold and the image brightness is not lower than the brightness threshold. Those skilled in the art should understand that the above combinations of indexes are exemplary; the invention is not limited thereto, and the indexes may be combined in various ways according to actual needs.
Illustratively, for each face image, one or more indexes related to the face quality, such as the face pose, the image blur degree, the face shielding state, the illumination condition and the like, can be integrated to calculate a quality score. In this case, the face quality analysis result of each face image may include a quality score of the face image. The quality score can indicate the quality of the face. The quality score can be compared with a preset quality threshold value to judge whether the quality of the face is qualified. The quality threshold may be any suitable threshold, which may be set as desired, and the invention is not limited thereto.
The face quality may be analyzed using any existing or future face quality analysis method that may occur. By way of example and not limitation, a convolutional neural network may be utilized to analyze the face quality of a face image. For example, different convolutional neural networks may be trained respectively for the above different indexes (e.g., the face pose, the image blur degree, and the face shielding state), each convolutional neural network outputs a score of a corresponding index, and finally, the scores may be combined to obtain a total quality score. For another example, a convolutional neural network may be trained synthetically for the above-mentioned various indicators, and the convolutional neural network may directly output the total quality score.
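One possible way to combine per-index scores into a total quality score is a simple weighted average, as sketched below; the index names, weights, 0-100 score range, and the 90-point threshold are assumptions for illustration, not values fixed by this disclosure.

```python
def total_quality_score(index_scores, weights=None):
    """Combine per-index scores (e.g. pose, blur, occlusion, illumination)
    into one overall quality score via a weighted average (sketch)."""
    if weights is None:
        weights = {name: 1.0 for name in index_scores}  # equal weighting
    total_weight = sum(weights.get(name, 1.0) for name in index_scores)
    weighted = sum(score * weights.get(name, 1.0)
                   for name, score in index_scores.items())
    return weighted / total_weight


# usage: per-index scores on an assumed 0-100 scale
scores = {"pose": 85.0, "blur": 92.0, "occlusion": 100.0, "illumination": 78.0}
QUALITY_THRESHOLD = 90.0  # assumed quality threshold, e.g. 90 points
qualified = total_quality_score(scores) >= QUALITY_THRESHOLD
```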
In step S230, the earliest face image whose face quality earliest meets the preset requirement in the face images belonging to the same target object is sent to the face recognition module for face recognition.
The target object may be anyone contained in the sequence of images. In the process of performing face detection in real time, faces belonging to the same object can be found out according to face detection results of different images before the current time, and the process can be called as object tracking.
For example, the position information of each face detected from each image of the image sequence may be input into the tracker module in real time, and the tracker module may obtain at least one tracking trajectory associated with each object through an object tracking algorithm, so as to know which faces belong to the same object. For example, by tracing, a large number of bounding boxes with trace identifiers (track IDs) may be obtained, and bounding boxes of the same object may be assigned the same track ID. That is, two bounding boxes located in different images have the same track ID, which means that they belong to the same object. The face image of the corresponding face can be extracted based on the bounding box, so that the face or the face image can be known to belong to the same object. Each track ID may represent a trace track, e.g., object A may have a track ID of 1, object B may have a track ID of 2, and so on. Thus, one tracking trajectory may be obtained for each object.
Assume that the same object A is detected in 10 images of the image sequence before the current time, so that 10 face images of object A are acquired accordingly. For example, whenever a face image of object A is obtained, its face quality score may be compared with the quality threshold to determine whether it meets the preset requirement. For example, if the quality threshold is 90 points and the face quality scores of the first 5 face images are all below the threshold, but the face quality score of the 6th face image is 91 points, it can be determined that the 6th face image is the earliest face image whose face quality meets the preset requirement. At this point, the 6th face image of object A may be sent to the face recognition module for face recognition.
Face recognition may include comparing a face in the face image with a reference face to identify an identity of an object to which the face image belongs. The comparison between the face image and the reference image can be one-to-one comparison or one-to-many comparison. The implementation of face recognition can be understood by those skilled in the art, and is not described in detail herein.
For any currently detected object, whether the face quality of each face image belonging to that object meets the preset requirement can be judged in real time. Once the face image that earliest meets the preset requirement is found, it can be sent to the face recognition module for face recognition; after that, even if further face images of the object that meet the preset requirement are obtained, they are no longer sent to the face recognition module, unless such an image is the optimal face image buffered in the optimal frame queue.
It is understood that the above-described operation of sending the earliest face image of the target object to the face recognition module for face recognition is performed only when such an earliest face image exists. In the process of acquiring the image sequence in real time, the number of images in the sequence grows over time; the face quality of the target object may fail to meet the preset requirement in the first several images but meet it once some later image is acquired, at which point step S230 can be executed.
In step S240, the optimal face image with the best face quality and the face quality exceeding the earliest face image among the face images belonging to the target object is buffered in the optimal frame queue.
Assuming that only the earliest face image is transmitted for face recognition, if the quality analysis has errors or the face cannot be correctly recognized even if the face quality meets the preset requirement, the face of the object may not be recognized any more. Therefore, when the face is identified through the earliest face image, the optimal face image with the best quality can be cached, and the face can be identified under the condition that the face cannot be identified through the earliest face image in an auxiliary mode through the optimal face image. The calculation amount required for the operations of caching the optimal face image and performing the auxiliary face recognition through the optimal face image is much smaller than that required for performing the face recognition on each face image, so that the calculation amount required for a scheme of performing the face recognition by combining the earliest face image and the optimal face image is much smaller than that required for a scheme of performing the face recognition on each face image, and the probability of face recognition is higher than that of a scheme of considering only a single face image for each object. Note that in this document, "performing face recognition in combination with the earliest face image and the optimal face image" refers to assisting face recognition with the optimal face image when necessary (for example, when the earliest face image recognition fails), and does not mean that face recognition must be performed with the earliest face image and the optimal face image at the same time.
Illustratively, the optimal frame queue may be stored in memory. The optimal frame queue can occupy a certain memory space, and in the optimal frame queue, memory space can be respectively allocated for each object for caching the optimal face image of the object. For the same object, the optimal frame queue can only buffer a single face image with the best face quality and the face quality meeting the preset requirement so far.
It is understood that the above operation of buffering the optimal face image in the optimal frame queue is performed after step S230. The earliest face image is the face image with the best face quality until the moment of acquiring the earliest face image, and then as the number of images in the image sequence is increased, the face image with the face quality exceeding the earliest face image is possibly detected, and at this moment, the face image can be cached in the optimal frame queue. If a face image with better face quality than the currently cached optimal face image appears later, the face image can be used as a new optimal face image to be cached.
In step S250, at a predetermined time, the image buffer condition of the optimal frame queue is checked, and if the optimal face image belonging to the specific object is buffered in the optimal frame queue and the face of the specific object is not recognized yet, the optimal face image belonging to the specific object is sent to the face recognition module for face recognition.
As described above, although the earliest face image belonging to a certain object has already been sent to the face recognition module for face recognition, there is a possibility that the recognition is not successful. For such objects, auxiliary recognition can be performed through the optimal human face images of the objects.
The predetermined time may be any suitable time. For example, the predetermined timing may be a timing after the target object fails in face recognition through its earliest face image. For example, the predetermined time may be a time that is reached every predetermined period from a time of counting (for example, a time of starting a timer described below).
For example, it may be detected at intervals of a predetermined time period whether a face image is cached in the optimal frame queue, if so, it may be determined whether a face of an object to which the face image belongs has been recognized, if so, the face image may be stored in a nonvolatile memory (e.g., a disk) for long-term storage, and if not, the face image may be used for face recognition.
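A sketch of this periodic check follows; the dictionary-based queue, the callback names, and the use of a threading timer are assumptions about one possible realization, not the required implementation.

```python
import threading


def start_queue_check(optimal_frame_queue, recognized_ids,
                      recognize, persist, period_seconds=2.0):
    """Periodically inspect the optimal frame queue (hypothetical helper).

    optimal_frame_queue: dict mapping track_id -> best cached face image
    recognized_ids:      set of track_ids whose faces were already recognized
    recognize / persist: callbacks for face recognition and long-term storage
    """
    def tick():
        for track_id, face_image in list(optimal_frame_queue.items()):
            if track_id in recognized_ids:
                persist(track_id, face_image)            # e.g. save to disk
                optimal_frame_queue.pop(track_id, None)  # tidy the queue
            else:
                recognize(track_id, face_image)          # assisted recognition
        # re-arm the timer so the check repeats every period_seconds
        threading.Timer(period_seconds, tick).start()

    threading.Timer(period_seconds, tick).start()
```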
The target object involved in steps S230 and S240 may be any object. In one example, steps S230 and S240 may be performed separately for each object detected from the image sequence, i.e., steps S230 and S240 may be performed separately as target objects for each object detected from the image sequence. In another example, steps S230 and S240 may be performed on each object of only a portion of the objects detected from the image sequence, respectively. For example, steps S230 and S240 may be performed by a user designating or a system automatically selecting one or more objects as target objects, respectively.
The specific object involved in step S250 may be any object. In one example, the specific object is the target object involved in step S230 and step S240, for example, in the case that the face of the target object cannot be successfully recognized through step S230, the optimal face image thereof may be read from the optimal frame queue for face recognition. In this case, when checking the image buffer condition of the optimal frame queue, a pertinence check is performed, that is, only whether the face image of the target object is buffered is checked. In another example, the specific object may be any object whose face image has been buffered in the optimal frame queue at a predetermined time. In this case, when checking the image buffer condition of the optimal frame queue, a non-targeted check is performed, that is, whether a face image of any object is buffered is checked, and if the face images of one or more objects are buffered, the one or more objects can be respectively regarded as specific objects. It is to be understood that, in the latter example, if at a predetermined time, steps S230 and S240 have been performed, i.e., the optimal face image has been cached for the target object, the non-targeted inspection may also inspect the face image of the target object, and the target object may be regarded as a specific object.
It should be noted that the steps of the image processing method 200 shown in fig. 2 are not limited to be executed in a sequential manner, for example, steps S210 and S220 are both executed in real time, and step S210 may sequentially acquire one image in the image sequence in real time, and each image is acquired, and the steps of face detection, face image extraction, face quality analysis and the like involved in step S220 may be performed on the acquired image. Further, steps S230, S240, and S250 each have a respective execution time or execution condition, which can be understood in conjunction with the description herein.
According to the image processing method provided by the embodiments of the invention, the earliest face image whose face quality is qualified (meets the preset requirement) can be selected for face recognition, instead of performing face recognition on every face image, which effectively reduces meaningless repeated recognition, i.e., reduces useless computation, while helping the face to be recognized as early as possible. Meanwhile, the method also caches the optimal face image with the best face quality after the earliest face image; when a person cannot be successfully recognized through the earliest face image, the cached optimal face image can be used for face recognition, thereby increasing the probability of the face being recognized. Therefore, the face recognition strategy combining the earliest face image and the optimal face image can effectively reduce useless computation, recognize the face as early as possible, and increase the face recognition probability as much as possible.
Illustratively, the image processing method according to the embodiment of the present invention may be implemented in a device, apparatus, or system having a memory and a processor.
The image processing method provided by the embodiment of the invention can be deployed at an image acquisition end, for example, in the field of security application, the image processing method can be deployed at the image acquisition end of an access control system; in the field of financial applications, it may be deployed at personal terminals such as smart phones, tablets, personal computers, and the like.
Alternatively, the image processing method according to the embodiment of the present invention may also be distributively deployed at a server side (or a cloud side) and a personal terminal side. For example, an image may be acquired at a client, and the client transmits the acquired image to a server (or a cloud), and the server (or the cloud) performs image processing.
According to the embodiment of the present invention, sending the earliest face image whose face quality earliest meets the preset requirement among the face images belonging to the same target object to the face recognition module for face recognition (step S230) may include: judging whether the face quality of each face image in the face images belonging to the target object meets a preset requirement or not in real time until the earliest face image appears; and sending the earliest face image to a face recognition module for face recognition.
After the earliest face image of the target object appears, it is no longer necessary to judge whether subsequent face images meet the preset requirement, so the amount of computation can be reduced.
According to the embodiment of the present invention, sending the earliest face image whose face quality earliest meets the preset requirement among the face images belonging to the same target object to the face recognition module for face recognition (step S230) may include: judging whether the face quality of each face image in the face images belonging to the target object meets a preset requirement or not in real time; under the condition that the face quality of the current face image in the face images belonging to the target object meets a preset requirement, judging whether any previous face image exists in the face images belonging to the target object and is sent to a face recognition module, and if not, determining that the current face image is the earliest face image; and sending the earliest face image to a face recognition module for face recognition.
In this embodiment, whether the face quality of each face image of the target object meets the preset requirement is judged; when a face image meeting the preset requirement is found, it is further judged whether an earlier face image whose face quality meets the preset requirement has already been sent to the face recognition module. If so, the current face image is not sent; otherwise, the current face image is sent to the face recognition module for face recognition.
For example, each detected object may be assigned identification information indicating whether a face image of that object whose face quality was judged in step S230 to meet the preset requirement has been sent to the face recognition module. For example, the identification information of the target object may be a flag: when the earliest face image is found and sent to the face recognition module, the flag may be set from 0 to 1; when a new face image of the target object is subsequently acquired, reading the flag reveals that a previous face image has already been sent, so the new face image need not be sent.
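The flag mechanism described above might be realized as in the following sketch; the state container, helper name, and threshold value are assumptions for illustration.

```python
def maybe_send_earliest(track_id, face_image, quality_score, sent_flags,
                        send_to_recognition, quality_threshold=90.0):
    """Send only the earliest qualifying face image of each object (sketch).

    sent_flags: dict mapping track_id -> 0/1 flag; 1 means an earliest face
                image of this object has already been sent for recognition.
    """
    if quality_score < quality_threshold:
        return  # face quality does not meet the preset requirement
    if sent_flags.get(track_id, 0) == 1:
        return  # a previous face image of this object was already sent
    sent_flags[track_id] = 1          # set the flag from 0 to 1
    send_to_recognition(face_image)   # earliest qualifying image is sent
```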
According to an embodiment of the present invention, caching, in the optimal frame queue, the optimal face image that has the best face quality among the face images belonging to the target object and whose face quality exceeds that of the earliest face image (step S240) may include: caching, in the optimal frame queue, the first face image among the face images belonging to the target object whose face quality exceeds that of the earliest face image, as the optimal face image; each time a current face image belonging to the target object is obtained, comparing the face quality of the current face image with the face quality of the optimal face image; and, when the face quality of the current face image exceeds that of the optimal face image, updating the optimal face image cached in the optimal frame queue to the current face image.
For example, for a certain object A, its earliest face image I_a1 can first be found through step S230. Subsequently, after the next face image I_a2 of object A is acquired, the face quality of I_a2 may be compared with the face quality of I_a1; if the face quality of I_a2 is better, I_a2 can be buffered in the optimal frame queue. Afterwards, each time a subsequent face image I_ai of object A is acquired, the face quality of I_ai is compared with that of the currently buffered face image; if the face quality of I_ai is better, it is buffered as the new optimal face image. It can be understood that the optimal face image is a face image whose face quality meets the preset requirement and is higher than that of the earliest face image. In this way, it can be guaranteed that the face image with the best face quality of object A is always buffered in the optimal frame queue. When it is found at a predetermined time that object A has not yet been recognized, face recognition may be performed using the buffered face image of object A.
According to the scheme, the optimal frame queue is updated in real time, so that the optimal frame queue can always keep caching the face image with the best face quality of the target object, the face quality of the face image exceeds the face quality of the earliest face image of the target object, and the probability of face recognition is improved.
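The caching and updating of the optimal face image described around step S240 could look like the following sketch; modeling the optimal frame queue as a per-object dictionary is an assumption about the data structure, not a requirement of this disclosure.

```python
def update_optimal_frame_queue(optimal_frame_queue, track_id,
                               face_image, quality_score, earliest_quality):
    """Keep, per object, only the single face image whose quality is best
    so far and exceeds that of the object's earliest face image (sketch)."""
    if quality_score <= earliest_quality:
        return  # only images better than the earliest face image are cached
    cached = optimal_frame_queue.get(track_id)
    if cached is None or quality_score > cached["quality"]:
        # first candidate, or better than the cached optimal face image:
        # it becomes the new optimal face image of this object
        optimal_frame_queue[track_id] = {"image": face_image,
                                         "quality": quality_score}
```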
According to the embodiment of the present invention, the target object and the specific object are the same object, and checking the image buffering condition of the optimal frame queue at the predetermined time (step S250) may include: in the case where face recognition based on the earliest face image fails, it is checked whether a face image belonging to the target object is cached in the optimal frame queue, wherein the predetermined timing is any timing after face recognition based on the earliest face image fails.
As described above, checking the image caching condition of the optimal frame queue may be a pertinence check. That is, when the face of the target object cannot be successfully recognized through step S230, its optimal face image may be read from the optimal frame queue for face recognition. This scheme selects the objects that still need face recognition on the basis of the face recognition results, avoiding cache checks and other operations for objects that have already been successfully recognized, which can reduce the amount of computation to some extent.
According to the embodiment of the present invention, checking the image buffering condition of the optimal frame queue at the predetermined time (step S250) may include: and checking whether the face image belonging to any object is cached in the optimal frame queue or not at intervals of a preset time period, wherein the specific object is any object of which the face image is cached in the optimal frame queue.
It is to be understood that, in this embodiment, the predetermined time may be a time that is reached every predetermined period, counted from a starting moment. The predetermined period may be any suitable period, and the invention is not limited in this respect. Illustratively, the predetermined period may be 2 seconds, 5 seconds, 10 seconds, or the like. For example, a timer with a fixed period, i.e., the predetermined period, may be set in the face recognition system, and the timer may be implemented in hardware, software, or a combination thereof. Assuming the period of the timer is 2 seconds, the predetermined times may be the 2nd, 4th, 6th seconds, and so on, counted from the start of the timer.
As described above, checking the image caching condition of the optimal frame queue may be a non-targeted check. Such a non-targeted check may optionally be performed periodically, for example, every 2 seconds. If the optimal face images of two objects C and D are currently cached in the optimal frame queue, it can be judged, for object C and object D respectively, whether the face of that object has been recognized; if either object has not been recognized, its optimal face image can be used for face recognition. This scheme selects objects needing auxiliary face recognition on the basis of the cached face images, and auxiliary face recognition need not be performed for objects without a cached face image, which improves the working efficiency of the system. In addition, periodically checking the image caching condition of the optimal frame queue allows objects whose face images are cached to be handled promptly, so that auxiliary face recognition (when needed) can be performed on them as soon as possible, thereby improving face recognition efficiency.
The above embodiments are merely examples, and the non-targeted check may be performed at one or more set times, or may be performed based on a variable period instead of a fixed period.
According to the embodiment of the present invention, after checking the image buffering condition of the optimal frame queue at a predetermined time (step S250), the method 200 may further include: if the optimal face image belonging to the specific object is cached in the optimal frame queue and the face of the specific object is recognized, the optimal face image belonging to the specific object is removed from the optimal frame queue and/or the optimal face image belonging to the specific object is stored in a memory.
Removing the optimal face image that has already taken part in recognition in a timely manner saves storage space, reduces the pressure of subsequent cache checks, and improves the working efficiency of the face recognition system. Storing the optimal face image in a memory, such as the non-volatile memory described above, facilitates retrieving the face image later for other operations, such as displaying it on a display so that a user can view the people who participated in face recognition that day.
According to the embodiment of the present invention, when performing face detection on each image in the image sequence in real time (step S220), the method 200 may further include: for two faces respectively located in two different images of an image sequence, whether the two faces belong to the same target object is judged based on the difference between position information in face detection results of the two faces.
As described above, object tracking may be performed based on the position information of faces to determine which faces belong to the same object. The position of the same object's face does not change much between two consecutive images, so whether two faces in the preceding and following images belong to the same object can be judged from the difference between their positions. This way of judging whether faces belong to the same object can be applied to any object. For example, for any two faces located in two different images of the image sequence, whether the two faces belong to the same object may be determined based on the difference between the position information in the face detection results of the two faces.
Determining whether the faces in the two images belong to the same object may be accomplished using any existing or future-emerging object tracking approach and is not limited to the examples described herein.
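One common way to compare the position information of two detections is their bounding-box overlap (intersection-over-union), as sketched below; this particular criterion and the 0.5 threshold are illustrative choices, since the disclosure only requires comparing position differences.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = max(0.0, min(ax + aw, bx + bw) - ix)
    ih = max(0.0, min(ay + ah, by + bh) - iy)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0


def same_object(box_prev, box_curr, iou_threshold=0.5):
    """Treat two faces as belonging to the same object when their boxes
    overlap enough; the threshold value is an assumption for illustration."""
    return iou(box_prev, box_curr) >= iou_threshold
```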
Fig. 3 shows a schematic diagram of an image processing flow according to an example of the present invention. As shown in fig. 3, first, the camera is turned on to start capturing images, and a timer with a period of 2 seconds may be started. Then, every time an image is captured, face detection is performed; if faces are detected, the image block where each face is located is extracted to obtain a face image, and the face quality of the face image is analyzed. For a given object, it can be judged whether the face quality of each of its face images is qualified (i.e., meets the preset requirement); if qualified, it is then judged whether the object has already been recognized, i.e., whether a face image of it has already been sent to the face recognition module. If not, the current face image can be sent to the face recognition module for face recognition. In addition, face image selection is also performed: for a given object, the first face image whose face quality exceeds that of the earliest face image is cached in the optimal frame queue in memory; afterwards, whenever the face quality of the current face image is better than that of the cached optimal face image, the current face image is cached in the optimal frame queue instead. Each time the 2-second timer period elapses, it is checked whether a face image (i.e., data) is cached in the optimal frame queue. If so, and the face of the object to which the cached face image belongs has not yet been recognized, the optimal face image of that object is sent to the face recognition module for face recognition; if the face of that object has already been recognized, the optimal face image is stored to disk.
According to another aspect of the present invention, there is provided an image processing apparatus. Fig. 4 shows a schematic block diagram of an image processing apparatus 400 according to an embodiment of the present invention.
As shown in fig. 4, the image processing apparatus 400 according to the embodiment of the present invention includes an acquisition module 410, a detection and analysis module 420, a transmission module 430, a caching module 440, and a checking module 450. The various modules may perform the various steps/functions of the image processing method described above in connection with fig. 2-3, respectively. Only the main functions of the respective components of the image processing apparatus 400 will be described below, and details that have been described above will be omitted.
The acquisition module 410 is used to acquire a sequence of images in real time. The obtaining module 410 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The detection and analysis module 420 is configured to perform face detection on each image in the image sequence in real time, obtain a face image corresponding to each face when the face is detected, and analyze the face quality of each face image. The detection and analysis module 420 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The sending module 430 is configured to send, among the face images belonging to the same target object, the earliest face image whose face quality meets a preset requirement to the face recognition module for face recognition. The sending module 430 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The caching module 440 is configured to cache, in the optimal frame queue, the optimal face image among the face images belonging to the target object, i.e., the face image with the best face quality whose face quality exceeds that of the earliest face image. The caching module 440 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
The checking module 450 is configured to check an image buffer condition of the optimal frame queue at a predetermined time, and if an optimal face image belonging to a specific object is buffered in the optimal frame queue and a face of the specific object is not recognized yet, send the optimal face image belonging to the specific object to the face recognition module for face recognition. The checking module 450 may be implemented by the processor 102 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104.
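As an illustration of how the caching module 440 and the checking module 450 might cooperate, the following sketch isolates the optimal-frame-queue behaviour. The class and method names (OptimalFrameQueue, note_earliest, offer, drain) are assumptions introduced here, not identifiers from the patent.

```python
class OptimalFrameQueue:
    """Per-object cache of the single best face image seen after the earliest qualified one."""

    def __init__(self):
        self._earliest_quality = {}   # object_id -> quality of the earliest qualified image
        self._best = {}               # object_id -> (quality, face_image)

    def note_earliest(self, object_id, quality):
        # Record the quality of the earliest face image that met the preset requirement.
        self._earliest_quality[object_id] = quality

    def offer(self, object_id, quality, face_image):
        # Cache only frames whose quality exceeds the earliest qualified image,
        # and keep just the best of those per object (module 440 behaviour).
        baseline = self._earliest_quality.get(object_id)
        if baseline is None or quality <= baseline:
            return
        cached = self._best.get(object_id)
        if cached is None or quality > cached[0]:
            self._best[object_id] = (quality, face_image)

    def drain(self):
        # Called at each predetermined time (module 450 behaviour):
        # returns all cached entries and clears the queue.
        items, self._best = self._best, {}
        return items
```

A checking module built on this sketch would call drain() on each timer tick, forward entries for not-yet-recognized objects to the face recognition module, and store entries for already recognized objects.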
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Fig. 5 shows a schematic block diagram of an image processing system 500 according to an embodiment of the invention. Image processing system 500 includes an image acquisition device 510, a storage device (i.e., memory) 520, and a processor 530.
The image capture device 510 is used for capturing images. The image capture device 510 is optional, and the image processing system 500 may omit it; in that case, images may be captured by another image capture device and the captured images transmitted to the image processing system 500.
The storage 520 stores computer program instructions for implementing the corresponding steps in the image processing method according to an embodiment of the present invention.
The processor 530 is configured to execute the computer program instructions stored in the storage device 520 to perform the corresponding steps of the image processing method according to the embodiment of the present invention.
In one embodiment, the computer program instructions, when executed by the processor 530, are used to perform the following steps: acquiring an image sequence in real time; performing face detection on each image in the image sequence in real time, acquiring a face image corresponding to each face in the case where a face is detected, and analyzing the face quality of each face image; sending, among the face images belonging to the same target object, the earliest face image whose face quality meets a preset requirement to a face recognition module for face recognition; caching, in an optimal frame queue, the optimal face image among the face images belonging to the target object, i.e., the face image with the best face quality whose face quality exceeds that of the earliest face image; and, at a predetermined time, checking the image caching status of the optimal frame queue and, if an optimal face image belonging to a specific object is cached in the optimal frame queue and the face of the specific object has not been recognized, sending the optimal face image belonging to the specific object to the face recognition module for face recognition.
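The embodiment leaves the concrete face-quality metric and the "preset requirement" open. As one hedged example only, a quality score could combine image sharpness and face size as in the sketch below; the specific metrics, constants, and threshold are illustrative assumptions and are not part of the patent.

```python
import cv2
import numpy as np

def face_quality(face_image: np.ndarray) -> float:
    """Illustrative quality score in [0, 1] combining sharpness and face size."""
    gray = cv2.cvtColor(face_image, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()      # variance of Laplacian as a blur measure
    sharpness_score = min(1.0, sharpness / 500.0)          # 500.0 is an arbitrary normalizer
    size_score = min(1.0, min(gray.shape[:2]) / 112.0)     # penalize faces smaller than 112 px
    return sharpness_score * size_score

def meets_preset_requirement(face_image: np.ndarray, threshold: float = 0.5) -> bool:
    # The "preset requirement" of the embodiment, modeled here as a simple threshold.
    return face_quality(face_image) >= threshold
```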
Further, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored, which when executed by a computer or a processor, are used to perform the respective steps of the image processing method according to an embodiment of the present invention, and to implement the respective modules in the image processing apparatus according to an embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In one embodiment, the program instructions, when executed by a computer or a processor, may cause the computer or the processor to implement the respective functional modules of the image processing apparatus according to the embodiment of the present invention and/or may perform the image processing method according to the embodiment of the present invention.
In one embodiment, the program instructions, when executed, are used to perform the following steps: acquiring an image sequence in real time; performing face detection on each image in the image sequence in real time, acquiring a face image corresponding to each face in the case where a face is detected, and analyzing the face quality of each face image; sending, among the face images belonging to the same target object, the earliest face image whose face quality meets a preset requirement to a face recognition module for face recognition; caching, in an optimal frame queue, the optimal face image among the face images belonging to the target object, i.e., the face image with the best face quality whose face quality exceeds that of the earliest face image; and, at a predetermined time, checking the image caching status of the optimal frame queue and, if an optimal face image belonging to a specific object is cached in the optimal frame queue and the face of the specific object has not been recognized, sending the optimal face image belonging to the specific object to the face recognition module for face recognition.
The modules in the image processing system according to the embodiment of the present invention may be implemented by a processor of an electronic device implementing image processing according to the embodiment of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer-readable storage medium of a computer program product according to the embodiment of the present invention are run by a computer.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only one logical functional division, and other divisions may be adopted in practice. For example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that, in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, should not be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some of the blocks in an image processing apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and the like does not indicate any ordering; these words may be interpreted as names.
The above description is merely of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method comprising:
acquiring an image sequence in real time;
performing face detection on each image in the image sequence in real time, acquiring a face image corresponding to each face in the case where a face is detected, and analyzing the face quality of each face image;
sending, among face images belonging to the same target object, the earliest face image whose face quality meets a preset requirement to a face recognition module for face recognition;
caching, in an optimal frame queue, the optimal face image among the face images belonging to the target object, the optimal face image having the best face quality and a face quality exceeding that of the earliest face image;
checking an image caching status of the optimal frame queue at a predetermined time, and if an optimal face image belonging to a specific object is cached in the optimal frame queue and the face of the specific object has not been recognized, sending the optimal face image belonging to the specific object to the face recognition module for face recognition;
wherein checking the image caching status of the optimal frame queue at the predetermined time comprises:
checking, at intervals of a predetermined time period, whether a face image belonging to any object is cached in the optimal frame queue, wherein the specific object is any object whose face image is cached in the optimal frame queue.
2. The method of claim 1, wherein sending, among the face images belonging to the same target object, the earliest face image whose face quality meets the preset requirement to the face recognition module for face recognition comprises:
judging in real time whether the face quality of each of the face images belonging to the target object meets the preset requirement, until the earliest face image appears; and
sending the earliest face image to the face recognition module for face recognition.
3. The method of claim 1, wherein sending, among the face images belonging to the same target object, the earliest face image whose face quality meets the preset requirement to the face recognition module for face recognition comprises:
judging in real time whether the face quality of each of the face images belonging to the target object meets the preset requirement;
in the case where the face quality of a current face image among the face images belonging to the target object meets the preset requirement, judging whether any previous face image among the face images belonging to the target object has been sent to the face recognition module, and if not, determining that the current face image is the earliest face image; and
sending the earliest face image to the face recognition module for face recognition.
4. The method of claim 1, wherein caching, in the optimal frame queue, the optimal face image among the face images belonging to the target object, the optimal face image having the best face quality and a face quality exceeding that of the earliest face image, comprises:
caching, in the optimal frame queue as the optimal face image, the first face image among the face images belonging to the target object whose face quality exceeds the face quality of the earliest face image;
comparing the face quality of a current face image belonging to the target object with the face quality of the optimal face image each time such a current face image is obtained; and
in the case where the face quality of the current face image exceeds the face quality of the optimal face image, updating the optimal face image cached in the optimal frame queue to the current face image.
5. The method of claim 1, wherein the target object and the specific object are the same object, and checking the image caching status of the optimal frame queue at the predetermined time comprises:
in the case where face recognition based on the earliest face image fails, checking whether a face image belonging to the target object is cached in the optimal frame queue, wherein the predetermined time is any time after face recognition based on the earliest face image fails.
6. The method of claim 1, wherein, after the checking of the image caching status of the optimal frame queue at the predetermined time, the method further comprises:
if the optimal face image belonging to the specific object is cached in the optimal frame queue and the face of the specific object has been recognized, removing the optimal face image belonging to the specific object from the optimal frame queue and/or storing the optimal face image belonging to the specific object in a memory.
7. The method of claim 1, wherein, when performing face detection on each image in the image sequence in real time, the method further comprises:
for two faces respectively located in two different images of the image sequence, judging whether the two faces belong to the same target object based on the difference between the position information in the face detection results of the two faces.
8. An image processing apparatus comprising:
the acquisition module is used for acquiring an image sequence in real time;
the detection and analysis module is used for performing face detection on each image in the image sequence in real time, acquiring a face image corresponding to each face in the case where a face is detected, and analyzing the face quality of each face image;
the sending module is used for sending, among the face images belonging to the same target object, the earliest face image whose face quality meets a preset requirement to the face recognition module for face recognition;
the caching module is used for caching, in an optimal frame queue, the optimal face image among the face images belonging to the target object, the optimal face image having the best face quality and a face quality exceeding that of the earliest face image;
the checking module is used for checking an image caching status of the optimal frame queue at a predetermined time, and if an optimal face image belonging to a specific object is cached in the optimal frame queue and the face of the specific object has not been recognized, sending the optimal face image belonging to the specific object to the face recognition module for face recognition;
wherein the checking module comprises:
a checking submodule, used for checking, at intervals of a predetermined time period, whether a face image belonging to any object is cached in the optimal frame queue, wherein the specific object is any object whose face image is cached in the optimal frame queue.
9. An image processing system comprising a processor and a memory, wherein the memory has stored therein computer program instructions for execution by the processor for performing the image processing method of any of claims 1 to 7.
10. A storage medium on which program instructions are stored, which program instructions are operable when executed to perform an image processing method as claimed in any one of claims 1 to 7.
CN201910477796.2A 2019-06-03 2019-06-03 Image processing method, device and system and storage medium Active CN110263680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910477796.2A CN110263680B (en) 2019-06-03 2019-06-03 Image processing method, device and system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910477796.2A CN110263680B (en) 2019-06-03 2019-06-03 Image processing method, device and system and storage medium

Publications (2)

Publication Number Publication Date
CN110263680A CN110263680A (en) 2019-09-20
CN110263680B true CN110263680B (en) 2022-01-28

Family

ID=67916525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910477796.2A Active CN110263680B (en) 2019-06-03 2019-06-03 Image processing method, device and system and storage medium

Country Status (1)

Country Link
CN (1) CN110263680B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783512B (en) * 2019-11-11 2024-05-14 西安宇视信息科技有限公司 Image processing method, device, equipment and storage medium
CN110852303A (en) * 2019-11-21 2020-02-28 中科智云科技有限公司 Eating behavior identification method based on OpenPose
CN111160221B (en) * 2019-12-26 2023-09-01 深圳云天励飞技术有限公司 Face acquisition method and related device
CN111539283B (en) * 2020-04-15 2023-08-11 上海摩象网络科技有限公司 Face tracking method and face tracking equipment
CN111860163B (en) * 2020-06-17 2023-08-22 北京百度网讯科技有限公司 Image processing method, device, electronic equipment and readable storage medium
CN114510346A (en) * 2021-12-28 2022-05-17 浙江大华技术股份有限公司 Face recognition control method and device, computer equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007274264A (en) * 2006-03-31 2007-10-18 Casio Comput Co Ltd Camera, photographing method of best-shot image and program
CN105938552A (en) * 2016-06-29 2016-09-14 北京旷视科技有限公司 Face recognition method capable of realizing base image automatic update and face recognition device
CN107273862A (en) * 2017-06-20 2017-10-20 深圳市乐易时代科技有限公司 A kind of automatic grasp shoot method, monitoring device and computer-readable recording medium
CN107346426A (en) * 2017-07-10 2017-11-14 深圳市海清视讯科技有限公司 A kind of face information collection method based on video camera recognition of face
CN108875452A (en) * 2017-05-11 2018-11-23 北京旷视科技有限公司 Face identification method, device, system and computer-readable medium
CN108875512A (en) * 2017-12-05 2018-11-23 北京旷视科技有限公司 Face identification method, device, system, storage medium and electronic equipment
CN109035246A (en) * 2018-08-22 2018-12-18 浙江大华技术股份有限公司 A kind of image-selecting method and device of face
CN109034013A (en) * 2018-07-10 2018-12-18 腾讯科技(深圳)有限公司 A kind of facial image recognition method, device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101458763B (en) * 2008-10-30 2011-04-20 中国人民解放军国防科学技术大学 Automatic human face identification method based on image weighting average
KR102090624B1 (en) * 2013-02-26 2020-03-18 삼성전자 주식회사 Apparatus and method for processing a image in device
CN104978550B (en) * 2014-04-08 2018-09-18 上海骏聿数码科技有限公司 Face identification method based on extensive face database and system
CN108491784B (en) * 2018-03-16 2021-06-22 南京邮电大学 Single person close-up real-time identification and automatic screenshot method for large live broadcast scene

Also Published As

Publication number Publication date
CN110263680A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN110263680B (en) Image processing method, device and system and storage medium
CN108921159B (en) Method and device for detecting wearing condition of safety helmet
CN108875732B (en) Model training and instance segmentation method, device and system and storage medium
CN109117714B (en) Method, device and system for identifying fellow persons and computer storage medium
CN108875537B (en) Object detection method, device and system and storage medium
CN108090458B (en) Human body falling detection method and device
CN106203305B (en) Face living body detection method and device
CN109508694B (en) Face recognition method and recognition device
US11055516B2 (en) Behavior prediction method, behavior prediction system, and non-transitory recording medium
CN106650662B (en) Target object shielding detection method and device
CN110047095B (en) Tracking method and device based on target detection and terminal equipment
CN108875535B (en) Image detection method, device and system and storage medium
CN108009466B (en) Pedestrian detection method and device
CN108875731B (en) Target identification method, device, system and storage medium
CN106845352B (en) Pedestrian detection method and device
CN108875481B (en) Method, device, system and storage medium for pedestrian detection
CN108875478B (en) People-authentication-integrated verification method, device and system and storage medium
CN111626163B (en) Human face living body detection method and device and computer equipment
CN110991231B (en) Living body detection method and device, server and face recognition equipment
CN109598298B (en) Image object recognition method and system
CN111160202A (en) AR equipment-based identity verification method, AR equipment-based identity verification device, AR equipment-based identity verification equipment and storage medium
CN109547748A (en) Object foothold determines method, apparatus and storage medium
CN110263830B (en) Image processing method, device and system and storage medium
CN112052702A (en) Method and device for identifying two-dimensional code
CN109711287B (en) Face acquisition method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant