CN111460939A - Deblurring face recognition method and system and inspection robot - Google Patents

Deblurring face recognition method and system and inspection robot

Info

Publication number
CN111460939A
CN111460939A (application CN202010202422.2A)
Authority
CN
China
Prior art keywords
image
face
network
face recognition
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010202422.2A
Other languages
Chinese (zh)
Inventor
刘业鹏
程骏
顾景
曾钰胜
庞建新
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202010202422.2A priority Critical patent/CN111460939A/en
Publication of CN111460939A publication Critical patent/CN111460939A/en
Priority to PCT/CN2020/140410 priority patent/WO2021184894A1/en
Pending legal-status Critical Current

Classifications

    • G06V 40/161 Human faces: Detection; Localisation; Normalisation
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/73
    • G06V 10/20 Image preprocessing
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30201 Face

Abstract

The application is suitable for the technical field of intelligent robots, and provides a deblurred face recognition method, a deblurred face recognition system and an inspection robot, wherein the method comprises the following steps: acquiring a video stream from a camera of the inspection robot, and performing video decoding on the video stream to obtain decoded video images; performing face detection on the video image by adopting a face detection algorithm to obtain a region image containing the face ROI (region of interest) in the video image; determining the blur degree of the region image; if the region image is not a blurred image, performing face recognition on the video image to obtain a face recognition result; if the region image is a blurred image, extracting multiple consecutive frames before and after the video image; inputting the multiple frames into a deblurring network to obtain a deblurred image, wherein the deblurring network is a deep learning network pre-trained with a plurality of blurred image samples and corresponding sharp image samples as training samples; and performing face recognition on the deblurred image to obtain a face recognition result.

Description

Deblurring face recognition method and system and inspection robot
Technical Field
The application belongs to the technical field of intelligent robots, and particularly relates to a deblurred face recognition method and system and an inspection robot.
Background
As information technology advances, urban security systems keep improving. City inspection systems rely on inspection robots: an inspection robot monitors pedestrians in a designated security area and analyzes their faces through a face recognition system, and if a suspicious criminal or an emergency is detected, the information can be transmitted over the network to the public security bureau system to raise an alarm. An inspection robot can patrol continuously around the clock and, compared with a fixed surveillance camera, covers a larger range with higher efficiency. However, when the inspection robot travels on uneven ground, the robot body shakes, the camera on the robot produces motion blur, and the blurred face images greatly degrade the accuracy of face recognition.
For the problem of blurred face images caused by shaking of the inspection robot, the main current solution is to compensate for the shake with motors inside the robot's gimbal, so that the image captured by the camera on the gimbal stays sharp. However, a gimbal with an image stabilizing function is expensive, which makes it impractical to retrofit or upgrade existing inspection robots.
Therefore, finding a widely applicable, low-cost deblurring face recognition method suitable for inspection robots has become an urgent problem for those skilled in the art.
Disclosure of Invention
The embodiments of the application provide a deblurring face recognition method and system and an inspection robot, which can solve the problem of inaccurate face recognition caused by blurred face images produced when the inspection robot shakes.
In a first aspect, an embodiment of the present application provides a deblurred face recognition method, including:
acquiring a video stream from a camera of the inspection robot, and performing video decoding on the video stream to obtain decoded video images;
performing face detection on the video image by adopting a face detection algorithm to obtain a region image containing the face ROI (region of interest) in the video image;
determining the blur degree of the region image;
if the blur determination result is that the region image is not a blurred image, performing face recognition on the video image to obtain a face recognition result;
if the blur determination result is that the region image is a blurred image, extracting multiple consecutive frames before and after the video image;
inputting the multiple frames into a deblurring network to obtain a deblurred image output by the deblurring network, wherein the deblurring network is a deep learning network pre-trained with a plurality of blurred image samples and corresponding sharp image samples as training samples;
and performing face recognition on the deblurred image to obtain a face recognition result.
The method can ensure that the images used for face recognition are sharp without installing a gimbal with an image stabilizing function, which reduces the cost of the inspection robot, widens the application range, and makes it practical to retrofit existing inspection robots to solve the problem of face image blur caused by shaking of the inspection robot.
Preferably, the deblurring network is pre-trained by the following steps:
collecting a plurality of sharp image samples, and blurring each sharp image sample to obtain a plurality of consecutive blurred image samples corresponding to it, thereby obtaining multiple groups of training samples, wherein each group of training samples consists of one sharp image sample and the corresponding plurality of blurred image samples;
for each group of training samples, inputting the plurality of blurred image samples in the group into the deblurring network to obtain a target image output by the deblurring network;
taking the calculation result of a preset loss function as the adjustment target, and minimizing the result of the loss function during iterative learning by adjusting the network parameters of the deblurring network, wherein the loss function is used for calculating the error between the sharp image sample and the target image in each group of training samples;
and if the calculation result of the loss function meets a preset training termination condition, determining that the training of the deblurring network is finished.
This training process, fed with multiple groups of training samples, effectively ensures that the deblurring network is fully trained and, once trained, can deblur blurred pictures.
Preferably, the deblurring network comprises an encoding network and a decoding network;
for each group of training samples, inputting the plurality of blurred image samples in the group into the deblurring network and obtaining the target image output by the deblurring network comprises:
in the encoding network, compressing the plurality of blurred image samples in each group of training samples into an image of a first specified size, and performing two groups of residual convolutions on the first-specified-size image to obtain an image of a second specified size;
and in the decoding network, performing two groups of residual deconvolutions on the second-specified-size image and decompressing it to obtain a target image with the same size as the blurred image samples.
The self-encoding network formed by the encoding network and the decoding network realizes the basic function of the deblurring network with a deep learning structure.
Preferably, the performing face recognition on the deblurred image to obtain a face recognition result includes:
inputting the deblurred image into a face feature extraction network to perform face feature extraction to obtain a first target face feature, wherein the face feature extraction network is a deep learning network trained by a plurality of face samples in advance;
and comparing the first target face features with face features in a preset face feature library to determine identity information corresponding to each face in the deblurred image.
Extracting face features from the deblurred image through the face feature extraction network improves accuracy and makes the subsequent face identity recognition more reliable.
Preferably, the performing face recognition on the video image to obtain a face recognition result includes:
inputting the video image into a face feature extraction network to perform face feature extraction to obtain a second target face feature, wherein the face feature extraction network is a deep learning network trained by a plurality of face samples in advance;
and comparing the second target face features with face features in a preset face feature library to determine identity information corresponding to each face in the video image.
Extracting face features from the video image through the face feature extraction network improves accuracy and makes the subsequent face identity recognition more reliable.
Preferably, the determining the blur degree of the region image includes:
calculating the blur degree of the region image by adopting the Laplacian algorithm;
judging whether the calculated blur degree is greater than a preset blur threshold;
if the calculated blur degree is greater than the preset blur threshold, determining that the region image is not a blurred image;
and if the calculated blur degree is less than or equal to the preset blur threshold, determining that the region image is a blurred image.
Calculating the blur degree with the Laplacian algorithm and judging whether it exceeds the threshold thus distinguishes whether the region image is a blurred image.
Preferably, the extracting of the consecutive multi-frame images before and after the video image specifically includes: extracting six consecutive frames before and after the video image. This takes processing efficiency into account while fusing information across different images, so the deep learning network can learn more detailed features and deblur the image.
In a second aspect, an embodiment of the present application provides a deblurred face recognition system, including:
the video decoding module is used for acquiring a video stream from the camera of the inspection robot and performing video decoding on the video stream to obtain a decoded video image;
the face detection module is used for performing face detection on the video image by adopting a face detection algorithm to obtain a region image containing the face ROI (region of interest) in the video image;
the blur degree determination module is used for determining the blur degree of the region image;
the first recognition module is used for performing face recognition on the video image to obtain a face recognition result if the determination result of the blur degree determination module is that the region image is not a blurred image;
the multi-frame image extraction module is used for extracting multiple consecutive frames before and after the video image if the determination result of the blur degree determination module is that the region image is a blurred image;
the image deblurring module is used for inputting the multiple frames into a deblurring network to obtain a deblurred image output by the deblurring network, wherein the deblurring network is a deep learning network pre-trained with a plurality of blurred image samples and corresponding sharp image samples as training samples;
and the second recognition module is used for carrying out face recognition on the deblurred image to obtain a face recognition result.
In a third aspect, an embodiment of the present application provides an inspection robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the above-mentioned deblurred face recognition method when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the above-mentioned deblurred face recognition method.
It is understood that the beneficial effects of the second to fourth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of the deblurred face recognition method in an embodiment of the present application;
FIG. 2 is a diagram of the context module in an embodiment of the present application;
FIG. 3 is a schematic flow chart of step 103 of the deblurred face recognition method in an application scenario according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of pre-training the deblurring network of the deblurred face recognition method in an application scenario according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of step 302 of the deblurred face recognition method in an application scenario according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a deblurring network in an application scenario according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a deblurred face recognition system in an embodiment of the present application;
fig. 8 is a schematic diagram of an inspection robot in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In an embodiment, as shown in fig. 1, a deblurring face recognition method is provided, which is described by taking an application of the method to an inspection robot as an example, and includes the following steps:
101. acquiring a video stream from a camera of the inspection robot, and performing video decoding on the video stream to obtain a decoded video image;
In this embodiment, the inspection robot records the surrounding image information through the camera on its gimbal while working, forming a video stream inside the robot. The system of the inspection robot can acquire this video stream and decode it to obtain decoded video images. It can be understood that the video images are obtained frame by frame, and the order between them can be determined from time.
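For illustration only, a minimal sketch of step 101 is given below; it assumes OpenCV for decoding and an RTSP stream address, neither of which is prescribed by this embodiment:

```python
import cv2

STREAM_URL = "rtsp://robot-camera/stream"  # hypothetical address, for illustration only

def decoded_frames(url: str = STREAM_URL):
    """Yield decoded video images frame by frame, tagged with a timestamp,
    so the order between video images can be determined from time."""
    capture = cv2.VideoCapture(url)
    try:
        while True:
            ok, frame = capture.read()  # decode the next frame of the video stream
            if not ok:
                break
            timestamp_ms = capture.get(cv2.CAP_PROP_POS_MSEC)
            yield timestamp_ms, frame
    finally:
        capture.release()
```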
102. Performing face detection on the video image by adopting a face detection algorithm to obtain a region image containing the face ROI (region of interest) in the video image;
When the inspection robot performs face recognition and face analysis, it generally does not need to process every frame of image; the images to be detected can be determined according to a received instruction or a preset rule. For example, when the system of the inspection robot receives a certain frame of video image on which face recognition is to be performed, it may apply a face detection algorithm to the video image to obtain the region image containing the face ROI region.
It should be noted that the face detection algorithm adopted in this embodiment may be a lightweight neural network built from separable convolutions and a context module, which is both fast and accurate. As shown in fig. 2, the context module of the network can be divided into three branches that convolve with kernels of different sizes; because the extracted features then lie at different distances from the feature center, the final convolution weight parameters differ between the branches, which effectively improves detection accuracy. The face detection algorithm extracts the face region from the whole video frame to obtain the ROI (region of interest), i.e. the region image, for subsequent analysis.
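As a sketch of the three-branch structure just described (channel counts and kernel sizes are illustrative assumptions, not the embodiment's exact parameters), the context module could look like this in PyTorch:

```python
import torch
import torch.nn as nn

class ContextModule(nn.Module):
    """Three branches convolve the same feature map with kernels of
    different sizes, so each branch aggregates context over a different
    receptive field; the branch outputs are then concatenated."""

    def __init__(self, channels: int):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the three receptive fields along the channel axis.
        return torch.cat([self.branch3(x), self.branch5(x), self.branch7(x)], dim=1)
```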
103. Determining the blur degree of the region image; if the determination result is that the region image is not a blurred image, executing step 104; if the determination result is that the region image is a blurred image, executing step 105;
It can be understood that the region image obtained by the system generally falls into one of two cases. In the first case, the video image from which the region image was obtained was captured while the inspection robot moved on flat ground, so the face in the region image should be sharp. In the second case, the video image was captured while the robot moved on rugged, bumpy ground and the camera shook during capture, so the face in the region image is likely to be blurred. In the first case, the system can directly perform face recognition on the image and obtain an accurate recognition result; in the second case, as described in the background, directly performing face recognition on the image often yields an inaccurate result and lowers face recognition accuracy.
Therefore, in the present application, the system may first determine the blur degree of the region image to decide whether the video image is sharp or blurred.
Further, as shown in fig. 3, step 103 may include:
201. calculating the blur degree of the region image by adopting the Laplacian algorithm;
202. judging whether the calculated blur degree is greater than a preset blur threshold; if so, executing step 203, otherwise executing step 204;
203. determining that the region image is not a blurred image;
204. determining that the region image is a blurred image.
For the above steps 201 to 204, the application may use the Laplacian algorithm to calculate the blur degree of the region image. The Laplacian operator is:
$$\nabla^{2} f(x, y) = \frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}}$$
By measuring the second derivative of the region image, the Laplacian algorithm emphasizes regions of rapidly changing intensity, i.e. boundaries, and is therefore commonly used for boundary detection. A normal image has sharper boundaries, hence a larger variance; a blurred image contains less boundary information, hence a smaller variance.
Therefore, if the calculated blur degree is greater than the preset blur threshold, it can be determined that the region image is not a blurred image; conversely, if the calculated blur degree is less than or equal to the preset blur threshold, the region image is determined to be a blurred image.
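A minimal sketch of steps 201 to 204 using OpenCV's Laplacian operator follows; the threshold value is an illustrative assumption, since the embodiment only states that the threshold is preset:

```python
import cv2

BLUR_THRESHOLD = 100.0  # illustrative preset blur threshold, not from the embodiment

def is_blurred(region_image) -> bool:
    """Variance-of-Laplacian blur test for the face ROI region image.

    Sharp images have strong boundaries and hence a large second-derivative
    variance; blurred images contain less boundary information, so the
    variance is small.
    """
    gray = cv2.cvtColor(region_image, cv2.COLOR_BGR2GRAY)
    blur_degree = cv2.Laplacian(gray, cv2.CV_64F).var()
    return blur_degree <= BLUR_THRESHOLD  # at or below the threshold: blurred
```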
104. Performing face recognition on the video image to obtain a face recognition result;
As can be seen from the above, when the blur determination result indicates that the region image is not a blurred image, the face in the video image is sharp, so the system can directly perform face recognition on the video image to obtain a face recognition result.
105. Extracting multiple consecutive frames before and after the video image;
Conversely, when the blur determination result indicates that the region image is a blurred image, the face in the video image is blurred and is not suitable for direct face recognition. The system therefore needs to deblur the video image, and first extracts multiple consecutive frames before and after the video image. As can be seen from step 101, all video images carry corresponding time information, so the consecutive frames before and after a given video image are easily acquired.
In a specific application scenario, in order to balance processing efficiency and deblurring success rate, step 105 may specifically be: extracting six consecutive frames before and after the video image. That is, the 6 consecutive frames surrounding the video image are selected as the multi-frame input for the deblurring processing of the subsequent step. It can be understood that choosing six frames not only takes processing efficiency into account but also fuses information across different images, so the deep learning network can learn more detailed features and deblur the image.
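One possible sketch of this extraction keeps a small rolling buffer of decoded frames; the split of three frames before and two after the target is an assumption made only so that the six frames surround the blurred one:

```python
from collections import deque

class FrameWindow:
    """Rolling buffer that recovers the six consecutive frames surrounding
    a target frame (assumed split: three before, the target, two after)."""

    def __init__(self, before: int = 3, after: int = 2):
        self.buffer = deque(maxlen=before + 1 + after)  # six frames in total

    def push(self, frame) -> None:
        self.buffer.append(frame)

    def window(self):
        # Valid only once frames have accumulated on both sides of the target.
        if len(self.buffer) == self.buffer.maxlen:
            return list(self.buffer)  # six consecutive, time-ordered frames
        return None
```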
106. Inputting the multi-frame images into a deblurring network to obtain deblurring images output by the deblurring network, wherein the deblurring network is a deep learning network obtained by taking a plurality of blurred image samples and corresponding sharp image samples as training samples for pre-training;
In order to deblur images, a deblurring network is trained in advance. The deblurring network is a deep learning network pre-trained with a plurality of blurred image samples and corresponding sharp image samples as training samples; after training on a large number of samples, the deblurring network can restore the blurred face information in the multiple frames into sharp face information.
For ease of understanding, the training process of the deblurring network is first described in detail. Further, as shown in fig. 4, the deblurring network may be obtained by pre-training through the following steps:
301. collecting a plurality of sharp image samples, and blurring each sharp image sample to obtain a plurality of consecutive blurred image samples corresponding to it, thereby obtaining multiple groups of training samples, wherein each group of training samples consists of one sharp image sample and the corresponding plurality of blurred image samples;
302. for each group of training samples, inputting the plurality of blurred image samples in the group into the deblurring network to obtain a target image output by the deblurring network;
303. taking the calculation result of a preset loss function as the adjustment target, and minimizing the result of the loss function during iterative learning by adjusting the network parameters of the deblurring network, wherein the loss function is used for calculating the error between the sharp image sample and the target image in each group of training samples;
304. if the calculation result of the loss function meets a preset training termination condition, determining that the training of the deblurring network is finished.
In step 301 of this embodiment, a plurality of sharp image samples may be collected, and each sharp image sample may then be blurred to obtain a plurality of consecutive blurred image samples corresponding to it. For example, a sharp image a may be blurred to obtain corresponding blurred images a1, a2, a3, a4, a5 and a6, giving one group of training samples consisting of images a, a1, a2, a3, a4, a5 and a6. Applying this processing to every sharp image sample yields multiple groups of training samples.
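The embodiment only states that each sharp sample is blur-processed; as one assumed way to do that, the sketch below synthesizes consecutive blurred samples with linear motion-blur kernels of varying length and angle, mimicking camera shake:

```python
import cv2
import numpy as np

def motion_blur_kernel(length: int, angle_deg: float) -> np.ndarray:
    """Linear motion-blur kernel of the given length, rotated to angle_deg."""
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0
    center = (length / 2 - 0.5, length / 2 - 0.5)
    rotation = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rotation, (length, length))
    return kernel / kernel.sum()

def make_training_group(sharp_image: np.ndarray, n: int = 6):
    """One group of training samples: a sharp image sample plus n
    consecutive blurred image samples derived from it (e.g. a, a1..a6)."""
    blurred = [cv2.filter2D(sharp_image, -1, motion_blur_kernel(9 + 2 * i, 15.0 * i))
               for i in range(n)]
    return sharp_image, blurred
```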
For step 302, in the training process, the system may input, for each set of training samples, a plurality of blurred image samples in the set of training samples into the deblurring network to obtain a target image output by the deblurring network. Specifically, the deblurring network in this embodiment may include an encoding network and a decoding network, and for this purpose, as shown in fig. 5, step 302 may include:
401. in the encoding network, compressing the plurality of blurred image samples in each group of training samples into an image of a first specified size, and performing two groups of residual convolutions on the first-specified-size image to obtain an image of a second specified size;
402. in the decoding network, performing two groups of residual deconvolutions on the second-specified-size image and decompressing it to obtain a target image with the same size as the blurred image samples.
For steps 401 and 402, the deblurring network may be, for example, a deep-learning self-encoding network with a symmetric structure divided into an encoding part and a decoding part. As shown in fig. 6, in an application scenario, assuming a group of training samples contains 6 frames of blurred image samples, the 6 frames are successively input to the encoding network, which compresses them into an image of the first specified size, assumed to be 256 × 256; after two groups of residual convolutions, the image size becomes 64 × 128 (i.e. the second specified size). This may be called encoding. Decoding is the reverse of encoding: two groups of residual deconvolutions restore the second-specified-size image to 256 × 256, which is then decompressed, and the resulting image is the target image with the same size as the blurred image samples.
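The sketch below mirrors this symmetric encode/decode structure in PyTorch. The six blurred frames are assumed to be stacked channel-wise (6 × 3 = 18 channels), and the layer widths are illustrative assumptions; in this sketch the 256 × 256 input is encoded to a 64 × 64 feature map with 128 channels, one possible reading of the second specified size:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Plain residual block used inside each residual-convolution group."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class DeblurNet(nn.Module):
    """Symmetric encoder/decoder; widths are assumptions for illustration."""
    def __init__(self):
        super().__init__()
        # Encoding: 256x256x18 input, two strided residual groups -> 64x64x128
        self.head = nn.Conv2d(18, 32, 3, padding=1)
        self.down1 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), ResBlock(64))
        self.down2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), ResBlock(128))
        # Decoding mirrors encoding: two residual deconvolution groups -> 256x256
        self.up1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), ResBlock(64))
        self.up2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), ResBlock(32))
        self.tail = nn.Conv2d(32, 3, 3, padding=1)  # one target image, same size as input

    def forward(self, six_frames: torch.Tensor) -> torch.Tensor:  # (N, 18, 256, 256)
        x = self.head(six_frames)
        x = self.down2(self.down1(x))  # (N, 128, 64, 64): encoded representation
        x = self.up2(self.up1(x))      # (N, 32, 256, 256): decoded representation
        return self.tail(x)            # (N, 3, 256, 256): deblurred target image
```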
With respect to step 303, in order to evaluate whether the deblurring network has been trained sufficiently, the system may take the calculation result of the preset loss function as the adjustment target and, during iterative learning, adjust the network parameters of the deblurring network so as to minimize that result. The loss function is used to calculate the error between the sharp image sample and the target image in each group of training samples. It should be noted that the loss function may be one of several functions such as mean-square-error loss, mean-deviation loss or squared loss; in a specific application, one of them may be selected as needed, which is not detailed here.
For step 304, after training and learning of a large number of training samples and after a plurality of iterations, if the calculation result of the loss function meets a preset training termination condition, it may be determined that the deblurring network is trained well. The preset training termination condition may be set according to actual training needs, for example, if the calculation result of the loss function is within a certain range and the iteration number exceeds N times, it may be determined that the deblurring network training is completed, and the like.
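Putting steps 301 to 304 together, a minimal training-loop sketch might look as follows; mean-square-error loss and the termination numbers are illustrative choices among those the embodiment allows:

```python
import torch
import torch.nn as nn

def train(net, loader, epochs: int = 100, loss_target: float = 1e-3):
    """Train the deblurring network on groups of (blurred stack, sharp sample)."""
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
    criterion = nn.MSELoss()  # error between sharp sample and target image
    for epoch in range(epochs):
        last_loss = float("inf")
        for blurred_stack, sharp in loader:  # one group of training samples
            target = net(blurred_stack)      # target image output by the network
            loss = criterion(target, sharp)
            optimizer.zero_grad()
            loss.backward()                  # adjust network parameters to
            optimizer.step()                 # minimize the loss result
            last_loss = loss.item()
        if last_loss < loss_target:          # preset training-termination condition
            return net
    return net
```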
107. Performing face recognition on the deblurred image to obtain a face recognition result.
The deblurred image output by the deblurring network can be considered sharp, so the recognition result obtained by performing face recognition on it is accurate. The system can therefore perform face recognition on the deblurred image to obtain a face recognition result.
In this embodiment, in order to improve the accuracy and efficiency of face recognition, step 107 may further specifically include: inputting the deblurred image into a face feature extraction network to perform face feature extraction and obtain a first target face feature, wherein the face feature extraction network is a deep learning network trained in advance with a plurality of face samples; and comparing the first target face feature with the face features in a preset face feature library to determine the identity information corresponding to each face in the deblurred image. It should be noted that the face feature extraction network adopted in this embodiment may specifically be a deep learning network based on resnet50, and a triplet loss function may be adopted during network training; this loss function makes the Euclidean distance between the feature vectors of the same person's face as small as possible while keeping the Euclidean distance between the feature vectors of different persons' faces relatively large. The training process of the face feature extraction network is not detailed in this embodiment.
Similarly, the step 104 may also specifically include: inputting the video image into a face feature extraction network to perform face feature extraction to obtain a second target face feature, wherein the face feature extraction network is a deep learning network trained by a plurality of face samples in advance; and comparing the second target face features with face features in a preset face feature library to determine identity information corresponding to each face in the video image. It can be understood that the same face feature extraction network may be used in step 104 and step 107, or two different face feature extraction networks may be trained to perform face recognition on the video image and the deblurred image respectively, so as to obtain a better face recognition result.
It can be understood that, in steps 104 and 107, comparing the first or second target face feature with the face features in the preset face feature library makes it possible to find, in the library, a stored face feature that matches the target feature; since each stored face feature records a corresponding person identity, that identity can be taken as the identity information of the person recognized in the image. Of course, if the comparison finds no face feature in the preset library that matches the first or second target face feature, the identity of the face in the image is considered unconfirmed, and the person may be labeled "unknown" or "recognition failed" in the system.
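As a sketch of this comparison (the library layout, the distance threshold, and the function names are illustrative assumptions), the identity lookup against the preset face feature library could be:

```python
import numpy as np

MATCH_THRESHOLD = 1.1  # hypothetical Euclidean-distance cut-off for "same person"

def identify(target_feature: np.ndarray, feature_library: dict) -> str:
    """Compare a target face feature against a preset face feature library
    mapping person identity -> enrolled feature vector."""
    best_name, best_dist = "unknown", float("inf")
    for name, enrolled in feature_library.items():
        dist = float(np.linalg.norm(target_feature - enrolled))  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # No sufficiently close match: the person's identity is not confirmed.
    return best_name if best_dist <= MATCH_THRESHOLD else "unknown"
```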
In the embodiment of the application, a video stream from the camera of the inspection robot is first acquired and decoded to obtain decoded video images; face detection is then performed on a video image with a face detection algorithm to obtain the region image containing the face ROI (region of interest) in the video image; the blur degree of the region image is then determined; if the determination result is that the region image is not a blurred image, face recognition is performed on the video image to obtain a face recognition result; if the determination result is that the region image is a blurred image, multiple consecutive frames before and after the video image are extracted; the multiple frames are then input into the deblurring network to obtain the deblurred image it outputs, the deblurring network being a deep learning network pre-trained with a plurality of blurred image samples and corresponding sharp image samples as training samples; finally, face recognition is performed on the deblurred image to obtain a face recognition result. Before face recognition, the method therefore first determines the blur degree of the face in the video image: if the face is not blurred, face recognition can be performed directly on the video image; conversely, if the face is blurred, the consecutive frames before and after the video image are input into the deblurring network to obtain a sharp image containing the face (i.e. the deblurred image), and face recognition is then performed on the deblurred image. This keeps the images used for face recognition sharp without installing a gimbal with an image stabilizing function, which reduces the cost of the inspection robot, widens the application range, and makes it practical to retrofit existing inspection robots against the face image blur caused by robot shake.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In an embodiment, a deblurred face recognition system is provided, corresponding one-to-one to the deblurred face recognition method in the above embodiment. As shown in fig. 7, the deblurred face recognition system includes a video decoding module 501, a face detection module 502, a blur degree determination module 503, a first recognition module 504, a multi-frame image extraction module 505, an image deblurring module 506, and a second recognition module 507. The functional modules are explained in detail as follows:
the video decoding module 501 is configured to obtain a video stream from a camera of the inspection robot, and perform video decoding on the video stream to obtain a decoded video image;
a face detection module 502, configured to perform face detection on the video image by using a face detection algorithm, so as to obtain a region image including a face ROI region in the video image;
a blur degree determination module 503, configured to perform blur degree determination on the region image;
a first recognition module 504, configured to perform face recognition on the video image to obtain a face recognition result if the determination result of the blur degree determination module is that the region image is not a blurred image;
a multi-frame image extraction module 505, configured to extract multiple consecutive frames before and after the video image if the determination result of the blur degree determination module is that the region image is a blurred image;
an image deblurring module 506, configured to input the multiple frames of images into a deblurring network to obtain a deblurred image output by the deblurring network, where the deblurring network is a deep learning network obtained by pre-training a plurality of blurred image samples and corresponding sharp image samples as training samples;
and a second recognition module 507, configured to perform face recognition on the deblurred image to obtain a face recognition result.
Further, the deblurring network can be obtained by pre-training the following modules:
the sample collection module is used for collecting a plurality of sharp image samples and blurring each sharp image sample to obtain a plurality of consecutive blurred image samples corresponding to it, thereby obtaining multiple groups of training samples, each group consisting of one sharp image sample and the corresponding plurality of blurred image samples;
the network training module is used for inputting, for each group of training samples, the plurality of blurred image samples in the group into the deblurring network to obtain the target image output by the deblurring network;
the network parameter adjusting module is used for taking the calculation result of a preset loss function as the adjustment target and minimizing the result of the loss function during iterative learning by adjusting the network parameters of the deblurring network, wherein the loss function is used for calculating the error between the sharp image sample and the target image in each group of training samples;
and the training completion module is used for determining that the training of the deblurring network is completed if the calculation result of the loss function meets a preset training termination condition.
Further, the deblurring network may include an encoding network and a decoding network;
the network training module may include:
the encoding unit is used for compressing, in the encoding network, the plurality of blurred image samples in each group of training samples into an image of a first specified size, and performing two groups of residual convolutions on the first-specified-size image to obtain an image of a second specified size;
and the decoding unit is used for performing, in the decoding network, two groups of residual deconvolutions on the second-specified-size image and decompressing it to obtain a target image with the same size as the blurred image samples.
Further, the first identification module may include:
the first feature extraction unit is used for inputting the video image into a face feature extraction network to carry out face feature extraction so as to obtain a second target face feature, and the face feature extraction network is a deep learning network which is trained by a plurality of face samples in advance;
and the first feature comparison unit is used for comparing the second target face features with the face features in a preset face feature library to determine the identity information corresponding to each face in the video image.
Further, the second identification module may include:
the second feature extraction unit is used for inputting the deblurred image into a face feature extraction network to carry out face feature extraction so as to obtain a first target face feature, and the face feature extraction network is a deep learning network which is trained by a plurality of face samples in advance;
and the second feature comparison unit is used for comparing the first target face features with the face features in a preset face feature library to determine the identity information corresponding to each face in the deblurred image.
Further, the blur degree determination module may include:
a blur degree calculation unit, configured to calculate the blur degree of the region image by using the Laplacian algorithm;
a judging unit, configured to judge whether the calculated blur degree is greater than a preset blur threshold;
a first determining unit, configured to determine that the region image is not a blurred image if the judgment result of the judging unit is yes;
and a second determining unit, configured to determine that the region image is a blurred image if the judgment result of the judging unit is no.
Further, the multi-frame image extraction module is specifically configured to extract six consecutive frames before and after the video image.
For specific limitations of the deblurred face recognition system, reference may be made to the above limitations of the deblurred face recognition method, which are not described herein again. The modules in the above-described deblurred face recognition system may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, there is provided an inspection robot, as shown in fig. 8, including a memory, a processor and a computer program stored in the memory and running on the processor, wherein the processor executes the computer program to implement the steps of the deblurred face recognition method in the above embodiments, such as the steps 101 to 107 shown in fig. 1. Alternatively, the processor, when executing the computer program, implements the functions of the modules/units of the deblurred face recognition system in the above-described embodiments, such as the functions of the modules 501 to 507 shown in fig. 7. To avoid repetition, further description is omitted here.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the deblurred face recognition method in the above embodiments, such as the steps 101 to 107 shown in fig. 1. Alternatively, the computer program, when executed by the processor, implements the functions of the modules/units of the deblurred face recognition system in the above-described embodiments, such as the functions of modules 501 to 507 shown in fig. 7. To avoid repetition, further description is omitted here.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), random-access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A deblurred face recognition method is characterized by comprising the following steps:
acquiring a video stream from a camera of the inspection robot, and performing video decoding on the video stream to obtain decoded video images;
performing face detection on the video image by adopting a face detection algorithm to obtain a region image containing the face ROI (region of interest) in the video image;
determining the blur degree of the region image;
if the blur determination result is that the region image is not a blurred image, performing face recognition on the video image to obtain a face recognition result;
if the blur determination result is that the region image is a blurred image, extracting multiple consecutive frames before and after the video image;
inputting the multiple frames into a deblurring network to obtain a deblurred image output by the deblurring network, wherein the deblurring network is a deep learning network pre-trained with a plurality of blurred image samples and corresponding sharp image samples as training samples;
and performing face recognition on the deblurred image to obtain a face recognition result.
2. The deblurred face recognition method of claim 1, wherein the deblurring network is pre-trained by:
collecting a plurality of sharp image samples, and blurring each sharp image sample to obtain a plurality of consecutive blurred image samples corresponding to it, thereby obtaining multiple groups of training samples, wherein each group of training samples consists of one sharp image sample and the corresponding plurality of blurred image samples;
for each group of training samples, inputting the plurality of blurred image samples in the group into the deblurring network to obtain a target image output by the deblurring network;
taking the calculation result of a preset loss function as the adjustment target, and minimizing the result of the loss function during iterative learning by adjusting the network parameters of the deblurring network, wherein the loss function is used for calculating the error between the sharp image sample and the target image in each group of training samples;
and if the calculation result of the loss function meets a preset training termination condition, determining that the training of the deblurring network is finished.
3. The deblurred face recognition method of claim 2, wherein the deblurring network comprises an encoding network and a decoding network;
for each group of training samples, inputting the plurality of blurred image samples in the group into the deblurring network and obtaining the target image output by the deblurring network comprises:
in the encoding network, compressing the plurality of blurred image samples in each group of training samples into an image of a first specified size, and performing two groups of residual convolutions on the first-specified-size image to obtain an image of a second specified size;
and in the decoding network, performing two groups of residual deconvolutions on the second-specified-size image and decompressing it to obtain a target image with the same size as the blurred image samples.
4. The deblurred face recognition method of claim 1, wherein the performing face recognition on the deblurred image to obtain a face recognition result comprises:
inputting the deblurred image into a face feature extraction network to perform face feature extraction to obtain a first target face feature, wherein the face feature extraction network is a deep learning network trained by a plurality of face samples in advance;
and comparing the first target face features with face features in a preset face feature library to determine identity information corresponding to each face in the deblurred image.
5. The deblurred face recognition method of claim 1, wherein performing face recognition on the video image to obtain a face recognition result comprises:
inputting the video image into a face feature extraction network to extract face features, obtaining a second target face feature, wherein the face feature extraction network is a deep learning network pre-trained with a plurality of face samples;
and comparing the second target face feature with face features in a preset face feature library to determine the identity information corresponding to each face in the video image.
6. The deblurred face recognition method of claim 1, wherein determining the blurriness of the region image comprises:
calculating the blurriness of the region image using the Laplacian algorithm;
judging whether the calculated blurriness is greater than a preset blurriness threshold;
if the calculated blurriness is greater than the preset blurriness threshold, determining that the region image is not a blurred image;
and if the calculated blurriness is less than or equal to the preset blurriness threshold, determining that the region image is a blurred image.
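With OpenCV, claim 6 reduces to a few lines: the variance of the Laplacian response measures edge strength, so a low value indicates blur. The threshold of 100 below is a typical starting point, not a value taken from the patent.

```python
# Claim 6 with OpenCV: Laplacian variance as the blurriness score.
# The threshold value is an assumption.
import cv2
import numpy as np

def is_blurred(region_image: np.ndarray, threshold: float = 100.0) -> bool:
    gray = cv2.cvtColor(region_image, cv2.COLOR_BGR2GRAY)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance => weak edges => blur
    return score <= threshold                       # greater than threshold => not blurred
```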
7. The deblurred face recognition method of any one of claims 1 to 6, wherein extracting the plurality of consecutive frame images preceding and following the video image is specifically: extracting six consecutive frame images preceding and following the video image.
8. A deblurred face recognition system, comprising:
a video decoding module, configured to acquire a video stream from a camera of an inspection robot and decode the video stream to obtain a decoded video image;
a face detection module, configured to perform face detection on the video image using a face detection algorithm to obtain a region image containing a face ROI (region of interest) in the video image;
a blurriness determination module, configured to determine the blurriness of the region image;
a first recognition module, configured to perform face recognition on the video image to obtain a face recognition result if the blurriness determination module determines that the region image is not a blurred image;
a multi-frame image extraction module, configured to extract a plurality of consecutive frame images preceding and following the video image if the blurriness determination module determines that the region image is a blurred image;
an image deblurring module, configured to input the plurality of frame images into a deblurring network to obtain a deblurred image output by the deblurring network, the deblurring network being a deep learning network pre-trained with a plurality of blurred image samples and corresponding sharp image samples as training samples;
and a second recognition module, configured to perform face recognition on the deblurred image to obtain a face recognition result.
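As a companion to the video decoding module of claim 8, the sketch below pulls and decodes frames with OpenCV. The RTSP URL is purely illustrative; the claim only requires acquiring the stream from the inspection robot's camera.

```python
# Video decoding module sketch (claim 8); the stream URL is illustrative.
import cv2

def decoded_frames(stream_url: str = "rtsp://robot-camera/stream"):
    cap = cv2.VideoCapture(stream_url)         # acquire the video stream
    try:
        while True:
            ok, frame = cap.read()             # decode one video image
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```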
9. An inspection robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the deblurred face recognition method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the deblurred face recognition method according to any one of claims 1 to 7.
CN202010202422.2A 2020-03-20 2020-03-20 Deblurring face recognition method and system and inspection robot Pending CN111460939A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010202422.2A CN111460939A (en) 2020-03-20 2020-03-20 Deblurring face recognition method and system and inspection robot
PCT/CN2020/140410 WO2021184894A1 (en) 2020-03-20 2020-12-28 Deblurred face recognition method and system and inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010202422.2A CN111460939A (en) 2020-03-20 2020-03-20 Deblurring face recognition method and system and inspection robot

Publications (1)

Publication Number Publication Date
CN111460939A true CN111460939A (en) 2020-07-28

Family

ID=71685660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010202422.2A Pending CN111460939A (en) 2020-03-20 2020-03-20 Deblurring face recognition method and system and inspection robot

Country Status (2)

Country Link
CN (1) CN111460939A (en)
WO (1) WO2021184894A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066751B (en) * 2021-10-29 2024-02-27 西北工业大学 Vehicle card monitoring video deblurring method based on common camera acquisition condition
CN114240764B (en) * 2021-11-12 2024-04-23 清华大学 De-blurring convolutional neural network training method, device, equipment and storage medium
CN114332733B (en) * 2022-01-04 2024-03-15 桂林电子科技大学 Video monitoring face recognition method based on residual error cyclic neural network
CN114565538B (en) * 2022-03-10 2024-03-01 山东大学齐鲁医院 Endoscopic image processing method, system, storage medium and equipment
CN114783020A (en) * 2022-04-03 2022-07-22 南京邮电大学 Dynamic human face recognition method based on novel counterstudy deblurring theory
CN115170973B (en) * 2022-09-05 2022-12-20 广州艾米生态人工智能农业有限公司 Intelligent paddy field weed identification method, device, equipment and medium
CN116110100B (en) * 2023-01-14 2023-11-14 深圳市大数据研究院 Face recognition method, device, computer equipment and storage medium
CN116229518B (en) * 2023-03-17 2024-01-16 百鸟数据科技(北京)有限责任公司 Bird species observation method and system based on machine learning
CN116309724B (en) * 2023-03-29 2024-03-22 上海锡鼎智能科技有限公司 Face blurring method and face blurring device for laboratory assessment

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503669A (en) * 2016-11-02 2017-03-15 重庆中科云丛科技有限公司 A kind of based on the training of multitask deep learning network, recognition methods and system
CN106682604A (en) * 2016-12-20 2017-05-17 电子科技大学 Method for detecting blurred image based on deep learning
CN109087256A (en) * 2018-07-19 2018-12-25 北京飞搜科技有限公司 A kind of image deblurring method and system based on deep learning
CN109345449A (en) * 2018-07-17 2019-02-15 西安交通大学 A kind of image super-resolution based on converged network and remove non-homogeneous blur method
CN109360171A (en) * 2018-10-26 2019-02-19 北京理工大学 A kind of real-time deblurring method of video image neural network based
CN109461131A (en) * 2018-11-20 2019-03-12 中山大学深圳研究院 A kind of real-time deblurring method of intelligent inside rear-view mirror based on neural network algorithm
CN110008919A (en) * 2019-04-09 2019-07-12 南京工业大学 The quadrotor drone face identification system of view-based access control model
CN110210432A (en) * 2019-06-06 2019-09-06 湖南大学 A kind of face identification method based on intelligent security guard robot under the conditions of untethered
WO2019178893A1 (en) * 2018-03-22 2019-09-26 深圳大学 Motion blur image sharpening method and device, apparatus, and storage medium
CN110378235A (en) * 2019-06-20 2019-10-25 平安科技(深圳)有限公司 A kind of fuzzy facial image recognition method, device and terminal device
WO2019204945A1 (en) * 2018-04-26 2019-10-31 C2Ro Cloud Robotics Inc. System and method for scalable cloud-robotics based face recognition and face analysis
CN110472566A (en) * 2019-08-14 2019-11-19 旭辉卓越健康信息科技有限公司 The high-precision fuzzy face identification method of one kind and system
CN110473147A (en) * 2018-05-09 2019-11-19 腾讯科技(深圳)有限公司 A kind of video deblurring method and device
US20200013011A1 (en) * 2016-04-06 2020-01-09 Smiota, Inc. Package analysis devices and systems
CN110750663A (en) * 2019-10-08 2020-02-04 浙江工业大学 Cross-modal image retrieval method for life records

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875486A (en) * 2017-09-28 2018-11-23 北京旷视科技有限公司 Recongnition of objects method, apparatus, system and computer-readable medium
CN108109121A (en) * 2017-12-18 2018-06-01 深圳市唯特视科技有限公司 A kind of face based on convolutional neural networks obscures quick removing method
CN110569809A (en) * 2019-09-11 2019-12-13 淄博矿业集团有限责任公司 coal mine dynamic face recognition attendance checking method and system based on deep learning
CN111460939A (en) * 2020-03-20 2020-07-28 深圳市优必选科技股份有限公司 Deblurring face recognition method and system and inspection robot

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021184894A1 (en) * 2020-03-20 2021-09-23 深圳市优必选科技股份有限公司 Deblurred face recognition method and system and inspection robot
CN112069885A (en) * 2020-07-30 2020-12-11 深圳市优必选科技股份有限公司 Face attribute identification method and device and mobile terminal
CN112069887A (en) * 2020-07-31 2020-12-11 深圳市优必选科技股份有限公司 Face recognition method, face recognition device, terminal equipment and storage medium
CN112069887B (en) * 2020-07-31 2023-12-29 深圳市优必选科技股份有限公司 Face recognition method, device, terminal equipment and storage medium
US11373443B2 (en) 2020-08-05 2022-06-28 Ubtech Robotics Corp Ltd Method and appratus for face recognition and computer readable storage medium
CN111738230A (en) * 2020-08-05 2020-10-02 深圳市优必选科技股份有限公司 Face recognition method, face recognition device and electronic equipment
CN112381016A (en) * 2020-11-19 2021-02-19 山东海博科技信息系统股份有限公司 Vehicle-mounted face recognition algorithm optimization method and system
CN112966562A (en) * 2021-02-04 2021-06-15 深圳市街角电子商务有限公司 Face living body detection method, system and storage medium
CN113436231A (en) * 2021-06-30 2021-09-24 平安科技(深圳)有限公司 Pedestrian trajectory generation method, device, equipment and storage medium
CN113436231B (en) * 2021-06-30 2023-09-15 平安科技(深圳)有限公司 Pedestrian track generation method, device, equipment and storage medium
CN113743220A (en) * 2021-08-04 2021-12-03 深圳商周智联科技有限公司 Biological characteristic in-vivo detection method and device and computer equipment
CN113821680A (en) * 2021-09-15 2021-12-21 深圳市银翔科技有限公司 Mobile electronic evidence management method and device
CN116543222A (en) * 2023-05-12 2023-08-04 北京长木谷医疗科技股份有限公司 Knee joint lesion detection method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
WO2021184894A1 (en) 2021-09-23

Similar Documents

Publication Publication Date Title
CN111460939A (en) Deblurring face recognition method and system and inspection robot
CN107944427B (en) Dynamic face recognition method and computer readable storage medium
CN110060276B (en) Object tracking method, tracking processing method, corresponding device and electronic equipment
CN109685045B (en) Moving target video tracking method and system
CN111259919B (en) Video classification method, device and equipment and storage medium
TWI539407B (en) Moving object detection method and moving object detection apparatus
CN110188627B (en) Face image filtering method and device
CN112800825B (en) Key point-based association method, system and medium
CN110059634B (en) Large-scene face snapshot method
CN111402237A (en) Video image anomaly detection method and system based on space-time cascade self-encoder
CN111986163A (en) Face image selection method and device
CN114627150A (en) Data processing and motion estimation method and device based on event camera
CN113379858A (en) Image compression method and device based on deep learning
CN112069887A (en) Face recognition method, face recognition device, terminal equipment and storage medium
CN114359333A (en) Moving object extraction method and device, computer equipment and storage medium
CN110795998A (en) People flow detection method and device, electronic equipment and readable storage medium
CN107958231B (en) Light field image filtering method, face analysis method and electronic equipment
CN115358952B (en) Image enhancement method, system, equipment and storage medium based on meta-learning
CN116152758A (en) Intelligent real-time accident detection and vehicle tracking method
CN114549987A (en) Image processing method and image processing device based on multiple tasks
CN114445787A (en) Non-motor vehicle weight recognition method and related equipment
CN112926444A (en) Method and device for detecting parabolic behavior
CN116128734B (en) Image stitching method, device, equipment and medium based on deep learning
CN111753793B (en) Model training method and device, face screening method and electronic equipment
CN116883913B (en) Ship identification method and system based on video stream adjacent frames

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination