WO2020252911A1 - 失踪人脸识别方法、装置、计算机设备和存储介质 - Google Patents

失踪人脸识别方法、装置、计算机设备和存储介质

Info

Publication number
WO2020252911A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
face image
missing
feature
image
Prior art date
Application number
PCT/CN2019/102927
Other languages
English (en)
French (fr)
Inventor
王建华
何四燕
司马云鹤
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020252911A1 publication Critical patent/WO2020252911A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • G06V40/173Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Definitions

  • This application relates to a method, device, computer equipment and storage medium for identifying missing faces.
  • a method, apparatus, computer device, and storage medium for recognizing missing faces are provided.
  • a missing face recognition method including:
  • a missing face recognition device including:
  • the first image acquisition module is configured to receive a face recognition instruction, and acquire the first missing face image according to the face recognition instruction;
  • the second image obtaining module is used to input the first missing face image into the trained face prediction model to obtain the second missing face image corresponding to the target age;
  • the feature extraction module is used to obtain the face image of the immediate blood relative corresponding to the target age, and extract the facial features of that face image;
  • the third image obtaining module is used to correct the second missing face image corresponding to the target age according to the facial features to obtain the third missing face image;
  • the face recognition module is used to perform face recognition on the third missing face image to obtain a face recognition result.
  • a computer device including a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the steps of the missing face recognition method.
  • one or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the missing face recognition method.
  • Fig. 1 is an application scenario diagram of a missing face recognition method according to one or more embodiments.
  • Fig. 2 is a schematic flowchart of a missing face recognition method according to one or more embodiments.
  • Fig. 3 is a schematic diagram of a process of training a face prediction model according to one or more embodiments.
  • Fig. 4 is a schematic flowchart of extracting facial features from the face image of an immediate blood relative according to one or more embodiments.
  • Fig. 5 is a schematic flowchart of obtaining a third missing face image according to one or more embodiments.
  • Fig. 6 is a schematic flowchart of obtaining a face recognition result according to one or more embodiments.
  • Fig. 7 is a schematic flowchart of obtaining a face recognition result in another embodiment.
  • Fig. 8 is a block diagram of a missing face recognition apparatus according to one or more embodiments.
  • Figure 9 is a block diagram of a computer device according to one or more embodiments.
  • the missing face recognition method provided in this application can be applied to the application environment shown in FIG. 1.
  • the terminal 102 communicates with the server 104 through the network.
  • the server 104 receives the face recognition instruction sent by the terminal 102 and obtains the first missing face image according to the instruction; inputs the first missing face image into the trained face prediction model to obtain the second missing face image corresponding to the target age; obtains the face image of the immediate blood relative corresponding to the target age and extracts its facial features; corrects the second missing face image corresponding to the target age according to the facial features to obtain the third missing face image; and performs face recognition on the third missing face image to obtain a face recognition result, which can be returned to the terminal 102 for display.
  • the terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the server 104 may be implemented by an independent server or a server cluster composed of multiple servers.
  • a method for identifying missing faces is provided. Taking the method applied to the server in FIG. 1 as an example for description, the method includes the following steps:
  • S202 Receive a face recognition instruction, and obtain a first missing face image according to the face recognition instruction.
  • the first missing face image is a face image of the missing person from before the disappearance. It can be a photograph taken and saved at a certain age before the disappearance, a face image captured by surveillance equipment before the disappearance, or the face image on the person's resident ID card.
  • the server receives the face recognition instruction sent by the terminal, and according to the instruction, the server obtains the first missing face image uploaded by the terminal.
  • S204 Input the first missing face image into the trained face prediction model to obtain a second missing face image corresponding to the target age.
  • the face prediction model is a neural network model built using a convolutional neural network algorithm based on historical face data, and is used to predict missing face images of a certain age.
  • the second missing face image refers to the predicted missing face image at the target age.
  • the target age refers to the age of the missing person at the time the prediction is made.
  • for example, if the person went missing at the age of 4, the first missing face image can be a face image taken at age 4; if the missing person is 10 years old when the prediction is performed, the target age is 10.
  • the server inputs the first missing face image into the trained face prediction model for calculation, and obtains the output of the face prediction model, which is the second missing face image corresponding to the target age.
  • S206 Obtain a face image of an immediate blood relative corresponding to the target age, and extract the facial features of that face image.
  • immediate blood relatives are relatives who have a direct blood relationship with oneself, i.e. a relationship of giving birth to or being born of, such as parents, children, grandparents (maternal grandparents), grandchildren (maternal grandchildren), and so on.
  • the face image of the immediate blood relative refers to the face image of the immediate blood relative at the target age.
  • the server obtains the face image of the immediate blood relative corresponding to the target age, and extracts the facial features in the face image of the immediate blood relative.
  • the face image of the immediate blood relative may be the face image of the parent.
  • the face image of the immediate blood relative may be the face image of the child.
  • for example, when the missing face image is that of a 4-year-old child and the missing child is now 10 years old, the parents' face images corresponding to age 10 are obtained, and the facial features extracted from both parents' face images include geometric features, skin color features, texture features, and so on.
  • the third missing face image refers to the missing face image that has been corrected by the facial features of the immediate blood relatives.
  • the server corrects the second missing face image corresponding to the target age according to the facial features of the extracted face images of the immediate blood relatives to obtain the corrected third missing face image.
  • the face recognition result indicates whether or not a similar face image is matched in the face database through face recognition.
  • the server performs face recognition on the third missing face image to obtain its face recognition result, and may then send the result to the terminal, where the third missing face image and the face recognition result can be displayed.
  • the face recognition result may be a face image matching a similarity greater than a preset threshold in the face database, and the face image is also displayed on the terminal.
  • the face database is used to store face data collected from various channels. For example, face image data can be obtained from the National Data Center, face image data can be obtained from recent monitoring equipment, and so on.
  • the face image data in the face database is constantly updated as time changes.
  • the aforementioned missing face recognition method receives a face recognition instruction and obtains the first missing face image according to it; inputs the first missing face image into the trained face prediction model to obtain the second missing face image corresponding to the target age; obtains the face image of the immediate blood relative corresponding to the target age and extracts its facial features; corrects the second missing face image corresponding to the target age according to the facial features to obtain the third missing face image; and performs face recognition on the third missing face image to obtain the face recognition result.
  • by using the face prediction model to predict the second missing face image from the first missing face image, correcting it according to the parents' facial features to obtain the third missing face image, and then recognizing the corrected image to obtain the recognition result, the accuracy of face recognition is improved.
  • in one embodiment, before step S202, that is, before receiving the face recognition instruction and obtaining the first missing face image according to it, the method further includes the following steps:
  • S302 Obtain face images corresponding to each age, take the face image corresponding to the first age as input, and use the face image corresponding to the second age as output, and use a convolutional neural network for training.
  • the first age refers to the person's age at the time of disappearance, and the second age refers to the person's age after having been missing for a period of time.
  • for example, the first age may be 3 years old at the time of disappearance and the second age 20 years old after a period of time missing; or the first age may be 60 years old at the time of disappearance and the second age 65 years old after a period of time missing, and so on.
  • a convolutional neural network is a class of feedforward neural networks that performs convolution operations and has a deep structure; its structure includes an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer.
  • specifically, the server obtains a large number of face images corresponding to each age, uses the face image corresponding to the first age as the input of the convolutional neural network and the face image of the same person at the second age as the output, and trains the convolutional neural network.
  • the ReLU (Rectified Linear Unit) function is used as the activation function, that is, f(x) = max(0, x), and the loss function uses a cross-entropy function.
  • for example, a large number of face images of different age groups are acquired; a face image at a younger age is used as the input of the convolutional neural network, and the face image of the same person at an older age is used as the output for training. For instance, a 2-year-old face image can be used as the input and the same person's face image at age 15 as the output.
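  • The following is a minimal sketch of this kind of training setup. PyTorch is assumed purely for illustration (the application does not name a framework), the small encoder-decoder architecture is a placeholder rather than the application's actual network, and pixel values are assumed normalized to [0, 1] so that a binary cross-entropy loss can stand in for the cross-entropy loss mentioned above.

```python
import torch
import torch.nn as nn

# Minimal sketch, not the application's actual network: a small convolutional
# encoder-decoder that maps a face image at the first age to the same person's
# face image at the second age, using ReLU activations (f(x) = max(0, x)).
class FacePredictionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
            nn.Sigmoid(),  # keep predicted pixel values in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = FacePredictionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()  # cross-entropy-style loss on normalized pixel values

# Stand-in data for illustration: paired images of the same people,
# at the first age (input) and at the second age (output), shape (N, 3, H, W).
first_age_faces = torch.rand(8, 3, 64, 64)
second_age_faces = torch.rand(8, 3, 64, 64)

for step in range(100):  # training stops at a preset iteration count
    optimizer.zero_grad()
    predicted = model(first_age_faces)
    loss = loss_fn(predicted, second_age_faces)
    loss.backward()
    optimizer.step()
```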
  • the preset condition refers to the number of training iterations reaching the maximum number of iterations, or the value of the loss function reaching the preset threshold.
  • specifically, when the preset condition is reached, the server completes training and obtains a trained convolutional neural network; this trained convolutional neural network is the face prediction model.
  • by training the face prediction model in advance, the missing face image at the target age can be predicted directly from the existing missing face image when performing missing face recognition, thereby improving the efficiency of missing face recognition.
  • in one embodiment, step S206, that is, acquiring the face image of the immediate blood relative corresponding to the target age and extracting the facial features of that face image, includes the following steps:
  • S402 Divide the face image of the immediate blood relative corresponding to the target age according to preset conditions to obtain the face areas of that face image.
  • the preset conditions may be preset dividing conditions, such as dividing according to the regions of the five facial features.
  • specifically, the face image of the immediate blood relative corresponding to the target age is divided into a preset number of face areas, and each divided face area of the face image is obtained.
  • S404 Calculate the local binary mode value of the face area to obtain the texture feature of the face area.
  • the server calculates the LBP (Local Binary Patterns, local binary pattern) value of each face area to obtain the texture feature of each face area.
  • the basic idea of LBP is based on a certain pixel in the image as the center, and threshold comparison of adjacent pixels. If the brightness of the center pixel is greater than or equal to its neighboring pixels, mark the neighboring pixel as 1, otherwise mark it as 0.
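  • As a concrete illustration of the LBP computation described above (a sketch only; the application does not specify an implementation, and NumPy is assumed), the following computes an 8-neighbour LBP code for each interior pixel of a grayscale face area, using the comparison direction stated in the text, and summarises the area by a normalised LBP histogram as its texture feature.

```python
import numpy as np

def lbp_texture_feature(gray_area):
    """gray_area: 2-D uint8 grayscale face area; returns a 256-bin LBP histogram."""
    area = gray_area.astype(np.int32)
    center = area[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # 8 neighbours of each interior pixel, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = area[1 + dy: area.shape[0] - 1 + dy,
                         1 + dx: area.shape[1] - 1 + dx]
        # centre pixel >= neighbour -> neighbour marked 1, otherwise 0 (as stated above)
        codes |= (center >= neighbour).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)  # normalised histogram used as the texture feature
```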
  • S406 Determine the skin color feature of the face region, and obtain the face feature of the face image of the immediate blood relative according to the texture feature of the face region and the skin color feature of the face region.
  • the server determines the skin color feature according to the pixel points of each face area, or uses a skin color model to determine the skin color feature of each face area, such as a Gaussian mixture model.
  • the server obtains the facial features of each face area in the face image of the immediate blood relative according to the texture feature and the skin color feature of each face area.
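  • For the skin color feature, one simple possibility (an illustrative assumption; the text allows either direct pixel statistics or a skin color model such as a Gaussian mixture) is to summarise each face area by the mean and standard deviation of its chrominance channels in YCrCb space, and to concatenate this with the LBP histogram from the sketch above to form the area's facial feature vector. OpenCV (cv2) is assumed for the colour conversions.

```python
import numpy as np
import cv2  # OpenCV, assumed available for colour-space conversion

def skin_color_feature(area_bgr):
    """area_bgr: face area as a BGR uint8 image; returns a small skin-colour descriptor."""
    ycrcb = cv2.cvtColor(area_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]  # chrominance channels
    return np.array([cr.mean(), cr.std(), cb.mean(), cb.std()])

def area_face_feature(area_bgr):
    """Concatenate texture (LBP histogram) and skin-colour statistics for one face area."""
    gray = cv2.cvtColor(area_bgr, cv2.COLOR_BGR2GRAY)
    return np.concatenate([lbp_texture_feature(gray), skin_color_feature(area_bgr)])
```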
  • in the above embodiment, the face image of the immediate blood relative corresponding to the target age is divided according to the preset conditions to obtain its face areas, and the local binary pattern value of each face area is calculated to obtain its texture feature.
  • the skin color feature of each face area is determined, and the facial features of the immediate blood relative's face image are obtained according to the texture features and skin color features of the face areas; computing the texture and skin color features makes the extraction of the relative's facial features convenient and fast.
  • in one embodiment, step S208, that is, correcting the second missing face image corresponding to the target age according to the facial features to obtain the third missing face image, includes the following steps:
  • the first facial feature refers to a facial feature in a face area whose similarity between the two relatives' features is greater than a preset threshold; there may be more than one such feature.
  • each face area may or may not contain a first facial feature; that is, a face area may contain no facial feature whose similarity is greater than the preset threshold.
  • the face image of the first blood relative and the face image of the second blood relative are the face images corresponding to two different direct blood relatives selected from the direct blood relatives. For example, if the missing person is a child, the face image of the first blood relative may be the face image of the father, and the face image of the second blood relative may be the face image of the mother. If the missing person is a parent, the face image of the first blood relative may be the face image of the son, and the face image of the second blood relative may be the face image of the daughter, etc.
  • the server calculates the similarity between the facial features corresponding to the facial image of the first direct blood relative and the facial features corresponding to the second direct blood relative. If the missing person is a child, the similarity between the facial features in each face area of the father's face image and the corresponding face area in the mother's face image can be calculated. When the similarity is greater than the preset threshold, the first facial feature whose similarity of the facial features in each face area is greater than the preset threshold is acquired.
  • S504 Calculate the second facial feature of the second missing face image corresponding to the first facial feature, and replace the second facial feature with the first facial feature to obtain the third missing face image.
  • the second face feature is the face feature of the same face area in the second missing face image and the face image of the immediate blood relative.
  • the third missing face image refers to the missing face image that has been corrected by the facial features in the face image of the immediate blood relative.
  • specifically, the server divides the second missing face image using the same face-area division as was applied to the immediate blood relative's face image, obtaining the target face area in the second missing face image, i.e. the face area corresponding to the one the first facial feature belongs to.
  • the second facial feature in that face area is then calculated; the second facial feature corresponds to the first facial feature, and replacing the second facial feature with the first facial feature yields the third missing face image.
  • for example, for the skin color feature of a face area in the immediate blood relative's face image, the same face area is located in the second missing face image, the skin color feature of that area in the second missing face image is calculated, and that skin color feature is replaced with the relative's skin color feature for the area, giving the third missing face image.
  • in the above embodiment, the similarity between the facial features of the first immediate blood relative's face image and those of the second immediate blood relative's face image is calculated, and when the similarity is greater than the preset threshold, the first facial feature whose similarity exceeds the threshold is obtained.
  • the second facial feature of the second missing face image corresponding to the first facial feature is then calculated and replaced with the first facial feature, which corrects the second missing face image and yields the third missing face image, so that the accuracy of subsequent face recognition is improved.
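  • A minimal sketch of this correction step, under the assumptions that each face area is represented by a feature vector as in the earlier sketches, that similarity between the two relatives' features is measured by cosine similarity (the application does not fix a particular measure), and that the father's feature stands in for the shared first facial feature when the two relatives agree (averaging the two would be another reasonable choice).

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def correct_missing_face_features(second_face, father_face, mother_face, threshold=0.9):
    """
    Each argument maps a face-area name to its feature vector. Where the two
    relatives' features for an area agree strongly (similarity above the threshold),
    the predicted face's feature for that area is replaced by the shared feature.
    """
    third_face = dict(second_face)  # start from the predicted (second) missing face
    for area, father_feat in father_face.items():
        mother_feat = mother_face.get(area)
        if mother_feat is None or area not in second_face:
            continue
        if cosine_similarity(father_feat, mother_feat) > threshold:
            # first facial feature found: replace the second facial feature with it
            third_face[area] = father_feat
        # otherwise the area keeps the feature predicted by the model
    return third_face
```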
  • in one embodiment, step S210, that is, performing face recognition on the target missing face image to obtain the face recognition result, includes the following steps:
  • S602 Determine the corresponding skin color feature according to the target missing face image, and calculate the first similarity with the skin color feature of the face in the preset face database according to the skin color feature.
  • the first degree of similarity refers to the degree of skin color similarity between the missing face image and the face in the face database.
  • specifically, the server calculates the corresponding skin color feature from the pixels of the target missing face image, or can use an established skin color model to calculate it, and then calculates the first similarity between this skin color feature and the facial skin color features in the preset face database.
  • S604 Calculate the corresponding texture feature according to the target missing face image, and calculate the second degree of similarity with the face texture feature in the preset face database according to the texture feature.
  • the second similarity refers to the texture similarity between the missing face image and the face in the face database.
  • specifically, the server divides the target missing face image into a preset number of face areas, calculates the LBP value of each pixel in each face area, and obtains the texture feature of the target missing face image from the calculated LBP values of the face areas.
  • the basic idea of LBP is based on a certain pixel in the image as the center, and threshold comparison of adjacent pixels. If the brightness of the center pixel is greater than or equal to its neighboring pixels, mark the neighboring pixel as 1, otherwise mark it as 0.
  • S606 Obtain the similarity between the target missing face image and the face in the preset face database according to the first similarity and the second similarity, and obtain a face recognition result.
  • specifically, the similarity between the target missing face image and each face in the preset face database is obtained from the first similarity and the second similarity, and a face whose similarity is greater than the preset threshold is searched for; when one can be found, the face images in the preset face database whose similarity is greater than the preset threshold are returned to the terminal for display, and when none is found, a prompt message indicating that the matching failed is returned to the terminal.
  • in the above embodiment, the corresponding skin color feature is determined from the target missing face image and the first similarity with the facial skin color features in the preset face database is calculated from it; the corresponding texture feature is calculated from the target missing face image and the second similarity with the facial texture features in the preset face database is calculated from it; and the similarity between the target missing face image and the faces in the preset face database is obtained from the first similarity and the second similarity to produce the face recognition result.
  • calculating the similarities separately from different facial features and then combining them into the overall similarity between the missing face image and the faces in the preset face database improves the accuracy of the similarity calculation.
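  • The application does not state how the first (skin colour) and second (texture) similarities are combined; a simple weighted sum is one plausible reading. The sketch below uses hypothetical weights and similarity measures (histogram intersection for texture, an inverse-distance score for skin colour) chosen only for illustration.

```python
import numpy as np

def texture_similarity(h1, h2):
    """Second similarity: histogram intersection between two normalised LBP histograms."""
    return float(np.minimum(h1, h2).sum())

def skin_color_similarity(c1, c2):
    """First similarity: inverse of a scaled distance between chrominance statistics."""
    return float(1.0 / (1.0 + np.linalg.norm(np.asarray(c1) - np.asarray(c2))))

def overall_similarity(target_feat, candidate_feat, w_color=0.5, w_texture=0.5):
    """target_feat / candidate_feat: dicts holding 'color' and 'texture' feature vectors."""
    first = skin_color_similarity(target_feat["color"], candidate_feat["color"])
    second = texture_similarity(target_feat["texture"], candidate_feat["texture"])
    return w_color * first + w_texture * second

def search_face_database(target_feat, face_db, threshold=0.8):
    """face_db: dict mapping person id -> feature dict. Returns matches above the threshold."""
    scores = {pid: overall_similarity(target_feat, feat) for pid, feat in face_db.items()}
    return {pid: s for pid, s in scores.items() if s > threshold}
```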
  • in one embodiment, performing face recognition on the target missing face image to obtain the face recognition result includes the step of starting preset parallel threads to recognize the target missing face image against the faces in the preset face database in parallel.
  • specifically, the server starts multiple preset parallel threads and uses them to match the target missing face image against different faces in the preset face database simultaneously, obtaining the matching results against the preset face database and improving the efficiency of face recognition.
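  • A minimal sketch of this parallel matching step using Python's standard thread pool (an illustrative assumption; the application only says that preset parallel threads are started). The preset face database is split into chunks and each thread matches the target image's features against one chunk, reusing overall_similarity from the previous sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def match_chunk(target_feat, chunk, threshold=0.8):
    """Match the target features against one slice of the preset face database."""
    results = {}
    for pid, feat in chunk:
        score = overall_similarity(target_feat, feat)
        if score > threshold:
            results[pid] = score
    return results

def parallel_recognition(target_feat, face_db, num_threads=4, threshold=0.8):
    items = list(face_db.items())
    chunks = [items[i::num_threads] for i in range(num_threads)]  # split the database
    merged = {}
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        for partial in pool.map(lambda c: match_chunk(target_feat, c, threshold), chunks):
            merged.update(partial)
    return merged
```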
  • in another embodiment, step S210, performing face recognition on the target missing face image to obtain the face recognition result, includes the following steps:
  • S702 Send the target missing image to each slave node server, so that each slave node server performs face recognition on the target missing face image.
  • specifically, the server acts as the master node server: the master node server sends the target missing image to each slave node server, each slave node server is loaded with a face recognition program, and the face recognition tasks are distributed to the slave node servers for recognition according to load balancing.
  • that is, the faces in the preset face database are distributed across the slave node servers according to load balancing, so that each slave node server matches the target missing face image against a part of the faces in the preset face database.
  • when each slave node server completes its recognition task, its face recognition result is returned to the master node server.
  • the target face recognition result refers to the result obtained after the target missing face image is matched with all the faces in the preset face database.
  • specifically, the master node server obtains the face recognition results returned by the slave node servers and obtains the target face recognition result from them.
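  • A hedged sketch of the master/slave distribution described above. The slave-node addresses, the HTTP endpoint and response format, and the use of the requests library are assumptions for illustration only; the application does not specify how the master node communicates with the slave nodes.

```python
import concurrent.futures
import requests  # assumed HTTP transport between master and slave nodes

SLAVE_NODES = ["http://slave-node-1:8000", "http://slave-node-2:8000"]  # hypothetical addresses

def recognize_on_slave(node_url, image_bytes):
    """Ask one slave node to match the target image against its share of the face database."""
    resp = requests.post(f"{node_url}/recognize", files={"image": image_bytes}, timeout=60)
    resp.raise_for_status()
    return resp.json()  # assumed to be a mapping of person id -> similarity

def distributed_recognition(image_bytes):
    """Master node: fan the task out to every slave node and merge the returned results."""
    merged = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(SLAVE_NODES)) as pool:
        futures = [pool.submit(recognize_on_slave, url, image_bytes) for url in SLAVE_NODES]
        for future in concurrent.futures.as_completed(futures):
            merged.update(future.result())
    return merged
```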
  • a missing face recognition device 800 which includes: a first image acquisition module 802, a second image acquisition module 804, a feature extraction module 806, and a third image acquisition module 808 and face recognition module 810, where:
  • the first image acquisition module 802 is configured to receive a face recognition instruction, and acquire a first missing face image according to the face recognition instruction;
  • the second image obtaining module 804 is configured to input the first missing face image into the trained face prediction model to obtain the second missing face image corresponding to the target age;
  • the feature extraction module 806 is configured to obtain the face image of the immediate blood relative corresponding to the target age, and extract the facial features of that face image;
  • the third image obtaining module 808 is used to correct the second missing face image corresponding to the target age according to the facial features to obtain the third missing face image;
  • the face recognition module 810 is configured to perform face recognition on the third missing face image to obtain a face recognition result.
  • the missing face recognition device 800 further includes:
  • the model training module is used to obtain face images corresponding to each age, take the face image corresponding to the first age as input, and use the face image corresponding to the second age as output, and use a convolutional neural network for training;
  • the training completion module is used to obtain the trained face prediction model when the preset conditions are reached.
  • the feature extraction module 806 includes:
  • the area dividing unit is used to divide the face image of the immediate blood relative corresponding to the target age according to preset conditions to obtain the face areas of that face image;
  • the texture feature calculation unit is used to calculate the local binary mode value of the face area to obtain the texture feature of the face area
  • the face feature obtaining unit is used to determine the skin color feature of the face area, and obtain the face feature of the face image of the immediate blood relative according to the texture feature of the face area and the skin color feature of the face area.
  • the third image obtaining module 808 includes:
  • the first facial feature obtaining module is used to calculate the similarity between the facial features corresponding to the father's face image and the facial features corresponding to the mother's face image, and, when the similarity is greater than the preset threshold, to obtain the first facial feature whose similarity is greater than the preset threshold;
  • the feature replacement module is used to calculate the second facial feature of the second missing face image corresponding to the first facial feature, and replace the second facial feature with the first facial feature to obtain the third missing face image.
  • the face recognition module 810 includes:
  • the first similarity calculation unit is configured to determine the corresponding skin color feature according to the target missing face image, and calculate the first similarity with the facial skin color feature in the preset face database according to the skin color feature;
  • the second similarity calculation unit is configured to calculate the corresponding texture feature according to the target missing face image, and calculate the second similarity with the face texture feature in the preset face database according to the texture feature;
  • the face similarity obtaining unit is used to obtain the similarity between the target missing face image and the face in the preset face database according to the first similarity and the second similarity, to obtain a face recognition result.
  • the face recognition module 810 includes:
  • the parallel computing unit is used to start a preset parallel thread to recognize the target missing face image and the face in the preset face database in parallel to obtain the face recognition result.
  • the face recognition module 810 includes:
  • the image sending unit is used to send the target missing image to each slave node server, so that each slave node server performs face recognition on the target missing face image;
  • the result obtaining unit is used to obtain the face recognition results returned by each slave node server, and obtain the target face recognition result according to the face recognition results returned by the slave node servers.
  • each module in the aforementioned missing face recognition device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the foregoing modules may be embedded in the form of hardware or independent of the processor in the computer device, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the foregoing modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 9.
  • the computer equipment includes a processor, a memory, a network interface and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the database of the computer device is used to store face data.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer readable instruction is executed by the processor to realize a missing face recognition method.
  • those skilled in the art will understand that FIG. 9 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer equipment to which the solution is applied; the specific computer equipment may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device including a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the processor, implement the steps of the missing face recognition method provided in any embodiment of the present application.
  • one or more non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps of the missing face recognition method provided in any embodiment of the present application.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A missing face recognition method, comprising: receiving a face recognition instruction and obtaining a first missing face image according to the face recognition instruction; inputting the first missing face image into a trained face prediction model to obtain a second missing face image corresponding to a target age; obtaining a face image of an immediate blood relative corresponding to the target age and extracting facial features of the immediate blood relative's face image; correcting the second missing face image corresponding to the target age according to the facial features to obtain a third missing face image; and performing face recognition on the third missing face image to obtain a face recognition result.

Description

失踪人脸识别方法、装置、计算机设备和存储介质
相关申请的交叉引用
本申请要求于2019年06月19日提交中国专利局,申请号为201910531862X,申请名称为“失踪人脸识别方法、装置、计算机设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及一种失踪人脸识别方法、装置、计算机设备和存储介质。
背景技术
随着互联网的发展,开始采用人脸识别对失踪人口进行寻找。但是,如果失踪人口走失的时间较长,失踪人口的容貌随年龄的增长变化比较大,因此通过传统的人脸识别的方式确认失踪人口,存在准确性较低的问题。
发明内容
根据本申请公开的各种实施例,提供一种失踪人脸识别方法、装置、计算机设备和存储介质。
一种失踪人脸识别方法,包括:
接收人脸识别指令,根据人脸识别指令获取第一失踪人脸图像;
将第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;
获取目标年龄对应的直系血亲人脸图像,提取直系血亲人脸图像的人脸特征;
根据人脸特征修正目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;及
对第三失踪人脸图像进行人脸识别,得到人脸识别结果。
一种失踪人脸识别装置,包括:
第一图像获取模块,用于接收人脸识别指令,根据人脸识别指令获取第一失踪人脸图像;
第二图像得到模块,用于将第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;
特征提取模块,用于获取目标年龄对应的直系血亲人脸图像,提取直系血亲人脸图像的人脸特征;
第三图像得到模块,用于根据人脸特征修正目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;及
人脸识别模块,用于对第三失踪人脸图像进行人脸识别,得到人脸识别结果。
一种计算机设备,包括存储器和一个或多个处理器,所述存储器中储存有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述一个或多个处理器执行以下步骤:
接收人脸识别指令,根据人脸识别指令获取第一失踪人脸图像;
将第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;
获取目标年龄对应的直系血亲人脸图像,提取直系血亲人脸图像的人脸特征;
根据人脸特征修正目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;及
对第三失踪人脸图像进行人脸识别,得到人脸识别结果。
一个或多个存储有计算机可读指令的非易失性计算机可读存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行以下步骤:
接收人脸识别指令,根据人脸识别指令获取第一失踪人脸图像;
将第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;
获取目标年龄对应的直系血亲人脸图像,提取直系血亲人脸图像的人脸特征;
根据人脸特征修正目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;及
对第三失踪人脸图像进行人脸识别,得到人脸识别结果。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其它的附图。
图1为根据一个或多个实施例中失踪人脸识别方法的应用场景图。
图2为根据一个或多个实施例中失踪人脸识别方法的流程示意图。
图3为根据一个或多个实施例中训练人脸预测模型的流程示意图。
图4为根据一个或多个实施例中的提取直系血亲人脸图像特征的流程示意图。
图5为根据一个或多个实施例中得到第三失踪人脸图像的流程示意图。
图6为根据一个或多个实施例中得到人脸识别结果的流程示意图。
图7另一个实施例中得到人脸识别结果的流程示意图。
图8为根据一个或多个实施例中失踪人脸识别方法装置的框图。
图9为根据一个或多个实施例中计算机设备的框图。
具体实施方式
为了使本申请的技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供的失踪人脸识别方法,可以应用于如图1所示的应用环境中。终端102通过网络与服务器104进行通信。服务器104接收终端102发送的人脸识别指令,根据人脸识别指令获取第一失踪人脸图像;将第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;获取目标年龄对应的直系血亲人脸图像,提取直系血亲人脸图像的人脸特征;根据人脸特征修正目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;对第三失踪人脸图像进行人脸识别,得到人脸识别结果,可以将人脸识别结果返回到终端102进行显示。终端102可以但不限于是各种个人计算机、笔记本电脑、智能手机、平板电脑和便携式可穿戴设备,服务器104可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
在其中一个实施例中,如图2所示,提供了一种失踪人脸识别方法,以该方法应用于图1中的服务器为例进行说明,包括以下步骤:
S202,接收人脸识别指令,根据人脸识别指令获取第一失踪人脸图像。
第一失踪人脸图像是失踪人员在走失前的人脸图像,第一失踪人脸图像可以是在走失前某个年龄拍照留存的人脸图像,也可以是走失前监控设备监控得到的人脸图像,也可以是居民身份证中的人脸图像。
具体地,服务器接收到终端发送的人脸识别指令,根据该指令服务器获取到终端上传的第一失踪人脸图像。
S204,将第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像。
人脸预测模型是根据历史人脸数据使用卷积神经网络算法建立的神经网络模型,是用来预测某一年龄的失踪人脸图像的。第二失踪人脸图像是指预测的在目标年龄时的失踪人脸图像。目标年龄是指该失踪人脸在进行预测时的年龄。比如,该失踪人脸是指4岁走失,第一失踪人脸图像就可以是4岁的人脸图像。则在进行失踪人脸预测时,该失踪人脸为10岁,则目标年龄就是10岁。
具体地,服务器将第一失踪人脸图像输入到已训练的人脸预测模型中进行计算,得到该人脸预测模型的输出即目标年龄对应的第二失踪人脸图像。
S206,获取目标年龄对应的直系血亲人脸图像,提取直系血亲人脸图像的人脸特征。
直系血亲是指和自己有直接血缘关系的亲属,具有生与被生关系,比如父母、子女、祖父母(外祖父母)、孙子女(外孙子女)等等。直系血亲人脸图像是指在目标年龄时直系血亲的人脸图像。
具体地,服务器获取该目标年龄对应的直系血亲的人脸图像,提取直系血亲人脸图像中的人脸特征。如果第一失踪人脸图像为儿童人脸图像,则直系血亲人脸图像可以是父母的人脸图像。如果第一失踪人脸图像为父母的人脸图像,则直系血亲人脸图像可以是儿女的人脸图像。比如当失踪人脸图像是4岁的儿童时,现在该走失儿童10岁,则获取10岁时父母的人脸图像,提取父母双方人脸图像中的人脸特征包括几何特征、肤色特征和纹理特征等等。
S208,根据人脸特征修正目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像。
第三失踪人脸图像是指经过直系血亲人脸图像的人脸特征修正后的失踪人脸图像。
具体地,服务器根据提取到的直系血亲人脸图像的人脸特征修正目标年龄对应的第二失踪人脸图像,得到修正后的第三失踪人脸图像。
S210,对第三失踪人脸图像进行人脸识别,得到人脸识别结果。
人脸识别结果包括在人脸数据库中通过人脸识别匹配到相似人脸图像或者未匹配到相似人脸图像。
具体地,服务器对第三失踪人脸图像进行人脸识别,得到该第三失踪人脸图像的人脸识别结果,然后可以将人脸识别结果发送到终端进行显示,可以显示第三失踪人脸图像和人脸识别结果。其中,人脸识别结果可以是在人脸数据库中匹配到相似度大于预设阈值的人脸图像,将该人脸图像也在终端进行显示。其中,人脸数据库用于存储从各个不同渠道到采集到的人脸数据,比如,可以从国家数据中心获取人脸图像数据,可以从近期的监控设备中获取人脸图像数据等等。人脸数据库中的人脸图像数据随着时间的变化而不断进行更新。
上述失踪人脸识别方法,通过接收人脸识别指令,根据人脸识别指令获取第一失踪人脸图像;将第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;获取目标年龄对应的直系血亲人脸图像,提取直系血亲人脸图像的人脸特征;根据人脸特征修正目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;对第三失踪人脸图像进行人脸识别,得到人脸识别结果。通过对第一失踪人脸使用人脸预测模型预测得到二失踪人脸图像并根据父母人脸特征进行修正的带第三失踪人脸图像,对第二失踪人脸图像识别,得到识别结果,提高了人脸识别的准确性。
在其中一个实施例中,如图3所示,在步骤S202之前,即在接收人脸识别指令,根据人脸识别指令获取第一失踪人脸图像之前,还包括步骤:
S302,获取各个年龄对应的人脸图像,将第一年龄对应的人脸图像作为输入,将第二年龄对应的人脸图像作为输出,使用卷积神经网络进行训练。
第一年龄是指该人员在失踪时的年龄,第二年龄是指该人员在失踪一段时间后的年龄。比如,第一年龄可以是失踪时的3岁,则第二年龄可以是在失踪一段时间后的20岁。 也可以是失踪时的60岁,则第二年龄可以是在失踪一段时间后的65岁等等。卷积神经网络是一类包含卷积计算且具有深度结构的前馈神经网络,其结构包括输入层,卷积层、池化层、全连接层和输出层。
具体地,服务器获取大量各个年龄对应的人脸图像,将第一年龄对应的人脸图像作为卷积神经网络的输入,将该输入的第一年龄的人脸图像对应的第二年龄人脸图像作为卷积神经网络的输出,使用卷积神经网络进行训练。使用ReLU(Rectified Linear Unit,线性整流函数)函数为激励函数即f(x)=max(0,x)。损失函数使用交叉熵函数。例如,获取到大量的不同年龄段的人脸图像。将年龄小的人脸图像作为卷积神经网络的输入,将同一的人脸图像在年龄较大的人脸图像作为卷积神经网络的输出,进行训练。举例来说,可以将2岁的人脸图像作为卷积神经网络的输入,将该2岁的人脸图像在15岁时的人脸图像作为卷积神经网络的输出,进行训练。也可以将20岁的人脸图像作为卷积神经网络的输入,将该20岁的人脸图像在30岁时的人脸图像作为卷积神经网络的输出,进行训练。
S304,当达到预设条件时,得到已训练的人脸预测模型。
预设条件是指当训练次数达到最大迭代次数或者损失函数的值达到预设阈值。
具体地,当达到预设条件时,服务器训练完成,得到训练好的卷积神经网络,该训练好的卷积神经网络就是人脸预测模型。
在上述实施例中,通过预设训练好人脸预测模型,在进行失踪人脸识别时,可以直接通过已有的失踪人脸图像预测目标年龄的失踪人脸图像,提高失踪人脸识别的效率。
在其中一个实施例中,如图4所示,步骤S206,即获取目标年龄对应的直系血亲人脸图像,提取直系血亲人脸图像的人脸特征,包括步骤:
S402,将目标年龄对应的直系血亲人脸图像按照预设条件进行划分,得到直系血亲人脸图像的人脸区域。
预设条件可以是按照预先设置好的划分条件比如按照五官区域进行划分等等。
具体地,将目标年龄对应的直系血亲人脸图像的人脸区域进行划分,可以划分为预先设置数量的人脸区域,得到直系血亲人脸图像的划分后的各个人脸区域。
S404,计算人脸区域的局部二进制模式值,得到人脸区域的纹理特征。
具体地,服务器计算各个人脸区域的LBP(Local Binary Patterns,局部二进制模式)值,得到各个人脸区域的纹理特征。LBP的基本思想是以图像中某个像素为中心,对相邻像素进行阈值比较。如果中心像素的亮度大于等于它的相邻像素,把相邻像素标记为1,否则标记为0。
S406,确定人脸区域的肤色特征,根据人脸区域的纹理特征和人脸区域的肤色特征,得到直系血亲人脸图像的人脸特征。
具体地,服务器根据各个人脸区域的像素点确定肤色特征,或者使用肤色模型来确定各个人脸区域的肤色特征,比如高斯混合模型等等。服务器根据各个人脸区域的纹理特征和各个人脸区域的肤色特征,得到直系血亲人脸图像中各个人脸区域的人脸特征。
在上述实施例中,通过将目标年龄对应的直系血亲人脸图像按照预设条件进行划分,得到直系血亲人脸图像的人脸区域,计算人脸区域的局部二进制模式值,得到人脸区域的纹理特征。确定人脸区域的肤色特征,根据人脸区域的纹理特征和人脸区域的肤色特征,得到直系血亲人脸图像的人脸特征。通过计算纹理特征和肤色特征,实现了提取直系血亲人脸图像的人脸特征,方便快捷。
在其中一个实施例中,如图5所示,步骤S208,即根据人脸特征修正目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像,包括步骤:
S502,计算第一直系血亲人脸图像对应的人脸特征和第二直系血亲人脸图像对应的人脸特征的相似度,当相似度大于预设阈值时,获取相似度大于预设阈值的第一人脸特征。
第一人脸特征是指各个人脸区域中的人脸特征的相似度大于预设阈值的人脸特征,该人脸特征可以有多个。每个人脸区域中可以有第一人脸特征,也可以没有第一人脸特征,即该人脸区域中没有相似度大于预设阈值的人脸特征。第一直系血亲人脸图像和第二直系血亲人脸图像是从直系血亲中选项两个不同的直系血亲对应的人脸图像。比如,若失踪人员为儿童,则第一直系血亲人脸图像可以是父亲人脸图像,第二直系血亲人脸图像可以是母亲人脸图像。若失踪人员为父母,则第一直系血亲人脸图像可以是儿子人脸图像,第二直系血亲人脸图像可以是女儿人脸图像等。
具体地,服务器计算第一直系血亲人脸图像对应的人脸特征和第二直系血亲人脸图像对应的人脸特征的相似度。若失踪人员为儿童,则可以计算父亲人脸图像各个人脸区域中的人脸特征和母亲人脸图像中对应人脸区域中人脸特征的相似度。当相似度大于预设阈值时,获取各个人脸区域中人脸特征的相似度大于预设阈值的第一人脸特征。
S504,计算第二失踪人脸图像与第一人脸特征对应的第二人脸特征,将第二人脸特征替换为第一人脸特征,得到第三失踪人脸图像。
第二人脸特征是第二失踪人脸图像中与直系血亲人脸图像中相同人脸区域的人脸特征。第三失踪人脸图像是指经过直系血亲人脸图像中人脸特征修正过的失踪人脸图像。
具体的,服务器将第二失踪人脸图像按照直系血亲人脸图像中人脸区域划分方法进行划分,得到第二失踪人脸图像中的目标人脸区域,该目标人脸区域是与第一人脸特征属于的人脸区域一致的人脸区域,计算该人脸区域中第二人脸特征,第二人脸特征与第一人脸特征是对应的。将第二人脸特征替换为第一人脸特征,得到第三失踪人脸图像。比如,直系血亲人脸图像中一个人脸区域的肤色特征,在第二失踪人脸图像中找到相同的人脸区域,计算第二失踪人脸图像中该相同区域的肤色特征,将该肤色特征替换为直系血亲人脸图像中该人脸区域的肤色特征,得到第三失踪人脸图像。
在上述实施例中,通过计算第一直系血亲人脸图像对应的人脸特征和第二直系血亲人脸图像对应的人脸特征的相似度,当相似度大于预设阈值时,获取相似度大于预设阈值的第一人脸特征,计算第二失踪人脸图像与第一人脸特征对应的第二人脸特征,将第二人脸特征替换为第一人脸特征,得到第三失踪人脸图像,实现了对第二失踪人脸图像的修正, 得到第三失踪人脸图像,使得在进行人脸识别时,提高人脸识别的准确性。
在其中一个实施例中,如图6所示,步骤S210,即对目标失踪人脸图像进行人脸识别,得到人脸识别结果,包括步骤:
S602,根据目标失踪人脸图像确定对应的肤色特征,根据肤色特征计算与预设人脸数据库中人脸肤色特征的第一相似度。
第一相似度是指失踪人脸图像与人脸数据库中人脸的肤色相似度。
具体地,服务器根据目标失踪人脸图像中像素点计算对应的肤色特征,也可以使用已建立肤色模型来计算肤色特征,根据该肤色特征计算与预设人脸数据库中人脸肤色特征的第一相似度。
S604,根据目标失踪人脸图像计算对应的纹理特征,根据纹理特征计算与预设人脸数据库中人脸纹理特征的第二相似度。
第二相似度是指失踪人脸图像与人脸数据库中人脸的纹理相似度。
具体地,服务器将目标失踪人脸图像划分为预设数量的人脸区域,计算每个人脸区域像素点的LBP值,根据计算得到的人脸区域的LBP值得到目标失踪人脸图像的纹理特征。LBP的基本思想是以图像中某个像素为中心,对相邻像素进行阈值比较。如果中心像素的亮度大于等于它的相邻像素,把相邻像素标记为1,否则标记为0。
S606,根据第一相似度和第二相似度得到目标失踪人脸图像与预设人脸数据库中人脸的相似度,得到人脸识别结果。
具体地,根据第一相似度和第二相似度得到目标失踪人脸图像与预设人脸数据库中人脸的相似度,查找相似度大于预设阈值的人脸,当能够查找到时,将预设人脸数据库中相似度大于预设阈值的人脸图像返回给终端进行显示,当未查找到时,向终端返回匹配失败的提示信息。
在上述实施例中,通过根据目标失踪人脸图像确定对应的肤色特征,根据肤色特征计算与预设人脸数据库中人脸肤色特征的第一相似度,根据目标失踪人脸图像计算对应的纹理特征,根据纹理特征计算与预设人脸数据库中人脸纹理特征的第二相似度,根据第一相似度和第二相似度得到目标失踪人脸图像与预设人脸数据库中人脸的相似度,得到人脸识别结果,通过使用不同的人脸特征分别计算相似度,最后得到失踪人脸图像与预设人脸数据库中人脸的相似度,提高了相似度计算的精确性。
在其中一个实施例中,对目标失踪人脸图像进行人脸识别,得到人脸识别结果,包括步骤:
启动预设并行线程,将目标失踪人脸图像与预设人脸数据库中的人脸并行识别,得到人脸识别结果。
具体的,服务器启动预先设置好的多个并行线程,将目标失踪人脸图像与预设人脸数据库中的人脸进行并行识别,即使用多个线程同时目标失踪人脸图像与预设人脸数据库中的不同人脸进行匹配处理,得到与预设人脸数据库中人脸匹配结果,提高人脸识别的效 率。
在其中一个实施例中,如图7所示,步骤S210,对目标失踪人脸图像进行人脸识别,得到人脸识别结果,包括步骤:
S702,将目标走失图像发送到各个从节点服务器,以使各个从节点服务器对目标失踪人脸图像进行人脸识别。
具体地,将服务器作为主节点服务器,主节点服务区将目标走失图像发送到各个从节点服务器,每个从节点服务器中都加载有人脸识别程序,将人脸识别任务按照负载均衡分配到从节点服务器进行识别,即可以将预设人脸数据库中人脸按照负载均衡分配的从节点服务器中,每个从节点服务器与预设人脸数据库中的一部分人脸进行匹配,当各个从节点服务器识别任务完成时,将人脸识别结果返回给主节点服务器。
S704,获取各个从节点服务器返回的人脸识别结果,根据从节点服务器返回的人脸识别得到目标人脸识别结果。
目标人脸识别结果是指目标失踪人脸图像与预设人脸数据库中的所有人脸匹配后得到的结果。
具体地,主节点服务器获取从节点服务器返回的人脸识别结果,根据从节点服务器返回的人脸识别得到目标人脸识别结果。
在上述实施例中,通过将人脸识别的任务分配到从节点服务器中进行处理,减轻了服务器的压力,提高了人脸识别的效率。
应该理解的是,虽然图2-7的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图2-7中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
在其中一个实施例中,如图8所示,提供了一种失踪人脸识别装置800,包括:第一图像获取模块802、第二图像得到模块804、特征提取模块806、第三图像得到模块808和人脸识别模块810,其中:
第一图像获取模块802,用于接收人脸识别指令,根据人脸识别指令获取第一失踪人脸图像;
第二图像得到模块804,用于将第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;
特征提取模块806,用于获取目标年龄对应的直系血亲人脸图像,提取直系血亲人脸图像的人脸特征;
第三图像得到模块808,用于根据人脸特征修正目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;
人脸识别模块810,用于对第三失踪人脸图像进行人脸识别,得到人脸识别结果。
在其中一个实施例中,失踪人脸识别装置800,还包括:
模型训练模块,用于获取各个年龄对应的人脸图像,将第一年龄对应的人脸图像作为输入,将第二年龄对应的人脸图像作为输出,使用卷积神经网络进行训练;
训练完成模块,用于当达到预设条件时,得到已训练的人脸预测模型。
在其中一个实施例中,特征提取模块806,包括:
区域划分单元,用于将目标年龄对应的直系血亲人脸图像按照预设条件进行划分,得到直系血亲人脸图像的人脸区域;
纹理特征计算单元,用于计算人脸区域的局部二进制模式值,得到人脸区域的纹理特征;
人脸特征得到单元,用于确定人脸区域的肤色特征,根据人脸区域的纹理特征和人脸区域的肤色特征,得到直系血亲人脸图像的人脸特征。
在其中一个实施例中,第三图像得到模块808,包括:
第一人脸特征得到模块,用于计算父亲人脸图像对应的人脸特征和母亲人脸图像对应的人脸特征的相似度,当相似度大于预设阈值时,获取相似度大于预设阈值的第一人脸特征;
特征替换模块,用于计算第二失踪人脸图像与第一人脸特征对应的第二人脸特征,将第二人脸特征替换为第一人脸特征,得到第三失踪人脸图像。
在其中一个实施例中,人脸识别模块810,包括:
第一相似度计算单元,用于根据目标失踪人脸图像确定对应的肤色特征,根据肤色特征计算与预设人脸数据库中人脸肤色特征的第一相似度;
第二相似度计算单元,用于根据目标失踪人脸图像计算对应的纹理特征,根据纹理特征计算与预设人脸数据库中人脸纹理特征的第二相似度;
人脸相似度得到单元,用于根据第一相似度和第二相似度得到目标失踪人脸图像与预设人脸数据库中人脸的相似度,得到人脸识别结果。
在其中一个实施例中,人脸识别模块810,包括:
并行计算单元,用于启动预设并行线程,将目标失踪人脸图像与预设人脸数据库中的人脸并行识别,得到人脸识别结果。
在其中一个实施例中,人脸识别模块810,包括:
图像发送单元,用于将目标走失图像发送到各个从节点服务器,以使各个从节点服务器对目标失踪人脸图像进行人脸识别;
结果获取单元,用于获取各个从节点服务器返回的人脸识别结果,根据从节点服务器返回的人脸识别得到目标人脸识别结果
关于失踪人脸识别装置的具体限定可以参见上文中对于失踪人脸识别方法的限定,在此不再赘述。上述失踪人脸识别装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在其中一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图9所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口和数据库。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机可读指令和数据库。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的数据库用于存储人脸数据。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时以实现一种失踪人脸识别方法。
本领域技术人员可以理解,图9中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
一种计算机设备,包括存储器和一个或多个处理器,存储器中存储有计算机可读指令,计算机可读指令被处理器执行时实现本申请任意一个实施例中提供的失踪人脸识别方法的步骤。
一个或多个存储有计算机可读指令的非易失性存储介质,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器实现本申请任意一个实施例中提供的失踪人脸识别方法的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾, 都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种失踪人脸识别方法,包括:
    接收人脸识别指令,根据所述人脸识别指令获取第一失踪人脸图像;
    将所述第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;
    获取所述目标年龄对应的直系血亲人脸图像,提取所述直系血亲人脸图像的人脸特征;
    根据所述人脸特征修正所述目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;及
    对所述第三失踪人脸图像进行人脸识别,得到人脸识别结果。
  2. 根据权利要求1所述的方法,其特征在于,在所述接收人脸识别指令,根据所述人脸识别指令获取第一失踪人脸图像之前,还包括:
    获取各个年龄对应的人脸图像,将第一年龄对应的人脸图像作为输入,将第二年龄对应的人脸图像作为输出,使用卷积神经网络进行训练;及
    当达到预设条件时,得到已训练的人脸预测模型。
  3. 根据权利要求1所述的方法,其特征在于,所述获取所述目标年龄对应的直系血亲人脸图像,提取所述直系血亲人脸图像的人脸特征,包括:
    将所述目标年龄对应的直系血亲人脸图像按照预设条件进行划分,得到所述直系血亲人脸图像的人脸区域;
    计算所述人脸区域的局部二进制模式值,得到所述人脸区域的纹理特征;及
    确定所述人脸区域的肤色特征,根据所述人脸区域的纹理特征和所述人脸区域的肤色特征,得到所述直系血亲人脸图像的人脸特征。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述人脸特征修正所述目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像,包括:
    计算第一直系血亲人脸图像对应的人脸特征和第二直系血亲人脸图像对应的人脸特征的相似度,当所述相似度大于预设阈值时,获取所述相似度大于预设阈值的第一人脸特征;及
    计算所述第二失踪人脸图像与所述第一人脸特征对应的第二人脸特征,将所述第二人脸特征替换为所述第一人脸特征,得到第三失踪人脸图像。
  5. 根据权利要求1所述的方法,其特征在于,所述对所述目标失踪人脸图像进行人脸识别,得到人脸识别结果,包括:
    根据所述目标失踪人脸图像确定对应的肤色特征,根据所述肤色特征计算与预设人脸数据库中人脸肤色特征的第一相似度;
    根据所述目标失踪人脸图像计算对应的纹理特征,根据所述纹理特征计算与所述预设人脸数据库中人脸纹理特征的第二相似度;及
    根据所述第一相似度和所述第二相似度得到所述目标失踪人脸图像与所述预设人脸数据库中人脸的相似度,得到人脸识别结果。
  6. 根据权利要求1所述的方法,其特征在于,所述对所述目标失踪人脸图像进行人脸识别,得到人脸识别结果,包括:
    启动预设并行线程,将所述目标失踪人脸图像与预设人脸数据库中的人脸并行识别,得到人脸识别结果。
  7. 根据权利要求1所述的方法,其特征在于,所述对所述目标失踪人脸图像进行人脸识别,得到人脸识别结果,包括:
    将所述目标走失图像发送到各个从节点服务器,以使所述各个从节点服务器对所述目标失踪人脸图像进行人脸识别;及
    获取所述各个从节点服务器返回的人脸识别结果,根据所述从节点服务器返回的人脸识别得到目标人脸识别结果。
  8. 一种失踪人脸识别装置,包括:
    第一图像获取模块,用于接收人脸识别指令,根据所述人脸识别指令获取第一失踪人脸图像;
    第二图像得到模块,用于将所述第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;
    特征提取模块,用于获取所述目标年龄对应的直系血亲人脸图像,提取所述直系血亲人脸图像的人脸特征;
    第三图像得到模块,用于根据所述人脸特征修正所述目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;及
    人脸识别模块,用于对所述第三失踪人脸图像进行人脸识别,得到人脸识别结果。
  9. 根据权利要求8所述的装置,其特征在于,所述装置还包括:
    模型训练模块,用于获取各个年龄对应的人脸图像,将第一年龄对应的人脸图像作为输入,将第二年龄对应的人脸图像作为输出,使用卷积神经网络进行训练;及
    训练完成模块,用于当达到预设条件时,得到已训练的人脸预测模型。
  10. 一种计算机设备,包括存储器及一个或多个处理器,所述存储器中储存有计算机可读指令,所述计算机可读指令被所述一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
    接收人脸识别指令,根据所述人脸识别指令获取第一失踪人脸图像;
    将所述第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;
    获取所述目标年龄对应的直系血亲人脸图像,提取所述直系血亲人脸图像的人脸特征;
    根据所述人脸特征修正所述目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图 像;及
    对所述第三失踪人脸图像进行人脸识别,得到人脸识别结果。
  11. 根据权利要求10所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令时还执行以下步骤:
    获取各个年龄对应的人脸图像,将第一年龄对应的人脸图像作为输入,将第二年龄对应的人脸图像作为输出,使用卷积神经网络进行训练;及
    当达到预设条件时,得到已训练的人脸预测模型。
  12. 根据权利要求10所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令时还执行以下步骤:
    将所述目标年龄对应的直系血亲人脸图像按照预设条件进行划分,得到所述直系血亲人脸图像的人脸区域;
    计算所述人脸区域的局部二进制模式值,得到所述人脸区域的纹理特征;及
    确定所述人脸区域的肤色特征,根据所述人脸区域的纹理特征和所述人脸区域的肤色特征,得到所述直系血亲人脸图像的人脸特征。
  13. 根据权利要求10所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令时还执行以下步骤:
    计算第一直系血亲人脸图像对应的人脸特征和第二直系血亲人脸图像对应的人脸特征的相似度,当所述相似度大于预设阈值时,获取所述相似度大于预设阈值的第一人脸特征;及
    计算所述第二失踪人脸图像与所述第一人脸特征对应的第二人脸特征,将所述第二人脸特征替换为所述第一人脸特征,得到第三失踪人脸图像。
  14. 根据权利要求10所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令时还执行以下步骤:
    根据所述目标失踪人脸图像确定对应的肤色特征,根据所述肤色特征计算与预设人脸数据库中人脸肤色特征的第一相似度;
    根据所述目标失踪人脸图像计算对应的纹理特征,根据所述纹理特征计算与所述预设人脸数据库中人脸纹理特征的第二相似度;及
    根据所述第一相似度和所述第二相似度得到所述目标失踪人脸图像与所述预设人脸数据库中人脸的相似度,得到人脸识别结果。
  15. 根据权利要求10所述的计算机设备,其特征在于,所述处理器执行所述计算机可读指令时还执行以下步骤:
    启动预设并行线程,将所述目标失踪人脸图像与预设人脸数据库中的人脸并行识别,得到人脸识别结果。
  16. 一个或多个存储有计算机可读指令的非易失性计算机可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
    接收人脸识别指令,根据所述人脸识别指令获取第一失踪人脸图像;
    将所述第一失踪人脸图像输入到已训练的人脸预测模型中,得到目标年龄对应的第二失踪人脸图像;
    获取所述目标年龄对应的直系血亲人脸图像,提取所述直系血亲人脸图像的人脸特征;
    根据所述人脸特征修正所述目标年龄对应的第二失踪人脸图像,得到第三失踪人脸图像;及
    对所述第三失踪人脸图像进行人脸识别,得到人脸识别结果。
  17. 根据权利要求16所述的存储介质,其特征在于,所述计算机可读指令被所述处理器执行时还执行以下步骤:
    获取各个年龄对应的人脸图像,将第一年龄对应的人脸图像作为输入,将第二年龄对应的人脸图像作为输出,使用卷积神经网络进行训练;及
    当达到预设条件时,得到已训练的人脸预测模型。
  18. 根据权利要求16所述的存储介质,其特征在于,所述计算机可读指令被所述处理器执行时还执行以下步骤:
    将所述目标年龄对应的直系血亲人脸图像按照预设条件进行划分,得到所述直系血亲人脸图像的人脸区域;
    计算所述人脸区域的局部二进制模式值,得到所述人脸区域的纹理特征;及
    确定所述人脸区域的肤色特征,根据所述人脸区域的纹理特征和所述人脸区域的肤色特征,得到所述直系血亲人脸图像的人脸特征。
  19. 根据权利要求16所述的存储介质,其特征在于,所述计算机可读指令被所述处理器执行时还执行以下步骤:
    计算第一直系血亲人脸图像对应的人脸特征和第二直系血亲人脸图像对应的人脸特征的相似度,当所述相似度大于预设阈值时,获取所述相似度大于预设阈值的第一人脸特征;及
    计算所述第二失踪人脸图像与所述第一人脸特征对应的第二人脸特征,将所述第二人脸特征替换为所述第一人脸特征,得到第三失踪人脸图像。
  20. 根据权利要求16所述的存储介质,其特征在于,所述计算机可读指令被所述处理器执行时还执行以下步骤:
    根据所述目标失踪人脸图像确定对应的肤色特征,根据所述肤色特征计算与预设人脸数据库中人脸肤色特征的第一相似度;
    根据所述目标失踪人脸图像计算对应的纹理特征,根据所述纹理特征计算与所述预设人脸数据库中人脸纹理特征的第二相似度;及
    根据所述第一相似度和所述第二相似度得到所述目标失踪人脸图像与所述预设人脸数据库中人脸的相似度,得到人脸识别结果。
PCT/CN2019/102927 2019-06-19 2019-08-28 失踪人脸识别方法、装置、计算机设备和存储介质 WO2020252911A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910531862.XA CN110378230B (zh) 2019-06-19 2019-06-19 失踪人脸识别方法、装置、计算机设备和存储介质
CN201910531862.X 2019-06-19

Publications (1)

Publication Number Publication Date
WO2020252911A1 true WO2020252911A1 (zh) 2020-12-24

Family

ID=68249290

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/102927 WO2020252911A1 (zh) 2019-06-19 2019-08-28 失踪人脸识别方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN110378230B (zh)
WO (1) WO2020252911A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580572A (zh) * 2020-12-25 2021-03-30 深圳市优必选科技股份有限公司 多任务识别模型的训练方法及使用方法、设备及存储介质
CN113255594A (zh) * 2021-06-28 2021-08-13 深圳市商汤科技有限公司 人脸识别方法和装置、神经网络
CN113361471A (zh) * 2021-06-30 2021-09-07 平安普惠企业管理有限公司 图像数据处理方法、装置、计算机设备及存储介质
CN114708644A (zh) * 2022-06-02 2022-07-05 杭州魔点科技有限公司 一种基于家庭基因模板的人脸识别方法和系统
CN116912899A (zh) * 2023-05-22 2023-10-20 国政通科技有限公司 一种基于区域网络的人员搜索方法及装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112437226B (zh) * 2020-09-15 2022-09-16 上海传英信息技术有限公司 图像处理方法、设备及存储介质
CN117441195A (zh) * 2021-04-26 2024-01-23 微软技术许可有限责任公司 纹理补全

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100290677A1 (en) * 2009-05-13 2010-11-18 John Kwan Facial and/or Body Recognition with Improved Accuracy
CN108257081A (zh) * 2018-01-17 2018-07-06 百度在线网络技术(北京)有限公司 用于生成图片的方法和装置
CN108804996A (zh) * 2018-03-27 2018-11-13 腾讯科技(深圳)有限公司 人脸验证方法、装置、计算机设备及存储介质
CN109359210A (zh) * 2018-08-09 2019-02-19 中国科学院信息工程研究所 双盲隐私保护的人脸检索方法与系统
CN109509142A (zh) * 2018-10-29 2019-03-22 重庆中科云丛科技有限公司 一种人脸变老图像处理方法、系统、可读存储介质及设备

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902960A (zh) * 2012-12-28 2014-07-02 北京计算机技术及应用研究所 一种实时人脸识别系统及其方法
CN105488463B (zh) * 2015-11-25 2019-01-29 康佳集团股份有限公司 基于人脸生物特征的直系亲属关系识别方法及系统
CN106920256B (zh) * 2017-03-14 2020-05-05 张志航 一种有效的失踪儿童寻找系统
CN108875466A (zh) * 2017-06-01 2018-11-23 北京旷视科技有限公司 基于人脸识别的监控方法、监控系统与存储介质
CN107679451A (zh) * 2017-08-25 2018-02-09 百度在线网络技术(北京)有限公司 建立人脸识别模型的方法、装置、设备和计算机存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100290677A1 (en) * 2009-05-13 2010-11-18 John Kwan Facial and/or Body Recognition with Improved Accuracy
CN108257081A (zh) * 2018-01-17 2018-07-06 百度在线网络技术(北京)有限公司 用于生成图片的方法和装置
CN108804996A (zh) * 2018-03-27 2018-11-13 腾讯科技(深圳)有限公司 人脸验证方法、装置、计算机设备及存储介质
CN109359210A (zh) * 2018-08-09 2019-02-19 中国科学院信息工程研究所 双盲隐私保护的人脸检索方法与系统
CN109509142A (zh) * 2018-10-29 2019-03-22 重庆中科云丛科技有限公司 一种人脸变老图像处理方法、系统、可读存储介质及设备

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580572A (zh) * 2020-12-25 2021-03-30 深圳市优必选科技股份有限公司 多任务识别模型的训练方法及使用方法、设备及存储介质
CN112580572B (zh) * 2020-12-25 2023-09-08 深圳市优必选科技股份有限公司 多任务识别模型的训练方法及使用方法、设备及存储介质
CN113255594A (zh) * 2021-06-28 2021-08-13 深圳市商汤科技有限公司 人脸识别方法和装置、神经网络
CN113361471A (zh) * 2021-06-30 2021-09-07 平安普惠企业管理有限公司 图像数据处理方法、装置、计算机设备及存储介质
CN114708644A (zh) * 2022-06-02 2022-07-05 杭州魔点科技有限公司 一种基于家庭基因模板的人脸识别方法和系统
CN114708644B (zh) * 2022-06-02 2022-09-13 杭州魔点科技有限公司 一种基于家庭基因模板的人脸识别方法和系统
CN116912899A (zh) * 2023-05-22 2023-10-20 国政通科技有限公司 一种基于区域网络的人员搜索方法及装置
CN116912899B (zh) * 2023-05-22 2024-05-03 国政通科技有限公司 一种基于区域网络的人员搜索方法及装置

Also Published As

Publication number Publication date
CN110378230A (zh) 2019-10-25
CN110378230B (zh) 2024-03-05

Similar Documents

Publication Publication Date Title
WO2020252911A1 (zh) 失踪人脸识别方法、装置、计算机设备和存储介质
CN109389030B (zh) 人脸特征点检测方法、装置、计算机设备及存储介质
CN112037912B (zh) 基于医疗知识图谱的分诊模型训练方法、装置及设备
US11200404B2 (en) Feature point positioning method, storage medium, and computer device
CN113239874B (zh) 基于视频图像的行为姿态检测方法、装置、设备及介质
CN110489951B (zh) 风险识别的方法、装置、计算机设备和存储介质
CN110866491B (zh) 目标检索方法、装置、计算机可读存储介质和计算机设备
WO2017088432A1 (zh) 图像识别方法和装置
CN110824587B (zh) 图像预测方法、装置、计算机设备和存储介质
CN109271917B (zh) 人脸识别方法、装置、计算机设备和可读存储介质
CN108830782B (zh) 图像处理方法、装置、计算机设备和存储介质
WO2022057309A1 (zh) 肺部特征识别方法、装置、计算机设备及存储介质
CN112241952B (zh) 大脑中线识别方法、装置、计算机设备及存储介质
CN112016318A (zh) 基于解释模型的分诊信息推荐方法、装置、设备及介质
CN113469092B (zh) 字符识别模型生成方法、装置、计算机设备和存储介质
CN110689323A (zh) 图片审核方法、装置、计算机设备和存储介质
CN109002776B (zh) 人脸识别方法、系统、计算机设备和计算机可读存储介质
CN110956195A (zh) 图像匹配方法、装置、计算机设备及存储介质
CN113705685A (zh) 疾病特征识别模型训练、疾病特征识别方法、装置及设备
CN109102549B (zh) 图像光源颜色的检测方法、装置、计算机设备及存储介质
CN109087240B (zh) 图像处理方法、图像处理装置及存储介质
CN111178126A (zh) 目标检测方法、装置、计算机设备和存储介质
CN110929730A (zh) 图像处理方法、装置、计算机设备和存储介质
JP5930450B2 (ja) アノテーション装置及びアノテーションシステム
CN111027469B (zh) 人体部位识别方法、计算机设备和可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19933655

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19933655

Country of ref document: EP

Kind code of ref document: A1