WO2020019451A1 - Face recognition method, apparatus, computer device and storage medium - Google Patents

Face recognition method, apparatus, computer device and storage medium

Info

Publication number
WO2020019451A1
WO2020019451A1 (application PCT/CN2018/106263; CN2018106263W)
Authority
WO
WIPO (PCT)
Prior art keywords
portrait
avatar
preset
person
face
Prior art date
Application number
PCT/CN2018/106263
Other languages
English (en)
French (fr)
Inventor
王红伟
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020019451A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/179 Metadata assisted face recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of face recognition technology, and in particular, to a face recognition method, device, computer device, and storage medium.
  • face recognition technology has been widely used in many scenarios.
  • for example, the public security system performs face recognition on criminal suspects in order to track them.
  • however, current face recognition technology usually compares the original avatar image of the person being identified with the tracking target's avatar one-to-one, and pursues the criminal suspect on the basis of that comparison.
  • its shortcoming is that, in order to evade capture, cunning criminals often modify or change the parts of their appearance that are relatively easy to alter, making it difficult for the human eye to recognise their original features; the simple one-to-one face recognition of the prior art thus loses its meaning, and its tracking effect is unsatisfactory.
  • a face recognition method includes:
  • a face recognition device includes:
  • a receiving module, configured to receive a recognition instruction and obtain a recognition target avatar;
  • an acquisition module, configured to acquire an avatar of a person to be identified;
  • a face shape comparison module, configured to compare the face contour of the avatar of the person to be identified with that of the recognition target avatar, to detect whether the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds a preset ratio;
  • a portrait simulation module, configured to perform portrait simulation of facial features on the avatar of the person to be identified to generate a simulated portrait, when the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds the preset ratio;
  • a portrait comparison module, configured to compare the simulated portrait with the recognition target avatar, and prompt that the comparison is successful when the similarity between the simulated portrait and the recognition target avatar exceeds a preset threshold.
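The five modules above amount to a screen-then-simulate-then-compare flow. A minimal Python sketch follows; the function names, the dictionary avatars, the injected similarity callables and the default thresholds are all illustrative assumptions, not the patent's implementation:

```python
# Minimal sketch of the recognition flow described by the five modules above.
# All names, similarity callables and thresholds are illustrative assumptions.

def recognize(candidates, target, contour_sim, portrait_sim, simulate,
              preset_ratio=0.5, threshold=0.8):
    """Screen candidate avatars by face-contour similarity, then simulate a
    portrait for each survivor and compare it with the recognition target."""
    hits = []
    for avatar in candidates:
        # Face shape comparison module: preliminary screen by contour ratio.
        if contour_sim(avatar, target) <= preset_ratio:
            continue
        # Portrait simulation module: build a simulated portrait.
        portrait = simulate(avatar)
        # Portrait comparison module: prompt success above the threshold.
        if portrait_sim(portrait, target) > threshold:
            hits.append(avatar)
    return hits
```

A caller would supply real contour and portrait comparators; here any callable returning a similarity in [0, 1] fits the sketch.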
  • a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • when the processor executes the computer-readable instructions, the following steps are implemented:
  • One or more non-volatile readable storage media storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the following steps:
  • FIG. 1 is a schematic diagram of an application environment of a face recognition method according to an embodiment of the present application
  • FIG. 2 is a flowchart of a face recognition method according to an embodiment of the present application.
  • FIG. 3 is a flowchart of step S40 of a face recognition method in an embodiment of the present application;
  • FIG. 4 is a flowchart of step S402 of a face recognition method in an embodiment of the present application;
  • FIG. 5 is a flowchart of step S40 of a face recognition method in another embodiment of the present application;
  • FIG. 6 is a flowchart of a face recognition method in another embodiment of the present application.
  • FIG. 7 is a flowchart of step S50 of a face recognition method in an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a face recognition device according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a portrait simulation module of a face recognition device according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a generation sub-module of a face recognition device in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a portrait simulation module of a face recognition device in another embodiment of the present application.
  • FIG. 12 is a schematic diagram of a computer device in an embodiment of the present application.
  • the face recognition method provided in this application can be applied in the application environment as shown in FIG. 1, in which a client (computer device) communicates with a server through a network.
  • the client initiates a recognition instruction; the server performs a series of processing on the video or images of the person to be identified, captured in real time or at regular intervals by a monitoring device communicatively connected to the server, generates a simulated portrait, and prompts whether the comparison between the simulated portrait and the recognition target avatar is successful.
  • the clients include, but are not limited to, various personal computers, notebook computers, smart phones, tablet computers, cameras, and portable wearable devices.
  • the server can be implemented by an independent server or a server cluster composed of multiple servers.
  • a face recognition method is provided.
  • the method is applied to the server in FIG. 1 as an example for description, and includes the following steps:
  • the recognition instruction can be sent to the background server after the user clicks a preset button on the client.
  • after receiving the recognition instruction, the background server retrieves the recognition target avatar according to the recognition target information contained in the instruction.
  • the recognition target is the target object that the user wants to track; for example, a criminal suspect that the public security system wants to track. The recognition target may also be a tracking target set by another user according to their needs.
  • the recognition target avatar is stored in a tracking database; for example, when the recognition target is a criminal suspect, the avatar is usually already stored in the tracking database of the public security system.
  • the preset tracking database can also store other information of the recognition target according to user needs, such as name, gender, identity information and criminal records; after the simulated portrait is successfully compared, the corresponding information of the recognition target can be directly retrieved and displayed.
  • the avatar of the person to be identified may be obtained from a monitoring device communicatively connected to the server; the camera of the monitoring device captures video or images of the person to be identified in real time or at regular intervals and transmits them to the server, which crops out the avatar of the person to be identified according to preset specifications.
  • this cropping according to preset specifications can also be handled by a dedicated image server; the image server is connected to the monitoring device and the background server, obtains the captured video or images, crops an avatar from them, and converts it into an avatar of the person to be identified with the preset specifications. The preset specifications can be set according to user requirements, for example: avatar size, pixel requirements, brightness and contrast, and pre-processing such as removing repeated avatars while retaining the one with the highest resolution and the best shooting angle.
  • the image server stores the avatars of persons to be identified that have been converted to the preset specifications and, upon the background server's request, transfers them to the background server. When there is no image server, all operations performed by the image server can be performed by the background server.
  • S30: Compare the face contour of the avatar of the person to be identified with that of the recognition target avatar, to detect whether the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds a preset ratio.
  • the face contour is the face shape of the human face. Among a person's facial features, the face shape is relatively difficult to disguise or change by means such as makeup, and may require measures such as surgery (which are easily traced through hospital records and the like). Therefore, in this embodiment the recognition target is first preliminarily locked by the face shape, which is not easily changed, and the locking accuracy is accordingly higher.
  • the preset ratio can be set according to user needs: it can be entered manually after learning and summarising from the data the user has previously tracked, or an initial preset ratio can be set automatically in the system. Preferably, the preset ratio is in the range of 0.50-0.55, which gives a better tracking effect.
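The preliminary screen of step S30 can be sketched as follows. The text only specifies that a similarity ratio is compared against the preset ratio; the landmark-distance formula below is an illustrative assumption:

```python
import math

def contour_similarity(contour_a, contour_b):
    """Similarity ratio between two face contours, each an equal-length list
    of (x, y) landmark points normalised to [0, 1]. The distance-based score
    is an assumption; the patent fixes no particular formula."""
    mean_dist = (sum(math.dist(p, q) for p, q in zip(contour_a, contour_b))
                 / len(contour_a))
    # Identical contours score 1.0; larger mean deviation lowers the score.
    return max(0.0, 1.0 - mean_dist)

def passes_preliminary_screen(ratio, preset_ratio=0.50):
    """S30: preliminarily lock the target when the similarity ratio exceeds
    the preset ratio (0.50-0.55 is the range suggested above)."""
    return ratio > preset_ratio
```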
  • the facial features include, but are not limited to, ears, eyebrows, eyes, mouth, hair, etc.
  • the above common facial features and their shapes can be classified and stored in a simulation database, each associated with a unique label. The simulation database pre-stores a variety of typical facial features, each category containing different types belonging to multiple people (such as different eye shapes). When the user needs to perform portrait simulation, the typical facial features belonging to different people can be called directly from the simulation database by their unique labels (to simulate a portrait of the person to be identified).
  • when the criminal suspect (recognition target) also has specific facial features such as scars or tattoos, and these features need to be added to the simulated portrait, they can be retrieved directly from the tracking database or the simulation database if stored there; if not stored, these features can also be added manually.
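The label-keyed simulation database described above might look like the following sketch; every category name, label and value here is an illustrative assumption:

```python
# Sketch of the simulation database described above: typical facial features
# classified by category and stored under unique labels. All labels,
# categories and values are illustrative assumptions.
simulation_db = {
    "eye":   {"eye-01": "almond", "eye-02": "round"},
    "brow":  {"brow-01": "straight"},
    "mouth": {"mouth-01": "thin"},
    "scar":  {},  # specific features (scars, tattoos) may not be stored
}

def call_feature(db, category, label):
    """Call a typical facial feature from the simulation database by its
    unique label; return None when it is not stored, so the feature can be
    added manually instead."""
    return db.get(category, {}).get(label)
```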
  • S50 Compare the simulated portrait with the recognition target avatar, and prompt that the comparison is successful when the similarity between the simulated portrait and the recognition target avatar exceeds a preset threshold.
  • a successful comparison means that the avatar of the person to be identified matches the recognition target avatar, i.e., the person to be identified is consistent with the recognition target.
  • the comparison between the simulated portrait and the recognition target avatar may be a comparison of the two as a whole, or of only the facial features therein; the preset similarity threshold may be set according to requirements.
  • the face recognition method of this embodiment introduces portrait simulation into a face recognition system. This not only reduces misjudgements caused by human factors, but also allows tracking to continue when the person to be identified modifies their own facial features, improving the user's chances of finding the tracking target.
  • the method further includes steps:
  • when the avatars of all persons to be identified have been processed without a successful comparison, a manual identification prompt is issued; that is, the persons in the video or images captured by the monitoring devices have all been examined during tracking, but the tracking target has not been found, so the process is transferred to manual handling. At this point, a similarity ratio lower than the preset ratio may also be set as required, and step S30 performed again.
  • the step S40 includes the following steps:
  • the face contour of the avatar of the person to be identified is retained as the face contour of the simulated portrait, and the other facial features may be retrieved directly from the simulation database; the specific facial features mentioned above, such as scars and tattoos, can also be added at this point as part of the face contour of the simulated portrait, so that they need not be added again in subsequent steps.
  • the preset facial features in the simulation database are retrieved, and the preset facial features are correspondingly placed at preset positions in the face contour of the simulated portrait to generate the simulated portrait.
  • the preset facial features are categories set by the user according to requirements.
  • the user may set one or more of ears, eyebrows, nose, mouth, hair, etc. as the preset facial features.
  • a preset position for placing each preset facial feature is correspondingly generated on the simulated portrait.
  • other facial features not set by the user as preset facial features may also be used as part of the facial contour of the simulated portrait.
  • the step S402 includes the following steps:
  • S4021 Perform preset arrangement and combination of the preset facial features retrieved to generate multiple groups of facial feature combinations
  • arrangement and combination under the preset rules means that each preset facial feature is retrieved according to a priority level (or at random if there is no priority level), and the candidates for each preset facial feature are then switched in turn, thereby generating multiple groups of facial feature combinations.
  • Each facial feature in each group of the facial feature combination is correspondingly placed in a preset position in a face contour of the simulated portrait to generate the simulated portrait.
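The "preset arrangement and combination" of step S4021 amounts to taking one candidate per feature category and switching each in turn, i.e. a Cartesian product over the categories. A minimal sketch, with category names assumed:

```python
from itertools import product

def feature_combinations(features_by_category):
    """S4021 sketch: arrange and combine the retrieved preset facial features
    by taking one candidate from each category in turn, yielding every group
    of facial-feature combinations. If priority levels exist, each candidate
    list would simply be pre-sorted by priority (assumption)."""
    categories = sorted(features_by_category)  # deterministic order
    pools = [features_by_category[c] for c in categories]
    return [dict(zip(categories, combo)) for combo in product(*pools)]
```

Each resulting dictionary is one group of features to be placed at the preset positions in the face contour of the simulated portrait.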
  • after step S402, the method further includes the following steps:
  • using the original features of the avatar of the person to be identified as facial features in the simulation database, for the background server to retrieve, can make recognition and tracking more efficient, because the recognition target's changes to their facial features may not involve all of them; storing these facial features in the simulation database for subsequent retrieval therefore improves recognition efficiency.
  • in step S404, the facial features of the avatar of the person to be identified are classified and stored in the simulation database, and, by setting a priority level, the original facial features of the avatar of the person to be identified are made the facial features called with priority.
  • since the recognition target's changes to their facial features may not involve all of them, this helps the user find the recognition target faster.
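The priority retrieval just described can be sketched as a simple ordering: the avatar's own (original) features first, then the rest of the simulation database. The list-of-labels representation is an assumption:

```python
def retrieval_order(original_features, database_features):
    """Place the original facial features of the avatar of the person to be
    identified first, so they are called with priority; the remaining
    database features follow in their stored order. Illustrative only."""
    originals = list(original_features)
    rest = [f for f in database_features if f not in originals]
    return originals + rest
```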
  • the step S50 includes the following steps:
  • S501: Acquire the simulated portrait; that is, the server retrieves the simulated portrait processed in step S40 from the simulation database.
  • S502: Compare the simulated portrait with the recognition target avatar, and determine whether the similarity between the simulated portrait and the recognition target avatar exceeds the preset threshold; the comparison may be between the simulated portrait and the recognition target avatar as a whole.
  • when the recognition target has a unique facial feature, only the facial features may be compared.
  • the server compares the simulated portrait with the recognition target avatar according to a preset rule; when the similarity exceeds the preset threshold, the information of the successful comparison is sent to the client, prompting the user that the face recognition comparison of the recognition target is successful.
  • after step S503, the method further includes the following steps:
  • rather than stopping at the first success, the comparison data (including the number of successful comparisons, similarity data, etc.) can be recorded and stored each time a comparison succeeds, with centralised processing performed after all simulated portraits have been compared. The centralised processing may be another round of manual identification; alternatively, a similarity value higher than the preset threshold may be set and step S50 repeated, or a similarity ratio higher than the preset ratio may be set as required and step S30 performed again.
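The record-then-centralise strategy above can be sketched as follows; the result-record fields ("portrait", "similarity") are illustrative assumptions:

```python
def collect_comparisons(results, threshold):
    """Record comparison data for every simulated portrait instead of
    stopping at the first success, then summarise the successes for
    centralised processing. Field names are assumed, not from the patent."""
    successes = [r for r in results if r["similarity"] > threshold]
    return {
        "success_count": len(successes),
        "similarities": [r["similarity"] for r in successes],
        "portraits": [r["portrait"] for r in successes],
    }
```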
  • alternatively, the comparison can be stopped each time a success is prompted, and the simulated portrait immediately passed to the next stage of processing, such as manual recognition.
  • during each comparison, the user can also issue a suspend or resume command, causing the server to suspend or resume the current operation.
  • This application first performs preliminary screening on the avatars of persons to be identified based on the similarity ratio of face contours; on this basis, a portrait of the recognition target is simulated and compared with the recognition target avatar again to track the recognition target.
  • This application can reduce misjudgements caused by human factors; applied to the public security system, it can improve the efficiency of the police in catching fugitives, and fugitives will not slip away because of insufficient police strength or slackness of personnel.
  • a face recognition device is provided, and the face recognition device corresponds to the face recognition method in the above embodiment one-to-one.
  • the face recognition device includes a receiving module 11, an obtaining module 12, a face comparison module 13, a portrait simulation module 14, and a portrait comparison module 15.
  • the detailed description of each function module is as follows:
  • the receiving module 11 is configured to receive a recognition instruction and obtain a recognition target avatar
  • the obtaining module 12 is configured to obtain an avatar of a person to be identified
  • the face shape comparison module 13 is configured to compare the face contour of the avatar of the person to be identified with that of the recognition target avatar, to detect whether the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds the preset ratio;
  • the portrait simulation module 14 is configured to perform portrait simulation of facial features on the avatar of the person to be identified to generate a simulated portrait, when the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds the preset ratio;
  • the portrait comparison module 15 is configured to compare the simulated portrait with the recognition target avatar, and prompt that the comparison is successful when the similarity between the simulated portrait and the recognition target avatar exceeds a preset threshold.
  • the portrait simulation module 14 includes a startup sub-module 141 and a generation sub-module 142;
  • the starting sub-module 141 is configured to start portrait simulation of the recognition target corresponding to the recognition target avatar when the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds the preset ratio, using the face contour of the avatar of the person to be identified as the face contour of the simulated portrait;
  • the generating sub-module 142 is configured to retrieve preset facial features in a simulation database, and place each of the facial features in a preset position in a facial contour of the simulated portrait to generate the simulated portrait.
  • the generating sub-module 142 further includes a combining unit 1421 and a generating unit 1422;
  • the combining unit 1421 is configured to retrieve preset facial features in the simulation database, and perform preset arrangement and combination of the retrieved facial features to generate multiple groups of facial feature combinations;
  • the generating unit 1422 is configured to place each facial feature in each group of the facial feature combination correspondingly into a preset position in a face contour of the simulated portrait to generate the simulated portrait.
  • the portrait simulation module 14 further includes an acquisition sub-module 143 and a storage sub-module 144;
  • the acquisition sub-module 143 is configured to acquire other facial features in the avatar of the person to be identified except for a facial contour;
  • the storage sub-module 144 is configured to store the facial features other than the face contour in the avatar of the person to be identified into the simulation database, and to set the retrieval order of those facial features in the simulation database as priority retrieval.
  • the portrait comparison module is further configured to: obtain the simulated portrait; compare the simulated portrait with the recognition target avatar and determine whether the similarity between them exceeds the preset threshold; prompt that the comparison is successful when the similarity exceeds the preset threshold; and return to the step of generating a simulated portrait when it does not.
  • Each module in the above-mentioned face recognition device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • each of the above modules may be embedded in, or independent of, the processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 12.
  • the computer device includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer-readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium.
  • the database of the computer device is used to store facial features, recognition target avatars, videos or images taken by the monitoring device, and simulated portraits.
  • the network interface of the computer device is used to communicate with external terminals through a network connection.
  • the computer-readable instructions are executed by the processor to implement a face recognition method.
  • a computer device including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • when the processor executes the computer-readable instructions, the following steps are implemented:
  • one or more non-volatile readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, they cause the one or more processors to perform the following steps:
  • the computer-readable instructions may be stored in a non-volatile computer-readable storage medium.
  • when executed, the computer-readable instructions may include the processes of the embodiments of the methods described above.
  • any reference to the memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and / or volatile memory.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A face recognition method, apparatus, computer device and storage medium. The method includes: receiving a recognition instruction and obtaining a recognition target avatar (S10); obtaining an avatar of a person to be identified (S20); comparing the face contour of the avatar of the person to be identified with that of the recognition target avatar, and, for avatars whose face-contour similarity ratio exceeds a preset ratio, performing portrait simulation of facial features to generate a simulated portrait; and comparing the generated simulated portrait with the recognition target avatar, and prompting that the comparison is successful when the similarity exceeds a preset threshold. The method introduces face portrait simulation into a face recognition system, which not only reduces misjudgements caused by human factors, but also allows tracking to continue when the person to be identified modifies their own facial features, improving the user's chances of finding the tracking target.

Description

Face recognition method, apparatus, computer device and storage medium
This application is based on the Chinese invention patent application No. 201810843420.4, filed on July 27, 2018 and entitled "Face recognition method, apparatus, computer device and storage medium", and claims priority to it.
Technical Field
This application relates to the field of face recognition technology, and in particular to a face recognition method, apparatus, computer device and storage medium.
Background
With the development of science and technology, face recognition technology has been widely used in many scenarios; for example, the public security system performs face recognition on criminal suspects in order to track them. However, current face recognition technology usually compares the original avatar image of the person being identified with the tracking target's avatar one-to-one, and pursues the criminal suspect on the basis of that comparison. Its shortcoming is that, in order to evade capture, cunning criminals often modify or change the parts of their appearance that are relatively easy to alter, making it difficult for the human eye to recognise their original features; the simple one-to-one face recognition of the prior art thus loses its meaning, and its tracking effect is unsatisfactory.
Summary
On this basis, it is necessary, in view of the above technical problem, to provide a face recognition method, apparatus, computer device and storage medium for improving the precision and accuracy with which a tracking target is identified and increasing the probability of finding the tracking target.
A face recognition method includes:
receiving a recognition instruction, and obtaining a recognition target avatar;
obtaining an avatar of a person to be identified;
comparing the face contour of the avatar of the person to be identified with that of the recognition target avatar, to detect whether the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds a preset ratio;
when the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds the preset ratio, performing portrait simulation of facial features on the avatar of the person to be identified to generate a simulated portrait;
comparing the simulated portrait with the recognition target avatar, and prompting that the comparison is successful when the similarity between the simulated portrait and the recognition target avatar exceeds a preset threshold.
A face recognition apparatus includes:
a receiving module, configured to receive a recognition instruction and obtain a recognition target avatar;
an acquisition module, configured to obtain an avatar of a person to be identified;
a face shape comparison module, configured to compare the face contour of the avatar of the person to be identified with that of the recognition target avatar, to detect whether the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds a preset ratio;
a portrait simulation module, configured to perform portrait simulation of facial features on the avatar of the person to be identified to generate a simulated portrait when the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds the preset ratio;
a portrait comparison module, configured to compare the simulated portrait with the recognition target avatar, and prompt that the comparison is successful when the similarity between the simulated portrait and the recognition target avatar exceeds a preset threshold.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the processor executes the computer-readable instructions, the following steps are implemented:
receiving a recognition instruction, and obtaining a recognition target avatar;
obtaining an avatar of a person to be identified;
comparing the face contour of the avatar of the person to be identified with that of the recognition target avatar, to detect whether the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds a preset ratio;
when the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds the preset ratio, performing portrait simulation of facial features on the avatar of the person to be identified to generate a simulated portrait;
comparing the simulated portrait with the recognition target avatar, and prompting that the comparison is successful when the similarity between the simulated portrait and the recognition target avatar exceeds a preset threshold.
One or more non-volatile readable storage media storing computer-readable instructions, where the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
receiving a recognition instruction, and obtaining a recognition target avatar;
obtaining an avatar of a person to be identified;
comparing the face contour of the avatar of the person to be identified with that of the recognition target avatar, to detect whether the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds a preset ratio;
when the face-contour similarity ratio of the avatar of the person to be identified and the recognition target avatar exceeds the preset ratio, performing portrait simulation of facial features on the avatar of the person to be identified to generate a simulated portrait;
comparing the simulated portrait with the recognition target avatar, and prompting that the comparison is successful when the similarity between the simulated portrait and the recognition target avatar exceeds a preset threshold.
The details of one or more embodiments of this application are set forth in the drawings and description below; other features and advantages of this application will become apparent from the specification, the drawings and the claims.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of the application environment of a face recognition method in an embodiment of this application;
FIG. 2 is a flowchart of a face recognition method in an embodiment of this application;
FIG. 3 is a flowchart of step S40 of a face recognition method in an embodiment of this application;
FIG. 4 is a flowchart of step S402 of a face recognition method in an embodiment of this application;
FIG. 5 is a flowchart of step S40 of a face recognition method in another embodiment of this application;
FIG. 6 is a flowchart of a face recognition method in another embodiment of this application;
FIG. 7 is a flowchart of step S50 of a face recognition method in an embodiment of this application;
FIG. 8 is a schematic diagram of a face recognition apparatus in an embodiment of this application;
FIG. 9 is a schematic diagram of the portrait simulation module of a face recognition apparatus in an embodiment of this application;
FIG. 10 is a schematic diagram of the generation sub-module of a face recognition apparatus in an embodiment of this application;
FIG. 11 is a schematic diagram of the portrait simulation module of a face recognition apparatus in another embodiment of this application;
FIG. 12 is a schematic diagram of a computer device in an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.
The face recognition method provided in this application can be applied in the environment shown in FIG. 1, in which a client (computer device) communicates with a server over a network. The client issues a recognition instruction; the server performs a series of processing steps on the videos or images of persons to be recognized captured in real time or at scheduled intervals by monitoring devices communicatively connected to the server, generates simulated portraits, and prompts whether the comparison between a simulated portrait and the face image of the recognition target succeeded. The client includes, but is not limited to, personal computers, laptops, smartphones, tablets, cameras, and portable wearable devices. The server may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a face recognition method is provided. Taking its application to the server in FIG. 1 as an example, the method includes the following steps:
S10: receive a recognition instruction and obtain a face image of the recognition target.
The recognition instruction is sent to the back-end server when the user clicks a preset button on the client. After receiving the recognition instruction, the back-end server retrieves the face image of the recognition target according to the target information contained in the instruction. The recognition target is the object the user wants to track, for example a criminal suspect the public security system wants to pursue, or any other tracking target set by the user as needed. The face image of the recognition target is stored in a tracking database; for example, when the recognition target is a criminal suspect, the face image is usually already stored in the tracking database of the public security system. The preset tracking database may also store other information about the recognition target as required, such as name, gender, identity information, and criminal record, so that after a simulated portrait is successfully matched, the corresponding information about the recognition target can be retrieved and displayed directly.
S20: obtain a face image of a person to be recognized.
The face image of the person to be recognized can be obtained from monitoring devices communicatively connected to the server. The cameras of the monitoring devices can capture videos or images of persons to be recognized in real time or at scheduled intervals and transmit them to the server, which crops the head shots of the persons to be recognized according to a preset specification. This cropping step can also be delegated to a dedicated image server connected to the monitoring devices and the back-end server; the image server obtains the captured videos or images, crops the head shots from them, and converts them into face images of the preset specification. The preset specification can be set according to user needs, for example: image size, pixel requirements, brightness and contrast, and preprocessing such as removing duplicate head shots while retaining the one with the highest resolution and best shooting angle. The image server stores the face images that have been converted to the preset specification and, upon request from the back-end server, transfers the requested face images to it. When no image server exists, all operations performed by the image server can be performed by the back-end server.
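The preprocessing described above (enforcing a preset specification, filtering on quality, deduplicating head shots while keeping the best capture) can be sketched as follows. This is a minimal illustrative stand-in, not the patent's implementation: the `PortraitSpec` fields, the frame dictionaries, and the brightness cutoff are all invented for the example, and the resampling to the preset size is only simulated.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortraitSpec:
    # Hypothetical preset specification: target size and a minimum brightness.
    width: int = 128
    height: int = 128
    min_brightness: float = 0.2

def normalize_portraits(frames, spec=PortraitSpec()):
    """Deduplicate captured head shots: per person, keep the frame with the
    highest resolution among those meeting the brightness requirement, then
    tag it with the preset size (standing in for actual resampling).

    `frames` is a list of dicts: {"person_id", "pixels", "brightness"}.
    """
    best = {}
    for f in frames:
        if f["brightness"] < spec.min_brightness:
            continue  # too dark to be useful, drop it
        cur = best.get(f["person_id"])
        if cur is None or f["pixels"] > cur["pixels"]:
            best[f["person_id"]] = f  # keep the sharper duplicate
    # Every retained portrait is converted to the preset size (simulated here).
    return [{**f, "size": (spec.width, spec.height)} for f in best.values()]
```

In a real deployment this role would be played by the image server's actual resize/crop pipeline; the point here is only the keep-the-best deduplication and the preset-specification tagging.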
S30: compare the face contour of the face image of the person to be recognized with that of the face image of the recognition target, to detect whether the contour similarity ratio between the two exceeds a preset ratio.
In this embodiment, the face contour refers to the shape of the face. Among facial features, the face shape is relatively hard to disguise or change by means such as makeup; altering it may require surgery, which is easy to trace through hospital records and the like. This embodiment therefore first uses the hard-to-change face shape to preliminarily lock onto the recognition target, which gives relatively high locking accuracy.
The preset ratio can be set according to user needs; it can be entered manually after learning from and summarizing the user's previous tracking data, or an initial preset ratio can be set automatically in the system. Preferably, the preset ratio is in the range of 0.50 to 0.55, which gives good tracking results.
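As a sketch of how the preset-ratio screen of step S30 might work, the following compares two face contours, each represented as an equal-length sequence of normalized radial distances from the face center. Both the representation and the similarity metric are assumptions made for illustration; the patent does not prescribe a particular contour metric.

```python
def contour_similarity(contour_a, contour_b):
    """Similarity ratio between two face contours, each an equal-length
    sequence of normalized radial distances. Returns a value in [0, 1]:
    1.0 for identical contours, lower as they diverge. Illustrative only."""
    assert len(contour_a) == len(contour_b)
    diffs = [abs(a - b) / max(a, b) for a, b in zip(contour_a, contour_b)]
    return 1.0 - sum(diffs) / len(diffs)

PRESET_RATIO = 0.50  # lower bound of the 0.50-0.55 range suggested above

def passes_contour_screen(candidate, target, preset_ratio=PRESET_RATIO):
    """Step S30's decision: does the contour similarity exceed the ratio?"""
    return contour_similarity(candidate, target) > preset_ratio
```

A candidate that passes this screen would go on to portrait simulation (step S40); one that fails would be skipped in favor of the next captured face image.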
S40: when the contour similarity ratio between the face image of the person to be recognized and the face image of the recognition target exceeds the preset ratio, perform portrait simulation of facial features on the face image of the person to be recognized to generate a simulated portrait.
The facial features include, but are not limited to, ears, eyebrows, eyes, mouth, and hair. These common facial features and their shapes can be classified and stored in a simulation database, each associated with a unique label. The simulation database pre-stores many typical facial features, of different types and belonging to different people (for example, different eye shapes); when the user needs to perform portrait simulation, these typical facial features are retrieved from the simulation database directly via their unique labels to simulate a portrait of the person to be recognized. When the criminal suspect (recognition target) also has specific facial features such as scars or tattoos that need to be added to the simulated portrait, they can be retrieved directly from the tracking database or the simulation database if stored there; otherwise, such features can be added manually.
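A minimal stand-in for the simulation database described above might look as follows: typical facial features grouped by category, each stored under a unique label, with a hook for manually adding target-specific marks such as scars. The categories, labels, and descriptions here are invented for illustration; the patent only requires that features be classified and retrievable by unique label.

```python
# Hypothetical simulation database: category -> {unique label -> feature}.
SIMULATION_DB = {
    "eye":     {"E01": "round", "E02": "narrow", "E03": "hooded"},
    "eyebrow": {"B01": "straight", "B02": "arched"},
    "mouth":   {"M01": "thin", "M02": "full"},
}

def fetch_feature(category, label, db=SIMULATION_DB):
    """Retrieve one preset facial feature directly by its unique label."""
    return db[category][label]

def add_special_feature(db, category, label, description):
    """Manually add a distinguishing mark (e.g. a scar or tattoo) that is
    not yet stored, as the text allows for target-specific features."""
    db.setdefault(category, {})[label] = description
```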
S50: compare the simulated portrait with the face image of the recognition target, and prompt that the comparison succeeded when the similarity between the two exceeds a preset threshold.
A successful comparison means that the face image of the person to be recognized matches the face image of the recognition target, i.e., the person to be recognized is the recognition target. The comparison between the simulated portrait and the face image of the recognition target can be made over the whole images or only over the facial features; the preset similarity threshold can be set as needed.
The face recognition method of this embodiment introduces facial portrait simulation into the face recognition system, which not only reduces misjudgments caused by human factors but also makes it possible to keep tracking a person to be recognized who has altered his or her facial features, increasing the probability that the user finds the tracking target.
As shown in FIG. 6, the method of this application further includes the following steps after step S30:
S60: when the contour similarity ratio between the face image of the person to be recognized and the face image of the recognition target does not exceed the preset ratio, return to obtaining the next face image of a person to be recognized. That is, when the contour similarity ratio does not exceed the preset ratio, proceed directly to comparing the next face image.
S70: when no further face image of a person to be recognized can be obtained, issue a manual-recognition prompt. That is, once all face images have been processed and no more can be obtained, every person captured in the surveillance footage has been checked without finding the tracking target; a manual-recognition prompt is then issued and the process is handed over to manual handling. At this point, a similarity ratio lower than the preset ratio can be set as needed and step S30 re-executed.
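The control flow of steps S30, S60, and S70 above can be sketched as a single pass over the captured portraits: keep any candidate whose contour similarity exceeds the preset ratio, skip the rest, and fall back to manual review when the stream is exhausted without a match. The candidate and similarity representations are assumptions carried over for illustration.

```python
def screen_candidates(candidates, target_contour, preset_ratio, similar):
    """Walk the captured portraits in order (steps S30/S60/S70).

    `similar(a, b)` is any contour-similarity function returning [0, 1].
    Returns the candidates that passed the screen, or a manual-recognition
    prompt when none did."""
    matches = []
    for cand in candidates:
        if similar(cand["contour"], target_contour) > preset_ratio:
            matches.append(cand)  # goes on to portrait simulation (S40)
        # else: fall through to the next portrait (S60)
    if not matches:
        return "manual-recognition prompt"  # S70: nothing passed the screen
    return matches
```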
As shown in FIG. 3, in one embodiment, step S40 includes the following steps:
S401: when the contour similarity ratio between the face image of the person to be recognized and the face image of the recognition target exceeds the preset ratio, start portrait simulation for the recognition target corresponding to the face image of the recognition target, using the face contour of the face image of the person to be recognized as the face contour of the simulated portrait.
In this embodiment, once the contour similarity ratio is confirmed to exceed the preset ratio, the face contour of the person to be recognized is retained as the contour of the simulated portrait, because the other facial features may easily have been altered; those other features can be retrieved directly from the simulation database. Specific facial features such as scars and tattoos can also be added at this point as part of the contour of the simulated portrait, so that they need not be added repeatedly in subsequent steps.
S402: retrieve preset facial features from the simulation database, place each preset facial feature at its preset position within the face contour of the simulated portrait, and generate the simulated portrait.
In this embodiment, the preset facial features are categories set by the user as needed; the user may select one or more of ears, eyebrows, nose, mouth, hair, and so on as preset facial features, and corresponding preset positions for placing each of them are generated on the simulated portrait. When the categories set by the user do not cover all facial features, the facial features the user did not select as preset facial features can also be treated, in step S401 above, as part of the face contour of the simulated portrait.
As shown in FIG. 4, in one embodiment, step S402 includes the following steps:
S4021: arrange and combine the retrieved preset facial features according to preset rules to generate multiple facial-feature combinations.
In this embodiment, arranging and combining according to preset rules means retrieving the preset facial features in order of priority (or at random if no priority is set) and then switching each preset facial feature in turn, thereby generating multiple facial-feature combinations.
S4022: place each facial feature of each facial-feature combination at its preset position within the face contour of the simulated portrait, and generate the simulated portrait.
Once a facial-feature combination has been generated, it can be placed at the preset positions within the face contour to generate a simulated portrait for subsequent comparison with the face image of the recognition target. After a successful comparison, generation of further combinations can be paused and resumed once it is confirmed that the matched combination does not correspond to the recognition target; alternatively, combinations can continue to be generated in the background after a successful comparison, ready for direct retrieval in the next comparison.
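The enumeration in step S4021 is essentially a cartesian product over feature categories, with priority features tried first within each category. One way to sketch it, under the same invented database shape as before, is as a lazy generator, which mirrors the text's ability to pause generation after a successful comparison and resume later:

```python
from itertools import product

def feature_combinations(db, priority=None):
    """Yield facial-feature combinations one at a time (cf. step S4021).

    `db` maps category -> {label -> feature}; `priority` optionally maps
    category -> a set of labels to try first. The cartesian product over
    categories produces successive combinations lazily."""
    priority = priority or {}
    per_category = []
    for category in sorted(db):
        preferred = priority.get(category, ())
        # Priority labels sort first (False < True), then alphabetically.
        labels = sorted(db[category], key=lambda l: (l not in preferred, l))
        per_category.append([(category, l) for l in labels])
    for combo in product(*per_category):
        yield dict(combo)  # one candidate assignment of labels to categories
```

Because this is a generator, a caller can `next()` one combination, render and compare the resulting portrait, and only pull the next combination if the comparison fails or is later disqualified.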
As shown in FIG. 5, in another embodiment, the method further includes the following steps before step S402:
S403: obtain the facial features, other than the face contour, of the face image of the person to be recognized.
Making the original features of the face image of the person to be recognized available in the simulation database for the back-end server to retrieve may make tracking more efficient, because the recognition target's alterations may not involve all facial features; storing the retained features in the simulation database for later retrieval therefore improves recognition efficiency.
S404: classify and store the obtained facial features of the face image of the person to be recognized in the simulation database, and set their retrieval order to priority retrieval.
That is, in this step, the facial features of the face image of the person to be recognized are classified and stored in the simulation database, and a priority setting makes the original facial features of that face image the first to be retrieved. If the recognition target's alterations do not involve all facial features, this helps the user find the recognition target faster.
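Steps S403/S404 amount to filing the candidate's own extracted features into the database by category and flagging them for priority retrieval. A schematic version, with an invented `extracted` shape of category -> (label, value), might be:

```python
def store_candidate_features(db, priority, extracted):
    """File the candidate's own features (other than the face contour) into
    the simulation database by category and mark them for priority
    retrieval (cf. steps S403/S404).

    `db` maps category -> {label -> feature}; `priority` maps category -> a
    set of labels to try first; `extracted` maps category -> (label, value)."""
    for category, (label, value) in extracted.items():
        db.setdefault(category, {})[label] = value       # classify and store
        priority.setdefault(category, set()).add(label)  # retrieve first
    return db, priority
```

The resulting `priority` mapping is exactly what a combination generator would consult so that the candidate's original features are tried before the generic presets.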
As shown in FIG. 7, in one embodiment, step S50 includes the following steps:
S501: obtain the simulated portrait; that is, the server retrieves from the simulation database a simulated portrait produced in step S40.
S502: compare the simulated portrait with the face image of the recognition target, and determine whether the similarity between the two exceeds the preset threshold. The comparison can be made over the whole images or, when the recognition target has a unique facial feature, only over the facial features.
S503: when the similarity between the simulated portrait and the face image of the recognition target exceeds the preset threshold, prompt that the comparison succeeded. The server compares the simulated portrait with the face image of the recognition target according to preset rules; when the similarity exceeds the preset threshold, it sends a success message to the client, prompting the user that the face recognition comparison for the recognition target succeeded.
S504: when the similarity between the simulated portrait and the face image of the recognition target does not exceed the preset threshold, return to the step of generating a simulated portrait; that is, return to retrieving from the simulation database a simulated portrait produced in step S40, i.e., return to step S501.
Step S503 is further followed by the step:
S505: record the data of this comparison, store the simulated portrait, return to continue obtaining simulated portraits, and, when obtaining a simulated portrait fails, retrieve all stored simulated portraits whose comparisons succeeded.
In this step, after a preset similarity threshold has been set for comparison, every time a successful comparison is prompted, the comparison data (including which attempt succeeded, the similarity score, and so on) is recorded and the simulated portrait is stored; once all simulated portraits have been compared, they are processed together. This centralized processing can consist of renewed manual identification, or of setting a comparison similarity higher than the preset threshold and re-executing step S50, or of setting a contour similarity ratio higher than the preset ratio as needed and re-executing step S30.
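The loop of steps S501 through S505 can be sketched as follows: score every simulated portrait against the target, log each success with its attempt number and score, and hand back the accumulated successes once no further portrait can be fetched. The `similarity` scoring function and the log-entry fields are placeholders for whatever comparison the system actually uses.

```python
def compare_portraits(simulated_portraits, target, similarity, threshold):
    """Compare each simulated portrait with the target (cf. S501-S505):
    record every success, and once the stream of portraits is exhausted
    (fetching fails), return all recorded successes for centralized
    processing, e.g. manual review or re-screening at a higher threshold."""
    log = []
    for attempt, portrait in enumerate(simulated_portraits, start=1):
        score = similarity(portrait, target)
        if score > threshold:
            # S503/S505: prompt success, record which attempt and the score,
            # and store the matching portrait.
            log.append({"attempt": attempt, "score": score, "portrait": portrait})
        # S504: below threshold, simply move on to the next portrait.
    return log
```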
In one embodiment, each time a successful comparison is prompted, the comparison can also be stopped immediately and the simulated portrait passed on to the next processing step, such as manual identification; during any comparison, the user can issue a pause or resume instruction to make the server pause or resume the current operation.
This application first preliminarily screens the face images of persons to be recognized by the contour similarity ratio, then on that basis performs portrait simulation of the recognition target and compares the result with the face image of the recognition target again, so as to track the recognition target. This application can reduce misjudgments caused by human factors, and when applied in a public security system it can improve the efficiency with which the police capture fugitives, preventing fugitives from remaining at large because of insufficient police forces or staff negligence.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
In one embodiment, a face recognition apparatus is provided, corresponding one-to-one to the face recognition method in the above embodiments. As shown in FIG. 8, the face recognition apparatus includes a receiving module 11, an obtaining module 12, a face-contour comparison module 13, a portrait simulation module 14, and a portrait comparison module 15. The functional modules are described in detail as follows:
the receiving module 11, configured to receive a recognition instruction and obtain a face image of a recognition target;
the obtaining module 12, configured to obtain a face image of a person to be recognized;
the face-contour comparison module 13, configured to compare the face contour of the face image of the person to be recognized with that of the face image of the recognition target, to detect whether the contour similarity ratio between the two exceeds a preset ratio;
the portrait simulation module 14, configured to perform portrait simulation of facial features on the face image of the person to be recognized and generate a simulated portrait when the contour similarity ratio exceeds the preset ratio; and
the portrait comparison module 15, configured to compare the simulated portrait with the face image of the recognition target, and to prompt that the comparison succeeded when the similarity between the two exceeds a preset threshold.
In one embodiment, as shown in FIG. 9, the portrait simulation module 14 includes a starting submodule 141 and a generation submodule 142:
the starting submodule 141, configured to start portrait simulation for the recognition target corresponding to the face image of the recognition target when the contour similarity ratio exceeds the preset ratio, using the face contour of the face image of the person to be recognized as the face contour of the simulated portrait; and
the generation submodule 142, configured to retrieve preset facial features from the simulation database, place each facial feature at its preset position within the face contour of the simulated portrait, and generate the simulated portrait.
In one embodiment, as shown in FIG. 10, the generation submodule 142 further includes a combination unit 1421 and a generation unit 1422:
the combination unit 1421, configured to retrieve the preset facial features from the simulation database and arrange and combine them according to preset rules to generate multiple facial-feature combinations; and
the generation unit 1422, configured to place each facial feature of each facial-feature combination at its preset position within the face contour of the simulated portrait and generate the simulated portrait.
In one embodiment, as shown in FIG. 11, the portrait simulation module 14 further includes an obtaining submodule 143 and a storage submodule 144:
the obtaining submodule 143, configured to obtain the facial features, other than the face contour, of the face image of the person to be recognized; and
the storage submodule 144, configured to classify and store the obtained facial features, other than the face contour, of the face image of the person to be recognized in the simulation database, and to set their retrieval order in the simulation database to priority retrieval.
In one embodiment, the portrait comparison module is further configured to obtain the simulated portrait; compare the simulated portrait with the face image of the recognition target and determine whether the similarity between the two exceeds the preset threshold; prompt that the comparison succeeded when the similarity exceeds the preset threshold; and return to the step of generating a simulated portrait when the similarity does not exceed the preset threshold.
For the specific limitations of the face recognition apparatus, reference may be made to the limitations of the face recognition method above, which are not repeated here. Each module of the face recognition apparatus may be implemented wholly or partly in software, hardware, or a combination of both. The modules may be embedded in or independent of a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, whose internal structure may be as shown in FIG. 12. The computer device includes a processor, a memory, a network interface, and a database connected via a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer-readable storage medium, and a database, and the internal memory provides an environment for running the operating system and the computer-readable storage medium in the non-volatile storage medium. The database of the computer device stores facial features, face images of recognition targets, videos or images captured by monitoring devices, and simulated portraits. The network interface of the computer device communicates with external terminals over a network connection. The computer-readable storage medium, when executed by the processor, implements a face recognition method.
In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor, when executing the computer-readable instructions, implements the following steps:
receiving a recognition instruction and obtaining a face image of a recognition target;
obtaining a face image of a person to be recognized;
comparing the face contour of the face image of the person to be recognized with that of the face image of the recognition target, to detect whether the contour similarity ratio between the two exceeds a preset ratio;
when the contour similarity ratio exceeds the preset ratio, performing portrait simulation of facial features on the face image of the person to be recognized to generate a simulated portrait; and
comparing the simulated portrait with the face image of the recognition target, and prompting that the comparison succeeded when the similarity between the two exceeds a preset threshold.
In one embodiment, one or more non-volatile readable storage media storing computer-readable instructions are provided, where the computer-readable instructions, when executed by one or more processors, cause the one or more processors to implement the following steps:
receiving a recognition instruction and obtaining a face image of a recognition target;
obtaining a face image of a person to be recognized;
comparing the face contour of the face image of the person to be recognized with that of the face image of the recognition target, to detect whether the contour similarity ratio between the two exceeds a preset ratio;
when the contour similarity ratio exceeds the preset ratio, performing portrait simulation of facial features on the face image of the person to be recognized to generate a simulated portrait; and
comparing the simulated portrait with the face image of the recognition target, and prompting that the comparison succeeded when the similarity between the two exceeds a preset threshold. A person of ordinary skill in the art will understand that all or part of the processes of the methods of the above embodiments can be implemented by computer-readable instructions instructing the relevant hardware; the computer-readable instructions can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art will clearly understand that, for convenience and brevity of description, the division into the functional units and modules above is only given as an example; in practical applications, the above functions may be allocated to different functional units or modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all fall within the scope of protection of this application.

Claims (20)

  1. A face recognition method, comprising:
    receiving a recognition instruction and obtaining a face image of a recognition target;
    obtaining a face image of a person to be recognized;
    comparing the face contour of the face image of the person to be recognized with that of the face image of the recognition target, to detect whether the contour similarity ratio between the two exceeds a preset ratio;
    when the contour similarity ratio exceeds the preset ratio, performing portrait simulation of facial features on the face image of the person to be recognized to generate a simulated portrait; and
    comparing the simulated portrait with the face image of the recognition target, and prompting that the comparison succeeded when the similarity between the two exceeds a preset threshold.
  2. The face recognition method according to claim 1, wherein performing portrait simulation of facial features on the face image of the person to be recognized to generate a simulated portrait when the contour similarity ratio exceeds the preset ratio comprises:
    when the contour similarity ratio exceeds the preset ratio, starting portrait simulation for the recognition target corresponding to the face image of the recognition target, using the face contour of the face image of the person to be recognized as the face contour of the simulated portrait; and
    retrieving preset facial features from a simulation database, placing each preset facial feature at its preset position within the face contour of the simulated portrait, and generating the simulated portrait.
  3. The face recognition method according to claim 2, wherein retrieving preset facial features from the simulation database, placing each preset facial feature at its preset position within the face contour of the simulated portrait, and generating the simulated portrait comprises:
    arranging and combining the retrieved preset facial features according to preset rules to generate multiple facial-feature combinations; and
    placing each facial feature of each facial-feature combination at its preset position within the face contour of the simulated portrait and generating the simulated portrait.
  4. The face recognition method according to claim 2, comprising, before retrieving preset facial features from the simulation database, placing each preset facial feature at its preset position within the face contour of the simulated portrait, and generating the simulated portrait:
    obtaining the facial features, other than the face contour, of the face image of the person to be recognized; and
    classifying and storing the obtained facial features, other than the face contour, of the face image of the person to be recognized in the simulation database, and setting their retrieval order in the simulation database to priority retrieval.
  5. The face recognition method according to claim 1, wherein comparing the simulated portrait with the face image of the recognition target and prompting that the comparison succeeded when the similarity between the two exceeds the preset threshold comprises:
    obtaining the simulated portrait;
    comparing the simulated portrait with the face image of the recognition target, and determining whether the similarity between the two exceeds the preset threshold;
    prompting that the comparison succeeded when the similarity between the two exceeds the preset threshold; and
    returning to the step of generating a simulated portrait when the similarity between the two does not exceed the preset threshold.
  6. A face recognition apparatus, comprising:
    a receiving module, configured to receive a recognition instruction and obtain a face image of a recognition target;
    an obtaining module, configured to obtain a face image of a person to be recognized;
    a face-contour comparison module, configured to compare the face contour of the face image of the person to be recognized with that of the face image of the recognition target, to detect whether the contour similarity ratio between the two exceeds a preset ratio;
    a portrait simulation module, configured to perform portrait simulation of facial features on the face image of the person to be recognized and generate a simulated portrait when the contour similarity ratio exceeds the preset ratio; and
    a portrait comparison module, configured to compare the simulated portrait with the face image of the recognition target, and to prompt that the comparison succeeded when the similarity between the two exceeds a preset threshold.
  7. The face recognition apparatus according to claim 6, wherein the portrait simulation module comprises:
    a starting submodule, configured to start portrait simulation for the recognition target corresponding to the face image of the recognition target when the contour similarity ratio exceeds the preset ratio, using the face contour of the face image of the person to be recognized as the face contour of the simulated portrait; and
    a generation submodule, configured to retrieve preset facial features from a simulation database, place each preset facial feature at its preset position within the face contour of the simulated portrait, and generate the simulated portrait.
  8. The face recognition apparatus according to claim 7, wherein the generation submodule comprises:
    a combination unit, configured to arrange and combine the retrieved preset facial features according to preset rules to generate multiple facial-feature combinations; and
    a generation unit, configured to place each facial feature of each facial-feature combination at its preset position within the face contour of the simulated portrait and generate the simulated portrait.
  9. The face recognition apparatus according to claim 7, wherein the portrait simulation module further comprises:
    an obtaining submodule, configured to obtain the facial features, other than the face contour, of the face image of the person to be recognized; and
    a storage submodule, configured to classify and store the obtained facial features, other than the face contour, of the face image of the person to be recognized in the simulation database, and to set their retrieval order in the simulation database to priority retrieval.
  10. The face recognition apparatus according to claim 6, wherein the portrait comparison module is further configured to: obtain the simulated portrait; compare the simulated portrait with the face image of the recognition target and determine whether the similarity between the two exceeds the preset threshold; prompt that the comparison succeeded when the similarity between the two exceeds the preset threshold; and return to the step of generating a simulated portrait when the similarity between the two does not exceed the preset threshold.
  11. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps:
    receiving a recognition instruction and obtaining a face image of a recognition target;
    obtaining a face image of a person to be recognized;
    comparing the face contour of the face image of the person to be recognized with that of the face image of the recognition target, to detect whether the contour similarity ratio between the two exceeds a preset ratio;
    when the contour similarity ratio exceeds the preset ratio, performing portrait simulation of facial features on the face image of the person to be recognized to generate a simulated portrait; and
    comparing the simulated portrait with the face image of the recognition target, and prompting that the comparison succeeded when the similarity between the two exceeds a preset threshold.
  12. The computer device according to claim 11, wherein performing portrait simulation of facial features on the face image of the person to be recognized to generate a simulated portrait when the contour similarity ratio exceeds the preset ratio comprises:
    when the contour similarity ratio exceeds the preset ratio, starting portrait simulation for the recognition target corresponding to the face image of the recognition target, using the face contour of the face image of the person to be recognized as the face contour of the simulated portrait; and
    retrieving preset facial features from a simulation database, placing each preset facial feature at its preset position within the face contour of the simulated portrait, and generating the simulated portrait.
  13. The computer device according to claim 12, wherein retrieving preset facial features from the simulation database, placing each preset facial feature at its preset position within the face contour of the simulated portrait, and generating the simulated portrait comprises:
    arranging and combining the retrieved preset facial features according to preset rules to generate multiple facial-feature combinations; and
    placing each facial feature of each facial-feature combination at its preset position within the face contour of the simulated portrait and generating the simulated portrait.
  14. The computer device according to claim 12, comprising, before retrieving preset facial features from the simulation database, placing each preset facial feature at its preset position within the face contour of the simulated portrait, and generating the simulated portrait:
    obtaining the facial features, other than the face contour, of the face image of the person to be recognized; and
    classifying and storing the obtained facial features, other than the face contour, of the face image of the person to be recognized in the simulation database, and setting their retrieval order in the simulation database to priority retrieval.
  15. The computer device according to claim 11, wherein comparing the simulated portrait with the face image of the recognition target and prompting that the comparison succeeded when the similarity between the two exceeds the preset threshold comprises:
    obtaining the simulated portrait;
    comparing the simulated portrait with the face image of the recognition target, and determining whether the similarity between the two exceeds the preset threshold;
    prompting that the comparison succeeded when the similarity between the two exceeds the preset threshold; and
    returning to the step of generating a simulated portrait when the similarity between the two does not exceed the preset threshold.
  16. One or more non-volatile readable storage media storing computer-readable instructions, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
    receiving a recognition instruction and obtaining a face image of a recognition target;
    obtaining a face image of a person to be recognized;
    comparing the face contour of the face image of the person to be recognized with that of the face image of the recognition target, to detect whether the contour similarity ratio between the two exceeds a preset ratio;
    when the contour similarity ratio exceeds the preset ratio, performing portrait simulation of facial features on the face image of the person to be recognized to generate a simulated portrait; and
    comparing the simulated portrait with the face image of the recognition target, and prompting that the comparison succeeded when the similarity between the two exceeds a preset threshold.
  17. The non-volatile readable storage medium according to claim 16, wherein performing portrait simulation of facial features on the face image of the person to be recognized to generate a simulated portrait when the contour similarity ratio exceeds the preset ratio comprises:
    when the contour similarity ratio exceeds the preset ratio, starting portrait simulation for the recognition target corresponding to the face image of the recognition target, using the face contour of the face image of the person to be recognized as the face contour of the simulated portrait; and
    retrieving preset facial features from a simulation database, placing each preset facial feature at its preset position within the face contour of the simulated portrait, and generating the simulated portrait.
  18. The non-volatile readable storage medium according to claim 17, wherein retrieving preset facial features from the simulation database, placing each preset facial feature at its preset position within the face contour of the simulated portrait, and generating the simulated portrait comprises:
    arranging and combining the retrieved preset facial features according to preset rules to generate multiple facial-feature combinations; and
    placing each facial feature of each facial-feature combination at its preset position within the face contour of the simulated portrait and generating the simulated portrait.
  19. The non-volatile readable storage medium according to claim 17, comprising, before retrieving preset facial features from the simulation database, placing each preset facial feature at its preset position within the face contour of the simulated portrait, and generating the simulated portrait:
    obtaining the facial features, other than the face contour, of the face image of the person to be recognized; and
    classifying and storing the obtained facial features, other than the face contour, of the face image of the person to be recognized in the simulation database, and setting their retrieval order in the simulation database to priority retrieval.
  20. The non-volatile readable storage medium according to claim 16, wherein comparing the simulated portrait with the face image of the recognition target and prompting that the comparison succeeded when the similarity between the two exceeds the preset threshold comprises:
    obtaining the simulated portrait;
    comparing the simulated portrait with the face image of the recognition target, and determining whether the similarity between the two exceeds the preset threshold;
    prompting that the comparison succeeded when the similarity between the two exceeds the preset threshold; and
    returning to the step of generating a simulated portrait when the similarity between the two does not exceed the preset threshold.
PCT/CN2018/106263 2018-07-27 2018-09-18 Face recognition method, apparatus, computer device and storage medium WO2020019451A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810843420.4 2018-07-27
CN201810843420.4A CN109063628B (zh) 2018-07-27 2018-07-27 Face recognition method, apparatus, computer device and storage medium

Publications (1)

Publication Number Publication Date
WO2020019451A1 true WO2020019451A1 (zh) 2020-01-30



Also Published As

Publication number Publication date
CN109063628B (zh) 2023-04-21
CN109063628A (zh) 2018-12-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18927835

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18927835

Country of ref document: EP

Kind code of ref document: A1