WO2019095587A1 - Face recognition method, application server, and computer-readable storage medium - Google Patents


Info

Publication number
WO2019095587A1
WO2019095587A1 (PCT/CN2018/077640, CN2018077640W)
Authority
WO
WIPO (PCT)
Prior art keywords
face
classified
sample
classifier
recognized
Prior art date
Application number
PCT/CN2018/077640
Other languages
English (en)
Chinese (zh)
Inventor
戴磊
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Publication of WO2019095587A1 publication Critical patent/WO2019095587A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758 Involving statistics of pixels or of feature values, e.g. histogram matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • the present application relates to the field of face recognition technologies, and in particular, to a face recognition method, an application server, and a computer readable storage medium.
  • Identity recognition technologies, including face recognition technology, are becoming increasingly mature.
  • In face recognition technology, because the false acceptance rate and the false rejection rate both depend on the threshold setting of the recognition algorithm, the following may occur:
  • if the threshold is set too high, the probability of false matches decreases, that is, the false acceptance rate is lowered, but genuine users may be rejected, that is, the false rejection rate rises;
  • if the threshold is set too low, the probability of false matches increases, that is, the false acceptance rate rises, while the false rejection rate falls.
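This trade-off can be made concrete with a small sketch. Everything here is illustrative: the score lists, the function name, and the thresholds are hypothetical stand-ins, not values from the patent.

```python
# Illustrative sketch of the threshold trade-off described above.
# The score lists and thresholds are hypothetical.

def evaluate_threshold(genuine_scores, impostor_scores, threshold):
    """Return (false_acceptance_rate, false_rejection_rate) at a threshold.

    A comparison is accepted when its matching score is >= threshold.
    """
    false_accepts = sum(1 for s in impostor_scores if s >= threshold)
    false_rejects = sum(1 for s in genuine_scores if s < threshold)
    return (false_accepts / len(impostor_scores),
            false_rejects / len(genuine_scores))

# Hypothetical matching scores in [0, 1]
genuine = [0.91, 0.85, 0.78, 0.64, 0.55]   # same-person comparisons
impostor = [0.12, 0.33, 0.47, 0.58, 0.70]  # different-person comparisons

low_far, low_frr = evaluate_threshold(genuine, impostor, 0.50)
high_far, high_frr = evaluate_threshold(genuine, impostor, 0.80)

# Raising the threshold lowers FAR but raises FRR, as stated above.
assert high_far <= low_far and high_frr >= low_frr
```

With these toy scores, moving the threshold from 0.50 to 0.80 drops the false acceptance rate from 0.4 to 0.0 while pushing the false rejection rate from 0.0 to 0.6, which is exactly the tension the combined serial/parallel scheme below is meant to ease.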
  • The present application therefore proposes a face recognition method, an application server, and a computer-readable storage medium to reduce the false acceptance rate and the false rejection rate.
  • the present application provides a face recognition method, the method comprising the steps of:
  • if the matching value of the face to be recognized against each sample to be classified is smaller than the corresponding preset value, input the face to be recognized into classifiers connected in parallel, wherein the classifiers connected in parallel contain samples to be classified;
  • select the sample to be classified with the smallest total weight value as the recognized face and output it.
  • The present application further provides an application server, including a memory and a processor, where the memory stores a face recognition system operable on the processor; when executed by the processor, the face recognition system implements the steps of the face recognition method described above.
  • The present application further provides a computer-readable storage medium storing a face recognition system, the face recognition system being executable by at least one processor to cause the at least one processor to perform the steps of the face recognition method described above.
  • The face recognition method, the application server, and the computer-readable storage medium proposed by the present application can reduce the false acceptance rate and the false rejection rate, thereby improving the accuracy of face recognition.
  • FIG. 1 is a schematic diagram of an optional hardware architecture of an application server of the present application;
  • FIG. 2 is a schematic diagram of the program modules of the first, second, and third embodiments of the face recognition system of the present application;
  • FIG. 3 is a schematic diagram of the program modules of a fourth embodiment of the face recognition system of the present application;
  • FIG. 4 is a schematic flowchart of the first embodiment of the face recognition method of the present application;
  • FIG. 5 is a schematic flowchart of a second embodiment of the face recognition method of the present application;
  • FIG. 6 is a schematic flowchart of a third embodiment of the face recognition method of the present application.
  • Referring to FIG. 1, it is a schematic diagram of an optional hardware architecture of the application server 2 of the present application.
  • The application server 2 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 that are communicably connected to one another through a system bus. It should be noted that FIG. 1 only shows the application server 2 with components 11-13, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
  • The application server 2 may be a computing device such as a rack server, a blade server, a tower server, or a cabinet server.
  • The application server 2 may be an independent server or a server cluster composed of multiple servers.
  • the memory 11 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static Random access memory (SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the application server 2, such as a hard disk or memory of the application server 2.
  • The memory 11 may also be an external storage device of the application server 2, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the application server 2.
  • the memory 11 can also include both the internal storage unit of the application server 2 and its external storage device.
  • the memory 11 is generally used to store an operating system installed in the application server 2 and various types of application software, such as program codes of the face recognition system 200. Further, the memory 11 can also be used to temporarily store various types of data that have been output or are to be output.
  • the processor 12 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments.
  • the processor 12 is typically used to control the overall operation of the application server 2.
  • the processor 12 is configured to run program code or process data stored in the memory 11, such as running the face recognition system 200 or the like.
  • the network interface 13 may comprise a wireless network interface or a wired network interface, which is typically used to establish a communication connection between the application server 2 and other electronic devices.
  • Referring to FIG. 2, it is a program module diagram of the first, second, and third embodiments of the face recognition system 200 of the present application.
  • The face recognition system 200 includes a series of computer program instructions stored in the memory 11; when these instructions are executed by the processor 12, the face recognition operations of the embodiments of the present application can be implemented.
  • The face recognition system 200 can be divided into one or more modules based on the particular operations implemented by the various portions of the computer program instructions. For example, in FIG. 2, the face recognition system 200 can be divided into an obtaining module 201, a calculating module 202, a determining module 203, an input module 204, a selecting module 205, an assigning module 206, a selecting module 207, a cutting module 208, and a normalization module 209, wherein:
  • the obtaining module 201 is configured to obtain information about a face to be recognized.
  • the obtaining module 201 acquires face information of the user to identify the user.
  • the face to be recognized can be collected by any device such as a camera, a digital camera, or a scanner.
  • The calculating module 202 is configured to separately calculate the matching values between the face to be recognized and the samples to be classified in the classifiers connected in series.
  • the first classifier and the second classifier are connected in a serial manner, and the to-be-identified face passes through the first classifier and the second classifier in sequence to perform face matching.
  • the calculating module 202 respectively calculates matching values of the samples to be classified in the first classifier and the second classifier when the face to be recognized passes through the first classifier and the second classifier.
  • The serial mode means that the classifiers are trained one after another, each on a different subset of the samples.
  • the determining module 203 is configured to determine whether the matching value of the to-be-identified face and each sample to be classified is less than a corresponding preset value.
  • After the matching values are calculated, the determining module 203 determines whether each matching value is less than the corresponding preset value.
  • The input module 204 is configured to input the face to be recognized into classifiers connected in parallel when the matching values between the face to be recognized and the samples to be classified in the first classifier and the second classifier are both smaller than the corresponding preset values.
  • the classifiers connected in parallel include samples to be classified.
  • When the determining module 203 determines that the matching values between the face to be recognized and the samples to be classified in the first classifier and the second classifier are both smaller than the corresponding preset values, the determining module 203 concludes that the face to be recognized has not been successfully identified in either classifier.
  • the input module 204 inputs the face to be recognized to a classifier connected in parallel.
  • The parallel mode means that the classifiers are trained on different subsets of the samples at the same time.
  • the classifier connected in parallel includes at least the first classifier and the second classifier.
  • The selecting module 205 is configured to arrange the samples to be classified according to their similarity with the face to be recognized, and to select a preset number of the samples most similar to the face to be recognized.
  • Based on the calculated similarities between the face to be recognized and the samples to be classified in each classifier, the selecting module 205 arranges the samples in each classifier in descending order of similarity with the face to be recognized.
  • For example, the selecting module 205 selects sample A and sample B in the first classifier, and selects sample A and sample B in the second classifier.
  • The assigning module 206 is configured to assign different weight values to the selected samples according to their similarity with the face to be recognized, and to calculate the total weight value obtained by each sample.
  • Specifically, the assigning module 206 assigns different weight values to the first two selected samples according to their similarity with the face to be recognized, and calculates the total weight value obtained by each of them.
  • For example, after the selecting module 205 selects sample A and sample B, the assigning module 206 assigns weight values to them in each classifier: in the first classifier, sample A is assigned a weight value of 1 and sample B a weight value of 2; in the second classifier, sample A is assigned a weight value of 2 and sample B a weight value of 2. Sample A therefore obtains a total weight value of 3, and sample B a total weight value of 4.
  • The selecting module 207 is configured to select the sample to be classified with the smallest total weight value as the recognized face and output it.
  • Since the weight values obtained by sample A in the first and second classifiers are 1 and 2 (total 3), and those obtained by sample B are 2 and 2 (total 4), the selecting module 207 selects sample A as the recognized face and outputs it.
  • The recognized face is output by an output device, such as a display or an alarm. It should be noted that the smaller the total weight value obtained by a sample, the higher its similarity with the face to be recognized.
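The weighted selection of the parallel stage can be sketched as follows. This is an assumed implementation, not the patent's exact one: the patent does not say how weight values are derived, so here each classifier's rank order (1 = most similar) is used as the weight, and the similarity values are hypothetical.

```python
# Sketch of the parallel stage: each classifier ranks its candidates by
# similarity, the rank serves as a weight (1 = most similar), the weights
# are summed across classifiers, and the smallest total weight wins.

def rank_weights(similarities, top_n=2):
    """Map each of the top_n most similar sample ids to its rank weight."""
    ranked = sorted(similarities, key=similarities.get, reverse=True)[:top_n]
    return {sample: rank for rank, sample in enumerate(ranked, start=1)}

def select_face(per_classifier_similarities, top_n=2):
    """Sum rank weights over all parallel classifiers; smallest total wins."""
    totals = {}
    for sims in per_classifier_similarities:
        for sample, weight in rank_weights(sims, top_n).items():
            totals[sample] = totals.get(sample, 0) + weight
    return min(totals, key=totals.get), totals

# Hypothetical similarity scores for samples A and B in two classifiers.
first = {"A": 0.9, "B": 0.8}
second = {"A": 0.85, "B": 0.7}
best, totals = select_face([first, second])
```

With these toy scores, sample A is ranked first in both classifiers (total weight 2) and sample B second in both (total weight 4), so A is selected, matching the rule that the smallest total weight value is the recognized face.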
  • Here, a classifier operates on input data containing a plurality of samples, each sample having a plurality of attributes, one special attribute of which is called the class (for example, high, medium, or low similarity).
  • the purpose of the classifier is to analyze the input data and build a model and use this model to classify the input data.
  • The above classifier may be, for example, a support vector machine classifier, an artificial neural network classifier, a fuzzy classifier, a Bayesian classifier, a template matching classifier, or a geometric classifier.
  • the calculation module 202 is further configured to calculate a first matching value between the to-be-identified face and the sample to be classified in the first classifier.
  • the calculation module 202 calculates a first matching value of the to-be-identified face and the sample to be classified in the first classifier according to the face histogram.
  • the determining module 203 is further configured to determine whether the first matching value is greater than a first preset value.
  • the selecting module 207 is further configured to: when the first matching value is greater than the first preset value, select a sample to be classified in the first classifier as the recognized human face.
  • In that case, the selecting module 207 selects that sample to be classified as the recognized face.
  • the determining module 203 determines that the first matching value is smaller than the first preset value, then:
  • the calculation module 202 is further configured to calculate a second matching value of the to-be-identified face and the sample to be classified in the second classifier.
  • the calculating module 202 calculates a second matching value of the to-be-identified face and the sample to be classified in the second classifier according to the face histogram.
  • the determining module 203 is further configured to determine whether the second matching value is greater than a second preset value.
  • the selecting module 207 is further configured to: when the second matching value is greater than the second preset value, select a sample to be classified in the second classifier as the recognized human face.
  • In addition to the obtaining module 201, the calculating module 202, the determining module 203, the input module 204, the selecting module 205, the assigning module 206, and the selecting module 207 of the first embodiment, the face recognition system 200 further includes a cutting module 208 and a normalization module 209.
  • the cutting module 208 is configured to calibrate and cut the face to be recognized.
  • Specifically, the cutting module 208 performs calibration and cutting on the face to be recognized in order to obtain the feature information of the face to be recognized.
  • The normalization module 209 is configured to perform histogram normalization on the cut face to be recognized to obtain a face histogram.
  • Specifically, the normalization module 209 performs histogram normalization on the face to be recognized to obtain a face histogram; the face histogram is then compared with the samples to be classified to calculate the matching value between the face to be recognized and each sample.
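As a rough illustration of histogram normalization and histogram-based matching: the patent prescribes neither a library nor a distance measure, so NumPy, grayscale pixel arrays, and histogram intersection as the matching value are all assumptions here.

```python
# Sketch of histogram normalization and histogram matching for a cropped face.
# Library choice (NumPy) and the intersection measure are assumptions.

import numpy as np

def face_histogram(gray_face, bins=256):
    """Normalized intensity histogram of a cropped grayscale face image."""
    hist, _ = np.histogram(gray_face, bins=bins, range=(0, 256))
    return hist / hist.sum()  # normalize so the histogram sums to 1

def matching_value(hist_a, hist_b):
    """Histogram intersection in [0, 1]; 1.0 means identical histograms."""
    return float(np.minimum(hist_a, hist_b).sum())

# Usage: an image matched against itself gives the maximal matching value.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64))
other = rng.integers(0, 256, size=(64, 64))

h = face_histogram(face)
assert abs(matching_value(h, h) - 1.0) < 1e-9
assert matching_value(h, face_histogram(other)) <= 1.0
```

The matching value produced this way can then be compared with the preset values in the serial classifiers, as described in the steps above.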
  • the present application also proposes a face recognition method.
  • Referring to FIG. 4, it is a schematic flowchart of the first embodiment of the face recognition method of the present application.
  • the order of execution of the steps in the flowchart shown in FIG. 4 may be changed according to different requirements, and some steps may be omitted.
  • Step S400: acquire the information of a face to be recognized.
  • the face information of the user is acquired to identify the user.
  • the face to be recognized can be collected by any device such as a camera, a digital camera, or a scanner.
  • Step S402: separately calculate the matching values between the face to be recognized and the samples to be classified in the classifiers connected in series.
  • Specifically, the first classifier and the second classifier are connected in series, and the face to be recognized passes through them in sequence to perform face matching; the matching values of the samples to be classified in the first classifier and the second classifier are calculated as the face to be recognized passes through each of them.
  • The serial mode means that the classifiers are trained one after another, each on a different subset of the samples.
  • Step S404: determine whether the matching value between the face to be recognized and each sample to be classified is smaller than the corresponding preset value.
  • After the matching values between the face to be recognized and the samples to be classified in the first and second classifiers are calculated, it is determined whether the matching values are all smaller than the corresponding preset values.
  • Step S406: when the matching values between the samples to be classified and the face to be recognized in the first classifier and the second classifier are both smaller than the corresponding preset values, input the face to be recognized into classifiers connected in parallel.
  • the classifiers connected in parallel include samples to be classified.
  • When the matching values between the samples to be classified and the face to be recognized in the first classifier and the second classifier are both smaller than the corresponding preset values, it is determined that the face to be recognized has not been successfully recognized in either classifier.
  • the face to be recognized is input to a classifier connected in parallel.
  • The parallel mode means that the classifiers are trained on different subsets of the samples at the same time.
  • the classifier connected in parallel includes at least the first classifier and the second classifier.
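The serial/parallel distinction described above can be sketched as follows. The `train` function, the subset contents, and the use of a thread pool are hypothetical; the patent names no training framework, only that serial classifiers are trained one after another and parallel classifiers simultaneously, each on its own subset.

```python
# Sketch of the distinction drawn above: serial classifiers are trained one
# after another, each on its own subset, while parallel classifiers are
# trained at the same time. train() and the subsets are stand-ins.

from concurrent.futures import ThreadPoolExecutor

def train(classifier_id, subset):
    """Stand-in for training one classifier on one subset of samples."""
    return f"classifier {classifier_id} trained on {len(subset)} samples"

subsets = [["s1", "s2"], ["s3", "s4"]]  # disjoint sample subsets

# Serial mode: one classifier after another.
serial_results = [train(i, sub) for i, sub in enumerate(subsets, 1)]

# Parallel mode: the classifiers are trained at the same time.
with ThreadPoolExecutor() as pool:
    parallel_results = list(pool.map(train, [1, 2], subsets))

assert serial_results == parallel_results  # same work, different scheduling
```

Either way each classifier sees a different subset; only the scheduling of the training differs, which is the point the two definitions above are making.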
  • Step S408: arrange the samples to be classified according to their similarity with the face to be recognized, and select a preset number of the samples most similar to the face to be recognized.
  • Specifically, the samples to be classified in each classifier are arranged in descending order of their similarity with the face to be recognized.
  • The first N samples most similar to the face to be recognized are then selected from the arranged samples, for example N = 2.
  • For example, sample A and sample B are selected in the first classifier, and sample A and sample B are selected in the second classifier.
  • Step S410: assign different weight values to the selected samples according to their similarity with the face to be recognized, and calculate the total weight value obtained by each sample.
  • Specifically, the first two selected samples are assigned different weight values according to their similarity with the face to be recognized, and the total weight values obtained by them are calculated.
  • For example, sample A and sample B are each assigned weight values as follows: in the first classifier, sample A is assigned a weight value of 1 and sample B a weight value of 2; in the second classifier, sample A is assigned a weight value of 2 and sample B a weight value of 2. Sample A therefore obtains a total weight value of 3, and sample B a total weight value of 4.
  • Step S412: select the sample to be classified with the smallest total weight value as the recognized face and output it.
  • Since the weight values obtained by sample A in the first and second classifiers are 1 and 2 (total 3), and those obtained by sample B are 2 and 2 (total 4), sample A is selected as the recognized face and output.
  • The recognized face is output by an output device, such as a display or an alarm. It should be noted that the higher a sample's similarity with the face to be recognized, the smaller the weight value it obtains.
  • Here, a classifier operates on input data containing a plurality of samples, each sample having a plurality of attributes, one special attribute of which is called the class (for example, high, medium, or low similarity).
  • The purpose of the classifier is to analyze the input data, build a model, and use this model to classify the input data.
  • The above classifier may be, for example, a support vector machine classifier, an artificial neural network classifier, a fuzzy classifier, a Bayesian classifier, a template matching classifier, or a geometric classifier.
  • Referring to FIG. 5, it is a schematic flowchart of the second embodiment of the face recognition method of the present application.
  • In this embodiment, steps S500 and S506-S516 of the face recognition method are similar to steps S400-S412 of the first embodiment, except that the method further includes steps S502-S504.
  • the method includes the following steps:
  • Step S500: obtain the information of a face to be recognized.
  • the face information of the user is acquired to identify the user.
  • the face to be recognized can be collected by any device such as a camera, a digital camera, or a scanner.
  • Step S502: calibrate and cut the face to be recognized.
  • Specifically, the face to be recognized is subjected to calibration and cutting in order to obtain the feature information of the face to be recognized.
  • Step S504: perform histogram normalization on the cut face to be recognized to obtain a face histogram, so that the matching value between the face to be recognized and each sample to be classified can be calculated through the face histogram.
  • Specifically, histogram normalization is performed on the face to be recognized to obtain a face histogram, and the face histogram is used to calculate the matching value between the face to be recognized and each sample to be classified.
  • Step S506: separately calculate the matching values between the face to be recognized and the samples to be classified in the classifiers connected in series.
  • Specifically, the first classifier and the second classifier are connected in series, and the face to be recognized passes through them in sequence to perform face matching; the matching values of the samples to be classified in the first classifier and the second classifier are calculated as the face to be recognized passes through each of them.
  • The serial mode means that the classifiers are trained one after another, each on a different subset of the samples.
  • Step S508: determine whether the matching value between the face to be recognized and each sample to be classified is less than the corresponding preset value.
  • After the matching values between the face to be recognized and the samples to be classified in the first and second classifiers are calculated, it is determined whether the matching values are all smaller than the corresponding preset values.
  • Step S510: when the matching values between the samples to be classified and the face to be recognized in the first classifier and the second classifier are both smaller than the corresponding preset values, input the face to be recognized into classifiers connected in parallel.
  • the classifiers connected in parallel include samples to be classified.
  • When the matching values between the samples to be classified and the face to be recognized in the first classifier and the second classifier are both smaller than the corresponding preset values, it is determined that the face to be recognized has not been successfully recognized in either classifier.
  • the face to be recognized is input to a classifier connected in parallel.
  • The parallel mode means that the classifiers are trained on different subsets of the samples at the same time.
  • the classifier connected in parallel includes at least the first classifier and the second classifier.
  • Step S512: arrange the samples to be classified according to their similarity with the face to be recognized, and select a preset number of the samples most similar to the face to be recognized.
  • Specifically, the samples to be classified in each classifier are arranged in descending order of their similarity with the face to be recognized.
  • The first N samples most similar to the face to be recognized are then selected from the arranged samples, for example N = 2.
  • For example, sample A and sample B are selected in the first classifier, and sample A and sample B are selected in the second classifier.
  • Step S514: assign different weight values to the selected samples according to their similarity with the face to be recognized, and calculate the total weight value obtained by each sample.
  • Specifically, the first two selected samples are assigned different weight values according to their similarity with the face to be recognized, and the total weight values obtained by them are calculated.
  • For example, sample A and sample B are each assigned weight values as follows: in the first classifier, sample A is assigned a weight value of 1 and sample B a weight value of 2; in the second classifier, sample A is assigned a weight value of 2 and sample B a weight value of 2. Sample A therefore obtains a total weight value of 3, and sample B a total weight value of 4.
  • Step S516: select the sample to be classified with the smallest total weight value as the recognized face and output it.
  • Since the weight values obtained by sample A in the first and second classifiers are 1 and 2 (total 3), and those obtained by sample B are 2 and 2 (total 4), sample A is selected as the recognized face and output.
  • The recognized face is output by an output device, such as a display or an alarm. It should be noted that the higher a sample's similarity with the face to be recognized, the smaller the weight value it obtains.
  • Referring to FIG. 6, it is a schematic flowchart of a third embodiment of the face recognition method of the present application.
  • step S506 of the second embodiment further includes the following steps:
  • Step S600: calculate a first matching value between the face to be recognized and the sample to be classified in the first classifier.
  • the first matching value of the to-be-identified face and the sample to be classified in the first classifier is calculated according to the face histogram.
  • Step S602: determine whether the first matching value is greater than the first preset value. If so, perform step S604; otherwise, perform step S606.
  • Step S604: select the matched sample to be classified in the first classifier as the recognized face.
  • Specifically, if the first matching value is greater than the first preset value, the face to be recognized has been successfully matched with the sample to be classified in the first classifier, and that sample is selected as the recognized face.
  • Step S606: calculate a second matching value between the face to be recognized and the samples to be classified in the second classifier.
  • Specifically, the second matching value between the face to be recognized and each sample to be classified in the second classifier is calculated from the face histogram.
  • Step S608: determine whether the second matching value is greater than a second preset value.
  • Step S610: when the second matching value is greater than the second preset value, select the matched sample to be classified in the second classifier as the recognized face.
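Steps S600 through S610 describe trying the serially connected classifiers one after another, each with its own threshold. A minimal sketch of that cascade follows; the matcher functions and the score values are illustrative stand-ins (the patent computes matching values from face histograms, which is not reproduced here).

```python
# Hypothetical sketch of the serial matching cascade (S600-S610):
# try each classifier in order against its preset threshold, and stop
# at the first one whose best match exceeds its threshold.

def cascade_match(face, classifiers):
    """classifiers: list of (match_fn, threshold) pairs, tried in order.
    Each match_fn returns (best_sample, match_value). Returns the first
    sample whose match value exceeds its classifier's threshold, or
    None if every classifier fails (the patent then falls back to the
    parallel classifiers and weight voting)."""
    for match_fn, threshold in classifiers:
        sample, value = match_fn(face)
        if value > threshold:   # S602 / S608: compare with the preset value
            return sample       # S604 / S610: accept this sample
    return None

# Toy matchers standing in for histogram comparison:
first = lambda face: ("sample_1", 0.4)   # fails its threshold of 0.6
second = lambda face: ("sample_2", 0.9)  # passes its threshold of 0.8
print(cascade_match("face", [(first, 0.6), (second, 0.8)]))  # → sample_2
```

Ordering the classifiers from strictest to most permissive and only falling through on failure is what lets the cascade reject impostors early while still giving borderline faces a second chance, which is how the method trades off false acceptance against false rejection.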
  • The face recognition method proposed in this embodiment can reduce the false acceptance rate and the false rejection rate, thereby improving the accuracy of face recognition.
  • Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the method of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) that includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a face recognition method applied to an application server, comprising the following steps: acquiring information of a face to be recognized; calculating matching values between the face to be recognized and samples to be classified in serially connected classifiers, respectively; if the matching values are all less than a preset value, inputting the face to be recognized into classifiers connected in parallel; selecting a preset number of samples to be classified that have a high degree of similarity; assigning values to the samples to be classified and calculating the corresponding weight values; and selecting the sample to be classified having the minimum weight value as the recognized face. Also provided are an application server and a computer-readable storage medium. By means of the face recognition method, application server, and computer-readable storage medium proposed in the present invention, the false acceptance rate and the false rejection rate can be reduced, thereby improving the accuracy of face recognition.
PCT/CN2018/077640 2017-11-17 2018-02-28 Procédé de reconnaissance faciale, serveur d'application et support de stockage lisible par ordinateur WO2019095587A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711141724.8A CN108052864B (zh) 2017-11-17 2017-11-17 人脸识别方法、应用服务器及计算机可读存储介质
CN201711141724.8 2017-11-17

Publications (1)

Publication Number Publication Date
WO2019095587A1 true WO2019095587A1 (fr) 2019-05-23

Family

ID=62118894

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/077640 WO2019095587A1 (fr) 2017-11-17 2018-02-28 Procédé de reconnaissance faciale, serveur d'application et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN108052864B (fr)
WO (1) WO2019095587A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241664A (zh) * 2019-07-18 2021-01-19 顺丰科技有限公司 人脸识别方法、装置、服务器及存储介质
CN113240394A (zh) * 2021-05-19 2021-08-10 国网福建省电力有限公司 一种基于人工智能的电力营业厅服务方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034048A (zh) * 2018-07-20 2018-12-18 苏州中德宏泰电子科技股份有限公司 人脸识别算法模型切换方法与装置
CN109063656B (zh) * 2018-08-08 2021-08-24 厦门市美亚柏科信息股份有限公司 一种利用多个人脸引擎进行人脸查询的方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101187977A (zh) * 2007-12-18 2008-05-28 北京中星微电子有限公司 一种人脸认证的方法和装置
US7822696B2 (en) * 2007-07-13 2010-10-26 Microsoft Corporation Histogram-based classifiers having variable bin sizes
CN102855496A (zh) * 2012-08-24 2013-01-02 苏州大学 遮挡人脸认证方法及系统
CN103136533A (zh) * 2011-11-28 2013-06-05 汉王科技股份有限公司 基于动态阈值的人脸识别方法及装置
CN103218606A (zh) * 2013-04-10 2013-07-24 哈尔滨工程大学 一种基于人脸均值和方差能量图的多姿态人脸识别方法
CN103902961A (zh) * 2012-12-28 2014-07-02 汉王科技股份有限公司 一种人脸识别方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100426314C (zh) * 2005-08-02 2008-10-15 中国科学院计算技术研究所 一种基于特征分组的多分类器组合人脸识别方法
CN101075291B (zh) * 2006-05-18 2010-05-12 中国科学院自动化研究所 一种用于人脸识别的高效提升训练方法
CN100585617C (zh) * 2008-07-04 2010-01-27 西安电子科技大学 基于分类器集成的人脸识别系统及其方法
KR20160011916A (ko) * 2014-07-23 2016-02-02 삼성전자주식회사 얼굴 인식을 통한 사용자 식별 방법 및 장치


Also Published As

Publication number Publication date
CN108052864A (zh) 2018-05-18
CN108052864B (zh) 2019-04-26

Similar Documents

Publication Publication Date Title
US11017220B2 (en) Classification model training method, server, and storage medium
WO2019095587A1 (fr) Procédé de reconnaissance faciale, serveur d'application et support de stockage lisible par ordinateur
CN110796154B (zh) 一种训练物体检测模型的方法、装置以及设备
WO2022213465A1 (fr) Procédé et appareil de reconnaissance d'image à base de réseau neuronal, dispositif électronique et support
CN110362677B (zh) 文本数据类别的识别方法及装置、存储介质、计算机设备
CN108171203B (zh) 用于识别车辆的方法和装置
WO2019051941A1 (fr) Procédé, appareil et dispositif d'identification de type de véhicule, et support de stockage lisible par ordinateur
US11126827B2 (en) Method and system for image identification
US11062120B2 (en) High speed reference point independent database filtering for fingerprint identification
US20140032450A1 (en) Classifying unclassified samples
WO2022105179A1 (fr) Procédé et appareil de reconnaissance d'image de caractéristiques biologiques, dispositif électronique et support de stockage lisible
CN113646758A (zh) 信息处理设备、个人识别设备、信息处理方法和存储介质
CN111753863A (zh) 一种图像分类方法、装置、电子设备及存储介质
WO2021244521A1 (fr) Procédé et appareil de formation de modèle de classification d'objet, dispositif électronique, et support de stockage
WO2019020083A1 (fr) Procédé et dispositif d'authentification d'utilisateur, basés sur des informations de caractéristiques
CN111046879A (zh) 证件图像分类方法、装置、计算机设备及可读存储介质
CN111291817A (zh) 图像识别方法、装置、电子设备和计算机可读介质
WO2020168754A1 (fr) Procédé et dispositif de prédiction de performance se basant sur un modèle de prédiction, et support de stockage
CN112668482A (zh) 人脸识别训练方法、装置、计算机设备及存储介质
CN111159481B (zh) 图数据的边预测方法、装置及终端设备
WO2019119635A1 (fr) Procédé de développement d'utilisateur initial, dispositif électronique et support de stockage lisible par ordinateur
CN110490058B (zh) 行人检测模型的训练方法、装置、系统和计算机可读介质
CN110390344B (zh) 备选框更新方法及装置
CN111783088B (zh) 一种恶意代码家族聚类方法、装置和计算机设备
CN112818946A (zh) 年龄识别模型的训练、年龄识别方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18877708

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 02.10.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18877708

Country of ref document: EP

Kind code of ref document: A1