WO2019035544A1 - Apparatus and method for face recognition using learning - Google Patents

Apparatus and method for face recognition using learning

Info

Publication number
WO2019035544A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
unit
face recognition
neural network
Prior art date
Application number
PCT/KR2018/006818
Other languages
English (en)
Korean (ko)
Inventor
이상윤
배한별
전태재
도진경
Original Assignee
연세대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 연세대학교 산학협력단 filed Critical 연세대학교 산학협력단
Publication of WO2019035544A1 publication Critical patent/WO2019035544A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/469 Contour-based spatial representations, e.g. vector-coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions

Definitions

  • The present invention relates to a face recognition apparatus and method using learning.
  • Face recognition technology is used in many fields; a common application is authentication, i.e., determining whether a newly input face image matches a specific face registered in advance.
  • In one widely used approach, a neural network is trained on various sample images of each face so that it recognizes the same face, and the trained network is then used to recognize the face in an input image.
  • However, the performance of face recognition is sensitive to illumination changes, so it is strongly affected by the capture environment.
  • The present invention therefore proposes a face recognition apparatus and method using learning that remains effective under illumination changes.
  • According to one aspect, there is provided a face recognition apparatus comprising: an input unit to which an input image is input; a face detection unit for detecting a face region in the input image; a histogram unit for performing histogram equalization on the detected face region; an image adjustment unit for performing image adjustment on the detected face region; a plurality of first face recognition units for recognizing a face in the detected face region; a second face recognition unit for recognizing a face in the equalized face region; a third face recognition unit for recognizing a face in the adjusted face region; and a face determination unit for collecting the recognition results of the plurality of first face recognition units, the second face recognition unit, and the third face recognition unit and determining the face recognized by a majority of them as the recognized face,
  • wherein the first face recognition units, the second face recognition unit, and the third face recognition unit are trained using a convolutional neural network (CNN).
  • The face detection unit is trained using a three-stage convolutional neural network.
  • The plurality of first face recognition units, the second face recognition unit, and the third face recognition unit extract feature vectors of the face in reference images and learn the distance values between feature vectors of the same face as input values.
  • Histogram equalization is performed by normalizing the cumulative distribution of the pixel brightness values of an image and multiplying by the maximum brightness value.
  • In the image adjustment, the brightest 1% of the pixels of the image are mapped to the maximum brightness,
  • the darkest 1% of the pixels are mapped to the minimum brightness,
  • and the remaining pixels are mapped between the maximum brightness and the minimum brightness.
  • According to another aspect, there is provided a face recognition method using learning, comprising the steps of: (a) receiving an input image; (b) detecting a face region in the input image; (c) performing histogram equalization on the detected face region; (d) performing image adjustment on the detected face region; (e) recognizing a face in the detected face region using a plurality of pre-trained first convolutional neural networks; (f) recognizing a face in the equalized face region using a pre-trained second convolutional neural network; (g) recognizing a face in the adjusted face region using a pre-trained third convolutional neural network; and (h) collecting the recognition results of the plurality of first convolutional neural networks, the second convolutional neural network, and the third convolutional neural network, and determining the face recognized by a majority of them as the recognized face.
  • the present invention has the advantage of being robust against illumination changes.
  • FIG. 1 is a diagram for explaining the convolutional neural network algorithm.
  • FIG. 2 is a diagram for explaining the convolution operation of a convolutional neural network.
  • FIG. 3 is a diagram for explaining the downsampling operation of a convolutional neural network.
  • FIG. 4 is a view for explaining a face recognition learning process according to a preferred embodiment of the present invention.
  • FIG. 5 is a diagram for explaining a face recognition process using learning according to an embodiment of the present invention.
  • FIG. 6 is a structural diagram of a face recognition learning apparatus according to a preferred embodiment of the present invention.
  • FIG. 7 is a diagram for explaining the convolutional neural network of the face detection unit of the present invention.
  • FIG. 8 is a structural diagram of a face recognition apparatus using learning according to a preferred embodiment of the present invention.
  • FIG. 9 is a flowchart showing a face recognition learning method according to a preferred embodiment of the present invention with time.
  • FIG. 10 is a flowchart illustrating a face recognition method using learning according to a preferred embodiment of the present invention with time.
  • Terms such as first, second, etc. may be used to describe various components, but the components are not limited by these terms; the terms serve only to distinguish one component from another.
  • For example, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component.
  • The present invention learns information about reference face images and recognizes an input face image using the learning result. For example, after detecting the face in each reference face image and extracting its feature vector, the distance values between the feature vectors of reference images of the same face are learned; if the distance between the feature vector of the input face image and that of a reference image is small enough, the face of the input image is recognized as the same face as that of the reference image.
  • The present invention may use a deep learning algorithm, for example a Convolutional Neural Network (CNN).
  • The CNN is a model that imitates human brain function, based on the assumption that when a person recognizes an object, the brain first extracts the object's basic features and then identifies the object after complex computation on them. It is now widely used in image recognition and speech recognition.
  • A CNN extracts features from images with various filters through convolution (Conv) operations, together with pooling and non-linear activation functions that add nonlinear characteristics.
  • FIG. 1 is a diagram for explaining the convolutional neural network algorithm, FIG. 2 is a diagram for explaining its convolution operation, and FIG. 3 is a diagram for explaining its downsampling operation.
  • The CNN algorithm extracts a feature map from an input image through convolution and downsampling, and identifies or classifies the input image from the feature map.
  • The feature map contains the feature information of the input image.
  • In FIG. 1, convolution is applied three times (C1, C2, C3) and downsampling twice (MP1, MP2); the number of repetitions can be chosen differently depending on the embodiment.
  • Each pixel value 230 of the convolution layer is obtained by overlapping the filter on a region of the input image and summing, pixel by pixel, the product of the filter weight and the image pixel value.
  • In FIG. 2, for example, this weighted sum is performed on the pixel values (0, 0, 0, 0, 1, 1, 0, 1, 2).
  • When the size of the input image 200 is 7 x 7
  • and the size of the filter 210 is 3 x 3,
  • a convolution layer of size 5 x 5 is generated.
  • That is, the size of the convolution layer, i.e., the convolved image, decreases relative to the input image.
  • If the input image is padded before convolution, a convolution layer of size 7 x 7, equal to the size of the input image, can be generated instead.
  • The number of convolution layers is determined by the number of filters used.
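The size arithmetic above can be sketched in a few lines of Python (illustrative only; the patent provides no code, and `conv2d` is a hypothetical helper implementing the stride-1 weighted sum described for FIG. 2):

```python
import numpy as np

def conv2d(image, kernel, padding=0):
    """Stride-1 2D convolution (cross-correlation, as in CNN practice)
    with optional zero padding."""
    image = np.asarray(image, dtype=float)
    if padding:
        image = np.pad(image, padding)  # zero padding on all sides
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # weighted sum of the filter over the overlapped region
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((7, 7))           # 7x7 input, as in FIG. 2
kernel = np.ones((3, 3))           # 3x3 filter
print(conv2d(image, kernel).shape)             # 5x5 without padding
print(conv2d(image, kernel, padding=1).shape)  # 7x7 with zero padding
```

Each filter applied this way yields one convolution layer, which is why the number of layers equals the number of filters.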
  • Downsampling is performed to reduce the size of the convolution layer, that is, to lower its resolution.
  • A method commonly used for downsampling is max-pooling (MP).
  • Max-pooling takes the maximum of the convolution-layer pixel values inside the downsampling kernel, producing a pooling layer smaller than the convolution layer.
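A minimal max-pooling sketch consistent with the description above (the function name and the 2 x 2 kernel choice are illustrative assumptions):

```python
import numpy as np

def max_pool(layer, k=2):
    """k x k max-pooling with stride k: keep the maximum of each block."""
    h, w = layer.shape
    h, w = h - h % k, w - w % k            # drop any ragged edge
    blocks = layer[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.max(axis=(1, 3))

layer = np.array([[1, 3, 2, 4],
                  [5, 7, 6, 8],
                  [9, 2, 1, 0],
                  [3, 4, 5, 6]])
print(max_pool(layer))
# [[7 8]
#  [9 6]]
```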
  • The feature map is then input to a fully-connected neural network, and the parameters of the CNN are learned from the difference between the label of the given input image and the output value of the network.
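Reduced to a single weight, that parameter-update rule might look like the following sketch (purely illustrative; the patent does not specify a loss function or learning rate, so the squared-error loss and `lr` are assumptions):

```python
# One-parameter sketch: adjust the weight to reduce the squared
# difference between the network output and the label.
def train_step(w, x, label, lr=0.1):
    out = w * x                    # stand-in for the network's forward pass
    grad = 2 * (out - label) * x   # d/dw of (out - label)**2
    return w - lr * grad           # gradient-descent update

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, label=3.0)
print(round(w, 3))  # approaches 3.0, the value that matches the label
```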
  • FIG. 4 is a view for explaining a face recognition learning process according to a preferred embodiment of the present invention.
  • The face recognition apparatus using learning according to a preferred embodiment of the present invention can be trained through the process shown in FIG. 4.
  • The face recognition learning apparatus receives various reference images of the faces to be recognized and detects the face region in each image.
  • The face region detection process can be learned by a convolutional neural network.
  • In the embodiment, a three-stage convolutional neural network is used for face region detection.
  • The three-stage CNN for face region detection may be pre-trained; alternatively, the reference images may consist only of face regions, in which case the face detection step can be omitted.
  • Histogram equalization and image adjustment can then be performed on the reference image in which the face region was detected.
  • The histogram equalization and image adjustment processes offset the changes in the face caused by illumination changes in the image.
  • A number of first convolutional neural networks are then trained on the face region of the reference image.
  • In the embodiment, three first convolutional neural networks are used.
  • The number of first CNNs can be adjusted according to how much weight the illumination change of the image should receive: the more first CNNs are used, the less the illumination change is considered.
  • The plurality of first CNNs may be trained on different reference images of the same face.
  • The second convolutional neural network is trained on the face region of the equalized reference image,
  • and the third convolutional neural network is trained on the face region of the adjusted reference image.
  • The first, second, and third CNNs extract feature vectors from the face region of each image and are trained on those feature vector values.
  • Specifically, the first, second, and third CNNs learn the distance values between the feature vectors of the various reference images of the same face.
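A minimal sketch of this distance-based matching (illustrative only; the embedding values, the `is_same_face` helper, and the threshold are hypothetical, and the patent specifies only that feature-vector distances are compared):

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def is_same_face(query_vec, reference_vecs, threshold=1.0):
    """Match if the query embedding is within `threshold` of any
    reference embedding of the enrolled face."""
    return min(euclidean(query_vec, r) for r in reference_vecs) < threshold

# hypothetical 4-D embeddings standing in for CNN feature vectors
refs = [[0.1, 0.9, 0.2, 0.4], [0.2, 0.8, 0.25, 0.5]]
print(is_same_face([0.15, 0.85, 0.2, 0.45], refs))  # True: close to the references
print(is_same_face([0.9, 0.1, 0.8, 0.0], refs))     # False: far from both
```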
  • The face of an input image can then be recognized as the face of a learned reference image.
  • FIG. 5 is a diagram for explaining a face recognition process using learning according to an embodiment of the present invention.
  • A face recognition apparatus using learning receives an input image and detects the face region in the image.
  • The face region detection process is learned in advance by a convolutional neural network.
  • In the embodiment, a three-stage convolutional neural network is used for face region detection.
  • Histogram equalization and image adjustment can be performed on the input image in which the face region was detected.
  • The histogram equalization and image adjustment processes cancel the changes in the image caused by illumination changes.
  • In the embodiment, three first convolutional neural networks are used.
  • The number of first CNNs can be adjusted according to how much weight the illumination change of the image should receive: the more first CNNs are used, the less the illumination change is considered.
  • The first CNNs may be pre-trained on a plurality of reference images.
  • The second convolutional neural network recognizes the face in the equalized face region of the input image,
  • and the third convolutional neural network recognizes the face in the adjusted face region of the input image.
  • The second and third CNNs may likewise be pre-trained on a plurality of reference images that have undergone histogram equalization and image adjustment, respectively.
  • The first, second, and third CNNs recognize faces in the same way they were trained, and various known methods can be used.
  • For example, the first, second, and third CNNs extract feature vectors from the face region of the image and compare them with the feature vectors of the previously learned reference images. Since the feature vectors of the reference images of each face were learned in advance, if the distance between a feature vector extracted from the input image and the feature vector of a specific learned face is less than a predetermined value, the face in the input image is recognized as that learned face.
  • The face recognition apparatus using learning of the present invention then collects the face recognition results of the plurality of CNNs.
  • In the embodiment, the recognition results of a total of five CNNs, namely the three first CNNs, the second CNN, and the third CNN, are collected, and the face recognized by a majority of them is selected.
  • Because the final face is selected by majority vote over results that include the histogram-equalized and image-adjusted branches, the apparatus is robust against illumination change, and the weight given to illumination change can be set in advance by adjusting the number of first CNNs taking part in the vote.
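The majority-vote collection over the five branches can be sketched as follows (the identity labels and the `majority_face` helper are illustrative; the patent only specifies that the face recognized by a majority is selected):

```python
from collections import Counter

def majority_face(predictions):
    """Pick the identity recognized by the most networks."""
    label, votes = Counter(predictions).most_common(1)[0]
    return label

# five branches: three plain CNNs, one on the equalized image,
# one on the adjusted image (identities are hypothetical)
votes = ["alice", "alice", "bob", "alice", "bob"]
print(majority_face(votes))  # alice
```

Adding more plain first CNNs to `votes` dilutes the influence of the two illumination-normalized branches, which is the tuning knob the text describes.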
  • FIG. 6 is a structural diagram of a face recognition learning apparatus according to a preferred embodiment of the present invention.
  • A face recognition learning apparatus includes an input unit 610, a face detection unit 620, a histogram unit 630, an image adjustment unit 640, a first face recognition unit 650, a second face recognition unit 660, and a third face recognition unit 670.
  • A reference image for face recognition learning is input to the input unit 610.
  • The reference image may consist only of the face region, with the face region detected in advance.
  • The face detection unit 620 detects the face region in the input reference image. Face detection can be learned using a convolutional neural network.
  • The face detection unit 620 may be configured as a three-stage convolutional neural network.
  • FIG. 7 is a diagram for explaining the convolutional neural network of the face detection unit of the present invention.
  • The face detection unit 620 may be configured as a three-stage convolutional neural network of P-Net, R-Net, and O-Net.
  • First, the face detection unit 620 resizes the reference image to generate an image pyramid, and then finds candidate face regions with P-Net using a sliding-window method.
  • Using the bounding box regression vectors output by the P-Net network, the positions of the candidate regions are adjusted, and the number of candidate regions can be reduced by applying the known non-maximum suppression (NMS) algorithm to merge overlapping candidates.
  • Second, using R-Net, which has a more complex structure, candidate regions that are not faces are further removed from the candidates detected in the first step, the positions of the remaining candidates are adjusted, and their number is reduced again.
  • Third, the O-Net network, of still more complex structure, finally determines the face region and the landmarks among the candidate regions detected in the second step.
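A sketch of the greedy NMS step mentioned above (a standard formulation; the IoU threshold of 0.5 and the (x1, y1, x2, y2) box format are assumptions, as the patent cites NMS only as a known algorithm):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop candidates overlapping it by more than `thresh`, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate of box 0 is suppressed
```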
  • the face detecting unit 620 may be omitted.
  • The histogram unit 630 may perform histogram equalization on the reference image in which the face region was detected. First, the probability density function (PDF) of the pixel brightness values is generated, the cumulative distribution function (CDF) is obtained from it, and the cumulative histogram is normalized. Finally, the normalized cumulative histogram is multiplied by the maximum brightness value. Histogram equalization adjusts the brightness of each pixel according to the brightness distribution of the image, so that the illumination change of the image can be canceled.
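The PDF-to-CDF procedure above can be sketched as follows (illustrative; `equalize` is a hypothetical name, and 8-bit grayscale with maximum brightness 255 is assumed):

```python
import numpy as np

def equalize(gray):
    """Histogram equalization: normalize the cumulative distribution of
    pixel brightness and scale it by the maximum brightness (255)."""
    hist = np.bincount(gray.ravel(), minlength=256)  # histogram (unnormalized PDF)
    cdf = hist.cumsum()                              # CDF
    cdf = cdf / cdf[-1]                              # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)       # scale by max brightness
    return lut[gray]                                 # remap every pixel

# a dark image: all values in [0, 64) get spread over the full range
dark = np.arange(64, dtype=np.uint8).reshape(8, 8)
eq = equalize(dark)
print(eq.min(), eq.max())  # 4 255
```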
  • The image adjustment unit 640 may perform image adjustment on the reference image in which the face region was detected. First, the image is expressed as a histogram. Then, starting from pixel value 0 and incrementing by 1, the pixel counts are accumulated until their sum exceeds 1% of the total number of image pixels; the pixel value at that point is called low_val. Likewise, starting from pixel value 255 and decrementing by 1, the pixel counts are accumulated until their sum exceeds 1% of the total number of image pixels; the pixel value at that point is called high_val.
  • Pixels with values from 0 to low_val are mapped to pixel value 0, and pixels with values from high_val to 255 are mapped to pixel value 255.
  • Pixels with values from low_val + 1 to high_val - 1 are mapped between pixel values 1 and 254 using histogram equalization. The image adjustment offsets the illumination change of the image by mapping the brightest 1% and darkest 1% of pixels to the maximum and minimum values and mapping the remaining pixels in between.
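A simplified sketch of the low_val/high_val clipping described above (illustrative; it uses quantiles instead of the explicit histogram scan, and a linear stretch for the interior pixels where the text applies histogram equalization):

```python
import numpy as np

def stretch(gray, clip=0.01):
    """Map the darkest 1% of pixels to 0, the brightest 1% to 255,
    and stretch everything in between across the full range."""
    low_val, high_val = np.quantile(gray, [clip, 1 - clip])
    out = (gray.astype(float) - low_val) / max(high_val - low_val, 1) * 255
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# an image whose values only span [40, 200) gets stretched to [0, 255]
flat = np.linspace(40, 200, 10000).astype(np.uint8).reshape(100, 100)
out = stretch(flat)
print(out.min(), out.max())  # 0 255
```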
  • Accordingly, the face recognition learning apparatus can be robust against illumination changes.
  • The first face recognition unit 650 is composed of a convolutional neural network and is trained on the face region of the reference image. In the embodiment of the present invention, three first face recognition units 650 are used. The number of first face recognition units 650 can be adjusted according to how much weight the illumination change of the image should receive: the more first face recognition units 650 are used, the less the illumination change is considered. The plurality of first face recognition units 650 may also be trained on different reference images of the same face.
  • The second face recognition unit 660 may likewise be composed of a convolutional neural network and trained on the face region of the histogram-equalized reference image.
  • The third face recognition unit 670 may likewise be composed of a convolutional neural network and trained on the face region of the image-adjusted reference image.
  • FIG. 8 is a structural diagram of a face recognition apparatus using learning according to a preferred embodiment of the present invention.
  • A face recognition apparatus using learning includes an input unit 710, a face detection unit 720, a histogram unit 730, an image adjustment unit 740, a first face recognition unit 750, a second face recognition unit 760, a third face recognition unit 770, and a face determination unit 780.
  • an input image for face recognition is input.
  • the face detection unit 720 detects a face region in the input image.
  • the face area detection can be learned in advance by the face detection unit 620 of the above-described face recognition learning apparatus.
  • the face region detection method is the same as the training method of the face detection unit 620 of the face recognition learning apparatus described above.
  • The histogram unit 730 may perform histogram equalization on the input image in which the face region was detected. First, the PDF of the pixel brightness values is generated, the CDF is obtained from it, and the cumulative histogram is normalized. Finally, the normalized cumulative histogram is multiplied by the maximum brightness value to equalize the image.
  • The image adjustment unit 740 may perform image adjustment on the input image in which the face region was detected. First, the image is expressed as a histogram. Then, starting from pixel value 0 and incrementing by 1, the pixel counts are accumulated until their sum exceeds 1% of the total number of image pixels; the pixel value at that point is called low_val. Likewise, starting from pixel value 255 and decrementing by 1, the pixel counts are accumulated until their sum exceeds 1% of the total number of image pixels; the pixel value at that point is called high_val.
  • Pixels with values from 0 to low_val are mapped to pixel value 0, and pixels with values from high_val to 255 are mapped to pixel value 255. Pixels with values from low_val + 1 to high_val - 1 are mapped between pixel values 1 and 254 using histogram equalization.
  • Accordingly, the face recognition apparatus can be robust against illumination changes.
  • The first face recognition unit 750 is composed of a convolutional neural network and may be pre-trained as the first face recognition unit 650 of the above-described face recognition learning apparatus.
  • Its face recognition method follows the training of the first face recognition unit 650 of the above-described face recognition learning apparatus.
  • In the embodiment, three first face recognition units 750 are used.
  • The number of first face recognition units 750 can be adjusted according to how much weight the illumination change of the image should receive: the more first face recognition units 750 are used, the less the illumination change is considered.
  • The plurality of first face recognition units 750 may be trained on different reference images of the same face.
  • The second face recognition unit 760 is also composed of a convolutional neural network and may be pre-trained as the second face recognition unit 660 of the above-described face recognition learning apparatus.
  • Its face recognition method follows the training of the second face recognition unit 660 of the above-described face recognition learning apparatus.
  • The third face recognition unit 770 is also composed of a convolutional neural network and may be pre-trained as the third face recognition unit 670 of the face recognition learning apparatus.
  • Its face recognition method follows the training of the third face recognition unit 670 of the face recognition learning apparatus.
  • The face determination unit 780 collects the face recognition results of the first face recognition units 750, the second face recognition unit 760, and the third face recognition unit 770, and determines the face recognized by a majority of them as the recognized face.
  • The face recognition learning apparatus and the face recognition apparatus using learning according to the preferred embodiment of the present invention include the second face recognition unit 760 and the third face recognition unit 770, which are robust to illumination change,
  • and determine the final result by majority vote over the recognition results of the plurality of face recognition units, so that more accurate face recognition that is robust against illumination change can be performed.
  • FIG. 9 is a flowchart showing a face recognition learning method according to a preferred embodiment of the present invention with time.
  • A face recognition learning method includes an input step S610, a face region detection step S620, a histogram equalization step S630, an image adjustment step S640, and a face recognition learning step S650.
  • The input step S610 is a step of receiving the reference image through the input unit 610.
  • The face region detection step S620 is a step of detecting the face region in the face detection unit 620.
  • The histogram equalization step S630 is a step of performing histogram equalization in the histogram unit 630.
  • The image adjustment step S640 is a step of performing image adjustment in the image adjustment unit 640.
  • The face recognition learning step S650 is a step of learning face recognition in the first face recognition unit 650, the second face recognition unit 660, and the third face recognition unit 670.
  • FIG. 10 is a flowchart illustrating a face recognition method using learning according to a preferred embodiment of the present invention with time.
  • A face recognition method using learning includes an input step S710, a face region detection step S720, a histogram equalization step S730, an image adjustment step S740, a face recognition step S750, and a face determination step S760.
  • The input step S710 is a step of receiving an input image through the input unit 710.
  • The face region detection step S720 is a step of detecting the face region in the face detection unit 720.
  • The histogram equalization step S730 is a step of performing histogram equalization in the histogram unit 730.
  • The image adjustment step S740 is a step of performing image adjustment in the image adjustment unit 740.
  • The face recognition step S750 is a step of recognizing faces with the first face recognition units 750, the second face recognition unit 760, and the third face recognition unit 770.
  • The face determination step S760 is a step of determining the recognized face in the face determination unit 780.
  • the above-described technical features may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination.
  • the program instructions recorded on the medium may be those specially designed and constructed for the embodiments or may be available to those skilled in the art of computer software.
  • Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • program instructions include machine language code such as those produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware device may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an apparatus and method for face recognition using learning. The apparatus of the invention comprises: an input unit to which an input image is input; a face detection unit that detects a face region from the input image; a histogram unit that performs histogram equalization on the detected face region; an image adjustment unit that performs image adjustment on the detected face region; a plurality of first face recognition units that recognize a face from the detected face region; a second face recognition unit that recognizes a face from the face region on which histogram equalization has been performed; a third face recognition unit that recognizes a face from the face region on which image adjustment has been performed; and a face determination unit that collects the face recognition results from the plurality of first face recognition units, the second face recognition unit, and the third face recognition unit, and then determines, as the recognized face, the face recognized by a majority, with the plurality of first face recognition units, the second face recognition unit, and the third face recognition unit learning by means of a convolutional neural network. The apparatus of the invention has the advantage of being robust to changes in lighting.
PCT/KR2018/006818 2017-08-18 2018-06-18 Apparatus and method for face recognition using learning WO2019035544A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2017-0104596 2017-08-18
KR1020170104596A KR101877683B1 (ko) 2017-08-18 2017-08-18 Apparatus and method for face recognition using learning

Publications (1)

Publication Number Publication Date
WO2019035544A1 true WO2019035544A1 (fr) 2019-02-21

Family

ID=62919937

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/006818 WO2019035544A1 (fr) 2017-08-18 2018-06-18 Apparatus and method for face recognition using learning

Country Status (2)

Country Link
KR (1) KR101877683B1 (fr)
WO (1) WO2019035544A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102148699B1 (ko) 2018-11-01 2020-08-27 전형고 Face recognition apparatus and method thereof
CN110321873B (zh) * 2019-07-12 2023-10-10 苏州思萃工业大数据技术研究所有限公司 Sensitive image recognition method and system based on deep-learning convolutional neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110054886A (ko) * 2009-11-18 2011-05-25 원광대학교산학협력단 Facial expression recognition method robust to illumination changes
KR20150011097A (ko) * 2013-07-22 2015-01-30 현대모비스 주식회사 Driver assistance system for pedestrian recognition during night driving and operating method thereof
US20150125049A1 (en) * 2013-11-04 2015-05-07 Facebook, Inc. Systems and methods for facial representation
KR20160033553A (ko) * 2014-09-18 2016-03-28 한국과학기술연구원 Face recognition method and system through 3D face model projection
KR20170050465A (ko) * 2015-10-30 2017-05-11 에스케이텔레콤 주식회사 Face recognition apparatus and method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858467A (zh) * 2019-03-01 2019-06-07 北京视甄智能科技有限公司 Face recognition method and device based on key-point region feature fusion
CN111931551A (zh) * 2020-05-26 2020-11-13 东南大学 Face detection method based on a lightweight cascade network
CN111931551B (zh) * 2020-05-26 2022-04-12 东南大学 Face detection method based on a lightweight cascade network

Also Published As

Publication number Publication date
KR101877683B1 (ko) 2018-07-12

Similar Documents

Publication Publication Date Title
WO2019035544A1 (fr) Apparatus and method for face recognition using learning
WO2016163755A1 (fr) Method and apparatus for face recognition based on quality measurement
WO2021002549A1 (fr) Deep learning-based system and method for automatically determining the degree of damage to each vehicle area
EP3461290A1 (fr) Learning model for salient facial region detection
WO2011096651A2 (fr) Method and device for face identification
WO2014193040A1 (fr) System and method for analyzing sensing data
WO2020204525A1 (fr) Combined learning method and device using a transformed loss function and feature enhancement based on a deep neural network for speaker recognition that is robust in a noisy environment
WO2013048159A1 (fr) Method, apparatus, and computer-readable recording medium for detecting the location of a facial feature point using an AdaBoost learning algorithm
WO2020196985A1 (fr) Apparatus and method for video action recognition and action section detection
WO2021100919A1 (fr) Method, program, and system for determining whether abnormal behavior occurs based on a behavior sequence
WO2021153971A1 (fr) Neural network learning method using an anonymized image and server implementing same
WO2016122068A1 (fr) Method for recognizing a tire and device therefor
WO2021246810A1 (fr) Neural network training method using an autoencoder and multi-instance learning, and computing system for performing same
WO2020032506A1 (fr) Vision detection system and vision detection method using same
EP3459009A2 (fr) Adaptive quantization method for iris image encoding
WO2021145502A1 (fr) Device and method for face and stress recognition
WO2022075714A1 (fr) Method and system for extracting speaker embeddings using a clustering technique based on a speech recognition system for speaker identification, and recording medium therefor
EP4232994A1 (fr) Method for training and testing a user learning network used to recognize obfuscated data created by obfuscating original data to protect personal information, and user learning device and testing device using the same
WO2016148322A1 (fr) Method and device for voice activity detection based on image information
WO2020204219A1 (fr) Method for classifying outliers in object recognition learning using artificial intelligence, classification device, and robot
WO2020067615A1 (fr) Method for controlling a video anonymization device to improve anonymization performance, and device therefor
WO2015182802A1 (fr) Image processing method and system using same, and recording medium for executing same
WO2023200280A1 (fr) Method for estimating heart rate based on a corrected image, and device therefor
WO2021071258A1 (fr) Artificial intelligence-based mobile security image learning device and method
WO2017155315A1 (fr) Size-specific vehicle classification method for a local area, and vehicle detection method using same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18846947

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18846947

Country of ref document: EP

Kind code of ref document: A1