CN110956055A - Face image living body detection method and system - Google Patents


Info

Publication number
CN110956055A
CN110956055A (application CN201811125836.9A)
Authority
CN
China
Prior art keywords
detected
image
vector
face
color channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811125836.9A
Other languages
Chinese (zh)
Inventor
李海青
侯广琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongke Hongxing Technology Co Ltd
Original Assignee
Beijing Zhongke Hongxing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Hongxing Technology Co Ltd filed Critical Beijing Zhongke Hongxing Technology Co Ltd
Priority to CN201811125836.9A priority Critical patent/CN110956055A/en
Publication of CN110956055A publication Critical patent/CN110956055A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a face image liveness detection method and system. The detection method comprises the following steps: collecting depth information and a face image of a face to be detected, and extracting the color channel information of the face image; obtaining an image to be detected from the color channel information and the depth information; acquiring all feature points of the image to be detected to form a feature vector; reducing the dimension of the feature vector to obtain a score vector with a preset number of elements; and comparing the elements of the score vector, taking the preset category corresponding to the largest element as the category of the face to be detected. The embodiment of the invention obtains the depth information of the face to be detected and the color channel information of the face image, fuses the two to obtain the image to be detected, and finally obtains a score vector that determines the category of the face to be detected.

Description

Face image living body detection method and system
Technical Field
The invention relates to the technical field of face recognition, in particular to a face image liveness detection method and system.
Background
With the advancement of technology, biometric identification is increasingly applied in daily life. Face recognition is widely used as a biometric technology because it is easy to collect, contactless, and has a high recognition rate. Face liveness detection is an important component of a face recognition system: by distinguishing whether the object in front of the camera is a real person or a fake, it can effectively improve the security of the system. Current face liveness detection methods mainly identify a single-frame face image under visible light with machine learning. However, because such liveness detection only examines the RGB channel information of the face, planar face fakes such as printed face photos and face videos are difficult to identify effectively with the prior art, so that an impostor could pass the face recognition.
Disclosure of Invention
In order to solve the problems in the prior art, at least one embodiment of the invention provides a face image live detection method and system.
In a first aspect, an embodiment of the present invention provides a face image live detection method, where the detection method includes:
collecting depth information and a face image of a face to be detected, and extracting color channel information of the face image;
obtaining an image to be detected according to the color channel information and the depth information;
acquiring all characteristic points of the image to be detected to form a characteristic vector;
reducing the dimension of the feature vector to obtain a fractional vector of a preset element number;
and comparing the sizes of all elements in the fractional vector, and taking a preset category corresponding to the largest element as the category of the face to be detected.
Based on the above technical solutions, the embodiments of the present invention may be further improved as follows.
With reference to the first aspect, in a first embodiment of the first aspect, the obtaining all feature points of the image to be detected to form a feature vector specifically includes:
extracting all the characteristic points of the image to be detected based on a grid model constructed by a deep convolutional neural network;
and converting the characteristic points into characteristic vectors.
With reference to the first embodiment of the first aspect, in a second embodiment of the first aspect, the performing dimensionality reduction on the feature vector to obtain a fractional vector of a preset number of elements specifically includes:
and reducing the dimension of the feature vector based on a softmax classifier to obtain a fraction vector of a preset element number.
With reference to the second embodiment of the first aspect, in a third embodiment of the first aspect, the detection method further includes:
calculating a loss function of the fractional vector through a softmax loss function;
optimizing the loss function by using a back propagation algorithm;
updating the softmax classifier with the loss function.
With reference to the first aspect, in a fourth embodiment of the first aspect, the obtaining an image to be detected according to the color channel information and the depth information specifically includes:
converting the color channel information into an original data matrix;
adding a matrix layer to the original data matrix;
adding the depth information into the matrix layer to obtain a data matrix to be detected;
and obtaining the image to be detected according to the data matrix to be detected.
With reference to the first aspect or any one of the first, second, third, or fourth embodiments of the first aspect, in a fifth embodiment of the first aspect, the category of the face to be detected includes: living human faces, printed human face pictures and human face videos.
In a second aspect, an embodiment of the present invention provides a face image living body detection system, where the detection system includes: the system comprises a data acquisition subsystem, a first data processing subsystem, a second data processing subsystem, a third data processing subsystem and a judgment subsystem;
the data acquisition subsystem is used for acquiring depth information and a face image of a face to be detected and extracting color channel information of the face image;
the first data processing subsystem is used for obtaining an image to be detected according to the color channel information and the depth information;
the second data processing subsystem is used for acquiring all characteristic points of the image to be detected to form a characteristic vector;
the third data processing subsystem is used for reducing the dimension of the characteristic vector to obtain a fractional vector with a preset element number;
and the judging subsystem is used for comparing the sizes of all elements in the fractional vector, and taking the preset category corresponding to the largest element as the category of the face to be detected.
With reference to the second aspect, in a first embodiment of the second aspect, the second data processing subsystem is specifically configured to extract all the feature points of the image to be detected based on a mesh model constructed by a deep convolutional neural network; and converting the feature points into feature vectors.
With reference to the first embodiment of the second aspect, in a second embodiment of the second aspect, the detection system further comprises: an optimization processing subsystem;
the third data processing subsystem is specifically used for carrying out dimensionality reduction on the feature vector based on a softmax classifier to obtain a fractional vector with a preset element number;
the optimization processing subsystem is specifically used for calculating a loss function of the fractional vector through a softmax loss function; optimizing the loss function by using a back propagation algorithm; updating the softmax classifier with the loss function.
With reference to the second aspect or any one of the first and second embodiments of the second aspect, in a third embodiment of the second aspect, the first data processing subsystem is specifically configured to convert the color channel information into an original data matrix; adding a matrix layer to the original data matrix; adding the depth information into the matrix layer to obtain a data matrix to be detected; and obtaining the image to be detected according to the data matrix to be detected.
Compared with the prior art, the technical scheme of the invention has the following advantages: the embodiment of the invention obtains the depth information of the face to be detected and the color channel information in the face image, fuses the depth information and the color channel information to obtain the image to be detected, and finally obtains the score vector for confirming the category of the face to be detected.
Drawings
FIG. 1 is a schematic flow chart of a face image live detection method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a face image live detection method according to another embodiment of the present invention;
FIG. 3 is a schematic flow chart of a face image live detection method according to another embodiment of the present invention;
fig. 4 is a schematic flow chart of a human face image live detection method according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram of a face image live detection system according to yet another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1, a face image live detection method provided in an embodiment of the present invention includes:
and S11, collecting the depth information and the face image of the face to be detected, and extracting the color channel information of the face image.
In this embodiment, the depth information of the image refers to a depth map of the input picture. The depth map is a grayscale map in which closer regions appear lighter and farther regions appear darker, so it actually reflects the three-dimensional information and spatial structure of the scene. There are many methods for acquiring the depth information of an image; most commonly, the depth image is captured with a TOF (time-of-flight) camera. The principle of a TOF camera is as follows: continuous near-infrared pulses are emitted toward the target scene, and a sensor receives the light pulses reflected back by the object. By comparing the phase difference between the emitted light pulse and the reflected light pulse, the transmission delay can be calculated, which yields the distance between the object and the emitter and finally the depth image. The color channel information of the face image depends on the color model: an RGB image has three channels (red, green and blue), while a CMYK image has four channels (cyan, magenta, yellow and black). Each color channel stores the information of one color component of the image, and superimposing the colors of all channels produces the colors of the pixels in the image. In this embodiment, the color channel information of the face image may be the three-channel information of an RGB image: the input picture forms a three-dimensional tensor of size height × width × 3, and since the R, G and B channels each contain different information, the RGB channels as a whole describe the input picture. The RGB channel information of the image is obtained directly from an ordinary color camera.
The pictures taken by the color camera are transmitted into the device as a picture matrix of size height × width × 3, where each height × width layer is one channel; the three layers correspond to the R, G and B channels respectively.
And S12, obtaining an image to be detected according to the color channel information and the depth information.
In this embodiment, the obtained face image is composed of color channel information, each color channel information represents a picture of the color in the face image, the depth information is used as an independent channel to be fused with the color channel information of the face image, the depth information of the image reflects three-dimensional space information of the image, and the combination of the depth information and the channel information can have a good effect on the living body detection problem.
For example, as shown in fig. 2, the method for obtaining the image to be detected according to the color channel information and the depth information in step S12 includes:
and S21, converting the color channel information into an original data matrix.
In this step, since the color channel information may be used as a whole to describe information of the picture, the color channel information is converted into an original data matrix, each layer in the original data matrix corresponds to one color channel information, and the face picture is converted into matrix data.
And S22, adding a matrix layer to the original data matrix.
A dimension is added on the basis of the original matrix data, i.e. the original height × width × 3 data matrix is extended to a height × width × 4 data matrix. Elements at the same position in the data matrix then represent information at the same position on the face to be detected, including both color information and depth information.
And S23, adding the depth information into the matrix layer to obtain the data matrix to be detected.
In this embodiment, the depth information is combined with the original RGB channel information as a fourth channel. The specific method is as follows: a dimension is added on the basis of the original RGB channel image matrix, i.e. the original height × width × 3 data matrix is extended to height × width × 4; the depth channel data is written into the newly added fourth layer while the first three layers remain the original RGB channel data, thereby stacking the depth data with the RGB channel data into a higher-dimensional input.
And S24, obtaining an image to be detected according to the data matrix to be detected.
The new data structure is used as the input for liveness detection. Directly stacking the depth information with the RGB channel information is a simple and convenient way to fuse the depth information, and it lets the depth information pass through every layer of the network, so the network can make fuller use of the facial features contained in the depth information and achieve a better classification result.
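The channel-stacking described in steps S21 to S24 can be sketched as follows. This is a minimal illustration assuming NumPy arrays and an RGB image and depth map that are already aligned to the same height and width; the function name and image size are illustrative, not from the patent.

```python
import numpy as np

def fuse_depth(rgb, depth):
    """Stack a depth map onto an H x W x 3 RGB image as a fourth channel,
    producing the H x W x 4 data matrix to be detected."""
    assert rgb.ndim == 3 and rgb.shape[2] == 3   # expect height x width x 3
    assert depth.shape == rgb.shape[:2]          # depth map aligned with image
    # Add a trailing channel axis to the depth map, then concatenate.
    return np.concatenate([rgb, depth[:, :, np.newaxis]], axis=2)

rgb = np.zeros((480, 640, 3), dtype=np.float32)   # placeholder RGB data
depth = np.ones((480, 640), dtype=np.float32)     # placeholder depth map
fused = fuse_depth(rgb, depth)                    # shape (480, 640, 4)
```

The first three layers of `fused` remain the original RGB channels and the fourth layer carries the depth, matching the layout described above.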
And S13, acquiring all characteristic points of the image to be detected to form a characteristic vector.
Because the information represented by the elements at the same position in the data matrix to be detected in the image to be detected is the information at the same position of the face to be detected, the characteristic points in the image to be detected are obtained by comparing the information at each position on the face to be detected.
For example, as shown in fig. 3, the method for acquiring all feature points of the image to be detected in step S13 to form a feature vector includes:
s31, extracting all feature points of the image to be detected based on the grid model constructed by the deep convolutional neural network.
A convolution operation is performed on the image to be detected by the deep convolutional neural network to obtain its convolutional layers, and the feature points of the image to be detected are computed from these layers. The feature points can be obtained by collecting the feature points of each color channel together with the feature points of the depth channel. For example, a region of dense color in a color channel, i.e. a region whose number of pixels exceeds a preset threshold, can be taken as a feature point, and a region with a large abrupt change in depth can be taken as a feature point in the depth channel; regions that yield feature points in both the color channel information and the depth information are then taken as the feature points of the image to be detected, thereby screening the feature points.
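The screening idea above can be illustrated with a rough sketch. The patent gives no concrete formulas, so the thresholds, the density criterion (mean channel value), and the depth-discontinuity criterion (gradient magnitude) below are all assumptions chosen for illustration only.

```python
import numpy as np

def candidate_feature_mask(fused, color_thresh=0.5, depth_jump=0.1):
    """Keep positions whose color is dense (above a threshold) and whose
    depth changes abruptly (large depth gradient), mirroring the screening
    described above. Thresholds are illustrative assumptions."""
    rgb, depth = fused[:, :, :3], fused[:, :, 3]
    color_dense = rgb.mean(axis=2) > color_thresh    # dense-color regions
    gy, gx = np.gradient(depth)                      # discrete depth gradient
    depth_abrupt = np.hypot(gx, gy) > depth_jump     # abrupt depth change
    return color_dense & depth_abrupt                # both criteria must hold

# Toy input: bright colors everywhere, a depth step down the middle.
fused = np.zeros((8, 8, 4))
fused[:, :, :3] = 0.8
fused[:, 4:, 3] = 1.0
mask = candidate_feature_mask(fused)   # True only near the depth step
```

Positions flagged `True` would be the candidate feature points carried forward into the feature vector.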
And S32, converting the characteristic points into characteristic vectors.
The information of the image to be detected is input as an element of the characteristic vector, so that data storage and data calculation are facilitated, and various items of information of the image to be detected are described through the characteristic vector.
And S14, reducing the dimensions of the feature vectors to obtain fraction vectors with preset element numbers.
And summarizing elements which represent the information of the image to be detected in the feature vector, realizing the dimensionality reduction of the feature vector, and obtaining a score vector, wherein each element in the score vector represents the score of different classes of the face to be detected, namely, the score vector for confirming the class of the face to be detected is obtained by calculating the feature vector.
For example, the feature vector may be processed by a softmax classifier, which reduces its dimension to obtain a score vector with a preset number of elements. Specifically, if the feature vector is a 20-dimensional vector, the softmax classifier contains a 20 × x parameter matrix, where x is the number of elements in the score vector; multiplying the feature vector by this matrix yields an x-dimensional score vector. The parameter matrix in the softmax classifier may be obtained by deriving it backwards from the results of specific experiments or through feedback compensation in simulation experiments.
And S15, comparing the sizes of all elements in the score vectors, and taking the preset class corresponding to the largest element as the class of the face to be detected.
The dimension reduction of the feature vector in the above steps yields a score vector. The elements of the score vector are compared to find the element with the maximum value, and the category corresponding to that element is determined to be the category of the face to be detected. For example, if the face to be detected is a living face, the depth values underlying the feature vector are not all consistent, so the element of the score vector derived from the depth information will be higher; if the depth values are all consistent, the depth-derived element of the score vector will be zero or relatively low. The remaining score-vector values, such as those for a printed face photo or a face video, are compared in the same way to determine the category of the face to be detected.
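Steps S14 and S15 can be sketched together: project the feature vector with the classifier's parameter matrix, normalise with softmax, and take the category of the largest element. The random parameter matrix and the class labels below are illustrative; in practice the matrix is learned as described in the following embodiment.

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(feature, W, classes):
    """Reduce a feature vector to an x-dimensional score vector via a
    20 x x parameter matrix, then pick the largest element's category."""
    scores = softmax(feature @ W)
    return scores, classes[int(np.argmax(scores))]

rng = np.random.default_rng(0)
feature = rng.normal(size=20)            # 20-dimensional feature vector
W = rng.normal(size=(20, 3))             # illustrative (untrained) parameters
classes = ["living face", "printed face photo", "face video"]
scores, predicted = classify(feature, W, classes)
```

Each element of `scores` is the score of one preset category, and `predicted` is the category of the face to be detected.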
As shown in fig. 4, the embodiment of the present invention further provides a face image live detection method, and compared with the detection method shown in fig. 1, the differences are that:
and S41, calculating a loss function of the fraction vector through a softmax loss function.
And calculating a loss function in the process of processing the feature vector by a softmax classifier through a softmax loss function, and adjusting various parameters in the processing process through the loss function so as to reduce errors of subsequent calculation and improve the accuracy.
S42, and optimizing the loss function by using a back propagation algorithm.
In the back-propagation algorithm, the learning process consists of a forward propagation pass and a backward propagation pass. In the forward pass, the input information goes from the input layer through the hidden layers, is processed layer by layer, and is passed to the output layer. If the expected output value is not obtained at the output layer, the sum of squared errors between the output and the expectation is taken as the objective function, and backward propagation is carried out: the partial derivatives of the objective function with respect to the weights of each neuron are computed layer by layer to form the gradient of the objective function with respect to the weight vector, and this gradient is used as the basis for modifying the weights. The network learns as the weights are modified; that is, the parameters of the loss function are further optimized through the back-propagation algorithm, improving the accuracy of the algorithm.
And S43, updating the softmax classifier through a loss function.
And updating the softmax classifier according to the loss function, and reducing the calculation error when classifying next time.
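Steps S41 to S43 can be sketched for the final softmax layer alone. This is a minimal single-sample illustration under assumed dimensions; a real implementation would train on batches of labelled live/fake samples and back-propagate through the whole network, not just this layer.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def update_classifier(W, feature, label, lr=0.1):
    """One optimisation step: compute the softmax (cross-entropy) loss of the
    score vector, back-propagate its gradient through the linear layer, and
    update the classifier's parameter matrix."""
    p = softmax(feature @ W)                 # current score vector
    target = np.zeros_like(p)
    target[label] = 1.0                      # one-hot expected output
    loss = -np.log(p[label])                 # softmax loss for this sample
    grad_W = np.outer(feature, p - target)   # dLoss/dW via back-propagation
    return W - lr * grad_W, loss             # gradient-descent update

rng = np.random.default_rng(1)
feature = rng.normal(size=20)
W = rng.normal(size=(20, 3))
losses = []
for _ in range(50):
    W, loss = update_classifier(W, feature, label=0)
    losses.append(float(loss))
```

After repeated updates the loss decreases, which is the sense in which the softmax classifier is "updated with the loss function" to reduce the classification error next time.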
As shown in fig. 5, an embodiment of the present invention provides a face image live detection system, where the detection system includes: the system comprises a data acquisition subsystem, a first data processing subsystem, a second data processing subsystem, a third data processing subsystem and a judgment subsystem.
In this embodiment, the data acquisition subsystem is configured to acquire depth information of a face to be detected and a face image, and extract color channel information of the face image.
In this embodiment, the first data processing subsystem is configured to obtain an image to be detected according to color channel information and depth information, and specifically, the first data processing subsystem converts the color channel information into an original data matrix; adding a matrix layer to the original data matrix; adding the depth information into a matrix layer to obtain a data matrix to be detected; and obtaining an image to be detected according to the data matrix to be detected.
In this embodiment, the second data processing subsystem is configured to obtain feature vectors formed by all feature points of the image to be detected, and specifically, extract all feature points of the image to be detected based on a mesh model constructed by a deep convolutional neural network; and converting the feature points into feature vectors.
In this embodiment, the third data processing subsystem is configured to perform dimension reduction on the feature vector to obtain a fractional vector of a preset number of elements, and the third data processing subsystem performs dimension reduction on the feature vector based on the softmax classifier to obtain a fractional vector of the preset number of elements.
In this embodiment, the judging subsystem is configured to compare sizes of elements in the score vector, and use a preset category corresponding to a largest element as a category of the face to be detected.
In this embodiment, the detection system further includes: the optimization processing subsystem is specifically used for calculating a loss function of the fraction vector through a softmax loss function; optimizing the loss function by using a back propagation algorithm; the softmax classifier is updated by a loss function.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A face image living body detection method is characterized by comprising the following steps:
collecting depth information and a face image of a face to be detected, and extracting color channel information of the face image;
obtaining an image to be detected according to the color channel information and the depth information;
acquiring all characteristic points of the image to be detected to form a characteristic vector;
reducing the dimension of the feature vector to obtain a fractional vector of a preset element number;
and comparing the sizes of all elements in the fractional vector, and taking a preset category corresponding to the largest element as the category of the face to be detected.
2. The detection method according to claim 1, wherein the obtaining of all feature points of the image to be detected constitutes a feature vector, and specifically comprises:
extracting all the characteristic points of the image to be detected based on a grid model constructed by a deep convolutional neural network;
and converting the characteristic points into characteristic vectors.
3. The detection method according to claim 2, wherein the reducing the dimensions of the feature vector to obtain a fractional vector of a preset number of elements specifically comprises:
and reducing the dimension of the feature vector based on a softmax classifier to obtain a fraction vector of a preset element number.
4. The detection method according to claim 3, characterized in that the detection method further comprises:
calculating a loss function of the fractional vector through a softmax loss function;
optimizing the loss function by using a back propagation algorithm;
updating the softmax classifier with the loss function.
5. The detection method according to claim 1, wherein obtaining the image to be detected according to the color channel information and the depth information specifically comprises:
converting the color channel information into an original data matrix;
adding a matrix layer to the original data matrix;
adding the depth information into the matrix layer to obtain a data matrix to be detected;
and obtaining the image to be detected according to the data matrix to be detected.
6. The detection method according to any one of claims 1 to 5, wherein the classes of the faces to be detected include: living human faces, printed human face pictures and human face videos.
7. A face image liveness detection system, the detection system comprising: the system comprises a data acquisition subsystem, a first data processing subsystem, a second data processing subsystem, a third data processing subsystem and a judgment subsystem;
the data acquisition subsystem is used for acquiring depth information and a face image of a face to be detected and extracting color channel information of the face image;
the first data processing subsystem is used for obtaining an image to be detected according to the color channel information and the depth information;
the second data processing subsystem is used for acquiring all characteristic points of the image to be detected to form a characteristic vector;
the third data processing subsystem is used for reducing the dimension of the characteristic vector to obtain a fractional vector with a preset element number;
and the judging subsystem is used for comparing the sizes of all elements in the fractional vector, and taking the preset category corresponding to the largest element as the category of the face to be detected.
8. The detection system according to claim 7, wherein the second data processing subsystem is specifically configured to extract all the feature points of the image to be detected based on a mesh model constructed by a deep convolutional neural network; and converting the feature points into feature vectors.
9. The detection system of claim 8, further comprising: an optimization processing subsystem;
the third data processing subsystem is specifically configured to reduce the dimensionality of the feature vector with a softmax classifier to obtain a score vector with a preset number of elements;
and the optimization processing subsystem is specifically configured to compute the loss of the score vector through a softmax loss function, optimize the loss with a back-propagation algorithm, and update the softmax classifier with the optimized loss.
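Claims 8–9 describe a standard softmax classifier trained with a softmax (cross-entropy) loss and back-propagation. A self-contained NumPy sketch under those assumptions — the feature dimension, learning rate, single linear layer, and single training sample are illustrative choices, not specified by the claims:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_features, n_classes = 128, 3                 # 3 preset categories (claim 6)
W = rng.normal(scale=0.01, size=(n_features, n_classes))

x = rng.normal(size=(n_features,))             # feature vector from the CNN
y = 0                                          # ground-truth class index

for _ in range(200):
    scores = softmax(x @ W)                    # score vector (dim. reduction)
    loss = -np.log(scores[y])                  # softmax / cross-entropy loss
    # Back-propagated gradient of the loss w.r.t. W: x (p - onehot)^T.
    grad = np.outer(x, scores - np.eye(n_classes)[y])
    W -= 0.1 * grad                            # update the softmax classifier

print(int(np.argmax(softmax(x @ W))))  # 0
```

In practice the same update would be applied over mini-batches of labeled live/spoof samples rather than a single vector, with the gradient also propagated back into the convolutional layers.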
10. The detection system according to any one of claims 7 to 9, wherein the first data processing subsystem is configured to convert the color channel information into an original data matrix; add a matrix layer to the original data matrix; fill the matrix layer with the depth information to obtain a data matrix to be detected; and obtain the image to be detected from the data matrix to be detected.
CN201811125836.9A 2018-09-26 2018-09-26 Face image living body detection method and system Pending CN110956055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811125836.9A CN110956055A (en) 2018-09-26 2018-09-26 Face image living body detection method and system

Publications (1)

Publication Number Publication Date
CN110956055A true CN110956055A (en) 2020-04-03

Family

ID=69964685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811125836.9A Pending CN110956055A (en) 2018-09-26 2018-09-26 Face image living body detection method and system

Country Status (1)

Country Link
CN (1) CN110956055A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007219899A (en) * 2006-02-17 2007-08-30 Seiko Epson Corp Personal identification device, personal identification method, and personal identification program
CN105956572A * 2016-05-15 2016-09-21 北京工业大学 Living-body face detection method based on a convolutional neural network
CN107122709A * 2017-03-17 2017-09-01 上海云从企业发展有限公司 Liveness detection method and device
CN107451510A * 2016-05-30 2017-12-08 北京旷视科技有限公司 Liveness detection method and liveness detection system
CN107590430A * 2017-07-26 2018-01-16 百度在线网络技术(北京)有限公司 Liveness detection method, apparatus, device and storage medium
CN107609536A (en) * 2017-09-29 2018-01-19 百度在线网络技术(北京)有限公司 Information generating method and device

Similar Documents

Publication Publication Date Title
CN108038456B (en) Anti-deception method in face recognition system
CN110956094B (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network
CN109543640B (en) Living body detection method based on image conversion
CN108460356B (en) Face image automatic processing system based on monitoring system
US8571271B2 (en) Dual-phase red eye correction
KR102462818B1 (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
Premebida et al. Pedestrian detection combining RGB and dense LIDAR data
US11100350B2 (en) Method and system for object classification using visible and invisible light images
CN110298297B (en) Flame identification method and device
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
Yamamoto et al. General improvement method of specular component separation using high-emphasis filter and similarity function
CN108282644B (en) Single-camera imaging method and device
CN107103589B (en) A kind of highlight area restorative procedure based on light field image
US8831357B2 (en) System and method for image and video search, indexing and object classification
JP2002109525A (en) Method for changing image processing path based on image conspicuousness and appealingness
KR20180133657A (en) Multiple view point vehicle recognition apparatus using machine learning
KR101906796B1 (en) Device and method for image analyzing based on deep learning
KR101907883B1 (en) Object detection and classification method
CN111815528A (en) Bad weather image classification enhancement method based on convolution model and feature fusion
Chai A probabilistic framework for building extraction from airborne color image and DSM
Hussain et al. Color constancy algorithm for mixed-illuminant scene images
CN105184771A (en) Adaptive moving target detection system and detection method
US20050238209A1 (en) Image recognition apparatus, image extraction apparatus, image extraction method, and program
CN116612468A (en) Three-dimensional target detection method based on multi-mode fusion and depth attention mechanism
WO2017101347A1 (en) Method and device for identifying and encoding animation video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200403