CN112508972A - Information identification method and device based on artificial intelligence - Google Patents
Information identification method and device based on artificial intelligence
- Publication number
- CN112508972A (application CN202110036434.7A)
- Authority
- CN
- China
- Prior art keywords
- area
- neural network
- pixel value
- picture
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Probability & Statistics with Applications (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Biomedical Technology (AREA)
- Evolutionary Biology (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of artificial intelligence and provides an artificial-intelligence-based information identification method and device for improving information extraction efficiency. The method comprises the following steps: collecting pictures as a data set, and, after denoising and binarizing the pictures in the data set, selecting a certain area as a parameter area; performing threshold cutting on the parameter area according to a segmentation starting point, a maximum pixel value and a minimum pixel value to obtain a cut image; converting the cut image into a gray-scale image and then into a binary two-dimensional array; performing cyclic re-cutting from the smallest pixel gap to the largest to obtain a re-cut picture; and inputting the re-cut picture as a sample into a trained neural network for identification. The efficiency of information identification is significantly improved, and the computing resources required by subsequent neural network feature extraction are reduced.
Description
Technical Field
The invention relates to the field of artificial intelligence, in particular to an information identification method based on artificial intelligence.
Background
Information recognition, particularly recognition of information in images, requires segmenting the images, but existing segmentation methods, especially those for human-face images, cannot directly perform targeted extraction of particular parts of the face.
Disclosure of Invention
The invention provides an information identification method based on artificial intelligence for improving the information extraction efficiency.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
the information identification method based on artificial intelligence comprises the following steps:
collecting pictures as a data set, and, after denoising and binarizing the pictures in the data set, selecting a certain area as the parameter area;
determining a maximum pixel value and a minimum pixel value in the parameter region, and selecting a segmentation starting point from the parameter region;
performing threshold cutting on a parameter area according to the segmentation starting point, the maximum pixel value and the minimum pixel value to obtain a cut image;
converting the cut image into a gray-scale image and then converting the gray-scale image into a binary two-dimensional array;
dividing the cut image into regions using 8-connectivity, whereby, starting from any pixel in a region and without leaving that region, every other pixel of the region can be reached through a combination of moves in eight directions (up, down, left, right, upper-left, upper-right, lower-left and lower-right), thereby obtaining a two-dimensional array containing a first region value, a second region value and a boundary-line value;
overlaying the computed lowest point value of the first region and highest point value of the second region onto the two-dimensional array of the cut image, calculating the size of the gap between the upper and lower regions for each vertical column of pixels, and performing cyclic re-cutting from the smallest gap to the largest to obtain a re-cut picture;
taking the picture after the re-cutting as a sample, labeling the sample to obtain a training set and a testing set, inputting the training set into a convolutional neural network for training to obtain a trained neural network model;
and performing threshold cutting and cyclic re-cutting on the picture to be recognized, then inputting it into the trained neural network for recognition.
First, the main parameter area containing the information to be identified is cut out of the image; the parameter-area image is then cut a second time, and the re-cut image is input into a neural network for feature extraction.
The efficiency of information identification is obviously improved, and the calculation resources required by the subsequent neural network feature extraction are reduced.
Preferably, the threshold cut is:
traversing the pixel values of the pixel points of the parameter image starting from the segmentation starting point, setting to 1 all pixel points whose values lie between the minimum pixel value and the maximum pixel value, setting to 0 all pixel points whose values lie outside that range, and extracting the pixel points set to 1 to obtain the cut image.
Preferably, the method for determining the maximum pixel value and the minimum pixel value is:
selecting a sub-region on the parameter region according to a preset geometric figure, setting the maximum pixel value of the sub-region as the maximum pixel value of the parameter region, and setting the minimum pixel value of the sub-region as the minimum pixel value of the parameter region.
Preferably, the 8-connectivity neighborhood is:
N8(p) = N4(p) ∪ {(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)}.
Preferably, during the cyclic re-cutting from the smallest pixel gap to the largest, the connectivity of the cut picture is calculated once per cycle, that is, it is checked whether the number of features has reached a specified number; if so, the cutting is stopped.
Preferably, the convolutional neural network comprises a first convolutional neural network, which extracts high-dimensional feature quantities from the re-cut picture to obtain a high-dimensional feature-quantity data set.
Preferably, the convolutional neural network comprises a second convolutional neural network comprising a plurality of convolutional layers. The second convolutional neural network extracts feature-quantity subsets from the high-dimensional feature quantities through convolution operations, and the dimensionality of the extracted subsets decreases gradually as the convolutional layers are stacked. A clustering algorithm clusters the N classes of feature-quantity subsets to obtain K cluster centers for each subset, giving N × K clusters in total. The difference between a high-dimensional feature quantity's distance to its farthest same-class cluster and its distance to its nearest different-class cluster forms the loss function; the second convolutional neural network is trained to minimize this loss, thereby maximizing the separation between classes, and the trained neural network model is obtained.
The information identification device based on artificial intelligence comprises:
a data acquisition module that acquires a picture as a data set;
the preprocessing module is used for denoising and binarizing the pictures in the data set and then selecting a certain region as a parameter region;
a first segmentation module that determines a maximum pixel value and a minimum pixel value in the parameter region, selects a segmentation starting point from the parameter region, performs threshold cutting on the parameter area according to the segmentation starting point, the maximum pixel value and the minimum pixel value to obtain a cut image, and converts the cut image into a gray-scale image and then into a binary two-dimensional array;
a connectivity processing module that divides the cut image into regions using 8-connectivity, whereby, starting from any pixel in a region and without leaving that region, every other pixel of the region can be reached through a combination of moves in eight directions (up, down, left, right, upper-left, upper-right, lower-left and lower-right), obtaining a two-dimensional array containing a first region value, a second region value and a boundary-line value;
a second cutting module that overlays the computed lowest point value of the first region and highest point value of the second region onto the two-dimensional array of the cut image, calculates the size of the gap between the upper and lower regions for each vertical column of pixels, and performs cyclic re-cutting from the smallest gap to the largest to obtain a re-cut picture;
a training module that takes the re-cut picture as a sample, labels the sample to obtain a training set and a test set, and inputs the training set into the convolutional neural network for training to obtain a trained neural network model;
and a recognition module that performs threshold cutting and cyclic re-cutting on the picture to be recognized and inputs it into the trained neural network for recognition.
A terminal comprising a processor and a memory, the memory having stored therein a computer program, the processor executing the computer program to implement the method described above.
A storage medium stores a computer program executable to implement the above method.
Compared with the prior art, the invention has the beneficial effects that: the efficiency of information identification is obviously improved, and the calculation resources required by the subsequent neural network feature extraction are reduced; when the features of the face image are extracted, the extraction can be performed on the area above the eyebrow or below the eyebrow, and the extraction efficiency is improved.
Drawings
Fig. 1 is a schematic flow chart of an artificial intelligence-based information identification method.
Fig. 2 is a schematic structural diagram of an information recognition device based on artificial intelligence.
Detailed Description
The following examples are further illustrative of the present invention and are not intended to be limiting thereof.
In some embodiments of the present application, taking image feature acquisition of a human face as an example, the information recognition method based on artificial intelligence includes:
S100, collecting pictures containing human faces as a data set, taking pictures collected at a parking-lot entrance as a specific example; denoising and binarizing the pictures in the data set, and selecting a certain area as the parameter area, the parameter area being a preset ellipse or circle used to intercept the face region;
S200, determining a maximum pixel value and a minimum pixel value in the parameter area, and selecting a segmentation starting point from the parameter area;
S300, performing threshold cutting on the parameter area according to the segmentation starting point, the maximum pixel value and the minimum pixel value to obtain a cut image; the cut image is an elliptical or circular image containing a human face;
S400, converting the cut image into a gray-scale image and then converting the gray-scale image into a binary two-dimensional array;
S500, dividing the cut image into regions using 8-connectivity, whereby, starting from any pixel in a region and without leaving that region, every other pixel of the region can be reached through a combination of moves in eight directions (up, down, left, right, upper-left, upper-right, lower-left and lower-right), obtaining a two-dimensional array containing a first region value, a second region value and a boundary-line value;
S600, overlaying the computed lowest point value of the first region and highest point value of the second region onto the two-dimensional array of the cut image, calculating the size of the gap between the upper and lower regions for each vertical column of pixels, and performing cyclic re-cutting from the smallest gap to the largest to obtain a re-cut picture; through this connectivity-based cutting, the image containing the human face is divided into two regions with the eyebrows as the boundary line;
S700, taking the re-cut picture as a sample, labeling the sample to obtain a training set and a test set, and inputting the training set into a convolutional neural network for training to obtain a trained neural network model;
and S800, performing threshold cutting and cyclic re-cutting on the picture to be recognized, then inputting it into the trained neural network for recognition.
Extracting features with the neural network from the twice-cut image improves extraction efficiency and saves computing resources.
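As an illustrative sketch of step S400 (converting the cut image to grayscale and then to a binary two-dimensional array), the following minimal Python might be used. The 0.299/0.587/0.114 luminance weights and the fixed threshold of 128 are assumptions for illustration only; the patent does not specify them.

```python
# Sketch of step S400: RGB cut image -> grayscale -> binary 0/1 2D array.
# Luminance weights and the threshold of 128 are illustrative assumptions.

def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grayscale."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Map each grayscale pixel to 1 (>= threshold) or 0 (< threshold)."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray_image]

image = [[(200, 200, 200), (10, 10, 10)],
         [(255, 0, 0), (0, 0, 255)]]
binary = binarize(to_gray(image))
print(binary)  # [[1, 0], [0, 0]]
```

In practice an adaptive threshold (for example Otsu's method) would likely replace the fixed value, but the fixed threshold keeps the sketch self-contained.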
In some embodiments of the present application, the parameter area is a preset semicircle, the cut image is an elliptical or circular image containing half of a human face, and only the single eyebrow in the image serves as the boundary line.
In some embodiments of the present application, the threshold cut is:
traversing the pixel values of the pixel points of the parameter image starting from the segmentation starting point, setting to 1 all pixel points whose values lie between the minimum pixel value and the maximum pixel value, setting to 0 all pixel points whose values lie outside that range, and extracting the pixel points set to 1 to obtain the cut image.
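A minimal sketch of this threshold cut, assuming the pixel range is inclusive (the text does not say whether the bounds are inclusive or exclusive):

```python
# Sketch of the threshold cut: pixels whose values fall within
# [min_val, max_val] become 1, all others become 0. Inclusive bounds
# are an assumption not stated in the patent text.

def threshold_cut(image, min_val, max_val):
    """Binarize a grayscale 2D array by a [min_val, max_val] pixel range."""
    return [[1 if min_val <= px <= max_val else 0 for px in row]
            for row in image]

region = [[12, 80, 200],
          [60, 150, 255]]
print(threshold_cut(region, 50, 180))  # [[0, 1, 0], [1, 1, 0]]
```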
In some embodiments of the present application, the maximum pixel value and the minimum pixel value are determined by:
selecting a sub-region on the parameter region according to a preset geometric figure, setting the maximum pixel value of the sub-region as the maximum pixel value of the parameter region, and setting the minimum pixel value of the sub-region as the minimum pixel value of the parameter region.
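This sub-region rule can be sketched as follows; a rectangular sub-region is assumed here purely for illustration, since the text says only that the sub-region follows "a preset geometric figure":

```python
# Sketch: the parameter region's max/min pixel values are taken from a
# preset sub-region. A rectangle is an assumed stand-in for the patent's
# unspecified "preset geometric figure".

def subregion_min_max(image, top, left, height, width):
    """Return (min, max) pixel values of a rectangular sub-region."""
    window = [row[left:left + width] for row in image[top:top + height]]
    values = [px for row in window for px in row]
    return min(values), max(values)

region = [[10, 40, 90],
          [20, 70, 30],
          [5, 200, 60]]
print(subregion_min_max(region, 0, 0, 2, 2))  # (10, 70)
```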
In some embodiments of the present application, the 8-connectivity neighborhood is:
N8(p) = N4(p) ∪ {(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)}.
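The formula above can be written out directly: N4(p) is the standard 4-neighborhood of a pixel p = (x, y), and N8(p) adds the four diagonal neighbors.

```python
# The 8-neighborhood N8(p) = N4(p) plus the four diagonal neighbors.

def n4(x, y):
    """4-neighborhood of pixel (x, y): up, down, left, right."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def n8(x, y):
    """8-neighborhood: N4 plus the four diagonal neighbors."""
    return n4(x, y) | {(x + 1, y + 1), (x + 1, y - 1),
                       (x - 1, y + 1), (x - 1, y - 1)}

print(sorted(n8(0, 0)))  # the eight pixels surrounding (0, 0)
```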
In some embodiments of the present application, during the cyclic re-cutting from the smallest pixel gap to the largest, the connectivity of the cut picture is calculated once per cycle, that is, it is checked whether the number of features has reached a specified number; if so, the cutting is stopped.
Taking the semicircular parameter area as an example, it is checked whether the number of nostrils has reached the specified number, specifically, whether the number of closed circles has reached the specified number.
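The connectivity check described here, labelling 8-connected regions and counting them to see whether the number of closed regions (for example nostrils) has reached the specified count, can be sketched as follows. The breadth-first traversal and the label values (1, 2, ...) are implementation choices, not mandated by the text.

```python
# Sketch of 8-connected region labelling (step S500) and region counting:
# every pixel reachable from a foreground pixel through moves in the eight
# directions receives the same region label.
from collections import deque

EIGHT_DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def label_regions_8(binary):
    """Label 8-connected foreground (value 1) regions of a binary 2D array.

    Returns (label array, number of regions found)."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] == 1 and labels[r][c] == 0:
                current += 1  # start a new region
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in EIGHT_DIRS:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

grid = [[1, 0, 0],
        [0, 1, 0],
        [0, 0, 0],
        [1, 1, 0]]
labels, count = label_regions_8(grid)
print(count)  # 2: the two diagonal pixels join one region under 8-connectivity
```

Counting `count` against the specified number would implement the stopping test of the cyclic re-cutting.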
In some embodiments of the present application, the convolutional neural network includes a first convolutional neural network, which extracts high-dimensional feature quantities from the re-cut picture to obtain a high-dimensional feature-quantity data set.
In some embodiments of the present application, the convolutional neural network comprises a second convolutional neural network comprising a plurality of convolutional layers. The second convolutional neural network extracts feature-quantity subsets from the high-dimensional feature quantities through convolution operations, and the dimensionality of the extracted subsets decreases gradually as the convolutional layers are stacked. A clustering algorithm clusters the N classes of feature-quantity subsets to obtain K cluster centers for each subset, giving N × K clusters in total. The difference between a high-dimensional feature quantity's distance to its farthest same-class cluster and its distance to its nearest different-class cluster forms the loss function; the second convolutional neural network is trained to minimize this loss, thereby maximizing the separation between classes, and the trained neural network model is obtained.
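The distance-difference loss described above can be sketched for a single feature vector as follows. The Euclidean metric, the sign convention, and the `centers_by_class` layout (class label mapped to K cluster centers, for example obtained by K-means on each class's feature subsets) are assumptions; the text only names the farthest same-class and nearest different-class cluster distances and says their difference forms the loss.

```python
# Sketch of the distance-difference loss for one feature vector:
# loss = (distance to farthest same-class cluster center)
#        - (distance to nearest other-class cluster center),
# minimized during training. Metric and sign convention are assumptions.
import math

def euclid(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def distance_loss(feature, label, centers_by_class):
    """centers_by_class maps class label -> list of K cluster centers."""
    same = max(euclid(feature, c) for c in centers_by_class[label])
    other = min(euclid(feature, c)
                for lbl, centers in centers_by_class.items()
                if lbl != label for c in centers)
    return same - other  # negative when the feature sits well inside its class

centers = {0: [(0.0, 0.0), (1.0, 0.0)], 1: [(5.0, 5.0)]}
print(distance_loss((0.5, 0.0), 0, centers))
```

Minimizing this quantity over a training set pulls features toward their own class's clusters and away from other classes' clusters, which matches the separation objective the patent describes.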
Using two neural networks for feature extraction, in particular extracting parameters from the twice-cut image, makes it possible to extract features of the face below the eyebrows effectively while saving computing resources.
The information identification device based on artificial intelligence, in some embodiments of the present application, includes:
a data acquisition module 100, said data acquisition module 100 acquiring pictures as a data set;
the preprocessing module 200 is used for denoising and binarizing the pictures in the data set and then selecting a certain region as a parameter region;
a first segmentation module 300, the first segmentation module 300 determining a maximum pixel value and a minimum pixel value in the parameter region, selecting a segmentation starting point from the parameter region, performing threshold cutting on the parameter area according to the segmentation starting point, the maximum pixel value and the minimum pixel value to obtain a cut image, and converting the cut image into a gray-scale image and then into a binary two-dimensional array;
a connectivity processing module 400 that divides the cut image into regions using 8-connectivity, whereby, starting from any pixel in a region and without leaving that region, every other pixel of the region can be reached through a combination of moves in eight directions (up, down, left, right, upper-left, upper-right, lower-left and lower-right), obtaining a two-dimensional array containing a first region value, a second region value and a boundary-line value;
a second cutting module 500 that overlays the computed lowest point value of the first region and highest point value of the second region onto the two-dimensional array of the cut image, calculates the size of the gap between the upper and lower regions for each vertical column of pixels, and performs cyclic re-cutting from the smallest gap to the largest to obtain a re-cut picture;
a training module 600, the training module 600 taking the re-cut picture as a sample, labeling the sample to obtain a training set and a test set, and inputting the training set into the convolutional neural network for training to obtain a trained neural network model;
and a recognition module 700 that performs threshold cutting and cyclic re-cutting on the picture to be recognized and inputs it into the trained neural network for recognition.
A terminal, in some embodiments of the present application, comprises a processor and a memory, the memory having stored therein a computer program, the processor executing the computer program to implement the method described above.
A storage medium stores, in some embodiments of the present application, a computer program executable to implement the above-described method.
First, the main parameter area containing the information to be identified is cut out of the image; the parameter-area image is then cut a second time, and the re-cut image is input into a neural network for feature extraction.
The efficiency of information identification is obviously improved, and the calculation resources required by the subsequent neural network feature extraction are reduced.
The above is a detailed description of possible embodiments of the present invention. These embodiments are not intended to limit the scope of the invention; all equivalent implementations or modifications that do not depart from the scope of the invention are intended to be covered by the present claims.
Claims (10)
1. An information identification method based on artificial intelligence, characterized by comprising the following steps:
collecting pictures as a data set, and, after denoising and binarizing the pictures in the data set, selecting a certain area as the parameter area;
determining a maximum pixel value and a minimum pixel value in the parameter region, and selecting a segmentation starting point from the parameter region;
performing threshold cutting on a parameter area according to the segmentation starting point, the maximum pixel value and the minimum pixel value to obtain a cut image;
converting the cut image into a gray-scale image and then converting the gray-scale image into a binary two-dimensional array;
dividing the cut image into regions using 8-connectivity, whereby, starting from any pixel in a region and without leaving that region, every other pixel of the region can be reached through a combination of moves in eight directions (up, down, left, right, upper-left, upper-right, lower-left and lower-right), thereby obtaining a two-dimensional array containing a first region value, a second region value and a boundary-line value;
overlaying the computed lowest point value of the first region and highest point value of the second region onto the two-dimensional array of the cut image, calculating the size of the gap between the upper and lower regions for each vertical column of pixels, and performing cyclic re-cutting from the smallest gap to the largest to obtain a re-cut picture;
taking the picture after the re-cutting as a sample, labeling the sample to obtain a training set and a testing set, inputting the training set into a convolutional neural network for training to obtain a trained neural network model;
and performing threshold cutting and cyclic re-cutting on the picture to be recognized, then inputting it into the trained neural network for recognition.
2. The artificial intelligence based information recognition method of claim 1, wherein the threshold cut is:
traversing the pixel values of the pixel points of the parameter image starting from the segmentation starting point, setting to 1 all pixel points whose values lie between the minimum pixel value and the maximum pixel value, setting to 0 all pixel points whose values lie outside that range, and extracting the pixel points set to 1 to obtain the cut image.
3. The artificial intelligence based information recognition method of claim 1, wherein the maximum pixel value and the minimum pixel value are determined by:
selecting a sub-region on the parameter region according to a preset geometric figure, setting the maximum pixel value of the sub-region as the maximum pixel value of the parameter region, and setting the minimum pixel value of the sub-region as the minimum pixel value of the parameter region.
4. The artificial intelligence based information recognition method of claim 1, wherein the 8-connectivity neighborhood is:
N8(p) = N4(p) ∪ {(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)}.
5. The artificial intelligence based information recognition method of claim 1, wherein, during the cyclic re-cutting from the smallest pixel gap to the largest, the connectivity of the cut picture is calculated once per cycle, that is, it is checked whether the number of features has reached a specified number, and if so, the cutting is stopped.
6. The artificial intelligence based information recognition method of claim 1, wherein the convolutional neural network comprises a first convolutional neural network, which extracts high-dimensional feature quantities from the re-cut picture to obtain a high-dimensional feature-quantity data set.
7. The artificial intelligence based information recognition method of claim 1, wherein the convolutional neural network comprises a second convolutional neural network comprising a plurality of convolutional layers; the second convolutional neural network extracts feature-quantity subsets from the high-dimensional feature quantities through convolution operations, the dimensionality of the extracted subsets decreasing gradually as the convolutional layers are stacked; a clustering algorithm clusters the N classes of feature-quantity subsets to obtain K cluster centers for each subset, giving N × K clusters in total; the difference between a high-dimensional feature quantity's distance to its farthest same-class cluster and its distance to its nearest different-class cluster forms the loss function, and the second convolutional neural network is trained to minimize this loss function to obtain the trained neural network model.
8. An information recognition apparatus based on artificial intelligence, comprising:
a data acquisition module that acquires a picture as a data set;
the preprocessing module is used for denoising and binarizing the pictures in the data set and then selecting a certain region as a parameter region;
a first segmentation module that determines a maximum pixel value and a minimum pixel value in the parameter region, selects a segmentation starting point from the parameter region, performs threshold cutting on the parameter area according to the segmentation starting point, the maximum pixel value and the minimum pixel value to obtain a cut image, and converts the cut image into a gray-scale image and then into a binary two-dimensional array;
a connectivity processing module that divides the cut image into regions using 8-connectivity, whereby, starting from any pixel in a region and without leaving that region, every other pixel of the region can be reached through a combination of moves in eight directions (up, down, left, right, upper-left, upper-right, lower-left and lower-right), obtaining a two-dimensional array containing a first region value, a second region value and a boundary-line value;
a second cutting module that overlays the computed lowest point value of the first region and highest point value of the second region onto the two-dimensional array of the cut image, calculates the size of the gap between the upper and lower regions for each vertical column of pixels, and performs cyclic re-cutting from the smallest gap to the largest to obtain a re-cut picture;
a training module that takes the re-cut picture as a sample, labels the sample to obtain a training set and a test set, and inputs the training set into the convolutional neural network for training to obtain a trained neural network model;
and a recognition module that performs threshold cutting and cyclic re-cutting on the picture to be recognized and inputs it into the trained neural network for recognition.
9. A terminal comprising a processor and a memory, the memory having stored therein a computer program, the processor executing the computer program to implement the method of any one of claims 1 to 7.
10. A storage medium having stored thereon a computer program executable to implement the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110036434.7A CN112508972A (en) | 2021-01-12 | 2021-01-12 | Information identification method and device based on artificial intelligence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112508972A true CN112508972A (en) | 2021-03-16 |
Family
ID=74952218
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110036434.7A Pending CN112508972A (en) | 2021-01-12 | 2021-01-12 | Information identification method and device based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112508972A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110182517A1 (en) * | 2010-01-20 | 2011-07-28 | Duke University | Segmentation and identification of layered structures in images |
CN103425985A (en) * | 2013-08-28 | 2013-12-04 | 山东大学 | Method for detecting forehead wrinkles on face |
CN104881639A (en) * | 2015-05-14 | 2015-09-02 | 江苏大学 | Method of detection, division, and expression recognition of human face based on layered TDP model |
CN109272466A (en) * | 2018-09-19 | 2019-01-25 | 维沃移动通信有限公司 | A kind of tooth beautification method and device |
CN109583333A (en) * | 2018-11-16 | 2019-04-05 | 中证信用增进股份有限公司 | Image-recognizing method based on water logging method and convolutional neural networks |
CN110490057A (en) * | 2019-07-08 | 2019-11-22 | 特斯联(北京)科技有限公司 | A kind of self-adaptive identification method and system based on face big data artificial intelligence cluster |
CN110826408A (en) * | 2019-10-09 | 2020-02-21 | 西安工程大学 | Face recognition method by regional feature extraction |
Non-Patent Citations (1)
Title |
---|
GUAN RUIKUN et al.: "Face++-Based 'Face-Scanning' Classroom Attendance System", Information Systems Engineering * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210118144A1 (en) | Image processing method, electronic device, and storage medium | |
CN105447441B (en) | Face authentication method and device | |
WO2024001123A1 (en) | Image recognition method and apparatus based on neural network model, and terminal device | |
CN110046574A (en) | Safety cap based on deep learning wears recognition methods and equipment | |
US11475572B2 (en) | Systems and methods for object detection and recognition | |
CN111027377B (en) | Double-flow neural network time sequence action positioning method | |
CN111126175A (en) | Facial image recognition algorithm based on deep convolutional neural network | |
CN111259908A (en) | Machine vision-based steel coil number identification method, system, equipment and storage medium | |
CN108876795A (en) | A kind of dividing method and system of objects in images | |
WO2023124278A1 (en) | Image processing model training method and apparatus, and image classification method and apparatus | |
CN115439804A (en) | Monitoring method and device for high-speed rail maintenance | |
CN110222647B (en) | Face in-vivo detection method based on convolutional neural network | |
CN103745204A (en) | Method of comparing physical characteristics based on nevus spilus points | |
CN116503848B (en) | Intelligent license plate recognition method, device, equipment and storage medium | |
CN117079339B (en) | Animal iris recognition method, prediction model training method, electronic equipment and medium | |
CN113688930A (en) | Thyroid nodule calcification recognition device based on deep learning | |
CN117877068A (en) | Mask self-supervision shielding pixel reconstruction-based shielding pedestrian re-identification method | |
CN112508972A (en) | Information identification method and device based on artificial intelligence | |
CN116824141A (en) | Livestock image instance segmentation method and device based on deep learning | |
CN110059742A (en) | Safety protector wearing recognition methods and equipment based on deep learning | |
CN112380966B (en) | Monocular iris matching method based on feature point re-projection | |
CN115471901A (en) | Multi-pose face frontization method and system based on generation of confrontation network | |
CN102087705B (en) | Iris identification method based on blanket dimension and lacunarity | |
CN110136100B (en) | Automatic classification method and device for CT slice images | |
CN111898473A (en) | Driver state real-time monitoring method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20210316 |