CN110175584A - A kind of facial feature extraction reconstructing method - Google Patents
A kind of facial feature extraction reconstructing method
- Publication number
- CN110175584A CN110175584A CN201910461666.XA CN201910461666A CN110175584A CN 110175584 A CN110175584 A CN 110175584A CN 201910461666 A CN201910461666 A CN 201910461666A CN 110175584 A CN110175584 A CN 110175584A
- Authority
- CN
- China
- Prior art keywords
- image
- face
- feature extraction
- facial feature
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to the field of Internet technology, and in particular to a facial feature extraction and reconstruction method. The method comprises obtaining an image from a picture library, image preprocessing, obtaining the face region, face localization, obtaining feature parameters and displaying the rendered result, where image preprocessing includes median filtering, image graying, Sobel edge extraction, contrast enhancement, similarity calculation and binarization. The multi-level structure established by the invention can be modified and maintained. Feature localization — the goal of face recognition — determines the position of each individual face in the image, and each facial structure is located and computed by labeling the face region. Facial feature localization detects the presence and position of facial features, such as the eyes, nose, nostrils, mouth and lips. Through skin-color detection, the face region is acquired with high accuracy, with a success rate above 98%, and at high speed, greatly reducing the workload.
Description
Technical field
The present invention relates to the field of Internet technology, and specifically to a facial feature extraction and reconstruction method.
Background technique
Face recognition refers to judging whether a face is present in an input image or video. If a face is present, the method further provides the position and size of each face and the location of each major facial organ. Based on this information, the identity features contained in each face are extracted and compared with the faces in a known face database, so as to identify each face.
Since the distribution of the five sense organs is very similar across human faces, and a face is itself a deformable object, the endless variations of expression, posture, hairstyle and makeup all cause considerable trouble for correct identification, so the accuracy of facial feature extraction is low. In view of this, we provide a facial feature extraction and reconstruction method.
Summary of the invention
The purpose of the present invention is to provide a facial feature extraction and reconstruction method, to solve the problem raised in the background above: because the distribution of facial features is very similar across faces, and a face is itself a deformable object, the accuracy of facial feature extraction is low.
To achieve the above object, the present invention provides the following technical scheme:
A facial feature extraction and reconstruction method comprises collecting an image with a camera or obtaining one from a picture library, image preprocessing, obtaining the face region, face localization, obtaining feature parameters and displaying the rendered result. The specific steps are as follows:
S1: the application collects an image through the camera, or opens the picture library and selects an image from it;
S2: the captured or selected image undergoes image preprocessing so that its features are displayed prominently in the image;
S3: the face region is obtained according to skin color, its acquisition being realized through a color transform based on nonlinear skin-color segmentation;
S4: the face edge position and candidate features are first screened by color, then the positions of the eyes, nose and mouth are marked using the PCA algorithm and geometric features;
S5: the acquired positions of the eyes, nose and mouth serve as the feature parameters;
S6: the acquired feature parameters are recombined with the face edge position to obtain the final rendered image.
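As a concrete illustration (not part of the patent), steps S1 to S6 can be sketched as a pipeline in Python with NumPy; every function here is a hypothetical, heavily simplified stand-in for the patent's unspecified implementations, and the "skin" test is a toy red-dominance rule rather than the nonlinear skin-color segmentation actually described:

```python
import numpy as np

def preprocess(img):
    # Toy stand-in for S2: weighted-average graying (the method the
    # description later recommends).
    return 0.3 * img[..., 0] + 0.59 * img[..., 1] + 0.11 * img[..., 2]

def skin_mask(img):
    # Toy stand-in for S3: mark pixels whose red channel dominates,
    # a crude proxy for skin-color segmentation.
    return img[..., 0] > img.mean(axis=-1)

def locate_features(mask):
    # Toy stand-in for S4/S5: return the mask centroid as a single
    # "feature" position (row, column).
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return (ys.mean(), xs.mean())

# S1: a synthetic 4x4 RGB "image" with a reddish top-left block
img = np.zeros((4, 4, 3))
img[:2, :2, 0] = 200.0

gray = preprocess(img)        # S2
mask = skin_mask(img)         # S3
feat = locate_features(mask)  # S4/S5: centroid of the reddish block
```

The real method would replace each stub with the corresponding preprocessing, segmentation and PCA stages, but the data flow between the steps is the same.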
Preferably, the image preprocessing includes median filtering, image graying, Sobel edge extraction, contrast enhancement, similarity calculation and binarization.
Preferably, the median filtering smooths the image, reducing its visual noise.
Preferably, the image graying converts the color image to a grayscale image; the grayscale image reduces the total amount of information while retaining the main feature information of the face. The processing methods for image graying include the maximum-value method, the mean-value method and the weighted-average method;
Maximum-value method: set R, G and B equal to the largest of the three values, i.e.:
R = G = B = max(R, G, B); the maximum-value method yields a grayscale image of very high brightness;
Mean-value method: take the average of the three values R, G and B, i.e.:
R = G = B = (R + G + B) / 3; the mean-value method yields a grayscale image of soft brightness;
Weighted-average method: assign different weights to R, G and B according to importance and take the weighted average, i.e.:
R = G = B = (WR·R + WG·G + WB·B) / 3, where WR, WG and WB are the weights of R, G and B respectively; when WR/3 = 0.3, WG/3 = 0.59 and WB/3 = 0.11, this becomes:
R = G = B = 0.3R + 0.59G + 0.11B, which gives the most reasonable grayscale image.
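The three graying methods above can be written out directly in NumPy (an illustrative sketch, not the patent's code):

```python
import numpy as np

def gray_max(img):
    # Maximum-value method: R = G = B = max(R, G, B)
    return img.max(axis=-1)

def gray_mean(img):
    # Mean-value method: R = G = B = (R + G + B) / 3
    return img.mean(axis=-1)

def gray_weighted(img):
    # Weighted-average method with the weights given in the text:
    # R = G = B = 0.3 R + 0.59 G + 0.11 B
    w = np.array([0.3, 0.59, 0.11])
    return img @ w

px = np.array([[[100.0, 200.0, 50.0]]])  # a single RGB pixel
print(gray_max(px)[0, 0])       # 200.0 (brightest of the three)
print(gray_mean(px)[0, 0])      # ~116.67 (soft average)
print(gray_weighted(px)[0, 0])  # 153.5 (green weighted most heavily)
```

The weighted average dominates in practice because the green channel contributes most to perceived brightness, matching the 0.3/0.59/0.11 weights in the formula.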
Preferably, the Sobel edge extraction sharpens the image with a gradient differential, so that noise and stripes at image edges are enhanced. Sobel edge extraction takes differences across elements two rows or two columns apart, which enhances the elements on both sides of an edge, so the edge appears thick and bright.
Preferably, the image edge refers to the set of pixels in the image whose gray values exhibit a step or roof-shaped change; the edge detection method uses the Sobel operator.
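A minimal sketch of the Sobel operator just described (illustrative only; the 3x3 kernels are the standard Sobel kernels, and the naive loop convolution is for clarity, not speed):

```python
import numpy as np

# Standard Sobel kernels: each differences elements two columns (KX)
# or two rows (KY) apart, which is why both sides of an edge respond.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
KY = KX.T

def convolve2d(img, k):
    # Minimal 'valid'-mode 2-D correlation with a 3x3 kernel.
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def sobel_magnitude(img):
    gx = convolve2d(img, KX)  # horizontal gradient
    gy = convolve2d(img, KY)  # vertical gradient
    return np.hypot(gx, gy)   # gradient magnitude

# A vertical step edge: left half 0, right half 255
img = np.zeros((5, 6))
img[:, 3:] = 255.0
mag = sobel_magnitude(img)  # responds strongly on both sides of the step
```

Because the kernel spans three columns, the two columns adjacent to the step both receive a strong response, which is the "thick and bright" edge the text describes.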
Preferably, the contrast enhancement processes the image to pull apart its contrast, so that originally blurry edges become clear.
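The patent does not specify the enhancement formula; a common choice that matches the "pull apart the contrast" description is a linear contrast stretch (an assumption, sketched here in NumPy):

```python
import numpy as np

def stretch_contrast(img, lo=0.0, hi=255.0):
    # Linear contrast stretch: map the image's own min/max onto the
    # full [lo, hi] range, "pulling apart" nearby gray levels.
    mn, mx = img.min(), img.max()
    if mx == mn:
        return np.full_like(img, lo)  # flat image: nothing to stretch
    return lo + (img - mn) * (hi - lo) / (mx - mn)

dull = np.array([100.0, 110.0, 120.0])  # low-contrast gray values
print(stretch_contrast(dull))  # [  0.  127.5 255. ]
```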
Preferably, the similarity calculation is used to judge the degree of similarity between two objects, facilitating the determination of the binarization threshold.
Preferably, the binarization processes the acquired multi-level grayscale image into a binary image. The whole image then contains only the two values black and white; each pixel is represented by one bit, with "1" denoting black and "0" denoting white, which facilitates analysis, understanding and recognition, and reduces the amount of computation.
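The binarization step reduces to a single threshold comparison; a sketch following the text's convention that "1" is black foreground (the threshold value itself would come from the similarity calculation):

```python
import numpy as np

def binarize(gray, threshold):
    # Map a multi-level grayscale image to a two-valued image:
    # 1 ("black", foreground) for pixels at or below the threshold,
    # 0 ("white") for pixels above it.
    return (gray <= threshold).astype(np.uint8)

gray = np.array([[10, 200],
                 [90, 130]])
print(binarize(gray, 100))  # [[1 0]
                            #  [1 0]]
```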
Compared with the prior art, the beneficial effects of the present invention are: the multi-level structure established by this facial feature extraction and reconstruction method can be modified and maintained. Feature localization — the goal of face recognition — determines the position of each individual face in the image, and each facial structure is located and computed by labeling the face region. Facial feature localization detects the presence and position of facial features, such as the eyes, nose, nostrils, mouth and lips. Through skin-color detection, the accuracy of face-region acquisition is high, with a success rate above 98%, and the method is fast, greatly reducing the workload.
Brief description of the drawings
Fig. 1 is a flow diagram of the invention;
Fig. 2 is a block diagram of the image preprocessing of the invention;
Fig. 3 is a detailed block diagram of the face localization of the invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment 1
A facial feature extraction and reconstruction method, as shown in Figure 1, comprises collecting an image with a camera or obtaining one from a picture library, image preprocessing, obtaining the face region, face localization, obtaining feature parameters and displaying the rendered result. The specific steps are as follows:
S1: the application collects an image through the camera, or opens the picture library and selects an image from it;
S2: the captured or selected image undergoes image preprocessing so that its features are displayed prominently in the image;
S3: the face region is obtained according to skin color, its acquisition being realized through a color transform based on nonlinear skin-color segmentation;
S4: the face edge position and candidate features are first screened by color, then the positions of the eyes, nose and mouth are marked using the PCA algorithm and geometric features;
S5: the acquired positions of the eyes, nose and mouth serve as the feature parameters, as shown in Figure 3;
S6: the acquired feature parameters are recombined with the face edge position to obtain the final rendered image.
It is worth noting that the final rendered image can also undergo image restoration and be re-processed, to improve accuracy.
Further, as shown in Fig. 2, the image preprocessing includes median filtering, image graying, Sobel edge extraction, contrast enhancement, similarity calculation and binarization.
Specifically, the median filtering smooths the image and reduces its visual noise. During image acquisition, irregular noise often appears in the image due to various factors, and data may be lost during transmission and storage, affecting image quality. The process of handling noise is called filtering; filtering reduces the visual noise of the image.
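A minimal 3x3 median filter showing how this smoothing removes an isolated noisy pixel (an illustrative sketch; border pixels are left untouched for simplicity):

```python
import numpy as np

def median_filter(img, size=3):
    # Replace each interior pixel by the median of its size x size
    # neighbourhood, suppressing isolated (salt-and-pepper) noise.
    h, w = img.shape
    out = img.astype(float).copy()
    r = size // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

img = np.full((5, 5), 50.0)
img[2, 2] = 255.0            # a single noisy pixel
smoothed = median_filter(img)
print(smoothed[2, 2])        # 50.0 — the outlier is replaced by the
                             # median of its neighbourhood
```

Unlike a mean filter, the median discards the outlier entirely instead of smearing it into neighbouring pixels, which is why it suits the irregular noise described here.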
It is worth noting that the image graying converts the color image to a grayscale image; the grayscale image reduces the total amount of information while retaining the main feature information of the face. The processing methods for image graying include the maximum-value method, the mean-value method and the weighted-average method;
Maximum-value method: set R, G and B equal to the largest of the three values, i.e.:
R = G = B = max(R, G, B); the maximum-value method yields a grayscale image of very high brightness;
Mean-value method: take the average of the three values R, G and B, i.e.:
R = G = B = (R + G + B) / 3; the mean-value method yields a grayscale image of soft brightness;
Weighted-average method: assign different weights to R, G and B according to importance and take the weighted average, i.e.:
R = G = B = (WR·R + WG·G + WB·B) / 3, where WR, WG and WB are the weights of R, G and B respectively; when WR/3 = 0.3, WG/3 = 0.59 and WB/3 = 0.11, this becomes:
R = G = B = 0.3R + 0.59G + 0.11B, which gives the most reasonable grayscale image.
Further, the Sobel edge extraction sharpens the image with a gradient differential, enhancing the noise and stripes at image edges. Sobel edge extraction takes differences across elements two rows or two columns apart, enhancing the elements on both sides of an edge, so the edge appears thick and bright. The advantage of the Sobel approach: plain gradient sharpening also amplifies noise and stripes, but the Sobel operator overcomes this problem to some extent — the smoothing factor it introduces gives it a certain smoothing effect on random noise in the image; and because it differences elements two rows or two columns apart, the elements on both sides of an edge are enhanced, so the edge appears thick and bright.
It is worth noting that the image edge refers to the set of pixels in the image whose gray values exhibit a step or roof-shaped change; the edge detection method uses the Sobel operator.
In addition, the contrast enhancement processes the image to pull apart its contrast, so that originally blurry edges become clear.
It is worth noting that the similarity calculation is used to judge the degree of similarity between two objects — for example texts, fingerprints or faces — in order to facilitate the determination of the binarization threshold. The purpose of the skin-color similarity calculation is to determine the face region by computing which pixels are similar to facial skin color, to display the result as a grayscale image, and to provide a reference value from which the binarization threshold can be computed.
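The patent does not give the similarity formula; a common way to realize such a skin-similarity map is a Gaussian model in the CbCr color plane, where each pixel's similarity becomes a gray level and the maximum provides the binarization reference. The mean and covariance below are illustrative values from the literature, not the patent's:

```python
import numpy as np

MEAN = np.array([117.4, 148.6])                    # assumed (Cb, Cr) skin mean
COV_INV = np.linalg.inv(np.array([[97.0, 24.5],
                                  [24.5, 141.8]]))  # assumed covariance

def skin_likelihood(cb, cr):
    # Mahalanobis-based similarity in [0, 1]: 1 = most skin-like.
    d = np.stack([cb - MEAN[0], cr - MEAN[1]], axis=-1)
    m = np.einsum('...i,ij,...j->...', d, COV_INV, d)
    return np.exp(-0.5 * m)

# A pixel exactly at the model mean scores 1; a far-away pixel near 0.
print(skin_likelihood(np.array(117.4), np.array(148.6)))  # 1.0
```

Rendering `skin_likelihood` over a whole CbCr image yields exactly the grayscale similarity map the text describes, ready for thresholding.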
The binarization processes the acquired multi-level grayscale image into a binary image; the whole image contains only the two values black and white, each pixel is represented by one bit, with "1" denoting black and "0" denoting white, which facilitates analysis, understanding and recognition, and reduces the amount of computation.
The facial feature extraction and reconstruction method of the invention uses Visual C++ 6.0 with MFC as the development tool, adopts object-oriented methods, and is written in C++. Through functional refinement a general structure is established, which reduces tedium, increases the reusability and portability of the code, and improves efficiency; the established multi-level structure can be modified and maintained. All structures are open, and new methods can be added to support new functions without any threat to the original functional structure. The multi-level class structure established in the invention can be modified and maintained. Feature localization — the goal of face recognition — determines the position of each individual face in the image, and each facial organ is located and computed by labeling the face region. Facial feature localization detects the presence and position of facial features, such as the eyes, nose, nostrils, mouth and lips. Through skin-color detection, the accuracy of face-region acquisition is high, with a success rate above 98%, and the method is fast, greatly reducing the workload.
Embodiment 2
As a second embodiment of the invention, several factors may affect the facial feature extraction of face recognition while images are collected through the camera, as follows:
(1) Illumination variation: in face recognition, changes in lighting conditions often cause obvious changes in facial appearance. Shadows, occlusion, light and dark areas, dim light and highlights caused by illumination variation all greatly reduce the recognition rate. Illumination variation can come from differences in light direction or energy distribution, and is also affected by the 3D structure of the face. Methods to handle illumination variation fall into two classes: passive methods, which try to reduce the influence of illumination variation by learning how visible-spectrum images change with lighting; and active methods, which use active imaging technology so that the acquired images have the characteristics of images collected under fixed lighting conditions, or use acquisition modes that are unaffected by lighting changes.
(2) Pose variation: if the pose of the person changes while the facial image is collected, the resulting projective deformation stretches, compresses and occludes different parts of the face, causing large changes in the image. Facial pose has six degrees of freedom in three-dimensional space: translation along the X, Y and Z axes, and rotation about the X, Y and Z axes. Translation along the X and Y axes appears in the image as a change of face position; it can be corrected with a suitable detection method that obtains the displacement and applies a coordinate transform. Variation along the Z axis appears in the image as a change of scale; it can be corrected by scaling the two-dimensional image or the three-dimensional face. Rotations about the axes can be divided into in-plane rotation, vertical depth rotation and side depth rotation. In-plane rotation is rotation about the Z axis; vertical depth rotation, also called up-down or pitch rotation, is rotation about the X axis; side depth rotation, sometimes called left-right rotation or horizontal deflection, is rotation about the Y axis. Among these six degrees of freedom, the rotations about the X and Y axes are difficult to determine directly from the image. One way to overcome the problems brought by pose variation is to estimate the different poses of the face from the image, then try to transform it back to the standard pose and recognize it with a standard face recognition method. Another method is to learn and memorize the features under many poses, which amounts to establishing multiple pose models and can involve a large workload. Finally, one can also construct a 3D model of the head and extract pose-independent features from it to recognize the face.
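The three rotation types named above correspond to the standard 3D rotation matrices about X, Y and Z; a small NumPy sketch (illustrative, not from the patent) shows why in-plane rotation is the easy case:

```python
import numpy as np

def rot_x(a):  # vertical depth rotation (pitch), about X
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # side depth rotation (yaw), about Y
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # in-plane rotation (roll), about Z
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# An in-plane (Z-axis) rotation by 90 degrees maps the X axis onto Y:
# it stays entirely within the image plane, so it can be corrected by
# rotating the 2D image, whereas X/Y rotations move points in depth.
v = rot_z(np.pi / 2) @ np.array([1.0, 0.0, 0.0])
```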
The basic principles, main features and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the above embodiments; the above embodiments and the description only illustrate preferred forms of the invention and do not limit it. Without departing from the spirit and scope of the present invention, the invention admits various changes and improvements, which all fall within the scope of the claimed invention. The claimed scope of the invention is defined by the appended claims and their equivalents.
Claims (9)
1. A facial feature extraction and reconstruction method, characterized in that it comprises: collecting an image with a camera or obtaining an image from a picture library, image preprocessing, obtaining the face region, face localization, obtaining feature parameters and displaying the rendered result, with the following specific steps:
S1: the application collects an image through the camera, or opens the picture library and selects an image from it;
S2: the captured or selected image undergoes image preprocessing so that its features are displayed prominently in the image;
S3: the face region is obtained according to skin color, its acquisition being realized through a color transform based on nonlinear skin-color segmentation;
S4: the face edge position and candidate features are first screened by color, then the positions of the eyes, nose and mouth are marked using the PCA algorithm and geometric features;
S5: the acquired positions of the eyes, nose and mouth serve as the feature parameters;
S6: the acquired feature parameters are recombined with the face edge position to obtain the final rendered image.
2. The facial feature extraction and reconstruction method according to claim 1, characterized in that: the image preprocessing includes median filtering, image graying, Sobel edge extraction, contrast enhancement, similarity calculation and binarization.
3. The facial feature extraction and reconstruction method according to claim 2, characterized in that: the median filtering smooths the image, reducing its visual noise.
4. The facial feature extraction and reconstruction method according to claim 2, characterized in that: the image graying converts the color image to a grayscale image; the grayscale image reduces the total amount of information while retaining the main feature information of the face; the processing methods for image graying include the maximum-value method, the mean-value method and the weighted-average method;
Maximum-value method: set R, G and B equal to the largest of the three values, i.e.:
R = G = B = max(R, G, B); the maximum-value method yields a grayscale image of very high brightness;
Mean-value method: take the average of the three values R, G and B, i.e.:
R = G = B = (R + G + B) / 3; the mean-value method yields a grayscale image of soft brightness;
Weighted-average method: assign different weights to R, G and B according to importance and take the weighted average, i.e.:
R = G = B = (WR·R + WG·G + WB·B) / 3, where WR, WG and WB are the weights of R, G and B respectively; when WR/3 = 0.3, WG/3 = 0.59 and WB/3 = 0.11, this becomes:
R = G = B = 0.3R + 0.59G + 0.11B, which gives the most reasonable grayscale image.
5. The facial feature extraction and reconstruction method according to claim 2, characterized in that: the Sobel edge extraction sharpens the image with a gradient differential, enhancing the noise and stripes at image edges; Sobel edge extraction takes differences across elements two rows or two columns apart, enhancing the elements on both sides of an edge, so the edge appears thick and bright.
6. The facial feature extraction and reconstruction method according to claim 5, characterized in that: the image edge refers to the set of pixels in the image whose gray values exhibit a step or roof-shaped change; the edge detection method uses the Sobel operator.
7. The facial feature extraction and reconstruction method according to claim 2, characterized in that: the contrast enhancement processes the image to pull apart its contrast, so that originally blurry edges become clear.
8. The facial feature extraction and reconstruction method according to claim 2, characterized in that: the similarity calculation is used to judge the degree of similarity between two objects, facilitating the determination of the binarization threshold.
9. The facial feature extraction and reconstruction method according to claim 2, characterized in that: the binarization processes the acquired multi-level grayscale image into a binary image; the whole image contains only the two values black and white, each pixel is represented by one bit, with "1" denoting black and "0" denoting white, to facilitate analysis, understanding and recognition and to reduce the amount of computation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910461666.XA CN110175584A (en) | 2019-05-30 | 2019-05-30 | A kind of facial feature extraction reconstructing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910461666.XA CN110175584A (en) | 2019-05-30 | 2019-05-30 | A kind of facial feature extraction reconstructing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110175584A true CN110175584A (en) | 2019-08-27 |
Family
ID=67696692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910461666.XA Pending CN110175584A (en) | 2019-05-30 | 2019-05-30 | A kind of facial feature extraction reconstructing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110175584A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110647843A (en) * | 2019-09-23 | 2020-01-03 | 江苏集萃智能传感技术研究所有限公司 | Face image processing method |
CN112185495A (en) * | 2020-09-22 | 2021-01-05 | 深圳市宏泰和信息科技有限公司 | Medical equipment case data acquisition method and system |
CN113162918A (en) * | 2021-03-25 | 2021-07-23 | 重庆扬成大数据科技有限公司 | Method for extracting abnormal data under condition of rapidly mining four-in-one network |
CN113188662A (en) * | 2021-03-16 | 2021-07-30 | 云南电网有限责任公司玉溪供电局 | Infrared thermal imaging fault automatic identification system and method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982322A (en) * | 2012-12-07 | 2013-03-20 | 大连大学 | Face recognition method based on PCA (principal component analysis) image reconstruction and LDA (linear discriminant analysis) |
CN106682653A (en) * | 2017-03-09 | 2017-05-17 | 重庆信科设计有限公司 | KNLDA-based RBF neural network face recognition method |
CN109543518A (en) * | 2018-10-16 | 2019-03-29 | 天津大学 | A kind of human face precise recognition method based on integral projection |
- 2019-05-30: CN CN201910461666.XA patent/CN110175584A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102982322A (en) * | 2012-12-07 | 2013-03-20 | 大连大学 | Face recognition method based on PCA (principal component analysis) image reconstruction and LDA (linear discriminant analysis) |
CN106682653A (en) * | 2017-03-09 | 2017-05-17 | 重庆信科设计有限公司 | KNLDA-based RBF neural network face recognition method |
CN109543518A (en) * | 2018-10-16 | 2019-03-29 | 天津大学 | A kind of human face precise recognition method based on integral projection |
Non-Patent Citations (1)
Title |
---|
ZHANG06189: "Face Recognition Technology (FRT)", 道客巴巴 (Doc88) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110647843A (en) * | 2019-09-23 | 2020-01-03 | 江苏集萃智能传感技术研究所有限公司 | Face image processing method |
CN110647843B (en) * | 2019-09-23 | 2023-09-19 | 量准(上海)医疗器械有限公司 | Face image processing method |
CN112185495A (en) * | 2020-09-22 | 2021-01-05 | 深圳市宏泰和信息科技有限公司 | Medical equipment case data acquisition method and system |
CN113188662A (en) * | 2021-03-16 | 2021-07-30 | 云南电网有限责任公司玉溪供电局 | Infrared thermal imaging fault automatic identification system and method |
CN113162918A (en) * | 2021-03-25 | 2021-07-23 | 重庆扬成大数据科技有限公司 | Method for extracting abnormal data under condition of rapidly mining four-in-one network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112766160B (en) | Face replacement method based on multi-stage attribute encoder and attention mechanism | |
CN110175584A (en) | A kind of facial feature extraction reconstructing method | |
CN105518744B (en) | Pedestrian recognition methods and equipment again | |
CN104834898B (en) | A kind of quality classification method of personage's photographs | |
CN105139004B (en) | Facial expression recognizing method based on video sequence | |
Lin | Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network | |
CN106446872A (en) | Detection and recognition method of human face in video under low-light conditions | |
CN104794693B (en) | A kind of portrait optimization method of face key area automatic detection masking-out | |
CN106796449A (en) | Eye-controlling focus method and device | |
CN110738676A (en) | GrabCT automatic segmentation algorithm combined with RGBD data | |
CN105913456A (en) | Video significance detecting method based on area segmentation | |
CN107066969A (en) | A kind of face identification method | |
CN112634125B (en) | Automatic face replacement method based on off-line face database | |
CN108470178B (en) | Depth map significance detection method combined with depth credibility evaluation factor | |
CN108846343B (en) | Multi-task collaborative analysis method based on three-dimensional video | |
CN110032932A (en) | A kind of human posture recognition method based on video processing and decision tree given threshold | |
CN110021029A (en) | A kind of real-time dynamic registration method and storage medium suitable for RGBD-SLAM | |
CN105184273B (en) | A kind of dynamic image front face reconstructing system and method based on ASM | |
CN108288040A (en) | Multi-parameter face identification system based on face contour | |
Zhipeng et al. | Face detection system based on skin color model | |
KR20020085669A (en) | The Apparatus and Method for Abstracting Peculiarity of Two-Dimensional Image & The Apparatus and Method for Creating Three-Dimensional Image Using Them | |
Sablatnig et al. | Structural analysis of paintings based on brush strokes | |
CN106503611B (en) | Facial image eyeglass detection method based on marginal information projective iteration mirror holder crossbeam | |
CN115661903A (en) | Map recognizing method and device based on spatial mapping collaborative target filtering | |
CN109753912A (en) | A kind of multi-light spectrum palm print matching process based on tensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20190827 |