CN106372652A - Hair style identification method and hair style identification apparatus - Google Patents
Hair style identification method and hair style identification apparatus
- Publication number
- CN106372652A CN106372652A CN201610743694.7A CN201610743694A CN106372652A CN 106372652 A CN106372652 A CN 106372652A CN 201610743694 A CN201610743694 A CN 201610743694A CN 106372652 A CN106372652 A CN 106372652A
- Authority
- CN
- China
- Prior art keywords
- hair style
- image
- images
- hair
- recognized
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of image processing and discloses a hairstyle identification method and a hairstyle identification apparatus. The method comprises the steps of presetting an image library and N hairstyles, wherein the image library contains images corresponding to the N hairstyles, each hairstyle corresponds to at least 2 images, and N is a natural number; comparing an image to be identified with the images in the image library one by one to obtain the similarity between the image to be identified and each library image; and determining the hairstyle of the image to be identified from those similarities. The method and apparatus address the high hairstyle-identification failure rate caused by the low accuracy of hair detection.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a hairstyle identification method and a hairstyle identification apparatus.
Background technology
With the development of face detection and image recognition technology, image-based hairstyle identification is widely used in functions such as human-computer interaction, entertainment, and beautified selfies.
In the prior art, hairstyle identification in images is performed primarily on the basis of hair detection. However, in the course of making the present invention, the inventors found at least the following problem in the prior art: although hairstyle identification can be carried out on the basis of hair detection, the relatively low accuracy of hair detection makes good hairstyle identification difficult to achieve, which results in a high hairstyle-identification failure rate.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a hairstyle identification method and a hairstyle identification apparatus that can still accurately recognize the hairstyle in an image even when the scene around the person is complex and hair detection is inaccurate, significantly improving the discrimination and accuracy of hairstyle identification while offering good robustness.
To solve the above technical problem, the embodiments of the present invention provide a hairstyle identification method, comprising: presetting an image library and N hairstyles, wherein the image library contains images corresponding to the N hairstyles, each hairstyle corresponds to at least 2 images, and N is a natural number; comparing an image to be identified with each image in the image library one by one to respectively obtain the similarity between the image to be identified and each library image; and determining the hairstyle of the image to be identified using each of the similarities.
The embodiments of the present invention also provide a hairstyle identification apparatus, comprising: a presetting module for presetting an image library and N hairstyles, wherein the image library contains images corresponding to the N hairstyles, each hairstyle corresponds to at least 2 images, and N is a natural number; a comparing module for comparing an image to be identified with each image in the image library one by one and respectively obtaining the similarity between the image to be identified and each library image; and an identification module for determining the hairstyle of the image to be identified using each of the similarities.
Compared with the prior art, the embodiments of the present invention identify a hairstyle by comparing the image to be identified with pre-stored images and then determining the hairstyle from the similarities. Because there are multiple pre-stored images and their hairstyles are known, identifying a hairstyle through multiple comparisons greatly improves the discrimination and accuracy of hairstyle identification and provides better robustness.
In addition, the one-by-one comparison of the image to be identified with each image in the image library uses a preset bilateral (two-branch) deep convolutional neural network model. By fully exploiting a deep convolutional neural network to describe the texture and shape of a hairstyle, similar hairstyles are mapped close together and different hairstyles far apart in the discrimination space, so that hairstyles can be identified accurately.
In addition, the preset bilateral deep convolutional neural network model is obtained by the following method: preset a sample library containing M hairstyle images, where M is a natural number greater than 2; pair the hairstyle images in the sample library two by two, and according to the pairing result divide them into a same-hairstyle group and a different-hairstyle group; then, using a preset bilateral deep convolutional neural network framework, train on the hairstyle images in the same-hairstyle group and the different-hairstyle group respectively to obtain the preset bilateral deep convolutional neural network model. Dividing the sample-library images into same-hairstyle and different-hairstyle pairs through pairwise pairing allows the bilateral deep convolutional neural network model to be obtained quickly.
In addition, when the hairstyle images in the same-hairstyle group and the different-hairstyle group are trained on, each hairstyle image is a corrected hairstyle image. Correcting the hairstyle images increases the recognizability of the hairstyle in the image.
In addition, a hairstyle image is corrected by the following method: perform face recognition on the hairstyle image to obtain the face in it, and locate the key points of the face; then correct the hairstyle image using the positions of the key points. Face key points are located with the SDM (Supervised Descent Method) algorithm, and a geometric transformation is applied to the face image so that the two eyes lie on the same horizontal line, thereby achieving the image-correction effect.
In addition, after face recognition is performed on the hairstyle image, the method further comprises: on the hairstyle image, extending the recognized face region by a preset ratio and using it as the identification region. The training on the hairstyle images in the same-hairstyle group and the different-hairstyle group is then performed on the identification regions of those images. Using the recognized face region extended by a preset ratio as the identification region reduces the amount of data during training while ensuring that the hair region is included in the identification region.
Brief description of the drawings
Fig. 1 is a flowchart of the hairstyle identification method of the first embodiment of the present invention;
Fig. 2 is a flowchart of the hairstyle identification method of the second embodiment of the present invention;
Fig. 3 is a flowchart of the hairstyle identification method of the third embodiment of the present invention;
Fig. 4 is a schematic diagram of face detection in the hairstyle identification method of the third embodiment of the present invention;
Fig. 5 is a schematic diagram of face key-point location in the hairstyle identification method of the third embodiment of the present invention;
Fig. 6 is a flowchart of the training stage of the bilateral deep convolutional neural network model in the hairstyle identification method of the third embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the deep convolutional neural network framework in the hairstyle identification method of the third embodiment of the present invention;
Fig. 8 is a structural block diagram of the hairstyle identification apparatus of the fourth embodiment of the present invention;
Fig. 9 is a structural block diagram of the actual user-terminal device of the fifth embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are explained in detail below with reference to the accompanying drawings. However, those skilled in the art will understand that many technical details are given in each embodiment merely so that the reader may better understand the application; the technical solutions claimed in this application can still be realized without these technical details and with various changes and modifications based on the following embodiments.
The first embodiment of the present invention relates to a hairstyle identification method; the specific flow is shown in Fig. 1.
In step 101, an image library and 5 hairstyles are preset.
Specifically, the image library contains images corresponding to the 5 hairstyles, and each hairstyle corresponds to at least 2 images.
It should be noted that in this embodiment the hairstyles are: bob, crew cut, permed hair, pompadour, and fluffy. In practical applications the preset hairstyle categories are not limited to these 5; they may also include straight-bang long hair, the Sassoon cut, straight hair, and other hairstyles, which are not enumerated here.
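As an illustration only, the preset library of step 101 can be modeled as a mapping from hairstyle label to its sample images, with the at-least-2-images constraint checked on construction. The function and file names below are hypothetical, not from the patent:

```python
# Hypothetical sketch of the preset image library of step 101.
# Hairstyle names and file names are illustrative, not from the patent.
PRESET_HAIRSTYLES = ["bob", "crew cut", "permed", "pompadour", "fluffy"]

def build_image_library(images_by_style):
    """images_by_style: dict mapping hairstyle name -> list of image paths/ids."""
    for style in PRESET_HAIRSTYLES:
        samples = images_by_style.get(style, [])
        if len(samples) < 2:  # every hairstyle must correspond to at least 2 images
            raise ValueError(f"hairstyle {style!r} needs at least 2 images")
    return {s: list(images_by_style[s]) for s in PRESET_HAIRSTYLES}

library = build_image_library({
    "bob": ["bob_1.jpg", "bob_2.jpg"],
    "crew cut": ["crew_1.jpg", "crew_2.jpg"],
    "permed": ["perm_1.jpg", "perm_2.jpg"],
    "pompadour": ["pomp_1.jpg", "pomp_2.jpg"],
    "fluffy": ["fluffy_1.jpg", "fluffy_2.jpg"],
})
print(len(library))  # → 5
```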
In step 102, the similarity between the image to be identified and each library image is obtained.
Specifically, the image to be identified is compared one by one with each image in the image library, and the similarity between the image to be identified and each library image is obtained respectively.
It should be noted that the image to be identified in this embodiment contains a face.
In step 103, the hairstyle of the image to be identified is determined from the similarities.
In this embodiment, a hairstyle is identified by comparing the image to be identified with pre-stored images and then determining the hairstyle from the similarities. Because there are multiple pre-stored images and their hairstyles are known, identifying a hairstyle through multiple comparisons greatly improves the discrimination and accuracy of hairstyle identification and provides better robustness.
The second embodiment of the present invention relates to a hairstyle identification method. This embodiment is a refinement of the first embodiment: a preset bilateral deep convolutional neural network model is used, making the similarity between the image to be identified and each library image more accurate and easier to discriminate; the specific flow is shown in Fig. 2.
In step 201, an image library and 5 hairstyles are preset.
Step 201 in Fig. 2 is identical to step 101 in Fig. 1; both preset in the system the pre-stored images against which the image to be identified is compared, so the details are not repeated here.
In step 202, the similarity between the image to be identified and each library image is obtained using the preset bilateral deep convolutional neural network model.
Specifically, the image to be identified is compared one by one with each image in the image library through the preset bilateral deep convolutional neural network model, and the similarity between the image to be identified and each library image is then obtained respectively.
It should be noted that the preset bilateral deep convolutional neural network model can be obtained by the following method:
Preset a sample library containing M hairstyle images, where M is a natural number greater than 2.
First, pair the hairstyle images in the preset sample library two by two, and according to the pairing result divide them into a same-hairstyle group and a different-hairstyle group. The pairwise pairing may either allow a hairstyle image to appear in multiple pairs or forbid it.
For example, suppose the sample library contains 10 hairstyle images. With the repetition-allowed pairing scheme, the result is: image 1 forms a pair with each of images 2 to 10, image 2 forms a pair with each of images 3 to 10, image 3 forms a pair with each of images 4 to 10, and so on until all pairs are formed. With the no-repetition scheme, the result is: 1 and 2 form a pair, 3 and 4 form a pair, 5 and 6 form a pair, 7 and 8 form a pair, and 9 and 10 form a pair.
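The two pairing schemes in the example above can be sketched in a few lines of pure Python (function names are my own, not the patent's):

```python
from itertools import combinations

def pairs_with_repetition(images):
    """All unordered pairs: 1 with each of 2..10, 2 with each of 3..10, and so on."""
    return list(combinations(images, 2))

def pairs_without_repetition(images):
    """Disjoint pairs: (1,2), (3,4), ... so each image is used exactly once."""
    return [(images[i], images[i + 1]) for i in range(0, len(images) - 1, 2)]

samples = list(range(1, 11))               # 10 hairstyle images, numbered 1..10
print(len(pairs_with_repetition(samples)))  # → 45 pairs when repetition is allowed
print(pairs_without_repetition(samples))    # → [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)]
```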
Then, using the preset bilateral deep convolutional neural network framework, train on the hairstyle images in the same-hairstyle group and the different-hairstyle group respectively to obtain the preset bilateral deep convolutional neural network model.
In step 203, the hairstyle of the image to be identified is determined from the similarities.
Specifically, to determine the hairstyle of the image to be identified from the similarities, first select a predetermined number of images according to their similarity to the image to be identified; for example, sort the library images by similarity and take the top twenty results. The hairstyle of the image to be identified can then be determined from the hairstyles of the selected images; for example, let the selected results vote, and the hairstyle with the most votes is the hairstyle of the image to be identified.
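The select-then-vote step just described can be sketched as follows (a minimal sketch; the top-20 cutoff is the example value from the text, and all names are my own):

```python
from collections import Counter

def identify_hairstyle(similarities, k=20):
    """similarities: list of (similarity, hairstyle) pairs, one per library image.
    Take the k most similar images and vote on their known hairstyles."""
    top_k = sorted(similarities, key=lambda t: t[0], reverse=True)[:k]
    votes = Counter(style for _, style in top_k)
    return votes.most_common(1)[0][0]   # hairstyle with the most votes wins

scores = [(0.91, "bob"), (0.88, "bob"), (0.85, "permed"), (0.40, "crew cut")]
print(identify_hairstyle(scores, k=3))  # → bob (2 of the top-3 votes)
```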
In this embodiment, a hairstyle is identified by comparing the image to be identified with pre-stored images through the preset bilateral deep convolutional neural network model and then determining the hairstyle from the similarities, which further improves the discrimination and accuracy of hairstyle identification and provides better robustness.
The third embodiment of the present invention relates to a hairstyle identification method. This embodiment is a refinement of the second embodiment: by correcting the hairstyle images, the hairstyle in the image to be identified is recognized more accurately and discriminated more easily; the specific flow is shown in Fig. 3.
In step 301, an image library and 5 hairstyles are preset.
Step 301 in Fig. 3 is identical to step 101 in Fig. 1; both preset in the system the pre-stored images against which the image to be identified is compared, so the details are not repeated here.
In step 302, the hairstyle image is corrected.
Specifically, when a hairstyle image is corrected, it can be corrected by the following method:
First, perform face recognition on the hairstyle image to obtain the face in it, and locate the key points of the face.
It should be noted that in this embodiment face detection is performed with the existing Haar (Haar-like features) face detection method; as shown in Fig. 4, the face region is cropped from the image on which hairstyle identification is to be performed. Face key points are then located with the SDM (Supervised Descent Method) algorithm; as shown in Fig. 5, the eyebrows, eyes, mouth, nose, and parts obtained from any combination of these positions in the image to be identified serve as key points.
Then, correct the hairstyle image using the positions of the key points.
For example, rotate the hairstyle image using the key points, and/or deform the hairstyle image using the key points.
In step 303, the similarity between the image to be identified and each library image is obtained using the preset bilateral deep convolutional neural network model.
In step 304, the hairstyle of the image to be identified is determined from the similarities.
Steps 303 and 304 in Fig. 3 are identical to steps 202 and 203 in Fig. 2; both obtain the similarity between the image to be identified and each library image through the preset bilateral deep convolutional neural network model and determine the hairstyle of the image to be identified from those similarities, so the details are not repeated here.
The training of the bilateral deep convolutional neural network model used in the hairstyle identification method of this embodiment is described in detail below.
Step 601: perform face detection and key-point location on the input image.
Specifically, face detection is performed with the existing Haar face detection method, and face key-point location is performed with the SDM method.
Step 602: determine the face pose from the overall key-point positions and correct it.
Specifically, once the face key points are obtained, the geometric positions of the two eyes, eyebrows, mouth, nose, and so on are available, and the pose of the current face image is judged from them. Then, according to the positions of the two eyes, a geometric transformation is applied to the face image so that the two eyes lie on the same horizontal line, thereby correcting the image.
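The eyes-to-horizontal correction of step 602 amounts to rotating the image by the angle of the line through the two eye key points. A minimal sketch under simplified assumptions (rotating the key points themselves rather than image pixels, and using only the two eye centers):

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle (radians) the image must be rotated so the eyes become horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return -math.atan2(dy, dx)

def rotate_point(p, center, angle):
    """Rotate point p about center by angle (counter-clockwise, radians)."""
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + x * math.cos(angle) - y * math.sin(angle),
            center[1] + x * math.sin(angle) + y * math.cos(angle))

left, right = (30.0, 40.0), (70.0, 60.0)   # tilted eye key points (illustrative)
angle = eye_alignment_angle(left, right)
mid = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
new_left = rotate_point(left, mid, angle)
new_right = rotate_point(right, mid, angle)
# after rotation both eyes share the same y coordinate (same horizontal line)
print(abs(new_left[1] - new_right[1]) < 1e-9)  # → True
```

In a full implementation the same angle would drive a geometric (affine) transform of the whole image, but the rotation math is identical.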
Step 603: expand the face image.
Specifically, the face image detected by the Haar face detection method often does not contain the hair region, so the detected face image is expanded by 0.2 times upward and by 0.2 times to each side, and by 0.6 times downward, to bring the hairstyle region in.
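The expansion of step 603 can be sketched on a plain (x, y, w, h) bounding box; the 0.2/0.6 ratios are the ones stated above, everything else is illustrative:

```python
def expand_face_box(x, y, w, h):
    """Expand a detected face box so the hair region is included:
    0.2x of the box size to the left, to the right, and upward; 0.6x downward.
    Coordinates are image pixels with the y axis pointing down."""
    new_x = x - 0.2 * w
    new_y = y - 0.2 * h              # "upward" means a smaller y
    new_w = w + 2 * 0.2 * w          # 0.2w added on each side
    new_h = h + 0.2 * h + 0.6 * h    # 0.2h up plus 0.6h down
    return new_x, new_y, new_w, new_h

print(expand_face_box(100, 100, 50, 50))  # → (90.0, 90.0, 70.0, 90.0)
```

A real implementation would also clamp the expanded box to the image boundaries, which this sketch omits.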
Step 604: train the bilateral deep convolutional neural network model.
Specifically, using an efficient deep convolutional neural network framework, a large number of hairstyle samples are paired two by two and, according to the pairing result, divided into same-hairstyle and different-hairstyle sample pairs; training on pairs combined in this way yields the bilateral deep convolutional neural network model. The specific structure of the deep convolutional neural network framework is shown in Fig. 7:
First, two hairstyle images assigned by the pairwise pairing to the same-hairstyle group or the different-hairstyle group are taken from the sample library 701: the first hairstyle image (7021) and the second hairstyle image (7022). A convolution operation C is applied to each of the two images; after the convolution operation C, a pooling operation P is applied to each, completing the first round of convolution and pooling. A second round of convolution C and pooling P is then applied to both images. After the two rounds of convolution and pooling, an activation operation R is applied to both images; once the activation completes, both images pass through a fully connected layer F, and finally a probability is output.
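The essential property of the bilateral architecture above is two branches that share the same weights, joined only at the end. This can be sketched with the conv/pool/activation/FC stack collapsed into one shared feature function — a deliberate simplification for illustration, not the patent's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))   # shared weights: both branches use the same W

def branch(image_vec):
    """Stand-in for the conv -> pool -> conv -> pool -> activation -> FC stack:
    one shared linear map followed by a ReLU."""
    return np.maximum(W @ image_vec, 0.0)

def similarity(img_a, img_b):
    """Join the two branches: map the feature distance to a (0, 1] score."""
    dist = np.linalg.norm(branch(img_a) - branch(img_b))
    return float(np.exp(-dist))

x = rng.standard_normal(16)               # a toy "image" as a feature vector
print(similarity(x, x))                   # → 1.0 (identical inputs, distance 0)
noisy = x + 0.01 * rng.standard_normal(16)
print(0.0 < similarity(x, noisy) <= 1.0)  # → True
```

Training would then push `similarity` toward 1 for same-hairstyle pairs and toward 0 for different-hairstyle pairs, which is what makes same-hairstyle distances small and different-hairstyle distances large.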
Step 605: build the hairstyle identification image library.
Specifically, a sample library is built for each hairstyle at each viewing angle and each pose.
In this embodiment, the hairstyle image to be identified is corrected and then compared through the bilateral deep convolutional network model, which greatly simplifies the hairstyle identification process, improves identification efficiency, and adapts to a variety of complex scenes. By reducing sensitivity to face pose, the method greatly improves the discrimination and accuracy of hairstyle identification and provides better robustness.
The division of the above methods into steps is merely for clarity of description; when implemented, steps may be merged into one step, or one step may be split into multiple steps, and as long as the same logical relationship is preserved, all such variants fall within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, an algorithm or flow without changing the core design of that algorithm or flow also falls within the protection scope of this patent.
The fourth embodiment of the present invention relates to a hairstyle identification apparatus; its specific structure is shown in Fig. 8.
The identification apparatus 800 comprises: a presetting module 801, a comparing module 802, and an identification module 803.
The presetting module 801 presets an image library and N hairstyles, wherein the image library contains images corresponding to the N hairstyles, each hairstyle corresponds to at least 2 images, and N is a natural number.
The comparing module 802 compares the image to be identified one by one with each image in the image library and respectively obtains the similarity between the image to be identified and each library image.
The identification module 803 determines the hairstyle of the image to be identified using each of the similarities.
It can be seen that this embodiment is the apparatus embodiment corresponding to the first embodiment, and the two can be implemented in cooperation. The technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here; correspondingly, the technical details mentioned in this embodiment also apply to the first embodiment.
With the hairstyle identification apparatus provided by this embodiment, when a hairstyle is identified, the comparing module 802 compares the image to be identified with the images pre-stored by the presetting module 801, and the identification module 803 then determines the hairstyle of the image to be identified from the similarities. Because there are multiple pre-stored images and their hairstyles are known, identifying a hairstyle through multiple comparisons greatly improves the discrimination and accuracy of hairstyle identification and provides better robustness.
The actual device structure of a user terminal according to the present invention is described below.
The fifth embodiment of the present invention relates to a user terminal; its specific structure is shown in Fig. 9. The user terminal 900 comprises: a memory 901, a processor 902, and a display 903. The memory 901 stores code executable by the processor 902 and other information. The processor is the core of the terminal; the comparison functions of the apparatus embodiment above are mainly realized by the processor 902. The display shows the data processed by the processor; the display 903 also has a camera that can capture input and pass it to the processor 902 for processing.
In this embodiment, after the display 903 of the user terminal 900 captures an input hairstyle image, the captured image is passed to the processor 902 for face detection and key-point location, and face correction is finally performed. The corrected image is then evaluated by the bilateral deep convolutional neural network model pre-stored in the memory 901 to obtain its similarity to each preset image; the top-ranked images are sorted, and the image of the hairstyle obtained by the final vote is shown on the display.
It is worth noting that the modules involved in this embodiment are logical modules. In practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, to highlight the innovative part of the present invention, this embodiment does not introduce units that are less closely related to solving the technical problem addressed by the present invention, but this does not mean that no other units exist in this embodiment.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes instructions for causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art will understand that the above embodiments are specific embodiments for realizing the present invention, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present invention.
Claims (10)
1. A hairstyle identification method, characterized by comprising:
presetting an image library and N hairstyles, wherein the image library contains images corresponding to the N hairstyles, each hairstyle corresponds to at least 2 images, and N is a natural number;
comparing an image to be identified one by one with each image in the image library to respectively obtain the similarity between the image to be identified and each image;
determining the hairstyle of the image to be identified using each of the similarities.
2. The hairstyle identification method according to claim 1, characterized in that comparing the image to be identified one by one with each image in the image library specifically comprises: comparing the image to be identified one by one with each image in the image library using a preset bilateral deep convolutional neural network model.
3. The hairstyle identification method according to claim 2, characterized in that the preset bilateral deep convolutional neural network model is obtained by the following method:
presetting a sample library containing M hairstyle images, wherein M is a natural number greater than 2;
pairing the hairstyle images in the preset sample library two by two, and dividing the hairstyle images in the sample library into a same-hairstyle group and a different-hairstyle group according to the pairing result;
using a preset bilateral deep convolutional neural network framework, respectively training on the hairstyle images in the same-hairstyle group and the different-hairstyle group to obtain the preset bilateral deep convolutional neural network model.
4. The hairstyle identification method according to claim 3, characterized in that when the hairstyle images in the same-hairstyle group and the different-hairstyle group are respectively trained on, each hairstyle image is a corrected hairstyle image.
5. The hairstyle identification method according to claim 4, characterized in that the hairstyle image is corrected by the following method:
performing face recognition on the hairstyle image, obtaining the face in the hairstyle image, and locating the key points of the face;
correcting the hairstyle image using the positions of the key points.
6. The hairstyle identification method according to claim 5, characterized in that correcting the hairstyle image using the positions of the key points specifically comprises:
rotating the hairstyle image using the key points, and/or deforming the hairstyle image using the key points.
7. The hairstyle identification method according to claim 5, characterized in that the key points are one or any combination of: the eyes, the eyebrows, and the mouth.
8. The hairstyle identification method according to claim 5, characterized in that after performing face recognition on the hairstyle image, the method further comprises: on the hairstyle image, extending the recognized face region by a preset ratio to serve as an identification region;
and in respectively training on the hairstyle images in the same-hairstyle group and the different-hairstyle group, respectively training on the identification regions of the hairstyle images in the same-hairstyle group and the different-hairstyle group.
9. The hairstyle identification method according to claim 2, characterized in that the image to be identified is a corrected image to be identified.
10. A hairstyle identification apparatus, characterized by comprising:
a presetting module for presetting an image library and N hairstyles, wherein the image library contains images corresponding to the N hairstyles, each hairstyle corresponds to at least 2 images, and N is a natural number;
a comparing module for comparing an image to be identified one by one with each image in the image library to respectively obtain the similarity between the image to be identified and each image;
an identification module for determining the hairstyle of the image to be identified using each of the similarities.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610743694.7A CN106372652A (en) | 2016-08-28 | 2016-08-28 | Hair style identification method and hair style identification apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106372652A true CN106372652A (en) | 2017-02-01 |
Family
ID=57903186
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610743694.7A Pending CN106372652A (en) | 2016-08-28 | 2016-08-28 | Hair style identification method and hair style identification apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106372652A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102368300A (en) * | 2011-09-07 | 2012-03-07 | 常州蓝城信息科技有限公司 | Target population various characteristics extraction method based on complex environment |
CN104239898A (en) * | 2014-09-05 | 2014-12-24 | 浙江捷尚视觉科技股份有限公司 | Method for carrying out fast vehicle comparison and vehicle type recognition at tollgate |
CN105574543A (en) * | 2015-12-16 | 2016-05-11 | 武汉烽火众智数字技术有限责任公司 | Vehicle brand and model identifying method and system based on deep learning |
US20160189252A1 (en) * | 2012-02-22 | 2016-06-30 | Paypal, Inc. | User identification and personalization based on automotive identifiers |
Non-Patent Citations (2)
Title |
---|
SERGEY ZAGORUYKO et al.: "Learning to Compare Image Patches via Convolutional Neural Networks", 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) * |
CHEN Chuanbo et al.: "Digital Image Processing", 31 July 2004, Beijing: China Machine Press * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109117760A (en) * | 2018-07-27 | 2019-01-01 | 北京旷视科技有限公司 | Image processing method, device, electronic equipment and computer-readable medium |
CN109117760B (en) * | 2018-07-27 | 2021-01-22 | 北京旷视科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN109271846A (en) * | 2018-08-01 | 2019-01-25 | 深圳云天励飞技术有限公司 | Personal identification method, apparatus and storage medium |
WO2020063527A1 (en) * | 2018-09-30 | 2020-04-02 | 叠境数字科技(上海)有限公司 | Human hairstyle generation method based on multi-feature retrieval and deformation |
KR20200070409A (en) * | 2018-09-30 | 2020-06-17 | 플렉스-브이알 디지털 테크놀로지 (상하이) 씨오., 엘티디. | Human hairstyle creation method based on multiple feature search and transformation |
GB2581758A (en) * | 2018-09-30 | 2020-08-26 | Plex-Vr Digital Tech (Shanghai) Coltd | Human hair style generation method based on multi-feature search and deformation |
KR102154470B1 (en) * | 2018-09-30 | 2020-09-09 | 플렉스-브이알 디지털 테크놀로지 (상하이) 씨오., 엘티디. | 3D Human Hairstyle Generation Method Based on Multiple Feature Search and Transformation |
US10891511B1 (en) | 2018-09-30 | 2021-01-12 | Plex-Vr Digital Technology (Shanghai) Co., Ltd. | Human hairstyle generation method based on multi-feature retrieval and deformation |
GB2581758B (en) * | 2018-09-30 | 2021-04-14 | Plex Vr Digital Tech Shanghai Co Ltd | Human hair style generation method based on multi-feature search and deformation |
CN109408940A (en) * | 2018-10-18 | 2019-03-01 | 杭州数为科技有限公司 | A kind of identification of hair style and restoring method, apparatus and system |
CN110033448A (en) * | 2019-04-15 | 2019-07-19 | 中国医学科学院皮肤病医院 | A kind of male bald Hamilton classification prediction analysis method of AI auxiliary of AGA clinical image |
CN110033448B (en) * | 2019-04-15 | 2021-05-18 | 中国医学科学院皮肤病医院 | AI-assisted male baldness Hamilton grading prediction analysis method for AGA clinical image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106372652A (en) | Hair style identification method and hair style identification apparatus | |
CN109284733B (en) | Shopping guide negative behavior monitoring method based on yolo and multitask convolutional neural network | |
KR102486699B1 (en) | Method and apparatus for recognizing and verifying image, and method and apparatus for learning image recognizing and verifying | |
US10373244B2 (en) | System and method for virtual clothes fitting based on video augmented reality in mobile phone | |
US9349076B1 (en) | Template-based target object detection in an image | |
US9069948B2 (en) | Methods, systems, and media for measuring quality of gesture-based passwords | |
WO2018028546A1 (en) | Key point positioning method, terminal, and computer storage medium | |
CN105303179A (en) | Fingerprint identification method and fingerprint identification device | |
US11244157B2 (en) | Image detection method, apparatus, device and storage medium | |
CN110222780A (en) | Object detecting method, device, equipment and storage medium | |
US20200257885A1 (en) | High speed reference point independent database filtering for fingerprint identification | |
CN102272774B (en) | Method, apparatus and computer program product for providing face pose estimation | |
US9558389B2 (en) | Reliable fingertip and palm detection | |
US20150262370A1 (en) | Image processing device, image processing method, and image processing program | |
US11468296B2 (en) | Relative position encoding based networks for action recognition | |
CN110968734A (en) | Pedestrian re-identification method and device based on depth measurement learning | |
CN104881657B (en) | Side face recognition methods, side face construction method and system | |
KR20180107988A (en) | Apparatus and methdo for detecting object of image | |
WO2022170896A1 (en) | Key point detection method and system, intelligent terminal, and storage medium | |
US11164327B2 (en) | Estimation of human orientation in images using depth information from a depth camera | |
CN106295620A (en) | Hair style recognition methods and hair style identification device | |
CN110502961A (en) | A kind of facial image detection method and device | |
CN106355247B (en) | Data processing method and device, chip and electronic equipment | |
CN110633630B (en) | Behavior identification method and device and terminal equipment | |
CN111507289A (en) | Video matching method, computer device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170201 |