CN105809096A - Figure labeling method and terminal - Google Patents
- Publication number
- CN105809096A CN105809096A CN201410851972.1A CN201410851972A CN105809096A CN 105809096 A CN105809096 A CN 105809096A CN 201410851972 A CN201410851972 A CN 201410851972A CN 105809096 A CN105809096 A CN 105809096A
- Authority
- CN
- China
- Prior art keywords
- person
- picture
- label
- labeled
- person features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a person labeling method. The method acquires the person features of labeled persons in labeled person pictures through skin color filtering combined with face detection; according to the acquired person features of the labeled persons, the corresponding persons are identified by person feature similarity and cluster-labeled in unlabeled person pictures. The invention further provides a terminal. The person labeling method achieves high labeling accuracy and efficiency.
Description
Technical field
The present invention relates to the field of communications, and in particular to a person labeling method and a terminal.
Background technology
With the rapid development of science and technology, face detection has come to be widely applied. For the labeling of faces, however, and especially for labeling persons, one must usually either rely on manual labeling picture by picture, or extract person information from the textual context of a face picture. The former is time-consuming, laborious and inefficient; the accuracy of the latter depends heavily on the context and often falls short of the desired effect. How to design a person labeling method with high accuracy and high labeling efficiency is therefore a problem demanding a prompt solution.
Summary of the invention
The main object of the present invention is to provide a person labeling method and a terminal, intended to solve the problems of low accuracy and low efficiency in person labeling.
To achieve the above object, the present invention provides a person labeling method, the person labeling method comprising:
acquiring, through skin color filtering combined with face detection, the person features of the labeled persons in labeled person pictures;
identifying, according to the acquired person features of the labeled persons, the corresponding persons by person feature similarity, so as to cluster-label the corresponding persons in unlabeled person pictures.
Preferably, before the step of acquiring, through skin color filtering combined with face detection, the person features of the labeled persons in the labeled person pictures, the method includes:
obtaining person pictures by retrieving local files, and gathering all the person pictures to form a person picture set.
Preferably, the step of acquiring, through skin color filtering combined with face detection, the person features of the labeled persons in the labeled person pictures includes:
identifying, through skin color filtering combined with face detection, the manually labeled persons in the person pictures;
acquiring the person features of the manually labeled persons according to the identified manually labeled persons.
Preferably, after the step of identifying, according to the acquired person features of the labeled persons, the corresponding persons by person feature similarity so as to cluster-label the corresponding persons in the unlabeled person pictures, the method includes:
iteratively labeling and/or recommendation-sorting the corresponding persons in the person pictures still unlabeled after the cluster labeling.
Preferably, after the step of identifying, according to the acquired person features of the labeled persons, the corresponding persons by person feature similarity so as to cluster-label the corresponding persons in the unlabeled person pictures, the method includes:
displaying the cluster-labeled person pictures by source picture display, embedded display and/or thumbnail display.
To solve the above technical problem, the present invention further provides a terminal, the terminal comprising:
an acquisition module, configured to acquire, through skin color filtering combined with face detection, the person features of the labeled persons in labeled person pictures;
a labeling module, configured to identify, according to the acquired person features of the labeled persons, the corresponding persons by person feature similarity, so as to cluster-label the corresponding persons in unlabeled person pictures.
Preferably, the terminal further comprises:
a forming module, configured to obtain person pictures by retrieving local files, and to gather all the person pictures to form a person picture set.
Preferably, the acquisition module comprises:
a recognition unit, configured to identify, through skin color filtering combined with face detection, the manually labeled persons in the person pictures;
an acquiring unit, configured to acquire the person features of the manually labeled persons according to the identified manually labeled persons.
Preferably, the terminal further comprises:
a sorting module, configured to iteratively label and/or recommendation-sort the corresponding persons in the person pictures still unlabeled after the cluster labeling.
Preferably, the terminal further comprises:
a display module, configured to display the cluster-labeled person pictures by source picture display, embedded display and/or thumbnail display.
In the person labeling method provided by the present invention, the person features of the labeled persons in labeled person pictures are acquired through skin color filtering combined with face detection; according to the acquired person features of the labeled persons, the corresponding persons are identified by person feature similarity and cluster-labeled in the unlabeled person pictures. The labeling accuracy and efficiency of the present invention are thus high.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a first embodiment of the person labeling method of the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of the person labeling method of the present invention;
Fig. 3 is a detailed schematic flowchart of the step in Fig. 1 of acquiring, through skin color filtering combined with face detection, the person features of the labeled persons in the labeled person pictures;
Fig. 4 is a schematic flowchart of a third embodiment of the person labeling method of the present invention;
Fig. 5 is a schematic flowchart of a fourth embodiment of the person labeling method of the present invention;
Fig. 6 is a functional block diagram of a first embodiment of the terminal of the present invention;
Fig. 7 is a functional block diagram of a second embodiment of the terminal of the present invention;
Fig. 8 is a functional block diagram of the acquisition module in Fig. 6;
Fig. 9 is a functional block diagram of a third embodiment of the terminal of the present invention;
Fig. 10 is a functional block diagram of a fourth embodiment of the terminal of the present invention.
The realization of the objects, functional characteristics and advantages of the present invention will be further described with reference to the embodiments and the accompanying drawings.
Detailed description of the invention
It should be appreciated that the specific embodiments described herein are only intended to explain the present invention, not to limit it.
The present invention provides a person labeling method. Referring to Fig. 1, in a first embodiment the person labeling method includes:
Step S100: acquiring, through skin color filtering combined with face detection, the person features of the labeled persons in labeled person pictures.
In this embodiment, the labeled persons in the labeled person pictures may be labeled manually, labeled automatically by the terminal, or labeled automatically for as-yet-unlabeled faces using the results of several earlier rounds of manual labeling. Manual labeling may cover several pictures at once, and labels may be manually revised or removed; automatic labeling may likewise cover a single person or several persons, in a single picture or in several pictures at once. In this embodiment, face information is used to detect and recognize the person features of the person pictures, combining skin color filtering with face detection technology. Face detection is realized on the framework of the V-J (Viola-Jones) face detector. A traditional V-J face detector scans a large number of detection windows, extracts Haar features, and then runs the Adaboost algorithm to rapidly filter out non-face boxes. In this embodiment, based on the prior knowledge that a face necessarily contains a large proportion of skin-colored area, skin color detection is first performed on each candidate face box; the skin color detection result quickly and effectively helps decide whether the box contains a face, and thus serves as a preceding filter stage that discards most regions. Concretely, the skin color likelihood of each pixel is learned from a large number of training pictures, and the average skin color likelihood of a detection box is computed; if it exceeds the average skin color likelihood of the whole image, the box is passed to the next-stage classifier, otherwise it is filtered out. All pictures under the current local path are then traversed and detected, the face data are saved, and a data unit is created for each face. This data unit contains the face's person information, serial number information, feature vector information, source information, coordinates and pose information. At creation time the person information is empty and is filled in by subsequent labeling operations; the source information, i.e. the path of the face's source picture, is a read value; the serial number is a design value that increases sequentially; the coordinates are the result of face detection; and the feature vector and pose information are computed values.
The final feature vector has 200 dimensions: 100 dimensions are LBP (Local Binary Pattern) features after dimensionality reduction, and 100 dimensions are HOG (Histogram of Oriented Gradients) features after dimensionality reduction. First, the SDM algorithm is used for feature point localization, locating five feature points: the left and right eyes, the left and right mouth corners, and the nose. The position of the mouth center can be computed from the coordinates of the two mouth corners; the mouth center together with the two eye points serves as the alignment standard, and an affine transform maps these three points to fixed positions in a 100*100 image. LBP and HOG features are extracted from this 100*100 face image; pre-trained PCA and LDA dimensionality reduction matrices reduce the LBP and HOG features to 100 dimensions each, each part is norm-normalized, and the two are concatenated into a 200-dimensional feature vector. From the five well-localized facial feature points and the three-dimensional coordinates of the corresponding points in a generic three-dimensional face model, the corresponding rotation mapping matrix can be solved in reverse; applying this rotation mapping matrix to a standard frontal face frame (with equal side lengths and no rotation) yields a three-dimensional face frame matching the face's pose, and perspective projection then produces a face frame with an evident 3D visual effect.
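As a concrete illustration of the skin color pre-filter stage described above, the sketch below keeps a candidate detection box only when its average skin color likelihood exceeds that of the whole image. It is a minimal toy version: the per-pixel `skin_likelihood` stands in for a model learned from training pictures, and the names and scoring rule are our own illustrative assumptions rather than the patent's implementation.

```python
def skin_likelihood(pixel):
    """Toy per-pixel skin score: high for warm (R > G > B) pixels."""
    r, g, b = pixel
    return max(0.0, min(1.0, (r - b) / 255.0)) if r > g > b else 0.0

def mean_likelihood(pixels):
    scores = [skin_likelihood(p) for p in pixels]
    return sum(scores) / len(scores)

def prefilter_boxes(image, boxes):
    """Return only the boxes whose mean skin likelihood beats the image mean.

    image: 2-D list of (r, g, b) tuples; boxes: (x, y, w, h) candidates.
    Surviving boxes would be forwarded to the cascade classifier stage.
    """
    flat = [p for row in image for p in row]
    image_mean = mean_likelihood(flat)
    kept = []
    for (x, y, w, h) in boxes:
        patch = [image[j][i] for j in range(y, y + h) for i in range(x, x + w)]
        if mean_likelihood(patch) > image_mean:
            kept.append((x, y, w, h))
    return kept
```

A skin-toned patch passes the pre-filter while a uniformly blue patch is discarded before any expensive classification is attempted, which is the point of the pre-filter stage.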
Step S200: identifying, according to the acquired person features of the labeled persons, the corresponding persons by person feature similarity, so as to cluster-label the corresponding persons in unlabeled person pictures.
In this embodiment, the corresponding persons are identified by person feature similarity, and according to the identified persons, cluster labeling is applied to the corresponding persons in the unlabeled person pictures. Cluster labeling uses K-means clustering: a value k is preset, all persons are clustered into k classes, and the cluster center of each class is displayed with a prompt for labeling. Preset person information can be read in for selecting a label: the terminal maintains a file of preset persons, which is loaded at this point so that a person provided by the file can be selected as the label; direct input of a label is also allowed. After the manual labeling of the k cluster centers is finished, automatic labeling is triggered: for each cluster, the similarity between every picture in the set and the cluster center picture is computed, and pictures whose similarity exceeds a threshold receive the same person label as the center. The remaining unlabeled pictures are clustered again by K-means and the above labeling operations are repeated, until the loop is abandoned or the labeling of all person pictures is complete.
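The cluster-then-propagate scheme just described can be sketched as follows, under stated assumptions: a naive K-means over face feature vectors, manual labels assigned to the cluster centers, and label propagation to members whose similarity to their center exceeds a threshold. The names (`kmeans`, `propagate_labels`), the cosine-based assignment and the arithmetic-mean centroid update are illustrative choices, not the patent's exact implementation; vectors are assumed nonzero.

```python
import math

def cosine(a, b):
    """Cosine similarity between two nonzero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def kmeans(vectors, k, iters=10):
    """Naive K-means: assign by cosine similarity, update by column mean."""
    centres = vectors[:k]  # naive initialisation from the first k vectors
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            best = max(range(k), key=lambda i: cosine(v, centres[i]))
            clusters[best].append(v)
        centres = [
            [sum(col) / len(c) for col in zip(*c)] if c else centres[i]
            for i, c in enumerate(clusters)
        ]
    return centres, clusters

def propagate_labels(clusters, centres, centre_labels, threshold=0.9):
    """Give each member its (manually labelled) centre's label if similar enough."""
    labelled = {}
    for centre, label, members in zip(centres, centre_labels, clusters):
        for v in members:
            if cosine(v, centre) > threshold:
                labelled[tuple(v)] = label
    return labelled
```

In the real flow, members left unlabeled by `propagate_labels` would be re-clustered and the prompt-and-propagate round repeated.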
In the person labeling method provided by this embodiment, the person features of the labeled persons in labeled person pictures are acquired through skin color filtering combined with face detection; according to the acquired person features, the corresponding persons are identified by person feature similarity and cluster-labeled in the unlabeled person pictures, so that both accuracy and labeling efficiency are high.
As shown in Fig. 2, which is a schematic flowchart of a second embodiment of the person labeling method of the present invention, on the basis of the first embodiment the method includes, before step S100:
Step S100A: obtaining person pictures by retrieving local files, and gathering all the person pictures to form a person picture set.
The terminal retrieves local files under a specified local path containing pictures, obtains the picture files, performs face detection on them to obtain person pictures, and gathers all the person pictures into a person picture set, in which they are saved.
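The collection step above can be sketched roughly as follows: walk a local directory tree, keep files with image extensions, and run face detection to decide which are person pictures. `detect_faces` is a hypothetical stub standing in for the face detector described in the first embodiment.

```python
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def detect_faces(path):
    """Stub: the real system would run the face detector on the file here."""
    return ["face"]  # pretend every image file contains one face

def collect_person_pictures(root):
    """Walk `root` and return sorted paths of image files containing faces."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in IMAGE_EXTS:
                path = os.path.join(dirpath, name)
                if detect_faces(path):  # keep only pictures with faces
                    found.append(path)
    return sorted(found)
```

The returned list plays the role of the person picture set that later labeling stages operate on.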
In the person labeling method provided by this embodiment, face detection and picture gathering form a person picture set, which facilitates quick labeling and greatly improves labeling efficiency.
As shown in Fig. 3, which is a detailed schematic flowchart of step S100 in Fig. 1, step S100 includes:
Step S110: identifying, through skin color filtering combined with face detection, the manually labeled persons in the person pictures.
According to the touch action of the user, the terminal converts the touch signal into a labeling signal, and thereby identifies, through skin color filtering combined with face detection, the manually labeled persons in the person pictures.
Step S120: acquiring the person features of the manually labeled persons according to the identified manually labeled persons.
According to the identified manually labeled persons, the terminal acquires the features of the manually labeled persons, such as eye contours and facial morphological features, by means such as face recognition.
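As one concrete example of a facial texture feature of the kind mentioned here (the first embodiment uses LBP features), the textbook 3x3 LBP operator can be written as below. This is only the basic operator on a grayscale pixel grid, not the patent's exact variant or its PCA dimensionality reduction.

```python
def lbp_code(image, x, y):
    """8-bit LBP code for pixel (x, y) of a 2-D grayscale grid.

    Each of the eight neighbours (clockwise from top-left) contributes a
    1-bit when its intensity is >= the centre pixel's intensity.
    """
    c = image[y][x]
    neighbours = [
        image[y - 1][x - 1], image[y - 1][x], image[y - 1][x + 1],
        image[y][x + 1], image[y + 1][x + 1], image[y + 1][x],
        image[y + 1][x - 1], image[y][x - 1],
    ]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code
```

A histogram of such codes over an aligned face crop would give the raw LBP descriptor that a dimensionality reduction step then compresses.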
As shown in Fig. 4, which is a schematic flowchart of a third embodiment of the person labeling method of the present invention, on the basis of the first embodiment the method provided by the third embodiment includes, after step S200:
Step S300: iteratively labeling and/or recommendation-sorting the corresponding persons in the person pictures still unlabeled after the cluster labeling.
After the cluster labeling phase ends, the terminal can start iterative labeling. One or more pictures of the same person are chosen; once the labeling of that person is completed, the terminal's recommendation sorting is triggered: according to the currently labeled person, the unlabeled face pictures are ranked with respect to the current person and displayed in descending order of similarity, so that further manual labeling can pick faces from the front of the list and save lookup time. During input, the labeling operation offers option recommendation: the recommended options are the similarity ranking results between the currently selected face to be labeled and the different persons. Concretely, the currently selected face picture is compared with all labeled pictures, the comparison results are ranked by similarity, and the persons corresponding to the top results are offered as candidate persons. The first iteration round is then triggered: the similarity between the pictures of the labeled person and all unlabeled person pictures is computed, and unlabeled pictures whose similarity exceeds the threshold are labeled automatically. After a round of labeling completes, the user may continue with manual labeling or further iteration; since each round may automatically label some pictures, the input of the next iteration changes, and further pictures may be labeled in later rounds. Pictures that still cannot be labeled automatically after several iterations can be labeled manually, and iterative labeling and recommendation sorting are thus applied alternately and repeatedly until the person labeling of all face pictures is complete. The similarity used above is cosine similarity, i.e. the cosine of the angle θ between two feature vectors: cos θ = (a · b) / (‖a‖ ‖b‖); when the feature vectors are norm-normalized, cosine similarity is proportional to the inner product. Recommendation sorting can be described in detail as follows. Let O be the set of all face pictures, A the set of pictures already labeled as the current person, B the set of unlabeled pictures, and C the set of pictures labeled as other persons. For a picture X in B, compute its similarity to every picture in A and take the maximum as PX, the similarity of X to A. Then compute the similarity of X to every picture in C; if some similarity PC exceeds PX, picture X is considered closer to a person other than A, so PX is reduced by 1. After all pictures in B have been processed, sort them by PX in descending order; only the pictures whose PX was not reduced by 1 appear at the front. This is the recommendation result.
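The recommendation sorting just described (the sets A, B and C, the maximum similarity PX, and the subtract-1 penalty) can be sketched as follows; the function and variable names are ours, and the toy vectors in the test are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity of two nonzero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(unlabelled, current_person, other_people):
    """Rank unlabelled pictures for the current person, best first.

    unlabelled: dict name -> feature vector (the set B);
    current_person: vectors labelled as the current person (the set A);
    other_people: vectors labelled as other persons (the set C).
    """
    ranked = []
    for name, x in unlabelled.items():
        px = max(cosine(x, a) for a in current_person)
        if other_people and max(cosine(x, c) for c in other_people) > px:
            px -= 1.0  # X looks more like somebody else: push it to the back
        ranked.append((name, px))
    ranked.sort(key=lambda t: -t[1])
    return [name for name, _ in ranked]
```

Because the penalty subtracts a full 1 from a similarity bounded by 1, every penalized picture necessarily ranks behind every unpenalized one, which is exactly the behavior the text requires.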
In the person labeling method provided by this embodiment, the corresponding persons in the person pictures still unlabeled after the cluster labeling are iteratively labeled and/or recommendation-sorted, thereby improving labeling efficiency.
As shown in Fig. 5, which is a schematic flowchart of a fourth embodiment of the person labeling method of the present invention, on the basis of the first embodiment the method provided by the fourth embodiment includes, after step S200:
Step S400: displaying the cluster-labeled person pictures by source picture display, embedded display and/or thumbnail display.
According to the identified persons and their person features, the terminal stores the person pictures of the whole picture set by category, for example by placing person pictures of the same person under the same folder, counts the person pictures in each category, and displays them by category. The display may be by source picture, by embedding face information, or by thumbnail. When face information is embedded in the display, the detected faces are highlighted using the pose information, and the person's name is drawn on the picture using FreeType font rendering of Chinese text. The pictures are grouped by labeled person, and thumbnails of all pictures containing the corresponding person are displayed; a group may contain both single-person and multi-person pictures, the embedded face information highlights the chosen person, and the thumbnail display provides a function for canceling a label.
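A minimal sketch of the grouping behind this display, assuming a simple list of (picture, person) annotations: pictures are bucketed by person label, and a per-person count is kept for the thumbnail view. The data layout is purely illustrative.

```python
from collections import defaultdict

def group_by_person(annotations):
    """Bucket pictures by person label.

    annotations: list of (picture_path, person_name) pairs.
    Returns (groups, counts): person -> list of pictures, person -> count.
    """
    groups = defaultdict(list)
    for path, person in annotations:
        groups[person].append(path)
    # counts shown next to each person's thumbnail strip
    counts = {person: len(paths) for person, paths in groups.items()}
    return dict(groups), counts
```

A multi-person picture would simply appear as one annotation pair per person it contains, so it lands in several groups at once, matching the text above.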
The person labeling method provided by this embodiment displays the pictures by source picture display, embedded display and/or thumbnail display, improving labeling efficiency.
As shown in Fig. 6, which is a functional block diagram of a first embodiment of the terminal of the present invention, in the first embodiment the terminal includes:
An acquisition module 10, configured to acquire, through skin color filtering combined with face detection, the person features of the labeled persons in labeled person pictures;
A labeling module 20, configured to identify, according to the acquired person features of the labeled persons, the corresponding persons by person feature similarity, so as to cluster-label the corresponding persons in unlabeled person pictures.
In this embodiment, the labeled persons in the labeled person pictures may be labeled manually, labeled automatically by the terminal, or labeled automatically for as-yet-unlabeled faces using the results of several earlier rounds of manual labeling. Manual labeling may cover several pictures at once, and labels may be manually revised or removed; automatic labeling may likewise cover a single person or several persons, in a single picture or in several pictures at once. In this embodiment, the acquisition module 10 of the terminal uses face information to detect and recognize the person features of the person pictures, combining skin color filtering with face detection. Face detection is realized on the framework of the V-J (Viola-Jones) face detector. A traditional V-J face detector scans a large number of detection windows, extracts Haar features, and then runs the Adaboost algorithm to rapidly filter out non-face boxes. In this embodiment, based on the prior knowledge that a face necessarily contains a large proportion of skin-colored area, the acquisition module 10 first performs skin color detection on each candidate face box; the skin color detection result quickly and effectively helps decide whether the box contains a face, and thus serves as a preceding filter stage that discards most regions. Concretely, the skin color likelihood of each pixel is learned from a large number of training pictures, and the average skin color likelihood of a detection box is computed; if it exceeds the average skin color likelihood of the whole image, the box is passed to the next-stage classifier, otherwise it is filtered out. All pictures under the current local path are then traversed and detected, the face data are saved, and a data unit is created for each face, containing the face's person information, serial number information, feature vector information, source information, coordinates and pose information. At creation time the person information is empty and is filled in by subsequent labeling operations; the source information, i.e. the path of the face's source picture, is a read value; the serial number is a design value that increases sequentially; the coordinates are the result of face detection; and the feature vector and pose information are computed values. The final feature vector has 200 dimensions: 100 dimensions are LBP (Local Binary Pattern) features after dimensionality reduction, and 100 dimensions are HOG (Histogram of Oriented Gradients) features after dimensionality reduction. The SDM algorithm is first used for feature point localization, locating five feature points: the left and right eyes, the left and right mouth corners, and the nose. The position of the mouth center can be computed from the coordinates of the two mouth corners; the mouth center together with the two eye points serves as the alignment standard, and an affine transform maps these three points to fixed positions in a 100*100 image. LBP and HOG features are extracted from this 100*100 face image; pre-trained PCA and LDA dimensionality reduction matrices reduce the LBP and HOG features to 100 dimensions each, each part is norm-normalized, and the two are concatenated into a 200-dimensional feature vector. From the five well-localized facial feature points and the three-dimensional coordinates of the corresponding points in a generic three-dimensional face model, the corresponding rotation mapping matrix can be solved in reverse; applying this rotation mapping matrix to a standard frontal face frame (with equal side lengths and no rotation) yields a three-dimensional face frame matching the face's pose, and perspective projection then produces a face frame with an evident 3D visual effect.
In this embodiment, the labeling module 20 of the terminal identifies the corresponding persons by person feature similarity and, according to the identified persons, applies cluster labeling to the corresponding persons in the unlabeled person pictures. Cluster labeling uses K-means clustering: a value k is preset, all persons are clustered into k classes, and the cluster center of each class is displayed with a prompt for labeling. Preset person information can be read in for selecting a label: the terminal maintains a file of preset persons, which is loaded at this point so that a person provided by the file can be selected as the label; direct input of a label is also allowed. After the manual labeling of the k cluster centers is finished, automatic labeling is triggered: for each cluster, the similarity between every picture in the set and the cluster center picture is computed, and pictures whose similarity exceeds a threshold receive the same person label as the center. The remaining unlabeled pictures are clustered again by K-means and the above labeling operations are repeated, until the loop is abandoned or the labeling of all person pictures is complete.
In the terminal provided by this embodiment, the person features of the labeled persons in labeled person pictures are acquired through skin color filtering combined with face detection; according to the acquired person features, the corresponding persons are identified by person feature similarity and cluster-labeled in the unlabeled person pictures, so that both accuracy and labeling efficiency are high.
As shown in Fig. 7, which is a functional block diagram of a second embodiment of the terminal of the present invention, on the basis of the first embodiment the terminal further includes:
A forming module 30, configured to obtain person pictures by retrieving local files, and to gather all the person pictures to form a person picture set.
The forming module 30 of the terminal retrieves local files under a specified local path containing pictures, obtains the picture files, performs face detection on them to obtain person pictures, and gathers all the person pictures into a person picture set, in which they are saved.
The terminal provided by this embodiment forms a person picture set through face detection and picture gathering, which facilitates quick labeling and greatly improves labeling efficiency.
As shown in Figure 8, which is the functional block diagram of the acquisition module of Fig. 6, the acquisition module 10 includes:
a recognition unit 11, configured to identify, by combining color filtering and face detection, the manually labeled person in a person picture;
an acquiring unit 12, configured to obtain, according to the identified manually labeled person, the person features of the manually labeled person.
The recognition unit 11 of the terminal converts the touch signal into a selection signal according to the user's touch action of picking a person, and thereby identifies the manually labeled person in the person picture by combining color filtering and face detection.
The acquiring unit 12 of the terminal obtains, according to the identified manually labeled person, the features of the manually labeled person by means such as face recognition, for example eye contours and facial morphological features.
As shown in Fig. 9, which is a schematic block diagram of the third embodiment of the terminal of the present invention, on the basis of the first embodiment the terminal further includes:
an ordering module 30, configured to perform iterative labeling and/or recommendation sorting on the corresponding persons in person pictures that remain unlabeled after cluster labeling.
After the cluster-labeling stage ends, the ordering module 30 of the terminal can start iterative labeling. One or more pictures of the same person are chosen; after the labeling of this person is completed, the terminal's recommendation sorting is triggered: according to the currently labeled person, the unlabeled face pictures are ranked with respect to the current person and displayed in descending order of similarity, which facilitates further manual labeling — the user labels the top-ranked faces and saves lookup time. During input, the labeling operation module offers recommended options: the recommended items are the persons obtained by ranking the similarity between the currently selected face to be labeled and the different already-labeled persons. Concretely, the currently selected face picture is compared with all labeled pictures, the comparison results are sorted by similarity, and the persons corresponding to the top results are offered as candidate persons. A first round of iteration is then triggered: the similarity between the pictures of the labeled person and all unlabeled person pictures is computed, and unlabeled person pictures whose similarity exceeds a threshold are labeled automatically. When a round of labeling finishes, the user may continue with manual labeling or further iteration; because the previous round may have auto-labeled some pictures, the input to the next iteration changes, and further pictures may be labeled. Pictures that still cannot be auto-labeled after several iterations can be labeled manually as a supplement; iterative labeling and recommendation sorting are thus applied repeatedly and in alternation until the person labeling of all face pictures is complete.
The similarity used above is cosine similarity, i.e. the cosine of the angle θ between two feature vectors a and b: cos θ = (a·b)/(‖a‖·‖b‖). When the feature vectors are normalized to unit length, the cosine similarity of two vectors is proportional to their inner product.
A detailed implementation of recommendation sorting can be described as follows. Let the set of all face pictures be O, the pictures already labeled as the current person be A, the unlabeled pictures be B, and the pictures labeled as other persons be C. For a picture X in B, compute its similarity to every picture in A and take the maximum as PX, the similarity of X to A. Then compute the similarity of X to every picture in C; if there exists a similarity PC greater than PX, picture X is considered closer to a person other than the current one, so PX is reduced by 1. After all pictures in B have been processed, they are sorted by PX in descending order, so that only the pictures whose PX was not reduced appear at the front. This ordering is the recommendation result.
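The recommendation-sorting rule described above (sets A, B, C; maximum similarity PX; the demote-by-1 penalty) can be sketched directly. This is a minimal sketch under the stated definitions; feature vectors are plain Python lists and all names are illustrative.

```python
# Minimal sketch of recommendation sorting: rank unlabeled pictures B by
# their maximum cosine similarity PX to the current-person set A, demoting
# by 1 any picture that matches some other labeled person in C better.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(A, B, C):
    scored = []
    for x in B:
        px = max(cosine(x, a) for a in A)                # PX: similarity of X to A
        pc = max((cosine(x, c) for c in C), default=-1.0)
        if pc > px:
            px -= 1.0                                    # closer to another person
        scored.append((px, x))
    scored.sort(key=lambda t: t[0], reverse=True)        # descending PX
    return [x for _, x in scored]
```

The subtraction of 1 works because cosine similarity lies in [-1, 1], so any demoted picture necessarily sorts below every non-demoted one.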
The terminal provided in this embodiment performs iterative labeling and/or recommendation sorting on the corresponding persons in person pictures that remain unlabeled after cluster labeling, thereby improving labeling efficiency.
As shown in Figure 10, which is a schematic block diagram of the fourth embodiment of the terminal of the present invention, on the basis of the first embodiment the terminal further includes:
a display module 40, configured to display the cluster-labeled person pictures by source-image display, embedded display and/or thumbnail display.
The display module 40 of the terminal classifies and stores the person pictures of all picture sets according to the identified person features — for example, person pictures of the same person are placed under the same folder — counts the number of person pictures in each class after classification, and displays them by class. The display mode can be source-image display, embedded display of face information, or thumbnail display. In embedded display of face information, the detected faces are highlighted using pose information, and the person's name is also rendered on the picture using the FreeType font library to display Chinese text. The pictures are grouped by labeled person, and thumbnails of all pictures containing the corresponding person are displayed; a group may contain both single-person and multi-person pictures. Face information is embedded in the display to highlight the chosen person, and the thumbnail display provides a function to cancel a label.
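The classify-and-count step above can be sketched as a simple grouping. This is an illustrative sketch only; the `(picture_id, person_label)` pair format and all names are hypothetical.

```python
# Illustrative sketch: group pictures by their person label and count each
# group, as the display module does before rendering thumbnails per person.
from collections import defaultdict

def group_by_person(labeled_pictures):
    """labeled_pictures: iterable of (picture_id, person_label) pairs."""
    groups = defaultdict(list)
    for pic, person in labeled_pictures:
        groups[person].append(pic)
    counts = {person: len(pics) for person, pics in groups.items()}
    return dict(groups), counts
```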
The terminal provided in this embodiment displays the cluster-labeled person pictures by source-image display, embedded display and/or thumbnail display, improving labeling efficiency.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of the claims of the present invention; any equivalent structural or equivalent flow transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, likewise falls within the patent protection scope of the present invention.
Claims (10)
1. A person labeling method, characterized in that the person labeling method comprises:
obtaining, by combining color filtering and face detection, the person features of labeled persons in labeled person pictures;
identifying, according to the obtained person features of the labeled persons, the corresponding persons by person-feature similarity, so as to perform cluster labeling on the corresponding persons in unlabeled person pictures.
2. The person labeling method according to claim 1, characterized in that, before the step of obtaining, by combining color filtering and face detection, the person features of labeled persons in labeled person pictures, the method comprises:
obtaining person pictures by retrieving local files; and gathering all person pictures to form a person picture set.
3. The person labeling method according to claim 2, characterized in that the step of obtaining, by combining color filtering and face detection, the person features of labeled persons in labeled person pictures comprises:
identifying, by combining color filtering and face detection, the manually labeled person in a person picture;
obtaining, according to the identified manually labeled person, the person features of the manually labeled person.
4. The person labeling method according to claim 1, characterized in that, after the step of identifying, according to the obtained person features of the labeled persons, the corresponding persons by person-feature similarity so as to perform cluster labeling on the corresponding persons in unlabeled person pictures, the method comprises:
performing iterative labeling and/or recommendation sorting on the corresponding persons in person pictures that remain unlabeled after cluster labeling.
5. The person labeling method according to any one of claims 1 to 4, characterized in that, after the step of identifying, according to the obtained person features of the labeled persons, the corresponding persons by person-feature similarity so as to perform cluster labeling on the corresponding persons in unlabeled person pictures, the method comprises:
displaying the cluster-labeled person pictures by source-image display, embedded display and/or thumbnail display.
6. A terminal, characterized in that the terminal comprises:
an acquisition module, configured to obtain, by combining color filtering and face detection, the person features of labeled persons in labeled person pictures;
a labeling module, configured to identify, according to the obtained person features of the labeled persons, the corresponding persons by person-feature similarity, so as to perform cluster labeling on the corresponding persons in unlabeled person pictures.
7. The terminal according to claim 6, characterized in that the terminal further comprises:
a forming module, configured to obtain person pictures by retrieving local files, and to gather all person pictures to form a person picture set.
8. The terminal according to claim 6, characterized in that the acquisition module comprises:
a recognition unit, configured to identify, by combining color filtering and face detection, the manually labeled person in a person picture;
an acquiring unit, configured to obtain, according to the identified manually labeled person, the person features of the manually labeled person.
9. The terminal according to any one of claims 6 to 8, characterized in that the terminal further comprises:
an ordering module, configured to perform iterative labeling and/or recommendation sorting on the corresponding persons in person pictures that remain unlabeled after cluster labeling.
10. The terminal according to any one of claims 6 to 8, characterized in that the terminal further comprises:
a display module, configured to display the cluster-labeled person pictures by source-image display, embedded display and/or thumbnail display.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410851972.1A CN105809096A (en) | 2014-12-31 | 2014-12-31 | Figure labeling method and terminal |
PCT/CN2015/073337 WO2016106966A1 (en) | 2014-12-31 | 2015-02-27 | Character labelling method, terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410851972.1A CN105809096A (en) | 2014-12-31 | 2014-12-31 | Figure labeling method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105809096A true CN105809096A (en) | 2016-07-27 |
Family
ID=56284052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410851972.1A Withdrawn CN105809096A (en) | 2014-12-31 | 2014-12-31 | Figure labeling method and terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105809096A (en) |
WO (1) | WO2016106966A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111381743B (en) * | 2018-12-29 | 2022-07-12 | 深圳光启高等理工研究院 | Data marking method, computer device and computer readable storage medium |
CN112863493A (en) * | 2021-01-14 | 2021-05-28 | 北京天行汇通信息技术有限公司 | Voice data labeling method and device and electronic equipment |
CN113657173B (en) * | 2021-07-20 | 2024-05-24 | 北京搜狗科技发展有限公司 | Data processing method and device for data processing |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101196994A (en) * | 2007-12-26 | 2008-06-11 | 腾讯科技(深圳)有限公司 | Image content recognizing method and recognition system |
CN101795400A (en) * | 2010-03-16 | 2010-08-04 | 上海复控华龙微系统技术有限公司 | Method for actively tracking and monitoring infants and realization system thereof |
CN103473275A (en) * | 2013-08-23 | 2013-12-25 | 中山大学 | Automatic image labeling method and automatic image labeling system by means of multi-feature fusion |
CN104133875A (en) * | 2014-07-24 | 2014-11-05 | 北京中视广信科技有限公司 | Face-based video labeling method and face-based video retrieving method |
CN104217008A (en) * | 2014-09-17 | 2014-12-17 | 中国科学院自动化研究所 | Interactive type labeling method and system for Internet figure video |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101344922B (en) * | 2008-08-27 | 2011-11-02 | 华为技术有限公司 | Human face detection method and device |
CN103839076B (en) * | 2014-02-25 | 2017-05-10 | 中国科学院自动化研究所 | Network sensitive image identification method based on light characteristics |
- 2014-12-31: CN CN201410851972.1A patent/CN105809096A/en not_active Withdrawn
- 2015-02-27: WO PCT/CN2015/073337 patent/WO2016106966A1/en active Application Filing
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107391703A (en) * | 2017-07-28 | 2017-11-24 | 北京理工大学 | The method for building up and system of image library, image library and image classification method |
CN107391703B (en) * | 2017-07-28 | 2019-11-15 | 北京理工大学 | The method for building up and system of image library, image library and image classification method |
CN107657269A (en) * | 2017-08-24 | 2018-02-02 | 百度在线网络技术(北京)有限公司 | A kind of method and apparatus for being used to train picture purification model |
CN108229384A (en) * | 2017-12-29 | 2018-06-29 | 广州图语信息科技有限公司 | Face clustering method and device using continuity structure and user terminal |
CN109658572A (en) * | 2018-12-21 | 2019-04-19 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
US11410001B2 (en) | 2018-12-21 | 2022-08-09 | Shanghai Sensetime Intelligent Technology Co., Ltd | Method and apparatus for object authentication using images, electronic device, and storage medium |
CN112766296A (en) * | 2019-11-06 | 2021-05-07 | 济南信通达电气科技有限公司 | Power transmission line potential safety hazard target detection model training method and device |
CN112766296B (en) * | 2019-11-06 | 2023-04-07 | 济南信通达电气科技有限公司 | Power transmission line potential safety hazard target detection model training method and device |
CN111353059A (en) * | 2020-03-02 | 2020-06-30 | 腾讯科技(深圳)有限公司 | Picture processing method and device, computer-readable storage medium and electronic device |
CN111695628A (en) * | 2020-06-11 | 2020-09-22 | 北京百度网讯科技有限公司 | Key point marking method and device, electronic equipment and storage medium |
CN111695628B (en) * | 2020-06-11 | 2023-05-05 | 北京百度网讯科技有限公司 | Key point labeling method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2016106966A1 (en) | 2016-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105809096A (en) | Figure labeling method and terminal | |
Balntas et al. | Pose guided RGBD feature learning for 3D object pose estimation | |
Shahab et al. | ICDAR 2011 robust reading competition challenge 2: Reading text in scene images | |
CN105631039B (en) | A kind of picture browsing method | |
CN103839084B (en) | Multi-kernel support vector machine multi-instance learning algorithm applied to pedestrian re-identification | |
CN101477696B (en) | Human character cartoon image generating method and apparatus | |
CN109299639B (en) | Method and device for facial expression recognition | |
US20210089827A1 (en) | Feature representation device, feature representation method, and program | |
WO2019061658A1 (en) | Method and device for positioning eyeglass, and storage medium | |
CN104200228B (en) | Recognizing method and system for safety belt | |
CN107729875A (en) | Three-dimensional face identification method and device | |
CN103824052A (en) | Multilevel semantic feature-based face feature extraction method and recognition method | |
JP6410450B2 (en) | Object identification device, object identification method, and program | |
WO2009117607A1 (en) | Methods, systems, and media for automatically classifying face images | |
Van Gemert | Exploiting photographic style for category-level image classification by generalizing the spatial pyramid | |
CN105224929A (en) | A kind of method of searching human face photo | |
CN103971131A (en) | Preset facial expression recognition method and device | |
CN105095475B (en) | Imperfect attribute based on two-graded fusion marks pedestrian recognition methods and system again | |
JP2010108494A (en) | Method and system for determining characteristic of face within image | |
CN111178195A (en) | Facial expression recognition method and device and computer readable storage medium | |
CN110415212A (en) | Abnormal cell detection method, device and computer readable storage medium | |
CN106203539A (en) | The method and apparatus identifying container number | |
Winn et al. | Object class recognition at a glance | |
CN114239754B (en) | Pedestrian attribute identification method and system based on attribute feature learning decoupling | |
CN111429409A (en) | Method and system for identifying glasses worn by person in image and storage medium thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 2016-07-27