CN101685464A - Method for automatically labeling images based on community potential subject excavation - Google Patents


Info

Publication number
CN101685464A
CN101685464A · CN200910099916A
Authority
CN
China
Prior art keywords
image
community
label
candidate
marked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200910099916A
Other languages
Chinese (zh)
Other versions
CN101685464B (en)
Inventor
吴飞
邵健
庄越挺
陈烨
朱科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2009100999166A priority Critical patent/CN101685464B/en
Publication of CN101685464A publication Critical patent/CN101685464A/en
Application granted granted Critical
Publication of CN101685464B publication Critical patent/CN101685464B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method for automatically annotating images based on mining the latent topics of communities, which comprises the following steps: 1) using a latent Dirichlet allocation model to mine the latent topics of a single community; 2) after obtaining the probability distribution of image tags over latent topics by analyzing the community's latent topics, deleting the community image tags whose probability under the latent topics is smaller than a set value k, thereby "de-noising" the community image tags; 3) generating candidate annotation tags for the image to be annotated by propagating tags from similar images; 4) optimizing the candidate annotation tags according to the correlation between the candidate tags and the image's latent topics; and 5) obtaining the final annotation result of the image by fusing the information of multiple communities. The method makes full use of the information of the different communities an image belongs to in a social sharing network and of those communities' latent topics; compared with conventional annotation methods, it produces more accurate results.

Description

Method for automatic image annotation based on community latent topic mining
Technical field
The present invention relates to the field of automatic image annotation, and in particular to a method for automatically annotating images in a social sharing network.
Background technology
With the rapid development of the network and multimedia technology, the number of images on the Internet has grown explosively. According to statistics, by 2008 Google's index of Web pages had reached one trillion, a large share of which is image data. In recent years, sharing networks have attracted particular attention from Internet users: Flickr, a public website for sharing and annotating digital images, indexes more than 3 billion images and grows by millions every month.
The image tags manually added by Internet users to Flickr images bring great convenience to the efficient management and retrieval of images. However, an in-depth analysis of the results of manual annotation on Flickr shows that 64% of the images carry three tags or fewer. How to automatically add tags to the large number of images with no or insufficient tags, or to improve their existing tags, is therefore a hot issue in current research.
Unlike ordinary images, images shared on the Internet have the following characteristics:
The quality of shared-network images is uneven: they are shot by different users with different cameras, at different times, from different angles, and with different photographic techniques;
Shared-network images are rich in content: the tag entries of Flickr images exceed 130 million, covering more than 60 million concepts and including all kinds of scenery, buildings, personal portraits, event clips, objects, and so on;
The semantics of shared-network images are complex: a single image often contains several different topics at once; for example, one image may contain topics such as "Sky" and "Clouds" while also containing topics such as "Water" and "River".
Because shared-network images have these characteristics, it is difficult to annotate them effectively with traditional algorithms. An in-depth analysis of the shared images on Flickr reveals a notable feature: after a user uploads images into an album by time, place, or event, the images can further be recommended to corresponding communities according to their topics. A community on Flickr is a collection of images sharing a particular topic; when a user uploads an image that does not match the community's topic, the administrator can delete the unrelated image, which guarantees the topical consistency of a community's images. Therefore, the topic information of the communities an image belongs to can be used to annotate the image. Moreover, since a community's topic can be further subdivided into several sub-topics, the community's latent topics can be mined and then combined with image visual similarity to obtain a more refined annotation result.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by providing a method for automatic image annotation based on community latent topic mining.
The method for automatic image annotation based on community latent topic mining comprises the following steps:
1) using a latent Dirichlet allocation model to mine the latent topics of a single community;
2) after obtaining the probability distribution of image tags over latent topics by analyzing the community's latent topics, deleting the community image tags whose probability under the latent topics is smaller than a set value k, thereby "de-noising" the community image tags;
3) generating candidate annotation tags for the image to be annotated by propagating tags from similar images;
4) optimizing the candidate annotation tags according to the correlation between the candidate tags and the image's latent topics;
5) obtaining the final annotation result of the image by fusing the information of multiple communities.
The step of generating candidate annotation tags for the image to be annotated by propagating tags from similar images is as follows: for an image I_u to be annotated in a community, the probability between I_u and an image tag w is computed as
P(w | I_u) = Σ_J P(w | J) · P(I_u | J)
where P(w|J) denotes the number of occurrences of image tag w in training image J, and P(I_u|J) denotes the visual similarity between I_u and training image J. The tags of the 10 training images J with the highest visual similarity to I_u are taken as candidates, and the 10 tags w with the largest values of P(w|I_u) are taken as the candidate annotation tags of I_u.
The step of optimizing the candidate annotation tags according to the correlation between the candidate tags and the image's latent topics is as follows:
1) the latent-topic similarity between two candidate tags w_k and w_l is obtained by summing, over all latent topics, the product of their probabilities:
P(w_k | w_l) = Σ_z φ_{w_k,z} · φ_{w_l,z}
where φ denotes the probability distribution of image tags over latent topics;
2) the correlation between a candidate tag w_i and the latent topics of the image I_u to be annotated is obtained by summing the latent-topic similarities between w_i and the other candidate tags:
R(w_i, I_u) = Σ_{j≠i} P(w_j | w_i)
where P(w_j|w_i) denotes the latent-topic similarity between candidate tags w_j and w_i;
3) the probability between candidate tag w_i and image I_u is recomputed as P'(w_i | I_u) = P(w_i | I_u) × R(w_i, I_u), where P(w_i|I_u) denotes the probability between I_u and tag w_i, and R(w_i, I_u) denotes the correlation between candidate tag w_i and the latent topics of I_u.
The step of producing the final annotation of the image by fusing the information of multiple communities is as follows:
1) the most frequent image tag appearing in each community's title is chosen to represent the community's topic; the node representing the community is then located in the WordNet "entity" semantic tree through this tag, forming the hierarchy (HD) among the communities;
2) following the hierarchy from top to bottom, the communities are merged in turn: for communities sharing a common ancestor node, a new parent node is obtained by averaging the annotation information of the child-node communities, and the child nodes are deleted, thereby achieving the fusion;
3) the top 5 candidate annotation tags by value are chosen as the final annotation result of the image to be annotated.
The invention makes full use of the information of the different communities an image belongs to, and uses the latent topic information of those communities to "de-noise" and optimize the annotation tags; the annotation is therefore more accurate and more extensive than that of traditional annotation methods.
Description of drawings
Fig. 1 is the flow chart of the method for automatic image annotation based on community latent topic mining.
Fig. 2 shows an automatic image annotation result of the invention.
Fig. 3 illustrates the latent Dirichlet allocation model.
Detailed description
The method for automatic image annotation based on community latent topic mining comprises the following steps:
1) using a latent Dirichlet allocation model to mine the latent topics of a single community;
2) after obtaining the probability distribution of image tags over latent topics by analyzing the community's latent topics, deleting the community image tags whose probability under the latent topics is smaller than a set value k, thereby "de-noising" the community image tags;
3) generating candidate annotation tags for the image to be annotated by propagating tags from similar images;
4) optimizing the candidate annotation tags according to the correlation between the candidate tags and the image's latent topics;
5) obtaining the final annotation result of the image by fusing the information of multiple communities.
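The patent gives no code; as one illustrative reading of the "de-noising" filter of step 2, the sketch below keeps only the community tags whose probability under at least one latent topic reaches the threshold k. The function name `denoise_tags`, the data layout, and all values are assumptions for illustration, not taken from the patent.

```python
# Hypothetical sketch of step 2: drop community tags whose tag-topic
# probability (from the LDA phi matrix) stays below the set value k.

def denoise_tags(phi, vocab, k=0.01):
    """phi: tag -> list of P(tag | topic) over the community's latent topics."""
    return [tag for tag in vocab if max(phi.get(tag, [0.0])) >= k]

phi = {
    "sky":    [0.30, 0.01],
    "clouds": [0.25, 0.02],
    "nikon":  [0.002, 0.003],  # a noise tag weakly tied to every topic
}
print(denoise_tags(phi, ["sky", "clouds", "nikon"], k=0.01))  # → ['sky', 'clouds']
```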
The step of mining the latent topics of a single community with the latent Dirichlet allocation model is as follows:
1) The latent Dirichlet allocation (LDA) model is commonly used for topic analysis of text. In the LDA model (see Fig. 3), the relations among an image (document) d, a latent topic z, and an image tag w are mainly determined by the hidden variables θ and φ, where θ denotes the topic distribution of image d and φ_z denotes the tag distribution of topic z; α and β are the prior parameters of θ and φ and obey Dirichlet distributions. T is the total number of community topics, D is the total number of community images, and N_d is the total number of tags of each image;
2) Since directly computing the probabilities among the latent topics z, the images d, and the image tags w of an image set is complicated, Gibbs sampling is usually adopted to simplify the LDA computation. For the i-th image-tag token, let w_i be the tag index of the token and d_i the index of its image. Gibbs sampling considers each image-tag token in turn and, from the number of times the other tokens are assigned to each topic, estimates which topic the current token should be assigned to. In this process the topics are resampled cyclically with the conditional probability:
P(z_i = j | z_{-i}, w_i, d_i, ·) ∝ (C^{WT}_{w_i,j} + β) / (Σ_{w=1}^{W} C^{WT}_{w,j} + Wβ) × (C^{DT}_{d_i,j} + α) / (Σ_{t=1}^{T} C^{DT}_{d_i,t} + Tα)    (1)
wherein z_i = j denotes that topic j is assigned to token i, z_{-i} denotes the topic assignments of all image-tag tokens other than token i, and "·" denotes all other known information, such as the other tag indices w_{-i}, the image indices d_{-i}, and the priors α and β. C^{WT} and C^{DT} are count matrices of size W×T and D×T respectively: C^{WT}_{w,j} denotes the number of times image tag w is assigned to topic j, and C^{DT}_{d,j} denotes the number of times the tags inside image d are assigned to topic j (not counting the current token i);
3) In each Gibbs sampling pass, every image tag in the image set is assigned to some topic. After the Gibbs sampling has iterated enough times, the topic probabilities approach the prior Dirichlet distribution. When the sampling finishes, the desired tag-topic distribution φ and topic-image distribution θ are obtained as:
φ^{(j)}_i = (C^{WT}_{i,j} + β) / (Σ_{k=1}^{W} C^{WT}_{k,j} + Wβ),  θ^{(d)}_j = (C^{DT}_{d,j} + α) / (Σ_{k=1}^{T} C^{DT}_{d,k} + Tα)    (2)
wherein C^{WT}_{i,j} denotes the number of times image tag w_i is assigned to topic j, C^{DT}_{d,j} denotes the number of times the tags inside image d are assigned to topic j, W is the number of image tags, T is the number of topics, and α and β are the priors.
The step of generating candidate annotation tags for the image to be annotated by propagating tags from similar images is as follows: for an image I_u to be annotated in a community, the probability between I_u and an image tag w is computed as
P(w | I_u) = Σ_J P(w | J) · P(I_u | J)
where P(w|J) denotes the number of occurrences of image tag w in training image J, and P(I_u|J) denotes the visual similarity between I_u and training image J. The tags of the 10 training images J with the highest visual similarity to I_u are taken as candidates, and the 10 tags w with the largest values of P(w|I_u) are taken as the candidate annotation tags of I_u.
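A minimal sketch of this tag-propagation step, assuming the similarity scores P(I_u|J) have already been computed by some visual-feature comparison (the patent does not fix one here). The names `candidate_tags`, the data, and the per-image tag lists are illustrative assumptions.

```python
# Sketch of step 3: propagate tags from the visually most similar
# training images and score each tag by sum_J P(w|J) * P(I_u|J).
from collections import Counter

def candidate_tags(similarities, train_tags, top_imgs=10, top_tags=10):
    """similarities: image_id -> P(I_u|J); train_tags: image_id -> tag list."""
    nearest = sorted(similarities, key=similarities.get, reverse=True)[:top_imgs]
    scores = Counter()
    for J in nearest:
        counts = Counter(train_tags[J])       # P(w|J): occurrence count in J
        for w, c in counts.items():
            scores[w] += c * similarities[J]  # accumulate P(w|J) * P(I_u|J)
    return [w for w, _ in scores.most_common(top_tags)]

sims = {"J1": 0.9, "J2": 0.7, "J3": 0.1}
tags = {"J1": ["river", "water"], "J2": ["water", "sky"], "J3": ["car"]}
print(candidate_tags(sims, tags, top_imgs=2, top_tags=3))  # → ['water', 'river', 'sky']
```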
The step of optimizing the candidate annotation tags according to the correlation between the candidate tags and the image's latent topics is as follows:
1) the latent-topic similarity between two candidate tags w_k and w_l is obtained by summing, over all latent topics, the product of their probabilities:
P(w_k | w_l) = Σ_z φ_{w_k,z} · φ_{w_l,z}
where φ denotes the probability distribution of image tags over latent topics;
2) the correlation between a candidate tag w_i and the latent topics of the image I_u to be annotated is obtained by summing the latent-topic similarities between w_i and the other candidate tags:
R(w_i, I_u) = Σ_{j≠i} P(w_j | w_i)
where P(w_j|w_i) denotes the latent-topic similarity between candidate tags w_j and w_i;
3) the probability between candidate tag w_i and image I_u is recomputed as P'(w_i | I_u) = P(w_i | I_u) × R(w_i, I_u), where P(w_i|I_u) denotes the probability between I_u and tag w_i, and R(w_i, I_u) denotes the correlation between candidate tag w_i and the latent topics of I_u.
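The three sub-steps above can be sketched as a single reranking function. The φ rows and probability values below are made-up illustrations, not data from the patent.

```python
# Sketch of step 4: rerank candidate tags by their latent-topic correlation
# with the other candidates, P'(w_i|I_u) = P(w_i|I_u) * R(w_i, I_u).

def rerank(cands, P, phi):
    """cands: candidate tags; P: tag -> P(w|I_u); phi: tag -> topic probs."""
    def topic_sim(a, b):  # P(w_a|w_b): product of probs, summed over topics
        return sum(pa * pb for pa, pb in zip(phi[a], phi[b]))
    new = {}
    for wi in cands:
        R = sum(topic_sim(wj, wi) for wj in cands if wj != wi)  # R(w_i, I_u)
        new[wi] = P[wi] * R
    return sorted(cands, key=new.get, reverse=True)

phi = {"water": [0.6, 0.0], "river": [0.5, 0.1], "courthouse": [0.0, 0.4]}
P = {"water": 0.5, "river": 0.4, "courthouse": 0.45}
print(rerank(["water", "river", "courthouse"], P, phi))
```

Note how "courthouse" sinks despite its fair initial probability: it shares little latent-topic mass with the other candidates, which is exactly the effect the optimization step aims for.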
The step of producing the final annotation of the image by fusing the information of multiple communities is as follows:
1) the most frequent image tag appearing in each community's title is chosen to represent the community's topic; the node representing the community is then located in the WordNet "entity" semantic tree through this tag, forming the hierarchy (HD) among the communities;
2) following the hierarchy from top to bottom, the communities are merged in turn: for communities sharing a common ancestor node, a new parent node is obtained by averaging the annotation information of the child-node communities, and the child nodes are deleted, thereby achieving the fusion;
3) the top 5 candidate annotation tags by value are chosen as the final annotation result of the image to be annotated.
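As an illustrative simplification of the fusion step, the sketch below averages per-community tag scores and keeps the top five. The WordNet-hierarchy bottom-up merging is abstracted here into one flat average over all communities; that collapse, the function name, and the scores are assumptions, not the patent's full procedure.

```python
# Simplified sketch of step 5: fuse per-community candidate scores by
# averaging, then keep the top five tags as the final annotation.
from collections import defaultdict

def fuse(community_scores, top=5):
    """community_scores: list of dicts tag -> P'(w|I_u), one per community."""
    total = defaultdict(float)
    for scores in community_scores:
        for w, s in scores.items():
            total[w] += s / len(community_scores)  # average over communities
    return sorted(total, key=total.get, reverse=True)[:top]

c1 = {"river": 0.9, "water": 0.8, "courthouse": 0.2}
c2 = {"sky": 0.9, "clouds": 0.8, "blue": 0.6}
c3 = {"water": 0.7, "sky": 0.6, "landscape": 0.5}
print(fuse([c1, c2, c3]))
```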
The invention makes full use of the information of the different communities an image belongs to in a social sharing network, and uses the latent topic information of those communities to "de-noise" and optimize the annotation tags; the annotation result is therefore more accurate and more extensive than that produced by traditional annotation methods.
As shown in Fig. 1, the method for automatic image annotation based on community latent topic mining proceeds as follows:
1) for an image to be annotated, find the N different communities the image belongs to;
2) mine the latent topics of each community with the latent Dirichlet allocation model;
3) "de-noise" the community tags according to the correlation between the community tags and the community's latent topics;
4) generate candidate annotation tags for the image by propagating tags from similar images;
5) optimize the candidate tags according to the correlation between the candidate tags and the image's latent topics;
6) annotate the image by fusing the information of the multiple communities;
7) obtain the final annotation result of the image to be annotated.
Embodiment 1
Fig. 2 gives a concrete example of automatic image annotation based on community latent topic mining.
1) Choose an image to be annotated and find the 3 different communities it belongs to: community 1 "Water, Oceans, Lakes, Rivers, Creeks", community 2 "Sky & Clouds", community 3 "Beautiful Scenery";
2) Mine the latent topics of the 3 communities with the latent Dirichlet allocation model;
3) "De-noise" the tags of the 3 communities according to the correlation between the community tags and the communities' latent topics;
4) Generate the candidate annotation tags of the image by propagating tags from similar images: "river san water antonio bexar county courthouse blue clouds sea";
5) Optimize the candidate tags according to the correlation between the candidates and the image's latent topics, obtaining "river san water bexar blue courthouse antonio county clouds sea";
6) Fuse the information of 2 communities to obtain the candidate tags "clouds river san sky water bexar blue courthouse antonio county"; fuse the information of all 3 communities to obtain the candidate tags "sky blue clouds river water san landscape bexar courthouse mountains";
7) Choose the top 5 candidate tags by value to obtain the final annotation result of the image: "sky blue clouds river water".
As the example above shows, unlike traditional image annotation methods, the invention makes full use of the information of the different communities an image belongs to in a social sharing network and uses the communities' latent topic information to "de-noise" and optimize the annotation tags; the annotation result is therefore more accurate and more extensive than that produced by traditional methods.

Claims (4)

1. A method for automatic image annotation based on community latent topic mining, characterized by comprising the steps of:
1) using a latent Dirichlet allocation model to mine the latent topics of a single community;
2) after obtaining the probability distribution of image tags over latent topics by analyzing the community's latent topics, deleting the community image tags whose probability under the latent topics is smaller than a set value k, thereby "de-noising" the community image tags;
3) generating candidate annotation tags for the image to be annotated by propagating tags from similar images;
4) optimizing the candidate annotation tags according to the correlation between the candidate tags and the image's latent topics;
5) obtaining the final annotation result of the image by fusing the information of multiple communities.
2. the method for a kind of automatic image annotation that excavates based on the potential theme of community according to claim 1, it is characterized in that described image candidate by similar image label propagation generation image to be marked marks the step of label: for image I to be marked in the community u, image I to be marked uAnd the probability between the image tag w calculates from following formula:
Figure A2009100999160002C1
Wherein P (w|J) represents image tag w occurrence number among the training image J, P (I u| J) represent image I to be marked uAnd the visual similarity between the training image J, choose and image I to be marked uThe highest pairing image tag w of 10 width of cloth training image J of visual similarity is as image I to be marked uThe candidate mark label, i.e. P (w|I u) be worth 10 maximum image tag w as image I to be marked uThe image candidate mark label.
3. the method for a kind of automatic image annotation that excavates based on the potential theme of community according to claim 1, it is characterized in that correlativity marks the step that label is optimized to the image candidate between the described implicit theme that marks label and image according to the image candidate:
1) marks label w by calculating in all implicit themes that two image candidates mark probability product between the label and obtaining the image candidate kAnd w lBetween implicit topic similarity, computing formula is: The probability distribution of φ presentation video label and implicit theme wherein;
2) marking label and other image candidate by the computed image candidate marks implicit topic relativity sum between the label and obtains the image candidate and mark label w iWith image I to be marked uThe correlativity of implicit theme, computing formula is: P (w wherein j| w i) the presentation video candidate marks label w jAnd w iBetween implicit topic similarity;
3) recomputate the image candidate and mark label w iWith image I to be marked uProbability, computing formula is: P ' (w i| I u)=P (w i| I u) * R (w i, I u), P (w|I wherein u) represent image I to be marked uWith image tag w iBetween probability, R (w i, I u) the presentation video candidate marks label w iWith image I to be marked uThe correlativity of implicit theme.
4. the method for a kind of automatic image annotation that excavates based on the potential theme of community according to claim 1 is characterized in that described step of image being carried out final mark by many community information fusion:
1) by being chosen at the theme that the most frequent image tag is represented community appears in the community from the title of each community, just in WordNet " entity " semantic tree, find the node of representing this community by this image tag then, constitute the HD between each community;
2) by the HD between each community, each community is carried out final mark by merging to image from top to bottom successively, for obtaining a new father node by the markup information of each child node community is averaged between each community that contains common ancestor's node, delete child node, reach the purpose of fusion;
3) mark the final annotation results that preceding 5 values of label obtain image to be marked by choosing the image candidate.
CN2009100999166A 2009-06-18 2009-06-18 Method for automatically labeling images based on community potential subject excavation Expired - Fee Related CN101685464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100999166A CN101685464B (en) 2009-06-18 2009-06-18 Method for automatically labeling images based on community potential subject excavation


Publications (2)

Publication Number Publication Date
CN101685464A 2010-03-31
CN101685464B 2011-08-24

Family

ID=42048628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100999166A Expired - Fee Related CN101685464B (en) 2009-06-18 2009-06-18 Method for automatically labeling images based on community potential subject excavation

Country Status (1)

Country Link
CN (1) CN101685464B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496146A (en) * 2011-11-28 2012-06-13 南京大学 Image segmentation method based on visual symbiosis
CN102760149A (en) * 2012-04-05 2012-10-31 中国人民解放军国防科学技术大学 Automatic annotating method for subjects of open source software
CN102902821A (en) * 2012-11-01 2013-01-30 北京邮电大学 Methods for labeling and searching advanced semantics of imagse based on network hot topics and device
CN103065157A (en) * 2012-12-24 2013-04-24 南京邮电大学 Image labeling method based on activation diffusion theory
CN103631890A (en) * 2013-11-15 2014-03-12 北京奇虎科技有限公司 Method and device for mining image principal information
CN103678335A (en) * 2012-09-05 2014-03-26 阿里巴巴集团控股有限公司 Method and device for identifying commodity with labels and method for commodity navigation
CN103714178A (en) * 2014-01-08 2014-04-09 北京京东尚科信息技术有限公司 Automatic image marking method based on word correlation
CN103927309A (en) * 2013-01-14 2014-07-16 阿里巴巴集团控股有限公司 Method and device for marking information labels for business objects
WO2015165230A1 (en) * 2014-04-28 2015-11-05 华为技术有限公司 Social contact message monitoring method and device
CN106021365A (en) * 2016-05-11 2016-10-12 上海迪目信息科技有限公司 High-dimension spatial point covering hypersphere video sequence annotation system and method
CN109145936A (en) * 2018-06-20 2019-01-04 北京达佳互联信息技术有限公司 A kind of model optimization method and device
CN113408633A (en) * 2021-06-29 2021-09-17 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for outputting information

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN100514332C (en) * 2006-06-01 2009-07-15 上海杰图软件技术有限公司 Method for annotating electronic map through photograph collection having position information
CN100401302C (en) * 2006-09-14 2008-07-09 浙江大学 Image meaning automatic marking method based on marking significance sequence


Also Published As

Publication number Publication date
CN101685464B (en) 2011-08-24


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110824

Termination date: 20140618

EXPY Termination of patent right or utility model