CN103530403B - A structured image description method - Google Patents

A structured image description method

Info

Publication number
CN103530403B
CN103530403B
Authority
CN
China
Prior art keywords
image
subclass
class
training
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310504488.7A
Other languages
Chinese (zh)
Other versions
CN103530403A (en)
Inventor
韦星星
韩亚洪
操晓春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201310504488.7A priority Critical patent/CN103530403B/en
Publication of CN103530403A publication Critical patent/CN103530403A/en
Application granted granted Critical
Publication of CN103530403B publication Critical patent/CN103530403B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention belongs to the field of image retrieval technology and in particular relates to a structured image description method, comprising: obtaining the training images and constructing a 3-layer tree label for each object in the images to form a training set; extracting the low-level features of each object of the images in the training set and training the classifiers corresponding to all candidate classes, subclasses and attributes, forming the intermediate data required for the next modeling step; constructing a conditional random field (CRF) model and training it to obtain the model parameters; for an image to be described, first performing image segmentation to obtain the objects contained in the image, and then extracting the low-level features of each object; finally, using the constructed conditional random field model (CRF) and the trained model parameters, predicting the tree label of each object in the image with the max-product belief propagation algorithm. The present invention improves the discrimination between images and produces better retrieval results.

Description

A structured image description method
Technical field
The invention belongs to the field of image retrieval technology and in particular relates to a structured image description method.
Background art
Describing an image with richer semantic information is important both for understanding the image and for retrieving it from the Web. On the one hand, when facing a new image, people first want to know which class the objects in the image belong to (for example, whether an object is an animal or a vehicle); after obtaining the class information, they further want to know which subclass it belongs to (a bird, or a feline). In addition, every kind of object has its own particular attributes, such as whether it has feathers, whether it can fly, or whether it eats meat. With this information, people can understand an image more accurately from several angles and at the same time learn more about the objects it contains. On the other hand, in the field of image retrieval, because a computer represents an image with low-level features, the retrieval results it returns often cannot match the user's retrieval intention well. To overcome this "semantic gap" in image retrieval, richer and more accurate semantic information is likewise needed to describe an image.
Various image description methods have emerged. Some use a single label to state, for example, whether the object in an image is an animal or a plant; although this specifies the category of the object, the information it conveys is very limited. To overcome this shortcoming, a tag library was built and several related labels were chosen from it to describe an object in an image, but a tag library is always limited and cannot cover all objects in nature. Attribute-based image description methods were therefore proposed; they describe an image with the attribute information of its objects, such as whether they can fly or whether they have feathers. Their advantage is that even for an image that has never been seen before, some basic attribute information can still be used to describe it, thereby providing some perceptual knowledge of the image. Whatever angle these image description methods take and whatever information they use, their common goal is to obtain richer semantic information from the image.
Summary of the invention
The object of the present invention is to provide a new structured image description method that describes an image with a 3-layer tree semantic label, so that the description of the image content is richer.
The structured image description method proposed by the present invention comprises the following steps.
Step 1: obtain the training images and construct a 3-layer tree label for each object in the images, forming the training set (a data-structure sketch follows this list):
(1) obtain the training images and build the image set IMG;
(2) use an image segmentation algorithm to segment the objects contained in each image of IMG, forming the object set OBJ;
(3) annotate each object in OBJ; the annotation comprises the class, the subclass and the attributes of the object, forming the class set CLASS, the subclass set SUBCLASS and the attribute set ATTRIBUTE;
(4) according to the annotations, construct for each object in OBJ a 3-layer tree label of the form class-subclass-attributes, forming the tag set Y, whose elements correspond one-to-one with those of OBJ;
(5) CLASS contains many identical elements; scan CLASS sequentially and keep only one copy of each element, forming the candidate class set Cla; perform the same scan on the subclass set SUBCLASS and the attribute set ATTRIBUTE to obtain the candidate subclass set Subcl and the candidate attribute set Attri;
(6) build the training set Tr, which comprises the object set OBJ, the tag set Y, the candidate class set Cla, the candidate subclass set Subcl and the candidate attribute set Attri.
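To make the data organization of this step concrete, the following minimal Python sketch shows one possible in-memory representation of the 3-layer tree label Y_l and of the training set Tr. The class names, field names and the use of dataclasses are illustrative assumptions, not something prescribed by the method itself.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class TreeLabel:
    """3-layer tree label Y_l = (class, subclass, attributes) of one object Obj_l."""
    cls: str                                              # e.g. "animal"
    subcls: str                                           # e.g. "bird"
    attributes: List[str] = field(default_factory=list)   # e.g. ["has_feathers", "can_fly"]

@dataclass
class TrainingSet:
    """Training set Tr: objects OBJ, tag set Y, and the candidate sets Cla, Subcl, Attri."""
    objects: List[Any]        # OBJ: segmented object images, |OBJ| = N_m
    labels: List[TreeLabel]   # Y: one label per object, same order as OBJ
    cla: List[str]            # candidate classes,    |Cla|   = N_w
    subcl: List[str]          # candidate subclasses, |Subcl| = N_v
    attri: List[str]          # candidate attributes, |Attri| = N_u
```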
Step 2: extract the low-level features of each object of the images in the training set and train the classifiers corresponding to all candidate classes, subclasses and attributes, forming the intermediate data required for the next modeling step (a classifier-training sketch follows this list). The steps are as follows:
(1) extract the local binary pattern (LBP) feature of each object in OBJ, forming the feature set X;
(2) from the feature set X and the class set CLASS, train an SVM classifier SVM_w_i for each element w_i of Cla, forming the classifier set S_w corresponding to the classes; likewise, from X and the subclass set SUBCLASS, train an SVM classifier SVM_v_j for each element v_j of Subcl, forming the classifier set S_v corresponding to the subclasses; and from X and the attribute set ATTRIBUTE, train an SVM classifier SVM_u_k for each element u_k of Attri, forming the classifier set S_u corresponding to the attributes;
(3) on the training set Tr, compute the precision-recall (PR) curve of each classifier SVM_u_k in S_u and derive its threshold thre_k from the PR curve, forming the threshold set Threshold corresponding to S_u;
(4) on the training set Tr, compute the co-occurrence probability p_ij of each element w_i of the candidate class set Cla with each element v_j of the candidate subclass set Subcl, i.e. the number of objects whose labels in Y contain both w_i and v_j divided by the total number of objects N_m in OBJ; compute the co-occurrence probability g_jk of each element v_j of the candidate subclass set Subcl with each element u_k of the candidate attribute set Attri, i.e. the number of objects whose labels contain both v_j and u_k divided by N_m; in addition, compute the probability q_jk that a label contains the element v_j of the candidate subclass set but not the element u_k of the candidate attribute set, i.e. the number of objects whose labels contain v_j but not u_k divided by N_m;
(5) build the intermediate data comprising the classifier sets S_w, S_v, S_u, the threshold set Threshold and the statistics p_ij, g_jk, q_jk, for use in the next modeling step.
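The feature extraction, per-label SVM training and threshold selection of this step can be sketched as follows in Python, using scikit-image for the LBP histogram and scikit-learn for the SVMs. The uniform-LBP histogram, the linear kernel, the use of predict_proba scores and the choice of the F1-maximizing point on the PR curve as the threshold thre_k are assumptions made for illustration; the patent only states that the threshold is derived from the PR curve.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.metrics import precision_recall_curve

def lbp_histogram(obj_img, P=8, R=1.0):
    """LBP feature X_l of one segmented RGB object patch: a normalized histogram
    of uniform LBP codes."""
    gray = rgb2gray(obj_img)
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                               # uniform LBP with P points has P+2 codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_label_classifier(X, is_positive):
    """One SVM per candidate label (class w_i, subclass v_j or attribute u_k),
    trained one-vs-rest on the feature set X."""
    clf = SVC(kernel="linear", probability=True)
    clf.fit(X, is_positive)
    return clf

def attribute_threshold(clf, X, is_positive):
    """Derive thre_k from the PR curve of SVM_u_k; here the F1-maximizing
    operating point is used as the threshold."""
    scores = clf.predict_proba(X)[:, 1]
    precision, recall, thresholds = precision_recall_curve(is_positive, scores)
    f1 = 2 * precision[:-1] * recall[:-1] / np.clip(precision[:-1] + recall[:-1], 1e-12, None)
    return thresholds[int(np.argmax(f1))]
```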
Step 3: construct the conditional random field (CRF) model and train it to obtain the model parameters.
Step 4: for an image to be described, first perform image segmentation to obtain the objects contained in the image, then extract the low-level features of each object according to the method of Step 2; finally, using the conditional random field model (CRF) constructed in Step 3 and the trained model parameters, predict the 3-layer tree label of each object in the image with the max-product belief propagation algorithm.
The present invention describes an image with 3-layer tree semantic units. From such a semantic unit the user can obtain not only the class and the concrete subclass of the objects in the image but also the attributes they possess, and thus obtains a richer and more accurate description of the image content. This improves the discrimination between images, helps to bridge the semantic gap in image retrieval, and produces better retrieval results. At the same time the invention provides the user with a more intuitive way of presenting image content, namely organizing the class, subclass and attribute information of an image as a 3-layer tree, which makes the image easier to understand.
Brief description of the drawings
Fig. 1: schematic diagram of the CRF model structure.
Fig. 2: some examples from the training set used by the present invention. The images in the first row are natural images downloaded from the Web, with the objects marked by rectangular boxes; the second row shows the 3-layer tree label corresponding to each object.
Fig. 3: the left image is a natural image in which the object to be described is marked by a rectangular box; the right side shows the 3-layer tree semantic label predicted by the invention.
Fig. 4: the left image is a natural image in which the object to be described is marked by a rectangular box; the right side shows the 3-layer tree semantic label predicted by the invention.
Detailed description of the invention
Two images are chosen as the images to be described, namely the left images of Fig. 3 and Fig. 4; the method described in the present invention is used to predict and output their 3-layer tree semantic labels.
The model parameters of the conditional random field (CRF) must first be obtained by training. The specific steps are as follows.
1. Construct the training set as follows (a candidate-set sketch follows this list):
(1) use a crawler to download the images returned by Google image search, forming the image set IMG = {Image_1, ..., Image_{N_d}}, where N_d is the total number of images in IMG;
(2) use an image segmentation algorithm to segment the objects contained in each image of IMG, forming the object set OBJ = {Obj_1, ..., Obj_{N_m}}, where N_m is the total number of objects in OBJ; because one image may contain several objects, N_m >= N_d;
(3) use the Amazon Mechanical Turk tool to annotate each object Obj_l in OBJ, including its class class_l, its subclass subclass_l and its attributes attr_{l1}, ..., attr_{lp}, where lp is the number of attributes of Obj_l; this forms the class set CLASS = {class_1, ..., class_{N_m}}, the subclass set SUBCLASS = {subclass_1, ..., subclass_{N_m}} and the attribute set ATTRIBUTE = {attr_{11}, ..., attr_{1p}, ..., attr_{l1}, ..., attr_{lp}, ..., attr_{N_m 1}, ..., attr_{N_m p}};
(4) according to the annotations, construct for each object Obj_l in OBJ the 3-layer tree label (class-subclass-attributes) Y_l = {class_l, subclass_l, attr_{l1}, ..., attr_{lp}}, forming the tag set Y = {Y_1, ..., Y_{N_m}} corresponding to OBJ (the elements of the two sets correspond one-to-one);
(5) CLASS contains many identical elements; scan CLASS sequentially and keep only one copy of each element, forming the candidate class set Cla = {w_1, ..., w_{N_w}}, where N_w is the number of distinct elements in Cla; perform the same scan on the subclass set SUBCLASS and the attribute set ATTRIBUTE to obtain the candidate subclass set Subcl = {v_1, ..., v_{N_v}} and the candidate attribute set Attri = {u_1, ..., u_{N_u}}, where N_v and N_u are the numbers of distinct elements in Subcl and Attri, respectively;
(6) build the training set Tr, which comprises the object set OBJ = {Obj_1, ..., Obj_{N_m}}, the tag set Y = {Y_1, ..., Y_{N_m}}, the candidate class set Cla = {w_1, ..., w_{N_w}}, the candidate subclass set Subcl = {v_1, ..., v_{N_v}} and the candidate attribute set Attri = {u_1, ..., u_{N_u}}. Examples from the generated training set are shown in Fig. 2.
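Assuming the Mechanical Turk annotations are available as TreeLabel records (see the earlier sketch), building the candidate sets Cla, Subcl and Attri from the tag set Y reduces to an order-preserving de-duplication; the helper below is a hypothetical illustration, not part of the patent.

```python
from typing import List, Tuple

def build_candidate_sets(labels: List["TreeLabel"]) -> Tuple[List[str], List[str], List[str]]:
    """Scan the tag set Y once and keep the first occurrence of every class, subclass
    and attribute, yielding Cla (size N_w), Subcl (size N_v) and Attri (size N_u)."""
    cla, subcl, attri = [], [], []
    seen_c, seen_s, seen_a = set(), set(), set()
    for y in labels:                      # Y and OBJ are in one-to-one correspondence
        if y.cls not in seen_c:
            seen_c.add(y.cls); cla.append(y.cls)
        if y.subcls not in seen_s:
            seen_s.add(y.subcls); subcl.append(y.subcls)
        for a in y.attributes:
            if a not in seen_a:
                seen_a.add(a); attri.append(a)
    return cla, subcl, attri
```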
2. Process the data as follows (a co-occurrence sketch follows this list):
(1) extract the local binary pattern (LBP) feature X_l of each object Obj_l (l = 1, ..., N_m) in OBJ, forming the feature set X = {X_1, ..., X_{N_m}};
(2) from the feature set X = {X_1, ..., X_{N_m}} and the class set CLASS = {class_1, ..., class_{N_m}}, train an SVM classifier SVM_w_i for each element w_i of Cla, forming the classifier set S_w = {SVM_w_1, ..., SVM_w_{N_w}} corresponding to the classes; likewise, from X and the subclass set SUBCLASS = {subclass_1, ..., subclass_{N_m}}, train an SVM classifier SVM_v_j for each element v_j of Subcl, forming the classifier set S_v = {SVM_v_1, ..., SVM_v_{N_v}} corresponding to the subclasses; and from X and the attribute set ATTRIBUTE, train an SVM classifier SVM_u_k for each element u_k of Attri, forming the classifier set S_u = {SVM_u_1, ..., SVM_u_{N_u}} corresponding to the attributes;
(3) on the training set Tr, compute the precision-recall (PR) curve of each classifier SVM_u_k in S_u and derive its threshold thre_k from the PR curve, forming the threshold set Threshold = {thre_1, ..., thre_{N_u}} corresponding to S_u;
(4) on the training set Tr, compute the co-occurrence probability p_ij (i = 1, ..., N_w, j = 1, ..., N_v) of each element w_i of Cla with each element v_j of Subcl, i.e. the number of objects whose labels in Y contain both w_i and v_j divided by the total number of objects N_m in OBJ; compute the co-occurrence probability g_jk (j = 1, ..., N_v, k = 1, ..., N_u) of each element v_j of Subcl with each element u_k of Attri, i.e. the number of objects whose labels contain both v_j and u_k divided by N_m; in addition, compute the probability q_jk (j = 1, ..., N_v, k = 1, ..., N_u) that a label contains v_j but not u_k, i.e. the number of objects whose labels contain v_j but not u_k divided by N_m;
(5) build the intermediate data comprising the classifier sets S_w = {SVM_w_1, ..., SVM_w_{N_w}}, S_v = {SVM_v_1, ..., SVM_v_{N_v}}, S_u = {SVM_u_1, ..., SVM_u_{N_u}}, the threshold set Threshold = {thre_1, ..., thre_{N_u}} and the statistics p_ij, g_jk, q_jk, for use in the next modeling step.
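The co-occurrence statistics p_ij, g_jk and q_jk of step (4) can be computed in one pass over the tag set Y. The sketch below assumes the TreeLabel representation and the candidate-set lists introduced earlier and returns the three statistics as NumPy matrices.

```python
import numpy as np
from typing import List

def cooccurrence_statistics(labels: List["TreeLabel"],
                            cla: List[str], subcl: List[str], attri: List[str]):
    """Statistics later used as CRF edge potentials:
       p[i, j] = P(class w_i and subclass v_j appear together in a label)
       g[j, k] = P(subclass v_j and attribute u_k appear together)
       q[j, k] = P(subclass v_j appears but attribute u_k does not)
    All counts are divided by the total number of objects N_m."""
    n_m = len(labels)
    p = np.zeros((len(cla), len(subcl)))
    g = np.zeros((len(subcl), len(attri)))
    q = np.zeros((len(subcl), len(attri)))
    c_idx = {w: i for i, w in enumerate(cla)}
    s_idx = {v: j for j, v in enumerate(subcl)}
    a_idx = {u: k for k, u in enumerate(attri)}
    for y in labels:
        i, j = c_idx[y.cls], s_idx[y.subcls]
        p[i, j] += 1
        present = {a_idx[a] for a in y.attributes}
        for k in range(len(attri)):
            if k in present:
                g[j, k] += 1
            else:
                q[j, k] += 1
    return p / n_m, g / n_m, q / n_m
```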
3. Build the CRF model and train its parameters as follows (an energy-function sketch follows this list):
(1) construct the CRF model shown in Fig. 1, where o_1 is the class node, which represents the class of the object in the image and can take the value of any element of the candidate class set Cla (i.e. o_1 ∈ {1, ..., N_w}); o_2 is the subclass node, which represents the subclass of the object and can take the value of any element of the candidate subclass set Subcl (i.e. o_2 ∈ {1, ..., N_v}); o_k (k = 3, ..., m), with m = 2 + N_u, are the attribute nodes, which represent the attributes of the object and take values in {1, 2} (i.e. o_k ∈ {1, 2}), where 1 means the object does not have the attribute and 2 means it does. Thus O_l = {o_1, o_2, o_3, ..., o_m} can be used in place of Y_l = {class_l, subclass_l, attr_{l1}, ..., attr_{lp}} to represent the 3-layer tree label of object Obj_l. The CRF model maximizes the following equation:
P(O_l | X_l) = (1 / Z(X_l)) exp{ -E(X_l, O_l) }    (1)
where X_l is the low-level feature extracted from object Obj_l, Z(X_l) is a normalizing constant called the partition function, and E(X_l, O_l) is called the energy function. Maximizing equation (1) is therefore equivalent to minimizing the energy function, which is defined as the negative weighted sum of the node and edge potentials of the tree:
E(X_l, O_l) = -( Σ_{k=1}^{m} ω_k γ_k + ω_{m+1} ψ(o_1, o_2) + Σ_{k=3}^{m} ω_{m+k-1} ψ(o_2, o_k) )    (2)
where γ_k is called the node potential: γ_1 is defined as the output of the classifier SVM_w_{o_1} corresponding to o_1 on X_l; γ_2 is defined as the output of the classifier SVM_v_{o_2} corresponding to o_2 on X_l; when o_k = 2 (k = 3, ..., m), γ_k is defined as the output of the classifier SVM_u_{k-2} corresponding to o_k on X_l; and when o_k = 1 (k = 3, ..., m), γ_k is defined as the threshold thre_{k-2} of the classifier SVM_u_{k-2} (k - 2 ∈ {1, ..., N_u}). ψ is called the edge potential: ψ(o_1, o_2) = p_{o_1 o_2}; when o_k = 1 (k = 3, ..., m), ψ(o_2, o_k) = q_{o_2, k-2}; and when o_k = 2 (k = 3, ..., m), ψ(o_2, o_k) = g_{o_2, k-2}. ω = {ω_1, ..., ω_{m+n}}, where n = m - 1 is the number of edges of the tree, are the parameters that the CRF model obtains by maximizing equation (1);
(2) use stochastic gradient descent to minimize the energy function E(X_l, O_l) of equation (2) over the training set, thereby obtaining the parameters ω = {ω_1, ..., ω_{m+n}} of the CRF model.
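A minimal sketch of evaluating the energy E(X_l, O_l) of equation (2) for one assignment O_l is given below. It assumes that the classifier outputs on X_l and the thresholds have already been collected into vectors, that the attribute states are recoded from {1, 2} to {0, 1} (0 = absent, 1 = present), and that the first m entries of omega weight the node potentials while the remaining m - 1 entries weight the tree edges; this layout of the edge weights is one plausible reading of ω = {ω_1, ..., ω_{m+n}}.

```python
import numpy as np

def crf_energy(o, scores_w, scores_v, scores_u, thre, p, g, q, omega):
    """Energy E(X_l, O_l) of one assignment O_l = (o_1, ..., o_m) for one object,
    written as the negative weighted sum of node and edge potentials.
    o[0]     : class index o_1 in {0, ..., N_w-1}
    o[1]     : subclass index o_2 in {0, ..., N_v-1}
    o[2:]    : attribute states o_3..o_m in {0, 1}  (0 = absent, 1 = present)
    scores_w : outputs of the class classifiers SVM_w_i on X_l      (N_w,)
    scores_v : outputs of the subclass classifiers SVM_v_j on X_l   (N_v,)
    scores_u : outputs of the attribute classifiers SVM_u_k on X_l  (N_u,)
    thre     : thresholds thre_k of the attribute classifiers       (N_u,)
    p, g, q  : co-occurrence statistics used as edge potentials
    omega    : weights; first m entries for node potentials, last m-1 for edges."""
    n_u = len(scores_u)
    m = 2 + n_u
    # node potentials gamma_1..gamma_m
    gamma = np.empty(m)
    gamma[0] = scores_w[o[0]]
    gamma[1] = scores_v[o[1]]
    for k in range(n_u):
        gamma[2 + k] = scores_u[k] if o[2 + k] == 1 else thre[k]
    # edge potentials: edge (o_1, o_2), then edges (o_2, o_k) for k = 3..m
    psi = np.empty(m - 1)
    psi[0] = p[o[0], o[1]]
    for k in range(n_u):
        psi[1 + k] = g[o[1], k] if o[2 + k] == 1 else q[o[1], k]
    return -(np.dot(omega[:m], gamma) + np.dot(omega[m:], psi))
```

Learning ω then amounts to minimizing this energy over the training objects, for example with stochastic gradient descent as stated in step (2).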
Next, after the CRF model parameters have been obtained by training, the image to be described is predicted as follows (an inference sketch follows this list):
(1) use an image segmentation algorithm to segment the objects Obj^t_1, ..., Obj^t_{N_t} contained in the image Image^t to be described, where N_t is the total number of objects segmented from the image; the red boxes in Fig. 3 and Fig. 4 mark the objects obtained by the image segmentation algorithm;
(2) extract the local binary pattern (LBP) feature X^t_l of each object Obj^t_l in Image^t, l = 1, ..., N_t;
(3) use the model parameters ω = {ω_1, ..., ω_{m+n}} obtained by training to maximize the following equation:
P(O^t_l | X^t_l) = (1 / Z(X^t_l)) exp{ -E(X^t_l, O^t_l) }    (3)
where γ_1 is defined as the output on X^t_l of the classifier SVM_w_{o_1} corresponding to the first element o_1 of O^t_l; γ_2 is defined as the output on X^t_l of the classifier SVM_v_{o_2} corresponding to the second element o_2; when o_k = 2 (k = 3, ..., m), γ_k is defined as the output on X^t_l of the classifier SVM_u_{k-2} corresponding to o_k; and when o_k = 1 (k = 3, ..., m), γ_k is defined as the threshold thre_{k-2} of SVM_u_{k-2} (k - 2 ∈ {1, ..., N_u}). The value of the edge between nodes o_1 and o_2 is ψ(o_1, o_2) = p_{o_1 o_2}; the value of the edge between node o_2 and node o_k (k = 3, ..., m) requires a case distinction: when o_k = 1, ψ(o_2, o_k) = q_{o_2, k-2}, and when o_k = 2, ψ(o_2, o_k) = g_{o_2, k-2};
(4) use the max-product belief propagation algorithm to solve equation (3), obtaining the output O^t_l = {o_1, o_2, ..., o_m} that maximizes equation (3), i.e. the predicted 3-layer tree label of object Obj^t_l.
The 3-layer tree labels finally predicted and output are shown on the right side of Fig. 3 and Fig. 4.
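Because the CRF is a tree (the class node o_1 and the attribute nodes o_3, ..., o_m are each connected only to the subclass node o_2), max-product belief propagation gives the exact MAP label; the sketch below runs it in the log domain (max-sum), which is numerically equivalent. The vectorized message computation, the {0, 1} attribute coding and the weight layout follow the same assumptions as the energy sketch above.

```python
import numpy as np

def predict_tree_label(scores_w, scores_v, scores_u, thre, p, g, q, omega):
    """MAP assignment (o_1, o_2, o_3..o_m) by max-product belief propagation on the
    tree o_1 - o_2 - {o_3, ..., o_m}, run in the log domain.
    Returns (class index, subclass index, attribute presence vector)."""
    n_w, n_v, n_u = len(scores_w), len(scores_v), len(scores_u)
    m = 2 + n_u
    w_node, w_edge = omega[:m], omega[m:]           # node weights, edge weights

    # message from the class node o_1 to o_2, one value per candidate subclass j
    msg_class = np.array([np.max(w_node[0] * scores_w + w_edge[0] * p[:, j])
                          for j in range(n_v)])
    best_class = np.array([np.argmax(w_node[0] * scores_w + w_edge[0] * p[:, j])
                           for j in range(n_v)])

    # message from each attribute node o_k to o_2 (states: 0 = absent, 1 = present)
    msg_attr = np.zeros(n_v)
    best_attr = np.zeros((n_v, n_u), dtype=int)
    for j in range(n_v):
        for k in range(n_u):
            absent  = w_node[2 + k] * thre[k]     + w_edge[1 + k] * q[j, k]
            present = w_node[2 + k] * scores_u[k] + w_edge[1 + k] * g[j, k]
            best_attr[j, k] = int(present > absent)
            msg_attr[j] += max(absent, present)

    # belief at the subclass node o_2, then back-track to o_1 and the attribute nodes
    belief = w_node[1] * scores_v + msg_class + msg_attr
    j_star = int(np.argmax(belief))
    return int(best_class[j_star]), j_star, best_attr[j_star]
```

Calling predict_tree_label with the classifier outputs on the LBP feature X^t_l of a segmented object returns the predicted class index, subclass index and attribute vector, i.e. the 3-layer tree label of the kind shown on the right of Fig. 3 and Fig. 4.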
Conclusion: the present invention proposes a structured description method for the image description problem. From the 3-layer tree semantic label defined by the invention, the user can obtain not only the class and the subclass of the objects in the image but also the attributes they possess, together with the structural relations among class, subclass and attributes, thereby producing a more accurate and richer description of the image.

Claims (1)

1. A structured image description method, comprising the following steps:
Step 1: obtain the training images and construct a 3-layer tree label for each object in the images, forming the training set:
(1) obtain the training images and build the image set IMG;
(2) use an image segmentation algorithm to segment the objects contained in each image of IMG, forming the object set OBJ;
(3) annotate each object in OBJ; the annotation comprises the class, the subclass and the attributes of the object, forming the class set CLASS, the subclass set SUBCLASS and the attribute set ATTRIBUTE;
(4) according to the annotations, construct for each object in OBJ a 3-layer tree label of the form class-subclass-attributes, forming the tag set Y, whose elements correspond one-to-one with those of OBJ;
(5) CLASS contains many identical elements; scan CLASS sequentially and keep only one copy of each element, forming the candidate class set Cla; perform the same scan on the subclass set SUBCLASS and the attribute set ATTRIBUTE to obtain the candidate subclass set Subcl and the candidate attribute set Attri;
(6) build the training set Tr, which comprises the object set OBJ, the tag set Y, the candidate class set Cla, the candidate subclass set Subcl and the candidate attribute set Attri;
Step 2: extract the low-level features of each object of the images in the training set and train the classifiers corresponding to all candidate classes, subclasses and attributes, forming the intermediate data required for the next modeling step, as follows:
(1) extract the local binary pattern (LBP) feature of each object in OBJ, forming the feature set X;
(2) from the feature set X and the class set CLASS, train an SVM classifier SVM_w_i for each element w_i of Cla, forming the classifier set S_w corresponding to the classes; likewise, from X and the subclass set SUBCLASS, train an SVM classifier SVM_v_j for each element v_j of Subcl, forming the classifier set S_v corresponding to the subclasses; and from X and the attribute set ATTRIBUTE, train an SVM classifier SVM_u_k for each element u_k of Attri, forming the classifier set S_u corresponding to the attributes;
(3) on the training set Tr, compute the precision-recall (PR) curve of each classifier SVM_u_k in S_u and derive its threshold thre_k from the PR curve, forming the threshold set Threshold corresponding to S_u;
(4) on the training set Tr, compute the co-occurrence probability p_ij of each element w_i of the candidate class set Cla with each element v_j of the candidate subclass set Subcl, i.e. the number of objects whose labels in Y contain both w_i and v_j divided by the total number of objects N_m in OBJ; compute the co-occurrence probability g_jk of each element v_j of the candidate subclass set Subcl with each element u_k of the candidate attribute set Attri, i.e. the number of objects whose labels contain both v_j and u_k divided by N_m; in addition, compute the probability q_jk that a label contains the element v_j of the candidate subclass set but not the element u_k of the candidate attribute set, i.e. the number of objects whose labels contain v_j but not u_k divided by N_m;
(5) build the intermediate data comprising the classifier sets S_w, S_v, S_u, the threshold set Threshold and the statistics p_ij, g_jk, q_jk, for use in the next modeling step;
Step 3: construct the conditional random field (CRF) model and train it to obtain the model parameters;
Step 4: for the image to be described, first perform image segmentation to obtain the objects contained in the image, then extract the low-level features of each object according to the method of Step 2; finally, using the conditional random field model (CRF) constructed in Step 3 and the trained model parameters, predict the 3-layer tree label of each object in the image with the max-product belief propagation algorithm.
CN201310504488.7A 2013-10-23 2013-10-23 A structured image description method Expired - Fee Related CN103530403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310504488.7A CN103530403B (en) 2013-10-23 2013-10-23 A structured image description method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310504488.7A CN103530403B (en) 2013-10-23 2013-10-23 A structured image description method

Publications (2)

Publication Number Publication Date
CN103530403A CN103530403A (en) 2014-01-22
CN103530403B true CN103530403B (en) 2016-09-28

Family

ID=49932412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310504488.7A Expired - Fee Related CN103530403B (en) 2013-10-23 2013-10-23 A structured image description method

Country Status (1)

Country Link
CN (1) CN103530403B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537676B (en) * 2015-01-12 2017-03-22 南京大学 Gradual image segmentation method based on online learning
US9875736B2 (en) * 2015-02-19 2018-01-23 Microsoft Technology Licensing, Llc Pre-training and/or transfer learning for sequence taggers
US11514244B2 (en) * 2015-11-11 2022-11-29 Adobe Inc. Structured knowledge modeling and extraction from images
CN110019663B (en) * 2017-09-30 2022-05-17 北京国双科技有限公司 Case information pushing method and system, storage medium and processor
CN108875934A (en) * 2018-05-28 2018-11-23 北京旷视科技有限公司 Neural network training method, apparatus, system and storage medium
CN110162644B (en) * 2018-10-10 2022-12-20 腾讯科技(深圳)有限公司 Image set establishing method, device and storage medium
CN115114966B (en) * 2022-08-29 2023-04-07 苏州魔视智能科技有限公司 Method, device and equipment for determining operation strategy of model and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102207966A (en) * 2011-06-01 2011-10-05 华南理工大学 Video content quick retrieving method based on object tag
CN102364498A (en) * 2011-10-17 2012-02-29 江苏大学 Multi-label-based image recognition method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9317779B2 (en) * 2012-04-06 2016-04-19 Brigham Young University Training an image processing neural network without human selection of features

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102207966A (en) * 2011-06-01 2011-10-05 华南理工大学 Video content quick retrieving method based on object tag
CN102364498A (en) * 2011-10-17 2012-02-29 江苏大学 Multi-label-based image recognition method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A survey of multi-label image learning; Huang Yan; Journal of Yunnan Minzu University (Natural Sciences Edition); 2011-11-10; Vol. 20, No. 6; pp. 490-496 *
Multi-label image classification based on conditional random fields; Xu Zhenyu et al.; Journal of Liaoning University of Technology (Natural Science Edition); 2012-08-15; Vol. 32, No. 4; pp. 223-230 *
Research on multi-label image annotation algorithms based on semantic analysis; Hu Weiwei; China Master's Theses Full-text Database, Information Science and Technology; 2013-06-15; Vol. 2013, No. 6; I138-1051 *

Also Published As

Publication number Publication date
CN103530403A (en) 2014-01-22

Similar Documents

Publication Publication Date Title
CN103530403B (en) A structured image description method
CN103530405B (en) An image retrieval method based on hierarchical structure
US11687781B2 (en) Image classification and labeling
Ocer et al. Tree extraction from multi-scale UAV images using Mask R-CNN with FPN
CN108763445B (en) Construction method and apparatus of a patent knowledge base, computer device and storage medium
Liu et al. Automatic detection of oil palm tree from UAV images based on the deep learning method
CN110083700A (en) An enterprise public opinion sentiment classification method and system based on convolutional neural networks
CN110909164A (en) Text enhancement semantic classification method and system based on convolutional neural network
CN106447066A (en) Big data feature extraction method and device
CN104142995B (en) Social event recognition method based on visual attributes
CN112241481A (en) Cross-modal news event classification method and system based on graph neural network
CN106445988A (en) Intelligent big data processing method and system
CN106682696A (en) Multi-example detection network based on refining of online example classifier and training method thereof
Henry et al. Automated LULC map production using deep neural networks
CN108038205A (en) An opinion analysis prototype system for Chinese microblogs
CN113254652B (en) Social media posting authenticity detection method based on hypergraph attention network
CN110196945A (en) A microblog user age prediction method based on the fusion of LSTM and LeNet
CN106897776A (en) A kind of continuous type latent structure method based on nominal attribute
CN105894038A (en) Credit card fraud prediction method based on signal transmission and link mode
CN107908757A (en) Website classification method and system
Çalışkan et al. Forest road detection using deep learning models
Stevenson et al. Deep residential representations: Using unsupervised learning to unlock elevation data for geo-demographic prediction
CN110008337A (en) The parallel LSTM structure classification of customs products method measured based on phase response
Yao et al. Semantic segmentation based on stacked discriminative autoencoders and context-constrained weakly supervised learning
Montellano Butterfly, Larvae and Pupae Defects Detection Using Convolutional Neural Network and Apriori Algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160928

Termination date: 20211023

CF01 Termination of patent right due to non-payment of annual fee