CN106446933A - Multi-target detection method based on context information - Google Patents


Info

Publication number
CN106446933A
CN106446933A (application CN201610785155.XA)
Authority
CN
China
Prior art keywords
target
scene
detection
represent
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610785155.XA
Other languages
Chinese (zh)
Other versions
CN106446933B (en)
Inventor
李涛
裴利沈
赵雪专
张栋梁
李冬梅
朱晓珺
曲豪
邹香玲
高大伟
刘永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HENAN RADIO & TELEVISION UNIVERSITY
Original Assignee
HENAN RADIO & TELEVISION UNIVERSITY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HENAN RADIO & TELEVISION UNIVERSITY filed Critical HENAN RADIO & TELEVISION UNIVERSITY
Priority to CN201610785155.XA priority Critical patent/CN106446933B/en
Publication of CN106446933A publication Critical patent/CN106446933A/en
Application granted granted Critical
Publication of CN106446933B publication Critical patent/CN106446933B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2323Non-hierarchical techniques based on graph theory, e.g. minimum spanning trees [MST] or graph cuts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/35Categorising the entire scene, e.g. birthday party or wedding scene
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection


Abstract

The invention discloses a multi-target detection method based on context information, comprising offline training and an online matching model. Using the Gist feature of an input picture and its distance to the scene cluster centers, the corresponding scene of the picture is selected and the corresponding selection probability is obtained. By running the existing single-target base detector DPM of every target class, the corresponding target detection windows and detection scores are obtained; then, using the trained context model in combination with the Gist feature, the detection results of the targets are obtained. The method distinguishes different scenes using global context information and then forms a corresponding target detection model from the mutual relations between targets in each scene, which effectively reduces mutual interference between targets across scenes and further improves the accuracy of multi-target detection.

Description

Multi-target detection method based on contextual information
Technical field
The present invention relates to a multi-target detection technique based on contextual information that is suitable for real-time application in multi-target detection systems.
Background technology:
Image- and video-based target detection has been a research hotspot in computer vision for decades and will remain so for a long time to come; it is the foundation of visual analysis. The technology is widely applicable to target tracking, object detection and recognition, information security, autonomous driving, image retrieval, robotics, human-computer interaction, medical image analysis, the Internet of Things, and other subjects and engineering application fields.
Current object detection systems mainly recognize and detect different targets by characterizing the appearance of the targets themselves. Such systems typically describe target appearance either with hand-engineered features (such as HOG, LBP, or SIFT) or with deep features learned directly from images by deep learning. In real-world detection, however, scenes are mostly unconstrained open environments that are complex and changeable, with interference such as illumination variation, viewpoint changes, and target occlusion. If only the appearance information of the target itself is used, then when the target provides very little information in the image or video, the target class cannot be judged accurately from the target alone.
Wang Yuehuan, Liu Chang, Chen Junling et al. of Huazhong University of Science and Technology, China, filed the invention "A target recognition method based on context constraints" with the State Intellectual Property Office of China on December 7, 2012; it was approved and published on April 17, 2013, publication number CN103049763A.
That publication discloses a target recognition method based on context constraints, for remote-sensing image scene classification and target detection/recognition. The method first filters the image, then performs region segmentation, dividing the image into many connected components and labeling each of them. Next, the feature vector of each connected component is computed and input into a previously trained classifier for scene classification, and a category label map is output. Then, on this basis, local regions where the target of interest may exist are delimited on the label map according to the target to be recognized, those local regions are preprocessed, and regions of interest are computed within them. Finally, features are extracted and input into a classifier for recognition. The invention provides a fast and effective scene classification method intended to supply effective context constraints for target recognition and to improve recognition efficiency and accuracy. The algorithm flow is illustrated in Fig. 1 below.
The above patent document still has defects. Although it segments and labels regions to obtain a scene classification and then, on that basis, applies global context constraints to compute regions of interest, obtain the relevant feature vectors, and recognize the corresponding targets with a trained classifier, such a method uses only the global scene context to obtain the probable target regions. It considers the relative position distribution of targets with respect to the scene but ignores the characterization of co-occurrence between targets. Moreover, when the target's own information content is small, the target cannot be characterized accurately and the classifier cannot produce the corresponding detection.
Content of the invention:
To address the problem of insufficient information from the target itself, the present invention uses relevant information from the picture or video outside the target to provide auxiliary information, directly or indirectly, for target detection, thereby improving detection accuracy.
The technical scheme adopted to realize the goal of the invention is a multi-target detection method based on contextual information, characterized by comprising offline training and an online matching model. Offline training obtains the subtree models in the following steps:
Step 1: First, for the training set, annotate the image target classes with the LabelMe software, obtaining a training set of images with target annotations, and train a DPM detector for each target in the images. Step 2: Compute the Gist feature of each picture in the training set to obtain global context information; then use an improved spectral clustering method to realize the scene partition.
Step 3: Represent the scene by a hidden variable; then, under each scene, obtain the co-occurrence and location distribution information of the targets from the annotation results of the training pictures.
Step 4: By computing the mapping distribution, in a transformed space, of targets appearing in two training-set pictures, judge whether the two targets are consistency targets, forming consistency target pairs.
Step 5: Using the co-occurrence and location distribution information obtained in steps 3 and 4 together with the consistency target pairs, learn the tree structure with a weighted Chow-Liu algorithm, then train the parameters to obtain the subtree model.
Online matching model steps:
Step 1: At detection time, first compute the Gist feature of the input image.
Step 2: Then, according to the Gist feature of the input image, assign the image to the corresponding scene subspace obtained in training and obtain the probability distribution over the scene subspaces.
Step 3: Then, with the trained DPM detectors of the different targets, obtain the detection score and detection window information for each target in the image.
Step 4: Using the scene probability distribution obtained in step 2, the per-target detection scores and window information from step 3, and the subtree prior models obtained in offline training, compute iteratively the maximum a posteriori estimate of the probability that each target detection is correct, thereby revising the detection results of the various DPM detectors to obtain the final multi-target detection result.
In step 2 of offline training, the 520-dimensional Gist feature of each picture in the training set is obtained as follows. First, the image is filtered with a bank of Gabor filters at different scales and orientations, yielding one filtered image per filter. Then each filtered image is partitioned into a non-overlapping grid of fixed size, and the mean is taken over each grid cell. Finally, the grid-cell means of the filter responses are concatenated to form the global feature, giving the final 520-dimensional Gist feature of the image. The expression (formula (1), rendered as an image in the original) is of the form:

G_j = cat(I_j * g_mn)

where G_j denotes the Gist feature of the j-th image, cat denotes feature concatenation, I_j is the j-th grayscale image divided into an r × l grid, g_mn denotes the Gabor filter at the m-th scale and n-th orientation, * denotes the convolution of the image with the Gabor filter, and n_c, the number of convolution filters, equals m × n; the dimension of G_j is r × l × n_c. This scheme uses Gabor filters at 4 scales and 8 orientations.
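As a concrete illustration, the following is a minimal sketch of this Gist pipeline: filter with a Gabor bank, grid-average the response magnitudes, and concatenate. The frequency-domain Gabor parameterization and the 4 × 4 grid are assumptions made here for illustration; with 4 scales, 8 orientations, and a 4 × 4 grid the sketch yields 512 dimensions, so the patent's 520 dimensions imply a slightly different grid or extra channels.

```python
import numpy as np

def gabor_bank(size, scales=4, orientations=8):
    """Frequency-domain Gabor-like filters (parameters are illustrative)."""
    h, w = size
    fy, fx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    rad = np.sqrt(fx ** 2 + fy ** 2)          # radial frequency
    ang = np.arctan2(fy, fx)                  # orientation of each frequency
    filters = []
    for s in range(scales):
        f0 = 0.25 / (2 ** s)                  # assumed center frequency per scale
        for o in range(orientations):
            t0 = np.pi * o / orientations
            d = np.angle(np.exp(1j * (ang - t0)))      # wrapped angular distance
            g = np.exp(-((rad - f0) ** 2) / (2 * (0.5 * f0) ** 2)
                       - (d ** 2) / (2 * (np.pi / orientations) ** 2))
            filters.append(g)
    return filters

def gist(img, grid=(4, 4), scales=4, orientations=8):
    """Concatenate per-cell means of |img * g_mn| over the filter bank."""
    F = np.fft.fft2(img)
    gh, gw = grid
    H, W = img.shape
    feats = []
    for g in gabor_bank(img.shape, scales, orientations):
        resp = np.abs(np.fft.ifft2(F * g))    # response magnitude of one filter
        for i in range(gh):
            for j in range(gw):
                cell = resp[i * H // gh:(i + 1) * H // gh,
                            j * W // gw:(j + 1) * W // gw]
                feats.append(cell.mean())
    return np.array(feats)
```

With the assumed parameters, `gist` on any grayscale image returns a 4 × 8 × 16 = 512-dimensional vector.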
In step 2 of offline training, 6-8 sub-scene classes are obtained with the improved spectral clustering method. The concrete steps are: first, input the Gist feature of each picture in the training set and use a Random Forest method to obtain a similarity matrix representing the similarity between the images in the training set; then, with this similarity matrix as input, cluster the training-set pictures by spectral clustering, realizing the scene partition of the training pictures.
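The clustering step can be sketched as spectral clustering driven directly by a precomputed similarity matrix, as the patent describes (the Random-Forest similarity itself is omitted; any symmetric similarity matrix can be plugged in). The normalized-Laplacian formulation and the deterministic farthest-point k-means initialization are implementation choices of this sketch, not taken from the patent.

```python
import numpy as np

def spectral_clusters(S, k, iters=50):
    """Spectral clustering of n items from a symmetric similarity matrix S."""
    n = S.shape[0]
    d = S.sum(axis=1)
    inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # symmetric normalized Laplacian L = I - D^{-1/2} S D^{-1/2}
    L = np.eye(n) - (inv_sqrt[:, None] * S) * inv_sqrt[None, :]
    _, V = np.linalg.eigh(L)
    U = V[:, :k]                                   # k smallest eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    # k-means on the spectral embedding, farthest-point initialization
    C = [U[0]]
    for _ in range(1, k):
        d2 = np.min(((U[:, None, :] - np.array(C)[None, :, :]) ** 2).sum(-1), axis=1)
        C.append(U[np.argmax(d2)])
    C = np.array(C)
    labels = np.zeros(n, dtype=int)
    for _ in range(iters):
        labels = np.argmin(((U[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = U[labels == j].mean(axis=0)
    return labels
```

On a similarity matrix with two clear blocks, the sketch recovers the two groups.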
In step 3 of offline training, consistency target pairs are incorporated when the subtree model is trained. The concrete steps are:
(1) Each component of the representation of a consistency target is obtained as follows: (l_x(o_ik), l_y(o_ik)) denotes the center coordinates of the target box of the k-th instance of target class i in image o; the scale sc(o_ik) is represented by the square root of the area of that instance's target box, and the viewing angle p(o_ik) is obtained from the aspect ratio of the target box. Similarly, (l_x(q_il), l_y(q_il)) denotes the center coordinates of the target box of the l-th instance of target class i in image q, with scale sc(q_il) and viewing angle p(q_il). A variable r ∈ R represents the corresponding change of the same-class target variables of the two images in the four dimensions of the space, where R is the set of same-class target correspondences between the two images of each consistency target pair: Δl represents the mutual change of target location, Δsc the mutual change of target scale, and Δp the mutual change of viewing angle. From the mapping distribution computed by formula (2) (rendered as an image in the original), judge whether the corresponding targets satisfy the consistency distribution; if so, the corresponding targets belong to the same target paradigm, i.e., they form a consistency target pair.
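A minimal sketch of such a consistency check is given below: it compares two same-class instances from two images in normalized location, log-scale, and log-aspect. The normalization and the tolerance thresholds are assumptions made for illustration; the patent itself judges consistency via the mapping distribution of formula (2).

```python
import numpy as np

def box_desc(box):
    """(x1, y1, x2, y2) -> center, scale (sqrt of area), aspect ratio."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    return np.array([(x1 + x2) / 2, (y1 + y2) / 2]), np.sqrt(w * h), w / h

def consistent_pair(box_o, box_q, img_o, img_q, tol=(0.15, 0.25, 0.25)):
    """Do two same-class instances in images o and q follow the same
    location/scale/aspect paradigm?  Positions are normalized by image
    size (img_* = (width, height)); scale and aspect are compared as
    log-ratios.  Thresholds in `tol` are illustrative assumptions."""
    (co, so, po) = box_desc(box_o)
    (cq, sq, pq) = box_desc(box_q)
    d_loc = np.linalg.norm(co / np.array(img_o) - cq / np.array(img_q))
    d_sc = abs(np.log((so / max(img_o)) / (sq / max(img_q))))
    d_p = abs(np.log(po / pq))
    return d_loc < tol[0] and d_sc < tol[1] and d_p < tol[2]
```

Two near-identical placements pass the check; a displaced, reshaped instance fails it.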
(2) Using greedy clustering, generate the final target groups under the different subspaces, adopting soft voting to avoid sensitivity to the transformed-space partition and the redundancy caused by clustering similar targets. If the frequency with which a target occurs in a target group does not exceed 50%, the target is rejected from that group. This ultimately forms the target groups under the different scene subspaces. On the basis of the formed target groups, consistency target pairs are formed within each target group by pairwise combination of targets of different classes.
(3) The proposed consistency target pairs, together with the co-occurrence and mutual position relations between single targets, jointly characterize the local context information between targets. The steps are: first, characterize the correlation between each consistency target pair and each sub-scene:

θ_it = cf_it × isf_i    (3)

where cf_it denotes the frequency with which the i-th consistency target pair appears in the t-th sub-scene, and isf_i denotes the inverse scene frequency index of the i-th consistency target pair. Formula (4), rendered as an image in the original, defines isf_i from the ratio of T, the total number of sub-scene types, to T_t, the number of sub-scene types containing the i-th consistency target pair, with a small constant ξ preventing isf_i from being 0. After obtaining all correlation coefficients θ_it, they are normalized.
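This cf × isf weighting is analogous to tf-idf. The sketch below computes θ_it from raw pair counts; the exact form of isf used here (log(T/T_t) + ξ) and the per-scene normalization are assumptions, since formula (4) appears only as an image in the original.

```python
import math

def pair_scene_correlation(counts, xi=0.01):
    """theta[t][i] = cf_it * isf_i, normalized over pairs in each sub-scene.
    counts[t][i] = occurrences of consistency pair i in sub-scene t."""
    T, n = len(counts), len(counts[0])
    # scene frequency T_t of each pair, and an idf-like inverse scene frequency;
    # xi keeps isf away from 0 when a pair occurs in every sub-scene
    Tt = [sum(1 for t in range(T) if counts[t][i] > 0) for i in range(n)]
    isf = [math.log(T / max(Tt[i], 1)) + xi for i in range(n)]
    theta = []
    for t in range(T):
        tot = sum(counts[t]) or 1
        row = [(counts[t][i] / tot) * isf[i] for i in range(n)]
        s = sum(row) or 1
        theta.append([v / s for v in row])
    return theta
```

A pair that is frequent in one sub-scene but absent elsewhere dominates that sub-scene's normalized weights, while a pair present in every sub-scene is down-weighted.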
(4) Using the annotation information of the training-set pictures, under each sub-scene t, build a binary tree describing target co-occurrence and a Gaussian tree describing the target location relations; together they characterize the prior subtree model.

The joint probability of whether all targets occur, over the binary tree, is expressed as:

p(b|z_t) = p(b_root|z_t) ∏_i p(b_i|b_pa(i), z_t)    (5)

where i denotes a node in the tree, pa(i) denotes the parent node of node i, and b_i ∈ {0, 1} indicates whether target i appears in the image; b ≡ {b_i} denotes all target classes, b_root denotes the root node of the subtree, and z_t is a discrete variable denoting the t-th sub-scene space.

The position L_i of target i depends on the target's occurrence, and the dependence relations between the positions have the same binary tree structure as target occurrence, expressed as:

p(L|b) = p(L_root|b_root) ∏_i p(L_i|L_pa(i), b_i, b_pa(i))    (6)

where L_root denotes the position of the root node and L_pa(i) the position of the parent node.

The joint distribution of the variables b and the positions L is then expressed as:

p(b, L|z_t) = p(b|z_t) p(L|b)    (7)
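To make the factorization in (5) concrete, here is a tiny sketch that evaluates a tree-factored occurrence probability given a parent array and conditional probability tables (the tree and the tables below are toy values, not learned ones):

```python
def tree_occurrence_prob(parent, root_prob, cpt, b):
    """p(b) = p(b_root) * prod_i p(b_i | b_pa(i)) over a tree.
    parent[i] is the parent of node i (-1 for the root);
    root_prob[v]      = p(b_root = v);
    cpt[i][bp][bi]    = p(b_i = bi | b_pa(i) = bp)."""
    p = 1.0
    for i, pa in enumerate(parent):
        if pa < 0:
            p *= root_prob[b[i]]          # root factor
        else:
            p *= cpt[i][b[pa]][b[i]]      # edge factor
    return p
```

Summing over all configurations of b yields 1, confirming the factorization is a valid distribution.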
(5) The detection results of the trained single-target detectors (DPM) and the Gist global features are incorporated into the prior model. Denoting the global feature by g, the joint distribution (formulas (8)-(10), rendered as images in the original) extends the model with factors of the form p(g|b_i) for the global feature and, for each candidate window, p(c_ik|b_i), p(W_ik|c_ik, L_i), and p(s_ik|c_ik), where W_ik denotes the position of the k-th candidate window obtained by the single-target detector of target class i, s_ik denotes the score of that window, and c_ik indicates whether the k-th candidate window of target class i is a correct detection, taking the value 1 if so and 0 otherwise.
(6) Training the subtree model mainly comprises learning the tree structure and learning the relevant parameters. When the Chow-Liu algorithm is used for the structure learning of the tree prior model, the correlation θ_it between consistency target pairs and the scene, characterized by formula (3), modifies the mutual information S_i of the parent-child node pair of the target pair:

S_i = S_i × (1 + sigm(θ_it))    (11)

Then the structure learning of the subtree prior model is completed by maximum-weight selection.
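A minimal sketch of this weighted structure learning follows: a maximum-weight spanning tree (Prim's algorithm here, an implementation choice) over pairwise mutual information, with edges that correspond to consistency target pairs boosted by (1 + sigm(θ)) as in formula (11):

```python
import math

def sigm(x):
    return 1.0 / (1.0 + math.exp(-x))

def weighted_chow_liu(mi, boost):
    """Maximum-weight spanning tree over pairwise mutual information mi,
    with consistency-pair edges boosted per eq. (11).
    boost[(i, j)] = theta for the pair, keyed with i < j."""
    n = len(mi)

    def weight(i, j):
        key = (min(i, j), max(i, j))
        w = mi[i][j]
        return w * (1 + sigm(boost[key])) if key in boost else w

    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # greedily attach the highest-weight edge leaving the tree
        i, j = max(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: weight(*e))
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

Boosting an edge can change which structure is learned: an edge with lower raw mutual information is chosen once its consistency-pair weight lifts it above the alternatives.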
For learning the model parameters: first, p(b_i|b_pa(i)) in formula (8) is obtained by counting target co-occurrences, modified by the consistency target pairs and the change of mutual information; p(L_i|L_pa(i), b_i, b_pa(i)) is assigned according to the occurrence of the parent and child nodes, distinguishing three cases (parent and child co-occur, only the child occurs, the child does not occur) and taking the value from the corresponding Gaussian distribution. From the Gist global feature of each training image in formula (9), p(g|b_i) is estimated; for the global feature g, p(b_i|g) is estimated using logistic regression.
Next, the detection results of the single base detectors are integrated. First, the probability p(c_ik|b_i) that a detection is correct is closely related to whether the target occurs, in the following form: when the target does not occur, the correct detection rate is 0; when the target occurs, the probability of correct detection is the ratio of the number of correct detections to the total number of target annotations in the training set.
Then the location probability p(W_ik|c_ik, L_i) of a detection window is a Gaussian distribution, depending on the correct-detection indicator c_ik and the position L_i of target class i: when the window is a correct detection, W_ik follows a Gaussian distribution whose variance Λ_i represents the variance of the target's predicted position; when the window is not a correct detection, W_ik does not depend on L_i and can be expressed as a constant.
Finally, the score probability p(s_ik|c_ik) of the base detector depends on the correct-detection result c_ik and is expressed through p(c_ik|s_ik), which is estimated using logistic regression.
The online matching part:
(1) At detection time, for an input image j, first obtain the Gist global feature using the method of formula (1).
(2) Then, according to the Gist feature of the input image, assign the image to the corresponding scene subspace obtained in training and obtain the probability distribution over the scene subspaces. The probability of each sub-scene (given as an image-rendered formula in the original) is the normalized inverse distance: the inverse of the distance from input image j to the t-th sub-scene cluster center, divided by the sum of the inverse distances to all cluster centers; the normalized probability represents the probability that the image belongs to each sub-scene.
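A minimal sketch of this normalized-inverse-distance scene probability follows (Euclidean distance is assumed here; the patent does not name the metric):

```python
import numpy as np

def scene_probabilities(gist_vec, centers, eps=1e-9):
    """p_t proportional to 1 / distance(gist, center_t), normalized to sum to 1."""
    d = np.linalg.norm(np.asarray(centers, float) - np.asarray(gist_vec, float),
                       axis=1)
    inv = 1.0 / (d + eps)          # eps guards against a zero distance
    return inv / inv.sum()
```

For a query at distance 1 from one center and 3 from another, the probabilities are 0.75 and 0.25.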
(3) Using the trained DPM detectors of the different targets, obtain the initial detection score and detection window information for each target in the image.
(4) Using the sub-scene probability distribution obtained in step (2) and the per-target detection scores and window information from step (3), together with the subtree prior models obtained in offline training, iteratively compute the maximum a posteriori estimate of the probability that each target detection is correct, thereby revising the detection results of the various DPM detectors to obtain the final multi-target detection result; this is obtained by iterative optimization of the MAP objective (given as an image-rendered formula in the original).
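The full MAP refinement runs inference over the subtree models; the sketch below shows only one greatly simplified fusion step (scene-weighted presence prior combined with calibrated window probabilities under a naive independence assumption) to illustrate how scene context can raise or suppress detector evidence. None of its formulas are taken verbatim from the patent.

```python
import numpy as np

def rescore_detections(p_scene, prior_b, p_c_given_s):
    """One simplified fusion step of context-based rescoring.
    p_scene[t]      : probability of sub-scene t;
    prior_b[t][i]   : p(b_i = 1 | sub-scene t);
    p_c_given_s[i]  : calibrated p(c_ik = 1 | s_ik) for class i's windows."""
    prior = np.asarray(p_scene) @ np.asarray(prior_b)   # marginal presence prior
    best = np.array([max(p, default=0.0) for p in p_c_given_s])
    # posterior presence belief, fusing prior and best window evidence
    b = prior * best / (prior * best + (1 - prior) * (1 - best) + 1e-12)
    # detector probabilities rescaled by the presence belief of their class
    rescored = [[pk * bi for pk in pcs] for bi, pcs in zip(b, p_c_given_s)]
    return b, rescored
```

With identical detector evidence for two classes, the class favored by the scene prior ends up with both a higher presence belief and higher rescored windows.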
Beneficial effects of the present invention: in the present system, to address the problem of insufficient information from the target itself, relevant information from outside the target in the picture or video, such as the scene in which the target is located and the mutual relations between different targets, provides auxiliary information, directly or indirectly, for target detection, thereby improving detection accuracy. The system uses the Gist global feature, which expresses global scene context information, to realize scene selection; then, for each scene subspace, it incorporates both the co-occurrence and the position relations between single targets, proposes the concept of the consistency target pair, and integrates it into the target detection model of the corresponding subtree structure as important local context information. Through the consistency target pairs, the corresponding mutual-information weights are modified during the formation of the subtree target detection model, so that the local context information of the consistency target pairs changes the structure of the subtree detection model. The method uses global context information to distinguish different scenes and then forms a corresponding target detection model from the mutual relations between targets in each scene, effectively reducing mutual interference between targets across scenes; and by introducing consistency target pairs it strengthens the mutual constraints between targets and provides more robust local context information. Compared with existing systems, it further improves the accuracy of multi-target detection.
Description of the drawings:
Fig. 1 is a flow chart of the prior art;
Fig. 2 is the flow chart of the present invention for vehicle target detection;
Fig. 3 is a schematic diagram of obtaining consistency target pairs in the present invention;
Fig. 4 shows partial detection results of the multi-target detection method based on contextual information.
Specific embodiment:
Because relevant information outside the target in the image or video to be detected, such as the scene in which the detection target is located and the mutual relations of different targets with the detection target, can directly or indirectly provide auxiliary information for detecting the target and characterize it more richly, the accuracy of target detection can be improved. Based on this thinking, the present invention proposes a multi-target detection system fusing multiple kinds of contextual information. The system consists of two parts: a scene selection layer and a subtree layer. First, the scene selection layer is obtained from the Gist global feature; then, in the corresponding sub-scene, single targets and consistency target pairs are used to characterize the co-occurrence and position relations between targets with a tree-structured probabilistic graphical model, yielding the subtree layer, so that multi-target detection is realized using both global and local contextual information.

During training, first, in the scene selection layer, global context information is represented by the Gist feature; with this feature, initial scene subsets are obtained using the improved spectral clustering method, and the subtree root node is selected under each subset. Then, under each subset, using the annotated training-set images, the proposed consistency target pairs together with the co-occurrence and mutual position relations between single targets jointly characterize the local context information between targets, and with this local information different subtree models are trained. At detection time, first the Gist feature of the input image is computed; in the scene selection layer, the distance to the scene cluster centers is used to select the corresponding scene of the picture, and the corresponding selection probability is obtained. Then, by running the existing single-target base detector DPM of every target class, the corresponding target detection windows and detection scores are obtained; using the trained context model in combination with the Gist feature, the detection results of the targets are obtained. The method uses the obtained local and global contextual information to reduce or remove detection results produced by the appearance-based object detector, completing the revision of the single-target detection results to obtain the final object detection results.
The steps by which the present embodiment realizes the context-based multi-target detection system are as follows:
Offline training part of the multi-target detection system: 1) first, for the training set, annotate the image target classes with the LabelMe software, obtaining a training set of images with target annotations; 2) compute the Gist feature of each picture in the training set to obtain global context information, then use the improved spectral clustering method to realize the scene partition; 3) represent the scene by a hidden variable, then, under each scene, obtain the co-occurrence and location distribution information of the targets from the annotation results of the training pictures; 4) by computing the mapping distribution, in a transformed space, of targets in two training-set pictures, judge whether the two targets are consistency targets, forming consistency target pairs; 5) using the co-occurrence and location distribution information obtained in 3) and 4) and the consistency target pairs, learn the tree structure with the weighted Chow-Liu algorithm, then train the parameters to obtain the subtree models.
Online matching model:
1) At detection time, first compute the Gist feature of the input image. 2) Then, according to the Gist feature of the input image, assign the image to the corresponding scene subspace obtained in training and obtain the probability distribution over the scene subspaces. 3) Then, with the trained DPM detectors of the different targets, obtain the detection score and detection window information for each target in the image. 4) Using the scene probability distribution obtained in 2) and the per-target detection scores and window information from 3), iteratively compute, together with the subtree prior models obtained in offline training, the maximum a posteriori estimate of the probability that each target detection is correct, thereby revising the detection results of the various DPM detectors to obtain the final multi-target detection result. (DPM detector: DPM (Deformable Parts Model) is an extremely successful object detection algorithm that won the VOC (Visual Object Classes) detection challenge for several consecutive years. It has become an important component of numerous classifiers and of segmentation, human pose, and behavior classification. Its inventor, Pedro Felzenszwalb, was given a "Lifetime Achievement Award" by VOC in 2010. DPM can be regarded as an extension of HOG (Histograms of Oriented Gradients), and its general idea is consistent with HOG: first compute the histogram of oriented gradients, then train the gradient model (template) of the object with an SVM (Support Vector Machine). Such a template can then be used directly for classification; simply put, it is matching the model against the object.)
The flow realized by this scheme is shown in Fig. 2. This scheme is elaborated below with respect to the above flow:
1. Offline training to obtain the subtree models
1) First, annotate the targets in the training-set images with the LabelMe software, obtaining training images containing target class and position information, and train a DPM detector for each target in the images.
2) Then compute the Gist feature of the samples in the training set to obtain the global context information of the sample images, and realize the partition into different scenes with the improved spectral clustering method. The detailed steps are:

(2.1) Obtain the 520-dimensional Gist feature of each picture in the training set. The process is: first, filter the image with a bank of Gabor filters at different scales and orientations, obtaining one filtered image per filter; then partition each filtered image into a non-overlapping grid of fixed size and take the mean over each grid cell; finally, concatenate the grid-cell means to form the global feature, giving the final 520-dimensional Gist feature of the image, with the expression of formula (1), where G_j denotes the Gist feature of the j-th image, cat denotes feature concatenation, I_j is the j-th grayscale image divided into an r × l grid, g_mn denotes the Gabor filter at the m-th scale and n-th orientation, * denotes convolution, and n_c, the number of convolution filters, equals m × n; the dimension of G_j is r × l × n_c. This scheme uses Gabor filters at 4 scales and 8 orientations.

(2.2) For the Gist feature of each training-set picture, obtain 6-8 sub-scene classes with the improved spectral clustering method. The concrete flow is: first, input the Gist feature of each picture in the training set and use the Random Forest method to obtain the similarity matrix representing the similarity between the images in the training set; then, with this similarity matrix as input, cluster the training-set pictures by spectral clustering, realizing the scene partition of the training pictures.
3) In each scene subspace, the image subset obtained for that subspace is used to train the corresponding sub-tree model of the scene with a tree-structured probabilistic graphical model. When training the sub-tree model, this scheme incorporates consistency target pairs to describe the pairwise relations between targets, proposing a sub-tree context model of consistency target pairs. The detailed process is as follows:
(3.1) First, according to the consistent distribution, over spatial position, scale, and viewpoint, of two neighbouring targets of different classes observed in two different images of a scene subspace, the consistency target pairs of that scene subspace are obtained. The detailed process of obtaining a consistency target pair is shown in Figure 3. Each of its components is represented as follows:
(lx(o_ik), ly(o_ik)) denotes the center coordinates of the bounding box of the k-th instance of target class i in image o; its scale sc(o_ik) is the square root of the bounding-box area of the instance, and its viewpoint p(o_ik) is the aspect ratio of the bounding box; similarly, (lx(q_il), ly(q_il)) denotes the center coordinates of the bounding box of the l-th instance of target class i in image q, with scale sc(q_il) and viewpoint p(q_il). The variable r = (Δlx, Δly, Δsc, Δp) represents the corresponding change of the same-class target variables between the two images in the four-dimensional space, where r ∈ R denotes a mutual relation and R denotes the set of same-class target correspondences between the two images of each consistency target pair; Δlx and Δly represent the mutual change of target location, Δsc the mutual change of target scale, and Δp the mutual change of aspect ratio. From the mapping distribution computed by formula (2), it is judged whether the corresponding targets satisfy the consistency distribution; if so, the corresponding targets belong to the same target paradigm, i.e. form a consistency target pair.
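The consistency test of step (3.1) might look as follows in code; the fixed tolerances and the simple difference test are hypothetical stand-ins for the consistency distribution of formula (2).

```python
import numpy as np

def descriptor(box):
    """(center_x, center_y, scale, viewpoint) for a box given as (cx, cy, w, h)."""
    cx, cy, w, h = box
    return np.array([cx, cy, np.sqrt(w * h), w / h])

def change(box_o, box_q):
    """r = (d_lx, d_ly, d_sc, d_p): change of one target between images o and q."""
    return descriptor(box_q) - descriptor(box_o)

def is_consistency_pair(a_o, a_q, b_o, b_q, tol=np.array([20, 20, 5, 0.3])):
    """Two targets of different classes form a consistency pair when their changes
    across the two images agree within a tolerance (hypothetical rule)."""
    return bool(np.all(np.abs(change(a_o, a_q) - change(b_o, b_q)) <= tol))

# a car and a person that move together between two images -> consistent
car = ((100, 200, 80, 40), (110, 205, 80, 40))
person = ((150, 180, 20, 50), (160, 186, 20, 50))
print(is_consistency_pair(car[0], car[1], person[0], person[1]))  # True
```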
(3.2) Greedy clustering is used to generate the final target-group sets of the different subspaces. Soft voting is adopted to avoid the sensitivity of the transformed-space partition and the redundancy caused by the generation of similar target groups. Meanwhile, to reduce the number of target groups, if the frequency with which a target appears in a target group does not exceed 50%, the target is removed from that group. Through the above operations, the target groups of the different scene subspaces are finally formed. On the basis of the formed target groups, targets of different classes within the same group are combined pairwise to form consistency target pairs.
(3.3) The proposed consistency target pairs, together with the co-occurrence and relative-position relations between single targets, jointly characterize the local context information between targets. First, the relevance between a consistency target pair and a sub-scene is characterized, as shown below:
θ_it = cf_it × isf_i    (3)
where cf_it denotes the frequency with which the i-th consistency target pair appears in the t-th sub-scene, and isf_i denotes the inverse scene frequency of the i-th consistency target pair, expressed as:
isf_i = log(T / T_t + ξ)    (4)
where T denotes the total number of sub-scene types, T_t the number of sub-scene types containing the i-th consistency target pair, and ξ a small constant that prevents isf_i from being 0. After all relevance coefficients θ_it are obtained, they are normalized.
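This relevance coefficient mirrors tf-idf. A sketch, assuming isf_i = log(T/T_t + ξ) as the form of the inverse scene frequency and frequencies normalized by scene size (both assumptions):

```python
import math

def relevance(pair_counts, scene_sizes, xi=1e-6):
    """theta_it = cf_it * isf_i with isf_i = log(T / T_t + xi), then
    normalized over the pairs within each sub-scene.
    pair_counts[i][t]: occurrences of consistency pair i in sub-scene t."""
    T = len(scene_sizes)
    theta = {}
    for i, counts in pair_counts.items():
        T_t = sum(1 for c in counts if c > 0)       # sub-scenes containing pair i
        isf = math.log(T / T_t + xi)
        theta[i] = [(c / scene_sizes[t]) * isf for t, c in enumerate(counts)]
    for t in range(T):                              # per-scene normalization
        z = sum(theta[i][t] for i in theta) or 1.0
        for i in theta:
            theta[i][t] /= z
    return theta

counts = {("car", "person"): [8, 0, 1], ("sofa", "tv"): [0, 5, 0]}
theta = relevance(counts, scene_sizes=[10, 10, 10])
print(theta[("car", "person")][0] > theta[("sofa", "tv")][0])  # True
```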
(3.4) Using the annotation information of the training images, for each sub-scene t a binary tree describing target co-occurrence and a Gaussian tree describing target-position relations are built; together they characterize the prior sub-tree model.
The joint probability of whether all targets appear is expressed in the binary tree as:
p(b|z_t) = p(b_root|z_t) ∏_i p(b_i|b_pa(i), z_t)    (5)
where i denotes a node of the tree, pa(i) the parent node of node i, and b_i ∈ {0,1} whether target i appears in the image. b ≡ {b_i} denotes all target classes, b_root denotes the root node of the sub-tree, and z_t is a discrete variable denoting the t-th sub-scene space.
The position L_i of target i depends on the appearance of the target; the dependence relations between the positions share the binary tree structure of target appearance, expressed as:
p(L|b) = p(L_root|b_root) ∏_i p(L_i|L_pa(i), b_i, b_pa(i))    (6)
where L_root denotes the position of the root node and L_pa(i) the position of the parent node.
The joint distribution of the variables b and L is then expressed as:
p(b, L) = p(b_root|z_t) p(L_root|b_root) ∏_i F_i^1    (7)
where F_i^1 is expressed as:
F_i^1 = p(b_i|b_pa(i)) p(L_i|L_pa(i), b_i, b_pa(i))    (8)
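The binary-tree and Gaussian-tree factorization can be evaluated on a toy two-node tree; the probability tables and unit-variance Gaussians below are hypothetical.

```python
import math

# a toy two-node tree: node 0 ("car") is the root, node 1 ("person") its child
parent = {1: 0}
p_root = {1: 0.6, 0: 0.4}                             # p(b_root | z_t), hypothetical
p_child = {1: {1: 0.7, 0: 0.3}, 0: {1: 0.2, 0: 0.8}}  # p_child[b_pa][b_i]

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def joint(b, L):
    """p(b, L) = p(b_root|z_t) p(L_root|b_root) * prod_i F_i^1."""
    pr = p_root[b[0]] * gauss(L[0], 0.0, 1.0)         # root position prior
    for i, pa in parent.items():
        f = p_child[b[pa]][b[i]]                       # p(b_i | b_pa(i))
        if b[i]:
            f *= gauss(L[i], L[pa], 1.0)               # p(L_i | L_pa(i), b_i, b_pa(i))
        pr *= f
    return pr

# a person close to the car is more probable than one far away
print(joint({0: 1, 1: 1}, {0: 0.0, 1: 0.5}) > joint({0: 1, 1: 1}, {0: 0.0, 1: 5.0}))  # True
```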
(3.5) The detection results of the trained single-target DPM detectors and the Gist global feature are incorporated into the prior model. With the global feature denoted g, the joint distribution is expressed as:
p(b, L, g, W, s) = p(b_root|z_t) p(L_root|b_root) ∏_i F_i^2    (9)
where F_i^2 is expressed as:
F_i^2 = F_i^1 × p(g|b_i) ∏_k p(c_ik|b_i) p(W_ik|c_ik, L_i) p(s_ik|c_ik)    (10)
W_ik denotes the position of the k-th candidate window obtained by the single-target detection of target class i, and s_ik the score of that candidate window; c_ik indicates whether the k-th candidate window of target class i is a correct detection, taking the value 1 if it is and 0 otherwise.
(3.6) Training the sub-tree model mainly comprises learning the tree structure and learning the relevant parameters. When the Chow-Liu algorithm is used to learn the prior tree model, the relevance θ_it between the consistency target pairs and the scene, characterized in (3.3), is used to modify the mutual information S_i of the parent-child node pair of a consistency target pair:
S_i = S_i × (1 + sigm(θ_it))    (11)
Then, the structure learning of the prior sub-tree model is completed according to the maximum-weight spanning tree.
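Chow-Liu structure learning with the mutual information of consistency-pair edges boosted as in formula (11) reduces to a maximum-weight spanning tree; a sketch with hypothetical edge weights, where the boost changes which tree is selected:

```python
import math

def sigm(x):
    return 1.0 / (1.0 + math.exp(-x))

def boosted_weights(mi, theta):
    """S_i <- S_i * (1 + sigm(theta_it)) for edges that form a consistency pair."""
    return {e: w * (1 + sigm(theta[e])) if e in theta else w for e, w in mi.items()}

def max_spanning_tree(n, weights):
    """Prim's algorithm: Chow-Liu keeps the maximum-weight spanning tree."""
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = max(((u, v) for (u, v) in weights
                    if (u in in_tree) != (v in in_tree)), key=lambda e: weights[e])
        edges.append(best)
        in_tree |= set(best)
    return edges

mi = {(0, 1): 0.30, (0, 2): 0.12, (1, 2): 0.20}   # hypothetical mutual information
theta = {(0, 2): 2.0}                             # (0, 2) is a consistency pair here
w = boosted_weights(mi, theta)
print(sorted(max_spanning_tree(3, w)))            # [(0, 1), (0, 2)]
```

Without the boost, the tree would instead contain the edge (1, 2); the consistency-pair edge wins only after its mutual information is amplified.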
For the learning of the model parameters: first, p(b_i|b_pa(i)) in formula (8) is obtained from the co-occurrence statistics of the targets together with the mutual-information modification of the consistency target pairs. The value of p(L_i|L_pa(i), b_i, b_pa(i)) is taken according to the appearance of the parent-child pair, distinguishing three cases (parent and child both appear, only the child appears, the child does not appear), and in each case the value is obtained from the corresponding Gaussian distribution.
In formula (9), p(g|b_i) is estimated from the Gist global feature of each training image, obtained through the following formula:
p(g|b_i) = p(b_i|g) p(g) / p(b_i)
For the global feature g, p(b_i|g) is estimated using the logistic regression method.
The detection results of the single base detectors are integrated. First, the probability of a correct detection, p(c_ik|b_i), is closely related to whether the target appears, in the following form: when the target does not appear, the correct-detection probability is 0; when the target appears, the probability of a correct detection is the ratio of the number of correct detections to the total number of target labels in the training set.
Then, the window-position probability p(W_ik|c_ik, L_i) is a Gaussian distribution depending on the correct detection c_ik and the position L_i of target class i: when the window is a correct detection, W_ik follows a Gaussian distribution, with Λ_i denoting the covariance of the predicted target position; when the window is not a correct detection, W_ik does not depend on L_i and can be expressed as a constant.
Finally, the score probability p(s_ik|c_ik) of the base detector depends on the correct-detection result c_ik and is expressed as:
p(s_ik|c_ik) = p(c_ik|s_ik) p(s_ik) / p(c_ik)
where p(c_ik|s_ik) is estimated using the logistic regression method.
Online matching part:
4) In detection, for an input image j, the Gist global feature is first obtained using the method in 2).
5) Then, according to the Gist feature of the input image, the image is assigned to the scene subspaces obtained in training, yielding a probability distribution over the corresponding scene subspaces. The probability distribution over the sub-scenes is expressed as:
p(z_t|j) = d_jt^{-1} / Σ_{t'} d_{jt'}^{-1}
where d_jt^{-1} denotes the inverse of the distance from input image j to the cluster center of the t-th sub-scene, and Σ_{t'} d_{jt'}^{-1} denotes the sum of the inverses of the distances to all cluster centers. The normalized probability represents the probability that image j belongs to a given sub-scene.
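The sub-scene posterior of step 5), normalized inverse distances to the cluster centers, takes only a few lines (the 2-D cluster centers below are toy values):

```python
import numpy as np

def scene_posterior(g, centers, eps=1e-12):
    """p(z_t | g) = d_t^{-1} / sum_t' d_t'^{-1}, with d_t = ||g - center_t||."""
    d = np.linalg.norm(centers - g, axis=1) + eps  # eps guards exact center hits
    inv = 1.0 / d
    return inv / inv.sum()

centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
p = scene_posterior(np.array([1.0, 0.0]), centers)
print(p.argmax(), round(p.sum(), 6))  # 0 1.0
```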
6) The trained DPM detectors of the different targets are used to obtain the initial detection score and detection-window information of each target in the image;
7) Using the sub-scene probability distribution obtained in 5) and the detection scores and detection-window information of each target obtained in 6), in an iterative manner and jointly with the offline-trained sub-tree prior models, the maximum a posteriori estimate of the probability that each target detection is correct is computed, so as to revise the detection results of the various DPM detectors and obtain the final multi-target detection result. Specifically, this is obtained by iterative optimization of the following formula:
(b̂, L̂) = arg max_{b,L} p(b, L | g, {W_ik}, {s_ik})
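The iterative MAP revision of step 7) can be caricatured as coordinate ascent: each class's presence belief is updated from its detector score plus context from co-occurring classes, and detections are kept or discarded accordingly. The score values and co-occurrence table below are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def revise(scores, cooc, iters=10):
    """Coordinate ascent on the presence beliefs: a detection survives when its
    score, reweighted by the beliefs of co-occurring classes, stays high."""
    b = {i: sigmoid(s) for i, s in scores.items()}   # init from detector scores
    for _ in range(iters):
        for i, s in scores.items():
            context = sum(cooc.get((i, j), 0.0) * b[j] for j in scores if j != i)
            b[i] = sigmoid(s + context)              # context pulls the score up/down
    return {i: int(p > 0.5) for i, p in b.items()}

scores = {"car": 0.2, "road": 2.0, "sofa": -0.4}     # hypothetical detector scores
cooc = {("car", "road"): 1.5, ("road", "car"): 0.5, ("sofa", "road"): -1.0}
print(revise(scores, cooc))
```

A weak "car" detection is rescued by the strongly detected "road", while the implausible "sofa" is suppressed, which is the qualitative behaviour the context model aims for.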
This scheme fuses context information and enriches the representation of the targets; as shown in Figure 4, the multi-target detection method based on context information achieves satisfactory detection results.

Claims (5)

1. A multi-target detection method based on contextual information, characterized by comprising an offline training model and an online matching model,
Offline training to obtain the sub-tree models:
Step 1: First, for the training set, the LabelMe software is used to annotate the image target classes in the training set, obtaining training images with target labels; and a DPM detector is trained for each target in the images;
Step 2: The Gist features of the training-set images are computed to obtain global context information; then an improved spectral clustering method is used to realize scene partitioning;
Step 3: Scenes are represented by a hidden variable; then, for each scene, the co-occurrence and position-distribution information of the targets is obtained from the annotation results of the training images;
Step 4: By computing the mapping distribution in the transformed space of targets appearing in pairs of training images, it is judged whether two targets are consistency targets, forming consistency target pairs;
Step 5: Using the co-occurrence and position-distribution information obtained in Step 3 and the consistency target pairs obtained in Step 4, the tree structure is learned with a weighted Chow-Liu algorithm, and the parameters are then trained, yielding the sub-tree models;
Online matching:
Step 1: In detection, first, the Gist feature of the input image is computed;
Step 2: Then, according to the Gist feature of the input image, the image is assigned to the scene subspaces obtained in training, yielding a probability distribution over the corresponding scene subspaces;
Step 3: Then, the trained DPM detectors of the different targets are used to obtain the detection score and detection-window information of each target in the image;
Step 4: Using the scene probability distribution obtained in Step 2 and the detection scores and detection-window information obtained in Step 3, in an iterative manner and jointly with the offline-trained sub-tree prior models, the maximum a posteriori estimate of the probability that each target detection is correct is computed, so as to revise the detection results of the various DPM detectors and obtain the final multi-target detection result.
2. The multi-target detection method based on contextual information according to claim 1, characterized in that in Step 2 of the offline training of the sub-tree models, the 520-dimensional Gist feature of each image in the training set is obtained by the following steps: first, the image is filtered with a bank of Gabor filters at different scales and orientations, producing a set of filtered images; then each filtered image is partitioned into non-overlapping grid cells of fixed size, and the mean of each grid cell of the partitioned image is computed; finally, the grid means of the image set are concatenated into a global feature, giving the final 520-dimensional Gist feature of the image, expressed as:
G_j^Gist = cat{ mean_(r×l)(I_j ⊗ g_mn) }    (1)
where G_j^Gist denotes the Gist feature of the j-th image, cat denotes feature concatenation, I_j denotes the grayscale image of the j-th image partitioned into an r × l grid, g_mn denotes the Gabor filter at the m-th scale and n-th orientation, ⊗ denotes the convolution of the image with the Gabor filter, and n_c denotes the number of convolution filters, of size m × n; the dimension of G_j^Gist is r × l × n_c.
3. The multi-target detection method based on contextual information according to claim 1, characterized in that in Step 2 of the offline training of the sub-tree models, 6-8 sub-scene classes are obtained with an improved spectral clustering method, by the following concrete steps: first, the Gist feature of each training image is input, and a Random Forest method is used to obtain a similarity matrix representing the pairwise similarity between the images in the training set; then, with this similarity matrix as input, the spectral clustering method is used to cluster the training images, realizing the scene partitioning of the training images.
4. The multi-target detection method based on contextual information according to claim 1, characterized in that Step 3 of the offline training of the sub-tree models incorporates a sub-tree context model of consistency target pairs when training the sub-tree models, by the following concrete steps:
(1) First, according to the consistent distribution, over spatial position, scale, and viewpoint, of two neighbouring targets of different classes observed in two different images of a scene subspace, the consistency target pairs of that scene subspace are obtained;
Each component used to obtain a consistency target pair is represented as follows: (lx(o_ik), ly(o_ik)) denotes the center coordinates of the bounding box of the k-th instance of target class i in image o; its scale sc(o_ik) is the square root of the bounding-box area of the instance, and its viewpoint p(o_ik) is the aspect ratio of the bounding box; similarly, (lx(q_il), ly(q_il)) denotes the center coordinates of the bounding box of the l-th instance of target class i in image q, with scale sc(q_il) and viewpoint p(q_il); the variable r = (Δlx, Δly, Δsc, Δp) represents the corresponding change of the same-class target variables between the two images in the four-dimensional space, where r ∈ R denotes a mutual relation and R denotes the set of same-class target correspondences between the two images of each consistency target pair; Δlx and Δly represent the mutual change of target location, Δsc the mutual change of target scale, and Δp the mutual change of aspect ratio; from the mapping distribution computed by formula (2), it is judged whether the corresponding targets satisfy the consistency distribution; if so, the corresponding targets belong to the same target paradigm, i.e. form a consistency target pair;
(2) Greedy clustering is used to generate the final target-group sets of the different subspaces, and soft voting is adopted to avoid the sensitivity of the transformed-space partition and the redundancy caused by the generation of similar target groups; if the frequency with which a target appears in a target group does not exceed 50%, the target is removed from that group; the target groups of the different scene subspaces are thus finally formed; on the basis of the formed target groups, targets of different classes within the same group are combined pairwise to form consistency target pairs;
(3) The proposed consistency target pairs, together with the co-occurrence and relative-position relations between single targets, jointly characterize the local context information between targets, by the following steps: first, the relevance between a consistency target pair and a sub-scene is characterized:
θ_it = cf_it × isf_i    (3)
where cf_it denotes the frequency with which the i-th consistency target pair appears in the t-th sub-scene, and isf_i denotes the inverse scene frequency of the i-th consistency target pair, expressed as:
isf_i = log(T / T_t + ξ)    (4)
where T denotes the total number of sub-scene types, T_t the number of sub-scene types containing the i-th consistency target pair, and ξ a small constant that prevents isf_i from being 0; after all relevance coefficients θ_it are obtained, they are normalized;
(4) Using the annotation information of the training images, for each sub-scene t a binary tree describing target co-occurrence and a Gaussian tree describing target-position relations are built; together they characterize the prior sub-tree model;
The joint probability of whether all targets appear is expressed in the binary tree as:
p(b|z_t) = p(b_root|z_t) ∏_i p(b_i|b_pa(i), z_t)    (5)
where i denotes a node of the tree, pa(i) the parent node of node i, and b_i ∈ {0,1} whether target i appears in the image; b ≡ {b_i} denotes all target classes; b_root denotes the root node of the sub-tree, and z_t is a discrete variable denoting the t-th sub-scene space;
The position L_i of target i depends on the appearance of the target; the dependence relations between the positions share the binary tree structure of target appearance, expressed as:
p(L|b) = p(L_root|b_root) ∏_i p(L_i|L_pa(i), b_i, b_pa(i))    (6)
where L_root denotes the position of the root node and L_pa(i) the position of the parent node;
The joint distribution of the variables b and L is then expressed as:
p(b, L) = p(b_root|z_t) p(L_root|b_root) ∏_i F_i^1    (7)
where F_i^1 is expressed as:
F_i^1 = p(b_i|b_pa(i)) p(L_i|L_pa(i), b_i, b_pa(i));    (8)
(5) The detection results of the trained single-target DPM detectors and the Gist global feature are incorporated into the prior model; with the global feature denoted g, the joint distribution is expressed as:
p(b, L, g, W, s) = p(b_root|z_t) p(L_root|b_root) ∏_i F_i^2    (9)
where F_i^2 is expressed as:
F_i^2 = F_i^1 × p(g|b_i) ∏_k p(c_ik|b_i) p(W_ik|c_ik, L_i) p(s_ik|c_ik)    (10)
W_ik denotes the position of the k-th candidate window obtained by the single-target detection of target class i, and s_ik the score of that candidate window; c_ik indicates whether the k-th candidate window of target class i is a correct detection, taking the value 1 if it is and 0 otherwise;
(6) Training the sub-tree model mainly comprises learning the tree structure and learning the relevant parameters; when the Chow-Liu algorithm is used to learn the prior tree model, the relevance θ_it between the consistency target pairs and the scene, characterized by formula (3), is used to modify the mutual information S_i of the parent-child node pair of a consistency target pair:
S_i = S_i × (1 + sigm(θ_it))    (11)
Then, the structure learning of the prior sub-tree model is completed according to the maximum-weight spanning tree;
For the learning of the model parameters: first, p(b_i|b_pa(i)) in formula (8) is obtained from the co-occurrence statistics of the targets together with the mutual-information modification of the consistency target pairs; the value of p(L_i|L_pa(i), b_i, b_pa(i)) is taken according to the appearance of the parent-child pair, distinguishing three cases (parent and child both appear, only the child appears, the child does not appear), and in each case the value is obtained from the corresponding Gaussian distribution:
In formula (9), p(g|b_i) is estimated from the Gist global feature of each training image, obtained through the following formula:
p(g|b_i) = p(b_i|g) p(g) / p(b_i)
For the global feature g, p(b_i|g) is estimated using the logistic regression method;
The detection results of the single base detectors are then integrated; first, the probability of a correct detection, p(c_ik|b_i), is closely related to whether the target appears, in the following form: when the target does not appear, the correct-detection probability is 0; when the target appears, the probability of a correct detection is the ratio of the number of correct detections to the total number of target labels in the training set;
Then, the window-position probability p(W_ik|c_ik, L_i) is a Gaussian distribution depending on the correct detection c_ik and the position L_i of target class i: when the window is a correct detection, W_ik follows a Gaussian distribution, with Λ_i denoting the covariance of the predicted target position; when the window is not a correct detection, W_ik does not depend on L_i and can be expressed as a constant;
Finally, the score probability p(s_ik|c_ik) of the base detector depends on the correct-detection result c_ik and is expressed as:
p(s_ik|c_ik) = p(c_ik|s_ik) p(s_ik) / p(c_ik)
where p(c_ik|s_ik) is estimated using the logistic regression method.
5. The multi-target detection method based on contextual information according to claim 1, characterized in that the online matching part comprises:
(1) In detection, for an input image j, the Gist global feature is first obtained using the method of formula (1);
(2) Then, according to the Gist feature of the input image, the image is assigned to the scene subspaces obtained in training, yielding a probability distribution over the corresponding scene subspaces; the probability distribution over the sub-scenes is expressed as:
p(z_t|j) = d_jt^{-1} / Σ_{t'} d_{jt'}^{-1}
where d_jt^{-1} denotes the inverse of the distance from input image j to the cluster center of the t-th sub-scene, and Σ_{t'} d_{jt'}^{-1} denotes the sum of the inverses of the distances to all cluster centers; the normalized probability represents the probability that image j belongs to a given sub-scene;
(3) The trained DPM detectors of the different targets are used to obtain the initial detection score and detection-window information of each target in the image;
(4) Using the sub-scene probability distribution obtained in steps (2) and (3) together with the detection scores and detection-window information of each target, in an iterative manner and jointly with the offline-trained sub-tree prior models, the maximum a posteriori estimate of the probability that each target detection is correct is computed, so as to revise the detection results of the various DPM detectors and obtain the final multi-target detection result; this is obtained by iterative optimization of the following formula:
(b̂, L̂) = arg max_{b,L} p(b, L | g, {W_ik}, {s_ik})
CN201610785155.XA 2016-08-31 2016-08-31 Multi-target detection method based on contextual information Expired - Fee Related CN106446933B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610785155.XA CN106446933B (en) 2016-08-31 2016-08-31 Multi-target detection method based on contextual information


Publications (2)

Publication Number Publication Date
CN106446933A true CN106446933A (en) 2017-02-22
CN106446933B CN106446933B (en) 2019-08-02

Family

ID=58091496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610785155.XA Expired - Fee Related CN106446933B (en) 2016-08-31 2016-08-31 Multi-target detection method based on contextual information

Country Status (1)

Country Link
CN (1) CN106446933B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951574A (en) * 2017-05-03 2017-07-14 牡丹江医学院 A kind of information processing system and method based on computer network
CN108062531A (en) * 2017-12-25 2018-05-22 南京信息工程大学 A kind of video object detection method that convolutional neural networks are returned based on cascade
CN108363992A (en) * 2018-03-15 2018-08-03 南京邮电大学 A kind of fire behavior method for early warning monitoring video image smog based on machine learning
CN109241819A (en) * 2018-07-07 2019-01-18 西安电子科技大学 Based on quickly multiple dimensioned and joint template matching multiple target pedestrian detection method
WO2019096180A1 (en) * 2017-11-14 2019-05-23 深圳码隆科技有限公司 Object recognition method and system, and electronic device
CN109977738A (en) * 2017-12-28 2019-07-05 深圳Tcl新技术有限公司 A kind of video scene segmentation judgment method, intelligent terminal and storage medium
CN110288629A (en) * 2019-06-24 2019-09-27 湖北亿咖通科技有限公司 Target detection automatic marking method and device based on moving Object Detection
CN110334639A (en) * 2019-06-28 2019-10-15 北京精英系统科技有限公司 A kind of device and method for the error detection result filtering analyzing and detecting algorithm
CN111079674A (en) * 2019-12-22 2020-04-28 东北师范大学 Target detection method based on global and local information fusion
CN111080639A (en) * 2019-12-30 2020-04-28 四川希氏异构医疗科技有限公司 Multi-scene digestive tract endoscope image identification method and system based on artificial intelligence
CN111814885A (en) * 2020-07-10 2020-10-23 云从科技集团股份有限公司 Method, system, device and medium for managing image frames
CN112052350A (en) * 2020-08-25 2020-12-08 腾讯科技(深圳)有限公司 Picture retrieval method, device, equipment and computer readable storage medium
CN112395974A (en) * 2020-11-16 2021-02-23 南京工程学院 Target confidence correction method based on dependency relationship between objects
CN112906696A (en) * 2021-05-06 2021-06-04 北京惠朗时代科技有限公司 English image region identification method and device
CN113138924A (en) * 2021-04-23 2021-07-20 扬州大学 Thread security code identification method based on graph learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577832A (en) * 2012-07-30 2014-02-12 华中科技大学 People flow statistical method based on spatio-temporal context
CN104778466A (en) * 2015-04-16 2015-07-15 北京航空航天大学 Detection method combining various context clues for image focus region
CN104933735A (en) * 2015-06-30 2015-09-23 中国电子科技集团公司第二十九研究所 A real time human face tracking method and a system based on spatio-temporal context learning
CN105631895A (en) * 2015-12-18 2016-06-01 重庆大学 Temporal-spatial context video target tracking method combining particle filtering
CN105740891A (en) * 2016-01-27 2016-07-06 北京工业大学 Target detection method based on multilevel characteristic extraction and context model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M. J. CHOI et al.: "A Tree-Based Context Model for Object Recognition", TPAMI *
LI Tao et al.: "A multi-moving-target tracking algorithm based on linear fitting" (in Chinese), Journal of Southwest China Normal University *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951574A (en) * 2017-05-03 2017-07-14 牡丹江医学院 A kind of information processing system and method based on computer network
CN106951574B (en) * 2017-05-03 2019-06-14 牡丹江医学院 A kind of information processing system and method based on computer network
WO2019096180A1 (en) * 2017-11-14 2019-05-23 深圳码隆科技有限公司 Object recognition method and system, and electronic device
CN108062531A (en) * 2017-12-25 2018-05-22 南京信息工程大学 A kind of video object detection method that convolutional neural networks are returned based on cascade
CN108062531B (en) * 2017-12-25 2021-10-19 南京信息工程大学 Video target detection method based on cascade regression convolutional neural network
CN109977738A (en) * 2017-12-28 2019-07-05 深圳Tcl新技术有限公司 A kind of video scene segmentation judgment method, intelligent terminal and storage medium
CN108363992A (en) * 2018-03-15 2018-08-03 南京邮电大学 A kind of fire behavior method for early warning monitoring video image smog based on machine learning
CN108363992B (en) * 2018-03-15 2021-12-14 南京钜力智能制造技术研究院有限公司 Fire early warning method for monitoring video image smoke based on machine learning
CN109241819A (en) * 2018-07-07 2019-01-18 西安电子科技大学 Based on quickly multiple dimensioned and joint template matching multiple target pedestrian detection method
CN110288629A (en) * 2019-06-24 2019-09-27 湖北亿咖通科技有限公司 Target detection automatic marking method and device based on moving Object Detection
CN110334639A (en) * 2019-06-28 2019-10-15 北京精英系统科技有限公司 A kind of device and method for the error detection result filtering analyzing and detecting algorithm
CN110334639B (en) * 2019-06-28 2021-08-10 北京精英系统科技有限公司 Device and method for filtering error detection result of image analysis detection algorithm
CN111079674A (en) * 2019-12-22 2020-04-28 东北师范大学 Target detection method based on global and local information fusion
CN111079674B (en) * 2019-12-22 2022-04-26 东北师范大学 Target detection method based on global and local information fusion
CN111080639A (en) * 2019-12-30 2020-04-28 四川希氏异构医疗科技有限公司 Multi-scene digestive tract endoscope image identification method and system based on artificial intelligence
CN111814885A (en) * 2020-07-10 2020-10-23 云从科技集团股份有限公司 Method, system, device and medium for managing image frames
CN112052350A (en) * 2020-08-25 2020-12-08 腾讯科技(深圳)有限公司 Picture retrieval method, device, equipment and computer readable storage medium
CN112052350B (en) * 2020-08-25 2024-03-01 腾讯科技(深圳)有限公司 Picture retrieval method, device, equipment and computer readable storage medium
CN112395974B (en) * 2020-11-16 2021-09-07 南京工程学院 Target confidence correction method based on dependency relationship between objects
CN112395974A (en) * 2020-11-16 2021-02-23 南京工程学院 Target confidence correction method based on dependency relationship between objects
CN113138924A (en) * 2021-04-23 2021-07-20 扬州大学 Thread security code identification method based on graph learning
CN113138924B (en) * 2021-04-23 2023-10-31 扬州大学 Thread safety code identification method based on graph learning
CN112906696A (en) * 2021-05-06 2021-06-04 北京惠朗时代科技有限公司 English image region identification method and device

Also Published As

Publication number Publication date
CN106446933B (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN106446933A (en) Multi-target detection method based on context information
Zhu et al. Soft proposal networks for weakly supervised object localization
CN112966684B (en) Cooperative learning character recognition method under attention mechanism
Zhang et al. Integrating bottom-up classification and top-down feedback for improving urban land-cover and functional-zone mapping
Galleguillos et al. Context based object categorization: A critical survey
CN110717534B (en) Target classification and positioning method based on network supervision
CN108830188A (en) Vehicle checking method based on deep learning
CN106408030B (en) SAR image classification method based on middle layer semantic attribute and convolutional neural networks
CN109919177B (en) Feature selection method based on hierarchical deep network
Zhang et al. Unsupervised difference representation learning for detecting multiple types of changes in multitemporal remote sensing images
CN107256017B (en) Route planning method and system
CN108537102A (en) High Resolution SAR image classification method based on sparse features and condition random field
Shahab et al. How salient is scene text?
Luo et al. Global salient information maximization for saliency detection
CN112132014B (en) Target re-identification method and system based on non-supervised pyramid similarity learning
CN113761259A (en) Image processing method and device and computer equipment
CN106683102A (en) SAR image segmentation method based on ridgelet filters and convolution structure model
CN104050460B (en) The pedestrian detection method of multiple features fusion
Bhimavarapu et al. Analysis and characterization of plant diseases using transfer learning
Tan et al. Rapid fine-grained classification of butterflies based on FCM-KM and mask R-CNN fusion
Liu et al. A semi-supervised high-level feature selection framework for road centerline extraction
CN116977960A (en) Rice seedling row detection method based on example segmentation
Vaidhehi et al. RETRACTED ARTICLE: An unique model for weed and paddy detection using regional convolutional neural networks
Azis et al. Unveiling Algorithm Classification Excellence: Exploring Calendula and Coreopsis Flower Datasets with Varied Segmentation Techniques
CN107423771B (en) Two-time-phase remote sensing image change detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190802

Termination date: 20210831