CN105678797A - Image segmentation method based on visual saliency model - Google Patents

Image segmentation method based on visual saliency model

Info

Publication number
CN105678797A
CN105678797A CN201610123858.6A
Authority
CN
China
Prior art keywords
pixel
super
image
segmentation
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610123858.6A
Other languages
Chinese (zh)
Inventor
胡海峰
曹向前
潘瑜
张伟
肖翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
SYSU CMU Shunde International Joint Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SYSU CMU Shunde International Joint Research Institute filed Critical SYSU CMU Shunde International Joint Research Institute
Priority to CN201610123858.6A priority Critical patent/CN105678797A/en
Publication of CN105678797A publication Critical patent/CN105678797A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an image segmentation method based on a visual saliency model. The method includes the steps of: firstly performing image background detection and obtaining an image boundary connectivity value; then obtaining a saliency image of an image by using a Superpixel Contrast (SC) method based on Hexagonal Simple Linear Iterative Clustering (HSLIC); and finally automatically performing image segmentation by using the obtained image boundary connectivity value and a saliency value of the saliency image as the region item input of the image segmentation method, and outputting an image saliency region segmentation result.

Description

Image segmentation method based on a visual saliency model

Technical field

The present invention relates to the field of image processing, and more specifically to an image segmentation method based on a visual saliency model.
Background art

Salient-region detection is a popular research direction in the image processing field. The salient region of an image is the part that most attracts human visual attention, and it usually carries the most informative content of the image, so its range of application is very wide: it can be used in fields such as target recognition, image segmentation, adaptive compression and image retrieval, and an effective salient-region detection method is very helpful to the development of all of them. Many salient-region detection methods already exist, falling broadly into two directions: methods based on local contrast and methods based on global contrast. In local-contrast methods, the saliency value of each pixel is determined by its contrast with the pixels around it, while in global-contrast methods it is determined by its contrast with all pixels of the whole image. One effective saliency detection method is saliency optimization based on robust background detection: it defines a boundary connectivity value on the image that can effectively distinguish background regions from foreground regions, and saliency optimization on this basis yields a good saliency map. In addition, a commonly used image segmentation method is the graph cut method. It adopts the max-flow/min-cut idea from graph theory: the source node is S, the sink node is T, the region term is converted into a weight from S or T to each pixel, and the edge term is converted into weights between pixels. By solving the max-flow/min-cut problem, the image is divided into foreground and background regions.

However, the saliency optimization method based on robust background detection does not preserve the integrity and boundaries of salient regions well, and graph cut methods mostly require manual input from the user, with foreground and background determined preliminarily by subjective human judgment and prior knowledge; such methods are therefore inflexible and easily affected by the user's subjective judgment.
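The max-flow/min-cut partition that the graph cut method relies on can be sketched in a few lines of stdlib Python. The four-node graph, its capacities, and the node numbering below are illustrative assumptions, not the construction used by the invention:

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Max-flow via BFS augmenting paths (Edmonds-Karp); by max-flow/min-cut
    duality, the source-reachable set of the final residual graph is the
    source (foreground) side of the minimum cut."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # bottleneck residual capacity along the found path
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    # nodes still reachable from s in the residual graph = source side of the cut
    seen = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in seen and capacity[u][v] - flow[u][v] > 0:
                seen.add(v)
                q.append(v)
    return total, seen

# Toy graph: node 0 = source S, node 3 = sink T, nodes 1 and 2 = pixels.
# S->p and p->T capacities play the role of the region term, the 1<->2
# capacities the role of the edge term (all values illustrative).
C = [[0, 3, 1, 0],
     [0, 0, 1, 1],
     [0, 1, 0, 3],
     [0, 0, 0, 0]]
max_flow, source_side = edmonds_karp(C, 0, 3)
foreground = source_side - {0}  # pixels cut to the source side
```

With these capacities the cut assigns pixel 1 to the foreground (source) side and pixel 2 to the background (sink) side, which is exactly the partition the graph cut method outputs.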
Summary of the invention

The present invention provides an image segmentation method based on a visual saliency model. The method uses the boundary connectivity values of the image and the saliency values of its saliency map as the region-term input of the graph cut method, performs image segmentation, and finally outputs the salient-region segmentation result of the image.

In order to achieve the above technical effect, the technical scheme of the present invention is as follows:
An image segmentation method based on a visual saliency model comprises the following steps:

S1: perform superpixel segmentation on image A, obtain the geodesic distances of the superpixels of image A and their spanning areas, and obtain the boundary lengths and boundary connectivity values;

S2: perform superpixel segmentation on image A with hexagonal simple linear iterative clustering (HSLIC), and run global saliency detection on the segmented image with the superpixel contrast (SC) method to obtain the saliency values of the saliency map of image A;

S3: use the boundary connectivity values obtained in S1 and the saliency values obtained in S2 as the region term of the graph cut segmentation, perform image segmentation, and output the salient-region segmentation result of the image.
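As a rough illustration of the superpixel-contrast idea behind step S2 (the patent's exact SC/HSLIC formulation is not reproduced here), a global-contrast saliency over superpixel mean colours might look like the following; the Gaussian spatial weighting and the value of sigma_s are assumptions of this sketch:

```python
import numpy as np

def superpixel_contrast_saliency(lab_means, centers, sigma_s=0.25):
    """Global superpixel-contrast saliency (sketch): each superpixel's
    saliency is its Lab colour distance to all other superpixels, weighted
    so that spatially closer superpixels contribute more. The weighting
    scheme is illustrative, not the patent's exact SC method."""
    lab = np.asarray(lab_means, dtype=float)   # (N, 3) mean Lab colours
    pos = np.asarray(centers, dtype=float)     # (N, 2) centres in [0,1]^2
    d_color = np.linalg.norm(lab[:, None, :] - lab[None, :, :], axis=2)
    d_space = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    w = np.exp(-d_space**2 / (2 * sigma_s**2))
    sal = (w * d_color).sum(axis=1)
    # normalise to [0, 1] as the saliency values of the saliency map
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# Tiny example: one red-ish superpixel among green-ish ones is most salient
lab_means = [[50, 60, 40], [50, -40, 30], [52, -42, 28], [48, -38, 32]]
centers = [[0.5, 0.5], [0.2, 0.2], [0.8, 0.2], [0.5, 0.8]]
sal = superpixel_contrast_saliency(lab_means, centers)
```

The odd-coloured superpixel (index 0) receives the highest saliency value, matching the intuition that global contrast highlights regions that differ from the rest of the image.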
Further, the detailed process of step S1 is as follows:

S11: after performing superpixel segmentation on image A, calculate the geodesic distance of each superpixel;

S12: use the obtained geodesic distances of the superpixels to calculate the spanning area of each superpixel;

S13: use the obtained spanning areas of the superpixels to calculate the boundary length of each superpixel;

S14: use the obtained spanning areas and boundary lengths of the superpixels to calculate the boundary connectivity value of each superpixel.
Further, the detailed process of the superpixel segmentation of image A in step S1 is:

perform simple linear iterative clustering (SLIC) segmentation on image A, and record the label of each superpixel, the superpixel class each pixel belongs to, the superpixel adjacency matrix, and the superpixels on the image boundary, for later use.
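The bookkeeping this step describes — the set of labels, the superpixel adjacency matrix, and the border superpixels — can be derived from any superpixel label map; a minimal sketch, where the 2x2-block label map is a toy stand-in for real SLIC output:

```python
import numpy as np

def superpixel_bookkeeping(labels):
    """From a superpixel label map, record what the segmentation step keeps
    for later use: the adjacency matrix of superpixels and the set of
    superpixels that touch the image boundary."""
    labels = np.asarray(labels)
    n = labels.max() + 1
    adj = np.zeros((n, n), dtype=bool)
    # 4-neighbour pixel pairs with different labels mark adjacent superpixels
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        adj[a[diff], b[diff]] = True
        adj[b[diff], a[diff]] = True
    border = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    return adj, set(border.tolist())

# 2x2 blocks on a 4x4 image: labels 0, 1 on top, 2, 3 on the bottom
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
adj, boundary = superpixel_bookkeeping(labels)
```

Here 0-1, 0-2, 1-3 and 2-3 come out adjacent (they share a pixel edge) while 0-3 does not (they only touch diagonally), and all four superpixels touch the image border.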
Further, the detailed process of step S11 is as follows:

S111: perform colour space conversion on the segmented image, converting it from RGB space to Lab space;

S112: according to the superpixel adjacency matrix, calculate the Euclidean distance in Lab space of every pair of adjacent superpixels (p_i, p_{i+1}):

d_app(p_i, p_{i+1}) = sqrt((l_i - l_{i+1})^2 + (a_i - a_{i+1})^2 + (b_i - b_{i+1})^2)

where i ranges from 1 to N-1, N is the number of superpixels of the image, p_i denotes the i-th superpixel and p_{i+1} the (i+1)-th, and l_i, a_i, b_i and l_{i+1}, a_{i+1}, b_{i+1} are the three Lab colour space components of the i-th and (i+1)-th superpixels respectively;

S113: the geodesic distance d_geo(p_i, p_j) of any two superpixels is the distance from superpixel p_i to superpixel p_j along a shortest path:

d_geo(p_i, p_j) = min over paths p_1 = p_i, p_2, ..., p_n = p_j of sum_{k=1}^{n-1} d_app(p_k, p_{k+1})

where p_1, p_2, ..., p_n are superpixels of the segmented image, i and j range from 1 to N, k ranges from 1 to n-1, n is the number of superpixels passed on the path from p_i to p_j, and min takes the minimum over all such paths; when i = j, d_geo(p_i, p_j) = 0, i.e. the geodesic distance of a superpixel to itself is 0.
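The geodesic distance of S113 is a shortest path through the superpixel adjacency graph with the Lab distances d_app as edge costs; a small Floyd-Warshall sketch (fine for the few hundred superpixels SLIC typically produces):

```python
import numpy as np

def geodesic_distances(d_app, adj):
    """All-pairs geodesic distance between superpixels: shortest path
    through the adjacency graph, where each hop between adjacent
    superpixels costs their Lab-space distance d_app."""
    n = len(adj)
    d = np.full((n, n), np.inf)
    np.fill_diagonal(d, 0.0)      # d_geo(p_i, p_i) = 0
    d[adj] = d_app[adj]           # direct hops only between neighbours
    for k in range(n):            # Floyd-Warshall: relax through node k
        d = np.minimum(d, d[:, k:k+1] + d[k:k+1, :])
    return d

# Chain 0-1-2: no direct 0-2 edge, so the geodesic distance from 0 to 2
# is d_app(0,1) + d_app(1,2), not the (larger) direct Lab distance
adj = np.array([[False, True, False],
                [True, False, True],
                [False, True, False]])
d_app = np.array([[0.0, 2.0, 9.0],
                  [2.0, 0.0, 3.0],
                  [9.0, 3.0, 0.0]])
d_geo = geodesic_distances(d_app, adj)
```

The resulting matrix is symmetric with a zero diagonal, matching the definition above; d_geo(0, 2) comes out as 2 + 3 = 5.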
Further, the detailed process of step S12 is as follows:

The spanning area of superpixel p_i describes a soft region of the area to which p_i belongs: it accumulates the contribution of every other superpixel p_j to the region of p_i. The spanning area Area(p_i) of superpixel p_i is:

Area(p_i) = sum_{j=1}^{N} exp(-d_geo^2(p_i, p_j) / (2 * sigma_clr^2)) = sum_{j=1}^{N} S(p_i, p_j)

where exp is the exponential function, i and j range from 1 to N, N is the number of superpixels of the image, sigma_clr is a parameter that adjusts the size of the influence of superpixel p_j on the region of p_i, sigma_clr = 10, and S(p_i, p_j) denotes the influence of p_j on the region of p_i: the smaller the geodesic distance between p_i and p_j, the bigger the contribution of p_j to the area of p_i.
Further, the detailed process of step S13 is as follows:

The boundary length of superpixel p_i describes the contribution Len_bnd(p_i) of the superpixels on the image boundary to the region of p_i, and is defined as:

Len_bnd(p_i) = sum_{j=1}^{N} S(p_i, p_j) * delta(p_j in Bnd)

where Bnd is the set of superpixels on the image boundary; delta(p_j in Bnd) is 1 for superpixels on the image boundary and 0 otherwise.
Further, the detailed process of step S14 is as follows:

The boundary connectivity value of superpixel p_i describes how likely p_i is to belong to the boundary (background) of the image; it is a function of the superpixel's boundary length and spanning area:

BndCon(p_i) = Len_bnd(p_i) / Area(p_i).
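Steps S12 to S14 chain together directly once the geodesic-distance matrix is available; a sketch with sigma_clr = 10 as specified above:

```python
import numpy as np

def boundary_connectivity(d_geo, boundary_mask, sigma_clr=10.0):
    """Spanning area, boundary length and boundary connectivity of every
    superpixel, computed as in S12-S14 from a precomputed geodesic-distance
    matrix and a boolean mask of which superpixels lie on the image border."""
    S = np.exp(-d_geo**2 / (2 * sigma_clr**2))  # S(p_i, p_j)
    area = S.sum(axis=1)                        # Area(p_i)
    len_bnd = S[:, boundary_mask].sum(axis=1)   # Len_bnd(p_i)
    return len_bnd / area                       # BndCon(p_i)

# Three superpixels: 0 is geodesically far from everything (an isolated
# foreground blob), 1 and 2 are close together; only 2 lies on the border
d_geo = np.array([[0.0, 50.0, 50.0],
                  [50.0, 0.0, 1.0],
                  [50.0, 1.0, 0.0]])
on_border = np.array([False, False, True])
bnd_con = boundary_connectivity(d_geo, on_border)
```

The isolated interior superpixel gets a boundary connectivity near zero (likely foreground), while the border superpixel and its close neighbour score high (likely background), which is exactly the separation the region term exploits.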
Further, the detailed process of step S3 is as follows:

Following the idea of graph theory, each superpixel is regarded as a node of a graph; the source node is S, the sink node is T, the region term is converted into a weight from S or T to each superpixel, and the edge term is converted into weights between superpixels. By solving the max-flow/min-cut problem, the image is divided into foreground and background regions. Keeping the edge term unchanged, the boundary connectivity values obtained in step S1 and the saliency values of the saliency map obtained in step S2 are used as the weight input of the region term; image segmentation is then carried out automatically and the salient-region segmentation result of the image is obtained,

where the weight of the region term is:

weight(p_i) = w * BndCon(p_i) + (1 - w) * exp(-S^2(p_i) / (2 * sigma^2))

where w and sigma are two tuning parameters, w, sigma ∈ [0.3, 0.6], S(p_i) is the saliency value of superpixel p_i obtained in step S2, and BndCon(p_i) and S(p_i) are both normalised to [0, 1].
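The region-term weight is then a direct translation of the formula; w and sigma below are sample values from the stated [0.3, 0.6] range:

```python
import numpy as np

def region_term_weight(bnd_con, saliency, w=0.5, sigma=0.5):
    """Region-term weight of step S3: the background cue (boundary
    connectivity) mixed with the foreground cue (saliency). Both inputs
    are assumed already normalised to [0, 1]; a high weight marks a
    background-like superpixel, a low weight a salient one."""
    bnd_con = np.asarray(bnd_con, dtype=float)
    saliency = np.asarray(saliency, dtype=float)
    return w * bnd_con + (1 - w) * np.exp(-saliency**2 / (2 * sigma**2))

# A background-ish superpixel (high BndCon, low saliency) gets a large
# weight; a salient interior superpixel gets a small one
weights = region_term_weight([0.9, 0.05], [0.1, 0.95])
```

These weights feed the S/T terminal capacities of the graph cut, so the automatic segmentation needs no manual foreground/background scribbles from the user.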
Compared with the prior art, the beneficial effect of the technical solution of the present invention is:

The present invention first performs background detection on the image to obtain its boundary connectivity values, then uses the superpixel contrast (SC, Superpixel Contrast) method based on hexagonal simple linear iterative clustering (HSLIC, Hexagonal Simple Linear Iterative Clustering) to obtain the saliency map of the image, and finally uses the obtained boundary connectivity values and the saliency values of the saliency map as the region-term input of the graph cut method, performs image segmentation automatically, and outputs the salient-region segmentation result of the image.
Brief description of the drawings

Fig. 1 is the flowchart of the present invention.

Detailed description of the embodiments

The accompanying drawing is for exemplary illustration only and shall not be construed as limiting this patent;

in order to better illustrate the present embodiment, some parts of the drawing may be omitted, enlarged or reduced, and do not represent the size of the actual product;

it will be understood by those skilled in the art that some known structures in the drawing, and their descriptions, may be omitted.

The technical scheme of the present invention is described further below with reference to the drawings and embodiments.
Embodiment 1
An image segmentation method based on a visual saliency model comprises the following steps:

S1: perform superpixel segmentation on image A, obtain the geodesic distances of the superpixels of image A and their spanning areas, and obtain the boundary lengths and boundary connectivity values;

S2: perform superpixel segmentation on image A with hexagonal simple linear iterative clustering (HSLIC), and run global saliency detection on the segmented image with the superpixel contrast (SC) method to obtain the saliency values of the saliency map of image A;

S3: use the boundary connectivity values obtained in S1 and the saliency values obtained in S2 as the region term of the graph cut segmentation, perform image segmentation, and output the salient-region segmentation result of the image.
Further, the detailed process of step S1 is as follows:

S11: after performing superpixel segmentation on image A, calculate the geodesic distance of each superpixel;

S12: use the obtained geodesic distances of the superpixels to calculate the spanning area of each superpixel;

S13: use the obtained spanning areas of the superpixels to calculate the boundary length of each superpixel;

S14: use the obtained spanning areas and boundary lengths of the superpixels to calculate the boundary connectivity value of each superpixel.
The detailed process of the superpixel segmentation of image A in step S1 is:

perform simple linear iterative clustering (SLIC) segmentation on image A, and record the label of each superpixel, the superpixel class each pixel belongs to, the superpixel adjacency matrix, and the superpixels on the image boundary, for later use.
The detailed process of step S11 is as follows:

S111: perform colour space conversion on the segmented image, converting it from RGB space to Lab space;

S112: according to the superpixel adjacency matrix, calculate the Euclidean distance in Lab space of every pair of adjacent superpixels (p_i, p_{i+1}):

d_app(p_i, p_{i+1}) = sqrt((l_i - l_{i+1})^2 + (a_i - a_{i+1})^2 + (b_i - b_{i+1})^2)

where i ranges from 1 to N-1, N is the number of superpixels of the image, p_i denotes the i-th superpixel and p_{i+1} the (i+1)-th, and l_i, a_i, b_i and l_{i+1}, a_{i+1}, b_{i+1} are the three Lab colour space components of the i-th and (i+1)-th superpixels respectively;

S113: the geodesic distance d_geo(p_i, p_j) of any two superpixels is the distance from superpixel p_i to superpixel p_j along a shortest path:

d_geo(p_i, p_j) = min over paths p_1 = p_i, p_2, ..., p_n = p_j of sum_{k=1}^{n-1} d_app(p_k, p_{k+1})

where p_1, p_2, ..., p_n are superpixels of the segmented image, i and j range from 1 to N, k ranges from 1 to n-1, n is the number of superpixels passed on the path from p_i to p_j, and min takes the minimum over all such paths; when i = j, d_geo(p_i, p_j) = 0, i.e. the geodesic distance of a superpixel to itself is 0.
The detailed process of step S12 is as follows:

The spanning area of superpixel p_i describes a soft region of the area to which p_i belongs: it accumulates the contribution of every other superpixel p_j to the region of p_i. The spanning area Area(p_i) of superpixel p_i is:

Area(p_i) = sum_{j=1}^{N} exp(-d_geo^2(p_i, p_j) / (2 * sigma_clr^2)) = sum_{j=1}^{N} S(p_i, p_j)

where exp is the exponential function, i and j range from 1 to N, N is the number of superpixels of the image, sigma_clr is a parameter that adjusts the size of the influence of superpixel p_j on the region of p_i, sigma_clr = 10, and S(p_i, p_j) denotes the influence of p_j on the region of p_i: the smaller the geodesic distance between p_i and p_j, the bigger the contribution of p_j to the area of p_i.
The detailed process of step S13 is as follows:

The boundary length of superpixel p_i describes the contribution Len_bnd(p_i) of the superpixels on the image boundary to the region of p_i, and is defined as:

Len_bnd(p_i) = sum_{j=1}^{N} S(p_i, p_j) * delta(p_j in Bnd)

where Bnd is the set of superpixels on the image boundary; delta(p_j in Bnd) is 1 for superpixels on the image boundary and 0 otherwise.
The detailed process of step S14 is as follows:

The boundary connectivity value of superpixel p_i describes how likely p_i is to belong to the boundary (background) of the image; it is a function of the superpixel's boundary length and spanning area:

BndCon(p_i) = Len_bnd(p_i) / Area(p_i).
The detailed process of step S3 is as follows:

Following the idea of graph theory, each superpixel is regarded as a node of a graph; the source node is S, the sink node is T, the region term is converted into a weight from S or T to each superpixel, and the edge term is converted into weights between superpixels. By solving the max-flow/min-cut problem, the image is divided into foreground and background regions. Keeping the edge term unchanged, the boundary connectivity values obtained in step S1 and the saliency values of the saliency map obtained in step S2 are used as the weight input of the region term; image segmentation is then carried out automatically and the salient-region segmentation result of the image is obtained,

where the weight of the region term is:

weight(p_i) = w * BndCon(p_i) + (1 - w) * exp(-S^2(p_i) / (2 * sigma^2))

where w and sigma are two tuning parameters, w, sigma ∈ [0.3, 0.6], S(p_i) is the saliency value of superpixel p_i obtained in step S2, and BndCon(p_i) and S(p_i) are both normalised to [0, 1].
The same or similar reference numerals correspond to the same or similar parts;

the positional relations described in the drawing are for exemplary illustration only and shall not be construed as limiting this patent.

Obviously, the above embodiment of the present invention is merely an example given to illustrate the present invention clearly, and is not a limitation on the embodiments of the present invention. For those of ordinary skill in the art, other changes in different forms can also be made on the basis of the above description. It is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the claims of the present invention.

Claims (8)

1. An image segmentation method based on a visual saliency model, characterised in that it comprises the following steps:
S1: perform superpixel segmentation on image A, obtain the geodesic distances of the superpixels of image A and their spanning areas, and obtain the boundary lengths and boundary connectivity values;
S2: perform superpixel segmentation on image A with hexagonal simple linear iterative clustering (HSLIC), and run global saliency detection on the segmented image with the superpixel contrast (SC) method to obtain the saliency values of the saliency map of image A;
S3: use the boundary connectivity values obtained in S1 and the saliency values obtained in S2 as the region term of the graph cut segmentation, perform image segmentation, and output the salient-region segmentation result of the image.
2. The image segmentation method based on a visual saliency model according to claim 1, characterised in that the detailed process of step S1 is as follows:
S11: after performing superpixel segmentation on image A, calculate the geodesic distance of each superpixel;
S12: use the obtained geodesic distances of the superpixels to calculate the spanning area of each superpixel;
S13: use the obtained spanning areas of the superpixels to calculate the boundary length of each superpixel;
S14: use the obtained spanning areas and boundary lengths of the superpixels to calculate the boundary connectivity value of each superpixel.
3. The image segmentation method based on a visual saliency model according to claim 2, characterised in that the detailed process of the superpixel segmentation of image A in step S1 is:
perform simple linear iterative clustering (SLIC) segmentation on image A, and record the label of each superpixel, the superpixel class each pixel belongs to, the superpixel adjacency matrix, and the superpixels on the image boundary, for later use.
4. The image segmentation method based on a visual saliency model according to claim 3, characterised in that the detailed process of step S11 is as follows:
S111: perform colour space conversion on the segmented image, converting it from RGB space to Lab space;
S112: according to the superpixel adjacency matrix, calculate the Euclidean distance in Lab space of every pair of adjacent superpixels (p_i, p_{i+1}):
d_app(p_i, p_{i+1}) = sqrt((l_i - l_{i+1})^2 + (a_i - a_{i+1})^2 + (b_i - b_{i+1})^2)
where i ranges from 1 to N-1, N is the number of superpixels of the image, p_i denotes the i-th superpixel and p_{i+1} the (i+1)-th, and l_i, a_i, b_i and l_{i+1}, a_{i+1}, b_{i+1} are the three Lab colour space components of the i-th and (i+1)-th superpixels respectively;
S113: the geodesic distance d_geo(p_i, p_j) of any two superpixels is the distance from superpixel p_i to superpixel p_j along a shortest path:
d_geo(p_i, p_j) = min over paths p_1 = p_i, p_2, ..., p_n = p_j of sum_{k=1}^{n-1} d_app(p_k, p_{k+1})
where p_1, p_2, ..., p_n are superpixels of the segmented image, i and j range from 1 to N, k ranges from 1 to n-1, n is the number of superpixels passed on the path from p_i to p_j, and min takes the minimum over all such paths; when i = j, d_geo(p_i, p_j) = 0, i.e. the geodesic distance of a superpixel to itself is 0.
5. The image segmentation method based on a visual saliency model according to claim 4, characterised in that the detailed process of step S12 is as follows:
the spanning area of superpixel p_i describes a soft region of the area to which p_i belongs: it accumulates the contribution of every other superpixel p_j to the region of p_i. The spanning area Area(p_i) of superpixel p_i is:
Area(p_i) = sum_{j=1}^{N} exp(-d_geo^2(p_i, p_j) / (2 * sigma_clr^2)) = sum_{j=1}^{N} S(p_i, p_j)
where exp is the exponential function, i and j range from 1 to N, N is the number of superpixels of the image, sigma_clr is a parameter that adjusts the size of the influence of superpixel p_j on the region of p_i, sigma_clr = 10, and S(p_i, p_j) denotes the influence of p_j on the region of p_i: the smaller the geodesic distance between p_i and p_j, the bigger the contribution of p_j to the area of p_i.
6. The image segmentation method based on a visual saliency model according to claim 5, characterised in that the detailed process of step S13 is as follows:
the boundary length of superpixel p_i describes the contribution Len_bnd(p_i) of the superpixels on the image boundary to the region of p_i, and is defined as:
Len_bnd(p_i) = sum_{j=1}^{N} S(p_i, p_j) * delta(p_j in Bnd)
where Bnd is the set of superpixels on the image boundary; delta(p_j in Bnd) is 1 for superpixels on the image boundary and 0 otherwise.
7. The image segmentation method based on a visual saliency model according to claim 6, characterised in that the detailed process of step S14 is as follows:
the boundary connectivity value of superpixel p_i describes how likely p_i is to belong to the boundary (background) of the image; it is a function of the superpixel's boundary length and spanning area:
BndCon(p_i) = Len_bnd(p_i) / Area(p_i).
8. The image segmentation method based on a visual saliency model according to claim 7, characterised in that the detailed process of step S3 is as follows:
following the idea of graph theory, each superpixel is regarded as a node of a graph; the source node is S, the sink node is T, the region term is converted into a weight from S or T to each superpixel, and the edge term is converted into weights between superpixels. By solving the max-flow/min-cut problem, the image is divided into foreground and background regions. Keeping the edge term unchanged, the boundary connectivity values obtained in step S1 and the saliency values of the saliency map obtained in step S2 are used as the weight input of the region term; image segmentation is then carried out automatically and the salient-region segmentation result of the image is obtained,
where the weight of the region term is:
weight(p_i) = w * BndCon(p_i) + (1 - w) * exp(-S^2(p_i) / (2 * sigma^2))
where w and sigma are two tuning parameters, w, sigma ∈ [0.3, 0.6], S(p_i) is the saliency value of superpixel p_i obtained in step S2, and BndCon(p_i) and S(p_i) are both normalised to [0, 1].
CN201610123858.6A 2016-03-04 2016-03-04 Image segmentation method based on visual saliency model Pending CN105678797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610123858.6A CN105678797A (en) 2016-03-04 2016-03-04 Image segmentation method based on visual saliency model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610123858.6A CN105678797A (en) 2016-03-04 2016-03-04 Image segmentation method based on visual saliency model

Publications (1)

Publication Number Publication Date
CN105678797A true CN105678797A (en) 2016-06-15

Family

ID=56307824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610123858.6A Pending CN105678797A (en) 2016-03-04 2016-03-04 Image segmentation method based on visual saliency model

Country Status (1)

Country Link
CN (1) CN105678797A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408529A (en) * 2016-08-31 2017-02-15 浙江宇视科技有限公司 Shadow removal method and apparatus
CN106919950A (en) * 2017-01-22 2017-07-04 山东大学 Probability density weights the brain MR image segmentation of geodesic distance
CN106919950B (en) * 2017-01-22 2019-10-25 山东大学 The brain MR image segmentation of probability density weighting geodesic distance
CN108364300A (en) * 2018-03-15 2018-08-03 山东财经大学 Vegetables leaf portion disease geo-radar image dividing method, system and computer readable storage medium
CN108717539A (en) * 2018-06-11 2018-10-30 北京航空航天大学 A kind of small size Ship Detection
CN109389601A (en) * 2018-10-19 2019-02-26 山东大学 Color image superpixel segmentation method based on similitude between pixel
CN109389601B (en) * 2018-10-19 2019-07-16 山东大学 Color image superpixel segmentation method based on similitude between pixel

Similar Documents

Publication Publication Date Title
CN105678797A (en) Image segmentation method based on visual saliency model
CN104809729A (en) Robust automatic image salient region segmenting method
Dornaika et al. Building detection from orthophotos using a machine learning approach: An empirical study on image segmentation and descriptors
CN104881681B (en) Image sequence type labeling based on mixing graph model
Benabbas et al. Motion pattern extraction and event detection for automatic visual surveillance
CN110059581A (en) People counting method based on depth information of scene
CN108803617A (en) Trajectory predictions method and device
CN106446914A (en) Road detection based on superpixels and convolution neural network
CN105005760B (en) A kind of recognition methods again of the pedestrian based on Finite mixture model
CN107862702B (en) Significance detection method combining boundary connectivity and local contrast
CN107369158A (en) The estimation of indoor scene layout and target area extracting method based on RGB D images
CN109154938B (en) Classifying entities in a digital graph using discrete non-trace location data
CN104517317A (en) Three-dimensional reconstruction method of vehicle-borne infrared images
Couprie et al. Causal graph-based video segmentation
CN113052184B (en) Target detection method based on two-stage local feature alignment
Fernandez et al. A comparative analysis of decision trees based classifiers for road detection in urban environments
Dou et al. Moving object detection based on improved VIBE and graph cut optimization
Kumar et al. A hybrid cluster technique for improving the efficiency of colour image segmentation
CN104966091A (en) Strip mine road extraction method based on unmanned plane remote sensing images
Rajeswari et al. Automatic road extraction using high resolution satellite images based on level set and mean shift methods
Stumper et al. Offline object extraction from dynamic occupancy grid map sequences
Ouzounis et al. Interactive collection of training samples from the max-tree structure
Dornaika et al. A comparative study of image segmentation algorithms and descriptors for building detection
KM et al. Optical flow based anomaly detection in traffic scenes
Imani et al. Spectral-spatial classification of high dimensional images using morphological filters and regression model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20170316

Address after: 528300 Guangdong province Foshan city Shunde District Daliang South Road No. 9 Research Institute

Applicant after: Internation combination research institute of Carnegie Mellon University of Shunde Zhongshan University

Applicant after: Sun Yat-sen University

Address before: 528300 Daliang street, Shunde District, Guangdong,,, Carnegie Mellon University, Zhongshan University, Shunde

Applicant before: Internation combination research institute of Carnegie Mellon University of Shunde Zhongshan University

RJ01 Rejection of invention patent application after publication

Application publication date: 20160615