CN104050682B - Image segmentation method fusing color and depth information


Info

Publication number
CN104050682B
CN104050682B
Authority
CN
China
Prior art keywords
depth
region
color
image
similarity
Prior art date
Legal status
Expired - Fee Related
Application number
CN201410324569.3A
Other languages
Chinese (zh)
Other versions
CN104050682A (en
Inventor
郑庆庆
吴谨
刘劲
邓慧萍
廖宇峰
Current Assignee
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE
Priority to CN201410324569.3A
Publication of CN104050682A
Application granted
Publication of CN104050682B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image segmentation method that fuses color and depth information. The method first segments the input color image with the mean-shift algorithm to obtain a set of over-segmented regions; it then computes the similarities between regions, including the color similarity, the depth similarity, and the fusion of the two; next, seed regions of the target and of the background are selected automatically according to the depth image; finally, the MSRM algorithm is used to merge regions and obtain the final segmentation result. When computing the similarity between regions, the depth information is fused dynamically in addition to the color information, which solves the problem that correct segmentation cannot be achieved when the target and background colors are similar, that is, when only a low-contrast edge exists between objects. The depth information of the image is used to select seed regions automatically, so the target and background seed regions do not need to be marked through manual interaction; the region characteristics of the depth image, rather than its edge characteristics, are used directly to determine the seed regions, which gives good robustness.

Description

An image segmentation method fusing color and depth information
Technical field
The invention belongs to the field of computer vision, and in particular relates to an image segmentation method that fuses color and depth information.
Background art
Image segmentation is the technique and process of dividing an image into several specific regions with unique properties and extracting targets of interest. It is the key step from image processing to image analysis. The targets extracted after segmentation can be used in fields such as image semantic recognition and image retrieval. Traditional image segmentation methods are generally based on appearance features of the image, such as color, brightness, texture, shape and structure. Real objects, however, exist in the three-dimensional world and should be defined by physical connectedness; combining the depth information of the image when segmenting it can effectively overcome over-segmentation and under-segmentation and yield a segmentation result with a certain semantic meaning.
Many image segmentation methods exist at present. Among them, the maximal-similarity-based region merging method proposed in [Document 1] (hereinafter the MSRM algorithm) is fairly simple, adapts to the image content, does not require a similarity threshold to be set in advance, and can extract the object contour from a complex background. The advantage of the MSRM algorithm is that, compared with the classic interactive graph cut algorithm, it produces a better segmentation under the same amount of interaction. Its limitation is that the markers must cover the main feature regions, and segmentation fails in the presence of shadows, low-contrast edges and blurred regions.
In recent years many works have introduced depth images into the field of image segmentation. [Document 2] proposes an image object segmentation algorithm based on depth and color information: the target image is first over-segmented with the mean-shift algorithm, a dense depth map of the stereo pair is obtained with a binocular stereo vision algorithm, seed points for a "fine" segmentation are selected from the over-segmentation result according to depth discontinuities, labels are assigned to regions without seed labels by graph cut, and adjacent regions that have different labels but no depth-discontinuous boundary between them are merged. The limitations of this algorithm are: (1) only color information is used when graph cut performs the global optimization; (2) the binarized depth-discontinuity edges depend strongly on an experimental threshold, and the edge lines themselves are broken and discontinuous, which affects the reliability of the seed point selection. [Document 3] proposes a multi-modal semantic segmentation method based on color and depth information, which fuses texture, color descriptors and 3D descriptors with a Markov random field model and assigns labels to superpixels; this method requires training and is computationally expensive. [Document 4] locates objects in a robot vision guidance system using color and depth images, but it is only applicable when the objects in the image have single, completely consistent colors, and only depth differences are used to distinguish them.
Document 1: Ning J., Zhang L., Zhang D., et al. Interactive image segmentation by maximal similarity based region merging. Pattern Recognition, 2010, 43(2): 445-456;
Document 2: Pi Zhiming, Wang Zengfu. Image object segmentation algorithm fusing depth and color information. Pattern Recognition and Artificial Intelligence, 2013, 26(2): 151-158;
Document 3: Islem Jebari, David Filliat. Color and depth-based superpixels for background and object segmentation. Procedia Engineering, 2012, 41: 1307-1315;
Document 4: José-Juan Hernández-López, Ana-Linnet, et al. Detecting objects using color and depth segmentation with Kinect sensor. Procedia Technology, 2012, 3: 196-204.
Content of the invention
It is an object of the present invention to provide an image segmentation method that can distinguish the target from the background more accurately even when their colors are complex and similar, and that can use the region characteristics of the depth image to select seed regions automatically.
The technical solution adopted by the present invention is an image segmentation method fusing color and depth information, characterized in that it comprises the following steps:
Step 1: segment the input color image with the mean-shift algorithm to obtain the over-segmented region set g = {g_i}, i = 1, ..., rn, where the subscript i is the region index and rn is the total number of regions;
Step 2: compute the similarity between regions in g, including the color similarity s_c, the depth similarity s_d, and the fusion of the color similarity s_c and the depth similarity s_d;
Step 3: automatically select target and background seed regions according to the depth image;
Step 4: merge regions with the MSRM algorithm to obtain the final segmentation result.
Preferably, the computation of the color similarity between regions in g described in step 2 is implemented as follows: the color similarity s_c of any two regions r and q in g is defined with the Bhattacharyya coefficient:
s_c = Σ_{u=1}^{U} √(hist_r^u · hist_q^u)
where hist_r and hist_q are the normalized color histograms of regions r and q, the superscript u denotes the u-th histogram element, and U is the number of histogram bins.
Preferably, the computation of the depth similarity between regions in g described in step 2 is implemented as follows: the arithmetic mean of the depth values of the pixels in each region of g is taken as the depth value of that region, forming the region depth set d = {d_i}, i = 1, ..., rn, where the subscript i is the region index, and the depth similarity s_d of any two regions r and q in g is defined as:
s_d = -|d_r - d_q| / (max{d_i}_{i=1,...,rn} - min{d_i}_{i=1,...,rn})
where max{d_i}_{i=1,...,rn} is the maximum over all region depths and min{d_i}_{i=1,...,rn} is the minimum over all region depths excluding 0.
Preferably, when taking the arithmetic mean of the pixel depth values of each region in g as the depth value of that region, for the case that the depth of some pixels in the image cannot be determined, for example because of occlusion, and such pixels are filled with 0 as the depth value in the given depth image, the concrete handling is: map the over-segmented region set g onto the depth image; if the depth values of all elements of an over-segmented region g_i are 0, the depth information of the object in this region is unknown, and only the color similarity between this region and its adjacent regions is considered; if only some elements of g_i have a depth value of 0, then when computing the region depth d_i, the arithmetic mean is taken only over those pixels of g_i whose depth is not zero.
Preferably, for the fusion of the color similarity s_c and the depth similarity s_d described in step 2, the total similarity after fusion is:
S = s_c + w · s_d
where the weight w for fusing s_c and s_d is described by a nonlinear sigmoid curve:
w = a / (0.1 + exp(-(s_c - b) / c))
where a determines the maximum value the sigmoid curve approaches, and b and c determine the displacement and the steepness of the sigmoid curve, respectively.
Preferably, the automatic selection of target and background seed regions according to the depth image described in step 3 is implemented as follows: the elements of the region depth set d are first clustered with the k-means algorithm with the number of classes k = 2, which automatically yields two classes, namely the target and the background; then some regions are randomly chosen from each of the two classes as the target and background seed regions. This clustering method is simple and removes the need for the manual marking required in the method of Ning et al. [Document 1].
Compared with the prior art, the present invention has the following advantages:
(1) When computing the similarity between regions, not only the color information is used but the depth information is also fused dynamically, so that when the target and background colors in the image are similar and a low-contrast edge appears between objects, they can still be distinguished by their different depths;
(2) The depth information of the image is used to select seed regions automatically, so the target and background seed regions do not need to be marked interactively; the region characteristics of the depth image, rather than its edge characteristics, are used directly to determine the seed regions, which gives good robustness.
Brief description of the drawings
Fig. 1: flow chart of the present invention;
Fig. 2: the sigmoid curve used in the embodiment of the present invention;
Fig. 3-1: the color image input in the embodiment of the present invention;
Fig. 3-2: the depth image input in the embodiment of the present invention;
Fig. 4: the result of applying mean-shift segmentation to the color image in the embodiment of the present invention;
Fig. 5-1: the k-means clustering result of the region depths in the embodiment of the present invention;
Fig. 5-2: the seed regions chosen in the embodiment of the present invention;
Fig. 6-1: the final segmentation result obtained by the method of the present invention in the embodiment;
Fig. 6-2: the final segmentation result obtained by the method of Ning et al. [Document 1] in the embodiment.
Specific embodiment
To make it easy for those of ordinary skill in the art to understand and implement the present invention, the invention is described in further detail below with reference to the accompanying drawings and an embodiment. It should be understood that the implementation example described here is only intended to illustrate and explain the present invention and is not intended to limit it.
Refer to Fig. 1, Fig. 2, Fig. 3-1, Fig. 3-2 and Fig. 4. The present invention is illustrated with the segmentation of the potted aloe shown in Fig. 3-1; Fig. 3-2 is the depth image of the potted aloe, and the information of the depth image is used to segment the color image more accurately. The technical solution adopted by the present invention is an image segmentation method fusing color and depth information, comprising the following steps:
Step 1: following the framework of Ning et al. [Document 1], the potted aloe shown in Fig. 3-1 is segmented with the mean-shift segmentation software EDISON; see Fig. 4 for the over-segmentation result, which gives the over-segmented region set g = {g_i}, i = 1, ..., rn, where the subscript i is the region index and rn is the total number of regions. All parameters of the software use the default settings. It can be seen that the textured background is split into a large number of small regions, while the potted aloe is divided into larger region blocks.
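For illustration, the following Python sketch shows one way to reproduce this over-segmentation step. The patent uses the EDISON mean-shift software with default parameters; here OpenCV's pyrMeanShiftFiltering followed by connected-component labelling is used as a stand-in, and the spatial and range bandwidths sp and sr are hypothetical values rather than EDISON's defaults.

import cv2
import numpy as np

def meanshift_oversegment(color_bgr, sp=10, sr=10):
    # Mean-shift filtering flattens colors; this only approximates EDISON.
    filtered = cv2.pyrMeanShiftFiltering(color_bgr, sp=sp, sr=sr)
    # Group pixels with identical filtered colors, then split them into
    # connected components to obtain the over-segmented regions g_i.
    flat = filtered.reshape(-1, 3)
    _, quantized = np.unique(flat, axis=0, return_inverse=True)
    quantized = quantized.reshape(filtered.shape[:2]).astype(np.int32)
    labels = np.full(quantized.shape, -1, dtype=np.int32)
    next_label = 0
    for value in np.unique(quantized):
        mask = (quantized == value).astype(np.uint8)
        count, comp = cv2.connectedComponents(mask)
        for c in range(1, count):
            labels[comp == c] = next_label
            next_label += 1
    return labels, next_label  # label map and region count rn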
Step 2: the three color channels of the input RGB color image are each divided into 16 equal parts over the value range [0, 255], so the color histogram of each region is computed in a feature space of dimension 16 × 16 × 16 = 4096. The color histograms are then normalized and the color similarity of two adjacent regions is computed:
s_c = Σ_{u=1}^{U} √(hist_r^u · hist_q^u)   (formula 1)
where hist_r and hist_q are the normalized color histograms of any two regions r and q in g, and the superscript u denotes the u-th histogram element. A two-dimensional similarity matrix sm of dimension rn × rn is created. If two regions are not adjacent they cannot be merged, so sm_ij = 0; the similarity of each region with itself is maximal and is set to 1, i.e. the diagonal elements sm_ii = 1; for adjacent regions the color similarity is computed according to formula 1, and its value range is [0, 1).
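A minimal Python sketch of this color term is given below, assuming an RGB image array and the region label map from step 1; the helper names region_histograms and bhattacharyya are illustrative only.

import numpy as np

def region_histograms(rgb, labels, rn, bins=16):
    # Map each pixel to a bin index of the 16 x 16 x 16 color cube.
    q = (rgb.astype(np.int32) * bins) // 256                 # per-channel bin 0..15
    flat_bins = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hists = np.zeros((rn, bins ** 3), dtype=np.float64)
    for i in range(rn):
        idx = flat_bins[labels == i]
        hists[i] = np.bincount(idx, minlength=bins ** 3)
        hists[i] /= max(hists[i].sum(), 1.0)                  # normalize per region
    return hists

def bhattacharyya(hist_r, hist_q):
    # Color similarity s_c of formula 1, in [0, 1).
    return float(np.sum(np.sqrt(hist_r * hist_q)))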
The arithmetic mean of the depth values of the pixels in each region of g is taken as the depth value of that region, forming the set d = {d_i}, i = 1, ..., rn, where the subscript i is the region index. Suppose the i-th region has n pixels whose depth values are {x_1, ..., x_n}; then:
d_i = (x_1 + x_2 + ... + x_n) / n   (formula 2)
The depth similarity s_d of two regions r and q is defined as:
s_d = -|d_r - d_q| / (max{d_i}_{i=1,...,rn} - min{d_i}_{i=1,...,rn})   (formula 3)
In formula 3, max{d_i}_{i=1,...,rn} is the maximum over all region depths, and min{d_i}_{i=1,...,rn} is the minimum over all region depths excluding 0. If the depth values of all elements of an over-segmented region g_i are 0, the region is considered to have no depth information and only the color similarity between this region and its adjacent regions is considered; if only some elements of g_i have a depth value of 0, then when computing the region depth d_i, the arithmetic mean is taken only over the pixels whose depth is not zero. From formula 3, the value range of the depth similarity s_d is [-1, 0]; the larger the depth difference between two regions, the smaller the depth similarity.
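The depth term can be sketched in the same style; the rules above (ignore zero-depth pixels, fall back to color alone when a region has no valid depth, and exclude 0 when taking the minimum) appear in the comments, and the helper names are again illustrative.

import numpy as np

def region_depths(depth, labels, rn):
    d = np.zeros(rn)
    for i in range(rn):
        vals = depth[labels == i]
        vals = vals[vals > 0]                      # 0 marks unknown depth
        d[i] = vals.mean() if vals.size else 0.0   # all-zero region: no depth info
    return d

def depth_similarity(d, r, q):
    if d[r] == 0 or d[q] == 0:
        return 0.0                                 # unknown depth: color-only comparison
    valid = d[d > 0]
    scale = valid.max() - valid.min()              # max depth minus smallest non-zero depth
    if scale == 0:
        return 0.0
    return -abs(d[r] - d[q]) / scale               # s_d in [-1, 0] (formula 3)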
Step 3: fusion of the color similarity s_c and the depth similarity s_d. The total similarity after fusion is:
S = s_c + w · s_d   (formula 4)
See Fig. 2. The weight w for fusing s_c and s_d is described by a nonlinear sigmoid curve:
w = a / (0.1 + exp(-(s_c - b) / c))
where a determines the maximum value the sigmoid curve approaches, and b and c determine the displacement and the steepness of the sigmoid curve, respectively.
Experiments show that the extracted object contour is good when a = 1, b = 0.2 and c = 0.5. Since the color similarity s_c ∈ [0, 1], Fig. 2 only shows the part of the sigmoid curve with a = 1, b = 0.2 and c = 0.5 over that range.
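A small sketch of the fusion weight and the total similarity of formula 4, using the parameters a = 1, b = 0.2, c = 0.5 reported above; the constant 0.1 in the denominator is kept exactly as written in the formula.

import math

def fusion_weight(s_c, a=1.0, b=0.2, c=0.5):
    # Sigmoid-shaped weight of the depth term, growing with the color similarity.
    return a / (0.1 + math.exp(-(s_c - b) / c))

def total_similarity(s_c, s_d):
    # Formula 4: S = s_c + w * s_d.
    return s_c + fusion_weight(s_c) * s_d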
Step 4: the elements of d = {d_i}, i = 1, ..., rn are clustered with the k-means algorithm with k = 2, giving two classes, the target class r_o and the background class r_b. Suppose r_o contains m regions and r_b contains n regions, so that m + n = rn. Then m1 and n1 regions are randomly chosen from the two classes as the seed regions of the target and the background respectively, where m1 < m and n1 < n. The exact values of m1 and n1 can be entered by the user when the program starts; in practice, taking m1 and n1 as about one fortieth of m and n works well, and there is no need to take too many seed regions. Fig. 5-1 shows the clustering result of the region depths in this example, where white represents the target class and black represents the background class. Five seed regions are then randomly selected from the target class (containing 191 over-segmented regions) and 27 seed regions from the background class (containing 1267 over-segmented regions); because the regions in the background class are smaller and more numerous, correspondingly more seed regions can be chosen for it. As shown in Fig. 5-2, the green regions are the target seed regions and the blue regions are the background seed regions.
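This seed-selection step can be sketched with scikit-learn's KMeans. Which of the two depth clusters is the target depends on the depth convention of the sensor, so the assumption that the target is the nearer (smaller-depth) cluster is marked in the comments, and the one-fortieth sampling ratio follows the rule of thumb given above.

import numpy as np
from sklearn.cluster import KMeans

def pick_seed_regions(d, seed_fraction=1 / 40, rng=None):
    rng = rng or np.random.default_rng()
    km = KMeans(n_clusters=2, n_init=10).fit(d.reshape(-1, 1))
    # Assumption: the target (the potted aloe) is the nearer cluster, i.e. the
    # one with the smaller mean depth; swap this if the map encodes disparity.
    target_cluster = int(np.argmin(km.cluster_centers_.ravel()))
    target = np.where(km.labels_ == target_cluster)[0]
    background = np.where(km.labels_ != target_cluster)[0]

    def pick(idx):
        count = max(1, int(len(idx) * seed_fraction))
        return rng.choice(idx, size=count, replace=False)

    return pick(target), pick(background)   # seed region ids for target, background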
Step 5: regions are merged with the MSRM algorithm to obtain the final segmentation result.
Let m_b be the set of marked background regions, m_o the set of marked target regions, and n the set of unmarked regions. The basic procedure of the MSRM algorithm is then as follows:
Step 5.1: merge unmarked regions into the marked background regions. For each region b ∈ m_b, find its set of adjacent regions S_b. For each a_i ∈ S_b that belongs neither to m_b nor to m_o, find its neighborhood set S_{a_i}; obviously b ∈ S_{a_i}. Compute the similarity between a_i and each region in S_{a_i}; if the similarity between a_i and b is the maximum among them, merge region a_i into b. Update the sets m_b and n. Repeat until no region in m_b can find a new region to merge.
Step 5.2: adaptively merge the regions within n. Similarly, for each region p ∈ n, find its neighborhood set S_p. For each h_i ∈ S_p that satisfies h_i ∈ n, find its neighborhood set S_{h_i}; obviously p ∈ S_{h_i}. Compute the similarity between h_i and each region in S_{h_i}; if the similarity between h_i and p is the maximum among them, merge region h_i into p. Update the set n. Repeat until no region in n can find a new region to merge.
Steps 5.1 and 5.2 are executed in a loop until the unmarked regions can neither be merged into m_b nor merged with each other within n. The algorithm progressively merges unmarked regions into the background regions, and the remaining unmarked regions are automatically assigned to the target region.
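The two-stage merging loop of steps 5.1 and 5.2 can be summarized in the following sketch. The adjacency structure adj, the fused similarity sim and the merge callback (which must update adjacency and any cached region features after each merge) are assumed to be supplied by the caller; they are placeholders and not part of the patent text.

def msrm_merge(M_b, M_o, N, adj, sim, merge):
    # M_b, M_o, N: sets of region ids; adj[i]: set of neighbors of region i;
    # sim(i, j): fused similarity S; merge(src, dst): caller-supplied callback.
    changed = True
    while changed:
        changed = False
        # Stage 1 (step 5.1): merge unmarked regions into marked background regions.
        for b in list(M_b):
            for a in [r for r in adj[b] if r in N]:
                if a not in N:
                    continue
                # Merge a into b only if b is a's most similar neighbor.
                if sim(a, b) >= max(sim(a, h) for h in adj[a]):
                    merge(a, b)
                    N.discard(a)
                    changed = True
        # Stage 2 (step 5.2): adaptively merge the remaining unmarked regions.
        for p in list(N):
            if p not in N:
                continue
            for h in [r for r in adj[p] if r in N and r != p]:
                if h not in N:
                    continue
                if sim(h, p) >= max(sim(h, x) for x in adj[h]):
                    merge(h, p)
                    N.discard(h)
                    changed = True
    # Whatever remains unmarked is assigned to the target.
    M_o |= N
    N.clear()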
Under the condition that the same seed regions are chosen, Fig. 6-1 is the final segmentation result obtained by the method of the present invention, and Fig. 6-2 is the final segmentation result obtained by the method proposed by Ning et al. [Document 1]. It can be seen that, because the curtain in the background has a pale green pattern whose color is close to that of the aloe leaves, the target regions that were not designated as seeds are all wrongly merged into the background class by that method, whereas the present invention uses the depth difference to separate the background and the target accurately.
It should be understood that the parts of this specification that are not described in detail belong to the prior art.
It should be understood that the above description of the preferred embodiment is relatively detailed and therefore should not be regarded as limiting the scope of patent protection of the present invention. Those of ordinary skill in the art, under the inspiration of the present invention and without departing from the scope protected by the claims of the present invention, may also make substitutions or modifications, all of which fall within the protection scope of the present invention; the scope of protection claimed by the present invention shall be defined by the appended claims.

Claims (3)

1. An image segmentation method fusing color and depth information, characterized in that it comprises the following steps:
Step 1: segment the input color image with the mean-shift algorithm to obtain the over-segmented region set g = {g_i}, i = 1, ..., rn, where the subscript i is the region index and rn is the total number of regions;
Step 2: compute the similarity between regions in g, including the color similarity s_c, the depth similarity s_d, and the fusion of the color similarity s_c and the depth similarity s_d;
wherein the computation of the depth similarity between regions in g is implemented as follows: the arithmetic mean of the depth values of the pixels in each region of g is taken as the depth value of that region, forming the region depth set d = {d_i}, i = 1, ..., rn, where the subscript i is the region index, and the depth similarity s_d of any two regions r and q in g is defined as:
s_d = -|d_r - d_q| / (max{d_i}_{i=1,...,rn} - min{d_i}_{i=1,...,rn})
where max{d_i}_{i=1,...,rn} is the maximum over all region depths and min{d_i}_{i=1,...,rn} is the minimum over all region depths excluding 0;
wherein, when taking the arithmetic mean of the pixel depth values of each region in g as the depth value of that region, for the case that the depth of some pixels in the image cannot be determined and such pixels are filled with 0 as the depth value in the given depth image, the concrete handling is: map the over-segmented region set g onto the depth image; if the depth values of all elements of an over-segmented region g_i are 0, the depth information of the object in this region is unknown, and only the color similarity between this region and its adjacent regions is considered; if only some elements of g_i have a depth value of 0, then when computing the region depth d_i, the arithmetic mean is taken only over those pixels of g_i whose depth is not zero;
Step 3: automatically select target and background seed regions according to the depth image;
Step 4: merge regions with the maximal-similarity-based region merging method to obtain the final segmentation result.
2. The image segmentation method fusing color and depth information according to claim 1, characterized in that: for the fusion of the color similarity s_c and the depth similarity s_d described in step 2, the total similarity after fusion is:
S = s_c + w · s_d
where the weight w for fusing s_c and s_d is described by a nonlinear sigmoid curve:
w = a / (0.1 + exp(-(s_c - b) / c))
where a determines the maximum value the sigmoid curve approaches, and b and c determine the displacement and the steepness of the sigmoid curve, respectively.
3. The image segmentation method fusing color and depth information according to claim 1, characterized in that: the automatic selection of target and background seed regions according to the depth image described in step 3 is implemented as follows: the elements of the region depth set d are first clustered with the k-means algorithm with the number of classes k = 2, which automatically yields two classes, namely the target and the background, and then some regions are randomly chosen from each of the two classes as the target and background seed regions.
CN201410324569.3A 2014-07-09 2014-07-09 Image segmentation method fusing color and depth information Expired - Fee Related CN104050682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410324569.3A CN104050682B (en) 2014-07-09 2014-07-09 Image segmentation method fusing color and depth information


Publications (2)

Publication Number Publication Date
CN104050682A CN104050682A (en) 2014-09-17
CN104050682B true CN104050682B (en) 2017-01-18

Family

ID=51503465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410324569.3A Expired - Fee Related CN104050682B (en) 2014-07-09 2014-07-09 Image segmentation method fusing color and depth information

Country Status (1)

Country Link
CN (1) CN104050682B (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101720047A (en) * 2009-11-03 2010-06-02 上海大学 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Color and Depth-Based Superpixels for Background and Object Segmentation; Islem Jebari et al.; Procedia Engineering; 2012-12-31; Vol. 41; 1307-1315 *
Texture feature extraction and its application in image segmentation; Zheng Qingqing (郑庆庆); China Doctoral Dissertations Full-text Database, Information Science and Technology; 2011-09-15 (No. 09); last paragraph of p. 69 to first paragraph of p. 70, paragraphs 3-4 of p. 77, Fig. 5.6 *
Image object segmentation algorithm fusing depth and color information; Pi Zhiming et al.; Pattern Recognition and Artificial Intelligence; 2013-02-28; Vol. 26 (No. 2); paragraph 4 of the right column of p. 154, paragraph 1 of the right column of p. 155, Fig. 4 *

Also Published As

Publication number Publication date
CN104050682A (en) 2014-09-17


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170118

Termination date: 20170709

CF01 Termination of patent right due to non-payment of annual fee