CN110084821A - A multi-instance interactive image segmentation method - Google Patents

A multi-instance interactive image segmentation method

Info

Publication number
CN110084821A
Authority
CN
China
Prior art keywords
pixel
image
image segmentation
parameter
segmentation methods
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910307292.6A
Other languages
Chinese (zh)
Other versions
CN110084821B (en)
Inventor
冯杰
郑雅羽
李子琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Xiaotu Technology Co Ltd
Original Assignee
Hangzhou Xiaotu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xiaotu Technology Co Ltd filed Critical Hangzhou Xiaotu Technology Co Ltd
Priority to CN201910307292.6A priority Critical patent/CN110084821B/en
Publication of CN110084821A publication Critical patent/CN110084821A/en
Application granted granted Critical
Publication of CN110084821B publication Critical patent/CN110084821B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Abstract

The invention discloses a multi-instance interactive image segmentation method that requires only a small number of marked pixels and can quickly segment a target image containing multiple instances with arbitrary color distributions. The method comprises the steps of: (1) obtaining an image containing manual markup information; (2) building multi-instance Gaussian models of the image based on color information; (3) classifying the pixels in the image by combining a distance formula with the multi-instance Gaussian models, so as to obtain the segmented image; (4) if the user wishes to modify the segmented image, returning to step (1); otherwise, saving the image. Compared with the prior art, when solving the multi-instance interactive image segmentation problem the method not only makes full use of the color information of the image but also takes the positional information of its pixels into account, and achieves high-performance image segmentation by jointly exploiting color and position.

Description

A multi-instance interactive image segmentation method
Technical field
The invention belongs to the field of image segmentation and fusion technology, and in particular relates to a multi-instance interactive image segmentation method.
Background technique
Image segmentation extracts meaningful regions from an image according to certain features (such as edges or texture) and determines whether the image contains targets of interest. Segmentation is generally carried out on the basis of similarity between pixels, abrupt changes between pixels, or spatial distance. Similarity between pixels means that the pixels in a region share similar characteristics, for example close pixel values or the same texture; abrupt pixel changes occur at edges in the image, where pixel values jump; spatial distance can also serve as a basis for segmentation, since pixels that are close together in space are more likely to be assigned to the same class. There are many image segmentation methods; according to whether a human participates in the segmentation process they are divided into semi-automatic segmentation and automatic segmentation. Because target images differ and background and object pixels can be similar, automatic segmentation has difficulty completely extracting objects with high-level semantics from an image; semi-automatic segmentation adds user interaction as algorithm input on top of automatic segmentation, so that objects with high-level semantics in the image can be extracted completely.
Interactive image segmentation uses foreground and background points marked in advance in the image to effectively reduce the uncertainty of the segmentation, and is therefore widely used in image editing. The region-marking-based interactive segmentation methods that have appeared in recent years only require marking the class of part of the pixels in the image to be segmented. GrabCut, currently the most popular graph-theory-based interactive image segmentation method, belongs to this category: it distinguishes foreground from background by marking a rectangular box containing the foreground, where pixels outside the box are background and pixels inside the box have a high probability of being foreground. The main drawbacks of GrabCut are that it can only segment images with a single foreground target, that it requires the color distributions of foreground and background pixels to follow Gaussian mixture models and to differ substantially from each other, and that its segmentation is poor in boundary regions where the contrast between foreground and background is weak. Another popular approach is the linearly constrained spectral clustering method, which encodes the markup information as linear homogeneous equality constraints and adds them to the classical spectral-clustering segmentation framework to obtain an interactive segmentation result. This method does not depend on Gaussian mixture models of the foreground and background color distributions and can therefore be applied to images of almost any scene, but its disadvantages are that it is computationally expensive and cannot run in real time, that it cannot encode the spatial smoothness of pixels into corresponding constraints, and that it makes poor use of the markup information, so a large number of pixels must be marked before an accurate segmentation can be obtained. Graph-theory-based interactive segmentation methods convert the target image to be segmented into a graph structure and transform the segmentation task into an energy minimization problem; common methods include GraphCut, GrabCut and spectral clustering. However, current graph-theory-based interactive segmentation algorithms suffer from two problems: 1. the number of instances that can be extracted is limited; 2. the segmentation is slow and real-time segmentation cannot be achieved.
The Chinese patent CN 102360494A proposes a multi-foreground-target interactive image segmentation method. Unlike region-based interactive segmentation, this method does not assign a weight to every edge of the graph; instead it introduces a discriminant analysis method in the local window around each pixel and maps each pixel directly to a class label through its feature vector. Its drawback is that estimating the class label of every pixel in a local window of the image is computationally very expensive, and similar or identical pixels may exist within a local window, which leads to repeated computation. The Chinese patent CN 107730528A proposes an interactive image segmentation and fusion method based on the GrabCut algorithm, which combines GrabCut with the watershed algorithm to address the inaccurate segmentation that GrabCut produces when foreground and background are similar; the defect of this method is that it is limited to foreground/background segmentation.
Summary of the invention
In view of the above, the present invention provides a multi-instance interactive image segmentation method that requires only a small number of marked pixels and can quickly segment a target image containing multiple instances with arbitrary color distributions.
A multi-instance interactive image segmentation method comprises the following steps:
(1) manually calibrate each instance in the image in an interactive manner; each foreground target and the background in the image correspond to one instance, so the instance count K equals the number of foreground targets n plus one for the background, i.e. K = n + 1;
(2) build a Gaussian statistical model for each instance from the color information of the calibrated pixels;
(3) iteratively update the parameters π_k, μ_k, σ_k, x_k and y_k with the EM (Expectation-Maximization) algorithm, wherein μ_k and σ_k are the mean and standard deviation of the Gaussian statistical model of the k-th instance, x_k and y_k are the horizontal and vertical coordinates of the center point of the k-th instance in the image, π_k is the weight of the joint classification model of the k-th instance, and k is a natural number with 1 ≤ k ≤ K;
(4) for any uncalibrated pixel p in the image, compute from the parameters μ_k and σ_k the posterior probability that p belongs to each instance;
(5) compute from the parameters x_k and y_k the distance between pixel p and the center point of each instance;
(6) combine the results of steps (4) and (5) by weight to obtain the joint classification model of each instance, as follows:
u_k(p) = λ·Φ_k(p) + (1−λ)·d_k(p)
wherein u_k(p) is the joint classification probability that pixel p belongs to the k-th instance, Φ_k(p) is the posterior probability that pixel p belongs to the k-th instance, d_k(p) is the distance between pixel p and the center point of the k-th instance, and λ is a preset weight coefficient (λ = 0.8 gives good segmentation results for all pictures);
(7) recompute the joint classification model of each instance with the parameters π_k, μ_k, σ_k, x_k and y_k finally determined after the iteration converges, determine the instance to which pixel p belongs with the classifier k*(p) = argmax_k π_k·u_k(p), i.e. assign p to the instance with the largest weighted joint classification probability, and traverse all uncalibrated pixels in the image in this way (a code sketch illustrating steps (6) and (7) follows this list).
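As a concrete illustration of steps (6) and (7), the sketch below evaluates the joint classification probability u_k(p) from a one-dimensional color Gaussian and an exponentially decayed Euclidean distance, and assigns a pixel to the instance with the largest weighted score. It is a minimal Python/NumPy sketch that assumes the parameters π_k, μ_k, σ_k, x_k, y_k have already been estimated; the function names, the toy parameter values and the argmax classification rule are illustrative rather than quoted from the patent.

import numpy as np

def joint_probability(C, x, y, mu, sigma, cx, cy, lam=0.8):
    """Joint classification probability u_k(p) = lam*Phi_k(p) + (1-lam)*d_k(p)
    of one pixel against every instance k (vectorized over k)."""
    # Color term: 1-D Gaussian posterior on the pixel's color value C.
    phi = np.exp(-(C - mu) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    # Position term: e^(-Euclidean distance) to each instance center.
    d = np.exp(-np.sqrt((x - cx) ** 2 + (y - cy) ** 2))
    return lam * phi + (1.0 - lam) * d

def classify_pixel(C, x, y, pi, mu, sigma, cx, cy, lam=0.8):
    """Assign the pixel to the instance with the largest weighted joint probability."""
    u = joint_probability(C, x, y, mu, sigma, cx, cy, lam)
    return int(np.argmax(pi * u))

# Toy usage with two instances (background + one foreground target).
pi    = np.array([0.5, 0.5])
mu    = np.array([40.0, 200.0])   # mean color value of each instance
sigma = np.array([15.0, 20.0])    # standard deviation of each instance
cx    = np.array([10.0, 60.0])    # instance center x coordinates
cy    = np.array([10.0, 60.0])    # instance center y coordinates
print(classify_pixel(C=190.0, x=58, y=61, pi=pi, mu=mu, sigma=sigma, cx=cx, cy=cy))  # -> 1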
Further, step (1) is implemented as follows: for each instance, a small number of pixels inside the instance region are manually calibrated with a distinct color.
Further, in the iterative process of step (3) the initial values μ_k^(0) and σ_k^(0) of the parameters μ_k and σ_k are calculated by the following formulas:

μ_k^(0) = (1/n_k)·Σ_{i=1}^{n_k} C_i        (σ_k^(0))² = (1/n_k)·Σ_{i=1}^{n_k} (C_i − μ_k^(0))²

wherein C_i is the color value of the i-th calibrated pixel in the region of the k-th instance, i is a natural number with 1 ≤ i ≤ n_k, and n_k is the number of calibrated pixels in the region of the k-th instance.
Further, in the iterative process of step (3) the initial values x_k^(0) and y_k^(0) of the parameters x_k and y_k are calculated by the following formulas:

x_k^(0) = (1/n_k)·Σ_{i=1}^{n_k} x_i        y_k^(0) = (1/n_k)·Σ_{i=1}^{n_k} y_i

wherein x_i and y_i are the horizontal and vertical coordinates in the image of the i-th calibrated pixel in the region of the k-th instance, i is a natural number with 1 ≤ i ≤ n_k, and n_k is the number of calibrated pixels in the region of the k-th instance.
Further, in the iterative process of step (3) the initial value π_k^(0) of the parameter π_k is calculated by the following formula:

π_k^(0) = n_k / Σ_{l=1}^{K} n_l

wherein n_k is the number of calibrated pixels in the region of the k-th instance.
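A minimal sketch of this parameter initialization, assuming the calibrated pixels of each instance are given as an (n_k, 3) array of (x, y, C) triples; the container layout and the helper name are assumptions made for illustration.

import numpy as np

def initialize_parameters(calibrated):
    """calibrated[k] is an (n_k, 3) array of (x, y, C) for the pixels
    scribbled on instance k.  Returns initial pi, mu, sigma, cx, cy."""
    counts = np.array([len(pts) for pts in calibrated], dtype=float)
    pi0 = counts / counts.sum()                                  # weight: share of marked pixels
    mu0 = np.array([pts[:, 2].mean() for pts in calibrated])     # mean color value per instance
    sigma0 = np.array([pts[:, 2].std() for pts in calibrated])   # standard deviation per instance
    cx0 = np.array([pts[:, 0].mean() for pts in calibrated])     # instance center x
    cy0 = np.array([pts[:, 1].mean() for pts in calibrated])     # instance center y
    return pi0, mu0, sigma0, cx0, cy0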
Further, in step (3) the parameters π_k, μ_k, σ_k, x_k and y_k are iteratively updated with the EM algorithm, specifically based on the following update formulas:

w_k^(j)(p) = π_k^(j)·u_k^(j)(p) / Σ_{l=1}^{K} π_l^(j)·u_l^(j)(p)

π_k^(j+1) = (1/n_s)·Σ_{p∈Ω} w_k^(j)(p)

μ_k^(j+1) = Σ_{p∈Ω} w_k^(j)(p)·C_p / Σ_{p∈Ω} w_k^(j)(p)        (σ_k^(j+1))² = Σ_{p∈Ω} w_k^(j)(p)·(C_p − μ_k^(j+1))² / Σ_{p∈Ω} w_k^(j)(p)

x_k^(j+1) = Σ_{p∈Ω} w_k^(j)(p)·x_p / Σ_{p∈Ω} w_k^(j)(p)        y_k^(j+1) = Σ_{p∈Ω} w_k^(j)(p)·y_p / Σ_{p∈Ω} w_k^(j)(p)

wherein w_k^(j)(p) is the intermediate variable of the j-th iteration, π_k^(j) and π_k^(j+1) are the values of π_k in the j-th and (j+1)-th iterations, μ_k^(j+1) and σ_k^(j+1) are the values of μ_k and σ_k in the (j+1)-th iteration, x_k^(j) and y_k^(j) and x_k^(j+1) and y_k^(j+1) are the values of x_k and y_k in the j-th and (j+1)-th iterations, u_k^(j)(p) is u_k(p) in the j-th iteration, C_p is the color value of pixel p, x_p and y_p are its coordinates, n_s is the number of uncalibrated pixels in the image, and Ω is the set of uncalibrated pixels in the image.
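The sketch below carries out one iteration of the updates described above over all uncalibrated pixels, using the same λ-weighted combination of color and position terms. The array shapes and the function name are assumptions, and the responsibility-weighted forms are used as a plausible reading of the update formulas, not as a verbatim reproduction of the patent's equations.

import numpy as np

def em_step(C, X, Y, pi, mu, sigma, cx, cy, lam=0.8):
    """One EM iteration.  C, X, Y: 1-D arrays of length n_s (color value and
    coordinates of every uncalibrated pixel).  pi, mu, sigma, cx, cy: length-K arrays."""
    # E-step: responsibility w_k(p) of every instance for every pixel (n_s x K).
    phi = np.exp(-(C[:, None] - mu) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    d = np.exp(-np.sqrt((X[:, None] - cx) ** 2 + (Y[:, None] - cy) ** 2))
    u = lam * phi + (1.0 - lam) * d
    w = pi * u
    w /= w.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, color statistics and instance centers.
    Nk = w.sum(axis=0)
    pi_new = Nk / len(C)
    mu_new = (w * C[:, None]).sum(axis=0) / Nk
    sigma_new = np.sqrt((w * (C[:, None] - mu_new) ** 2).sum(axis=0) / Nk)
    cx_new = (w * X[:, None]).sum(axis=0) / Nk
    cy_new = (w * Y[:, None]).sum(axis=0) / Nk
    return pi_new, mu_new, sigma_new, cx_new, cy_new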
Further, in step (4) the posterior probability that pixel p belongs to each instance is calculated by the following formula:

Φ_k(p) = 1/(√(2π)·σ_k) · exp(−(C_p − μ_k)² / (2σ_k²))

wherein C_p is the color value of pixel p.
Further, in step (5) the distance between pixel p and the center point of each instance is calculated by the following formula:

d_k(p) = exp(−√((x − x_k)² + (y − y_k)²))

wherein x and y are the horizontal and vertical coordinates of pixel p in the image.
The multi-instance Gaussian modeling based on color information in the present invention can be carried out in any of the following color spaces: the YCbCr color space, the RGB color space and the HSV color space. The multi-instance Gaussian modeling is a multi-instance model based on a Gaussian mixture model, in which the number of instances is specified by the user and the maximum number of instances is limited only by the computer memory. The distance formula used in the present invention is the Euclidean distance passed through an exponential of base e; the distance formula and the multi-instance Gaussian mixture models are combined for classification, the two terms each carrying a weight, and the weights are obtained by iterative computation to reach the optimum.
Compared with the prior art, when solving the multi-instance interactive image segmentation problem the method of the present invention not only makes full use of the color information of the image but also takes the positional information of the pixels into account, and achieves high-performance image segmentation by jointly exploiting the color and the position of the image.
Detailed description of the invention
Fig. 1 is a flow diagram of the multi-instance interactive image segmentation method of the present invention.
Fig. 2 is an image whose multiple instances have been manually calibrated in an interactive manner according to the present invention.
Fig. 3 is the result image after the multi-instance segmentation of the present invention.
Specific embodiment
In order to describe the present invention more specifically, the technical solution of the present invention is described in detail below with reference to the drawings and a specific embodiment.
In this embodiment pixel scribbles are used as the user interaction, and the technical solution of the present invention is realized by combining a multi-instance Gaussian mixture model with a distance formula. As shown in Fig. 1, the specific implementation steps are as follows:
Step 1: the user calibrates multiple instances in the image (multiple foreground targets and the background) through interaction. In this embodiment pixel scribbles are used as the interaction: at the beginning the user marks the multiple foreground targets and the background in the image with pixels of different colors, as shown in Fig. 2.
Step 2: select the initial training data using the interactive information provided by the user. The specific procedure in this embodiment is as follows: for the training data of the background model, the pixels labeled as background are selected as initial data to learn the background statistical model; for the training data of each foreground target model, the pixels labeled with the color of that foreground target are selected as initial data to learn the statistical model of that target, giving the statistical models of the multiple foreground targets.
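A minimal sketch of this selection step, assuming the user scribbles are supplied as (x, y, label) records over the image, with label 0 for the background and labels 1..n for the foreground targets; the record layout and the helper name are illustrative assumptions.

import numpy as np

def collect_training_data(image_gray, scribbles, num_instances):
    """Group the scribbled pixels by instance label.
    image_gray: 2-D array of color values C; scribbles: list of (x, y, label)."""
    groups = [[] for _ in range(num_instances)]
    for x, y, label in scribbles:
        C = float(image_gray[y, x])          # color value of the marked pixel
        groups[label].append((x, y, C))
    # One (n_k, 3) array of (x, y, C) per instance, ready for model fitting.
    return [np.array(g, dtype=float) for g in groups]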
Step 3: learn the models from the training data obtained in Step 2 and combine them with the distance formula to obtain the joint classification model, then classify the unmarked pixels of the whole image with the joint classification model. The joint classification model and its learning process are described in detail below.
In this embodiment the statistical classifier based on the Gaussian model computes the similarity between each pixel and the data statistical model. Let C be the color value of some unmarked pixel (x, y); the posterior probability is then computed as follows:

Φ_k(C|μ_k, σ_k) = 1/(√(2π)·σ_k) · exp(−(C − μ_k)² / (2σ_k²)), k = 1, 2, …, K (1)

wherein μ_k and σ_k are the mean and standard deviation of the k-th Gaussian model, K is the number of foreground targets n plus one for the background, i.e. K = n + 1, and Φ_k(C|μ_k, σ_k) is the probability that pixel (x, y) belongs to the k-th instance.
With x and y the coordinate position of the pixel, the distance is computed as follows:

d_k(x, y|x_k, y_k) = exp(−√((x − x_k)² + (y − y_k)²)), k = 1, 2, …, K (2)

wherein x_k and y_k are the horizontal and vertical coordinates of the center point of the k-th instance class.
Before applying formula (1) and formula (2), the color value of each pixel, the mean and standard deviation of each Gaussian model, and the center coordinate point of each instance class need to be determined first.
Specifically, the color value of each pixel is computed as follows:
C=(R*30+G*59+B*11+50)/100 (3)
wherein R, G and B are the three RGB channel values of the pixel.
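A small sketch of formula (3) in integer arithmetic; the +50 term rounds the weighted sum before the division by 100, and the function name is illustrative.

def color_value(r, g, b):
    """Color value of formula (3): an integer luma-style weighted average of
    the RGB channels, with +50 acting as rounding before the division by 100."""
    return (r * 30 + g * 59 + b * 11 + 50) // 100

print(color_value(255, 0, 0))      # 77 for a pure red pixel
print(color_value(255, 255, 255))  # 255 for a white pixel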
Specifically, the mean and standard deviation of each Gaussian model are computed as follows:

μ_k = (1/n_k)·Σ_{i=1}^{n_k} C_ik        σ_k² = (1/n_k)·Σ_{i=1}^{n_k} (C_ik − μ_k)² (4)
Specifically, the center coordinate point of each instance class is computed as follows:

x_k = (1/n_k)·Σ_{i=1}^{n_k} x_ik        y_k = (1/n_k)·Σ_{i=1}^{n_k} y_ik (5)
wherein n_k is the number of marked pixels of class k, C_ik is the color value of the i-th marked pixel of class k, and x_ik and y_ik are the horizontal and vertical coordinates of the i-th marked pixel of class k.
The joint classification model is formed from formula (1) and formula (2) above by combining the two with a certain weight; the joint classification probability of belonging to class k is as follows:
u_k(x, y, C|x_k, y_k, μ_k, σ_k) = λ·Φ_k(C|μ_k, σ_k) + (1−λ)·d_k(x, y|x_k, y_k), k = 1, 2, …, K (6)
wherein λ is the proportion of the color information in the model; a larger value means that the classification relies more on the color information. The parameter is adjusted manually to obtain the optimal solution; experiments show that λ = 0.8 gives good segmentation results for all pictures.
Combining the joint classification probabilities of all classes and assigning them different weights gives the joint classification model, namely:

u(x, y, C|Θ) = Σ_{k=1}^{K} π_k·u_k(x, y, C|x_k, y_k, μ_k, σ_k) (7)

wherein π_k is the weight of the joint classification model of class k, initialized as follows:

π_k^(0) = n_k / Σ_{l=1}^{K} n_l (8)
The above parameters can be determined with the expectation-maximization (EM) algorithm, which can be divided into the following four steps:
Step 3.1: initialize all parameters; all parameters are initialized through formula (4), formula (5) and formula (8).
Step 3.2: estimate the posterior probability (responsibility) of each unmarked point under each class, as follows:

w_k(x_i, y_i, C_i) = π_k·u_k(x_i, y_i, C_i|Θ) / Σ_{l=1}^{K} π_l·u_l(x_i, y_i, C_i|Θ), i = 1, 2, …, n_s (9)

wherein n_s is the number of unmarked points, and the probability density value u_k(x_i, y_i, C_i|Θ) of an unmarked point under the model is computed with formula (6).
Step 3.3: update all parameters according to the estimates of Step 3.2.
Update the weights:

π_k = (1/n_s)·Σ_{i=1}^{n_s} w_k(x_i, y_i, C_i) (10)
Update the means:

μ_k = Σ_{i=1}^{n_s} w_k(x_i, y_i, C_i)·C_i / Σ_{i=1}^{n_s} w_k(x_i, y_i, C_i) (11)
Update the variances:

σ_k² = Σ_{i=1}^{n_s} w_k(x_i, y_i, C_i)·(C_i − μ_k)² / Σ_{i=1}^{n_s} w_k(x_i, y_i, C_i) (12)
Update the center coordinates:

x_k = Σ_{i=1}^{n_s} w_k(x_i, y_i, C_i)·x_i / Σ_{i=1}^{n_s} w_k(x_i, y_i, C_i)        y_k = Σ_{i=1}^{n_s} w_k(x_i, y_i, C_i)·y_i / Σ_{i=1}^{n_s} w_k(x_i, y_i, C_i) (13)
Step 3.4: convergence condition. Iterate Step 3.2 and Step 3.3 and keep updating the five quantities above until the parameters no longer change significantly, i.e. |Θ − Θ'| < ε, where Θ' denotes the updated parameters and ε is taken as 10⁻⁵. The EM algorithm yields the optimal multi-class joint classification model parameters for the unmarked points; combining them with the joint classification model finally gives the multi-instance classifier for the unmarked pixels, whose classification criterion is to assign a pixel to the class with the largest weighted joint classification probability, i.e. k*(x, y) = argmax_k π_k·u_k(x, y, C|x_k, y_k, μ_k, σ_k).
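A sketch of the convergence test of Step 3.4: all parameters are packed into one vector Θ and a parameter-update function is repeated until |Θ − Θ'| < ε with ε = 1e-5. The packing scheme, the generic step_fn argument (for example the em_step sketch above) and the iteration cap are illustrative assumptions.

import numpy as np

def run_em(step_fn, params, eps=1e-5, max_iter=200):
    """Iterate a parameter-update function until the packed parameter vector
    changes by less than eps between two consecutive iterations."""
    for _ in range(max_iter):
        theta_old = np.concatenate([np.ravel(p) for p in params])
        params = step_fn(*params)
        theta_new = np.concatenate([np.ravel(p) for p in params])
        if np.max(np.abs(theta_new - theta_old)) < eps:   # |Theta - Theta'| < eps
            break
    return params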
Step 4: classify each unmarked pixel with the classifier obtained in Step 3 and obtain a result containing only the foreground targets, as shown in Fig. 3. If the user is not satisfied with the classification result, further marks can be added or the image re-marked for another round of training, returning to Step 2 and Step 3 to obtain a new result that is fed back to the user; otherwise, the current result image is saved under the specified path as a PNG image.
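As an illustration of Step 4, the sketch below classifies every pixel of a color-value image at once with the converged parameters and returns a per-pixel instance index; treating index 0 as the background is an assumption made here, and writing the result to a PNG under the specified path as well as the re-marking loop are left to the surrounding application.

import numpy as np

def label_map(C, pi, mu, sigma, cx, cy, lam=0.8):
    """Classify every pixel of a color-value image C (H x W) into one of K
    instances using the converged parameters (length-K arrays)."""
    H, W = C.shape
    ys, xs = np.mgrid[0:H, 0:W]
    phi = np.exp(-(C[..., None] - mu) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    d = np.exp(-np.sqrt((xs[..., None] - cx) ** 2 + (ys[..., None] - cy) ** 2))
    u = lam * phi + (1.0 - lam) * d
    labels = np.argmax(pi * u, axis=-1)   # instance index per pixel; 0 taken as background
    return labels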
The above description of the embodiment is intended to enable those skilled in the art to understand and apply the present invention. Persons skilled in the art can obviously make various modifications to the above embodiment and apply the general principles described herein to other embodiments without creative effort. Therefore, the present invention is not limited to the above embodiment; improvements and modifications made by those skilled in the art according to the disclosure of the present invention shall all fall within the protection scope of the present invention.

Claims (8)

1. A multi-instance interactive image segmentation method, comprising the following steps:
(1) manually calibrating each instance in the image in an interactive manner, each foreground target and the background in the image corresponding to one instance, so that the instance count K equals the number of foreground targets n plus one for the background, i.e. K = n + 1;
(2) building a Gaussian statistical model for each instance from the color information of the calibrated pixels;
(3) iteratively updating the parameters π_k, μ_k, σ_k, x_k and y_k with the EM algorithm, wherein μ_k and σ_k are the mean and standard deviation of the Gaussian statistical model of the k-th instance, x_k and y_k are the horizontal and vertical coordinates of the center point of the k-th instance in the image, π_k is the weight of the joint classification model of the k-th instance, and k is a natural number with 1 ≤ k ≤ K;
(4) for any uncalibrated pixel p in the image, computing from the parameters μ_k and σ_k the posterior probability that p belongs to each instance;
(5) computing from the parameters x_k and y_k the distance between pixel p and the center point of each instance;
(6) combining the results of steps (4) and (5) by weight to obtain the joint classification model of each instance, as follows:
u_k(p) = λ·Φ_k(p) + (1−λ)·d_k(p)
wherein u_k(p) is the joint classification probability that pixel p belongs to the k-th instance, Φ_k(p) is the posterior probability that pixel p belongs to the k-th instance, d_k(p) is the distance between pixel p and the center point of the k-th instance, and λ is a preset weight coefficient;
(7) recomputing the joint classification model of each instance with the parameters π_k, μ_k, σ_k, x_k and y_k finally determined after the iteration converges, determining the instance to which pixel p belongs with the classifier k*(p) = argmax_k π_k·u_k(p), i.e. assigning p to the instance with the largest weighted joint classification probability, and traversing all uncalibrated pixels in the image in this way.
2. The multi-instance interactive image segmentation method according to claim 1, characterized in that step (1) is implemented as follows: for each instance, a small number of pixels inside the instance region are manually calibrated with a distinct color.
3. The multi-instance interactive image segmentation method according to claim 1, characterized in that in the iterative process of step (3) the initial values μ_k^(0) and σ_k^(0) of the parameters μ_k and σ_k are calculated by the following formulas:

μ_k^(0) = (1/n_k)·Σ_{i=1}^{n_k} C_i        (σ_k^(0))² = (1/n_k)·Σ_{i=1}^{n_k} (C_i − μ_k^(0))²

wherein C_i is the color value of the i-th calibrated pixel in the region of the k-th instance, i is a natural number with 1 ≤ i ≤ n_k, and n_k is the number of calibrated pixels in the region of the k-th instance.
4. The multi-instance interactive image segmentation method according to claim 1, characterized in that in the iterative process of step (3) the initial values x_k^(0) and y_k^(0) of the parameters x_k and y_k are calculated by the following formulas:

x_k^(0) = (1/n_k)·Σ_{i=1}^{n_k} x_i        y_k^(0) = (1/n_k)·Σ_{i=1}^{n_k} y_i

wherein x_i and y_i are the horizontal and vertical coordinates in the image of the i-th calibrated pixel in the region of the k-th instance, i is a natural number with 1 ≤ i ≤ n_k, and n_k is the number of calibrated pixels in the region of the k-th instance.
5. The multi-instance interactive image segmentation method according to claim 1, characterized in that in the iterative process of step (3) the initial value π_k^(0) of the parameter π_k is calculated by the following formula:

π_k^(0) = n_k / Σ_{l=1}^{K} n_l

wherein n_k is the number of calibrated pixels in the region of the k-th instance.
6. The multi-instance interactive image segmentation method according to claim 1, characterized in that in step (3) the parameters π_k, μ_k, σ_k, x_k and y_k are iteratively updated with the EM algorithm, specifically based on the following update formulas:

w_k^(j)(p) = π_k^(j)·u_k^(j)(p) / Σ_{l=1}^{K} π_l^(j)·u_l^(j)(p)

π_k^(j+1) = (1/n_s)·Σ_{p∈Ω} w_k^(j)(p)

μ_k^(j+1) = Σ_{p∈Ω} w_k^(j)(p)·C_p / Σ_{p∈Ω} w_k^(j)(p)        (σ_k^(j+1))² = Σ_{p∈Ω} w_k^(j)(p)·(C_p − μ_k^(j+1))² / Σ_{p∈Ω} w_k^(j)(p)

x_k^(j+1) = Σ_{p∈Ω} w_k^(j)(p)·x_p / Σ_{p∈Ω} w_k^(j)(p)        y_k^(j+1) = Σ_{p∈Ω} w_k^(j)(p)·y_p / Σ_{p∈Ω} w_k^(j)(p)

wherein w_k^(j)(p) is the intermediate variable of the j-th iteration, π_k^(j) and π_k^(j+1) are the values of π_k in the j-th and (j+1)-th iterations, μ_k^(j+1) and σ_k^(j+1) are the values of μ_k and σ_k in the (j+1)-th iteration, x_k^(j) and y_k^(j) and x_k^(j+1) and y_k^(j+1) are the values of x_k and y_k in the j-th and (j+1)-th iterations, u_k^(j)(p) is u_k(p) in the j-th iteration, C_p is the color value of pixel p, x_p and y_p are its coordinates, n_s is the number of uncalibrated pixels in the image, and Ω is the set of uncalibrated pixels in the image.
7. The multi-instance interactive image segmentation method according to claim 1, characterized in that in step (4) the posterior probability that pixel p belongs to each instance is calculated by the following formula:

Φ_k(p) = 1/(√(2π)·σ_k) · exp(−(C_p − μ_k)² / (2σ_k²))

wherein C_p is the color value of pixel p.
8. The multi-instance interactive image segmentation method according to claim 1, characterized in that in step (5) the distance between pixel p and the center point of each instance is calculated by the following formula:

d_k(p) = exp(−√((x − x_k)² + (y − y_k)²))

wherein x and y are the horizontal and vertical coordinates of pixel p in the image.
CN201910307292.6A 2019-04-17 2019-04-17 Multi-instance interactive image segmentation method Active CN110084821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910307292.6A CN110084821B (en) 2019-04-17 2019-04-17 Multi-instance interactive image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910307292.6A CN110084821B (en) 2019-04-17 2019-04-17 Multi-instance interactive image segmentation method

Publications (2)

Publication Number Publication Date
CN110084821A true CN110084821A (en) 2019-08-02
CN110084821B CN110084821B (en) 2021-01-12

Family

ID=67415372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910307292.6A Active CN110084821B (en) 2019-04-17 2019-04-17 Multi-instance interactive image segmentation method

Country Status (1)

Country Link
CN (1) CN110084821B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598741A (en) * 2019-08-08 2019-12-20 西北大学 Pixel-level label automatic generation model construction and automatic generation method and device
CN112381834A (en) * 2021-01-08 2021-02-19 之江实验室 Labeling method for image interactive instance segmentation
CN112862789A (en) * 2021-02-10 2021-05-28 上海大学 Interactive image segmentation method based on machine learning


Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103119607A (en) * 2010-07-08 2013-05-22 国际商业机器公司 Optimization of human activity determination from video
CN102637298A (en) * 2011-12-31 2012-08-15 辽宁师范大学 Color image segmentation method based on Gaussian mixture model and support vector machine
CN103208123A (en) * 2013-04-19 2013-07-17 广东图图搜网络科技有限公司 Image segmentation method and system
CN104063876A (en) * 2014-01-10 2014-09-24 北京理工大学 Interactive image segmentation method
CN104933064A (en) * 2014-03-19 2015-09-23 株式会社理光 Method and apparatus for predicting motion parameter of target object
CN103927412A (en) * 2014-04-01 2014-07-16 浙江大学 Real-time learning debutanizer soft measurement modeling method on basis of Gaussian mixture models
CN104091332A (en) * 2014-07-01 2014-10-08 黄河科技学院 Method for optimizing multilayer image segmentation of multiclass color texture images based on variation model
WO2016037848A1 (en) * 2014-09-09 2016-03-17 Thomson Licensing Image recognition using descriptor pruning
CN104392240A (en) * 2014-10-28 2015-03-04 中国疾病预防控制中心寄生虫病预防控制所 Parasite egg identification method based on multi-feature fusion
CN104881669A (en) * 2015-05-13 2015-09-02 中国科学院计算技术研究所 Method and system for extracting local area detector based on color contrast
WO2018120932A1 (en) * 2016-02-26 2018-07-05 Bayerische Motoren Werke Aktiengesellschaft Method and apparatus for optimizing scan data and method and apparatus for correcting trajectory
US20170278246A1 (en) * 2016-03-22 2017-09-28 Electronics And Telecommunications Research Institute Apparatus and method for extracting object
CN105844647A (en) * 2016-04-06 2016-08-10 哈尔滨伟方智能科技开发有限责任公司 Kernel-related target tracking method based on color attributes
CN105957078A (en) * 2016-04-27 2016-09-21 浙江万里学院 Multi-view video segmentation method based on graph cut
CN106981068A (en) * 2017-04-05 2017-07-25 重庆理工大学 A kind of interactive image segmentation method of joint pixel pait and super-pixel
CN108364294A (en) * 2018-02-05 2018-08-03 西北大学 Abdominal CT images multiple organ dividing method based on super-pixel
CN109166133A (en) * 2018-07-14 2019-01-08 西北大学 Soft tissue organs image partition method based on critical point detection and deep learning

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598741A (en) * 2019-08-08 2019-12-20 西北大学 Pixel-level label automatic generation model construction and automatic generation method and device
CN110598741B (en) * 2019-08-08 2022-11-18 西北大学 Pixel-level label automatic generation model construction and automatic generation method and device
CN112381834A (en) * 2021-01-08 2021-02-19 之江实验室 Labeling method for image interactive instance segmentation
CN112381834B (en) * 2021-01-08 2022-06-03 之江实验室 Labeling method for image interactive instance segmentation
CN112862789A (en) * 2021-02-10 2021-05-28 上海大学 Interactive image segmentation method based on machine learning

Also Published As

Publication number Publication date
CN110084821B (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN106296695B (en) Adaptive threshold natural target image segmentation extraction algorithm based on conspicuousness
CN108549891B (en) Multi-scale diffusion salient target detection method based on background and target priors
CN107123088B (en) A kind of method of automatic replacement photo background color
CN110084821A (en) A multi-instance interactive image segmentation method
CN105844292B (en) A kind of image scene mask method based on condition random field and secondary dictionary learning
CN105488472B (en) A kind of digital cosmetic method based on sample form
CN108288035A (en) The human motion recognition method of multichannel image Fusion Features based on deep learning
CN104123561B (en) Fuzzy C-mean algorithm remote sensing image automatic classification method based on spatial attraction model
CN104463843B (en) Interactive image segmentation method of Android system
CN107833183A (en) A kind of satellite image based on multitask deep neural network while super-resolution and the method for coloring
CN109712145A (en) A kind of image matting method and system
CN102436636A (en) Method and system for segmenting hair automatically
CN109448015A (en) Image based on notable figure fusion cooperates with dividing method
CN109903257A (en) A kind of virtual hair-dyeing method based on image, semantic segmentation
CN105590325B (en) High-resolution remote sensing image dividing method based on blurring Gauss member function
CN112862792B (en) Wheat powdery mildew spore segmentation method for small sample image dataset
CN102982544B (en) Many foreground object image interactive segmentation method
Meng et al. Feature adaptive co-segmentation by complexity awareness
CN107146229B (en) Polyp of colon image partition method based on cellular Automation Model
CN106611422B (en) Stochastic gradient Bayes's SAR image segmentation method based on sketch structure
CN106981068A (en) A kind of interactive image segmentation method of joint pixel pait and super-pixel
CN106651884B (en) Mean field variation Bayes's SAR image segmentation method based on sketch structure
CN107403434A (en) SAR image semantic segmentation method based on two-phase analyzing method
CN105118076A (en) Image colorization method based on over-segmentation and local and global consistency
CN105678790B (en) High-resolution remote sensing image supervised segmentation method based on variable gauss hybrid models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant