CN103164855A - Bayesian Decision Theory foreground extraction method combined with reflected illumination - Google Patents

Bayesian Decision Theory foreground extraction method combined with reflected illumination

Publication number
CN103164855A
CN103164855A CN2013100597075A CN201310059707A
Authority
CN
China
Prior art keywords
image
foreground
function
illumination
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013100597075A
Other languages
Chinese (zh)
Other versions
CN103164855B (en
Inventor
王好谦
邓博雯
邵航
戴琼海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201310059707.5A priority Critical patent/CN103164855B/en
Publication of CN103164855A publication Critical patent/CN103164855A/en
Application granted granted Critical
Publication of CN103164855B publication Critical patent/CN103164855B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a Bayesian Decision Theory foreground extraction method combined with reflected illumination. The method comprises: the user specifies a point light source located on the foreground object; a gray-level matching transform is applied to the image to simulate point-source illumination and strengthen image edge information; an illumination function is obtained by comparing the image before and after the transform; the image is filtered for noise reduction and segmented with a watershed algorithm; the matting parameters are computed with the Bayes formula; the alpha-value function curve is fitted with a multilayer perceptron; and the illumination function and the color distribution function are integrated to complete the extraction of the foreground object. The user only needs to specify the position of the point light source and does not need to preset edge information of the foreground and background, which reduces the user interaction required; at the same time, the algorithms used all have low-order time complexity, avoiding the drawbacks of large computational cost and slow processing speed common to matting algorithms. Because an illumination function is introduced and the alpha values are fitted by a perceptron, an accurate and complete extraction result can be obtained for foreground objects with complicated edges, and especially for foreground objects whose edge colors are similar to the background.

Description

Bayesian decision foreground extraction method combined with reflected illumination
Technical field
The invention belongs to the field of computer image processing, and in particular relates to a Bayesian decision foreground extraction method combined with reflected illumination.
Background technology
Foreground extraction is a technique in which the user specifies a small number of foreground and background regions in an image, and all foreground objects are then separated automatically and accurately from these hints according to a certain decision rule.
Foreground extraction is an indispensable key technique in film and television production and is widely used in media production. Many different algorithms have been developed to date: the Rotoscoping, Autokey, Knockout, Ruzon-Tomasi, Hillman, Bayes, Poisson and GrabCut methods, Lazy Snapping, matting methods based on perceptual color space, and so on. Foreground extraction in natural images can be divided into three steps: region division, color estimation and alpha estimation. First a trimap division is performed, and then the foreground component and the alpha value are computed for each point in the unknown region.
Based on a Bayesian framework, Chuang proposed a Bayesian matting method built on statistical principles. Unlike the earlier GrabCut and Lazy Snapping methods based on graph theory, it does not process each pixel one by one in raster-scan order; instead it uses the Bayesian framework to construct a system of linear equations from statistical principles and solve for the most suitable solution. The method processes from the outside inward, like peeling an onion layer by layer. In effect, for any point C in the unknown region in RGB space, it seeks the minimum of the sum of squares of three distances: the Euclidean distance d1 from C to the line segment FB connecting a foreground point F and a background point B, the Mahalanobis distance d2 from C to the point F, and the Mahalanobis distance d3 from C to the point B; that is, it solves min(d1² + d2² + d3²). However, the method only defines the log probabilities L(C|F, B, α), L(F) and L(B), not L(α); when the colors of the foreground and background are relatively close, this assumption goes wrong. Tan improved Bayesian matting and proposed a fast matting method: assuming that the segment FB passes through the point C, so that d1 = 0, the Bayesian matting framework reduces to solving min(d2² + d3²), for which an approximation method that quickly finds the minimum was proposed. Although this method is fast, its results are unsatisfactory.
Summary of the invention
Based on the statistical principles of Bayesian matting, and to address the deficiencies of the foreground extraction methods above, the present invention proposes a Bayesian decision foreground extraction method combined with reflected illumination, so as to avoid the drawbacks of large computational cost and slow processing speed common to matting algorithms, reduce the user interaction required, and obtain an accurate and complete foreground extraction result.
The Bayesian decision foreground extraction method combined with reflected illumination of the present invention is a brand-new interactive foreground extraction technique. The method comprises the following steps:
S1, gray-level matching transform
An original image is input, and the user specifies a point light source located on the foreground object. The illumination effect of the point source on the foreground and background objects is computed with the BRDF (bidirectional reflectance distribution function). A power transform is introduced; by controlling the parameters of the transform function, the high-tone range is expanded and the low-tone range is compressed, highlighting edge information between the foreground and background of the image.
At the same time, according to the different lighting effects that the point source produces on the foreground and background objects, the illumination function corresponding to each pixel is computed.
S2, filtering and noise reduction
A high-pass filter is used to filter and denoise the image after the matching transform of step S1, removing the influence of image noise and avoiding over-segmentation in the subsequent watershed segmentation.
S3, image segmentation
After the denoising of step S2 the edge information of the image is distinct; in particular, for images whose foreground and background colors are relatively close, the foreground object and the burr details of its border are highlighted. On this basis a watershed algorithm is used to segment the image: the image is regarded as a topographic surface in geodesy, the gray value of each pixel represents the elevation of that point, each local minimum and its zone of influence is regarded as a catchment basin, and the boundaries of the catchment basins form the watershed lines.
The watershed algorithm can be regarded as simulating an immersion process. The computation is divided into two steps, sorting and flooding (iterative marking): a hole is pierced at each local minimum of the surface, and the whole model is then slowly immersed in water; as the immersion deepens, the zone of influence of each local minimum slowly expands outward, and dams are built where two catchment basins meet, forming the watershed lines, i.e. the edges of the image. After the iterative marking of the whole image is completed, the complete and continuous edge partition lines of the image are obtained, and an image that has been filtered does not produce the over-segmentation phenomenon.
S4, computing the matting parameters (F, B and α)
After the image is segmented into the three regions of background, foreground and unknown region, a Bayesian framework is defined to formulate the matting parameters, and a maximum a posteriori problem is solved to compute the F, B and α closest to C, where C is any known color on the image, and F, B and α are respectively the foreground color, the background color and the opacity. The distributions of F and B are fitted with Gaussian distributions; the distribution of α is fitted with a multilayer perceptron, taking the unknown region computed through a sliding window and the specified region as sample data.
S5, α-value reconstruction and foreground extraction
According to the matting parameters F, B and α obtained in step S4, the α-value map of the image is reconstructed; a Markov random field is introduced, the illumination function obtained in step S1 is fused in, and the extraction of the foreground layer is completed by minimizing the constructed energy function.
Preferably, step S2 selects a Butterworth high-pass filter, which has a smooth transition band between the passband and the stopband, sharpens the image edges and highlights boundary information; the processed image exhibits no ringing.
In step S1 the gray-level matching transform may adopt the power function

g = c · (f + b)^γ

where f is the gray level of the original image, g is the gray level after the transform, and c, b and γ are control parameters.
A gray-level matching transform method for the above Bayesian decision foreground extraction method combined with reflected illumination comprises the following steps: an original image is input, and the user specifies a point light source located on the foreground object; the illumination effect of the point source on the foreground and background objects is computed with the BRDF; and the power function

g = c · (f + b)^γ

is introduced in the gray-level matching transform, where f is the gray level of the original image, g is the gray level after the transform, and c, b and γ are control parameters. By controlling the parameters of the transform function, the high-tone range is expanded and the low-tone range is compressed, highlighting edge information between the foreground and background of the image.
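As a concrete illustration, the power transform above can be implemented as a lookup table over the 256 gray levels. This is a minimal sketch; the particular parameter values (b = 0, γ = 2, and c chosen so that level 255 maps back to 255) are illustrative assumptions, not values prescribed by the patent.

```python
def power_transform(f, c, b, gamma, max_gray=255):
    """Power-law gray mapping g = c * (f + b) ** gamma, clamped to [0, max_gray]."""
    g = c * (f + b) ** gamma
    return max(0, min(max_gray, int(round(g))))

# Illustrative parameters: gamma > 1 compresses the low tones and expands the
# high tones; c renormalizes so that gray level 255 maps to 255.
GAMMA = 2.0
C = 255.0 / (255.0 ** GAMMA)
lut = [power_transform(f, C, 0.0, GAMMA) for f in range(256)]  # gray-level mapping table
```

Applying `lut` to every pixel realizes the matching transform; with γ > 1 the mapping is convex, so mid-tones darken while the highlight range is preserved, which is what makes the lit foreground stand out.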
On the basis of prior work, the method of the invention exploits the fact that foreground and background objects in a natural image change differently under point-source irradiation: a gray-level matching transform is applied to the image to simulate point-source illumination and strengthen image edge information, and at the same time an illumination-related energy function is obtained by comparing the image before and after the transform. After filtering and noise reduction, watershed segmentation is used to obtain accurate and complete edge information while avoiding over-segmentation. In the Bayesian matting process a multilayer perceptron is used to fit the α-value function curve. Finally, the illumination function and the color distribution function obtained earlier are integrated to complete the extraction of the foreground object.
In this method, the user only needs to specify the position of the point light source and does not have to provide edge details of the foreground and background to complete the extraction, which reduces the user interaction required. At the same time, the algorithms used all have low-order time complexity, overcoming the drawbacks of large computational cost and slow processing speed common to matting algorithms.
Before segmenting the image, the method uses a power function to perform the gray-level matching transform, which increases the foreground brightness and reduces the background brightness; an illumination-related energy function is generated from the comparison of the images before and after the transform and is incorporated into the subsequent segmentation. At the same time, the edge information of the image becomes more distinct, so that a desirable segmentation result can be reached even for images whose foreground and background colors are close.
Watershed segmentation is adopted, which can determine the number of segmented regions by itself, produces continuous segmentation edges, is fast, yields closed contour lines and locates image edges accurately. The initial image is first filtered and denoised, which sharpens the image edges and highlights boundary information while avoiding over-segmentation caused by noise.
A multilayer perceptron is trained to fit the α-value function curve, solving the α-value estimation problem and completing the α-value reconstruction of the matting process. A Markov random field is introduced at the same time and combined with the illumination function obtained in the previous steps, increasing the accuracy of the foreground extraction; the constructed energy function is minimized to complete the extraction of the foreground layer.
Description of drawings
Fig. 1 is the main flow chart of the present invention.
Embodiment
A preferred embodiment of the present invention is described in further detail below with reference to the accompanying drawings.
S1. Gray-level matching transform
a) An image is input, and the user specifies on the foreground layer one or several point light sources of intensity L. The luminance produced at a point P of the imaging surface is

E = L · ρ · cos θ / r²

where ρ is the surface BRDF (bidirectional reflectance distribution function) under the given illumination and viewing angle, r is the distance from the illumination point, and θ is the angle between the illumination point and the point P. The foreground layer is close to the illumination point while the background is farther away, so the foreground object changes substantially under the lighting while the background object changes little.
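The luminance formula in a) can be evaluated directly. The sketch below assumes the reconstructed form E = L·ρ·cos θ / r² and illustrates why a point near the source (the foreground) changes far more than a distant one (the background); the concrete numbers are illustrative.

```python
import math

def point_source_luminance(L, rho, r, theta):
    """Luminance produced at a surface point by a point source of intensity L:
    rho is the BRDF value, r the distance to the source, theta the incidence angle."""
    return L * rho * math.cos(theta) / (r * r)

# A foreground point close to the source versus a background point far away:
near = point_source_luminance(L=100.0, rho=0.5, r=1.0, theta=0.0)
far = point_source_luminance(L=100.0, rho=0.5, r=10.0, theta=0.0)
```

The inverse-square falloff gives near/far = 100 here: exactly the foreground/background contrast that the subsequent gray-level transform amplifies.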
b) According to the user's input, the luminance changes are computed and the histogram after the matching transform is specified, i.e. the probability density at each gray level of the specified histogram.
c) The probability densities of the histogram array of the source image are accumulated over the gray levels, and a gray-equalization transform is applied to the probability density array.
d) A gray-equalization transform is applied to the probability density array of the specified histogram.
e) The mapping correspondence is determined and the power transform

g = c · (f + b)^γ

is introduced, where f is the gray level of the original image, g is the gray level after the transform, and c, b and γ are control parameters. The histogram probability density arrays after gray equalization are traversed one by one against the original image to build a gray-level mapping table. By adjusting the control parameters, the high-tone range is expanded and the low-tone range is compressed, highlighting edge information between the foreground and background of the image. The illumination function is also generated in this step; the detailed process is introduced in step S5 below.
S2. Filtering and noise reduction
A high-pass filter (a Butterworth high-pass filter) is used for filtering and noise reduction, so as to avoid over-segmentation in the subsequent watershed segmentation while sharpening the image edges and highlighting boundary information. The transfer function of an n-th order Butterworth high-pass filter with cutoff frequency D0 is

H(u, v) = 1 / (1 + [D0 / D(u, v)]^(2n))

where D(u, v) is the distance from the origin of the frequency domain to the point (u, v), that is,

D(u, v) = sqrt(u² + v²)

Usually the point where H(u, v) drops to 1/√2 of its original value is taken as the cutoff frequency point. There is no sharp jump between the passband and the stopband, i.e. there is a smooth transition band between the two, and the processed image exhibits no ringing.
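The transfer function above is simple to evaluate per frequency sample. A minimal sketch, measuring D from the frequency-domain origin as the text describes:

```python
import math

def butterworth_highpass(u, v, d0, n):
    """n-th order Butterworth high-pass transfer function
    H(u, v) = 1 / (1 + (D0 / D(u, v)) ** (2 * n)),
    with D(u, v) the distance from the frequency-domain origin."""
    d = math.hypot(u, v)
    if d == 0.0:
        return 0.0  # the DC component is fully suppressed
    return 1.0 / (1.0 + (d0 / d) ** (2 * n))
```

With this common algebraic form the response at D = D0 is exactly 0.5 for any order n (conventions that place the cutoff at 1/√2 rescale the exponent); the transition remains smooth for every n, which is why the filtered image shows no ringing.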
S3. Image segmentation
a) The image is read and converted to a gray-level image; the Sobel operator is used to find the image borders, filtering along the x and y edge directions and taking the gradient magnitude.
b) Using the user's designation of foreground and background from step S1, a morphological opening is first applied to remove very small targets, followed in turn by erosion, morphological reconstruction, morphological closing, dilation, morphological reconstruction, and image negation to obtain the local maxima of the image. The pixel values at the local maxima are set to 255; after a closing and an erosion-opening operation, the foreground locations are set to 255 and the image is converted to a binary image, completing the marking of the foreground object and optimizing the segmentation result.
c) The segmentation function is computed; the watershed segmentation algorithm is implemented as follows:
I. Count the connected regions and mark the initial regions to find the initial watershed lines.
A threshold, denoted Thre, is taken on the original image as the initial mountain height, and all mountains lower than this height are flooded. Seed denotes the initial state, the set of flooded points; dry points are cleared.
According to the earlier marks, the initial connected regions are counted with the eight-connected-neighborhood method. Each pixel of Seed is scanned in turn and all marked initial catchment basins are assigned one by one to their respective regions, forming the outer loop. A temporary queue is created to handle the connectivity of the current initial catchment basin; the growable points belonging to a specific initial catchment basin region are scanned one by one and stored temporarily, forming the inner loop.
When the current scan point is processed, it is first judged whether the point belongs to some initial catchment basin; if not, it is skipped. If so, it is checked whether its eight-connected neighborhood contains non-growable points (a non-growable point is one that carries no mark in Seed); the growable points in the eight-connected neighborhood are added to the temporary queue, and any non-growable points are added to the seed queue.
When the two loops are complete, the record of each connected initial catchment basin is obtained, together with the initial watershed lines, the region numbers they correspond to, and the internal mark gray levels of the corresponding points within each region, i.e. the set of points corresponding to a specific gray level in a specific region. From the region number of a watershed point, the information of the points of all gray levels in that region can be obtained.
II. Flooding process
This is realized by nested loops. The outer loop raises the water level (with no more than 256 iterations), starting from the initial threshold Thre. The inner loop scans the watershed points of each initial region and expands them according to the given water level, checking their four-connected neighborhoods one by one. If the four-connected neighborhood contains an unmarked point (necessarily a point of higher gray level), it is judged whether that point can grow under the current water level; if it can, it joins the seed queue and will re-enter the inner loop; if it cannot, it joins the watershed set queue of this neighborhood point. This repeats until all watershed points under one water level have been expanded in their corresponding regions. All regions are expanded together under each water level, which guarantees that no jumping occurs (a global expansion of all regions per water level).
When all regions have completed the expansion at every water level, the segmentation map is obtained.
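The sort-then-flood scheme of steps I–II can be condensed into a few lines of pure Python. This is a deliberately simplified sketch (process pixels in rising gray order, start a basin at each unlabeled minimum, build a dam where two basins meet), not the full queue-based, level-synchronized implementation described above.

```python
def watershed(img):
    """Simplified immersion watershed on a 2-D gray image (list of lists).
    Returns a label map: positive ints are catchment basins, 0 marks watershed lines."""
    h, w = len(img), len(img[0])
    labels = [[None] * w for _ in range(h)]

    def nbrs(y, x):
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            if 0 <= y + dy < h and 0 <= x + dx < w:
                yield y + dy, x + dx

    next_label = 1
    # Rising water: visit pixels from darkest (lowest) to brightest (highest).
    for _, y, x in sorted((img[y][x], y, x) for y in range(h) for x in range(w)):
        seen = {labels[ny][nx] for ny, nx in nbrs(y, x)} - {None, 0}
        if not seen:            # a new local minimum: start a basin
            labels[y][x] = next_label
            next_label += 1
        elif len(seen) == 1:    # grows an existing basin
            labels[y][x] = seen.pop()
        else:                   # two basins meet: build a dam (watershed line)
            labels[y][x] = 0
    return labels

# Two dark plateaus (basins) separated by a bright ridge:
img = [[1, 1, 5, 2, 2],
       [1, 1, 5, 2, 2]]
labels = watershed(img)
```

On the toy image the two plateaus receive distinct basin labels and the bright middle column becomes the watershed line, mirroring the dam-building step of the immersion process.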
S4. Computing the matting parameters
a) Assume F, B and α are respectively the foreground color, the background color and the opacity to be solved, and C is any known color on the image. The purpose of the computation is, with C known, to find the F, B and α that maximize the probability P. The mathematical description is formula (1), where L is the logarithm, which turns multiplication into addition and simplifies the computation:

arg max P(F, B, α | C) = arg max L(C | F, B, α) + L(F) + L(B) + L(α)    (1)

b) L(C | F, B, α) corresponds to a Gaussian distribution with standard deviation σ_C centered at αF + (1 − α)B, as formula (2):

L(C | F, B, α) = −‖C − αF − (1 − α)B‖² / σ_C²    (2)
c) L(F) also corresponds to a Gaussian distribution; through spatial coupling, i.e. the colors of neighboring pixels, the weighted mean F̄ and covariance matrix Σ_F are computed, as formula (3):

L(F) = −(F − F̄)ᵀ Σ_F⁻¹ (F − F̄) / 2    (3)

L(B) is similar to L(F), with the weight factor α_i replaced by 1 − α_i.
d) Sampling process: first a circle is sampled centered on the unknown point, and the search radius is enlarged continuously until enough known background and foreground points have been collected; all points already solved are also added to the sample set N, and the samples are clustered by color. Each sample point has a corresponding weight, where α_i is the opacity and g_i is a Gaussian attenuation function parameterized by distance, as formula (4):

w_i = α_i² · g_i    (4)
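Formula (4) translates directly into code. The Gaussian width sigma below is an illustrative assumption; the patent only states that g_i is a Gaussian attenuation in distance.

```python
import math

def sample_weight(alpha_i, dist, sigma=8.0):
    """Weight of a foreground sample, formula (4): w_i = alpha_i ** 2 * g_i,
    where g_i is a Gaussian attenuation in the spatial distance to the unknown pixel.
    sigma is an assumed falloff scale, not a value from the patent."""
    g_i = math.exp(-dist * dist / (2.0 * sigma * sigma))
    return alpha_i ** 2 * g_i
```

Fully opaque nearby samples dominate the cluster statistics; semi-transparent or distant samples contribute little, which stabilizes the Gaussian fits of c).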
e) A multilayer perceptron is trained to obtain the distribution of the α-value function and solve the L(α) problem:
I. Determine the structure of the multilayer perceptron (a three-layer structure is chosen: input layer, hidden layer and output layer); initialize the weights with small random numbers (generally within ±0.3); set the training time t = 0.
II. Randomly choose a training sample x from the sample set and denote its desired output by y.
III. Compute the actual output of the current perceptron for the input x by a forward pass through the network.
IV. Adjust the weights starting from the output layer.
For layer l, the weights are corrected with

w_ij(t + 1) = w_ij(t) + Δw_ij

where the weight correction term is

Δw_ij = −η · ∂E/∂w_ij

and η is a learning step given in advance (generally taken in 0.1 to 3).
For the output layer (l = L − 1), the correction uses the derivative, with respect to the weights, of the error between the current output and the desired output.
For the hidden layer, the output error is backpropagated to this layer and the derivative of that error with respect to the layer's weights is used.
V. After all the weights have been updated, the outputs of all training samples are recomputed, together with the error between the updated outputs and the desired outputs. The stopping condition is then checked: the mean squared error between the actual and desired outputs in the most recent round of training is below a certain threshold (generally < 0.1), or the changes of all weights in the most recent round are all below a certain threshold (generally < 0.1), or the total number of training rounds given in advance has been reached (generally 3). If the stopping condition is satisfied, training stops and the result function is output; otherwise set t = t + 1 and return to II.
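Steps I–V above can be condensed into a small pure-Python perceptron. This sketch assumes one input (a sample feature) and one sigmoid output for the α value; the hidden-layer size, learning rate and epoch count are chosen for illustration, not taken from the patent.

```python
import math
import random

def train_mlp(samples, n_hidden=5, eta=0.5, epochs=1000, seed=0):
    """Tiny 1-input / n_hidden / 1-output perceptron trained by backpropagation.
    samples: list of (x, target) pairs with targets in (0, 1).
    Returns (predict_fn, per-epoch mean-squared-error history)."""
    rnd = random.Random(seed)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    # Step I: small random initialization, as the text prescribes (within +-0.3).
    w1 = [rnd.uniform(-0.3, 0.3) for _ in range(n_hidden)]
    b1 = [rnd.uniform(-0.3, 0.3) for _ in range(n_hidden)]
    w2 = [rnd.uniform(-0.3, 0.3) for _ in range(n_hidden)]
    b2 = rnd.uniform(-0.3, 0.3)

    def forward(x):  # Step III: forward pass through the sigmoid layers
        hid = [sig(w1[i] * x + b1[i]) for i in range(n_hidden)]
        out = sig(sum(w2[i] * hid[i] for i in range(n_hidden)) + b2)
        return hid, out

    history = []
    for _ in range(epochs):
        err = 0.0
        for x, t in samples:  # Step II: visit each sample
            hid, out = forward(x)
            err += (out - t) ** 2
            # Step IV: output-layer delta, then backpropagate to the hidden layer.
            d_out = (out - t) * out * (1 - out)
            for i in range(n_hidden):
                d_hid = d_out * w2[i] * hid[i] * (1 - hid[i])  # uses pre-update w2
                w2[i] -= eta * d_out * hid[i]
                w1[i] -= eta * d_hid * x
                b1[i] -= eta * d_hid
            b2 -= eta * d_out
        history.append(err / len(samples))  # Step V: track the mean squared error
    return (lambda x: forward(x)[1]), history

# Fit a smooth alpha-like curve (a sigmoid ramp) from 11 samples.
samples = [(x / 10.0, 1.0 / (1.0 + math.exp(-(x - 5.0)))) for x in range(11)]
predict, hist = train_mlp(samples)
```

The error history plays the role of the stopping check in step V; here training simply runs a fixed number of epochs and the fitted curve serves as the L(α) model.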
f) The solving process is divided into two steps: first solve for F and B, then for α. First assume α is fixed; taking partial derivatives of the right side of formula (1) with respect to F and B and setting them to 0 yields a system of six linear equations, formula (5), which converts the problem into solving a linear system for the F and B that maximize formula (1):

[ Σ_F⁻¹ + I·α²/σ_C²        I·α(1 − α)/σ_C²         ] [ F ]   [ Σ_F⁻¹·F̄ + C·α/σ_C²       ]
[ I·α(1 − α)/σ_C²          Σ_B⁻¹ + I·(1 − α)²/σ_C² ] [ B ] = [ Σ_B⁻¹·B̄ + C·(1 − α)/σ_C² ]    (5)

Then assume F and B are fixed, substitute the α function obtained above into formula (1), and solve for the α that maximizes formula (1).
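For intuition, the alternating solve of f) can be written out for a single gray-level pixel, where the 6x6 system of formula (5) collapses to a 2x2 system. The prior means and variances in the example call are illustrative assumptions.

```python
def bayes_matte_pixel(C, F_mean, F_var, B_mean, B_var, sigma_C=5.0, iters=20):
    """Alternating MAP solve for one gray pixel (scalar version of formula (5)):
    fix alpha and solve the 2x2 linear system for F and B, then fix F and B
    and update alpha = (C - B)(F - B) / (F - B)**2, clipped to [0, 1]."""
    alpha, F, B = 0.5, F_mean, B_mean
    inv_c = 1.0 / sigma_C ** 2
    for _ in range(iters):
        # 2x2 system from d/dF = 0 and d/dB = 0 with alpha held fixed:
        a11 = alpha ** 2 * inv_c + 1.0 / F_var
        a12 = alpha * (1 - alpha) * inv_c
        a22 = (1 - alpha) ** 2 * inv_c + 1.0 / B_var
        r1 = alpha * C * inv_c + F_mean / F_var
        r2 = (1 - alpha) * C * inv_c + B_mean / B_var
        det = a11 * a22 - a12 * a12
        F = (r1 * a22 - r2 * a12) / det
        B = (r2 * a11 - r1 * a12) / det
        if F != B:  # alpha update with F and B held fixed
            alpha = max(0.0, min(1.0, (C - B) * (F - B) / (F - B) ** 2))
    return F, B, alpha

# A mid-gray pixel between tight foreground (200) and background (50) priors:
F, B, alpha = bayes_matte_pixel(C=125.0, F_mean=200.0, F_var=1.0, B_mean=50.0, B_var=1.0)
```

With tight priors the solve recovers F near 200, B near 50 and alpha near 0.5, since 125 lies midway between the two prior means.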
S5. α-value reconstruction and foreground extraction
a) A Markov random field is introduced, and the extraction of the foreground layer is completed by minimizing a constructed energy function that sums a data term over the pixels and a smoothness term E_s over pairs of adjacent pixels p, q. Here ε generally takes a value of 20 to 40, and γ_f takes the value 10.
b) In step S1, suppose H_f = {h_f^k} and H_nf = {h_nf^k} are respectively the RGB color histograms of the image after the lighting treatment and without it, with h_f^k and h_nf^k the numbers of pixels in the k-th color bin. If h_nf^k > h_f^k, some pixels that were in bin k of H_nf have been lit and moved to other colors in H_f; these pixels are more likely to be foreground pixels. If h_nf^k < h_f^k, some pixels have been assigned to the k-th bin of H_f after the lighting modification. An illumination degree Illumination(p) is therefore defined for each pixel p from these histogram changes: the larger the Illumination value, the more likely the pixel is foreground. An illumination function r_p is then defined on this basis: if r_p > ζ, the pixel p is labeled as foreground, with ζ taking the value 0.2.
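The exact formulas for Illumination(p) and r_p appear only as images in the original. The sketch below uses one plausible reading, the normalized shrinkage of the pixel's color bin under the simulated lighting, purely as an illustrative assumption.

```python
def illumination_cue(hist_unlit, hist_lit, k):
    """Hypothetical illumination cue for a pixel whose color falls in bin k:
    the fraction of that bin's pixels that moved to other colors after lighting.
    Pixels whose bin shrinks (h_nf^k > h_f^k) are more likely foreground."""
    if hist_unlit[k] == 0:
        return 0.0
    return max(0.0, (hist_unlit[k] - hist_lit[k]) / hist_unlit[k])

def label_foreground(hist_unlit, hist_lit, k, zeta=0.2):
    """Label the pixel foreground when its cue exceeds the threshold zeta = 0.2."""
    return illumination_cue(hist_unlit, hist_lit, k) > zeta

# Toy histograms over 3 color bins: bin 0 loses most of its pixels under lighting.
h_nf = [100, 50, 10]
h_f = [20, 55, 10]
```

Bin 0 shrinks sharply under the simulated lighting, so its pixels are flagged as likely foreground; bins that grow or stay flat give a zero cue.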
c) The color distribution is a Gaussian mixture model, where N(·) is a Gaussian distribution. The point f_1 on the foreground contour line nearest to the point p is found, and this shortest distance is labeled l_F. Centered on the point f_1, a circle F of radius r_1 · l_F is drawn (r_1 is a distance parameter, 1.0 < r_1 < 10.0). The weight of the nearest known point is set to 1, and the weight decreases as the distance increases. The weighting function of a foreground point f_i in image space decays with ζ_fi, the distance between the point p in image space and the foreground point f_i: when ζ_fi increases, the weight of the point decreases rapidly, so that the weighting function gives greater influence to nearby points.

Claims (4)

1. A Bayesian decision foreground extraction method combined with reflected illumination, characterized by comprising the following steps:
S1, gray-level matching transform: an original image is input, and the user specifies a point light source located on the foreground object; the illumination effect of the point source on the foreground and background objects is computed with the BRDF (bidirectional reflectance distribution function); a power transform is introduced, and the parameters of the transform function are controlled to highlight edge information between the foreground and background of the image;
at the same time, according to the different lighting effects that the point source produces on the foreground and background objects, the illumination function corresponding to each pixel is computed;
S2, a high-pass filter is used to filter and denoise the image after the matching transform of step S1;
S3, the image after the denoising of step S2 is segmented with a traditional watershed algorithm, which comprises the two steps of sorting and flooding (iterative marking); after the iterative marking of the whole image is completed, the complete and continuous edge partition lines of the image are obtained; and
S4, computing the matting parameters: after the image is segmented into the three regions of background, foreground and unknown region, a Bayesian framework is defined to formulate the matting parameters, and a maximum a posteriori problem is solved to compute the F, B and α closest to C, where C is any known color on the image, and F, B and α are respectively the foreground color, the background color and the opacity; the distributions of F and B are fitted with Gaussian distributions, and the distribution of α is fitted with a multilayer perceptron, taking the unknown region computed through a sliding window and the specified region as sample data;
S5, according to the matting parameters F, B and α obtained in step S4, the α-value map of the image is reconstructed; a Markov random field is introduced, the illumination function obtained in step S1 is fused in, and the extraction of the foreground layer is completed by minimizing the constructed energy function.
2. The method of claim 1, characterized in that: step S2 selects a Butterworth high-pass filter with a smooth transition band between the passband and the stopband.
3. The method as claimed in claim 1 or 2, characterized in that: in step S1 the power function g = c · (f + b)^γ is adopted to carry out the gray-level matching transform, where f is the gray level of the original image, g is the gray level after the transform, and c, b and γ are control parameters.
4. A gray-level matching transform method for the Bayesian decision foreground extraction method combined with reflected illumination of claim 1, characterized by comprising the following steps:
an original image is input, and the user specifies a point light source located on the foreground object; the illumination effect of the point source on the foreground and background objects is computed with the BRDF; the power function g = c · (f + b)^γ is introduced in the gray-level matching transform, where f is the gray level of the original image, g is the gray level after the transform, and c, b and γ are control parameters; by controlling the parameters of the transform function, the high-tone range is expanded and the low-tone range is compressed, highlighting edge information between the foreground and background of the image.
CN201310059707.5A 2013-02-26 2013-02-26 Bayesian decision foreground extraction method combined with reflected illumination Active CN103164855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310059707.5A CN103164855B (en) 2013-02-26 2013-02-26 Bayesian decision foreground extraction method combined with reflected illumination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310059707.5A CN103164855B (en) 2013-02-26 2013-02-26 Bayesian decision foreground extraction method combined with reflected illumination

Publications (2)

Publication Number Publication Date
CN103164855A true CN103164855A (en) 2013-06-19
CN103164855B CN103164855B (en) 2016-04-27

Family

ID=48587911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310059707.5A Active CN103164855B (en) 2013-02-26 2013-02-26 Bayesian decision foreground extraction method combined with reflected illumination

Country Status (1)

Country Link
CN (1) CN103164855B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1941850A (en) * 2005-09-29 2007-04-04 中国科学院自动化研究所 Pedestrian tracting method based on principal axis marriage under multiple vedio cameras
WO2012041419A1 (en) * 2010-10-01 2012-04-05 Telefonica, S.A. Method and system for images foreground segmentation in real-time

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YUNG-YU CHUANG: "New Models and Methods for Matting and Compositing", doctoral dissertation *
XIE, RONG ET AL.: "Research on Image Foreground Extraction Techniques", 《Information Science》 *
GU, JINGZI ET AL.: "Moving Target Detection and Localization Based on Power-Law Transform and Region Shrinking Algorithm", 《Applied Science and Technology》 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346806A (en) * 2013-08-09 2015-02-11 联想(北京)有限公司 Image processing method and device
CN103914843A (en) * 2014-04-04 2014-07-09 上海交通大学 Image segmentation method based on watershed algorithm and morphological marker
CN103914843B (en) * 2014-04-04 2018-04-03 上海交通大学 Image segmentation method based on watershed algorithm and morphological marker
CN108156370A (en) * 2017-12-07 2018-06-12 Tcl移动通信科技(宁波)有限公司 Photographing method using a local picture as background, storage medium and mobile terminal
WO2019109990A1 (en) * 2017-12-07 2019-06-13 捷开通讯(深圳)有限公司 Photographing method using local picture as background, storage medium, and mobile terminal
CN110349189A (en) * 2019-05-31 2019-10-18 广州铁路职业技术学院(广州铁路机械学校) Background image update method based on continuous inter-frame difference
CN110298861A (en) * 2019-07-04 2019-10-01 大连理工大学 Fast three-dimensional image segmentation method based on shared sampling
CN110399851A (en) * 2019-07-30 2019-11-01 广东工业大学 Image processing apparatus, method, device and readable storage medium
CN110728061A (en) * 2019-10-16 2020-01-24 郑州迈拓信息技术有限公司 Ceramic surface pore detection method based on Lambertian reflection modeling
CN110728061B (en) * 2019-10-16 2020-12-11 沈纪云 Ceramic surface pore detection method based on Lambertian reflection modeling
CN111696188A (en) * 2020-04-26 2020-09-22 杭州群核信息技术有限公司 Rapid illumination editing method and device for rendered images, and rendering method
CN111696188B (en) * 2020-04-26 2023-10-03 杭州群核信息技术有限公司 Rapid illumination editing method and device for rendered images, and rendering method
CN112118394A (en) * 2020-08-27 2020-12-22 厦门亿联网络技术股份有限公司 Low-light video optimization method and device based on image fusion
CN112118394B (en) * 2020-08-27 2022-02-11 厦门亿联网络技术股份有限公司 Low-light video optimization method and device based on image fusion
CN112132848A (en) * 2020-09-01 2020-12-25 成都运达科技股份有限公司 Preprocessing method based on image layer segmentation and extraction
CN112348826A (en) * 2020-10-26 2021-02-09 陕西科技大学 Interactive liver segmentation method based on geodesic distance and V-net
CN112348826B (en) * 2020-10-26 2023-04-07 陕西科技大学 Interactive liver segmentation method based on geodesic distance and V-net
CN117078838A (en) * 2023-07-07 2023-11-17 上海散爆信息技术有限公司 Object rendering method and device, storage medium and electronic equipment
CN117078838B (en) * 2023-07-07 2024-04-19 上海散爆信息技术有限公司 Object rendering method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN103164855B (en) 2016-04-27

Similar Documents

Publication Publication Date Title
CN103164855B (en) Bayesian decision theory foreground extraction method combined with reflected illumination
CN108986050B (en) Image and video enhancement method based on multi-branch convolutional neural network
CN110335290B (en) Siamese candidate region generation network target tracking method based on attention mechanism
CN109493303B (en) Image defogging method based on generation countermeasure network
CN105894484B (en) HDR reconstruction algorithm based on histogram normalization and super-pixel segmentation
CN110119728A (en) Remote sensing images cloud detection method of optic based on Multiscale Fusion semantic segmentation network
CN109872285A (en) Retinex low-luminance color image enhancement method based on variational methods
CN108053417A (en) Lung segmentation device based on 3D U-Net network with mixed coarse segmentation features
CN107993238A (en) Head-and-shoulder region image segmentation method and device based on attention model
CN101980284A (en) Two-scale sparse representation-based color image noise reduction method
CN103262119A (en) Method and system for segmenting an image
CN110443763B (en) Convolutional neural network-based image shadow removing method
CN112949838B (en) Convolutional neural network based on four-branch attention mechanism and image segmentation method
CN109255758A (en) Image enhancement method based on fully 1*1 convolutional neural networks
CN111695633A (en) Low-illumination target detection method based on RPF-CAM
CN109544694A (en) Virtual-real hybrid modeling method for augmented reality systems based on deep learning
CN104036481B (en) Multi-focus image fusion method based on depth information extraction
Lepcha et al. A deep journey into image enhancement: A survey of current and emerging trends
CN105631405B (en) Background modeling method for intelligent traffic video recognition based on multilevel blocks
CN109829925A (en) Method for extracting a clean foreground in image matting tasks and model training method
CN111062953A (en) Method for identifying parathyroid hyperplasia in ultrasonic image
Feng et al. Low-light image enhancement algorithm based on an atmospheric physical model
CN115393734A (en) SAR image ship contour extraction method based on fast R-CNN and CV model combined method
CN111383759A (en) Automatic pneumonia diagnosis system
CN111798463B (en) Method for automatically segmenting multiple organs in head and neck CT image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Wang Haoqian

Inventor after: Fang Lu

Inventor after: Deng Bowen

Inventor after: Wang Shengjin

Inventor after: Shao Hang

Inventor after: Dai Qionghai

Inventor after: Guo Yuchen

Inventor before: Wang Haoqian

Inventor before: Deng Bowen

Inventor before: Shao Hang

Inventor before: Dai Qionghai

CB03 Change of inventor or designer information
CP01 Change in the name or title of a patent holder

Address after: Tsinghua University, University City, Xili, Nanshan District, Shenzhen 518055, Guangdong

Patentee after: Shenzhen International Graduate School of Tsinghua University

Address before: Tsinghua University, University City, Xili, Nanshan District, Shenzhen 518055, Guangdong

Patentee before: Graduate School at Shenzhen, Tsinghua University

CP01 Change in the name or title of a patent holder