CN104050682A - Image segmentation method fusing color and depth information - Google Patents
Abstract
The invention discloses an image segmentation method fusing color and depth information. The method first segments the input color image with the meanshift algorithm to obtain an over-segmentation region set; it then computes the similarities between all regions, comprising color similarities, depth similarities, and their fusion; next, seed regions of the target and of the background are selected automatically according to the depth image; finally, the regions are merged with the MSRM algorithm to obtain the final segmentation result. When computing inter-region similarities, depth information is dynamically fused in addition to color information, which solves the failure to segment correctly when the target and background colors are similar, i.e. when low-contrast edges exist between objects. The seed regions are selected automatically from the depth information of the image, so the target and background seed regions need not be marked manually and interactively; the region characteristics of the depth image, rather than edge characteristics, determine the seed regions, giving high robustness.
Description
Technical field
The invention belongs to the field of computer vision, and specifically relates to an image segmentation method fusing color and depth information.
Background technology
Image segmentation is the process of dividing an image into several specific regions with distinctive properties and extracting the objects of interest. It is a key step from image processing to image analysis. The objects extracted by segmentation can be used for image semantic recognition, image retrieval, and other fields. Traditional image segmentation methods are generally based on the appearance features of the image, such as color, brightness, texture, shape, and structure. Real objects, however, exist in the three-dimensional world and should be defined by physical connectivity; segmenting an image with the aid of its depth information can effectively overcome over-segmentation and under-segmentation and yields a segmentation result with semantic meaning.
At present, image segmentation methods are numerous. The maximal-similarity-based region merging method proposed in [Document 1] (hereinafter the MSRM algorithm) is fairly simple, adapts to the image content, requires no preset similarity threshold, and can extract object contours from complex backgrounds. The advantage of the MSRM algorithm is that, compared with the classic interactive graph cut algorithm under identical interaction conditions, its segmentation is better. Its limitation is that the user's marks must cover the main characteristic regions, and segmentation fails in the presence of shadows, low-contrast edges, and blurred regions.
In recent years many publications have introduced depth images into the segmentation field. [Document 2] proposes an image object segmentation algorithm based on depth and color information: it first over-segments the target image with the meanshift algorithm, obtains a dense depth map of the stereo pair with a binocular stereo vision algorithm, and chooses from the over-segmentation result, according to depth discontinuity, the seed point set for a further "refined" segmentation; graph cut then assigns labels to regions without seed labels, and adjacent regions that carry different labels but have no depth-discontinuous boundary between them are merged. The limitations of this algorithm are: (1) the global optimization with graph cut uses only color information; (2) the binarized depth-discontinuity edges depend strongly on an experimental threshold, and the edge lines themselves are interrupted and discontinuous, which affects the reliability of seed point selection. [Document 3] proposes a multimodal semantic segmentation method based on color and depth information; it fuses texture and color descriptors with 3D descriptors through a Markov random field model to assign labels to superpixels. The method requires training and is computationally heavy. [Document 4] uses color and depth images to locate objects in a robot vision guidance system, but it applies only when the objects in the image have single, fully consistent colors and are distinguished by their depth difference.
Document 1: Ning J., Zhang L., Zhang D., et al. Interactive image segmentation by maximal similarity based region merging. Pattern Recognition, 2010, 43(2): 445-456;
Document 2: Pi Zhiming, Wang Zengfu. Image object segmentation algorithm fusing depth and color information. Pattern Recognition and Artificial Intelligence, 2013, 26(2): 151-158;
Document 3: Islem Jebari, David Filliat. Color and depth-based superpixels for background and object segmentation. Procedia Engineering, 2012, 41: 1307-1315;
Document 4: José-Juan Hernández-López, Ana-Linnet, et al. Detecting objects using color and depth segmentation with Kinect sensor. Procedia Technology, 2012, 3: 196-204.
Summary of the invention
The object of the present invention is to provide an image segmentation method that can distinguish the target more accurately when the target and background colors are complex or close, and that can select seed regions automatically using the region characteristics of the depth image.
The technical solution adopted by the present invention is an image segmentation method fusing color and depth information, characterized by comprising the following steps:
Step 1: segment the input color image with the meanshift algorithm to obtain the over-segmentation region set G = {G_i}, i = 1, ..., RN, where the subscript i is the region index and RN is the total number of regions;
Step 2: compute the similarity between the regions in G, comprising the color similarity S_c, the depth similarity S_d, and the fusion of S_c and S_d;
Step 3: automatically choose the target and background seed regions according to the depth image;
Step 4: merge the regions with the MSRM algorithm to obtain the final segmentation result.
Preferably, the color similarity between the regions in G described in step 2 is computed as follows: the color similarity S_c of any two regions R and Q in G is defined with the Bhattacharyya coefficient:

S_c(R, Q) = Σ_{u=1..U} √(Hist_R^u · Hist_Q^u)

where Hist_R and Hist_Q are the normalized color histograms of regions R and Q, the superscript u denotes the u-th histogram element, and U is the number of histogram bins.
Preferably, the depth similarity between the regions in G described in step 2 is computed as follows: take the arithmetic mean of the depth values of the pixels in each region of G as that region's depth value, forming the region depth set D = {D_i}, i = 1, ..., RN, with subscript i the region index; the depth similarity S_d of any two regions R and Q in G is defined as:

S_d(R, Q) = 1 - |D_R - D_Q| / (max{D_i} - min{D_i})

where max{D_i}, i = 1, ..., RN, is the maximum of all region depths, and min{D_i}, i = 1, ..., RN, is the minimum of all region depths excluding 0.
Preferably, when taking the arithmetic mean of the pixel depth values in each region of G as that region's depth value, the depths of some pixels in the image cannot be determined for reasons such as occlusion and are filled in the given depth image with the depth value 0; the concrete treatment is: map the over-segmentation region set G onto the depth image; if the depth values of all elements in over-segmented region i are 0, the depth information of the object in this region is unknown, so only the color similarity between this region and its adjacent regions is considered; if the depth values of only some elements in over-segmented region i are 0, then when computing the region depth value D_i, only the pixels in region i with nonzero depth are averaged.
Preferably, the fusion of the color similarity S_c and the depth similarity S_d described in step 2 gives the total similarity:

S = S_c + w · S_d

where the fusion weight w of S_c and S_d is described by a nonlinear Sigmoid curve:

w = A / (1 + e^(-(S_c - B)/C))

where A determines the asymptotic maximum of the Sigmoid curve, and B and C determine its displacement and steepness respectively.
Preferably, step 3 chooses the target and background seed regions automatically according to the depth image as follows: first cluster the elements of the region depth set D with the K-means algorithm, taking the number of classes K = 2, so that they are automatically grouped into two large classes, namely the target and the background; then randomly choose some regions from each of the two classes as the target and background seed regions. This clustering method is simple and removes the need for manual marking required in Ning Jifeng's method.
Compared with the prior art, the advantages of the present invention are:
(1) When computing the similarity between regions, depth information is dynamically fused in addition to color information, so that when the target and background colors in the image are close, i.e. when low-contrast edges exist between objects, they can still be distinguished by their depth difference;
(2) Seed regions are selected automatically from the depth information of the image, without interactive marking of the target and background seed regions; the region characteristics of the depth image, rather than edge characteristics, determine the seed regions, giving good robustness.
Brief description of the drawings
Fig. 1: flow chart of the present invention;
Fig. 2: the Sigmoid curve used in the embodiment of the present invention;
Fig. 3-1: the color image input in the embodiment of the present invention;
Fig. 3-2: the depth image input in the embodiment of the present invention;
Fig. 4: the meanshift segmentation result of the color image in the embodiment of the present invention;
Fig. 5-1: the K-means clustering result of the region depths in the embodiment of the present invention;
Fig. 5-2: the seed regions chosen in the embodiment of the present invention;
Fig. 6-1: the final segmentation result obtained with the inventive method in the embodiment of the present invention;
Fig. 6-2: the final segmentation result obtained with Ning Jifeng's method in the embodiment of the present invention.
Embodiment
To make it easier for those of ordinary skill in the art to understand and implement the present invention, it is described in further detail below with reference to the drawings and an embodiment. It should be appreciated that the embodiment described here serves only to describe and explain the present invention and is not intended to limit it.
Referring to Fig. 1, Fig. 2, Fig. 3-1, Fig. 3-2 and Fig. 4: the present invention takes the segmentation of the potted aloe shown in Fig. 3-1 as an example; Fig. 3-2 is the depth image of the potted aloe, and the information of the depth image is used to segment the color image more accurately. The technical solution adopted by the present invention, an image segmentation method fusing color and depth information, comprises the following steps:
Step 1: within the framework of Ning Jifeng, segment the potted aloe shown in Fig. 3-1 with the meanshift segmentation software EDISON to obtain the over-segmentation region set G = {G_i}, i = 1, ..., RN, where the subscript i is the region index and RN is the total number of regions; Fig. 4 shows the over-segmentation result. All software parameters use their default settings. As can be seen, the mainly textured background is divided into a large number of tiny regions, while the potted aloe is divided into larger region blocks.
Step 2: divide the value range [0, 255] of each of the 3 color channels of the input RGB color image into 16 equal parts, so that the color histogram of each region is computed in a feature space of 16 × 16 × 16 = 4096 dimensions. Normalize the color histograms, then compute the color similarity of two adjacent regions:

S_c(R, Q) = Σ_{u=1..U} √(Hist_R^u · Hist_Q^u) (formula one)

where Hist_R and Hist_Q are the normalized color histograms of any two regions R and Q in G, and the superscript u denotes the u-th histogram element. Create a two-dimensional similarity matrix SM of dimension RN × RN. If two regions are not adjacent they cannot be merged, so set sm_ij = 0; each region is maximally similar to itself, so the diagonal elements are sm_ii = 1; for adjacent regions compute the color similarity by formula one, whose range is [0, 1).
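A minimal sketch of this step, assuming NumPy and 16 bins per RGB channel; the function names `region_histogram` and `color_similarity` are illustrative, not the patent's own code:

```python
import numpy as np

def region_histogram(pixels, bins=16):
    """Normalized 16x16x16-bin RGB histogram of one over-segmented region.
    pixels: (N, 3) uint8 RGB values of the region's pixels."""
    idx = (pixels // (256 // bins)).astype(int)           # per-channel bin index
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()                              # normalize to sum 1

def color_similarity(hist_r, hist_q):
    """Bhattacharyya coefficient of two normalized histograms (formula one)."""
    return float(np.sum(np.sqrt(hist_r * hist_q)))
```

Identical regions give similarity 1, and regions with no shared color bins give 0, matching the stated range.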
Take the arithmetic mean of the depth values of the pixels in each region of G as that region's depth value, forming the set D = {D_i}, i = 1, ..., RN, with subscript i the region index. Suppose region i contains n pixels with depth values {x_1, ..., x_n}; then

D_i = (x_1 + ... + x_n) / n (formula two)

Define the depth similarity S_d of two regions R and Q:

S_d(R, Q) = 1 - |D_R - D_Q| / (max{D_i} - min{D_i}) (formula three)

In formula three, max{D_i}, i = 1, ..., RN, is the maximum of all region depths, and min{D_i}, i = 1, ..., RN, is the minimum of all region depths excluding 0. If the depth values of all elements in over-segmented region i are 0, the region is considered to have no depth information, and only its color similarity with adjacent regions is used; if the depth values of only some elements in over-segmented region i are 0, then when computing the region depth value D_i, only the pixels with nonzero depth are averaged. From formula three, the range of the depth similarity S_d is [0, 1]; the larger the depth difference between two regions, the smaller their depth similarity.
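The region depth of formula two and the depth similarity of formula three can be sketched as follows, assuming a depth value of 0 marks an invalid pixel; the function names are illustrative:

```python
import numpy as np

def region_depth(depths):
    """Mean depth of a region, ignoring zero (invalid) pixels; 0 if all are invalid."""
    valid = depths[depths > 0]
    return float(valid.mean()) if valid.size else 0.0

def depth_similarity(d_r, d_q, d_all):
    """S_d = 1 - |D_R - D_Q| / (max - min), with the minimum taken over nonzero depths."""
    nonzero = [d for d in d_all if d > 0]
    span = max(d_all) - min(nonzero)
    return 1.0 - abs(d_r - d_q) / span
```

For example, with region depths {0, 2, 4, 6} the span is 6 - 2 = 4, so two regions at depths 2 and 4 have similarity 1 - 2/4 = 0.5.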
Step 3: fuse the color similarity S_c and the depth similarity S_d; the total similarity after fusion is:

S = S_c + w · S_d (formula four)
Referring to Fig. 2, the fusion weight w of S_c and S_d is described by a nonlinear Sigmoid curve:

w = A / (1 + e^(-(S_c - B)/C)) (formula five)

where A determines the asymptotic maximum of the Sigmoid curve, and B and C determine its displacement and steepness respectively. Experiments show that the extracted object contour is better with A = 1, B = 0.2, C = 0.5. Since the color similarity S_c ∈ [0, 1], Fig. 2 shows only the corresponding part of the Sigmoid curve for A = 1, B = 0.2, C = 0.5.
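A sketch of the fused similarity with A = 1, B = 0.2, C = 0.5; the logistic form below is an assumed parameterization, chosen only to be consistent with the description of A as the asymptote, B as the displacement, and C as the steepness:

```python
import math

A, B, C = 1.0, 0.2, 0.5   # values reported in the embodiment

def fusion_weight(s_c, a=A, b=B, c=C):
    """Sigmoid weight of the depth term; form assumed from the A/B/C description."""
    return a / (1.0 + math.exp(-(s_c - b) / c))

def total_similarity(s_c, s_d):
    """Formula four: S = S_c + w * S_d, with w depending on the color similarity."""
    return s_c + fusion_weight(s_c) * s_d
```

The weight grows with S_c, so regions whose colors are nearly indistinguishable lean more heavily on the depth similarity, which is the stated purpose of the dynamic fusion.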
Step 4: cluster the elements of D = {D_i}, i = 1, ..., RN, with the K-means algorithm, taking K = 2, into two large classes: the target class R_o and the background class R_b. Suppose R_o contains m regions and R_b contains n regions, with m + n = RN. Randomly choose m1 regions from the target class and n1 regions from the background class as the target and background seed regions, where m1 < m and n1 < n. The concrete numbers m1 and n1 can be set by the user in a program; in practice, taking m1 and n1 as roughly one fortieth of m and n works well, and there is no need to take too many seed regions. Fig. 5-1 shows the region depth clustering result of this example, with white for the target class and black for the background class. Then 5 seed regions are chosen at random from the target class (containing 191 over-segmented regions) and 27 from the background class (containing 1267 over-segmented regions); because the individual regions in the background class are smaller and more numerous, somewhat more seed regions are taken from it. As shown in Fig. 5-2, green areas are target seed regions and blue areas are background seed regions.
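A minimal sketch of this seed selection, assuming the region depths are held in a plain Python list; `kmeans_1d` and `pick_seeds` are illustrative helpers, not the patent's code:

```python
import random

def kmeans_1d(values, iters=50):
    """Two-class 1-D k-means over region depths.
    Returns a boolean label per value (True = cluster with the larger centroid)."""
    c0, c1 = min(values), max(values)          # simple extreme-value initialization
    for _ in range(iters):
        labels = [abs(v - c1) < abs(v - c0) for v in values]
        g0 = [v for v, lab in zip(values, labels) if not lab]
        g1 = [v for v, lab in zip(values, labels) if lab]
        if g0: c0 = sum(g0) / len(g0)          # update centroids
        if g1: c1 = sum(g1) / len(g1)
    return labels

def pick_seeds(region_ids, frac=1 / 40, rng=None):
    """Randomly sample about 1/40 of a cluster's regions as seeds (at least one)."""
    rng = rng or random.Random(0)
    k = max(1, round(len(region_ids) * frac))
    return rng.sample(region_ids, k)
```

With 191 target and 1267 background regions, the 1/40 rule gives roughly 5 and 32 seeds, in line with the 5 and 27 chosen in the example.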
Step 5: merge the regions with the MSRM algorithm to obtain the final segmentation result.
Let M_B be the set of marked background regions, M_O the set of marked target regions, and N the set of unmarked regions. The basic procedure of the MSRM algorithm is as follows:
Step 5.1: merge unmarked regions into marked background regions. For each region B ∈ M_B, find its set of adjacent regions S_B = {A_i}. For each A_i with A_i ∉ M_B and A_i ∉ M_O, find its neighbor set S_Ai; obviously B ∈ S_Ai. Compute the similarity between A_i and each region in S_Ai; if B is the region most similar to A_i, merge A_i into B. Update the sets M_B and N. Repeat until no region in M_B can find a new region to merge.
Step 5.2: adaptively merge the regions in N. Likewise, for each region P ∈ N, find its neighbor set S_P = {H_i}. For each H_i with H_i ∉ M_B, H_i ∉ M_O and H_i ∈ N, find its neighbor set S_Hi; obviously P ∈ S_Hi. Compute the similarity between H_i and each region in S_Hi; if P is the region most similar to H_i, merge H_i into P. Update the set N. Repeat until no region in N can find a new region to merge.
Steps 5.1 and 5.2 are looped until no unmarked region can be merged into M_B and no further merging is possible within N. The algorithm progressively merges unmarked regions into the background, and the remaining unmarked regions are then merged into the target automatically.
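A heavily simplified sketch of this merging loop, tracking labels only (the real MSRM also recomputes the merged regions' histograms after each merge); the region-adjacency dict and similarity table below are illustrative:

```python
def msrm_merge(neighbors, sim, background, target):
    """Simplified MSRM pass: an unmarked region joins a marked background region
    when that region is its maximally similar neighbor; repeat until stable.
    neighbors: dict region -> set of adjacent regions.
    sim: dict frozenset({a, b}) -> similarity score."""
    unmarked = set(neighbors) - background - target
    changed = True
    while changed:
        changed = False
        for b in list(background):
            for a in list(neighbors[b] & unmarked):
                best = max(neighbors[a], key=lambda n: sim[frozenset((a, n))])
                if best == b:                  # b is a's most similar neighbor
                    background.add(a)          # absorb a into the background label
                    unmarked.discard(a)
                    changed = True
    target |= unmarked                         # leftovers belong to the target
    return background, target
```

On a tiny three-region chain where the middle region resembles the background seed far more than the target seed, the middle region is absorbed into the background, as the full algorithm would do.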
With identical seed region choices, Fig. 6-1 is the final segmentation result obtained with the inventive method, and Fig. 6-2 is the final segmentation result obtained with the method proposed by Ning Jifeng et al. It is easy to see that, because the curtain in the background bears a pale-green pattern close to the color of the aloe leaves, the target regions not chosen as seeds are all wrongly merged into the background class by the latter method, whereas the present invention uses the depth difference to separate the background accurately from the target.
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should be understood that the above description of the preferred embodiment is relatively detailed and therefore should not be regarded as limiting the scope of patent protection of the present invention; under the inspiration of the present invention, and without departing from the scope protected by the claims of the present invention, those of ordinary skill in the art may also make substitutions or variations, all of which fall within the protection scope of the present invention; the scope of protection requested shall be determined by the appended claims.
Claims (6)
1. An image segmentation method fusing color and depth information, characterized by comprising the following steps:
Step 1: segment the input color image with the meanshift algorithm to obtain the over-segmentation region set G = {G_i}, i = 1, ..., RN, where the subscript i is the region index and RN is the total number of regions;
Step 2: compute the similarity between the regions in G, comprising the color similarity S_c, the depth similarity S_d, and the fusion of S_c and S_d;
Step 3: automatically choose the target and background seed regions according to the depth image;
Step 4: merge the regions with the MSRM algorithm to obtain the final segmentation result.
2. The image segmentation method fusing color and depth information according to claim 1, characterized in that: the color similarity between the regions in G described in step 2 is computed as follows: the color similarity S_c of any two regions R and Q in G is defined with the Bhattacharyya coefficient:
S_c(R, Q) = Σ_{u=1..U} √(Hist_R^u · Hist_Q^u)
where Hist_R and Hist_Q are the normalized color histograms of regions R and Q, the superscript u denotes the u-th histogram element, and U is the number of histogram bins.
3. The image segmentation method fusing color and depth information according to claim 1, characterized in that: the depth similarity between the regions in G described in step 2 is computed as follows: take the arithmetic mean of the depth values of the pixels in each region of G as that region's depth value, forming the region depth set D = {D_i}, i = 1, ..., RN, with subscript i the region index; the depth similarity S_d of any two regions R and Q in G is defined as:
S_d(R, Q) = 1 - |D_R - D_Q| / (max{D_i} - min{D_i})
where max{D_i}, i = 1, ..., RN, is the maximum of all region depths, and min{D_i}, i = 1, ..., RN, is the minimum of all region depths excluding 0.
4. The image segmentation method fusing color and depth information according to claim 3, characterized in that: when taking the arithmetic mean of the pixel depth values in each region of G as that region's depth value, the depths of some pixels in the image cannot be determined for reasons such as occlusion and are filled in the given depth image with the depth value 0; the concrete treatment is: map the over-segmentation region set G onto the depth image; if the depth values of all elements in over-segmented region i are 0, the depth information of the object in this region is unknown, so only the color similarity between this region and its adjacent regions is considered; if the depth values of only some elements in over-segmented region i are 0, then when computing the region depth value D_i, only the pixels in region i with nonzero depth are averaged.
5. The image segmentation method fusing color and depth information according to claim 1, characterized in that: the fusion of the color similarity S_c and the depth similarity S_d described in step 2 gives the total similarity:
S = S_c + w · S_d
where the fusion weight w of S_c and S_d is described by a nonlinear Sigmoid curve:
w = A / (1 + e^(-(S_c - B)/C))
where A determines the asymptotic maximum of the Sigmoid curve, and B and C determine its displacement and steepness respectively.
6. The image segmentation method fusing color and depth information according to claim 1, characterized in that: step 3 chooses the target and background seed regions automatically according to the depth image as follows: first cluster the elements of the region depth set D with the K-means algorithm, taking the number of classes K = 2, so that they are automatically grouped into two large classes, namely the target and the background; then randomly choose some regions from each of the two classes as the target and background seed regions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410324569.3A CN104050682B (en) | 2014-07-09 | 2014-07-09 | Image segmentation method fusing color and depth information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104050682A true CN104050682A (en) | 2014-09-17 |
CN104050682B CN104050682B (en) | 2017-01-18 |
Legal Events

Code | Title | Description
---|---|---
C06 / PB01 | Publication |
C10 / SE01 | Entry into substantive examination |
C14 / GR01 | Grant of patent or utility model |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170118; Termination date: 20170709