CN106355583A - Image processing method and device - Google Patents
- Publication number
- CN106355583A CN106355583A CN201610759871.0A CN201610759871A CN106355583A CN 106355583 A CN106355583 A CN 106355583A CN 201610759871 A CN201610759871 A CN 201610759871A CN 106355583 A CN106355583 A CN 106355583A
- Authority
- CN
- China
- Prior art keywords
- point
- image
- background
- foreground
- point set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20164—Salient point detection; Corner detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image processing method and device. The method comprises the following steps: acquiring a first image and a second image; performing sparse sampling on the first image to obtain a group of sampled points; matching the sampled points in the second image to obtain a group of matched point pairs; calculating the disparity value of the two points in each matched point pair; determining a click point and calculating its disparity value between the first image and the second image; calculating the difference between the disparity value of the click point and the disparity value of each matched point pair, and labeling a foreground sample point set and a background sample point set according to the difference; extending, with the foreground sample point set and the background sample point set as reference, to obtain a foreground point set and a background point set; and performing matting with a sparse-point-based matting algorithm according to the foreground point set and the background point set. Only the depth information of some sparse points needs to be solved during matting, so the matting speed is greatly increased.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image processing method and device.
Background technology
Image matting refers to the technique of accurately extracting a foreground object from an image or video sequence. As a key technology in the visual effects field, matting is widely used in areas such as image editing and film production. Because matting is an under-constrained problem, solving it requires additional constraints. Traditional matting methods use a trimap as the additional constraint; however, producing a trimap requires a large amount of user interaction and is very time-consuming.
Content of the invention
It is an object of the present invention to overcome the deficiencies of the prior art and provide an image processing method and device. The method only needs to solve the depth information of some sparse points during matting, which can greatly improve the matting speed.
The object of the present invention is achieved through the following technical solution: an image processing method, comprising the following steps: obtaining a first image and a second image; performing sparse sampling on the first image to obtain a group of sampled points; matching the sampled points in the second image to obtain a group of matched point pairs; calculating the disparity value of the two points in each matched point pair; determining a click point, the position of which is defined as foreground, and calculating the disparity value of the click point between the first image and the second image; calculating the difference between the disparity value of the click point and the disparity value of each matched point pair, and labeling a foreground sample point set and a background sample point set according to the difference, wherein if the difference is less than a first threshold the matched point is labeled a foreground point, and if the difference is greater than a second threshold the matched point is labeled a background point; extending, with the foreground sample points and background sample points as reference, to obtain a foreground point set and a background point set; and performing matting with a sparse-point-based matting algorithm according to the foreground point set and the background point set.
The method further comprises: performing sparse sampling on the second image, and matching each of its sampled points in the first image, to obtain a plurality of matched point pairs.
The first threshold is less than the second threshold.
The method of extending to obtain the foreground point set and the background point set comprises: performing vectorization on the foreground sample points and background sample points to obtain vector coordinates; calculating the vector coordinate of a point to be extended; and comparing the vector coordinate of the point to be extended with the vector coordinates of the foreground sample points and of the background points respectively, so as to determine the point to be extended as a foreground point or a background point.
The vector coordinate comprises: a color coordinate, a gradient coordinate, a distance coordinate and a depth coordinate.
The method further comprises performing registration on the first image and the second image.
After registration, a composite image is generated; when extending the foreground point set and the background point set, the points to be extended are located on the composite image.
When extending the foreground point set and the background point set, the points to be extended are located in the first image.
The method further comprises: screening the matched point pairs to exclude mismatched point pairs.
The method further comprises: the matting algorithm is a KNN (k-nearest-neighbor) matting algorithm.
An image processing device, comprising: an image acquisition module, for obtaining a first image and a second image; a sampling module, for performing sparse sampling on the first image to obtain a group of sampled points; a matching module, for matching the sampled points in the second image to obtain a group of matched point pairs; a disparity calculation module, for calculating the disparity value of the two points in each matched point pair; a click point generation module, for determining a click point, the position of which is defined as foreground, and calculating the disparity value of the click point between the first image and the second image; a classification labeling module, for calculating the difference between the disparity value of the click point and the disparity value of each matched point pair, and labeling a foreground sample point set and a background sample point set according to the difference, wherein if the difference is less than a first threshold the matched point is labeled a foreground point, and if the difference is greater than a second threshold the matched point is labeled a background point; an extension module, for extending, with the foreground sample points and background sample points as reference, to obtain a foreground point set and a background point set; and a matting module, for performing matting with a sparse-point-based matting algorithm according to the foreground point set and the background point set.
The beneficial effects of the invention are as follows: the present invention uses the depth information of the image as prior knowledge, and performs matting according to the relation between the depth information of the click point and the depth information of the other pixels; during matting, the depth information of the full image need not be solved, and only the depth information of some sparse points is required, thereby drastically improving the matting speed.
Brief description of the drawings
Fig. 1 is a flow chart of the image processing method of the present invention;
Fig. 2 is a schematic diagram of the image processing device of the present invention.
Specific embodiment
The technical solution of the present invention is described in further detail below in conjunction with the accompanying drawings, but the protection scope of the present invention is not limited to the following description.
As shown in Fig. 1, an image processing method comprises the following steps:
Obtain a first image and a second image, and perform registration on the first image and the second image.
Perform sparse sampling on the first image to obtain a group of sampled points, and match the sampled points in the second image to obtain a group of matched point pairs; likewise perform sparse sampling on the second image and match each of its sampled points in the first image, obtaining a plurality of matched point pairs; then screen the matched point pairs to exclude mismatched point pairs.
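The matching and screening step above can be sketched as follows. This is a minimal illustration, not part of the patent disclosure: it assumes feature descriptors are compared by Euclidean distance, and uses a mutual (cross-check) nearest-neighbor test to exclude mismatched pairs; the function name and toy data are hypothetical.

```python
import numpy as np

def mutual_matches(desc1, desc2):
    """Match each descriptor in desc1 to its nearest neighbor in desc2
    (Euclidean distance) and keep only mutually consistent pairs,
    discarding likely mismatches."""
    # Pairwise squared distances between the two descriptor sets.
    d = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    fwd = d.argmin(axis=1)  # best match in image 2 for each point in image 1
    bwd = d.argmin(axis=0)  # best match in image 1 for each point in image 2
    # Keep pair (i, j) only if i -> j and j -> i agree (cross-check).
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]

# Toy descriptors: points 0 and 1 match cleanly; point 2 has no partner.
a = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 9.0]])
b = np.array([[0.1, 0.0], [5.1, 5.0]])
print(mutual_matches(a, b))  # -> [(0, 0), (1, 1)]
```

In practice a ratio test or a geometric (epipolar) check could replace or supplement the cross-check; the patent only requires that mismatched pairs be excluded, not a specific test.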
Calculate the disparity value of the two points in each matched point pair.
Determine a click point, the position of which is defined as foreground, and calculate the disparity value of the click point between the first image and the second image.
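The patent does not spell out the disparity formula. For a registered (rectified) image pair, the disparity of a matched pair is commonly taken as the horizontal coordinate difference between the two matched points, with larger disparity meaning a closer scene point; the following one-liner is only an assumed illustration of that convention.

```python
def disparity(p_first, p_second):
    """Disparity of a matched point pair between a registered first and
    second image, assumed here to be the horizontal coordinate difference.
    Points are (x, y) pixel coordinates."""
    (x1, _y1), (x2, _y2) = p_first, p_second
    return x1 - x2

print(disparity((120, 45), (96, 45)))  # -> 24
```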
Calculate the difference between the disparity value of the click point and the disparity value of each matched point pair, and label a foreground sample point set and a background sample point set according to the difference, wherein if the difference is less than a first threshold the matched point is labeled a foreground point, and if the difference is greater than a second threshold the matched point is labeled a background point.
The first threshold is less than the second threshold.
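The two-threshold labeling rule just described can be sketched as below; the function name, data layout and threshold values are illustrative, not from the patent. Pairs whose difference falls between the two thresholds are left unlabeled, which matches the text (only the clearly near and clearly far points are labeled).

```python
def label_samples(click_disparity, pairs, t1, t2):
    """Split matched point pairs into foreground / background sample sets by
    comparing each pair's disparity with the disparity at the click point.
    `pairs` is a list of (point, disparity); t1 < t2 are the two thresholds."""
    assert t1 < t2, "the first threshold must be less than the second"
    fg, bg = [], []
    for pt, disp in pairs:
        diff = abs(disp - click_disparity)
        if diff < t1:        # close in depth to the clicked foreground point
            fg.append(pt)
        elif diff > t2:      # far in depth from the clicked point
            bg.append(pt)
        # pairs with t1 <= diff <= t2 remain unlabeled
    return fg, bg

# Hypothetical pairs: two near the click point's depth, one far away.
pairs = [((10, 10), 30.0), ((40, 12), 29.0), ((5, 80), 8.0)]
fg, bg = label_samples(30.0, pairs, t1=3.0, t2=10.0)
print(fg, bg)  # -> [(10, 10), (40, 12)] [(5, 80)]
```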
Taking the foreground sample points and background sample points as reference, extend them to obtain a foreground point set and a background point set.
The method of extending to obtain the foreground point set and the background point set comprises: performing vectorization on the foreground sample points and background sample points to obtain vector coordinates; calculating the vector coordinate of a point to be extended; and comparing the vector coordinate of the point to be extended with the vector coordinates of the foreground sample points and of the background points respectively, so as to determine the point to be extended as a foreground point or a background point.
The vector coordinate comprises: a color coordinate, a gradient coordinate, a distance coordinate and a depth coordinate.
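A minimal sketch of the comparison step, assuming the four coordinates (color, gradient, distance, depth) are stacked into one vector and the nearer sample set under Euclidean distance decides the label; the function name, the distance metric and the toy values are assumptions, since the patent does not specify how the comparison is performed.

```python
import numpy as np

def classify_point(vec, fg_vecs, bg_vecs):
    """Assign a point to be extended to the foreground or background set by
    comparing its vector coordinate with the nearest foreground and nearest
    background sample vectors (Euclidean distance)."""
    d_fg = min(np.linalg.norm(vec - v) for v in fg_vecs)
    d_bg = min(np.linalg.norm(vec - v) for v in bg_vecs)
    return "foreground" if d_fg <= d_bg else "background"

# Toy 4-D vectors: [color, gradient, distance, depth], values made up.
fg = [np.array([0.9, 0.1, 0.2, 0.8]), np.array([0.8, 0.2, 0.3, 0.7])]
bg = [np.array([0.1, 0.0, 0.9, 0.1])]
print(classify_point(np.array([0.85, 0.15, 0.25, 0.75]), fg, bg))  # -> foreground
```

In a real implementation the four coordinate types would likely be normalized or weighted before stacking, since they live on different scales; the patent leaves this open.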
After registration, a composite image is generated; when extending the foreground point set and the background point set, the points to be extended are located on the composite image.
Alternatively, when extending the foreground point set and the background point set, the points to be extended are located in the first image.
According to the foreground point set and the background point set, perform matting with a sparse-point-based matting algorithm; the matting algorithm may be a KNN (k-nearest-neighbor) matting algorithm.
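KNN matting proper builds a k-nearest-neighbor affinity over pixel features and solves a sparse linear system for the alpha matte; the patent only names the algorithm. As a rough stand-in to illustrate the idea only, one can estimate a pixel's alpha by voting among its k nearest labeled samples in feature space; everything below (names, data, the voting rule itself) is a simplification, not the patented method.

```python
import numpy as np

def knn_alpha(vec, fg_vecs, bg_vecs, k=3):
    """Crude illustration of KNN-style matting: alpha for one pixel is the
    fraction of its k nearest labeled samples (in feature space) that are
    foreground. Real KNN matting solves a sparse Laplacian system instead."""
    labeled = [(v, 1.0) for v in fg_vecs] + [(v, 0.0) for v in bg_vecs]
    nearest = sorted(labeled, key=lambda p: np.linalg.norm(vec - p[0]))
    return sum(alpha for _, alpha in nearest[:k]) / k

# Hypothetical 2-D feature vectors for labeled foreground/background samples.
fg_feats = [np.array([1.0, 1.0]), np.array([0.95, 1.0])]
bg_feats = [np.array([0.0, 0.0]), np.array([0.1, 0.0])]
# A pixel near the foreground cluster: 2 of its 3 nearest samples are foreground.
print(round(knn_alpha(np.array([0.9, 0.9]), fg_feats, bg_feats, k=3), 3))  # -> 0.667
```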
As shown in Fig. 2, an image processing device comprises: an image acquisition module, for obtaining a first image and a second image; a sampling module, for performing sparse sampling on the first image to obtain a group of sampled points; a matching module, for matching the sampled points in the second image to obtain a group of matched point pairs; a disparity calculation module, for calculating the disparity value of the two points in each matched point pair; a click point generation module, for determining a click point, the position of which is defined as foreground, and calculating the disparity value of the click point between the first image and the second image; a classification labeling module, for calculating the difference between the disparity value of the click point and the disparity value of each matched point pair, and labeling a foreground sample point set and a background sample point set according to the difference, wherein if the difference is less than a first threshold the matched point is labeled a foreground point, and if the difference is greater than a second threshold the matched point is labeled a background point; an extension module, for extending, with the foreground sample points and background sample points as reference, to obtain a foreground point set and a background point set; and a matting module, for performing matting with a sparse-point-based matting algorithm according to the foreground point set and the background point set.
The above is only a preferred embodiment of the present invention. It should be understood that the present invention is not limited to the form disclosed herein, which is not to be taken as excluding other embodiments; it may be used in various other combinations, modifications and environments, and may be modified within the scope contemplated herein by the techniques or knowledge of the above teaching or the related art. Changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the present invention shall all fall within the protection scope of the appended claims.
Claims (11)
1. An image processing method, characterized by comprising the following steps:
obtaining a first image and a second image;
performing sparse sampling on the first image to obtain a group of sampled points;
matching the sampled points in the second image to obtain a group of matched point pairs;
calculating the disparity value of the two points in each matched point pair;
determining a click point, the position of which is defined as foreground, and calculating the disparity value of the click point between the first image and the second image;
calculating the difference between the disparity value of the click point and the disparity value of each matched point pair, and labeling a foreground sample point set and a background sample point set according to the difference, wherein if the difference is less than a first threshold the matched point is labeled a foreground point, and if the difference is greater than a second threshold the matched point is labeled a background point;
extending, with the foreground sample points and background sample points as reference, to obtain a foreground point set and a background point set; and
performing matting with a sparse-point-based matting algorithm according to the foreground point set and the background point set.
2. The image processing method according to claim 1, characterized in that the method further comprises:
performing sparse sampling on the second image, and matching each of its sampled points in the first image, to obtain a plurality of matched point pairs.
3. The image processing method according to claim 1, characterized in that the first threshold is less than the second threshold.
4. The image processing method according to claim 1, characterized in that the method of extending to obtain the foreground point set and the background point set comprises:
performing vectorization on the foreground sample points and background sample points to obtain vector coordinates;
calculating the vector coordinate of a point to be extended; and
comparing the vector coordinate of the point to be extended with the vector coordinates of the foreground sample points and of the background points respectively, so as to determine the point to be extended as a foreground point or a background point.
5. The image processing method according to claim 4, characterized in that the vector coordinate comprises: a color coordinate, a gradient coordinate, a distance coordinate and a depth coordinate.
6. The image processing method according to claim 1, characterized in that the method further comprises performing registration on the first image and the second image.
7. The image processing method according to claim 6, characterized in that after registration a composite image is generated, and when extending the foreground point set and the background point set, the points to be extended are located on the composite image.
8. The image processing method according to claim 1, characterized in that when extending the foreground point set and the background point set, the points to be extended are located in the first image.
9. The image processing method according to claim 1, characterized in that the method further comprises: screening the matched point pairs to exclude mismatched point pairs.
10. The image processing method according to claim 1, characterized in that the matting algorithm is a KNN (k-nearest-neighbor) matting algorithm.
11. An image processing device, characterized by comprising:
an image acquisition module, for obtaining a first image and a second image;
a sampling module, for performing sparse sampling on the first image to obtain a group of sampled points;
a matching module, for matching the sampled points in the second image to obtain a group of matched point pairs;
a disparity calculation module, for calculating the disparity value of the two points in each matched point pair;
a click point generation module, for determining a click point, the position of which is defined as foreground, and calculating the disparity value of the click point between the first image and the second image;
a classification labeling module, for calculating the difference between the disparity value of the click point and the disparity value of each matched point pair, and labeling a foreground sample point set and a background sample point set according to the difference, wherein if the difference is less than a first threshold the matched point is labeled a foreground point, and if the difference is greater than a second threshold the matched point is labeled a background point;
an extension module, for extending, with the foreground sample points and background sample points as reference, to obtain a foreground point set and a background point set; and
a matting module, for performing matting with a sparse-point-based matting algorithm according to the foreground point set and the background point set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610759871.0A CN106355583A (en) | 2016-08-30 | 2016-08-30 | Image processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106355583A true CN106355583A (en) | 2017-01-25 |
Family
ID=57856335
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610759871.0A Pending CN106355583A (en) | 2016-08-30 | 2016-08-30 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106355583A (en) |
- 2016-08-30: CN application CN201610759871.0A filed in China; published as CN106355583A; status: active, Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9223404B1 (en) * | 2012-01-27 | 2015-12-29 | Amazon Technologies, Inc. | Separating foreground and background objects in captured images |
CN102917175A (en) * | 2012-09-13 | 2013-02-06 | 西北工业大学 | Sheltering multi-target automatic image matting method based on camera array synthetic aperture imaging |
CN103871051A (en) * | 2014-02-19 | 2014-06-18 | 小米科技有限责任公司 | Image processing method, device and electronic equipment |
CN104616286A (en) * | 2014-12-17 | 2015-05-13 | 浙江大学 | Fast semi-automatic multi-view depth restoring method |
CN105809716A (en) * | 2016-03-07 | 2016-07-27 | 南京邮电大学 | Superpixel and three-dimensional self-organizing background subtraction algorithm-combined foreground extraction method |
Non-Patent Citations (2)
Title |
---|
ZHOU WEI: "Research on a Real-Time Automatic Image Matting Method", JOURNAL OF SOUTHWEST CHINA NORMAL UNIVERSITY (《西南师范大学学报》) *
CHEN JIAKUN ET AL.: "An Improved Sparse Matching Algorithm for Stereo Image Matching", COMPUTER TECHNOLOGY AND DEVELOPMENT (《计算机技术与发展》) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110148102A (en) * | 2018-02-12 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Image composition method, ad material synthetic method and device |
CN110148102B (en) * | 2018-02-12 | 2022-07-15 | 腾讯科技(深圳)有限公司 | Image synthesis method, advertisement material synthesis method and device |
CN108961322A (en) * | 2018-05-18 | 2018-12-07 | 辽宁工程技术大学 | A mismatching elimination method suitable for landing sequence images |
CN108961322B (en) * | 2018-05-18 | 2021-08-10 | 辽宁工程技术大学 | Mismatching elimination method suitable for landing sequence images |
CN110751668A (en) * | 2019-09-30 | 2020-02-04 | 北京迈格威科技有限公司 | Image processing method, device, terminal, electronic equipment and readable storage medium |
CN110751668B (en) * | 2019-09-30 | 2022-12-27 | 北京迈格威科技有限公司 | Image processing method, device, terminal, electronic equipment and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105528785B (en) | A kind of binocular vision image solid matching method | |
CN110264416B (en) | Sparse point cloud segmentation method and device | |
CN109308719B (en) | Binocular parallax estimation method based on three-dimensional convolution | |
CN105374019B (en) | A kind of more depth map fusion methods and device | |
CN105956539B (en) | A kind of Human Height measurement method of application background modeling and Binocular Vision Principle | |
CN103310421B (en) | The quick stereo matching process right for high-definition image and disparity map acquisition methods | |
CN102651135B (en) | Optimized direction sampling-based natural image matting method | |
CN108470356A (en) | A kind of target object fast ranging method based on binocular vision | |
TWI497450B (en) | Visual object tracking method | |
CN112801074B (en) | Depth map estimation method based on traffic camera | |
CN102831582A (en) | Method for enhancing depth image of Microsoft somatosensory device | |
CN110276264A (en) | A kind of crowd density estimation method based on foreground segmentation figure | |
CN101765019B (en) | Stereo matching algorithm for motion blur and illumination change image | |
CN103177260B (en) | A kind of coloured image boundary extraction method | |
CN104182968B (en) | The fuzzy moving-target dividing method of many array optical detection systems of wide baseline | |
CN110110793B (en) | Binocular image rapid target detection method based on double-current convolutional neural network | |
CN106355583A (en) | Image processing method and device | |
CN108021857B (en) | Building detection method based on unmanned aerial vehicle aerial image sequence depth recovery | |
CN111105451B (en) | Driving scene binocular depth estimation method for overcoming occlusion effect | |
CN107909611A (en) | A kind of method using differential geometric theory extraction space curve curvature feature | |
CN104200453A (en) | Parallax image correcting method based on image segmentation and credibility | |
CN107909543A (en) | A kind of flake binocular vision Stereo matching space-location method | |
CN103337064A (en) | Method for removing mismatching point in image stereo matching | |
CN103646397B (en) | Real-time synthetic aperture perspective imaging method based on multisource data fusion | |
CN103714544B (en) | A kind of optimization method based on SIFT feature Point matching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170125 |