CN106683173B - Method for improving the density of 3D reconstruction point clouds based on neighborhood block matching - Google Patents


Info

Publication number
CN106683173B
CN106683173B (application CN201611201364.1A)
Authority
CN
China
Prior art keywords
point
image
matching
cloud
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611201364.1A
Other languages
Chinese (zh)
Other versions
CN106683173A (en)
Inventor
宋锐
田野
李星霓
贾媛
李云松
王养利
许全优
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Xiandian Tongyuan Information Technology Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201611201364.1A priority Critical patent/CN106683173B/en
Publication of CN106683173A publication Critical patent/CN106683173A/en
Application granted granted Critical
Publication of CN106683173B publication Critical patent/CN106683173B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Abstract

The invention discloses a method for improving the density of 3D reconstruction point clouds based on neighborhood block matching, comprising: obtaining a coarse, sparse object point cloud using an image-sequence-based 3D reconstruction algorithm, together with the transformation matrix of the camera in 3D space at the time each frame was captured; reprocessing the original images and performing dense feature matching between images with a neighborhood-based block matching algorithm; checking the validity of the resulting dense feature point pairs against the recovered camera positions in space, and mapping the feature points that pass the check to their corresponding positions in the 3D point cloud; and applying an outlier-removal filter based on the object contour to the resulting cloud and remapping the point colors, yielding a dense point cloud of far higher quality than the original. The invention obtains dense point clouds of quality well above traditional algorithms, markedly improves the results of the original algorithm and the reconstruction quality, and is highly general and strongly robust.

Description

Method for improving the density of 3D reconstruction point clouds based on neighborhood block matching
Technical field
The invention belongs to the technical field of computer vision, and more particularly relates to a method for improving the density of 3D reconstruction point clouds based on neighborhood block matching.
Background art
Image-sequence-based 3D reconstruction touches many disciplines and is the inverse process of camera imaging; its scope mainly includes object recognition, feature detection, feature matching, and related fields. Since its inception, 3D reconstruction has been both a hot topic and a difficulty of computer vision. Its input requires only ordinary color images, and this high generality makes it unrivalled in convenience when modeling objects in the real world. However, because reconstruction involves a great many parameters to be computed or estimated, obtaining a high-quality reconstructed model is also its elusive difficulty. In general, the core step of 3D reconstruction is obtaining a point cloud of high enough quality to express the object's model information. 3D reconstruction usually falls into the following two categories. (1) Image-sequence-based 3D reconstruction. These methods acquire input images by shooting around the object. From the features detected in the images and the feature matches between images, the camera transformation between two frames is obtained. By minimizing the error over all camera positions, the rotation and translation of the camera relative to the world coordinate system at the moment each frame was captured are finally determined; from this information the true position of each feature point in space can be determined, giving a sparse point cloud of the object. The density of the acquired cloud is then increased by some point-cloud growing algorithm. (2) Depth-camera-based 3D reconstruction. These methods use a depth camera
to capture images and depth maps in real time: for each frame, the position in space of every pixel of that frame is available. As the camera moves, the point cloud of the next frame is continually registered against that of the current frame, the camera motion between the two frames is detected, and the clouds of different frames are fused together, obtaining a dense point cloud of the object. Among the above methods, image-sequence-based reconstruction has the advantages of easily obtained input data, low demands on the shooting environment, and strong robustness. But because images can only provide two-dimensional information, the reverse solution of 3D coordinates carries many computational errors and much noise (accurate matching feature pairs, for example, are hard to obtain), so the resulting cloud is often not of high enough quality and the final modeling result is unsatisfactory. Depth-camera-based methods can treat every pixel of the acquired images as a feature point and therefore obtain sufficiently dense feature points, but they require a high-quality depth camera and cope poorly with distant shots; moreover, the maximum range of current consumer depth cameras is only about 10 m, so these methods place excessive demands on the shooting environment and equipment. Image-sequence-based reconstruction is not so limited, as long as the object exhibits enough features in the images.
In conclusion the point cloud quality that the three-dimensional reconstruction based on image sequence obtains is not high enough, there are biggish errors;Base It is excessively high to shooting environmental requirement in the method for depth camera, the depth camera of high quality is needed, and be difficult to deal with distance farther out Shooting.
Summary of the invention
The purpose of the present invention is to provide a method for improving the density of 3D reconstruction point clouds based on neighborhood block matching, aiming to solve the problems that the point clouds obtained by image-sequence-based 3D reconstruction are not of high enough quality, and that depth-camera-based methods place excessive demands on the shooting environment, require a high-quality depth camera, and cope poorly with distant shots.
The invention is realized as follows: a method for improving the density of 3D reconstruction point clouds based on neighborhood block matching comprises the following steps:
Step 1: obtain a coarse, sparse object point cloud using an image-sequence-based 3D reconstruction algorithm, and obtain the transformation matrix of the camera in 3D space at the time each frame was captured;
Step 2: reprocess the original images, performing dense feature matching between images with a neighborhood-based block matching algorithm;
Step 3: using the camera positions in space obtained above, check the validity of the resulting dense feature point pairs, and map the feature points that pass the check to their corresponding positions in the 3D point cloud;
Step 4: apply an outlier-removal filter based on the object contour to the resulting cloud, remap the point colors, and obtain a final dense point cloud of far higher quality than the original.
Further, the method for improving the density of 3D reconstruction point clouds based on neighborhood block matching specifically comprises the following steps:
First step: perform traditional 3D reconstruction using the surround-shot image sequence.
Second step: choose adjacent images for the neighborhood-block-matching dense feature generation algorithm; determine the neighborhood step size n, i.e. each image performs dense feature computation with the n images on either side of it.
Third step: after the adjacent-image matching step size n is determined, traverse the whole image sequence, taking each image as the reference image and running the neighborhood-based block matching algorithm against the n frames behind it. In the original image, one point every so many pixels along the row and column directions is taken as a feature point to be matched, for the block-matching-based feature computation.
Fourth step: with the sampling rate r determined, for every image pair (P1, P2) requiring feature matching, obtain the set Pt of points in P1 to be matched. Traversing all feature points in Pt, take an image block of size X*Y centered on each feature point, expand its rows and columns to 8 times their original size by interpolation, and denote it I_src. In P2, take an image block of size M*N centered on the point with the same coordinates, likewise expand it 8 times by interpolation, and denote it I_dist.
Fifth step: when the I_dist block is large enough, a block I_match corresponding to I_src can be found within it; the position in P2 of the center of I_match is then the match, on image P2, of the feature point used to construct I_src.
Sixth step: after feature matching is complete, the dense feature pairs between image pairs are available. For each frame of the sequence, denote an image with which it has performed feature matching as P2, and the optical centers of the camera when P1 and P2 were shot as C1 and C2 respectively. Traverse the positions of all feature points in the feature point set Pt of P1 and find, for each, its position Pt1' on the camera's image plane in 3D space; likewise find the position Pt2' on the image plane in 3D space of its match point on image P2. Cast the rays C1Pt1' and C2Pt2' in space, obtaining rays L1 and L2.
Seventh step: for the obtained rays L1 and L2, if the matched pair Pt1 and Pt2 is a correct match, then L1 and L2 should intersect at the corresponding point of the object model in space. A threshold T is set: when the distance between the closest points of these two skew lines is less than T, the pair is a valid matching pair.
Eighth step: for each valid matching pair obtained, if the rays L1 and L2 meet at a point, add the intersection to the initial cloud as a new feature point; if L1 and L2 are skew lines whose closest-point distance is below the threshold T, add the midpoint of the closest-point pair of L1 and L2 to the initial cloud as a new 3D point.
Ninth step: perform contour-based stray-point filtering. For each frame, extract the object contour and back-project the point cloud onto the image-plane position using the camera transformation matrices already obtained; points falling inside the contour region are retained, and any point that, after back-projection, falls outside the object contour in any frame is deleted from the cloud.
Tenth step: back-project the point cloud in space onto the image plane of each frame; for each pixel of the image plane, record the point of the cloud nearest to it under back-projection, and assign that pixel's value to that nearest cloud point, completing the remapping of the cloud's colors.
Further, the camera transformation matrix comprises the rotation and translation [R|T] of the camera's optical center relative to the world coordinate system origin, where R is the rotation matrix and T the translation vector.
Further, in step 2 a shooting mode with a 5-degree rotation between every two frames is used, and the step size used is 2.
Further, in step 3 a sampling rate of one pixel every 2 rows and 2 columns is used; the number of pixels selected is directly proportional to the time overhead of the algorithm.
Further, in step 4 the values of M and N must be greater than X and Y respectively; the parameter setting X = Y = 40 and M = N = 80 is used.
Further, in step 5, the correlation coefficient R at position (x, y) is computed as:

R(x, y) = Σ_{x',y'} T'(x', y') · I'(x + x', y + y') / sqrt( Σ_{x',y'} T'(x', y')^2 · Σ_{x',y'} I'(x + x', y + y')^2 )

where:

T'(x', y') = T(x', y') - (1/(w·h)) · Σ_{x'',y''} T(x'', y''),
I'(x + x', y + y') = I(x + x', y + y') - (1/(w·h)) · Σ_{x'',y''} I(x + x'', y + y'')

Here T(x, y) represents the pixel value at coordinates (x, y) of image I_src, I(x, y) the pixel value at coordinates (x, y) of image I_dist, and w and h the width and height of the block I_src.
Further, in step 6, finding a feature point's position on the image plane of each frame requires the transformation matrix [R|T] of the camera. Let the focal length be f; in a frame F, let a feature point lie at position (x, y) on the image, with rotation-translation matrix [R|T]. Then its 3D coordinates (X, Y, Z) in the world coordinate system are computed as:

(X, Y, Z)^T = R^T · ( (x, y, f)^T - T )

i.e. the image-plane point (x, y, f) in camera coordinates mapped through the inverse of the camera transform x_cam = R · X_world + T.
Further, in step 7 the threshold for regarding the two rays L1 and L2 as valid is set to 1e-2.
In step 8 the parameter k (the number of neighboring colors averaged for a newly added point) is 5.
In step 9, back-projecting the point cloud onto the image plane of the i-th frame requires the camera transformation matrix [R|T]. For a point (X, Y, Z) of the cloud, the coordinates (u, v) of its projection in the i-th frame are computed as:

(x', y', z')^T = R · (X, Y, Z)^T + T,   u = f · x'/z',   v = f · y'/z'

where f is the focal length of the i-th frame.
In step 10, if several image planes each map one of their pixels to the same point Pt of the cloud, the RGB of Pt takes the mean of the pixel values of those corresponding points.
Another object of the present invention is to provide a computer vision system applying the described method for improving the density of 3D reconstruction point clouds based on neighborhood block matching.
The method provided by the invention places no special requirements on the image sequence, does not depend excessively on parameter tuning, and can improve the quality and density of the point cloud to a great degree. On the basis of image-sequence-based 3D reconstruction, the invention further expands the obtained point cloud, increasing its completeness and density. On top of existing reconstruction algorithms that yield sparse clouds, it greatly multiplies the cloud's feature points, thereby improving the final point cloud result.
Compared with 3D reconstruction algorithms based on SIFT features, the cloud obtained by the invention is far superior to the common algorithms in density, completeness, and color fidelity. Moreover, in smoother regions, where SIFT struggles to extract correct feature points, the block-matching-based method of the invention makes maximal use of the information in each pixel's surrounding neighborhood and adds as many correct feature points as possible. Meanwhile, using the computed camera transformation matrices, it eliminates the inaccuracy of traditional block matching algorithms and can accurately turn large numbers of missing or overly sparse points into a cloud of better quality.
Compared with depth-image-based methods, which need a depth image for every frame and cannot tolerate large displacements between adjacent frames, and which therefore have little practical value in scenes where depth maps cannot be obtained (current consumer high-resolution depth cameras can only sense depth within about 10 m, while laser ranging reaches far enough but lacks sufficient precision), the present invention achieves high-quality model reconstruction from simple images alone, and the displacement between two frames need not be as precise as depth-image-based methods require, increasing generality and ease of use. The flow is: perform traditional SIFT-based 3D reconstruction with the surround-shot images; determine the neighborhood matching step size n; determine the sampling step to fix the feature points to be matched; traverse the image sequence and perform neighborhood-block-matching feature extraction on each image pair to be computed; filter the matched features using the camera matrices obtained in the reconstruction step; and, based on image contours, filter the cloud in space once more and correct its colors. The invention obtains dense point clouds of quality well above traditional algorithms, markedly improves the results of the original algorithm and the reconstruction quality, and is a highly general, strongly robust method for improving the final point cloud quality of 3D reconstruction.
Brief description of the drawings
Fig. 1 is a flow chart of the method for improving the density of 3D reconstruction point clouds based on neighborhood block matching provided by an embodiment of the present invention.
Fig. 2 is a flow chart of Embodiment 1 provided by an embodiment of the present invention.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The application principle of the invention is explained in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the method for improving the density of 3D reconstruction point clouds based on neighborhood block matching provided by an embodiment of the present invention comprises the following steps:
S101: obtain a coarse, sparse object point cloud using a traditional image-sequence-based 3D reconstruction algorithm, and obtain the transformation matrix of the camera in 3D space at the time each frame was captured;
S102: reprocess the original images, performing dense feature matching between images with a neighborhood-based block matching algorithm; this matching method yields sufficiently dense, but lower-precision, matching feature pairs;
S103: using the camera positions in space obtained above, check the validity of the resulting dense feature point pairs, and map the feature points that pass the check to their corresponding positions in the 3D point cloud;
S104: apply an outlier-removal filter based on the object contour to the resulting cloud, remap the point colors, and obtain a final dense point cloud of far higher quality than the original.
The application principle of the invention is further described below in conjunction with specific embodiments.
Embodiment 1:
The method for improving the density of 3D reconstruction point clouds based on neighborhood block matching provided by an embodiment of the present invention comprises the following steps:
Step 1: shoot around the target object, obtaining images from all angles.
Step 2: reconstruct with a traditional 3D reconstruction method based on SIFT feature matching, obtaining the camera rotation matrix at the time each frame was shot.
Step 3: determine the step size for image matching, and extract dense matching feature points with the feature matching algorithm based on neighborhood block matching. The feature point extraction formula is detailed in the explanation of S5.
Step 4: after the dense matching feature points are obtained, check their validity in 3D space; delete from the set the feature points that do not satisfy the threshold condition, and compute the true spatial positions of those that do.
Step 5: add all valid points from step 4 into the original cloud, improving its density and completeness.
Step 6: run the contour-based stray-point filtering and color correction algorithms to remove the small number of erroneous points still present and correct the colors of the expanded cloud, completing the whole algorithm flow.
Following the steps above, a denser, more complete, high-quality point cloud is finally obtained.
The method for improving the density of 3D reconstruction point clouds based on neighborhood block matching provided by an embodiment of the present invention specifically comprises the following steps:
S1: perform traditional 3D reconstruction with the surround-shot image sequence. Since traditional 3D reconstruction is based on the SIFT feature algorithm, and SIFT is highly accurate and capable of sub-pixel feature extraction, accurate transformation matrices of the camera positions in 3D space can be obtained.
S2: choose adjacent images for the neighborhood-block-matching dense feature generation algorithm. First, determine the neighborhood step size n, i.e. each image performs dense feature computation with the n images on either side of it.
S3: after the adjacent-image matching step size n is determined, traverse the whole image sequence, taking each image as the reference image and running the neighborhood-based block matching algorithm against the n frames behind it. The algorithm first requires a sampling rate r: in the original image, one point every r pixels along the row and column directions is taken as a feature point to be matched, for the block-matching-based feature computation.
S4: with the sampling rate r determined, for every image pair (P1, P2) requiring feature matching, obtain the set Pt of points in P1 to be matched. Traversing all feature points in Pt, take an image block of size X*Y centered on each feature point, expand its rows and columns to 8 times their original size by interpolation, and denote it I_src. In P2, take an image block of size M*N centered on the point with the same coordinates, likewise expand it 8 times by interpolation, and denote it I_dist.
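The block expansion of S4 can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the patent only says "interpolation", so a bilinear kernel is assumed, and `upsample_block` is a hypothetical helper name.

```python
import numpy as np

def upsample_block(block, factor=8):
    """Bilinearly upsample a 2-D image block by an integer factor,
    mimicking the 8x row/column expansion applied before block matching.
    (Bilinear interpolation is an assumption; the source only says
    'interpolation'.)"""
    h, w = block.shape
    ys = np.linspace(0, h - 1, h * factor)   # target sample rows
    xs = np.linspace(0, w - 1, w * factor)   # target sample columns
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    b = block.astype(float)
    # Interpolate along x on the two bracketing rows, then along y
    top = (1 - wx) * b[np.ix_(y0, x0)] + wx * b[np.ix_(y0, x1)]
    bot = (1 - wx) * b[np.ix_(y1, x0)] + wx * b[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bot

# A 40x40 patch (the X = Y = 40 setting used later) becomes 320x320
patch = np.arange(1600, dtype=float).reshape(40, 40)
big = upsample_block(patch)
```

With the X = Y = 40 and M = N = 80 settings given later, I_src becomes 320x320 and I_dist 640x640 after expansion.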
S5 rests on the assumption that when the camera shoots around the object, the motion between two adjacent frames is very small. Therefore, in S4, when the I_dist block is large enough, a block I_match corresponding to I_src can be found within it, and the position in P2 of the center of I_match is the match, on image P2, of the feature point used to construct I_src. Thanks to the earlier interpolation, the final block matching has a precision of 1/8 pixel.
S6: after the feature matching of S5 is complete, the dense feature pairs between image pairs are available. But because the object's appearance changes with shooting angle between the two images, one can only find in I_dist the block as close as possible to I_src, so the feature matches extracted this way carry considerable noise and need further processing. S1 yielded the camera position in 3D space for each captured frame; S5 yielded the matching feature pairs between image pairs. For each frame of the sequence, taking P1 as an example, denote the image with which it has performed feature matching as P2, and the camera optical centers when P1 and P2 were shot as C1 and C2 respectively. Traverse the positions of all feature points in the feature point set Pt of P1; taking a point Pt1 as an example, find its position Pt1' on the camera's image plane in 3D space, and likewise the position Pt2' on the image plane in 3D space of Pt1's match point on image P2. Cast the rays C1Pt1' and C2Pt2' in space, obtaining rays L1 and L2.
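Casting the ray from the optical center through a feature point's image-plane position amounts to back-projecting a pixel through the camera model. A sketch under stated assumptions: the world-to-camera convention x_cam = R·X + T (so the optical center is C = -R^T·T), and pixel coordinates measured from the principal point in the same units as the focal length f. The patent's own formula is an image not reproduced in this text, so the convention is an assumption, and `pixel_ray` is a hypothetical helper.

```python
import numpy as np

def pixel_ray(R, T, x, y, f):
    """World-space ray through pixel (x, y) of a camera with
    world-to-camera transform x_cam = R @ X + T and focal length f.
    Returns the optical center C and a unit direction d."""
    C = -R.T @ T                                 # optical center in world frame
    d = R.T @ np.array([x, y, f], dtype=float)   # image-plane point rotated back to world
    return C, d / np.linalg.norm(d)

# Identity camera at the origin looking down +Z: the central pixel's ray
R = np.eye(3)
T = np.zeros(3)
C, d = pixel_ray(R, T, 0.0, 0.0, 1.0)
```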
S7: for the rays L1 and L2 obtained in S6, if the matched pair Pt1 and Pt2 is a correct match, then L1 and L2 should intersect at the corresponding point of the object model in space. However, because of the various errors introduced in the computation and the inherent error of floating-point arithmetic, a true intersection is rarely found; in most cases L1 and L2 are two skew lines. The algorithm therefore sets a threshold T: when the distance between the closest points of the two skew lines is less than T, the pair is regarded as a valid matching pair.
S8: for each valid matching pair obtained in S7, if the rays L1 and L2 meet at a point, add the intersection to the initial cloud as a new feature point; if L1 and L2 are skew lines whose closest-point distance is below the threshold T, add the midpoint of the closest-point pair of L1 and L2 to the initial cloud as a new 3D point. A point newly added to the cloud takes as its color the mean of the k nearest colors around it; the colors obtained this way carry error, and the correction method is given in S10.
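The validity test of S7 and the midpoint rule of S8 reduce to finding the closest points of two (possibly skew) lines. A NumPy sketch, with `triangulate_rays` as a hypothetical name and the 1e-2 threshold quoted later in the text:

```python
import numpy as np

def triangulate_rays(c1, d1, c2, d2, threshold=1e-2):
    """Closest points between lines L1: c1 + t*d1 and L2: c2 + s*d2.
    Returns the midpoint of the closest-point pair if their distance is
    below `threshold`, else None (the match is rejected as invalid)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = np.dot(d1, d2)
    denom = 1.0 - b * b
    if denom < 1e-12:                 # parallel rays: cannot triangulate
        return None
    r = c2 - c1
    # Solve t - b*s = r.d1, b*t - s = r.d2 (from minimizing |p1 - p2|^2)
    t = (np.dot(r, d1) - b * np.dot(r, d2)) / denom
    s = (b * np.dot(r, d1) - np.dot(r, d2)) / denom
    p1 = c1 + t * d1
    p2 = c2 + s * d2
    if np.linalg.norm(p1 - p2) >= threshold:
        return None
    return 0.5 * (p1 + p2)

# Two rays that intersect exactly at (1, 1, 1)
pt = triangulate_rays(np.array([0., 0., 0.]), np.array([1., 1., 1.]),
                      np.array([2., 0., 0.]), np.array([-1., 1., 1.]))
```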
S9: although the results obtained in S8 have filtered out most of the mismatches, a small fraction of erroneous matches remains, so contour-based stray-point filtering is applied. For each frame, extract the object contour. Using the camera transformation matrices obtained in S1, the point cloud can be back-projected onto the image-plane position of each frame. After this, points falling inside the contour region are retained, and any point that, after back-projection, falls outside the object contour in any frame is deleted from the cloud.
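The contour filter of S9 can be sketched as below. The projection convention (x_cam = R·X + T, then u = f·x/z, v = f·y/z, principal point at the image center) and the boolean-mask representation of the extracted contour are assumptions of this illustration, and both function names are hypothetical.

```python
import numpy as np

def project(R, T, f, X):
    """Project world point X with x_cam = R @ X + T, u = f*x/z, v = f*y/z."""
    x, y, z = R @ X + T
    return f * x / z, f * y / z

def filter_by_silhouette(points, cameras, masks, f):
    """Keep only points that fall inside the object mask of every frame.
    masks[i] is a boolean image indexed as mask[v, u], with the principal
    point assumed to sit at the image center."""
    kept = []
    for X in points:
        ok = True
        for (R, T), mask in zip(cameras, masks):
            u, v = project(R, T, f, X)
            h, w = mask.shape
            ui, vi = int(round(u + w / 2)), int(round(v + h / 2))
            if not (0 <= ui < w and 0 <= vi < h) or not mask[vi, ui]:
                ok = False        # outside the contour in this frame: delete
                break
        if ok:
            kept.append(X)
    return kept

# One identity camera 5 units in front of the cloud; the mask covers
# only the central 20x20 region, so the far-off point is rejected
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
cams = [(np.eye(3), np.array([0., 0., 5.]))]
pts = [np.array([0., 0., 0.]), np.array([100., 100., 0.])]
kept = filter_by_silhouette(pts, cams, [mask], f=1.0)
```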
S10: the colors of the newly added points still carry error, so by the method of S9 the cloud is back-projected again onto the image plane of each frame in space. For each pixel of the image plane, record the point of the cloud nearest to it under back-projection, and assign that pixel's value to that nearest cloud point, completing the remapping of the cloud's colors.
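The color remapping of S10 can be sketched as follows. This simplified version projects every cloud point into each frame and averages the image colors it lands on; the patent's per-pixel nearest-point bookkeeping is condensed here, the centered principal point is an assumption, and `remap_colors` is a hypothetical helper.

```python
import numpy as np

def remap_colors(points, cameras, images, f):
    """Re-color each cloud point from the images: project every point into
    each frame (x_cam = R @ X + T, u = f*x/z, v = f*y/z) and average the
    RGB values it lands on, mirroring the multi-frame mean of step 10."""
    sums = np.zeros((len(points), 3))
    cnts = np.zeros(len(points))
    for (R, T), img in zip(cameras, images):
        h, w = img.shape[:2]
        for i, X in enumerate(points):
            x, y, z = R @ X + T
            u = int(round(f * x / z + w / 2))
            v = int(round(f * y / z + h / 2))
            if 0 <= u < w and 0 <= v < h:
                sums[i] += img[v, u]
                cnts[i] += 1
    colors = np.zeros_like(sums)
    np.divide(sums, cnts[:, None], out=colors, where=cnts[:, None] > 0)
    return colors

img = np.zeros((10, 10, 3))
img[5, 5] = [255, 0, 0]                      # a single red pixel at the center
cams = [(np.eye(3), np.array([0., 0., 2.]))]
cols = remap_colors([np.array([0., 0., 0.])], cams, [img], f=1.0)
```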
The camera transformation matrix obtained in step S1 comprises the rotation and translation [R|T] of the camera's optical center relative to the world coordinate system origin, where R is the rotation matrix and T the translation vector.
The maximum matching step size chosen in step S2 depends on the amplitude of camera motion during shooting. In this algorithm a shooting mode with a 5-degree rotation between every two frames is used, and the step size used is 2. The larger the change between two frames, the smaller the corresponding step size should be, and vice versa.
The sampling rate in step S3 affects the number of matches ultimately generated. In this method a sampling rate of one pixel every 2 rows and 2 columns is used. The number of pixels selected is directly proportional to the time overhead of the algorithm.
In step S4 the values of M and N must be greater than X and Y respectively. In the algorithm experiments the parameter setting X = Y = 40 and M = N = 80 was used.
In step S5, finding the block I_src within I_dist uses an image matching algorithm based on the correlation coefficient: slide the block I_src over I_dist from left to right and top to bottom, compute at each position the correlation coefficient R between I_src and the region of I_dist it covers, and take the region covered by I_src at the position of maximum correlation as the block I_match. While sliding I_src, denote the coordinates (x, y) of its top-left corner in I_dist as the current position. The correlation coefficient R at position (x, y) is computed as:

R(x, y) = Σ_{x',y'} T'(x', y') · I'(x + x', y + y') / sqrt( Σ_{x',y'} T'(x', y')^2 · Σ_{x',y'} I'(x + x', y + y')^2 )

where:

T'(x', y') = T(x', y') - (1/(w·h)) · Σ_{x'',y''} T(x'', y''),
I'(x + x', y + y') = I(x + x', y + y') - (1/(w·h)) · Σ_{x'',y''} I(x + x'', y + y'')

Here T(x, y) represents the pixel value at coordinates (x, y) of image I_src, I(x, y) the pixel value at coordinates (x, y) of image I_dist, and w and h the width and height of the block I_src.
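The sliding correlation search of step S5 can be sketched as a brute-force zero-mean normalized cross-correlation. `ncc_match` is a hypothetical helper; in practice it would run on the 8x-interpolated blocks I_src and I_dist, and a real implementation would use a vectorized or FFT-based formulation for speed.

```python
import numpy as np

def ncc_match(template, search):
    """Slide `template` (I_src) over `search` (I_dist) and return the
    top-left offset (x, y) with the highest zero-mean normalized
    correlation coefficient, together with that coefficient."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            win = search[y:y + th, x:x + tw]
            w = win - win.mean()
            wn = np.sqrt((w * w).sum())
            if tn * wn == 0:          # flat region: correlation undefined
                continue
            r = (t * w).sum() / (tn * wn)
            if r > best:
                best, best_pos = r, (x, y)
    return best_pos, best

rng = np.random.default_rng(0)
search = rng.random((80, 80))
template = search[30:70, 20:60].copy()   # 40x40 patch embedded at (x=20, y=30)
pos, score = ncc_match(template, search)
```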
In step S6, finding a feature point's position on the image plane of each frame requires the transformation matrix [R|T] of the camera. The specific formula is as follows: suppose the focal length is f, and in a frame F a feature point lies at position (x, y) on the image, with rotation-translation matrix [R|T]; then its 3D coordinates (X, Y, Z) in the world coordinate system are computed as:

(X, Y, Z)^T = R^T · ( (x, y, f)^T - T )

i.e. the image-plane point (x, y, f) in camera coordinates mapped through the inverse of the camera transform x_cam = R · X_world + T.
In step S7, the threshold set in the experiments for considering the two rays L1 and L2 a legal match is 1e-2.
In step S8, the value chosen for the parameter k in the experiments is 5.
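The legality check of steps S7 and S8 amounts to finding the closest points between two rays and comparing their distance against the threshold, then taking the midpoint as the new cloud point. A sketch using the standard closest-point formulas for two lines (the function names and the parallel-ray handling are assumptions):

```python
import numpy as np

def closest_points_on_rays(o1, d1, o2, d2):
    """Closest points between two rays o + t*d, via the standard
    two-line closest-point solution."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # (near-)parallel rays
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    p1, p2 = o1 + s * d1, o2 + t * d2
    return p1, p2, np.linalg.norm(p1 - p2)

def fuse_match(o1, d1, o2, d2, thresh=1e-2):
    """Return the midpoint of the closest point pair if the two rays pass
    the legality check (skew distance below thresh), else None."""
    p1, p2, dist = closest_points_on_rays(o1, d1, o2, d2)
    if dist < thresh:
        return (p1 + p2) / 2.0
    return None
```

When the rays truly intersect, the two closest points coincide and the midpoint is the intersection itself, so one code path covers both cases of the eighth step.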
In step S9, the method of back-projecting the point cloud to the image plane of the i-th frame requires the camera transformation matrix [R | T]. For a point (X, Y, Z) in the point cloud, the coordinate (u, v) of its projection in the i-th frame is computed as:

(Xc, Yc, Zc)^T = R · (X, Y, Z)^T + T,   u = f · Xc / Zc,   v = f · Yc / Zc
where f is the focal length of the i-th frame.
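Under the same assumed pinhole convention (camera coordinates p_c = R·P_w + T, no lens distortion), the projection of a cloud point into frame i can be sketched as:

```python
import numpy as np

def project_point(P_world, R, T, f):
    """Project a world point onto the image plane of a frame with pose
    [R | T] and focal length f (pinhole model, no distortion)."""
    Xc, Yc, Zc = R @ np.asarray(P_world, dtype=float) + T
    # Perspective division onto the image plane at distance f.
    return f * Xc / Zc, f * Yc / Zc
```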
In step S10, if multiple image planes each map one of their pixels to the same point Pt in the point cloud, the RGB of Pt takes the mean of the pixel values of the corresponding points in these planes.
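The color remapping of step S10 reduces to averaging the pixel values contributed by all frames whose back-projection lands on the same cloud point; a minimal sketch (the (point index, RGB) pair representation of the per-frame mappings is an assumption):

```python
import numpy as np
from collections import defaultdict

def remap_colors(mappings):
    """mappings: iterable of (point_index, rgb) pairs collected from all
    frames; returns point_index -> mean RGB over all contributing frames."""
    sums = defaultdict(lambda: np.zeros(3))
    counts = defaultdict(int)
    for idx, rgb in mappings:
        sums[idx] += np.asarray(rgb, dtype=float)
        counts[idx] += 1
    return {idx: sums[idx] / counts[idx] for idx in sums}
```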
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (8)

  1. A method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching, characterized in that the method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching comprises the following steps:
    Step 1: obtaining a coarse and sparse point cloud of the object using a three-dimensional reconstruction algorithm based on an image sequence, and obtaining the transformation matrix in three-dimensional space of the camera at the time each frame image was shot;
    Step 2: processing the original images again, using the neighborhood-based block matching algorithm to carry out dense feature matching in the images;
    Step 3: according to the obtained positions of the camera in space, carrying out a legality check on the obtained dense feature point pairs, and mapping the feature points that meet the requirement to the corresponding positions in the three-dimensional point cloud;
    Step 4: performing one pass of outlier filtering on the obtained point cloud using an outlier deletion algorithm based on the object contour, performing one pass of color remapping, and obtaining a dense point cloud whose final quality is much better than that of the original point cloud;
    the method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching specifically comprises the following steps:
    First step: carrying out three-dimensional reconstruction using a surround-shot image sequence;
    Second step: choosing adjacent images for the neighborhood-based block matching dense feature generation algorithm, and determining the neighborhood step length n, i.e. each image will carry out dense feature calculation with each of the n images on its left and right;
    Third step: after the adjacent-image matching step length n is determined, traversing the whole image sequence, taking each image as the reference image and carrying out the neighborhood-based block matching algorithm with the n frames behind it; in the original image, taking one point every several pixels along the row and column directions as a feature point to be matched, for use in the block-matching-based feature calculation;
    Fourth step: after the sampling rate r is determined, for every pair of images (P1, P2) that need feature matching, obtaining the set Pt of points in P1 that need to be matched; traversing all feature points in Pt, choosing for each feature point the image block of size X*Y centered on it, expanding its rows and columns to 8 times the original size using an interpolation algorithm, and denoting the result as Isrc; in P2, likewise choosing the image block of size M*N centered on the coordinate point, equally expanding its rows and columns to 8 times the original using the interpolation algorithm, and denoting the result as Idist;
    Fifth step: when the image block Idist is sufficiently large, the image block Imatch corresponding to image block Isrc can be found; the position in image P2 of the center of image block Imatch is then the match point in image P2 corresponding to the feature point used when constructing image block Isrc;
    Sixth step: after the feature matching is completed, the needed dense feature pairs between image pairs are obtained; for each frame in the image sequence, denoting the image with which it carries out the feature matching calculation as P2; denoting the camera optical center positions when shooting P1 and P2 as C1 and C2, respectively; traversing all feature point positions in the feature point set Pt of P1, and finding the position Pt1' of each in its image plane in three-dimensional space; at the same time finding the position Pt2', in three-dimensional space, of its match point in the image plane of image P2; and constructing in three-dimensional space the rays C1Pt1' and C2Pt2', obtaining rays L1 and L2;
    Seventh step: for the obtained rays L1 and L2, if the matching point pair Pt1 and Pt2 is a correct match, then L1 and L2 should intersect in space at the corresponding point of the object model; setting a threshold T, when the distance between the closest points of the two skew lines is less than T, the pair of match points is a legal matching point pair;
    Eighth step: for the obtained legal matching point pairs, if the rays L1 and L2 intersect at a point, supplementing the intersection point into the initial point cloud as a new feature point; if L1 and L2 are skew lines whose closest-point distance is less than the threshold T, taking the midpoint of the closest point pair of L1 and L2 and adding it to the initial point cloud as a new three-dimensional point;
    Ninth step: carrying out outlier filtering based on the object contour; for each frame image, extracting the object contour, back-projecting the point cloud through the obtained camera transformation matrix to its position in the image plane, and retaining the points falling within the contour area; after back-projection, deleting from the point cloud any point that appears outside the object contour in any one frame;
    Tenth step: back-projecting the point cloud in space to the image plane of each frame, recording for each pixel in the image plane the point in the point cloud closest to it during back-projection, and assigning the pixel value of that pixel on the image to this closest point in the point cloud, thereby completing the remapping of the point cloud colors.
  2. The method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching as claimed in claim 1, characterized in that the camera transformation matrix comprises the rotation and translation of the camera optical center position relative to the origin of the world coordinate system, i.e. [R | T], where R is the rotation matrix and T is the translation vector.
  3. The method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching as claimed in claim 1, characterized in that the second step chooses adjacent images for the neighborhood-based block matching dense feature generation algorithm and determines the neighborhood step length n, i.e. each image carries out dense feature calculation with each of the n images on its left and right; a shooting pattern with a 5° rotation between every two frames is used, and the step length used is 2.
  4. The method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching as claimed in claim 1, characterized in that after the third step determines the adjacent-image matching step length n, the whole image sequence is traversed, each image is taken as the reference image, and the neighborhood-based block matching algorithm is carried out with the n frames behind it; in the original image, one point is taken every several pixels along the row and column directions as a feature point to be matched, for use in the block-matching-based feature calculation, with a sampling rate of one pixel taken every 2 rows and 2 columns; the number of selected pixels is directly proportional to the time overhead of the algorithm.
  5. The method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching as claimed in claim 1, characterized in that in the fourth step the values of M and N chosen for the image block of size M*N must be greater than X and Y, respectively; the parameter setting X = Y = 40 and M = N = 80 is used.
  6. The method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching as claimed in claim 1, characterized in that in the fifth step the position in image P2 of the center of image block Imatch is the match point in image P2 corresponding to the feature point used when constructing image block Isrc; the formula for computing the correlation coefficient R at position (x, y) is as follows:
    R(x, y) = [ Σ_{x',y'} T'(x', y') · I'(x + x', y + y') ] / sqrt( Σ_{x',y'} T'(x', y')² · Σ_{x',y'} I'(x + x', y + y')² )
    wherein:
    T'(x', y') = T(x', y') − (1/(w·h)) · Σ_{x'',y''} T(x'', y'')
    I'(x + x', y + y') = I(x + x', y + y') − (1/(w·h)) · Σ_{x'',y''} I(x + x'', y + y'')
    where T represents image Isrc, I represents image Idist, and w and h are the width and height of the image block.
  7. The method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching as claimed in claim 1, characterized in that after the sixth step completes the feature matching, the needed dense feature pairs between image pairs are obtained; for each frame in the image sequence, the image with which it has carried out the feature matching calculation is denoted as P2; when finding feature points in the image plane of each frame, the camera transformation matrix [R | T] is needed; suppose the focal length used when shooting is f and, in a certain frame F, the position of the feature point on the image is (x, y) and the rotation-translation matrix is [R | T]; then its three-dimensional coordinate in the world coordinate system is computed as:
    (X, Y, Z)^T = R^T · ((x, y, f)^T − T).
  8. The method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching as claimed in claim 1, characterized in that in the seventh step, for the obtained rays L1 and L2, the threshold for judging the two rays L1 and L2 legal is 1e-2;
    in the eighth step, for the obtained legal matching point pairs, if the rays L1 and L2 intersect at a point, the intersection point is supplemented into the initial point cloud as a new feature point; if L1 and L2 are skew lines whose closest-point distance is less than the threshold T, the midpoint of the closest point pair of L1 and L2 is added to the initial point cloud as a new three-dimensional point, with the parameter k being 5;
    in the ninth step, the method of back-projecting the point cloud to the image plane of the i-th frame requires the camera transformation matrix [R | T]; for a point (X, Y, Z) in the point cloud, the coordinate (u, v) of its projection in the i-th frame is computed as:
    (Xc, Yc, Zc)^T = R · (X, Y, Z)^T + T,   u = f · Xc / Zc,   v = f · Yc / Zc,
    where f is the focal length of the i-th frame;
    in the tenth step, if multiple image planes each map one of their pixels to the same point Pt in the point cloud, the RGB of Pt takes the mean of the pixel values of the corresponding points in these planes.
CN201611201364.1A 2016-12-22 2016-12-22 Method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching Active CN106683173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611201364.1A CN106683173B (en) 2016-12-22 2016-12-22 Method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611201364.1A CN106683173B (en) 2016-12-22 2016-12-22 Method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching

Publications (2)

Publication Number Publication Date
CN106683173A CN106683173A (en) 2017-05-17
CN106683173B true CN106683173B (en) 2019-09-13

Family

ID=58870280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611201364.1A Active CN106683173B (en) Method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching

Country Status (1)

Country Link
CN (1) CN106683173B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087338B (en) * 2017-06-13 2021-08-24 北京图森未来科技有限公司 Method and device for extracting image sparse optical flow
CN107784666B (en) * 2017-10-12 2021-01-08 武汉市工程科学技术研究院 Three-dimensional change detection and updating method for terrain and ground features based on three-dimensional images
CN107862745B (en) * 2017-10-25 2021-04-09 武汉楚锐视觉检测科技有限公司 Reflective curved surface three-dimensional reconstruction labeling method and device
CN107862742B (en) * 2017-12-21 2020-08-14 华中科技大学 Dense three-dimensional reconstruction method based on multi-hypothesis joint view selection
GB2569656B (en) * 2017-12-22 2020-07-22 Zivid Labs As Method and system for generating a three-dimensional image of an object
CN109345557B (en) * 2018-09-19 2021-07-09 东南大学 Foreground and background separation method based on three-dimensional reconstruction result
CN109448111B (en) * 2018-10-25 2023-05-30 山东鲁软数字科技有限公司 Image three-dimensional curved surface model optimization construction method and device
CN111819602A (en) * 2019-02-02 2020-10-23 深圳市大疆创新科技有限公司 Method for increasing point cloud sampling density, point cloud scanning system and readable storage medium
CN110288701B (en) * 2019-06-26 2023-01-24 图码思(成都)科技有限公司 Three-dimensional reconstruction method based on depth focusing and terminal
CN110301981A (en) * 2019-06-28 2019-10-08 华中科技大学同济医学院附属协和医院 A kind of intelligence checks the scanner and control method of surgical instrument
CN113129329A (en) * 2019-12-31 2021-07-16 中移智行网络科技有限公司 Method and device for constructing dense point cloud based on base station target segmentation
CN111242990B (en) * 2020-01-06 2024-01-30 西南电子技术研究所(中国电子科技集团公司第十研究所) 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
CN113538552B (en) * 2020-02-17 2024-03-22 天目爱视(北京)科技有限公司 3D information synthetic image matching method based on image sorting
CN111508063A (en) * 2020-04-13 2020-08-07 南通理工学院 Three-dimensional reconstruction method and system based on image
CN111695486B (en) * 2020-06-08 2022-07-01 武汉中海庭数据技术有限公司 High-precision direction signboard target extraction method based on point cloud
CN111754560B (en) * 2020-06-10 2023-06-02 北京瓦特曼科技有限公司 High-temperature smelting container erosion early warning method and system based on dense three-dimensional reconstruction
CN111899345B (en) * 2020-08-03 2023-09-01 成都圭目机器人有限公司 Three-dimensional reconstruction method based on 2D visual image
CN112418250A (en) * 2020-12-01 2021-02-26 怀化学院 Optimized matching method for complex 3D point cloud
CN113470049B (en) * 2021-07-06 2022-05-20 吉林省田车科技有限公司 Complete target extraction method based on structured color point cloud segmentation
CN114564014A (en) * 2022-02-23 2022-05-31 杭州萤石软件有限公司 Object information determination method, mobile robot system, and electronic device
CN114913552B (en) * 2022-07-13 2022-09-23 南京理工大学 Three-dimensional human body density corresponding estimation method based on single-view-point cloud sequence

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103021017A (en) * 2012-12-04 2013-04-03 上海交通大学 Three-dimensional scene rebuilding method based on GPU acceleration
CN103247045A (en) * 2013-04-18 2013-08-14 上海交通大学 Method of obtaining artificial scene main directions and image edges from multiple views
CN104867183A (en) * 2015-06-11 2015-08-26 华中科技大学 Three-dimensional point cloud reconstruction method based on region growing
CN104915986A (en) * 2015-06-26 2015-09-16 北京航空航天大学 Physical three-dimensional model automatic modeling method
CN105979203A (en) * 2016-04-29 2016-09-28 中国石油大学(北京) Multi-camera cooperative monitoring method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3221851A1 (en) * 2014-11-20 2017-09-27 Cappasity Inc. Systems and methods for 3d capture of objects using multiple range cameras and multiple rgb cameras


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Accurate vehicle detection and counting algorithm for traffic data collection; Tian, Ye et al.; 2015 International Conference on Connected Vehicles and Expo (ICCVE); 20151023; pp. 285-290 *
Discrete point cloud simplification algorithm based on k-neighborhood density and its implementation; Che Xiangjiu et al.; Journal of Jilin University (Science Edition); 20090926; pp. 994-998 *

Also Published As

Publication number Publication date
CN106683173A (en) 2017-05-17

Similar Documents

Publication Publication Date Title
CN106683173B (en) Method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching
CN106651942B (en) Three-dimensional rotating detection and rotary shaft localization method based on characteristic point
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN106023303B (en) Method for improving the density of a three-dimensional reconstruction point cloud based on contour validity
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
CN103345736B (en) A kind of virtual viewpoint rendering method
CN108876749A (en) A kind of lens distortion calibration method of robust
CN112927360A (en) Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN107358633A (en) Join scaling method inside and outside a kind of polyphaser based on 3 points of demarcation things
CN109272574B (en) Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation
CN100428805C (en) Video camera reference method only using plane reference object image
CN105698699A (en) A binocular visual sense measurement method based on time rotating shaft constraint
CN105957007A (en) Image stitching method based on characteristic point plane similarity
CN107507246A (en) A kind of camera marking method based on improvement distortion model
CN109754459B (en) Method and system for constructing human body three-dimensional model
CN110378969A (en) A kind of convergence type binocular camera scaling method based on 3D geometrical constraint
CN102376089A (en) Target correction method and system
WO2011145285A1 (en) Image processing device, image processing method and program
CN110375648A (en) The spatial point three-dimensional coordinate measurement method that the single camera of gridiron pattern target auxiliary is realized
CN109523595A (en) A kind of architectural engineering straight line corner angle spacing vision measuring method
CN113920205B (en) Calibration method of non-coaxial camera
CN110044374A (en) A kind of method and odometer of the monocular vision measurement mileage based on characteristics of image
CN108010086A (en) Camera marking method, device and medium based on tennis court markings intersection point

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220420

Address after: 710000 room 025, F2001, 20 / F, block 4-A, Xixian financial port, Fengdong new town energy gold trade zone, Xixian new area, Xi'an City, Shaanxi Province

Patentee after: Shaanxi Xiandian Tongyuan Information Technology Co.,Ltd.

Address before: 710071 Xi'an Electronic and Science University, 2 Taibai South Road, Shaanxi, Xi'an

Patentee before: XIDIAN University

TR01 Transfer of patent right