CN106023303A - Method for improving three-dimensional reconstruction point-cloud density on the basis of contour validity - Google Patents

Method for improving three-dimensional reconstruction point-cloud density on the basis of contour validity

Info

Publication number
CN106023303A
CN106023303A
Authority
CN
China
Prior art keywords
point cloud
derivation
density
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610298507.9A
Other languages
Chinese (zh)
Other versions
CN106023303B (en)
Inventor
宋锐
李星霓
田野
贾媛
李云松
王养利
许全优
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201610298507.9A priority Critical patent/CN106023303B/en
Publication of CN106023303A publication Critical patent/CN106023303A/en
Application granted granted Critical
Publication of CN106023303B publication Critical patent/CN106023303B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention discloses a method for improving three-dimensional reconstruction point-cloud density on the basis of contour validity. The method comprises the following steps: 1, extracting the object contour and generating a corresponding sequence of valid-region maps; 2, calculating the expansion scales of the point cloud on the x, y and z axes; 3, obtaining a derived point cloud by expanding each point of the initial point cloud; 4, transforming the derived point cloud into the camera coordinate system, back-projecting it onto the valid-region maps, and retaining the points falling in the valid region; 5, calculating, for each remaining derived point, the dot product of the vector from its source point to the point with the normal vector of the source point, and retaining the points whose dot product is greater than zero; and 6, checking whether the density of the derived point cloud meets the demand, and if not, taking the derived cloud as the initial cloud and repeating from the second step until the demand is met. The method is not restricted to a specific surround-shot image sequence, does not rely excessively on parameter tuning, and can improve the density of the valid point cloud within a short time at a small computational cost.

Description

Method for improving the density of a three-dimensional reconstruction point cloud based on contour validity
Technical field
The present invention relates to the field of computer vision, and in particular to a method for improving the density of a three-dimensional reconstruction point cloud based on contour validity, which can effectively increase point-cloud density within a relatively short time.
Background art
Computer vision spans multiple disciplines and is the inverse of the camera imaging process. Its research scope is very wide, mainly including object detection and recognition, edge extraction, feature extraction, and three-dimensional reconstruction. Three-dimensional reconstruction, an image-based modeling technique, has attracted attention since its inception: from only two adjacent frames it can largely recover the three-dimensional relationship between the matched feature points in the images and the cameras. In this process, the number of matched feature points directly determines the quality of the point cloud obtained by the reconstruction, and hence the quality of the reconstructed model.
Conventional three-dimensional reconstruction methods fall into three classes. (1) Stereo vision. These methods imitate the human visual system's perception of objective three-dimensional objects: two or more cameras image the same scene from different positions, and the disparity map between two frames is converted into a depth map, yielding the depth information of the object. The geometric model files they generate are usually small and easy to use in virtual reality. However, these methods must overcome the sparsity of object features: when the texture is smooth, the computed disparity map contains large blank regions, and the density of the point cloud is very low. (2) Structure from motion. When an object is filmed continuously, the motion occurring between two frames is identical for every position of a rigid object. By extracting and matching feature-point pairs between two frames, the transformation matrix of the object's motion can be computed; from it the relative pose of the two cameras is determined, and by the pinhole imaging model the coordinates of the feature points in the world coordinate system can be recovered. This approach is relatively mature: with calibrated camera intrinsics it can compute the camera motion, process the sparse point cloud into a denser one, and recover a fairly accurate three-dimensional model. However, it requires many matched feature points between adjacent frames, so few valid points are obtained in texture-smooth regions. (3) Depth-image-based methods. From the RGB image and depth map of every frame, a point cloud of the object in the current camera coordinate system can be generated; the two point clouds generated from the RGB-D images of two adjacent frames are registered, the transformation matrix between the two cameras is computed, and the two clouds can be fused into the world coordinate system. The point cloud so obtained is accurate and dense. But the method requires the assistance of a depth camera and is very sensitive to depth-map accuracy; in large-scale scene reconstruction, the precision of the depth camera is limited, and that precision directly bounds the precision of the reconstructed cloud.
Among these methods, stereo vision needs little computation, but the disparity map contains blank regions where the image texture is flat, so the resulting point cloud is very sparse. Structure from motion is more general and includes a derivation from a sparse cloud to a dense cloud, but the density of the resulting cloud still depends on the texture complexity of the images; for smoothly textured images the density obtained remains low. Depth-image-based reconstruction is more accurate and places no demand on texture complexity, but it is highly sensitive to depth-camera precision and is therefore unsuitable for large-scale object reconstruction.
In view of these problems, a new method is needed that raises the density of the valid point cloud and lets the resulting cloud overcome, to some extent, the effect of smooth texture.
Summary of the invention
In view of the above deficiencies of the prior art, the present invention provides a method for improving the density of a three-dimensional reconstruction point cloud based on contour validity. The method is not restricted to a specific surround-shot image sequence, does not rely excessively on parameter tuning, and can raise the density of the valid point cloud within a short time at low computational cost, while deleting the erroneous points in the initial cloud, so that the cloud obtained overcomes, to some extent, the effect of smooth texture.
The method of the present invention preprocesses the reconstruction image sequence to obtain a group of corresponding valid-region maps; the initial point cloud is then expanded into an extended point cloud. Using the transformation matrices obtained during the initial reconstruction, the extended cloud is back-projected onto the valid-region map of every frame, and points whose back-projected position falls outside the valid region are deleted. Repeating this over the whole surround-shot sequence yields the cloud after one expansion. The interior points must then be removed, retaining only the exterior cloud: according to the normal vectors of the initial cloud, points lying beneath the object surface are deleted, completing the interior-point removal. The cloud after one expansion serves as the new initial cloud and is expanded again, back-projected, and filtered of interior points, yielding an ever denser cloud. The expansion ends when the number of points reaches the required density. The cloud thus obtained has very high density, does not depend on parameter tuning, and overcomes the effect of smooth texture.
To solve the problems in the prior art, the concrete technical solution adopted by the present invention is as follows:
A method for improving the density of a three-dimensional reconstruction point cloud based on contour validity, comprising the following steps:
S1: acquire a surround-shot image sequence with a camera; extract the object contour in every frame, set the pixel values inside the contour to 255 and those outside the contour to 0, obtaining one binary image per frame, called the valid-region map;
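As a rough illustration of step S1, the sketch below builds a binary valid-region map with NumPy. The patent leaves the contour-extraction algorithm open, so a plain intensity threshold stands in for a real contour extractor here; the function name `valid_region_map` and the threshold value are illustrative assumptions, not from the patent.

```python
import numpy as np

def valid_region_map(gray, thresh=40):
    """Binary valid-region map as in step S1: 255 inside the object
    region, 0 outside. A simple intensity threshold stands in for a
    real contour extractor in this sketch."""
    return np.where(gray > thresh, 255, 0).astype(np.uint8)

# toy 4x4 "frame": a bright 2x2 object on a dark background
frame = np.zeros((4, 4), dtype=np.uint8)
frame[1:3, 1:3] = 200
mask = valid_region_map(frame)
assert mask[1, 1] == 255 and mask[0, 0] == 0
```

In a real pipeline the threshold step would be replaced by whatever contour algorithm the implementation chooses; only the 0/255 binary output format matters for the later back-projection test.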
S2: perform a three-dimensional reconstruction step on the surround-shot sequence to obtain a low-density point cloud, called the initial point cloud; this also yields, for every frame, the camera rotation matrix R and translation vector t relative to the world coordinate system, the rotation matrix and translation vector together forming the transformation matrix M;
S3: traverse every point of the initial cloud to obtain the maximum and minimum coordinate values of all points along the x, y and z axes, and compute the range between maximum and minimum on each axis, denoted x_dis, y_dis, z_dis; divide each of these three ranges by 100; the three quantities obtained are called the derivation scales of the initial cloud, denoted x_scalar, y_scalar, z_scalar;
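Step S3 reduces to a per-axis bounding-box computation, sketched below in NumPy; the function name and toy cloud are illustrative, not from the patent.

```python
import numpy as np

def derivation_scales(cloud):
    """Step S3: one hundredth of the per-axis coordinate range.
    cloud: (N, 3) array of x, y, z coordinates."""
    dis = cloud.max(axis=0) - cloud.min(axis=0)   # x_dis, y_dis, z_dis
    return dis / 100.0                            # x_scalar, y_scalar, z_scalar

pts = np.array([[0.0, 0.0, 0.0], [10.0, 20.0, 5.0]])
scales = derivation_scales(pts)
assert np.allclose(scales, [0.1, 0.2, 0.05])
```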
S4: take a point of the initial cloud as source point and extend it along the positive and negative directions of the x, y and z axes by the corresponding derivation scales computed in S3, obtaining a cuboid centred on the source point whose length, width and height are 2*x_scalar, 2*y_scalar, 2*z_scalar; from the centre, the source point extends toward the cuboid in 26 directions in all, deriving one new point per direction; the normal vector of each new point is taken identical to that of the source point, and every derived point records its source point;
S5: perform the derivation operation of step S4 on every point of the initial cloud, obtaining a derived point cloud whose number of points is 26 times that of the initial cloud;
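Steps S4 and S5 can be sketched as a vectorised NumPy routine: every non-zero offset in {-1, 0, 1}^3, scaled per axis, produces one derived point, which inherits its source point's normal and records its source index. Names are illustrative assumptions.

```python
import numpy as np
from itertools import product

def derive_cloud(cloud, normals, scales):
    """Steps S4-S5: derive 26 points per source point, one per non-zero
    (i, j, k) offset in {-1, 0, 1}^3 scaled by the per-axis derivation
    scales. Each derived point inherits its source normal and records
    the index of its source point."""
    offsets = np.array([o for o in product((-1, 0, 1), repeat=3)
                        if o != (0, 0, 0)], dtype=float)        # (26, 3)
    new_pts = (cloud[:, None, :] + offsets[None, :, :] * scales).reshape(-1, 3)
    new_normals = np.repeat(normals, 26, axis=0)
    source_idx = np.repeat(np.arange(len(cloud)), 26)
    return new_pts, new_normals, source_idx

pts = np.array([[0.0, 0.0, 0.0]])
nrm = np.array([[0.0, 0.0, 1.0]])
d, dn, src = derive_cloud(pts, nrm, np.array([0.1, 0.1, 0.1]))
assert d.shape == (26, 3)   # 26 times the initial count, as step S5 states
```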
S6: for the i-th frame of the surround-shot sequence, take the transformation matrix M_i computed during reconstruction, transform the derived cloud obtained in step S5 into the corresponding camera coordinate system according to M_i, and back-project the derived cloud, by the projection model, onto the valid-region map of the i-th frame obtained in step S1;
S7: following step S6, delete from the derived cloud every point that projects into the invalid region of the i-th valid-region map, and retain every point that projects into the valid region;
S8: perform the operations of steps S6 and S7 for every frame of the surround-shot sequence; after projection and deletion around the whole sequence, a derived cloud still containing interior points is obtained;
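A minimal single-frame version of the S6-S8 filter is sketched below, assuming the row-vector convention p_cam = p_world * R + t used in the patent's transformation formula and a simple pinhole projection; the function and parameter names are illustrative assumptions. Running it over every frame's (R_i, t_i, mask_i) reproduces step S8.

```python
import numpy as np

def filter_by_valid_region(points, R, t, f, cx, cy, mask):
    """Steps S6-S7 for one frame: transform world points into the camera
    frame, project with a pinhole model (u = x*f/z + cx, v = y*f/z + cy),
    and keep only points whose pixel lies in the valid region (255)."""
    cam = points @ R + t                         # (N, 3) camera coordinates
    u = (cam[:, 0] * f) / cam[:, 2] + cx         # column index
    v = (cam[:, 1] * f) / cam[:, 2] + cy         # row index
    h, w = mask.shape
    ui, vi = np.floor(u).astype(int), np.floor(v).astype(int)
    inside = (ui >= 0) & (ui < w) & (vi >= 0) & (vi < h)
    keep = np.zeros(len(points), dtype=bool)
    keep[inside] = mask[vi[inside], ui[inside]] == 255
    return points[keep]

# toy frame: identity rotation, camera 5 units behind, 4x4 mask with a
# valid 2x2 centre; the origin projects inside, a far-off point does not
R = np.eye(3); t = np.array([0.0, 0.0, 5.0])
mask = np.zeros((4, 4), dtype=np.uint8); mask[1:3, 1:3] = 255
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
kept = filter_by_valid_region(pts, R, t, f=1.0, cx=2.0, cy=2.0, mask=mask)
assert len(kept) == 1
```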
S9: traverse each derived point of the cloud obtained in step S8 and judge whether it is an interior or an exterior point; delete the derived points that are interior points and retain those that are exterior points; the cloud finally retained is the valid point cloud after one derivation;
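The interior-point test of step S9 is a per-point dot product, sketched below; array layout and names are illustrative assumptions (row i of `sources`/`normals` is the source point and normal recorded for derived point i).

```python
import numpy as np

def drop_interior(derived, sources, normals):
    """Step S9: a derived point is exterior when the dot product of the
    vector (source point -> derived point) with the source normal is
    positive; points with non-positive dot product are interior and
    are deleted."""
    dots = np.einsum('ij,ij->i', derived - sources, normals)
    return derived[dots > 0]

src = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])   # surface normal +z
der = np.array([[0.0, 0.0, 0.1],     # above the surface: exterior, kept
                [0.0, 0.0, -0.1]])   # beneath the surface: interior, deleted
kept = drop_interior(der, src, nrm)
assert kept.shape == (1, 3) and kept[0, 2] > 0
```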
S10: count the points of the valid cloud obtained in step S9; if the density meets the demand, this valid cloud is the final cloud; if not, take this valid cloud as the initial cloud and repeat from step S3 until the valid cloud obtained meets the density requirement.
In a preferred scheme, the derivation scale of step S3 is one hundredth of the range between the maximum and minimum coordinates of all points of the initial cloud along each of the x, y and z axes; this parameter was fixed after repeated experiments and need not be adjusted per application.
In a further preferred scheme, in step S4 a point of the initial cloud is taken as source point and new points are derived in the 26 directions of the cuboid, the formula for a new point being:
x_new = x_org + i * x_scalar, i ∈ {-1, 0, 1}
y_new = y_org + j * y_scalar, j ∈ {-1, 0, 1}
z_new = z_org + k * z_scalar, k ∈ {-1, 0, 1}
where x_org, y_org, z_org are the coordinates of a point of the initial cloud on the x, y and z axes, and x_scalar, y_scalar, z_scalar are the derivation scales computed for the three directions.
The formula yields 3*3*3 new coordinates; excluding the case in which the coordinate increment from the source point is (0, 0, 0), it derives the 26 new points described in step S4.
In a further preferred scheme, in step S6 the formula transforming the derived cloud into the camera coordinate system of the i-th frame is:
(x_cam_i, y_cam_i, z_cam_i) = (x_world, y_world, z_world) * R_i + t_i
where (x_world, y_world, z_world) are the coordinates of a derived point in the world coordinate system and R_i, t_i are the rotation matrix and translation vector of the i-th frame's camera; after the R_i, t_i transformation, the cloud in the world coordinate system is expressed in the i-th camera coordinate system, in which the transformed coordinates are (x_cam_i, y_cam_i, z_cam_i);
the cloud in camera coordinates is then back-projected, each point onto the valid-region map of the i-th frame, the projected position being computed as:
u = (x_cam_i * f) / z_cam_i + C_x
v = (y_cam_i * f) / z_cam_i + C_y
where f is the camera focal length and C_x, C_y are each 0.5 times the image resolution; the computed u, v give the projected position on the image, i.e. the pixel in column u and row v.
In a further preferred scheme, in step S8, after the derived cloud has been filtered through the whole surround-shot sequence, it no longer contains exterior stray points, and the erroneous duplicate points of the initial cloud are also filtered out; but because derivation proceeds in 26 directions, points lying beneath the object surface, i.e. interior points, still remain.
In step S9, whether each derived point is interior or exterior is judged by a dot-product test: since the normal vector of the source point points to the outside of the tangent plane of the surface, for each derived point the vector from its source point to the derived point is computed. If the dot product of this vector with the normal vector of the derived point's source point is positive, the derived point is an exterior point and is retained; if negative, the angle between the vector from source to derived point and the normal exceeds 90 degrees, the point is an interior point, and it is deleted.
It should be noted that the contour extraction of step S1 may use widely available algorithms such as SIFT-based ones; the valid-region map finally obtained is a binary image.
It should be noted that the three-dimensional reconstruction step mentioned in step S2 may use any currently mature reconstruction algorithm; the transformation matrix M is the combination of the rotation matrix R and the translation vector t, M = [R | t].
It should further be explained that in step S7, the method verifies whether the projected position of a point lies in the valid region as follows: the image coordinates of the point are first found by the back-projection above, and the pixel value at that position in the corresponding i-th valid-region map is tested; if the pixel value is 255 the point is retained, otherwise the point is deleted.
It should be noted that in step S10, the target density of the cloud may be chosen freely, up to the order of one million points; the iteration stops once the density meets the demand.
With the above technical solution, compared with the prior art, the method of the present invention for improving the density of a three-dimensional reconstruction point cloud based on contour validity has the following technical effects:
1. Compared with stereo-vision point-cloud acquisition: stereo-vision methods require richly textured image sequences, regions without disparity yield no points during reconstruction, and the reconstruction error is affected by the error of solving the disparity map. The present invention makes no strong demand on image texture: as long as the initial cloud provided is reasonably close to the real object shape, most of the information missing from the initial cloud can be recovered to some extent.
2. Compared with structure-from-motion point-cloud acquisition: the number of points obtained by structure from motion depends on the number of validly matched feature-point pairs between adjacent frames, and the adopted derivation from sparse to dense cloud is computationally complex. The derived cloud generated by the present invention has no direct relation to image texture and makes no strong demand on the initial cloud; as long as it is reasonably close to the real object, the derivation increases the number of points where the cloud was sparse, and thus the number of valid points.
3. Compared with depth-image-based methods: these require a depth map for every frame, are highly sensitive to depth-map accuracy, and register point-cloud pairs with iterative algorithms, so the required computation is heavy, matrix operations abound, and GPU computation is needed. The method proposed here makes no demand on depth maps; as the derived cloud is filtered, the number of points to compute shrinks and computation accelerates, so good speed is achieved even without a GPU.
Brief description of the drawings
Fig. 1 shows a group of surround-shot images;
Fig. 2 shows the valid-region map computed by the present invention for one surround-shot frame;
Fig. 3 is a flow chart of the point-cloud quality improvement of the present invention.
Detailed description of the invention
To make the purpose, technical solution and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.
As shown in Fig. 3, a surround-shot image sequence is first chosen, as shown in Fig. 1, and a basic three-dimensional reconstruction method is selected; the surround-shot images are reconstructed to obtain the initial point cloud. With the initial cloud, the surround-shot image sequence and the transformation matrix of every frame as input, the method of the present invention processes them through the following concrete steps:
Step 1: extract the object contour of every frame, fill the object region with pixel value 255 and the object-free region with pixel value 0, obtaining a group of valid-region maps corresponding to the surround-shot images, as shown in Fig. 2.
Step 2: traverse the initial cloud, measure the coordinate ranges of the cloud's points along the x, y and z axes, and take one hundredth of the range on each axis as the extension scale for that axis.
Step 3: according to the extension scales, take each point of the initial cloud as source point and extend it in the increasing and decreasing directions of the scales, 26 directions in all, obtaining 26 new points whose normal vectors equal that of the source point. The formula for an extended point is:
x_new = x_org + i * x_scalar, i ∈ {-1, 0, 1}
y_new = y_org + j * y_scalar, j ∈ {-1, 0, 1}
z_new = z_org + k * z_scalar, k ∈ {-1, 0, 1}
where x_org, y_org, z_org are the coordinates of a point of the initial cloud on the x, y and z axes, and x_scalar, y_scalar, z_scalar are the derivation scales computed for the three directions.
Step 4: according to the transformation matrix of the first input frame, transform the derived cloud into the corresponding camera coordinate system and back-project it onto the first valid-region map; a point projecting onto a pixel of value 255 in the valid-region map is retained, and a point projecting onto a pixel of value 0 is deleted. Take the derived cloud after deletion as the new derived cloud and repeat the operation with the transformation matrix of the second frame, and so on, until every image has been traversed, at which point the final derived cloud is obtained. The formula back-projecting a cloud point to image coordinates is:
u = (x_cam_i * f) / z_cam_i + C_x
v = (y_cam_i * f) / z_cam_i + C_y
In the above formula, f is the camera focal length and C_x, C_y are each 0.5 times the image resolution; the computed u, v give the projected position on the image, i.e. the pixel in column u and row v.
Step 5: the derived cloud obtained so far contains no exterior errors. For each derived point, compute the dot product of the vector from its source point to the derived point with the normal of the source point: if the value is greater than zero, the derived point lies outside the source point and is retained; if less than zero, the derived point lies inside, is an interior point, and is deleted. This operation removes the interior points generated during derivation.
Step 6: check whether the density of the derived cloud after interior-point deletion meets the demand; for a derived cloud that does not, take it as the initial cloud and, with the input transformation-matrix sequence and image sequence, repeat the operations from step 2 above, until the density meets the demand.
Following the above steps, a point cloud whose density meets the demand is finally obtained.
The above is only a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent substitution or improvement made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A method for improving three-dimensional reconstruction point-cloud density based on contour validity, characterised in that it comprises the following steps:
S1: acquiring a surround-shot image sequence with a camera, extracting the object contour in every frame, setting the pixel values inside the contour to 255 and those outside the contour to 0, and obtaining one binary image per frame, called the valid-region map;
S2: performing a three-dimensional reconstruction step on the surround-shot sequence to obtain a low-density point cloud, called the initial point cloud, thereby also obtaining, for every frame, the camera rotation matrix R and translation vector t relative to the world coordinate system, the rotation matrix and translation vector together forming the transformation matrix M;
S3: traversing every point of the initial cloud to obtain the maximum and minimum coordinate values of all points along the x, y and z axes, computing the range between maximum and minimum on each axis, denoted x_dis, y_dis, z_dis, and dividing each of the three ranges by 100; the three quantities obtained are called the derivation scales of the initial cloud, denoted x_scalar, y_scalar, z_scalar;
S4: taking a point of the initial cloud as source point and extending it along the positive and negative directions of the x, y and z axes by the corresponding derivation scales computed in step S3, obtaining a cuboid centred on the source point whose length, width and height are 2*x_scalar, 2*y_scalar, 2*z_scalar; extending the source point from the centre toward the cuboid in 26 directions in all, deriving one new point per direction, taking the normal vector of each new point identical to that of the source point, and recording for every derived point its source point;
S5: performing the derivation operation of step S4 on every point of the initial cloud, obtaining a derived point cloud whose number of points is 26 times that of the initial cloud;
S6: for the i-th frame of the surround-shot sequence, taking the transformation matrix M_i computed during reconstruction, transforming the derived cloud obtained in step S5 into the corresponding camera coordinate system according to M_i, and back-projecting, by the projection model, every point of the derived cloud onto the valid-region map of the i-th frame obtained in step S1;
S7: following step S6, deleting from the derived cloud every point projecting into the invalid region of the i-th valid-region map, and retaining every point projecting into the valid region;
S8: performing the operations of steps S6 and S7 for every frame of the surround-shot sequence; after projection and deletion around the whole sequence, obtaining a derived cloud still containing interior points;
S9: traversing each derived point of the cloud obtained in step S8, judging whether each derived point is an interior or an exterior point, deleting the interior points and retaining the exterior points; the cloud finally retained being the valid point cloud after one derivation;
S10: counting the points of the valid cloud obtained in step S9; if the density meets the demand, this valid cloud being the final cloud; if not, taking this valid cloud as the initial cloud and repeating from step S3 until the valid cloud obtained meets the density requirement.
2. The method for improving three-dimensional reconstruction point-cloud density based on contour validity according to claim 1, characterised in that the derivation scale of step S3 is one hundredth of the range between the maximum and minimum coordinates of all points of the initial cloud along each of the x, y and z axes; this parameter was fixed after repeated experiments and need not be adjusted per application.
3. The method for improving the density of a three-dimensional reconstruction point cloud based on contour validity according to claim 1, characterized in that the derivation in step S4 takes each point of the initial point cloud as a source point and obtains new points in the 26 directions of a cuboid around it, where the new points are computed by the formula:
x_new = x_org + i * x_scalar, i ∈ {-1, 0, 1}
y_new = y_org + j * y_scalar, j ∈ {-1, 0, 1}
z_new = z_org + k * z_scalar, k ∈ {-1, 0, 1}
wherein x_org, y_org, z_org are the coordinates of a point of the initial point cloud on the x, y and z axes respectively, and x_scalar, y_scalar, z_scalar are the derivation scales computed for the three directions x, y, z;
the formula above yields 3*3*3 = 27 new point coordinates; excluding the case in which the coordinate increment relative to the source point is (0, 0, 0), it derives the 26 new points of the point cloud described in step S4.
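The 26-direction derivation of claim 3 enumerates all offset combinations in {-1, 0, 1}^3 and skips the zero offset. A minimal sketch (the function name is an assumption):

```python
def derive_26(source, scale):
    """Generate the 26 cuboid-neighbour points of a source point:
    every (i, j, k) offset in {-1, 0, 1}^3 except (0, 0, 0)."""
    x, y, z = source
    sx, sy, sz = scale  # per-axis derivation scales
    new_points = []
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            for k in (-1, 0, 1):
                if (i, j, k) == (0, 0, 0):
                    continue  # skip the source point itself
                new_points.append((x + i * sx, y + j * sy, z + k * sz))
    return new_points
```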
4. The method for improving the density of a three-dimensional reconstruction point cloud based on contour validity according to claim 1, characterized in that the formula for transforming the derived point cloud into the camera coordinate system of the i-th frame image in step S6 is:
(x_cami, y_cami, z_cami) = (x_world, y_world, z_world) * Ri + ti
wherein (x_world, y_world, z_world) are the coordinates of the derived point cloud in the world coordinate system, and Ri and ti are the rotation matrix and translation vector of the i-th frame camera, respectively; through the transformation by Ri and ti, the point cloud in the world coordinate system is transferred into the i-th frame camera coordinate system, i.e. the coordinates of the transformed point cloud in the i-th frame camera coordinate system are (x_cami, y_cami, z_cami);
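The world-to-camera formula of claim 4 uses a row-vector convention, p_cam = p_world * Ri + ti. A minimal sketch under that convention (the function name is an assumption):

```python
def world_to_camera(p_world, R, t):
    """Row-vector convention from the claim: p_cam = p_world * R + t,
    with R a 3x3 rotation (list of rows) and t a translation 3-vector."""
    return tuple(
        sum(p_world[k] * R[k][j] for k in range(3)) + t[j]
        for j in range(3)
    )
```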
the point cloud in the camera coordinate system is then back-projected, each point being projected onto the i-th frame valid-region map; the formula for the projected position is:
u = (x_cami * f) / z_cami + Cx
v = (y_cami * f) / z_cami + Cy
wherein f is the camera focal length, Cx and Cy are each 0.5 times the image resolution in the corresponding dimension, and the computed u, v give the position of the projected point on the image, i.e. the pixel at row u, column v.
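The projection formula of claim 4 is the standard pinhole model with the principal point at the image centre. A minimal sketch (the function name is an assumption):

```python
def back_project(p_cam, f, cx, cy):
    """Pinhole back-projection of a camera-space point to pixel (u, v),
    with focal length f and principal point (cx, cy) at half the image
    resolution, as in the claim."""
    x, y, z = p_cam
    u = (x * f) / z + cx
    v = (y * f) / z + cy
    return u, v
```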
5. The method for improving the density of a three-dimensional reconstruction point cloud based on contour validity according to claim 1, characterized in that in step S8, after the derived point cloud has been filtered by the full set of around-shot images, it no longer contains stray exterior points, and part of the erroneous points of the initial point cloud have also been filtered out; however, since the derivation is performed in 26 directions, there still exist points located on the interior side of the surface, i.e. interior points.
6. The method for improving the density of a three-dimensional reconstruction point cloud based on contour validity according to claim 1, characterized in that the method of judging in step S9 whether each derived point is an interior point or an exterior point is determined by a dot product: since the normal vector of a source point points to the outside of the tangent plane of the object surface, for each derived point the vector from its derivation source point to the derived point is computed; if the dot product of this vector with the normal vector of the derivation source point is positive, the derived point is an exterior point and is retained; if the dot product of this vector with the normal vector of the derivation source point is negative, the angle between the vector from the derivation source point to the derived point and the normal vector exceeds 90 degrees, so the point is an interior point and is deleted.
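The interior/exterior test of claim 6 reduces to the sign of one dot product. A minimal sketch (the function name is an assumption; the normal is assumed already computed for the source point):

```python
def is_exterior(source, normal, derived):
    """Claim-6 test: the derived point is exterior iff the vector from
    its derivation source point to it has a positive dot product with
    the source point's outward surface normal."""
    v = [derived[i] - source[i] for i in range(3)]
    dot = sum(v[i] * normal[i] for i in range(3))
    return dot > 0  # positive -> exterior (keep); negative -> interior (delete)
```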
7. The method for improving the density of a three-dimensional reconstruction point cloud based on contour validity according to claim 1, characterized in that the method of extracting the object contour described in step S1 uses the SIFT algorithm, and the valid-region map finally obtained is a binary image.
8. The method for improving the density of a three-dimensional reconstruction point cloud based on contour validity according to claim 1, characterized in that the three-dimensional reconstruction step described in step S2 uses a three-dimensional reconstruction algorithm, and the transformation matrix M is the combination of the rotation matrix R and the translation vector t, M = [R | t].
9. The method for improving the density of a three-dimensional reconstruction point cloud based on contour validity according to claim 1, characterized in that the method of verifying in step S7 whether the projected position of a derived point lies in the valid region of the valid-region map is: first find, through back projection, the coordinates of the derived point on the image, then check whether the pixel value at that position in the corresponding i-th frame valid-region map is 255; if the pixel value is 255, the point is retained, otherwise the point is deleted.
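The validity check of claim 9 is a single lookup in the binary valid-region map. A minimal sketch (the function name and the explicit bounds guard, which the claim leaves implicit, are assumptions):

```python
def in_valid_region(mask, u, v):
    """Claim-9 check: a projected point survives only if the binary
    valid-region map holds the value 255 at its pixel position.
    mask is stored as rows of pixel values; (u, v) index column and row."""
    h, w = len(mask), len(mask[0])
    return 0 <= v < h and 0 <= u < w and mask[v][u] == 255
```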
10. The method for improving the density of a three-dimensional reconstruction point cloud based on contour validity according to claim 1, characterized in that in step S10 the density of the point cloud can be chosen freely, up to the order of millions of points, and the iteration can be stopped once the density meets the requirement.
CN201610298507.9A 2016-05-06 2016-05-06 Method for improving the density of a three-dimensional reconstruction point cloud based on contour validity Active CN106023303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610298507.9A CN106023303B (en) 2016-05-06 2016-05-06 Method for improving the density of a three-dimensional reconstruction point cloud based on contour validity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610298507.9A CN106023303B (en) 2016-05-06 2016-05-06 Method for improving the density of a three-dimensional reconstruction point cloud based on contour validity

Publications (2)

Publication Number Publication Date
CN106023303A true CN106023303A (en) 2016-10-12
CN106023303B CN106023303B (en) 2018-10-26

Family

ID=57081878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610298507.9A Active CN106023303B (en) Method for improving the density of a three-dimensional reconstruction point cloud based on contour validity

Country Status (1)

Country Link
CN (1) CN106023303B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570507A (en) * 2016-10-26 2017-04-19 北京航空航天大学 Multi-angle consistent plane detection and analysis method for monocular video scene three dimensional structure
CN106887043A (en) * 2017-03-08 2017-06-23 景致三维(江苏)股份有限公司 The method of the method, device and three-dimensional modeling of the removal of three-dimensional modeling exterior point
CN108655571A (en) * 2018-05-21 2018-10-16 广东水利电力职业技术学院(广东省水利电力技工学校) A kind of digital-control laser engraving machine, control system and control method, computer
CN108735279A (en) * 2018-06-21 2018-11-02 广西虚拟现实科技有限公司 A kind of virtual reality headstroke rehabilitation training of upper limbs system and control method
CN108846896A (en) * 2018-06-29 2018-11-20 南华大学 A kind of automatic molecule protein molecule body diagnostic system
CN109081524A (en) * 2018-09-25 2018-12-25 江西理工大学 A kind of intelligence mineral processing waste water reuse change system, detection method
CN109293434A (en) * 2018-11-15 2019-02-01 关静 A kind of agricultural environmental protection base manure and preparation method
CN110070608A (en) * 2019-04-11 2019-07-30 浙江工业大学 A method of being automatically deleted the three-dimensional reconstruction redundant points based on image
CN110888143A (en) * 2019-10-30 2020-03-17 中铁四局集团第五工程有限公司 Bridge through measurement method based on unmanned aerial vehicle airborne laser radar
CN111433821A (en) * 2017-10-06 2020-07-17 交互数字Vc控股公司 Method and apparatus for reconstructing a point cloud representing a 3D object
WO2020155159A1 (en) * 2019-02-02 2020-08-06 深圳市大疆创新科技有限公司 Method for increasing point cloud sampling density, point cloud scanning system, and readable storage medium
CN111553985A (en) * 2020-04-30 2020-08-18 四川大学 Adjacent graph pairing type European three-dimensional reconstruction method and device
CN112106105A (en) * 2017-12-22 2020-12-18 兹威达公司 Method and system for generating three-dimensional image of object
CN113436242A (en) * 2021-07-22 2021-09-24 西安电子科技大学 Method for acquiring high-precision depth value of static object based on mobile depth camera

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271591A (en) * 2008-04-28 2008-09-24 清华大学 Interactive multi-vision point three-dimensional model reconstruction method
CN101533529A (en) * 2009-01-23 2009-09-16 北京建筑工程学院 Range image-based 3D spatial data processing method and device
US7843448B2 (en) * 2005-11-21 2010-11-30 Leica Geosystems Ag Identification of occluded edge regions from 3D point data
CN102592284A (en) * 2012-02-27 2012-07-18 上海交通大学 Method for transforming part surface appearance three-dimensional high-density point cloud data into grayscale image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7843448B2 (en) * 2005-11-21 2010-11-30 Leica Geosystems Ag Identification of occluded edge regions from 3D point data
CN101271591A (en) * 2008-04-28 2008-09-24 清华大学 Interactive multi-vision point three-dimensional model reconstruction method
CN101533529A (en) * 2009-01-23 2009-09-16 北京建筑工程学院 Range image-based 3D spatial data processing method and device
CN102592284A (en) * 2012-02-27 2012-07-18 上海交通大学 Method for transforming part surface appearance three-dimensional high-density point cloud data into grayscale image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SOON-WOOK KWON等: "Fitting range data to primitives for rapid local 3D modeling using sparse range point clouds", 《AUTOMATION IN CONSTRUCTION》 *
YXYXYXYX41: "实验三 插值", 《百度文库 HTTPS://WENKU.BAIDU.COM/VIEW/1FB468E8F8C75FBFC77DB2AB.HTML》 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570507B (en) * 2016-10-26 2019-12-27 北京航空航天大学 Multi-view-angle consistent plane detection and analysis method for monocular video scene three-dimensional structure
CN106570507A (en) * 2016-10-26 2017-04-19 北京航空航天大学 Multi-angle consistent plane detection and analysis method for monocular video scene three dimensional structure
CN106887043A (en) * 2017-03-08 2017-06-23 景致三维(江苏)股份有限公司 The method of the method, device and three-dimensional modeling of the removal of three-dimensional modeling exterior point
CN111433821B (en) * 2017-10-06 2023-10-20 交互数字Vc控股公司 Method and apparatus for reconstructing a point cloud representing a 3D object
CN111433821A (en) * 2017-10-06 2020-07-17 交互数字Vc控股公司 Method and apparatus for reconstructing a point cloud representing a 3D object
CN112106105B (en) * 2017-12-22 2024-04-05 兹威达公司 Method and system for generating three-dimensional image of object
CN112106105A (en) * 2017-12-22 2020-12-18 兹威达公司 Method and system for generating three-dimensional image of object
CN108655571A (en) * 2018-05-21 2018-10-16 广东水利电力职业技术学院(广东省水利电力技工学校) A kind of digital-control laser engraving machine, control system and control method, computer
CN108735279B (en) * 2018-06-21 2022-04-12 广西虚拟现实科技有限公司 Virtual reality upper limb rehabilitation training system for stroke in brain and control method
CN108735279A (en) * 2018-06-21 2018-11-02 广西虚拟现实科技有限公司 A kind of virtual reality headstroke rehabilitation training of upper limbs system and control method
CN108846896A (en) * 2018-06-29 2018-11-20 南华大学 A kind of automatic molecule protein molecule body diagnostic system
CN109081524A (en) * 2018-09-25 2018-12-25 江西理工大学 A kind of intelligence mineral processing waste water reuse change system, detection method
CN109293434A (en) * 2018-11-15 2019-02-01 关静 A kind of agricultural environmental protection base manure and preparation method
WO2020155159A1 (en) * 2019-02-02 2020-08-06 深圳市大疆创新科技有限公司 Method for increasing point cloud sampling density, point cloud scanning system, and readable storage medium
CN110070608A (en) * 2019-04-11 2019-07-30 浙江工业大学 A method of being automatically deleted the three-dimensional reconstruction redundant points based on image
CN110070608B (en) * 2019-04-11 2023-03-31 浙江工业大学 Method for automatically deleting three-dimensional reconstruction redundant points based on images
CN110888143A (en) * 2019-10-30 2020-03-17 中铁四局集团第五工程有限公司 Bridge through measurement method based on unmanned aerial vehicle airborne laser radar
CN110888143B (en) * 2019-10-30 2022-09-13 中铁四局集团第五工程有限公司 Bridge through measurement method based on unmanned aerial vehicle airborne laser radar
CN111553985A (en) * 2020-04-30 2020-08-18 四川大学 Adjacent graph pairing type European three-dimensional reconstruction method and device
CN113436242A (en) * 2021-07-22 2021-09-24 西安电子科技大学 Method for acquiring high-precision depth value of static object based on mobile depth camera
CN113436242B (en) * 2021-07-22 2024-03-29 西安电子科技大学 Method for obtaining high-precision depth value of static object based on mobile depth camera

Also Published As

Publication number Publication date
CN106023303B (en) 2018-10-26

Similar Documents

Publication Publication Date Title
CN106023303A (en) Method for improving three-dimensional reconstruction point-cloud density on the basis of contour validity
CN111968129B (en) Instant positioning and map construction system and method with semantic perception
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN108520554B (en) Binocular three-dimensional dense mapping method based on ORB-SLAM2
CN106683173B (en) Method for improving the density of a three-dimensional reconstruction point cloud based on neighborhood block matching
CN104869387B (en) Method for acquiring binocular image maximum parallax based on optical flow method
CN109215117B (en) Flower three-dimensional reconstruction method based on ORB and U-net
CN107833270A (en) Real-time object dimensional method for reconstructing based on depth camera
CN108198145A (en) For the method and apparatus of point cloud data reparation
CN111932678B (en) Multi-view real-time human motion, gesture, expression and texture reconstruction system
CN108734776A (en) A kind of three-dimensional facial reconstruction method and equipment based on speckle
CN110599540A (en) Real-time three-dimensional human body shape and posture reconstruction method and device under multi-viewpoint camera
CN110223377A (en) One kind being based on stereo visual system high accuracy three-dimensional method for reconstructing
US20170278302A1 (en) Method and device for registering an image to a model
US20210044787A1 (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, and computer
CN102902355A (en) Space interaction method of mobile equipment
CN106875443B (en) The whole pixel search method and device of 3-dimensional digital speckle based on grayscale restraint
CN109754459B (en) Method and system for constructing human body three-dimensional model
CN106600632A (en) Improved matching cost aggregation stereo matching algorithm
CN110516639B (en) Real-time figure three-dimensional position calculation method based on video stream natural scene
CN106023230A (en) Dense matching method suitable for deformed images
CN106500625A (en) A kind of telecentricity stereo vision measuring apparatus and its method for being applied to the measurement of object dimensional pattern micron accuracies
CN111105451B (en) Driving scene binocular depth estimation method for overcoming occlusion effect
JP6285686B2 (en) Parallax image generation device
CN102663812B (en) Direct method of three-dimensional motion detection and dense structure reconstruction based on variable optical flow

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant