CN106023303B - Method for improving the density of three-dimensional reconstruction point clouds based on contour validity

Method for improving the density of three-dimensional reconstruction point clouds based on contour validity

Info

Publication number
CN106023303B
CN106023303B (application CN201610298507.9A)
Authority
CN
China
Prior art keywords
point
point cloud
cloud
derivation
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610298507.9A
Other languages
Chinese (zh)
Other versions
CN106023303A (en)
Inventor
宋锐
李星霓
田野
贾媛
李云松
王养利
许全优
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201610298507.9A priority Critical patent/CN106023303B/en
Publication of CN106023303A publication Critical patent/CN106023303A/en
Application granted granted Critical
Publication of CN106023303B publication Critical patent/CN106023303B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for improving the density of three-dimensional reconstruction point clouds based on contour validity, comprising: 1. extracting the contour of the object and generating a corresponding sequence of valid-region maps; 2. computing the derivation scales of the point cloud along the x, y and z axes; 3. extending each point of the initial point cloud to obtain a derived point cloud; 4. transforming the derived point cloud into each camera coordinate system and back-projecting it onto the valid-region maps, retaining the points that fall inside the valid regions; 5. computing, for each derived point, the dot product of the vector from its source point to the derived point with the normal vector of the point, and retaining the points whose dot product is greater than zero; 6. checking whether the density of the derived point cloud meets the requirement; if not, taking the derived point cloud as the new initial point cloud and repeating from step 2 until the requirement is met. The invention is not restricted to a particular surround-shot image sequence, does not depend heavily on parameter tuning, and improves the density of the valid point cloud in a relatively short time at a relatively low computational cost.

Description

Method for improving the density of three-dimensional reconstruction point clouds based on contour validity
Technical field
The present invention relates to the field of computer vision, and in particular to a method for improving the density of three-dimensional reconstruction point clouds based on contour validity, which can effectively densify a point cloud in a relatively short time.
Background art
Computer vision spans multiple disciplines. It is the inverse of the camera imaging process, and its scope is broad, covering object detection and recognition, edge extraction, feature extraction, three-dimensional reconstruction, and more. Three-dimensional reconstruction, an image-based modeling technique, has drawn attention since its inception: from just two adjacent frames it can recover, fairly accurately, the three-dimensional relationship between the matched feature points and the cameras. In this process, the number of matched feature points directly determines the quality of the point cloud obtained by reconstruction, and hence the quality of the reconstructed model.
Common three-dimensional reconstruction methods fall into three classes. (1) Stereo vision. This class imitates the way the human visual system perceives three-dimensional objects: the same scene is imaged by two or more cameras at different positions, the disparity map between two frames is computed and converted into a depth map, and the depth information of the object is obtained. The geometric model files produced this way are usually small and easy to use in virtual reality. However, the method must cope with sparse object features: when the texture is flat, the computed disparity map contains large blank regions and the density of the point cloud is very low. (2) Structure from motion. For a rigid object, every point of it undergoes the same motion between two frames, wherever the camera moves during the surround shot. By extracting and matching several pairs of feature points between two frames, the transformation matrix of the object's motion can be computed; the transformation matrix determines the relative position of the two cameras, and the coordinates of the feature points in the world coordinate system can then be recovered by the pinhole imaging principle. This approach is comparatively mature: with calibrated camera intrinsics, the camera motion can be computed and a sparse point cloud can be processed into a denser one, recovering a fairly accurate three-dimensional model. However, it requires many matched feature points between adjacent frames, so in texture-flat regions the number of valid points is small. (3) Depth-image-based methods. From the RGB image and depth map of each frame, a point cloud of the object in the current camera coordinate system can be generated; the two point clouds generated from the RGBD images of adjacent frames are matched, the transformation matrix between the two cameras is computed, and the two point clouds are fused into the world coordinate system. The point clouds computed this way are accurate and dense. However, the method needs the assistance of a depth camera and is very sensitive to the precision of the depth map; in large-scale reconstruction the precision of the depth camera is always limited, and it directly bounds the precision of the reconstructed point cloud.
Among these methods, stereo vision needs little computation, but the disparity map has blank regions wherever the image texture is flat, so the density of the resulting point cloud is very low. Structure from motion is more broadly applicable and includes a derivation from a sparse point cloud to a dense one, but the density of the resulting cloud still depends on the texture complexity of the images; for texture-flat images the density remains low. Depth-image-based reconstruction is more precise and places no demand on image texture, but it is highly sensitive to depth-camera precision and is currently unsuitable for large-scale object reconstruction.
In view of these problems, a new method is needed that improves the density of the valid point cloud and makes the resulting cloud overcome, to some extent, the effect of flat texture.
Summary of the invention
Addressing the above shortcomings of the prior art, the present invention provides a method for improving the density of three-dimensional reconstruction point clouds based on contour validity. It is not restricted to a particular surround-shot image sequence, does not depend heavily on parameter tuning, improves the density of the valid point cloud in a relatively short time at a relatively low computational cost, and deletes erroneous points from the original cloud, so that the resulting point cloud overcomes, to some extent, the effect of flat texture.
The method of the present invention preprocesses the reconstruction image sequence to obtain a corresponding group of valid-region maps. The initial point cloud is then extended into an expanded cloud. Using the transformation matrices obtained during the computation of the initial point cloud, the expanded cloud is back-projected onto each frame's valid-region map, and every point whose back-projection falls outside the valid region is deleted. These operations are repeated for the whole surround-shot image sequence, giving a once-expanded cloud. The points lying inside the object must then be deleted so that only exterior points remain: points below the object surface are removed according to the normal vectors of the initial point cloud, completing the interior-point deletion. The once-expanded cloud is used as the new initial point cloud and extended again, with interior points filtered out after the back-projection deletion, giving an even denser cloud. The extension is repeated, and once the cloud is dense enough the extension stops. The point cloud obtained at that stage has very high density, does not depend on parameter tuning, and overcomes the effect of flat texture.
To solve the problems in the prior art, the specific technical solution adopted by the present invention is:
A method for improving the density of three-dimensional reconstruction point clouds based on contour validity, comprising the following steps:
S1. Obtain a surround-shot image sequence with a camera; for each frame, extract the object contour, set the pixel values inside the contour to 255 and those outside to 0, and obtain a binary image called the valid-region map (a mask-construction sketch follows this step list);
S2. Run a three-dimensional reconstruction step on the surround-shot image sequence to obtain a very low-density point cloud, called the initial point cloud, together with the rotation matrix R and translation vector t of each frame's camera relative to the world coordinate system; the rotation matrix and translation vector combine into the transformation matrix M;
S3. Traverse every point of the initial point cloud, find the maximum and minimum coordinate values over all points on each of the x, y and z axes, and compute the difference between the maximum and minimum on each axis, denoted x_dis, y_dis and z_dis respectively; divide each of the three differences by 100 to obtain three quantities called the derivation scales of the initial point cloud, denoted x_scalar, y_scalar and z_scalar;
S4. Take a point of the initial point cloud as the source point and extend it along the positive and negative directions of the x, y and z axes by the corresponding derivation scales computed in S3, obtaining a cuboid centered on the source point whose length, width and height are 2*x_scalar, 2*y_scalar and 2*z_scalar; from the source point at the center, extend toward the surroundings of the cuboid in 26 directions in total, deriving one new point in each direction; give each new point the same normal vector as the source point, and record the source point of each derived point;
S5. Perform the derivation of step S4 once for every point of the initial point cloud, obtaining a derived point cloud whose number of points is 26 times that of the initial point cloud;
S6. For the i-th frame of the surround-shot image sequence, take its transformation matrix M_i computed in step S2, transform the derived point cloud of step S5 into the corresponding camera coordinate system according to M_i, and back-project the derived cloud onto the i-th frame's valid-region map obtained in step S1 according to the projection model;
S7. Following step S6, delete from the derived point cloud every point that projects into the invalid region of the i-th frame's valid-region map, and retain every point that projects into the valid region;
S8. Perform the operations of steps S6 and S7 for every frame of the surround-shot image sequence; after projecting and deleting against the whole sequence, a derived point cloud that still contains interior points is obtained;
S9. Traverse every derived point of the point cloud obtained in step S8 and judge whether it is an interior or an exterior point; delete the interior points and retain the exterior points; the retained cloud is the valid point cloud after one derivation;
S10. Count the points of the valid point cloud obtained in step S9. If the density meets the requirement, this valid point cloud is the final cloud; if not, take it as the new initial point cloud and repeat from step S3 until the valid point cloud obtained satisfies the density requirement.
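As an illustration of step S1, the following is a minimal sketch of valid-region-map construction in Python with OpenCV. The Otsu thresholding used to find the silhouette and all names (e.g. valid_region_map) are illustrative assumptions, since the patent only requires some contour extractor.

import cv2
import numpy as np

def valid_region_map(image_bgr):
    """Step S1 sketch: build a binary map whose pixels are 255 inside the
    extracted object contour and 0 outside."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Placeholder silhouette extraction; the patent leaves the contour
    # extractor open, so Otsu thresholding stands in for it here.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)  # fill contour interior
    return mask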
In a preferred scheme, each derivation scale of step S3 is one hundredth of the difference between the maximum and minimum coordinates of all points of the initial point cloud on the corresponding x, y or z axis. This ratio is an empirical value obtained through repeated experiments and need not be adjusted for a particular application.
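A minimal numpy sketch of this scale computation (the function name is illustrative):

import numpy as np

def derivation_scales(points):
    """Step S3 sketch: points is an (N, 3) array of initial-cloud
    coordinates; returns (x_scalar, y_scalar, z_scalar), each one
    hundredth of the bounding-box extent on its axis."""
    dis = points.max(axis=0) - points.min(axis=0)  # (x_dis, y_dis, z_dis)
    return dis / 100.0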
In a further preferred scheme, in step S4 a point of the initial point cloud is taken as the source point and derived toward the 26 directions of the cuboid to obtain the new points, the new points being computed as:
(x_new, y_new, z_new) = (x_org + i*x_scalar, y_org + j*y_scalar, z_org + k*z_scalar), with i, j, k ∈ {-1, 0, 1},
where x_org, y_org and z_org are the coordinates of a point of the initial point cloud on the x, y and z axes, and x_scalar, y_scalar and z_scalar are the computed derivation scales of the three directions. Excluding the case in which the coordinate increment relative to the source point is (0, 0, 0), the 3*3*3 combinations of the formula above yield the 26 new points described in step S4.
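The 26-direction derivation of steps S4 and S5 can be sketched in numpy as follows; function and variable names are illustrative, and the scales are assumed to come from the previous sketch:

import numpy as np
from itertools import product

def derive_points(points, normals, scales):
    """Steps S4-S5 sketch: extend every source point along the 26 cuboid
    directions; returns the (26N, 3) derived points, their copied
    normals, and each derived point's source index."""
    # The 27 increment combinations minus (0, 0, 0).
    offsets = np.array([d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)])
    new_pts = points[:, None, :] + offsets[None, :, :] * np.asarray(scales)
    src_idx = np.repeat(np.arange(len(points)), 26)  # remember each source point
    return new_pts.reshape(-1, 3), np.repeat(normals, 26, axis=0), src_idx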
In a still more preferred scheme, in step S6 the derived point cloud is transformed into the camera coordinate system of the i-th frame image by:
(x_cam_i, y_cam_i, z_cam_i) = (x_world, y_world, z_world) * R_i + t_i,
where (x_world, y_world, z_world) are the coordinates of a derived point in the world coordinate system and R_i and t_i are the rotation matrix and translation vector of the i-th frame's camera. Through R_i and t_i, the point cloud is transferred from the world coordinate system into the i-th frame's camera coordinate system, in which its coordinates are (x_cam_i, y_cam_i, z_cam_i).
The point cloud in the camera coordinate system is then back-projected, each point being projected onto the i-th frame's valid-region map at the position:
u = f * x_cam_i / z_cam_i + C_x, v = f * y_cam_i / z_cam_i + C_y,
where f is the camera focal length and C_x and C_y are each 0.5 times the corresponding image resolution; the computed u and v give the position of the projected point on the image, i.e. the pixel at row u, column v.
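A numpy sketch of the two formulas above; the names are illustrative, and the row/column convention of u and v is an assumption that must match how the valid-region maps are indexed:

import numpy as np

def project_to_frame(points_world, R_i, t_i, f, Cx, Cy):
    """Step S6 sketch: transform derived points into the i-th camera
    frame ((x, y, z) * R_i + t_i, as above) and back-project them with
    the pinhole model. Returns integer pixel coordinates and the depth."""
    cam = points_world @ R_i + t_i
    u = f * cam[:, 0] / cam[:, 2] + Cx
    v = f * cam[:, 1] / cam[:, 2] + Cy
    return np.round(u).astype(int), np.round(v).astype(int), cam[:, 2]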
In a still further preferred scheme, in step S8, after the derived point cloud has been filtered against the surround-shot images, it no longer contains outlying points, and part of the points repeated in the initial cloud have also been filtered out; but because the derivation uses 26 directions, points lying inside the surface, i.e. interior points, still exist.
In another preferred scheme, the judgment in step S9 of whether each derived point is an interior or an exterior point is made by a dot product. Since the normal vector of a source point points outward from the tangent plane of the object surface, for each derived point the vector from its source point to the derived point is computed. If the dot product of this vector with the normal vector of the derived point's source point is positive, the derived point is an exterior point and is retained; if the dot product is negative, the angle between the vector from the source point to the derived point and the normal vector exceeds 90 degrees, the point is an interior point, and it is deleted.
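A numpy sketch of this dot-product test (names illustrative):

import numpy as np

def keep_exterior(derived_pts, src_pts, src_normals):
    """Step S9 sketch: keep a derived point only if the vector from its
    source point to it points to the same side as the source normal,
    i.e. the dot product is positive (normals point outward)."""
    vec = derived_pts - src_pts
    dots = np.einsum('ij,ij->i', vec, src_normals)  # row-wise dot products
    return dots > 0  # boolean keep-mask over the derived points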
It should be noted that the contour extraction of step S1 may use widely available algorithms such as SIFT, and the resulting valid-region map is a binary image.
It should be noted that the three-dimensional reconstruction step mentioned in step S2 may use any mature three-dimensional reconstruction algorithm; the transformation matrix M is the combination of the rotation matrix R and the translation vector t, M = [R|t].
It should be further noted that in step S7, to verify whether a point's projected position lies in the valid region, the coordinates of the point on the image are first found through the back projection above, and the pixel value at that position in the corresponding i-th frame's valid-region map is checked: if the pixel value is 255 the point is retained, otherwise it is deleted.
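A scalar sketch of this check; indexing the mask as mask[u, v] is an assumption that must agree with the projection convention chosen above:

def in_valid_region(mask, u, v):
    """Step S7 sketch: a projected point survives only if it lands inside
    the image bounds and on a 255 (valid) pixel of the frame's map."""
    h, w = mask.shape
    return 0 <= u < h and 0 <= v < w and mask[u, v] == 255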
It should be noted that in step S8, after the derived point cloud has been filtered against the surround-shot images, it no longer contains outlying points, and part of the erroneous points of the initial cloud have also been filtered out; but because the derivation uses 26 directions, points lying inside the surface still exist.
It should be noted that in step S10 the target density of the point cloud can be chosen freely, up to the order of millions; the iteration stops once the density meets the requirement.
By adopting the above technical solution, the method of the present invention for improving the density of three-dimensional reconstruction point clouds based on contour validity has the following technical effects compared with the prior art:
1. Compared with obtaining point clouds by stereo vision: stereo vision needs image sequences with complex texture; regions without disparity yield no points during reconstruction, and the reconstruction error is affected by the error of the disparity map. The present invention makes no excessive demand on image texture; as long as the supplied initial point cloud is reasonably close to the shape of the real object, most of the information missing from the initial cloud can be recovered to a considerable extent.
2. Compared with obtaining point clouds by structure from motion: the number of points obtained by structure from motion depends on the number of feature points validly matched between adjacent frames, and its derivation from a sparse to a dense cloud is computationally complex. In the present invention, the number of derived points has no direct connection with the image texture, and little is demanded of the initial point cloud; as long as it is reasonably close to the real object, the derivation increases the number of points exactly where the original cloud was sparsely distributed, increasing the number of valid points.
3. Compared with depth-image-based methods: these need a depth map for every frame, are highly sensitive to depth-map accuracy, and the matching between two point clouds uses iterative algorithms with heavy matrix operations, requiring computation on a GPU. The method proposed by the present invention needs no depth maps; as derived points are filtered out, the number of points to be computed keeps shrinking and the computation speeds up, achieving high speed without GPU computation.
Description of the drawings
Fig. 1 shows a group of surround-shot images;
Fig. 2 shows one frame of the surround-shot sequence and the corresponding valid-region map computed by the present invention;
Fig. 3 is the flow chart of the point-cloud quality improvement of the present invention.
Detailed description of the embodiments
To make the purpose, technical solution and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.
As shown in Fig. 3, first choose a surround-shot image sequence, as shown in Fig. 1, and choose a basic three-dimensional reconstruction method to reconstruct from the surround-shot images and obtain the initial point cloud; then, with the initial point cloud, the surround-shot image sequence and the transformation matrix of each frame as input, process them with the method of the present invention as follows:
Step 1: Extract the contour of the object from each frame, fill pixel value 255 in the object region and pixel value 0 in the non-object region, and obtain a group of valid-region maps corresponding to the surround-shot images, as shown in Fig. 2.
Step 2: Traverse the initial point cloud, record the extent of the point coordinates on the x, y and z axes, and take one hundredth of the extent on each axis as the extension scale on that axis.
Step 3: According to the extension scales, take each point of the initial point cloud as a source point and extend it along the increasing and decreasing directions of the scales, 26 directions in total, obtaining 26 new points whose normal vectors equal the source point's normal vector. The new points are computed as
(x_new, y_new, z_new) = (x_org + i*x_scalar, y_org + j*y_scalar, z_org + k*z_scalar), with i, j, k ∈ {-1, 0, 1} and (i, j, k) ≠ (0, 0, 0),
where x_org, y_org and z_org are the coordinates of a point of the initial point cloud on the x, y and z axes, and x_scalar, y_scalar and z_scalar are the computed derivation scales of the three directions.
Step 4: According to the transformation matrix of the first frame, transform the derived point cloud into the corresponding camera coordinate system and back-project it onto the first frame's valid-region map. If a point projects onto a pixel of value 255 in the valid-region map, retain it; if it projects onto a pixel of value 0, delete it. Take the cloud after deletion as the new derived cloud and repeat the operation with the transformation matrix of the second frame; proceed frame by frame until all images have been traversed, obtaining the final derived point cloud. The back projection of a point onto image coordinates is
u = f * x_cam / z_cam + C_x, v = f * y_cam / z_cam + C_y,
where f is the camera focal length and C_x and C_y are each 0.5 times the corresponding image resolution; the computed u and v give the position of the projected point on the image, i.e. the pixel at row u, column v.
Step 5: The derived point cloud obtained so far contains no external erroneous points. For each derived point, compute the dot product of the vector from its source point to the derived point with the source point's normal vector. If the dot product is greater than zero, the derived point lies outside the source point and is retained; if it is less than zero, the derived point lies inside and is an interior point, and it is deleted. This operation removes the interior points generated during derivation.
Step 6: Check whether the density of the derived point cloud after interior-point deletion meets the requirement. If it does not, take the derived cloud as the new initial point cloud and, together with the input transformation-matrix sequence and image sequence, repeat the operations from step 2 onward until the density meets the requirement.
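Putting the earlier sketches together, a hypothetical top-level loop for steps 2 to 6 could look like the following; every name refers to the illustrative helpers above rather than to the patent's own code, and a real implementation would also cap the number of iterations:

import numpy as np

def densify(points, normals, cameras, masks, f, Cx, Cy, target_count):
    """Steps 2-6 sketch: derive, filter against every frame's valid-region
    map, drop interior points, and repeat with the surviving cloud as the
    new initial cloud until it is dense enough. points/normals: (N, 3)
    arrays; cameras: list of (R_i, t_i); masks: list of valid-region maps."""
    while len(points) < target_count:
        scales = derivation_scales(points)                      # step 2
        pts, nrm, src = derive_points(points, normals, scales)  # step 3
        keep = np.ones(len(pts), dtype=bool)
        for (R_i, t_i), mask in zip(cameras, masks):            # step 4
            u, v, _ = project_to_frame(pts, R_i, t_i, f, Cx, Cy)
            h, w = mask.shape
            inside = (u >= 0) & (u < h) & (v >= 0) & (v < w)
            ok = np.zeros(len(pts), dtype=bool)
            ok[inside] = mask[u[inside], v[inside]] == 255
            keep &= ok
        keep &= keep_exterior(pts, points[src], nrm)            # step 5
        points, normals = pts[keep], nrm[keep]                  # step 6: iterate
    return points, normals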
Following the above steps, a point cloud whose density meets the requirement is finally obtained.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (4)

1. A method for improving the density of three-dimensional reconstruction point clouds based on contour validity, characterized in that it comprises the following steps:
S1, obtaining a surround-shot image sequence with a camera, extracting the object contour from each surround-shot frame, setting the pixel values inside the contour to 255 and the pixel values outside the contour to 0, and obtaining a binary image called the valid-region map;
S2, performing a three-dimensional reconstruction step on the surround-shot image sequence to obtain a very low-density point cloud called the initial point cloud, and obtaining at the same time the rotation matrix R and the translation vector t of each frame's camera relative to the world coordinate system, the rotation matrix and the translation vector combining into the transformation matrix M;
S3, traversing every point of the initial point cloud, obtaining the maximum and minimum values of all point coordinates on the x, y and z axes, computing the difference between the maximum and the minimum on each axis, denoted x_dis, y_dis and z_dis respectively, and dividing each of the three differences by 100 to obtain three quantities called the derivation scales of the initial point cloud, denoted x_scalar, y_scalar and z_scalar;
S4, taking a point of the initial point cloud as the source point and extending it along the positive and negative directions of the x, y and z axes by the corresponding derivation scales computed in step S3 to obtain a cuboid centered on the source point whose length, width and height are 2*x_scalar, 2*y_scalar and 2*z_scalar, extending from the source point at the center toward the surroundings of the cuboid in 26 directions in total, deriving one new point in each direction, giving each new point the same normal vector as the source point, and recording the source point of each derived point;
S5, performing the derivation of step S4 once for every point of the initial point cloud to obtain a derived point cloud whose number of points is 26 times that of the initial point cloud;
S6, for the i-th frame of the surround-shot image sequence, taking its transformation matrix M_i computed in step S2, transforming the derived point cloud obtained in step S5 into the corresponding camera coordinate system according to M_i, and back-projecting every point of the derived cloud onto the i-th frame's valid-region map obtained in step S1 according to the projection model;
S7, following step S6, deleting from the derived point cloud every point that projects into the invalid region of the i-th frame's valid-region map, and retaining every point that projects into the valid region;
S8, performing the operations of steps S6 and S7 for every frame of the surround-shot image sequence, the projection and deletion against the whole sequence yielding a derived point cloud that still contains interior points;
S9, traversing every derived point of the point cloud obtained in step S8, judging whether each derived point is an interior point or an exterior point, deleting the interior points and retaining the exterior points, the retained cloud being the valid point cloud after one derivation;
S10, counting the points of the valid point cloud obtained in step S9; if the density meets the requirement, this valid point cloud is the final cloud; if the density does not meet the requirement, taking this valid point cloud as the new initial point cloud and repeating from step S3 until the valid point cloud obtained satisfies the density requirement;
each derivation scale of step S3 being one hundredth of the difference between the maximum and minimum coordinates of all points of the initial point cloud on the corresponding x, y or z axis, an empirical value obtained through repeated experiments that need not be adjusted for a particular application;
in step S4, a point of the initial point cloud being taken as the source point and derived toward the 26 directions of the cuboid to obtain the new points, the new points being computed as
(x_new, y_new, z_new) = (x_org + i*x_scalar, y_org + j*y_scalar, z_org + k*z_scalar), with i, j, k ∈ {-1, 0, 1},
where x_org, y_org and z_org are the coordinates of a point of the initial point cloud on the x, y and z axes, and x_scalar, y_scalar and z_scalar are the computed derivation scales of the three directions; excluding the case in which the coordinate increment relative to the source point is (0, 0, 0), the 3*3*3 combinations of the formula above yield the 26 new points described in step S4;
in step S6, the derived point cloud being transformed into the camera coordinate system of the i-th frame image by
(x_cam_i, y_cam_i, z_cam_i) = (x_world, y_world, z_world) * R_i + t_i,
where (x_world, y_world, z_world) are the coordinates of a derived point in the world coordinate system and R_i and t_i are the rotation matrix and translation vector of the i-th frame's camera; through R_i and t_i the point cloud is transferred from the world coordinate system into the i-th frame's camera coordinate system, in which its coordinates are (x_cam_i, y_cam_i, z_cam_i);
the point cloud in the camera coordinate system then being back-projected, each point being projected onto the i-th frame's valid-region map at the position
u = f * x_cam_i / z_cam_i + C_x, v = f * y_cam_i / z_cam_i + C_y,
where f is the camera focal length and C_x and C_y are each 0.5 times the corresponding image resolution, the computed u and v giving the position of the projected point on the image, i.e. the pixel at row u, column v;
in step S8, after the derived point cloud has been filtered against the surround-shot images, the derived cloud no longer containing outlying points, part of the erroneous points of the initial cloud having also been filtered out, but, the derivation using 26 directions, points lying inside the surface, i.e. interior points, still existing;
the judgment in step S9 of whether each derived point is an interior point or an exterior point being made by a dot product: the normal vector of a source point pointing outward from the tangent plane of the object surface, the vector from the derived point's source point to the derived point is computed for each derived point; if the dot product of this vector with the normal vector of the derived point's source point is positive, the derived point is an exterior point and is retained; if the dot product is negative, the angle between the vector from the source point to the derived point and the normal vector exceeds 90 degrees, the derived point is an interior point, and it is deleted;
the contour extraction described in step S1 using the SIFT algorithm, the finally obtained valid-region map being a binary image.
2. The method for improving the density of three-dimensional reconstruction point clouds based on contour validity according to claim 1, characterized in that the three-dimensional reconstruction step described in step S2 uses a three-dimensional reconstruction algorithm, and the transformation matrix M is the combination of the rotation matrix R and the translation vector t, M = [R|t].
3. The method for improving the density of three-dimensional reconstruction point clouds based on contour validity according to claim 1, characterized in that in step S7 the method of verifying whether the projected position of a derived point lies in the valid region of the valid-region map is: first finding, through the back projection, the coordinates of the derived point on the image, then checking whether the pixel value at that position of the corresponding i-th frame's valid-region map is 255; if the pixel value is 255 the point is retained, otherwise the point is deleted.
4. The method for improving the density of three-dimensional reconstruction point clouds based on contour validity according to claim 1, characterized in that in step S10 the target density of the point cloud can be chosen freely, and the iteration stops once the density meets the requirement.
CN201610298507.9A 2016-05-06 2016-05-06 Method for improving the density of three-dimensional reconstruction point clouds based on contour validity Active CN106023303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610298507.9A CN106023303B (en) 2016-05-06 2016-05-06 Method for improving the density of three-dimensional reconstruction point clouds based on contour validity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610298507.9A CN106023303B (en) 2016-05-06 2016-05-06 Method for improving the density of three-dimensional reconstruction point clouds based on contour validity

Publications (2)

Publication Number Publication Date
CN106023303A CN106023303A (en) 2016-10-12
CN106023303B (en) 2018-10-26

Family

ID=57081878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610298507.9A Active CN106023303B (en) Method for improving the density of three-dimensional reconstruction point clouds based on contour validity

Country Status (1)

Country Link
CN (1) CN106023303B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570507B (en) * 2016-10-26 2019-12-27 北京航空航天大学 Multi-view-angle consistent plane detection and analysis method for monocular video scene three-dimensional structure
CN106887043A (en) * 2017-03-08 2017-06-23 景致三维(江苏)股份有限公司 Method and device for removing exterior points in three-dimensional modeling, and three-dimensional modeling method
EP3467789A1 (en) * 2017-10-06 2019-04-10 Thomson Licensing A method and apparatus for reconstructing a point cloud representing a 3d object
GB2569656B (en) * 2017-12-22 2020-07-22 Zivid Labs As Method and system for generating a three-dimensional image of an object
CN108655571A (en) * 2018-05-21 2018-10-16 广东水利电力职业技术学院(广东省水利电力技工学校) Numerically-controlled laser engraving machine, control system, control method, and computer
CN108735279B (en) * 2018-06-21 2022-04-12 广西虚拟现实科技有限公司 Virtual reality upper-limb rehabilitation training system for stroke patients and control method
CN108846896A (en) * 2018-06-29 2018-11-20 南华大学 Automatic molecular protein diagnosis system
CN109081524A (en) * 2018-09-25 2018-12-25 江西理工大学 Intelligent mineral-processing wastewater reuse system and detection method
CN109293434A (en) * 2018-11-15 2019-02-01 关静 Environmentally friendly agricultural base fertilizer and preparation method
CN111819602A (en) * 2019-02-02 2020-10-23 深圳市大疆创新科技有限公司 Method for increasing point cloud sampling density, point cloud scanning system and readable storage medium
CN110070608B (en) * 2019-04-11 2023-03-31 浙江工业大学 Method for automatically deleting three-dimensional reconstruction redundant points based on images
CN110888143B (en) * 2019-10-30 2022-09-13 中铁四局集团第五工程有限公司 Bridge through measurement method based on unmanned aerial vehicle airborne laser radar
CN111553985B (en) * 2020-04-30 2023-06-13 四川大学 Euclidean three-dimensional reconstruction method and device based on O-graph pairing
CN113436242B (en) * 2021-07-22 2024-03-29 西安电子科技大学 Method for obtaining high-precision depth value of static object based on mobile depth camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7843448B2 (en) * 2005-11-21 2010-11-30 Leica Geosystems Ag Identification of occluded edge regions from 3D point data
CN101271591A * 2008-04-28 2008-09-24 清华大学 Interactive multi-viewpoint three-dimensional model reconstruction method
CN101533529A (en) * 2009-01-23 2009-09-16 北京建筑工程学院 Range image-based 3D spatial data processing method and device
CN102592284A (en) * 2012-02-27 2012-07-18 上海交通大学 Method for transforming part surface appearance three-dimensional high-density point cloud data into grayscale image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fitting range data to primitives for rapid local 3D modeling using sparse range point clouds; Soon-Wook Kwon et al.; Automation in Construction; 2004-12-31; pp. 67-81 *
Experiment 3: Interpolation (实验三 插值); yxyxyxyx41; Baidu Wenku, https://wenku.baidu.com/view/1fb468e8f8c75fbfc77db2ab.html; 2014-04-29; p. 2 *

Also Published As

Publication number Publication date
CN106023303A (en) 2016-10-12

Similar Documents

Publication Publication Date Title
CN106023303B (en) Method for improving the density of three-dimensional reconstruction point clouds based on contour validity
CN103345736B (en) Virtual viewpoint rendering method
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN106683173B (en) Method for improving the density of three-dimensional reconstruction point clouds based on neighborhood block matching
CN103810685B (en) Super-resolution processing method for depth maps
CN107833270A (en) Real-time three-dimensional object reconstruction method based on a depth camera
CN105279789B (en) Three-dimensional reconstruction method based on image sequences
CN109754459B (en) Method and system for constructing human body three-dimensional model
KR20180054487A (en) Method and device for processing dvs events
CN107025660B (en) Method and device for determining image parallax of binocular dynamic vision sensor
CN106023230B (en) Dense matching method suitable for deformed images
CN103456043B (en) Panorama-based inter-viewpoint roaming method and device
CN110567441B (en) Particle-filter-based positioning method, positioning device, and mapping and positioning method
CN106778660B (en) Face pose correction method and device
CN110310331A (en) Pose estimation method combining line features with point cloud features
CN111563952A (en) Method and system for realizing stereo matching based on phase information and spatial texture characteristics
CN111325828B (en) Three-dimensional face acquisition method and device based on three-dimensional camera
JP6285686B2 (en) Parallax image generation device
CN110516639B (en) Real-time person three-dimensional position calculation method for natural-scene video streams
CN116310131A (en) Three-dimensional reconstruction method considering multi-view fusion strategy
Hung et al. Multipass hierarchical stereo matching for generation of digital terrain models from aerial images
CN112102504A (en) Three-dimensional scene and two-dimensional image mixing method based on mixed reality
CN110148168B (en) Trinocular-camera depth image processing method based on large and small dual baselines
CN108090930A (en) Obstacle vision detection system and method based on a binocular stereo camera
CN113808185B (en) Image depth recovery method, electronic device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant