CN101271590A - Method for acquiring cam contour object shape - Google Patents


Info

Publication number
CN101271590A
CN101271590A
Authority
CN
China
Prior art keywords
point
image
video camera
measurand
visual angle
Prior art date
Legal status
Pending
Application number
CNA2008100471841A
Other languages
Chinese (zh)
Inventor
李德华
李清光
赵亮
高岑
董莉萍
石碧莹
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CNA2008100471841A priority Critical patent/CN101271590A/en
Publication of CN101271590A publication Critical patent/CN101271590A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for acquiring the shape of a convex-contour object. A video camera shoots the measured object continuously to obtain a series of object images. After each object image is segmented from the background, a contour image of the object at each camera viewing angle is obtained. At every height of each outer contour image there are several object edge points; these edge points together with the camera center define a section plane. Intersecting this section plane with the measured object yields a cross-section of the object. After computer processing, the contour of the object on this cross-section is obtained, called the residue map of the object on the cross-section. Once the contours of all cross-sections of the object have been obtained, they are superposed spatially along the height direction to give a complete three-dimensional model of the object. The method can acquire many scanning points at once, scans quickly, is safe and reliable in use, and has a wide scanning range: it applies not only to small-sized settings but can also scan objects of larger size.

Description

A method for acquiring the shape of a convex-contour object
Technical field
The present invention applies computer image processing and multimedia technology in an integrated way, combining mechanics, electronics, optics and computer technology. It relates specifically to a method for acquiring the shape of a convex-contour object, which scans the three-dimensional information and the color information of the object surface in a single non-contact pass and reads them into a computer.
Background technology
Acquiring three-dimensional information is one of the essential tasks in image analysis and computer vision. Scholars at home and abroad have done a great deal of work in this field, and some devices have been applied in practice. Current work can be divided into two broad classes by application purpose. One class acquires three-dimensional information within the visual range, as human vision does; it is widely used in industry, robot vision, military reconnaissance, geology and other fields. The other class centers on the photographed object and acquires omnidirectional three-dimensional information of its surface; this is mainly used in machine manufacturing, film and television, and advertising.
For different application settings, objects and purposes there are many methods of acquiring three-dimensional information; the main ones are the stereo parallax method, time-of-flight ranging and structured-light ranging.
The stereo vision method is used in binocular and multi-view vision. It photographs the same target from different positions with cameras and, by the triangulation principle, uses the parallax of corresponding points in the images to compute the three-dimensional information of the target surface within the field of view. Because this method places loose requirements on the application setting, it is widely used. Its key link, however, is also its weakness: finding corresponding points is difficult, the algorithms are complex, and they are time-consuming. Moreover, when the object is complex, various occlusions make it hard to obtain omnidirectional three-dimensional information of the object.
Time-of-flight ranging exploits the constancy of the propagation speed of light, or of sound, in air: the range finder sends a probe beam toward the target, measures the flight time or phase change of the beam after it is reflected, computes the distance the beam has flown, and thus obtains the spatial position of the object surface. Such range finders divide by measurement principle into laser range finders and ultrasonic range finders. The technique has many mature products and is widely applied at large scales, with some applications at small scales as well. Devices of this class require highly accurate pulse detection and time measurement, are complex, and are expensive (especially at small scales). Because each emitted beam can probe only one position on the object surface, scanning the whole surface takes rather long; lasers can moreover pose an environmental hazard.
Structured-light ranging is a ranging technique that uses both images and a controllable light source. Its basic idea is to exploit the geometric information carried by the illumination to help extract the geometric information of the scene. Early systems ranged with structured light points; later came the light-stripe method, the grid method, circular stripes, the cross-line method, the random-texture method and so on. The method projects a light plane with a known structure onto the object surface to produce light stripes, which are then extracted from the captured images; the shape and discontinuities of these stripes constitute an estimate of the relative distance between each visible surface in the scene and the optical center of the camera.
Summary of the invention
The object of the present invention is to provide a method for acquiring the shape of a convex-contour object that offers high precision and low cost and is safe to use and easy to implement.
The method for acquiring the shape of a convex-contour object provided by the invention comprises the following steps:
(1) Calibrate the video camera and obtain its calibration parameter matrix:

$$
M = \begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{pmatrix}
$$
(2) With the camera, capture and save the image sequence image_1 ... image_oi ... image_n of the measured object, one image per viewing angle;
(3) Extract the contour edge of the measured object in the image of each viewing angle;
(4) Obtain and save the image coordinates and corresponding RGB color values of the points in all the above image sequences;
(5) For a viewing angle oi, oi = 1~n, find the contour image image_oi of the measured object at this angle; at an image height v, find all the image edge points S_q, q = 1, 2, ..., r, lying at this height, where r is the number of contour edge points at height v; according to formula (I), obtain the r tangent lines T_q and all intersection points of each T_q with the outer bounding box of the measured object, and save them;

$$
\begin{cases}
X_w = Z_w\,\dfrac{l_7 l_4 - l_1}{l_7 l_5 - l_2} + \dfrac{l_3 - l_7 l_6}{l_7 l_5 - l_2} \\
Y_w = Z_w\,\dfrac{k_2 - k_7 k_5}{k_7 k_4 - k_1} + \dfrac{k_3 - k_7 k_6}{k_7 k_4 - k_1} \\
Z_w = Z_w
\end{cases} \tag{I}
$$

where

$$
\begin{aligned}
&k_1 = m_{12} - u\,m_{32}, \quad k_2 = m_{13} - u\,m_{33}, \quad k_3 = m_{14} - u\,m_{34}, \\
&k_4 = m_{22} - v\,m_{32}, \quad k_5 = m_{23} - v\,m_{33}, \quad k_6 = m_{24} - v\,m_{34}, \quad
k_7 = \frac{u\,m_{31} - m_{11}}{v\,m_{31} - m_{21}}; \\
&l_1 = u\,m_{33} - m_{13}, \quad l_2 = m_{11} - u\,m_{31}, \quad l_3 = m_{14} - u\,m_{34}, \\
&l_4 = v\,m_{33} - m_{23}, \quad l_5 = m_{21} - v\,m_{31}, \quad l_6 = m_{24} - v\,m_{34}, \quad
l_7 = \frac{m_{12} - u\,m_{32}}{m_{22} - v\,m_{32}};
\end{aligned}
$$

here $(X_{wg}, Y_{wg}, Z_{wg})$ are the coordinates of the g-th point in space, $(u_g, v_g)$ are the image coordinates of the g-th point in the camera, and $m_{ij}$ is the element in row i, column j of the calibration matrix M;
(6) For the q-th tangent line, project its intersection points P_qh, h = 1, 2, ..., d, with the object's outer bounding box (supposing the tangent line has d intersection points with the box) onto each of the other viewing angles; the projection is computed according to formula (II), determined by the calibration matrix;

$$
\begin{cases}
u_i = \dfrac{m_{11} X_{wi} + m_{12} Y_{wi} + m_{13} Z_{wi} + m_{14}}{m_{31} X_{wi} + m_{32} Y_{wi} + m_{33} Z_{wi} + m_{34}} \\
v_i = \dfrac{m_{21} X_{wi} + m_{22} Y_{wi} + m_{23} Z_{wi} + m_{24}}{m_{31} X_{wi} + m_{32} Y_{wi} + m_{33} Z_{wi} + m_{34}}
\end{cases} \tag{II}
$$

where $(X_{wi}, Y_{wi}, Z_{wi})$ are the spatial coordinates of the i-th point in space, $(u_i, v_i)$ are the image coordinates of the i-th spatial point at this viewing angle, and $m_{ij}$ is the element in row i, column j of the calibration matrix M;
(7) Exclude the false tangent-line intersection points;
(8) Repeat steps (6) and (7) until all r tangent lines have been processed;
(9) Repeat steps (5) to (8), processing the edge points at all heights of viewing angle oi in the same way;
(10) Repeat steps (5) to (9), processing the contour images of the other viewing angles (2, 3, ..., n) in the same way;
(11) Superpose all the saved tangent points to obtain the point cloud of the three-dimensional model of the measured object.
This method needs only one to several video cameras as image acquisition equipment and no additional ranging equipment such as lasers or ultrasonic range finders, so its cost is far lower than that of other methods, its control circuitry is very simple, and it is easy to implement. Since no corresponding points need to be sought, the algorithm is more stable than the stereo parallax method, and simple and fast. Many scanning points can be acquired simultaneously, so the scanning speed is high. The method does no harm to the environment or to the scanned object and is safe and reliable. The scanning range is wide: the method applies both to small-sized settings and to scanning objects of larger size.
Description of drawings
Fig. 1 is a schematic diagram of the intersection of the outer bounding box with the human body and with the section plane through row L2 of the CCD camera image, where (a) shows the overlap between the section plane and the voxel cubes and (b) shows the edge pixels;
Fig. 2 shows the image coordinate system;
Fig. 3 is a schematic diagram of the relations among the image coordinate system, the camera coordinate system and the world coordinate system;
Fig. 4 is a flow chart of the method of the invention;
Fig. 5 is a schematic diagram of the principle for eliminating non-tangent points from the cutting rays.
Embodiment
The method of the invention obtains the outer-shell stereo model of a convex measured object, including a human body, from nothing more than a continuously shot image sequence. The invention first shoots the measured object continuously with a video camera, obtaining a series of object images. After each object image is segmented from the background, the contour image of the object at each camera viewing angle is obtained. At every height of each outer contour image there are several object edge points; these edge points together with the camera center define a section plane. Intersecting the section plane with the measured object yields a cross-section of the object. After computer processing we obtain the contour of the object on this cross-section, called the residue map of the object on the cross-section. Once the contours (residue maps) of all cross-sections of the object have been obtained, these object contours are superposed spatially along the height direction, giving the complete three-dimensional information of the object shell and hence the shell stereo model of the object. A high-level sketch of this pipeline is given below.
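Viewed as a whole, the procedure is a silhouette-based reconstruction. The following Python sketch shows the control flow only; the helper tangent_points_on_ray is a hypothetical placeholder for the steps detailed later, not a function defined by the patent.

```python
import numpy as np

def reconstruct_shell(edge_images, calib_matrices):
    """Control flow of the method; tangent_points_on_ray is a hypothetical
    placeholder detailed in the later steps, not code from the patent."""
    cloud = []
    for oi, edges in enumerate(edge_images):          # one binary contour-edge image per view
        for v in range(edges.shape[0]):               # every image height (row), step (5)
            for u in np.flatnonzero(edges[v]):        # contour edge points S_q at this height
                # Steps (5)-(7): back-project (u, v) to a tangent ray, clip it to the
                # outer bounding box, keep candidates consistent with all other views.
                cloud.extend(tangent_points_on_ray(oi, u, v, calib_matrices, edge_images))
    return np.asarray(cloud)                          # step (11): stacked point cloud
```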
Suppose the measured object is enclosed in a large outer bounding box, and the box is cut, parallel to the X-Y-Z coordinate axes, into a number of voxels, as shown in Fig. 1(a). At a given camera viewing angle, by the principle of ray casting, consider a ray that starts from the viewpoint (the camera center), passes through a contour edge pixel on the viewing plane (the outer contour image of the measured object recorded by the camera at this viewpoint) and shoots into the field of view (the bounding box). Such a ray is necessarily tangent to some point of the outer surface of the measured object: it passes through the camera center and an edge point of the object's contour image at this viewpoint, and is a tangent line of the object's outer surface. All voxels in the bounding box intersected by this ray can be determined; these voxels are precisely the discrete representation of the ray.
In addition, by the principle that any three non-collinear points in space determine a plane, the bundle of section planes at a viewpoint can be formed, respectively, by the camera center together with two edge points in each pixel row of the imaging plane. Fig. 1(a) and Fig. 1(b) show, for one viewpoint, a certain cutting plane of the object (a human body is taken as the example here, enclosed in an outer bounding box), the image corresponding to this viewpoint, and their relation to the outer bounding box of the measured object.
As shown in Fig. 1(b), the six points on line L2 represent the edge pixels of the human-body contour in row L2. These six points and the CCD camera center form six rays, and the plane containing these six rays is the section plane corresponding to L2. Our purpose is exactly to obtain, on this section plane, the tangent points where these six rays touch the person's outer surface. All the tangent points together constitute the point cloud of the three-dimensional model of the measured object (here, the human body). Each ray must also be clipped against the outer bounding box, as sketched below.
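Clipping a ray against the outer bounding box is a standard ray/axis-aligned-box intersection. The patent only states that the intersections are computed; the slab method below is an assumed implementation detail, given as a minimal sketch:

```python
import numpy as np

def ray_box_intersections(origin, direction, box_min, box_max):
    """Slab method: entry/exit points of the line origin + t * direction with an
    axis-aligned box, or None if the line misses it (length-3 arrays throughout)."""
    direction = np.where(direction == 0, 1e-12, direction)   # guard against division by zero
    t1 = (box_min - origin) / direction
    t2 = (box_max - origin) / direction
    t_near = np.minimum(t1, t2).max()                        # latest entry across the slabs
    t_far = np.maximum(t1, t2).min()                         # earliest exit across the slabs
    if t_near > t_far:
        return None                                          # the line misses the box
    return origin + t_near * direction, origin + t_far * direction
```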
To describe the optical imaging process quantitatively, we first define the following coordinate systems: the image coordinate system, the camera coordinate system and the world coordinate system.
As shown in Fig. 2, a rectangular coordinate system u, v is defined on the image; the coordinates (u, v) of a pixel are its column and row numbers in the image, so (u, v) are image coordinates in pixel units. The imaging geometry of the camera is shown in Fig. 3, where the point O is the optical center of the camera, the X_c and Y_c axes are parallel to the x and y axes of the image, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane. The intersection of the optical axis with the image plane is the origin of the image coordinate system. The rectangular coordinate system formed by the point O and the X_c, Y_c, Z_c axes is the camera coordinate system; OO1 is the focal length of the camera.
Since the camera may be placed anywhere in the environment, we also choose a reference coordinate system to describe the position of the camera in the environment and the position of any object in it. This coordinate system is called the world coordinate system and is formed by the X_w, Y_w and Z_w axes.
As shown in Fig. 4, the steps of the method of the invention are as follows:
(1) Calibrate the video camera and obtain its calibration parameter matrix:

$$
M = \begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{pmatrix} \tag{1}
$$
The camera may be calibrated by the method described in "Computer Vision: Computational Theory and Algorithmic Foundations" (Ma Songde, Zhang Zhengyou, Beijing: Science Press, 1998).
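The cited textbook method is not reproduced here. As one plausible stand-in, OpenCV's planar-target calibration can supply the intrinsics and per-view extrinsics from which the 3×4 matrix M of formula (1) is assembled; the checkerboard correspondences are assumed inputs.

```python
import cv2
import numpy as np

def calibration_matrix(object_points, image_points, image_size):
    """Estimate M = K [R | t] for the first view from planar-target correspondences.
    object_points / image_points: lists of Nx3 / Nx2 float32 arrays, one per image."""
    _, K, _dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    R, _ = cv2.Rodrigues(rvecs[0])                  # rotation vector -> 3x3 rotation matrix
    Rt = np.hstack([R, tvecs[0].reshape(3, 1)])     # 3x4 extrinsic matrix [R | t]
    return K @ Rt                                   # the 3x4 calibration matrix M of formula (1)
```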
(2) With the camera, capture and save the image sequence of the measured object at each viewing angle (supposing there are n viewing angles and hence n images: image_1 ... image_oi ... image_n).
(3) Extract the contour edge of the measured object in the image of each viewing angle.
This processing may be carried out by the method described in "Color Image Processing and Some Applied Research in the Film and Television Industry" (Huang Jianzhong, PhD dissertation, Huazhong University of Science and Technology, 1997).
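The dissertation's segmentation method is likewise not reproduced; the sketch below substitutes a simple background-difference segmentation with OpenCV, assuming a frame of the empty background is available.

```python
import cv2
import numpy as np

def extract_contour(frame, background, thresh=30):
    """Segment the object from a known background frame; return the binary mask
    and the outer contour (a stand-in for the cited method, not the patent's code)."""
    diff = cv2.absdiff(frame, background)                       # background difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return mask, max(contours, key=cv2.contourArea)             # mask + outer contour points
```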
(4) Obtain and save the image coordinates and corresponding RGB color values of the points in all the above image sequences.
(5) For a viewing angle oi, oi = 1~n, find the contour image image_oi of the measured object at this angle. At an image height (suppose the height is v), find all the image edge points S_q, q = 1, 2, ..., r, lying at this height (supposing there are r contour edge points at height v). According to formula (2), obtain the r tangent lines T_q, q = 1, 2, ..., r, and all intersection points of each T_q with the outer bounding box of the measured object, and save them. A code sketch of this back-projection follows the definitions below.

$$
\begin{cases}
X_w = Z_w\,\dfrac{l_7 l_4 - l_1}{l_7 l_5 - l_2} + \dfrac{l_3 - l_7 l_6}{l_7 l_5 - l_2} \\
Y_w = Z_w\,\dfrac{k_2 - k_7 k_5}{k_7 k_4 - k_1} + \dfrac{k_3 - k_7 k_6}{k_7 k_4 - k_1} \\
Z_w = Z_w
\end{cases} \tag{2}
$$

where

$$
\begin{aligned}
&k_1 = m_{12} - u\,m_{32}, \quad k_2 = m_{13} - u\,m_{33}, \quad k_3 = m_{14} - u\,m_{34}, \\
&k_4 = m_{22} - v\,m_{32}, \quad k_5 = m_{23} - v\,m_{33}, \quad k_6 = m_{24} - v\,m_{34}, \quad
k_7 = \frac{u\,m_{31} - m_{11}}{v\,m_{31} - m_{21}}; \\
&l_1 = u\,m_{33} - m_{13}, \quad l_2 = m_{11} - u\,m_{31}, \quad l_3 = m_{14} - u\,m_{34}, \\
&l_4 = v\,m_{33} - m_{23}, \quad l_5 = m_{21} - v\,m_{31}, \quad l_6 = m_{24} - v\,m_{34}, \quad
l_7 = \frac{m_{12} - u\,m_{32}}{m_{22} - v\,m_{32}}.
\end{aligned}
$$

Here $(X_{wg}, Y_{wg}, Z_{wg})$ are the coordinates of the g-th point in space, $(u_g, v_g)$ are the image coordinates of the g-th point in the camera, and $m_{ij}$ is the element in row i, column j of the calibration matrix M.
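Formula (2) parametrizes the viewing ray of pixel (u, v) by the free coordinate Z_w. A direct transcription in Python (a sketch; M is the 3×4 matrix of formula (1)):

```python
import numpy as np

def backproject(M, u, v, Zw):
    """Formula (2): the world point on the viewing ray of pixel (u, v) at depth Zw;
    m[i-1, j-1] is the element m_ij of the 3x4 calibration matrix M."""
    m = np.asarray(M, dtype=float)
    k1, k2, k3 = m[0, 1] - u * m[2, 1], m[0, 2] - u * m[2, 2], m[0, 3] - u * m[2, 3]
    k4, k5, k6 = m[1, 1] - v * m[2, 1], m[1, 2] - v * m[2, 2], m[1, 3] - v * m[2, 3]
    k7 = (u * m[2, 0] - m[0, 0]) / (v * m[2, 0] - m[1, 0])
    l1, l2, l3 = u * m[2, 2] - m[0, 2], m[0, 0] - u * m[2, 0], m[0, 3] - u * m[2, 3]
    l4, l5, l6 = v * m[2, 2] - m[1, 2], m[1, 0] - v * m[2, 0], m[1, 3] - v * m[2, 3]
    l7 = (m[0, 1] - u * m[2, 1]) / (m[1, 1] - v * m[2, 1])
    Xw = (Zw * (l7 * l4 - l1) + l3 - l7 * l6) / (l7 * l5 - l2)
    Yw = (Zw * (k2 - k7 * k5) + k3 - k7 * k6) / (k7 * k4 - k1)
    return np.array([Xw, Yw, Zw])
```

Evaluating this at two values of Z_w gives the tangent ray T_q; its intersections with the outer bounding box then come from the slab-method sketch above.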
(6) For the q-th tangent line, project its intersection points P_qh, h = 1, 2, ..., d, with the object's outer bounding box (supposing the tangent line has d intersection points with the box) onto each of the other viewing angles (2, 3, ..., n); the projection is computed according to formula (3), determined by the calibration matrix.

$$
\begin{cases}
u_i = \dfrac{m_{11} X_{wi} + m_{12} Y_{wi} + m_{13} Z_{wi} + m_{14}}{m_{31} X_{wi} + m_{32} Y_{wi} + m_{33} Z_{wi} + m_{34}} \\
v_i = \dfrac{m_{21} X_{wi} + m_{22} Y_{wi} + m_{23} Z_{wi} + m_{24}}{m_{31} X_{wi} + m_{32} Y_{wi} + m_{33} Z_{wi} + m_{34}}
\end{cases} \tag{3}
$$

Formula (3) is the system of equations that, at a given viewing angle, maps the known spatial coordinates of any point in space to the pixel coordinates of its image in the camera. In the formula, $(X_{wi}, Y_{wi}, Z_{wi})$ are the spatial coordinates of the i-th point in space; $(u_i, v_i)$ are the image coordinates of the i-th spatial point at this viewing angle; $m_{ij}$ is the element in row i, column j of the calibration matrix M.
Note: if the position of the camera relative to the world coordinate system changes between viewing angles, then different viewing angles have different calibration matrices M; this must be kept in mind when computing projections (see the sketch below).
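Formula (3) in code is a single homogeneous matrix-vector product; per the note above, each viewing angle must use its own matrix M. A minimal sketch:

```python
import numpy as np

def project(M, Xw):
    """Formula (3): image coordinates (u_i, v_i) of the world point Xw under M."""
    x = np.asarray(M, dtype=float) @ np.append(Xw, 1.0)   # homogeneous projection
    return x[0] / x[2], x[1] / x[2]
```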
(7) Exclude the false tangent-line intersection points. During projection, any tangent-line intersection point whose projection fails to intersect the object's image at even one viewing angle is excluded.
Fig. 5 is a schematic diagram of the cutting section at a certain height v. In the figure, the ellipse represents the cross-section of the measured object, the broken circle is the track of the camera positions, and the dashed rectangle is the cross-section of the object's outer bounding box. Pic1, Pic2 and Pic3 are the three images formed by the same measured object in the camera when the camera is at positions 1, 2 and 3 respectively.
The line segment [p11, p12] is the image in pic1 of the triangular region ∠T11o1T12, that is, the image in pic1 of the part of the object intersected by the plane containing this triangular region. Points p11 and p12 are edge points of the object image. By the imaging principle, the ray p11T11 starts from p11, passes through the optical center o1 and is tangent to the object; suppose the tangent point is o11. The intersections of this ray with the object's outer bounding box are s111 and s112. Likewise, the ray p12T12 starts from p12, passes through o1 and is tangent to the object; suppose the tangent point is o12, and the intersections of this ray with the outer bounding box are s121 and s122.
The line segment [p21, p22] is the image in pic2 of the triangular region ∠T21o2T22, that is, the image in pic2 of the part of the object intersected by the plane containing that region. Points p21 and p22 are edge points of the object image. By the imaging principle, the ray p21T21 starts from p21, passes through the optical center o2 and is tangent to the object; suppose the tangent point is o21, and the intersections of the ray with the outer bounding box are s211 and s212. Likewise, the ray p22T22 starts from p22, passes through o2 and is tangent to the object at tangent point o22, intersecting the outer bounding box at s221 and s222.
The line segment [p31, p32] is the image in pic3 of the triangular region ∠T31o3T32, that is, the image in pic3 of the part of the object intersected by the plane containing that region. Points p31 and p32 are edge points of the object image. By the imaging principle, the ray p31T31 starts from p31, passes through the optical center o3 and is tangent to the object; suppose the tangent point is o31, and the intersections of the ray with the outer bounding box are s311 and s312. Likewise, the ray p32T32 starts from p32, passes through o3 and is tangent to the object at tangent point o32, intersecting the outer bounding box at s321 and s322.
Suppose the intersection of rays p11T11 and p21T21 is i121; of p11T11 and p31T31, i131; of p11T11 and p32T32, i132; of p12T12 and p21T21, i211; of p12T12 and p22T22, i212; of p12T12 and p31T31, i311; of p12T12 and p32T32, i312; of p21T21 and p31T31, i231; of p22T22 and p31T31, i321; and of p22T22 and p32T32, i322.
At viewing angle 1, according to formula (2) we can calculate the set of intersection points of ray p11T11 with the object's outer bounding box, and of ray p12T12 with the box: the line segments [s111, s112] and [s121, s122]. According to formula (3), we project the points of these segments onto pic2 and pic3 and can note the following facts:
1. Every point lying simultaneously in the regions ∠T11o1T12 and ∠T21o2T22, such as point a (located on ray p11T11) and point b (located on ray p12T12), projects into pic2 inside the line segment [p21, p22].
2. Every point lying in the region ∠T11o1T12 but not in the region ∠T21o2T22, such as c (located on ray p11T11) and d and e (located on ray p12T12), projects into pic2 outside the line segment [p21, p22]; their projections are p2c, p2d and p2e respectively.
3. Although points a and b lie simultaneously in the regions ∠T11o1T12 and ∠T21o2T22, they do not lie in the region ∠T31o3T32, so their projections p3a and p3b in pic3 also fall outside the line segment [p31, p32].
4. From the above facts we can further infer that only points lying simultaneously in ∠T11o1T12, ∠T21o2T22 and ∠T31o3T32 project, respectively, into pic1, pic2 and pic3 inside the line segments [p11, p12], [p21, p22] and [p31, p32]. The tangent points of the object are exactly such points.
5. As the number of viewing angles keeps increasing, the points satisfying the above condition can only be the tangent points of the object: for any point not on the measured object there is at least one viewing angle at which its projection falls outside the imaging region of the measured object.
It can be seen from the above that the points on a cutting ray at one viewing angle can be projected onto the images of the other viewing angles, and whether a candidate is a tangent point of the measured object can then be judged from whether its projection intersects the image of the measured object in each view. As soon as a candidate lies outside the imaging region of the measured object at even one viewing angle, it is not a tangent point and is excluded; the remaining candidates continue to be projected to the other viewing angles. When projection to all viewing angles is finished, the candidates that remain must be tangent points.
It should be noted that there may be more than one tangent point on a tangent line: in regions where the surface of the measured object is planar, a single tangent line has several tangent points with the object.
In this way, only the intersection points that intersect the image of the measured object at all viewing angles survive to the end. This intersection point or set of intersection points is exactly the tangent point (tangent point set), and it is painted with the color of the image contour edge point corresponding to the tangent line T_q. A minimal code sketch of this multi-view test is given below.
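A minimal sketch of the elimination of step (7): candidates sampled along the boxed segment of a tangent ray survive only if they project inside the object's imaging region in every view (project, the per-view matrices and the binary masks are as assumed in the earlier sketches).

```python
def surviving_tangent_points(candidates, proj_matrices, object_masks):
    """Keep the candidate 3D points whose projection lands inside the object's
    imaging region (non-zero mask pixel) at every viewing angle."""
    kept = []
    for X in candidates:
        for M, mask in zip(proj_matrices, object_masks):
            u, v = project(M, X)                   # formula (3), sketched above
            iu, iv = int(round(u)), int(round(v))
            if not (0 <= iv < mask.shape[0] and 0 <= iu < mask.shape[1] and mask[iv, iu]):
                break                              # outside one view: not a tangent point
        else:
            kept.append(X)                         # consistent with every viewing angle
    return kept
```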
(8) Repeat steps (6) and (7) until all r tangent lines have been processed.
(9) Repeat steps (5) to (8), processing the edge points at all heights of viewing angle oi in the same way.
(10) Repeat steps (5) to (9), processing the contour images of the other viewing angles (2, 3, ..., n) in the same way.
(11) Superpose all the saved tangent points; this yields the point cloud of the three-dimensional model of the measured object. A sketch of writing out this point cloud follows.
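Step (11) leaves a colored point cloud; the small sketch below writes it out as an ASCII PLY file, a common interchange format (the format choice is ours, not mandated by the patent).

```python
def save_point_cloud_ply(path, points, colors):
    """Write N 3D points and their RGB colors (integers 0-255) as an ASCII PLY file."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```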
The method of the invention can reach the following technical indicators:
Scanning range: 2 × 1.5 × 2.4 m (length × width × height);
Typical scanning time: 10 to 40 seconds (the speed is adjustable to the situation);
Scanning accuracy: 4 mm (single camera); 1 mm (four cameras).
The present invention reconstructs the three-dimensional model on the premise that the contour images of the measured object are known; existing techniques can be used both for camera calibration and for contour edge extraction.

Claims (1)

1. A method for acquiring the shape of a convex-contour object, comprising the following steps:
(1) calibrating the video camera and obtaining its calibration parameter matrix:

$$
M = \begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{pmatrix}
$$
(2) capturing and saving, with the camera, the image sequence image_1 ... image_oi ... image_n of the measured object, one image per viewing angle;
(3) extracting the contour edge of the measured object in the image of each viewing angle;
(4) obtaining and saving the image coordinates and corresponding RGB color values of the points in all the above image sequences;
(5) for a viewing angle oi, oi = 1~n, finding the contour image image_oi of the measured object at this angle; at an image height v, finding all the image edge points S_q, q = 1, 2, ..., r, lying at this height, where r is the number of contour edge points at height v; according to formula (I), obtaining the r tangent lines T_q and all intersection points of each T_q with the outer bounding box of the measured object, and saving them;

$$
\begin{cases}
X_w = Z_w\,\dfrac{l_7 l_4 - l_1}{l_7 l_5 - l_2} + \dfrac{l_3 - l_7 l_6}{l_7 l_5 - l_2} \\
Y_w = Z_w\,\dfrac{k_2 - k_7 k_5}{k_7 k_4 - k_1} + \dfrac{k_3 - k_7 k_6}{k_7 k_4 - k_1} \\
Z_w = Z_w
\end{cases} \tag{I}
$$

where

$$
\begin{aligned}
&k_1 = m_{12} - u\,m_{32}, \quad k_2 = m_{13} - u\,m_{33}, \quad k_3 = m_{14} - u\,m_{34}, \\
&k_4 = m_{22} - v\,m_{32}, \quad k_5 = m_{23} - v\,m_{33}, \quad k_6 = m_{24} - v\,m_{34}, \quad
k_7 = \frac{u\,m_{31} - m_{11}}{v\,m_{31} - m_{21}}; \\
&l_1 = u\,m_{33} - m_{13}, \quad l_2 = m_{11} - u\,m_{31}, \quad l_3 = m_{14} - u\,m_{34}, \\
&l_4 = v\,m_{33} - m_{23}, \quad l_5 = m_{21} - v\,m_{31}, \quad l_6 = m_{24} - v\,m_{34}, \quad
l_7 = \frac{m_{12} - u\,m_{32}}{m_{22} - v\,m_{32}};
\end{aligned}
$$

here $(X_{wg}, Y_{wg}, Z_{wg})$ are the coordinates of the g-th point in space, $(u_g, v_g)$ are the image coordinates of the g-th point in the camera, and $m_{ij}$ is the element in row i, column j of the calibration matrix M;
(6) for the q-th tangent line, projecting its intersection points P_qh, h = 1, 2, ..., d, with the object's outer bounding box (supposing the tangent line has d intersection points with the box) onto each of the other viewing angles, the projection being computed according to formula (II), determined by the calibration matrix;

$$
\begin{cases}
u_i = \dfrac{m_{11} X_{wi} + m_{12} Y_{wi} + m_{13} Z_{wi} + m_{14}}{m_{31} X_{wi} + m_{32} Y_{wi} + m_{33} Z_{wi} + m_{34}} \\
v_i = \dfrac{m_{21} X_{wi} + m_{22} Y_{wi} + m_{23} Z_{wi} + m_{24}}{m_{31} X_{wi} + m_{32} Y_{wi} + m_{33} Z_{wi} + m_{34}}
\end{cases} \tag{II}
$$

where $(X_{wi}, Y_{wi}, Z_{wi})$ are the spatial coordinates of the i-th point in space, $(u_i, v_i)$ are the image coordinates of the i-th spatial point at this viewing angle, and $m_{ij}$ is the element in row i, column j of the calibration matrix M;
(7) excluding the false tangent-line intersection points;
(8) repeating steps (6) and (7) until all r tangent lines have been processed;
(9) repeating steps (5) to (8), processing the edge points at all heights of viewing angle oi in the same way;
(10) repeating steps (5) to (9), processing the contour images of the other viewing angles (2, 3, ..., n) in the same way;
(11) superposing all the saved tangent points to obtain the point cloud of the three-dimensional model of the measured object.
CNA2008100471841A 2008-03-28 2008-03-28 Method for acquiring cam contour object shape Pending CN101271590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008100471841A CN101271590A (en) 2008-03-28 2008-03-28 Method for acquiring cam contour object shape


Publications (1)

Publication Number Publication Date
CN101271590A true CN101271590A (en) 2008-09-24

Family

ID=40005541

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008100471841A Pending CN101271590A (en) 2008-03-28 2008-03-28 Method for acquiring cam contour object shape

Country Status (1)

Country Link
CN (1) CN101271590A (en)


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105025193A (en) * 2014-04-29 2015-11-04 钰创科技股份有限公司 Portable stereo scanner and method for generating stereo scanning result of corresponding object
CN105025193B (en) * 2014-04-29 2020-02-07 钰立微电子股份有限公司 Portable stereo scanner and method for generating stereo scanning result of corresponding object
CN107133548A (en) * 2016-02-27 2017-09-05 林项武 The acquisition device and method and its application method of a part or whole part human body contour outline data
CN107194983A (en) * 2017-05-16 2017-09-22 华中科技大学 A kind of three-dimensional visualization method and system based on a cloud and image data
CN107194983B (en) * 2017-05-16 2018-03-09 华中科技大学 A kind of three-dimensional visualization method and system based on a cloud and image data
CN108038862A (en) * 2017-12-11 2018-05-15 深圳市图智能科技有限公司 A kind of Interactive medical image intelligent scissor modeling method
CN110087055A (en) * 2018-01-25 2019-08-02 台湾东电化股份有限公司 Vehicle and its three-dimension object Information Acquisition System and three-dimension object information acquisition method
CN110087055B (en) * 2018-01-25 2022-03-29 台湾东电化股份有限公司 Vehicle, three-dimensional object information acquisition system thereof and three-dimensional object information acquisition method
CN109029253A (en) * 2018-06-29 2018-12-18 南京阿凡达机器人科技有限公司 A kind of package volume measuring method, system, storage medium and mobile terminal
CN111429581A (en) * 2020-03-12 2020-07-17 网易(杭州)网络有限公司 Method and device for determining outline of game model and adding special effect of game
CN111429581B (en) * 2020-03-12 2024-01-26 网易(杭州)网络有限公司 Method and device for determining outline of game model and adding special effects of game
CN113538449A (en) * 2020-04-20 2021-10-22 顺丰科技有限公司 Image correction method, device, server and storage medium
CN112016210A (en) * 2020-08-28 2020-12-01 中国科学院、水利部成都山地灾害与环境研究所 Method for searching impact contact points of hard objects with irregular shapes and measuring impact force and method for measuring impact force of large rocks in torrential flood debris flow
CN112016210B (en) * 2020-08-28 2023-02-28 中国科学院、水利部成都山地灾害与环境研究所 Impact contact point searching and impact force measuring method for hard objects with irregular shapes and impact force measuring method for large rocks in torrential flood debris flow


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20080924