CN101853528B - Hand-held three-dimensional surface information extraction method and extractor thereof - Google Patents


Publication number: CN101853528B
Authority: CN (China)
Application number: CN2010101738490A
Other languages: Chinese (zh)
Other versions: CN101853528A (English)
Inventors: 马孜, 聂建辉, 胡英
Original assignee: 沈阳雅克科技有限公司, 马孜
Application filed by 沈阳雅克科技有限公司 and 马孜; priority to CN2010101738490A
Publication of application CN101853528A; application granted; publication of granted patent CN101853528B

Abstract

The invention relates to a hand-held three-dimensional surface information extraction method and an extractor thereof. The method comprises the following steps: 1) randomly arranging a plurality of circular markers on the scanned object, symmetrically arranging two cameras on the left and right sides of a line laser projector, and providing an image acquisition and processing card with two parallel working threads; 2) using one working thread of the card to acquire the left-eye and right-eye two-dimensional surface information captured by the left and right cameras; 3) using the other working thread to extract the markers from the left-eye and right-eye surface information by an elliptical-object image processing method, perform stereoscopic matching, and reconstruct the three-dimensional coordinates and topological structure of the markers; computing, from the reconstructed marker topology, the registration matrix that maps the scan data of the current pose into the whole scan data; extracting the laser stripe points from the left-eye and right-eye surface information, reconstructing their three-dimensional coordinates, fusing the left-eye and right-eye coordinates, and registering the fused laser stripe points into the whole scan data of the object using the registration matrix of the current pose; and 4) repeating steps 2) and 3) until all the scan data of the object to be scanned have been acquired.

Description

Hand-held three-dimensional surface information extraction method and extractor thereof

Technical field

The present invention relates to a method and device for obtaining three-dimensional information of an object's profile, and in particular to a hand-held three-dimensional surface information extraction method and extractor.

Background art

Digitization of physical profiles is a new product-design support technique that has arisen with the continuous development of CAD/CAE technology. By digitizing a physical prototype, the advantages of digitization and computers can be brought into full play to improve product design and manufacturing and raise efficiency. In recent years, physical profile digitization has been widely applied in many fields. In product research and development, reverse engineering can recover the original design intent and mechanism through digitization and improve upon the original design, shortening the R&D cycle. In manufacturing, it bridges hand-made models and computer technology, combining the intuitiveness and easy modifiability of manual models with the power of computer-aided manufacturing. In police criminal investigation, it can extract three-dimensional information of crime scenes and improve the efficiency of solving cases. In archaeology, it can convert the appearance of rare cultural relics into computer models that can be exhibited and permanently preserved.

Computer vision is currently an important means of physical profile digitization. Because of the camera's limited field of view and self-occlusion of the object, the camera must scan the object from different positions and attitudes. The traditional approach fixes the scanning head on a positioning device, such as a high-precision motion mechanism, a flexible arm, or an electromagnetic tracker, and uses the device to provide the motion information of the scanning head, but the cost is very high. Some current products also need additional ambient lighting and optical filters to strengthen environmental adaptability; however, with a fixed ambient brightness the laser stripe extraction is affected by the surface material of the measured object and fails when the object surface is dark, while the filters distort the image and degrade measurement accuracy.

At present the area-structured-light method is more commonly used: a projector projects a structured pattern onto the measured surface, a CCD camera receives it, and the computer decodes the received image to obtain the angle of each projected ray, from which the depth of the surface is computed by laser triangulation. Although area structured light yields many three-dimensional points in the field of view in a single measurement, the decoding process cannot run in real time, so the camera must be fixed on a tripod and kept stationary during measurement, which greatly limits flexibility. Moreover, because of occlusions on the object surface, the camera attitudes needed for scanning are difficult to determine in advance, so scanning efficiency is very low. In principle, coded structured light also suffers from measurement discreteness: each grating stripe carries one discrete value, so only a limited number of stripes can be encoded, limiting measurement accuracy.
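The depth computation by laser triangulation mentioned above reduces, for a calibrated setup, to similar triangles. A minimal sketch follows; the baseline, focal length, and disparity values are illustrative assumptions, not figures from the patent:

```python
# Minimal sketch of triangulation depth recovery: with a known baseline b
# between the two viewpoints, focal length f (in pixels), and the image
# disparity d of the laser point, depth follows from similar triangles.
def triangulate_depth(baseline_mm, focal_px, disparity_px):
    """Depth (mm) in a rectified triangulation setup: z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

z = triangulate_depth(baseline_mm=250.0, focal_px=1000.0, disparity_px=50.0)
print(z)  # 5000.0 mm
```

Smaller disparities correspond to larger depths, which is why a wide baseline (such as the 25 cm camera separation described later) improves depth resolution.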

Summary of the invention

In view of the above problems, the object of the present invention is to provide a real-time and accurate hand-held three-dimensional surface information extraction method and an extractor thereof.

To achieve the above object, the present invention adopts the following technical scheme: a hand-held three-dimensional surface information extraction method, comprising the steps of: 1) randomly placing a number of circular markers of several colors and types on the scanned object, symmetrically arranging two cameras on the left and right sides of a line laser projector, and providing an image acquisition and processing card with two parallel working threads; 2) using one working thread of the card to acquire the left-eye and right-eye two-dimensional profile information captured by the left and right cameras; 3) using the other working thread to extract the markers from the left-eye and right-eye two-dimensional profile information by an elliptical-object image processing method, perform stereoscopic matching, and reconstruct the three-dimensional coordinates and topological structure of the markers; computing, from the reconstructed topology of the three-dimensional markers, the registration matrix of the scan data of the current attitude within the whole scan data; then extracting the laser stripe points from the left-eye and right-eye two-dimensional profile information, reconstructing their three-dimensional coordinates, and fusing the two sets of laser stripe coordinates; then registering the fused laser stripe points into the whole scan data of the object to be scanned using the registration matrix of the current attitude; 4) repeating steps 2) to 3) until all the scan data of the object to be scanned have been obtained.

In step 3), the markers are extracted from the left-eye and right-eye two-dimensional profile information as follows: ① apply the SUSAN operator to each image to extract edges; ② identify the different regions in each image by connected-region labelling; ③ preliminarily determine the possible elliptical regions; ④ fit ellipses by least squares to the candidate regions determined in step ③ and thereby confirm the elliptical markers.

In step ③, the possible elliptical regions are determined as follows:

A. Traverse the pixels of the left-eye and right-eye two-dimensional profile information and find the minimum rectangle enclosing each marked region,

[StRow_i, StCol_i] → [EndRow_i, EndCol_i], recording the area Area_i of each rectangle in pixels.

B. Remove any identified region satisfying one of the following conditions:

a) the rectangle's aspect ratio is unbalanced:

(EndCol_i − StCol_i) / (EndRow_i − StRow_i) > Tol_max or (EndCol_i − StCol_i) / (EndRow_i − StRow_i) < Tol_min;

where StRow_i is the first row of the i-th region, StCol_i its first column, EndRow_i its last row, EndCol_i its last column, and Tol_max and Tol_min are the maximum and minimum tolerances on the aspect ratio;

b) the area Area_i of the i-th region is too large or too small:

Area_i > Tol_max or Area_i < Tol_min.

Using the minimum rectangle [StRow_i, StCol_i] → [EndRow_i, EndCol_i] enclosing each marked region, find the edge Edge_i of each marked region.
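Steps A and B above can be sketched as a simple bounding-box filter. The region record layout and the tolerance values below are assumptions for illustration, not parameters given by the patent:

```python
# Hedged sketch of step B: discard labelled regions whose bounding box has an
# unbalanced aspect ratio or an out-of-range pixel area. The dict layout and
# the tolerance defaults are illustrative assumptions.
def filter_candidate_regions(regions, tol_min=0.5, tol_max=2.0,
                             area_min=20, area_max=5000):
    """regions: list of dicts with StRow, StCol, EndRow, EndCol, Area."""
    kept = []
    for r in regions:
        width = r["EndCol"] - r["StCol"]
        height = r["EndRow"] - r["StRow"]
        if height == 0:
            continue
        ratio = width / height
        if ratio > tol_max or ratio < tol_min:
            continue  # condition a): aspect ratio out of balance
        if r["Area"] > area_max or r["Area"] < area_min:
            continue  # condition b): region area too large or too small
        kept.append(r)
    return kept
```

A circular marker viewed obliquely projects as an ellipse with moderate eccentricity, which is why near-square bounding boxes survive the filter while elongated streaks and speckle are discarded cheaply before the more expensive ellipse fit.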

In step ④, ellipse fitting proceeds as follows:

A. For the edge points Edge_i of each identified region, fit an ellipse Ellip(C_x, C_y, A_l, A_s, θ), where C_x, C_y are the pixel coordinates of the ellipse centre, A_l and A_s are the lengths of the major and minor axes, and θ is the ellipse orientation;

B. Using the fitted ellipse parameters C_x, C_y, A_l, A_s, θ, compute the ellipse area Area'_i and compare it with the area Area_i obtained in step ③A; if the two areas differ by more than a tolerance, remove the ellipse;

C. Compute the mean distance from the edge points to the fitted ellipse to obtain the average fitting error MeanErr; if MeanErr > Tol, remove the ellipse, where Tol is the tolerance on the average error.
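Step ④ can be sketched with a plain least-squares conic fit. This is a generic algebraic formulation (not necessarily the exact one used in the patent), with the mean algebraic residual standing in for the MeanErr check of step C:

```python
import numpy as np

def fit_ellipse_conic(x, y):
    """Least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to edge-point coordinates x, y (1-D arrays)."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coeffs

def mean_fit_error(coeffs, x, y):
    """Mean algebraic residual |a*x^2 + b*x*y + c*y^2 + d*x + e*y - 1|,
    a cheap stand-in for the geometric point-to-ellipse distance."""
    a, b, c, d, e = coeffs
    return float(np.mean(np.abs(a * x * x + b * x * y + c * y * y
                                + d * x + e * y - 1.0)))
```

In practice OpenCV's `cv2.fitEllipse` returns the centre, axis lengths, and orientation directly, which maps onto the Ellip(C_x, C_y, A_l, A_s, θ) parameterization the patent describes.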

In step 3), the stereoscopic matching of the markers proceeds as follows: ① extract the elliptical markers from the left-eye and right-eye two-dimensional profile information and fit the ellipse parameters; ② use the fitted ellipse parameters to estimate the positions of the elliptical markers under the left and right cameras; ③ transform the marker positions of one eye into the coordinate frame of the other camera through the pre-calibrated transformation between the two cameras; ④ for each marker in one image, compute its epipolar line equation in the other image and search for elliptical markers near that epipolar line to establish initial matches, denoting by Si the set of candidate correspondences in the other image of any elliptical marker Ei; ⑤ for each elliptical marker Ei, traverse every point in the set Si and take as its correspondence the point whose spatial position differs least from Ei and is within the error tolerance, completing the matching of the elliptical markers.
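The epipolar search of step ④ can be sketched with a fundamental matrix and a point-to-line distance test. The matrix F in the test below is the ideal rectified-stereo form, used purely as an assumed example; the patent's calibration would supply the real one:

```python
import numpy as np

def epipolar_candidates(p_left, pts_right, F, tol_px=2.0):
    """Return right-image marker centres lying within tol_px pixels of the
    epipolar line of p_left. F is the (assumed pre-calibrated) 3x3
    fundamental matrix mapping left-image points to right-image lines."""
    l = F @ np.array([p_left[0], p_left[1], 1.0])  # line a*x + b*y + c = 0
    a, b, c = l
    norm = np.hypot(a, b)
    out = []
    for (x, y) in pts_right:
        if abs(a * x + b * y + c) / norm <= tol_px:
            out.append((x, y))
    return out
```

Restricting candidates to a narrow band around the epipolar line reduces the marker correspondence search from all pairs to a handful, after which the spatial-difference test of step ⑤ disambiguates.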

In step 3), the laser stripe points are extracted as follows: ① smooth the image to filter noise; ② traverse each row of the image, take the brightness maximum of the row as the laser stripe candidate of that row, and denote the candidate of row i by Li; ③ starting from the first row, compute the distance between the laser stripe candidates of adjacent rows; if the distance exceeds a threshold, save the currently tracked stripe segment and start tracking a new segment from the next row, repeating until all rows of the image have been traversed; ④ take the longest of all stripe segments, which must belong to the laser stripe, and denote it LP1; ⑤ for all remaining segments, compute the distances from their endpoints to the endpoints of LP1, and choose the segment with the minimum distance below a threshold as the next stripe segment LP2; ⑥ likewise, for all remaining segments compute the distances from their endpoints to the endpoints of LP1 and LP2, and choose the closest segment below the threshold as the next segment LP3; ⑦ return to step ② until all laser stripe points have been found.
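Steps ② and ③ above can be sketched as a per-row brightness maximum followed by jump-based segmentation; the jump threshold below is an assumed value:

```python
import numpy as np

def stripe_candidates(img):
    """Per image row, take the brightest column as the stripe candidate
    (step 2); returns a list of (row, col) pairs."""
    return [(r, int(np.argmax(img[r]))) for r in range(img.shape[0])]

def split_segments(cands, max_jump=3):
    """Break the candidate list into stripe segments wherever the column
    position jumps by more than max_jump between adjacent rows (step 3)."""
    segs, cur = [], [cands[0]]
    for prev, pt in zip(cands, cands[1:]):
        if abs(pt[1] - prev[1]) > max_jump:
            segs.append(cur)
            cur = []
        cur.append(pt)
    segs.append(cur)
    return segs
```

Chaining the resulting segments by endpoint distance, as in steps ④ to ⑥, then reconnects a stripe broken by occlusion or specular dropout.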

In step 3), the laser stripe points reconstructed under the left and right camera coordinate systems are fused as follows: ① transform the laser stripe points reconstructed under the coordinate system of one camera into the coordinate system of the other camera through the pre-calibrated transition matrix; ② for each reconstructed data point under one camera, find its closest point under the other camera; if the distance between the two points is less than the scanner's ultimate resolution, add both points to the final scan data; otherwise compute the mean of the two coordinates and push that mean into the final scan data as the fused laser stripe point.
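The fusion step can be sketched as a nearest-neighbour merge. Note that the translated claim is ambiguous about which branch averages; this sketch takes the common reading that pairs closer than the scanner resolution are duplicates of the same surface point and are averaged, while distant points are kept separately — an interpretation for illustration, not the patent's verbatim rule:

```python
import numpy as np

def fuse_left_right(pts_a, pts_b, resolution=0.5):
    """pts_a, pts_b: (N,3) arrays of stripe points already expressed in one
    common camera frame. Near-duplicate cross pairs (closer than the assumed
    scanner resolution) are averaged; everything else is kept as-is."""
    fused, used_b = [], set()
    for p in pts_a:
        d = np.linalg.norm(pts_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < resolution and j not in used_b:
            fused.append((p + pts_b[j]) / 2.0)  # duplicate: average it
            used_b.add(j)
        else:
            fused.append(p)                      # no near partner: keep
    for j, q in enumerate(pts_b):
        if j not in used_b:
            fused.append(q)
    return np.array(fused)
```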

The fused laser stripe coordinates of step 3) are registered into the whole scan data of the scanned object as follows: ① define a container SignPtSet holding the reconstructed three-dimensional marker coordinates; a container KNNId recording the K-neighborhood indices of every point in SignPtSet; and a container KNNInfo recording the topology information of every point in SignPtSet and each point of its K-neighborhood; denote by P the marker point set obtained from each single-frame scan, and by Coord_first the coordinate system of one camera at the first scan, taken as the global coordinate system; ② push all markers of the point set P reconstructed from the first frame into SignPtSet; ③ compute the neighborhood information of every point in P, and push the K-neighborhood indices and topology information of every point into KNNId and KNNInfo respectively; ④ push the fused laser stripe points of the first frame into the whole scan data, completing the processing of the first frame from two-dimensional profile information to three-dimensional surface information; ⑤ for each current frame after the first, reconstruct the marker point set P and compute the topology information and K-neighborhood indices of each marker in P; ⑥ using the topology information and K-neighborhood indices of each marker, find from SignPtSet the points with the same topological structure and form point pairs; ⑦ if the number of point pairs is less than 2, return to step ⑥; otherwise use the found point pairs to solve the registration matrix T from the current coordinate system to the global coordinate system Coord_first; ⑧ transform the marker point set P with the registration matrix T, denoting the transformed set P1; ⑨ traverse every point of P1: if other markers exist within its R-ball neighborhood, compute the mean of these markers to replace the original marker; otherwise push the point into SignPtSet and push its topology information and K-neighborhood index into KNNInfo and KNNId respectively; ⑩ transform the fused laser stripe points with the registration matrix T and register the fused stripe points of the current frame into the whole scan data, completing the processing of the current frame from two-dimensional profile information to three-dimensional surface information.
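The registration matrix of step ⑦ can be solved from matched marker pairs with the standard SVD (Kabsch) method. This is a generic least-squares rigid fit, offered as a sketch rather than the patent's exact algorithm:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t, solved
    by the SVD (Kabsch) method from matched 3-D marker pairs."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)      # centroids
    H = (src - cs).T @ (dst - cd)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

With the recovered (R, t) assembled into the matrix T, every fused stripe point of the current frame can be mapped into the global frame Coord_first as described in step ⑩.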

A hand-held three-dimensional surface information extractor, characterized by comprising: a frame; a line laser projector arranged on the frame; two cameras symmetrically arranged on the frame on both sides of the line laser projector; an image acquisition and processing card electrically connected to the left and right cameras; a communication module electrically connected to the image acquisition and processing card; a computer connected to the communication module by a data line; and a power module connected respectively to the left and right cameras, the image acquisition and processing card, and the communication module. The image acquisition and processing card comprises: an image acquisition module for acquiring the left-eye and right-eye two-dimensional profile information captured by the two cameras; an image processing module that extracts the elliptical markers from the left-eye and right-eye two-dimensional profile information by the elliptical-object image processing method; a stereoscopic matching module that matches the elliptical markers extracted by the image processing module in the left-eye and right-eye two-dimensional profile information, reconstructs their three-dimensional coordinates and topological structure, and computes from the reconstructed marker topology the registration matrix that transforms the scan data of the current attitude into the whole scan data; a laser stripe extraction module that extracts the laser stripe points from the left-eye and right-eye two-dimensional profile information and reconstructs their three-dimensional coordinates; and a fusion and registration module that fuses and registers, using the registration matrix computed by the stereoscopic matching module, the three-dimensional coordinates of the laser stripe points reconstructed by the laser stripe extraction module, to obtain the three-dimensional surface information of the object to be scanned.

It also comprises two ambient-light projectors, each symmetrically arranged between one of the two cameras and the line laser projector.

The left and right cameras are 20 cm to 30 cm from the scanned object, and the distance between their optical centres is 25 cm; the optical axis of each camera forms a 20° angle with the laser stripe projection direction of the line laser projector.

Owing to the above technical scheme, the present invention has the following advantages. 1. When the image processing module extracts markers with the elliptical-object image processing method, it first labels the connected regions and then extracts elliptical boundary points from the region edges; because the area criterion removes the outer elliptical boundaries, only the inner boundaries take part in subsequent computation, effectively improving computational efficiency. 2. Because a line laser projector is adopted, a single-line laser stripe is projected onto the marked surface of the scanned object, so high-accuracy three-dimensional profile information can be obtained. 3. Because the two cameras are symmetrically arranged on the left and right sides of the line laser projector, the left-eye and right-eye two-dimensional profile information they capture is very similar, strongly guaranteeing an accurate three-dimensional surface reconstruction of the object. 4. The invention can work under natural light, overcoming the drawback of some existing products that require ambient-light support. 5. Because a plurality of markers placed on the surface of the object are used for positioning, the speed of data registration is improved. 6. Because the cameras remain 20 cm to 30 cm from the scanned object during shooting, and the distance between the optical centres of the two cameras is 25 cm, the accuracy of the acquired left-eye and right-eye two-dimensional profile information is highest. 7. Because the optical axes of the two cameras form a 20° angle with the laser stripe projection direction of the line laser projector, the two cameras share the largest possible common field of view. 8. Because the communication module is a 1394 (FireWire) card, the requirement of high-efficiency transfer is satisfied. The invention can be applied in all fields where three-dimensional profile information of objects is needed, for example whole-vehicle scanning, aircraft shape scanning, mould making, consumer goods manufacturing, police criminal investigation, and cultural relic digitization.

Description of drawings

Fig. 1 is a schematic plan view of the structure of the extractor of the present invention;

Fig. 2 is a structural schematic diagram of the extractor of the present invention;

Fig. 3 is a block diagram of the image acquisition and processing card in the extractor of the present invention;

Fig. 4 is a workflow diagram of the image acquisition and processing card in the extractor of the present invention;

Fig. 5 is a schematic diagram of the edges of the left-eye and right-eye two-dimensional profile information extracted by the image processing module in the extractor of the present invention;

Fig. 6 is a schematic diagram of the regions of the left-eye and right-eye two-dimensional profile information labelled by the image processing module in the extractor of the present invention;

Fig. 7 is a schematic diagram of the elliptical regions preliminarily determined by the image processing module in the extractor of the present invention;

Fig. 8 is a schematic diagram of the ellipse fitting results of the image processing module in the extractor of the present invention;

Fig. 9 is a schematic diagram of typical markers fitted by the image processing module in the extractor of the present invention;

Fig. 10 is a schematic diagram of the epipolar matching of the stereoscopic matching module in the extractor of the present invention.

Embodiment

The present invention is described in detail below with reference to the drawings and embodiments.

As shown in Fig. 1 and Fig. 2, the extractor of the present invention comprises a frame 1 on which a line laser projector 2 is arranged, with two cameras 3 symmetrically arranged on the frame 1 on both sides of the line laser projector 2. The two cameras 3 are electrically connected to an image acquisition and processing card 4, which is electrically connected to a communication module 5. The communication module 5 is connected to a computer 6 by a data line, and the two cameras 3, the image acquisition and processing card 4, and the communication module 5 are each connected to a power module 7. The line laser projector 2 emits laser light to scan the object to be scanned; the cameras 3 capture the light reflected from the object, i.e. the two-dimensional profile information of the object. The image acquisition and processing card 4, electrically connected to the two cameras 3, acquires this two-dimensional profile information and simultaneously processes it into three-dimensional surface information, which is transmitted to the computer 6 through the communication module 5 and the data line; the computer 6 synchronously displays the camera images and the scan images processed by the card 4. The power module 7 supplies power to the two cameras 3, the image acquisition and processing card 4, and the communication module 5. In this embodiment, the communication module 5 is a 1394 (FireWire) card.

The line laser projector 2 of the extractor projects a single-line laser stripe onto the surface of the object to be scanned, because a single-line stripe can satisfy the requirements of high efficiency and high precision. The projection direction of the line laser projector 2 points toward the object to be scanned.

The two cameras 3 of the extractor capture the two-dimensional profile information of the object to be scanned; their symmetric arrangement makes the image positions of the laser stripe in the two cameras close to each other. The image taken by a single camera 3 can only provide two-dimensional profile information; the two cameras 3 imitate human binocular positioning, capturing the left-eye and right-eye two-dimensional profile information of the scanned surface and thereby obtaining the three-dimensional surface information of the object. When the cameras 3 are 20 cm to 30 cm from the scanned object and the distance between their optical centres is 25 cm, the accuracy of the acquired left-eye and right-eye two-dimensional profile information is highest. The optical axes of the two cameras 3 form a 20° angle with the laser stripe projection direction of the line laser projector 2, giving the two cameras the largest possible common field of view. In this embodiment, the cameras 3 may be CCD cameras, or any other imaging device with a resolution of no less than about 1.3 megapixels.

To improve the speed of data registration, the markers used in the present invention are circular markers of different colors, of both coded and non-coded types. The markers are stuck randomly on or around the object to be scanned, which guarantees the uniqueness of the marker topology; topology here refers to the colors and types of a marker's neighboring points and the distances to those neighbors (detailed later).

As shown in Figures 3 and 4, the image acquisition and processing card 4 of the extractor of the present invention runs two parallel working threads. One thread is the image capture module 41, which acquires the left- and right-eye two-dimensional surface information captured by the left and right cameras 3. The other thread comprises an image processing module 42, a stereoscopic vision matching module 43, a laser stripe extraction module 44, and a fusion and registration module 45. The image capture module 41 acquires the left- and right-eye two-dimensional surface information of the scanned object surface captured by the cameras 3. The image processing module 42 extracts the marker points from the left- and right-eye two-dimensional surface information with an image processing method for elliptical objects; the extracted markers are elliptical marker points. The stereoscopic vision matching module 43 matches the elliptical marker points extracted by the image processing module 42 between the left- and right-eye two-dimensional surface information, reconstructs the three-dimensional coordinates and topological structure of the markers, and finally uses the reconstructed topological structure of the three-dimensional markers to compute the registration matrix T that transforms the scan data under the current pose into the overall scan data. The laser stripe extraction module 44 extracts laser stripe points from the left- and right-eye two-dimensional surface information captured by the two cameras 3 and reconstructs their three-dimensional coordinates. The fusion and registration module 45 performs left-right fusion on the two sets of three-dimensional laser stripe coordinates, then uses the registration matrix under the current pose to register the fused laser stripe coordinates into the overall scan data of the scanned object, yielding the three-dimensional point cloud of the object to be scanned, i.e. its three-dimensional surface information.
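The two-thread division of labor described above, one thread capturing frames while the other processes them, can be sketched as a producer/consumer pair. The queue, the function names, and the frame format below are illustrative assumptions, not details from the patent:

```python
import queue
import threading

def acquire_frames(frame_queue, n_frames, grab):
    """Thread 1 (capture module): grab synchronized left/right frames and hand them off."""
    for _ in range(n_frames):
        frame_queue.put(grab())      # (left_image, right_image) tuple
    frame_queue.put(None)            # sentinel: acquisition finished

def process_frames(frame_queue, results):
    """Thread 2 (processing modules): marker extraction, matching, stripe reconstruction."""
    while True:
        frames = frame_queue.get()
        if frames is None:
            break
        left, right = frames
        results.append(("processed", left, right))  # placeholder for the real pipeline

frames_in = queue.Queue(maxsize=4)   # small buffer decouples capture from processing
out = []
fake_grab = lambda: ("L", "R")       # stands in for the camera capture call
t1 = threading.Thread(target=acquire_frames, args=(frames_in, 3, fake_grab))
t2 = threading.Thread(target=process_frames, args=(frames_in, out))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(out))  # 3 frames processed
```

The bounded queue lets capture run ahead of processing by a few frames without unbounded memory growth, which matches the real-time intent of the two-thread design.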

The image processing module 42 of the present invention processes the left- and right-eye two-dimensional surface information with the image processing method for elliptical objects as follows:

1. Edge extraction: among the various edge extraction operators, the SUSAN operator is fast and requires few parameter settings, so it meets the real-time requirement. The present invention therefore applies the SUSAN operator to the left- and right-eye two-dimensional surface information for edge extraction; the extracted edges are shown in Figure 5.

2. Image region labeling: the different regions in the left- and right-eye two-dimensional surface information are identified by connected-region analysis; the labeling result is shown in Figure 6.

3. Preliminary determination of possible elliptical regions, as shown in Figure 7. The elliptical regions are determined as follows:

A. Traverse the pixels of the left- and right-eye two-dimensional surface information and, in the process, find the minimal rectangle enclosing each labeled region:

[StRow_i, StCol_i] → [EndRow_i, EndCol_i],

and record the area Area_i of each rectangle, expressed as a pixel count.

B. Remove every labeled region that satisfies one of the following conditions:

a) The rectangle's aspect ratio is out of proportion:

(EndCol_i − StCol_i) / (EndRow_i − StRow_i) > Tol_max, or (EndCol_i − StCol_i) / (EndRow_i − StRow_i) < Tol_min

In the formula above, the subscript i denotes the i-th region, St denotes start, End denotes end, Row denotes row, and Col denotes column. StRow_i is the first row of the i-th region, StCol_i its first column, EndRow_i its last row, and EndCol_i its last column. Tol_max and Tol_min are the maximum and minimum tolerances on the aspect ratio.

b) The area Area_i of the i-th region is too large or too small:

Area_i > Tol_max or Area_i < Tol_min

Using the minimal rectangle [StRow_i, StCol_i] → [EndRow_i, EndCol_i] enclosing each labeled region, find the edge Edge_i of each labeled region.
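The two rejection criteria above reduce to a simple per-region predicate on the bounding rectangle. The tolerance values in this sketch are illustrative assumptions, not values given in the patent:

```python
def keep_region(st_row, st_col, end_row, end_col, area,
                tol_ar_min=0.5, tol_ar_max=2.0,
                tol_area_min=30, tol_area_max=5000):
    """Apply criteria a) and b) to one labeled region's bounding rectangle.
    Returns False when the region should be removed; all tolerances are
    illustrative placeholders."""
    height = end_row - st_row
    width = end_col - st_col
    if height == 0:
        return False
    aspect = width / height
    if aspect > tol_ar_max or aspect < tol_ar_min:    # criterion a): unbalanced aspect ratio
        return False
    if area > tol_area_max or area < tol_area_min:    # criterion b): area too large or small
        return False
    return True

print(keep_region(10, 10, 30, 30, 300))   # roughly square, moderate area -> True
print(keep_region(10, 10, 12, 80, 100))   # extreme aspect ratio -> False
```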

4. Fit ellipses directly to the candidate elliptical regions determined in step 3 by the least-squares method, and further confirm the ellipses, which serve as the marker points. The final ellipse fitting result is shown in Figure 8. The ellipse fitting steps are as follows:

A. For the edge points Edge_i of each labeled region, fit an ellipse Ellip(C_x, C_y, A_l, A_s, θ), where (C_x, C_y) is the pixel coordinate of the ellipse center, A_l and A_s are the lengths of the long and short axes, and θ is the ellipse orientation.

B. Using the fitted ellipse parameters C_x, C_y, A_l, A_s, θ, compute the ellipse area Area′_i and compare it with the area Area_i obtained in step 3.A; if Area′_i deviates from Area_i by more than a tolerance, remove this ellipse.

C. Compute the mean distance from the edge points to the fitted ellipse to obtain the average fitting error MeanErr; if MeanErr > Tol, remove this ellipse, where Tol is the tolerance on the average error.
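A minimal sketch of the least-squares fit in step A, using a direct conic fit (the patent does not specify the solver, so this is one standard choice): edge points are fitted to the conic A·x² + B·xy + C·y² + D·x + E·y = 1, and the center (C_x, C_y) follows from setting the conic's gradient to zero. Recovering A_l, A_s, θ and the checks of steps B and C are omitted for brevity:

```python
import numpy as np

def fit_ellipse(xs, ys):
    """Least-squares conic fit; returns the conic coefficients and the center."""
    M = np.column_stack([xs**2, xs*ys, ys**2, xs, ys])
    coef, *_ = np.linalg.lstsq(M, np.ones_like(xs), rcond=None)
    A, B, C, D, E = coef
    # center: where the gradient of the conic vanishes
    cx, cy = np.linalg.solve([[2*A, B], [B, 2*C]], [-D, -E])
    return coef, (cx, cy)

# synthetic edge points on an axis-aligned ellipse centered at (3, 5)
t = np.linspace(0, 2*np.pi, 50)
xs, ys = 3 + 4*np.cos(t), 5 + 2*np.sin(t)
coef, (cx, cy) = fit_ellipse(xs, ys)
print(round(cx, 3), round(cy, 3))  # close to 3.0 and 5.0
```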

It should be pointed out that, when extracting the ellipse boundary points, the present invention does not adopt a traditional boundary-tracking algorithm; instead it first labels the connected regions and then takes the edge points of each connected region as the ellipse boundary points. This has the following advantage: in actual scanning, to increase the contrast between the markers and their surroundings and to reduce the influence of ambient light on ellipse extraction, circular markers with the pattern shown in Figure 4 are often used. A traditional edge-tracking algorithm would yield two boundaries C1 and C2, whereas only the circle forming C1 is needed for localization; the redundant boundary increases the computational load, and the fitting result is shown at (a) in Figure 9. With the region connectivity adopted in the present invention, the white region A1 and the black region A2 shown at (b) in Figure 10 are formed. Because region A2 contains very few pixels (a small area), it is removed by area criterion b) in step 3.B, which eliminates the outer ellipse boundary and ensures that only the inner boundary takes part in the subsequent computation, improving efficiency. This is essential for real-time computation.

The stereoscopic vision matching module 43 of the present invention matches the elliptical marker points extracted by the image processing module 42 between the left- and right-eye two-dimensional surface information, reconstructs the three-dimensional coordinates and topological structure of the markers, and then uses that topological structure to compute the registration matrix T. Corresponding-point matching is the core problem in stereoscopic vision, and its most important principle is the epipolar constraint: the candidates in one image corresponding to a point in the other image are confined to a straight line, which greatly reduces the search range. The epipolar constraint alone, however, cannot guarantee that a match is unique. To solve this problem, the present invention exploits the imaging characteristics of circular targets to achieve unique matching of the corresponding points between the left- and right-eye two-dimensional surface information. As shown in Figure 10, the marker matching steps are as follows:

1. Extract the marker points from the left- and right-eye two-dimensional surface information respectively, and fit the ellipse parameters.

2. Using the ellipse parameters computed in step 1, estimate the positions of the markers under the left and right cameras 3.

3. Transform the elliptical markers located under the right eye into the coordinate system of the left camera 3, using the previously calibrated transformation between the two cameras 3.

4. For each marker in the left-eye two-dimensional surface information, compute its epipolar line equation in the right-eye two-dimensional surface information, search for elliptical markers near that epipolar line in the right-eye information, and establish initial matches. After epipolar-constraint matching, a marker in the left-eye two-dimensional surface information may correspond to several right-eye markers; denote by Si the set of right-eye candidates corresponding to a left-eye marker Ei.

5. For each marker Ei in the left-eye two-dimensional surface information, traverse every point in the set Si and select as the corresponding point the one whose spatial position differs least from Ei and by less than the error tolerance, completing the marker matching.

In the above embodiment, step 3 may instead transform the elliptical markers located under the left eye into the coordinate system of the right camera 3, and match the corresponding points between the left- and right-eye two-dimensional surface information in the same way.
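The epipolar search in steps 4 and 5 can be sketched as follows. The fundamental matrix F is assumed to come from the prior stereo calibration; the rectified-pair F used in the example (horizontal epipolar lines) and the 2-pixel search band are illustrative assumptions:

```python
import numpy as np

def epiline_distance(F, p_left, p_right):
    """Distance from a right-image point to the epipolar line of a left-image point."""
    a, b, c = F @ np.array([p_left[0], p_left[1], 1.0])   # line a*x + b*y + c = 0
    x, y = p_right
    return abs(a*x + b*y + c) / np.hypot(a, b)

def match_markers(F, left_pts, right_pts, band=2.0):
    """Step 4: keep the right-image candidates within `band` pixels of the epiline."""
    matches = {}
    for i, pl in enumerate(left_pts):
        matches[i] = [j for j, pr in enumerate(right_pts)
                      if epiline_distance(F, pl, pr) < band]
    return matches

# rectified pair: corresponding points share a row, so epilines are horizontal
F = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)
left = [(100.0, 50.0)]
right = [(80.0, 50.4), (90.0, 120.0)]
print(match_markers(F, left, right))  # {0: [0]} -- only the same-row candidate survives
```

Step 5's disambiguation (choosing the candidate with the smallest spatial difference) would then run over each candidate list returned here.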

The laser stripe extraction module 44 of the present invention extracts laser stripe points from the original left- and right-eye two-dimensional surface information captured by the cameras 3 and reconstructs the three-dimensional coordinates of the stripe points. The laser stripe extraction steps are as follows:

1. Smooth the image to filter out noise.

2. Traverse each row of the image and take the brightness maximum of that row as the laser stripe candidate; denote the candidate of row i by Li.

3. Starting from the first row, compute the distance between the laser stripe candidates of adjacent rows. If the distance exceeds a threshold, save the currently tracked laser stripe segment and start tracking a new segment from the next row. Repeat this process until all rows of the image have been traversed.

4. Find the longest of all laser stripe segments and take that segment to be the laser stripe; denote it LP1.

5. For all remaining laser stripe segments, compute the distance from their endpoints to the endpoints of LP1, and choose the segment with the minimum distance, provided it is below a threshold, as the next laser stripe segment LP2.

6. For all remaining laser stripe segments, compute the distance from their endpoints to the endpoints of LP1 and LP2, and choose the segment with the minimum distance, provided it is below a threshold, as the next laser stripe segment LP3.

7. Return to step 2 until all laser stripe points have been found.
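Steps 2 through 4 above can be sketched on a toy image; the column-jump threshold and image values are illustrative assumptions:

```python
import numpy as np

def stripe_candidates(img):
    """Step 2: the per-row brightness maximum is the stripe candidate L_i."""
    return np.argmax(img, axis=1)          # column of the brightest pixel in each row

def split_segments(cols, jump=2):
    """Step 3: break the candidate chain where adjacent rows jump too far apart."""
    segments, start = [], 0
    for r in range(1, len(cols)):
        if abs(int(cols[r]) - int(cols[r - 1])) > jump:
            segments.append((start, r - 1))
            start = r
    segments.append((start, len(cols) - 1))
    return segments

# toy image: a bright stripe drifting one column per row, with one outlier row
img = np.zeros((6, 10))
for r in range(6):
    img[r, r + 2] = 255
img[3, 5] = 0; img[3, 9] = 255         # row 3's maximum is a noise pixel
cols = stripe_candidates(img)
segs = split_segments(cols)
longest = max(segs, key=lambda s: s[1] - s[0])   # step 4: longest segment is the stripe
print(segs, longest)
```

The outlier row ends up in its own one-row segment, so the longest-segment rule of step 4 recovers the true stripe despite the noise.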

The fusion and registration module 45 of the present invention uses the registration matrix T to register and fuse the laser stripes extracted by the laser stripe extraction module 44, yielding the final scan data of the object to be scanned, i.e. its three-dimensional surface information.

To increase data reliability and extend the range of a single scan, the laser stripe points reconstructed under the coordinate systems of the left and right cameras 3 must undergo left-right fusion, the steps of which are as follows:

1. Transform the laser stripe points reconstructed under the coordinate system of the right camera 3 into the coordinate system of the left camera 3, using the previously calibrated transformation matrix.

2. For each data point reconstructed under the left camera 3, find its closest point under the right camera 3. If the distance between the two points is less than the scanner's limit resolution, add both points to the final scan data; otherwise, compute the mean of the two coordinates and push the mean into the final scan data as the left-right fused laser stripe point.

In the above embodiment, step 1 may instead transform the laser stripes reconstructed under the coordinate system of the left camera 3 into the coordinate system of the right camera 3 with the previously calibrated transformation matrix, and fuse the laser stripe points reconstructed under the two camera coordinate systems in the same manner.
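Step 2 of the fusion can be sketched as follows. The threshold rule is implemented exactly as stated in the text (both points kept when closer than the resolution, the mean pushed otherwise); the resolution value and point data are illustrative assumptions:

```python
import numpy as np

def fuse_left_right(left_pts, right_pts_in_left, resolution=0.5):
    """For each left-camera point, find the nearest right-camera point (already
    transformed into the left frame) and apply the patent's threshold rule.
    `resolution` stands in for the scanner's limit resolution."""
    fused = []
    for p in left_pts:
        d = np.linalg.norm(right_pts_in_left - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < resolution:
            fused.append(p)                         # both points join the scan data
            fused.append(right_pts_in_left[j])
        else:
            fused.append((p + right_pts_in_left[j]) / 2.0)   # push the coordinate mean
    return np.array(fused)

left = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
right_in_left = np.array([[0.1, 0.0, 0.0], [20.0, 0.0, 0.0]])
fused = fuse_left_right(left, right_in_left)
print(fused.shape)  # (3, 3): one close pair kept as two points, one far pair averaged
```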

The fused laser stripe points are registered to the overall scan data as follows:

1. Definitions: a container SignPtSet holds the reconstructed three-dimensional marker coordinates; a container KNNId records the K-neighborhood indices of each point in SignPtSet; a container KNNInfo records the topology information between each point in SignPtSet and each point of its K-neighborhood. Denote by P the set of marker points obtained from each single-frame scan, and by Coord_first the coordinate system of the left camera at the first scan, which is taken as the global coordinate system.

2. Push all marker points of the point set P reconstructed from the first frame image into the container SignPtSet.

3. Compute the topology information of each marker in the point set P, comprising its color, its type, and its distances to neighboring points, and at the same time record the K-neighborhood indices of each marker.

4. Push the K-neighborhood indices of every point into KNNId, and the topology information of each marker into KNNInfo.

5. Push the fused laser stripe points of the current first frame into the overall scan data, completing the processing of the first frame's two-dimensional surface information into three-dimensional surface information.
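The first-frame container setup above can be sketched as follows. The brute-force neighbor search and the dictionary layout of KNNInfo are illustrative assumptions; the patent's topology information also includes marker color and type, which are omitted here:

```python
import numpy as np

def k_neighbors(points, k=3):
    """Brute-force K-nearest-neighbour indices and distances for every point."""
    P = np.asarray(points, float)
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    order = np.argsort(D, axis=1)[:, 1:k + 1]      # drop column 0: the point itself
    return order, np.take_along_axis(D, order, axis=1)

# first frame: push every reconstructed marker and its topology into the containers
sign_pt_set = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (4.0, 0.0, 0.0)]
knn_id, dists = k_neighbors(sign_pt_set, k=2)                 # container KNNId
knn_info = [{"dists": tuple(np.round(d, 3))} for d in dists]  # container KNNInfo
print(knn_id[0])   # indices of the two markers nearest to point 0
```

Because the neighbor distances are rigid-motion invariant, they can later identify the same physical marker across frames regardless of the scanner's pose.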

To keep the data coordinates consistent, subsequent scans must transform the three-dimensional coordinates of the markers and laser stripe points from the current camera coordinate system into Coord_first, as follows:

1. Push all marker points of the point set P reconstructed from the current frame image (any frame after the first) into the container SignPtSet.

2. Compute the topology information of each marker in the point set P, and at the same time record the K-neighborhood indices of each marker.

3. For each marker in the point set P reconstructed from the current frame image, use its topology information and K-neighborhood indices to find the point in SignPtSet with the same topological structure, and form point pairs from these markers.

4. If the number of point pairs is less than 2, return to step 3; if it is greater than 2, use the point pairs found to solve for the transformation matrix T from the current coordinate system to the global coordinate system Coord_first, i.e. the registration matrix T.

5. Transform each marker in the point set P with the registration matrix T; denote the transformed point set P1.

6. Traverse every point in the point set P1. If other markers in SignPtSet lie within its R-ball neighborhood, compute the mean of these markers and replace the corresponding marker in the original SignPtSet with the mean; otherwise, push the point into SignPtSet, compute its topology information, record its K-neighborhood indices, and push the point's topology information and K-neighborhood indices into KNNInfo and KNNId respectively.

7. Transform the laser stripe points that have completed left-right fusion with the registration matrix T, and register the fused laser stripe points of the current frame into the overall scan data, completing the processing of the current frame's two-dimensional surface information into three-dimensional surface information.
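Solving for the registration matrix T from the matched marker pairs in step 4 is a rigid point-set alignment problem. The patent does not name the solver; the SVD-based Kabsch method below is one standard choice (note that it needs at least three non-collinear point pairs in practice), and the example data are illustrative:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid motion (R, t) with dst ≈ R @ src + t, via SVD (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# matched marker pairs: current frame -> global frame (90° turn about z, then a shift)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
stripe_pt = R @ np.array([2.0, 0.0, 0.0]) + t   # step 7: register one fused stripe point
print(np.round(stripe_pt, 6))
```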

The overall scan data above constitutes the final three-dimensional surface information of the object to be scanned.

In the above embodiment, as shown in Figure 1, the extractor of the present invention further comprises two ambient light projectors 8, symmetrically arranged between the left and right cameras 3 and the line laser projector 2, so that the ambient illumination they form is distributed uniformly over the cameras' field of view. The brightness of the ambient light is adjusted by voltage regulation: it is reduced when the surface of the scanned object is brighter or darker, ensuring that laser stripe points can be extracted accurately from surfaces of different materials. In the present embodiment, the ambient light source 7 uses two 1 W light-emitting diodes.

The steps of the extraction method of the present invention are as follows:

1) Arrange a number of circular markers of multiple colors and types at random on the scanned object, symmetrically arrange two cameras on the left and right sides of a line laser projector, and provide an image acquisition and processing card with two parallel working threads;

2) Acquire, with one working thread of the image acquisition and processing card, the left- and right-eye two-dimensional surface information captured by the left and right cameras;

3) With the other thread of the image acquisition and processing card, extract the markers from the left- and right-eye two-dimensional surface information using the elliptical-object image processing method, perform stereoscopic vision matching, and reconstruct the three-dimensional coordinates and topological structure of the markers; use the reconstructed topological structure of the three-dimensional markers to compute the registration matrix of the scan data under the current pose within the overall scan data; extract the laser stripe points from the left- and right-eye two-dimensional surface information, reconstruct their three-dimensional coordinates, and perform left-right fusion on the two sets of laser stripe coordinates; then use the registration matrix under the current pose to register the fused laser stripe coordinates into the overall scan data of the scanned object;

4) Repeat steps 2) to 3) until all scan data of the object to be scanned has been obtained.

In step 3) above, the image processing of the object to be scanned has already been described in full for the extractor of the present invention and is not repeated here.

Claims (8)

1. A hand-held three-dimensional surface information extraction method, comprising the following steps:
1) arranging a number of circular markers of multiple colors and types at random on the scanned object, symmetrically arranging two cameras on the left and right sides of a line laser projector, and providing an image acquisition and processing card with two parallel working threads;
2) acquiring, with one working thread of the image acquisition and processing card, the left- and right-eye two-dimensional surface information captured by the left and right cameras;
3) with the other thread of the image acquisition and processing card, extracting the markers from the left- and right-eye two-dimensional surface information using an elliptical-object image processing method, performing stereoscopic vision matching, and reconstructing the three-dimensional coordinates and topological structure of the markers; using the reconstructed topological structure of the three-dimensional markers to compute the registration matrix of the scan data under the current pose within the overall scan data; extracting the laser stripe points from the left- and right-eye two-dimensional surface information, reconstructing their three-dimensional coordinates, and performing left-right fusion on the two sets of laser stripe coordinates; then using the registration matrix under the current pose to register the fused laser stripe coordinates into the overall scan data of the object to be scanned; wherein the markers are extracted from the left- and right-eye two-dimensional surface information as follows:
1. applying the SUSAN operator to perform edge extraction on the left- and right-eye two-dimensional surface information respectively;
2. identifying the different regions in the left- and right-eye two-dimensional surface information respectively by connected-region analysis;
3. preliminarily determining possible elliptical regions;
4. fitting ellipses directly to the candidate elliptical regions of step 3 by the least-squares method, and further determining the elliptical marker points;
4) repeating steps 2) to 3) until all scan data of the object to be scanned has been obtained.
2. The hand-held three-dimensional surface information extraction method according to claim 1, characterized in that in step 3) the stereoscopic vision matching of the markers comprises:
1. extracting the elliptical markers from the left- and right-eye two-dimensional surface information respectively, and fitting the ellipse parameters;
2. estimating, from the ellipse parameters obtained, the positions of the elliptical markers under the left and right cameras;
3. transforming the elliptical markers located under the right (or left) eye into the coordinate system of the left (or right) camera, using the previously calibrated transformation between the cameras;
4. for each marker in the left- (or right-) eye two-dimensional surface information, computing its epipolar line equation in the right- (or left-) eye two-dimensional surface information, searching for elliptical markers near that epipolar line to establish initial matches, and denoting by Si the set of right- (or left-) eye candidates corresponding to any elliptical marker Ei in the left- (or right-) eye two-dimensional surface information;
5. for any elliptical marker Ei in the left- (or right-) eye two-dimensional surface information, traversing every point in the set Si and selecting as the corresponding point the one whose spatial position differs least from Ei and by less than the error tolerance, completing the matching of the elliptical markers.
3. The hand-held three-dimensional surface information extraction method according to claim 1, characterized in that in step 3) the laser stripe points are extracted as follows:
1. smoothing the image to filter out noise;
2. traversing each row of the image and taking the brightness maximum of that row as the laser stripe candidate, the candidate of row i being denoted Li;
3. starting from the first row, computing the distance between the laser stripe candidates of adjacent rows; if the distance exceeds a threshold, saving the currently tracked laser stripe segment, starting a new segment from the next row, and repeating this process until all rows of the image have been traversed;
4. finding the longest of all laser stripe segments and taking it to be the laser stripe, this segment being denoted LP1;
5. for all remaining laser stripe segments, computing the distance from their endpoints to the endpoints of LP1, and choosing the segment with the minimum distance, below a threshold, as the next laser stripe segment LP2;
6. for all remaining laser stripe segments, computing the distance from their endpoints to the endpoints of LP1 and LP2, and choosing the segment with the minimum distance, below a threshold, as the next laser stripe segment LP3;
7. returning to step 2 until all laser stripe points have been found.
4. The hand-held three-dimensional surface information extraction method according to claim 1, characterized in that in step 3) the laser stripe points reconstructed under the two camera coordinate systems undergo left-right fusion as follows:
1. transforming the laser stripe points reconstructed under the coordinate system of the right (or left) camera into the coordinate system of the left (or right) camera, using the previously calibrated transformation matrix;
2. for each data point reconstructed under the left (or right) camera, finding its closest point under the right (or left) camera; if the distance between the two points is less than the scanner's limit resolution, adding both points to the final scan data; otherwise, computing the mean of the two coordinates and pushing the mean into the final scan data as the left-right fused laser stripe point.
5. The hand-held three-dimensional surface information extraction method according to claim 1, characterized in that in step 3) the three-dimensional coordinates of the fused laser stripe points are registered into the overall scan data of the scanned object as follows:
1. definitions: a container SignPtSet holds the reconstructed three-dimensional marker coordinates; a container KNNId records the K-neighborhood indices of each point in SignPtSet; a container KNNInfo records the topology information between each point in SignPtSet and each point of its K-neighborhood; P denotes the set of marker points obtained from each single-frame scan; Coord_first denotes the coordinate system of the left (or right) camera at the first scan, taken as the global coordinate system;
2. pushing all marker points of the point set P reconstructed from the first frame image into the container SignPtSet;
3. computing the neighborhood information of every point in the point set P, recording the K-neighborhood indices and topology information of every point, and pushing them into KNNId and KNNInfo respectively;
4. pushing the fused laser stripe points of the current first frame into the overall scan data, completing the processing of the first frame's two-dimensional surface information into three-dimensional surface information;
5. pushing all marker points of the point set P reconstructed from a current frame image after the first into the container SignPtSet, and computing the topology information and K-neighborhood indices of each marker in the point set P;
6. using the topology information and K-neighborhood indices of each marker to find the point in SignPtSet with the same topological structure and form point pairs;
7. if the number of point pairs is less than 2, returning to step 6; otherwise, solving, from the point pairs found, for the registration matrix T from the current coordinate system to the global coordinate system Coord_first;
8. transforming the marker point set P with the registration matrix T, the transformed point set being denoted P1;
9. traversing every point in P1: if other markers exist within its R-ball neighborhood, computing the mean of these markers to replace the original marker; otherwise, pushing the point into SignPtSet, computing its topology information and K-neighborhood indices, and pushing them into KNNId and KNNInfo respectively;
10. transforming the laser stripe points that have completed left-right fusion with the registration matrix T, and registering the fused laser stripe points of the current frame into the overall scan data, completing the processing of the current frame's two-dimensional surface information into three-dimensional surface information.
6. A hand-held three-dimensional surface information extractor, characterized in that it comprises:
a frame;
a line laser projector arranged on the frame;
left and right cameras symmetrically arranged on the frame on either side of the line laser projector;
an image acquisition and processing card electrically connected to the left and right cameras;
a communication module electrically connected to the image acquisition and processing card;
a computer connected to the communication module by a data line;
a power module connected to the left and right cameras, the image acquisition and processing card, and the communication module respectively;
the image acquisition and processing card comprising:
an image capture module for acquiring the left- and right-eye two-dimensional surface information captured by the left and right cameras;
an image processing module that extracts elliptical markers from the left- and right-eye two-dimensional surface information using an image processing method for elliptical objects;
a stereoscopic vision matching module that matches the elliptical markers extracted by the image processing module between the left- and right-eye two-dimensional surface information, reconstructs their three-dimensional coordinates and topological structure, and uses the reconstructed topological structure of the three-dimensional markers to compute the registration matrix T that transforms the scan data under the current pose into the overall scan data;
a laser stripe extraction module that extracts the laser stripe points from the left- and right-eye two-dimensional surface information and reconstructs their three-dimensional coordinates;
a fusion and registration module that uses the registration matrix T computed by the stereoscopic vision matching module to fuse and register the three-dimensional coordinates of the laser stripe points reconstructed by the laser stripe extraction module, to obtain the three-dimensional surface information of the object to be scanned.
7. The hand-held three-dimensional surface information extractor according to claim 6, characterized in that it further comprises two ambient light projectors, symmetrically arranged between the left and right cameras and the line laser projector respectively.
8. The hand-held three-dimensional surface information extractor according to claim 6 or 7, characterized in that the distance from the left and right cameras to the scanned object is 20 cm to 30 cm, the distance between the two optical centers is 25 cm, and the optical axis directions of the left and right cameras form an angle of 20° with the projection direction of the laser stripe of the line laser projector.
CN2010101738490A 2010-05-10 2010-05-10 Hand-held three-dimensional surface information extraction method and extractor thereof CN101853528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101738490A CN101853528B (en) 2010-05-10 2010-05-10 Hand-held three-dimensional surface information extraction method and extractor thereof


Publications (2)

Publication Number Publication Date
CN101853528A CN101853528A (en) 2010-10-06
CN101853528B true CN101853528B (en) 2011-12-07


Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102012008905A1 (en) * 2012-05-08 2013-11-14 Airbus Operations Gmbh Optical measuring device and displacement device and optical measuring method
CN102779344B (en) * 2012-07-02 2014-08-27 济南大学 Registering block for space exchange and use method thereof
CN102968820A (en) * 2012-12-04 2013-03-13 上海无线电设备研究所 Method for establishing external surface geometric model of space target based on high-precision scanning
CN103033145B (en) * 2013-01-08 2015-09-02 天津锋时互动科技有限公司 For identifying the method and system of the shape of multiple object
CN103256896B (en) * 2013-04-19 2015-06-24 大连理工大学 Position and posture measurement method of high-speed rolling body
CN103632384B (en) * 2013-10-25 2016-06-01 大连理工大学 The rapid extracting method of built-up type mark point and mark dot center
CN104517280B (en) * 2013-11-14 2017-04-12 广东朗呈医疗器械科技有限公司 Three-dimensional imaging method
TWI509566B (en) * 2014-07-24 2015-11-21 Etron Technology Inc Attachable three-dimensional scan module
CN104268930B (en) * 2014-09-10 2018-05-01 芜湖林一电子科技有限公司 A kind of coordinate pair is than 3-D scanning method
CN107004278B (en) * 2014-12-05 2020-11-17 曼蒂斯影像有限公司 Tagging in 3D data capture
CN104501740B (en) * 2014-12-18 2017-05-10 杭州鼎热科技有限公司 Handheld laser three-dimension scanning method and handheld laser three-dimension scanning equipment based on mark point trajectory tracking
CN105091782A (en) * 2015-05-29 2015-11-25 南京邮电大学 Multilane laser light plane calibration method based on binocular vision
CN204988183U (en) * 2015-08-05 2016-01-20 杭州思看科技有限公司 Handheld scanning apparatus skeleton texture
CN105203046B (en) * 2015-09-10 2018-09-18 北京天远三维科技股份有限公司 Multi-thread array laser 3 D scanning system and multi-thread array laser 3-D scanning method
CN105300310A (en) * 2015-11-09 2016-02-03 杭州讯点商务服务有限公司 Handheld laser 3D scanner with no requirement for adhesion of target spots and use method thereof
CN106500628B (en) * 2016-10-19 2019-02-19 杭州思看科技有限公司 A kind of 3-D scanning method and scanner containing multiple and different long wavelength lasers
CN108151671B (en) * 2016-12-05 2019-10-25 先临三维科技股份有限公司 A kind of 3 D digital imaging sensor, 3 D scanning system and its scan method
CN107202554B (en) * 2017-07-06 2018-07-06 杭州思看科技有限公司 It is provided simultaneously with photogrammetric and 3-D scanning function hand-held large scale three-dimensional measurement beam scanner system
CN109029292A (en) * 2018-08-21 2018-12-18 孙傲 A kind of inner surface of container three-dimensional appearance non-destructive testing device and detection method
CN109341591A (en) * 2018-11-12 2019-02-15 杭州思看科技有限公司 A kind of edge detection method and system based on handheld three-dimensional scanner

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3614935B2 (en) * 1995-06-20 2005-01-26 オリンパス株式会社 3D image measuring device
WO2004088245A1 (en) * 2003-03-27 2004-10-14 Zanen Pieter O Method of solving the correspondence problem in convergent stereophotogrammetry
CN102112845B (en) * 2008-08-06 2013-09-11 形创有限公司 System for adaptive three-dimensional scanning of surface characteristics
CN101504275A (en) * 2009-03-11 2009-08-12 华中科技大学 Hand-hold line laser three-dimensional measuring system based on spacing wireless location


Similar Documents

Publication Publication Date Title
Casser et al. Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos
Zanuttigh et al. Time-of-flight and structured light depth cameras
DE102012112321B4 (en) Device for optically scanning and measuring an environment
Geiger et al. Stereoscan: Dense 3d reconstruction in real-time
US9478035B2 (en) 2D/3D localization and pose estimation of harness cables using a configurable structure representation for robot operations
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
TWI569229B (en) Method for registering data
Pages et al. Optimised De Bruijn patterns for one-shot shape acquisition
DE102012112322B4 (en) Method for optically scanning and measuring an environment
US8867790B2 (en) Object detection device, object detection method, and program
Zhang et al. Rapid shape acquisition using color structured light and multi-pass dynamic programming
Fanello et al. Hyperdepth: Learning depth from structured light without matching
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
Sagawa et al. Dense 3D reconstruction method using a single pattern for fast moving object
Young et al. Coded structured light
US7953271B2 (en) Enhanced object reconstruction
CN102710951B (en) Multi-view-point computing and imaging method based on speckle-structure optical depth camera
Song et al. An accurate and robust strip-edge-based structured light means for shiny surface micromeasurement in 3-D
KR20160088909A (en) Slam on a mobile device
EP1649423B1 (en) Method and sytem for the three-dimensional surface reconstruction of an object
CN102231792B (en) Electronic image stabilization method based on characteristic coupling
Chun et al. Markerless kinematic model and motion capture from volume sequences
CN103810685B (en) A kind of super-resolution processing method of depth map
US9436987B2 (en) Geodesic distance based primitive segmentation and fitting for 3D modeling of non-rigid objects from 2D images
US9454821B2 (en) One method of depth perception based on binary laser speckle images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant