CN101853528B: Handheld three-dimensional surface information extraction method and extractor thereof
Publication number: CN101853528B
Authority: CN (China)
Description
Technical field
The present invention relates to a method and device for acquiring three-dimensional information of an object's surface profile, and in particular to a handheld three-dimensional surface information extraction method and extractor.
Background technology
Digitization of physical object profiles is a new product-design auxiliary technology that has arisen alongside the continuous development of CAD/CAE. By digitizing physical mock-ups, the advantages of digital techniques and computers can be fully exploited to improve the efficiency of product design and manufacturing. In recent years, physical-profile digitization has been widely applied in many fields. In product research and development, reverse engineering can recover the original design intent and mechanism through digitization and improve on the original design, shortening the R&D cycle. In manufacturing, it builds a bridge between hand-made models and computer technology, combining the intuitiveness and easy modifiability of physical models with the power of computer-aided manufacturing. In police criminal investigation, it can be used to extract three-dimensional information from crime scenes, improving the efficiency of case solving. In archaeological research, it can convert the appearance of rare cultural relics into computer models, which can then be used for exhibition and permanent preservation.
At present, computer vision is an important means of digitizing physical profiles. Because of the camera's limited field of view and occlusions during digitization, the camera must scan the object from different positions and poses. The traditional approach fixes the scanner head on a moving device with a positioning function, such as a high-precision motion mechanism, a flexible arm, or an electromagnetic gyroscope, and uses the device to provide the motion information of the scanner head, but the cost is very high. Some current products also need added ambient lighting and optical filters to strengthen their adaptability to the environment; however, a fixed ambient light brightness is affected by the surface material of the measured object, so the laser stripe cannot be extracted when the object's surface color is dark, and optical filters distort the image and degrade measurement accuracy.
Currently the more commonly used approach is the area-structured-light method: a projector projects a structured image onto the measured surface, a CCD camera receives it, and the computer decodes the received image to obtain the angle of the light projected at every point, from which the depth of the surface can be calculated by laser triangulation. Although area structured light obtains many three-dimensional data points in the field of view in a single measurement, the inverse computation of its data cannot meet real-time requirements, so during measurement the camera must be fixed on a tripod and kept relatively static, which greatly limits flexibility. In addition, because of occlusions on the object's surface it is difficult to determine in advance the camera poses needed for scanning, so scanning efficiency is very low. In principle, the shortcoming of coded-structured-light measurement is its discreteness: each grating stripe carries one discrete value, so only a limited number of fringes can be coded, which limits measurement precision.
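The triangulation relation mentioned above can be illustrated with a minimal sketch. The function below assumes a rectified stereo pair with focal length f (in pixels), baseline B (in meters), and horizontal disparity d (in pixels), for which depth z = f·B/d; this is a standard textbook simplification, not a formula stated in the patent.

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by a rectified stereo pair via triangulation.

    z = f * B / d, where f is the focal length in pixels, B the baseline
    in meters, and d the horizontal disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px


# e.g. with f = 1000 px and B = 0.25 m (the 25 cm optical-center distance
# used later in this patent), a 50 px disparity corresponds to z = 5 m.
```

The same inverse relation between disparity and depth is why a short baseline and close working distance (20 cm to 30 cm here) keep the depth resolution high.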
Summary of the invention
In view of the above problems, the object of the present invention is to provide a real-time, accurate handheld three-dimensional surface information extraction method and extractor.
To achieve the above object, the present invention adopts the following technical scheme. A handheld three-dimensional surface information extraction method comprises the following steps: 1) randomly placing a number of circular mark points of multiple colors and types on the scanned object, symmetrically arranging two cameras on the left and right sides of a line laser projector, and providing an image acquisition and processing board with two concurrently working threads; 2) one worker thread of the image acquisition and processing board acquiring the left and right two-dimensional profile images captured by the left and right cameras; 3) the other thread applying an elliptical-object image processing method to extract the mark points from the left and right two-dimensional profile images, performing stereo-vision matching, and reconstructing the three-dimensional coordinates and topological structure of the mark points in the left and right images; using the topology of the reconstructed three-dimensional mark points to compute the registration matrix that maps the scan data under the current pose into the whole scan data; then extracting the laser stripe points from the left and right images, reconstructing their three-dimensional coordinates, fusing the left and right three-dimensional laser stripe coordinates, and registering the fused stripe points into the whole scan data of the object to be scanned with the registration matrix under the current pose; 4) repeating steps 2) to 3) until the whole scan data of the object to be scanned is obtained.
In step 3), the steps of extracting mark points from the left and right two-dimensional profile images are as follows: 1. applying the SUSAN operator to perform edge extraction on the left and right images respectively; 2. identifying the different regions in the left and right images respectively by connected-region analysis; 3. preliminarily determining the possible elliptical regions; 4. fitting the possible elliptical regions preliminarily determined in step 3 with the least-squares method, and further determining the elliptical mark points.
The determination of elliptical regions in step 3 is as follows:
A. Traverse the pixels of the left and right two-dimensional profile images, find the minimal rectangle enclosing each marked region, [StRow_i, StCol_i] → [EndRow_i, EndCol_i], and record the area Area_i of each rectangle in pixel counts.
B. Remove the region identifications satisfying either of the following conditions:
a) the rectangle's aspect ratio is out of balance, i.e. the ratio of the rectangle's height (EndRow_i - StRow_i) to its width (EndCol_i - StCol_i) falls outside the interval [Tol_Min, Tol_Max]. Here StRow_i is the first row of the i-th region, StCol_i its first column, EndRow_i its last row, EndCol_i its last column, and Tol_Max and Tol_Min are the maximum and minimum tolerances of the aspect ratio;
b) the area Area_i of the i-th region is too large or too small:
Area_i > Tol_Max or Area_i < Tol_Min.
Using the minimal rectangle [StRow_i, StCol_i] → [EndRow_i, EndCol_i] enclosing each marked region, find the edge Edge_i of each marked region.
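Steps A and B above (bounding-box traversal plus aspect-ratio and area screening) can be sketched as follows. The dict-based region representation, the max/min form of the aspect ratio, and the tolerance names are illustrative assumptions, not details given in the patent.

```python
def filter_candidate_regions(regions, tol_ar_min, tol_ar_max, tol_area_min, tol_area_max):
    """Keep only regions whose bounding box and pixel area look elliptical.

    Each region is a dict holding its minimal enclosing rectangle
    (st_row, st_col) -> (end_row, end_col) and its area in pixel counts.
    """
    kept = []
    for r in regions:
        height = r["end_row"] - r["st_row"] + 1
        width = r["end_col"] - r["st_col"] + 1
        aspect = max(height, width) / min(height, width)
        if not (tol_ar_min <= aspect <= tol_ar_max):
            continue  # condition a): aspect ratio out of balance
        if not (tol_area_min <= r["area"] <= tol_area_max):
            continue  # condition b): area too large or too small
        kept.append(r)
    return kept
```

A long thin region (a stray laser reflection, say) fails the aspect-ratio test, and tiny speckle regions fail the area test, so only plausible ellipse candidates survive to the fitting stage.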
The ellipse-fitting steps in step 4 are as follows:
A. For the edge points Edge_i of each identified region, fit an ellipse Ellip(C_x, C_y, A_l, A_s, θ), where C_x, C_y are the pixel coordinates of the ellipse center, A_l, A_s are the major and minor axis lengths, and θ is the ellipse's orientation angle;
B. Using the fitted ellipse parameters C_x, C_y, A_l, A_s, θ, compute the ellipse area Area'_i and compare it with the area Area_i obtained in step 3 A; if the deviation between Area'_i and Area_i exceeds the allowed tolerance, remove this ellipse;
C. Compute the mean distance from the edge points to the fitted ellipse to obtain the mean fitting error MeanErr; if MeanErr > Tol, remove this ellipse, where Tol is the tolerance of the mean error.
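Step C's mean-fit-error check can be sketched with an algebraic residual: the mean deviation of the normalized ellipse equation from 1 over the edge points. This is cheaper than the exact geometric point-to-ellipse distance the patent may intend, so treat the substitution as an assumption; the parameters here are semi-axis lengths.

```python
import math

def mean_algebraic_ellipse_error(edge_pts, cx, cy, a_long, a_short, theta):
    """Mean |(u/a)^2 + (v/b)^2 - 1| over edge points, where (u, v) are the
    point coordinates rotated into the ellipse's own axis frame."""
    total = 0.0
    for x, y in edge_pts:
        dx, dy = x - cx, y - cy
        u = dx * math.cos(theta) + dy * math.sin(theta)
        v = -dx * math.sin(theta) + dy * math.cos(theta)
        total += abs((u / a_long) ** 2 + (v / a_short) ** 2 - 1.0)
    return total / len(edge_pts)


def accept_ellipse(edge_pts, cx, cy, a_long, a_short, theta, tol):
    # mirror of step C: reject the candidate when MeanErr > Tol
    return mean_algebraic_ellipse_error(edge_pts, cx, cy, a_long, a_short, theta) <= tol
```

Points lying exactly on the fitted ellipse give a residual of zero, while outliers inflate the mean error and cause the candidate to be discarded.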
In step 3), the stereo-vision matching of mark points proceeds as follows: 1. extracting the elliptical mark points in the left and right two-dimensional profile images respectively and fitting the ellipse parameters; 2. using the obtained ellipse parameters to estimate the positions of the elliptical mark points under the left and right cameras; 3. transforming the mark-point positions based on one camera into the coordinate system of the other camera through the transformation relation between the two cameras calibrated in advance; 4. for each mark point in one image, computing its epipolar-line equation in the other image and searching for elliptical mark points near that epipolar line to establish initial matches, denoting by Si the set of candidate corresponding points in the other image for any elliptical mark point Ei; 5. for each elliptical mark point Ei, traversing every point in the set Si and taking as the corresponding point the one whose spatial position differs least from Ei and is within the error tolerance, thereby completing the matching of the elliptical mark points.
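The epipolar search in step 4 rests on the point-to-epipolar-line distance. A minimal sketch is below, assuming a known fundamental matrix F (calibrated in advance, as the patent requires) supplied as nested lists; the rectified-pair F used in the test is an illustrative assumption.

```python
import math

def epipolar_distance(F, p_left, p_right):
    """Distance of a right-image point from the epipolar line of a
    left-image point, with F the 3x3 fundamental matrix (nested lists)
    and points given as (x, y) pixel coordinates."""
    xl, yl = p_left
    # epipolar line in the right image: l = F @ [xl, yl, 1]
    l = [F[i][0] * xl + F[i][1] * yl + F[i][2] for i in range(3)]
    xr, yr = p_right
    # normalized line-point distance |l . [xr, yr, 1]| / sqrt(l0^2 + l1^2)
    return abs(l[0] * xr + l[1] * yr + l[2]) / math.hypot(l[0], l[1])
```

Candidates whose distance to the epipolar line falls below a threshold would form the initial match set Si; the final correspondence is then resolved by the smallest spatial difference, as step 5 describes.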
In step 3), the laser stripe points are extracted as follows: 1. smoothing the image to filter out noise; 2. traversing each row of the image and taking the maximum-brightness pixel of the row as the laser stripe candidate of that row, the candidate of row i being denoted Li; 3. starting from the first row, computing the distance between the laser stripe candidates of adjacent rows; if the distance exceeds a threshold, saving the currently found stripe segment and starting to track a new segment from the next row, repeating this until all rows of the image have been traversed; 4. taking the longest of all stripe segments as a confirmed part of the laser stripe, denoted LP1; 5. for all remaining segments, computing the distance from their endpoints to the endpoints of LP1, and choosing the segment with the minimum distance below a threshold as the next stripe segment LP2; 6. for the remaining segments, computing the distances from their endpoints to the endpoints of LP1 and LP2 and choosing the segment with the minimum distance below a threshold as the next stripe segment LP3; 7. returning to step 2 until all laser stripe points are found.
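Steps 2 and 3 of the stripe extraction (per-row brightest pixel, then splitting the chain where consecutive rows jump too far) can be sketched as follows; the plain list-of-rows image format and the threshold names are illustrative assumptions.

```python
def stripe_candidates(image, min_brightness=0):
    """Per-row brightest pixel as the stripe candidate (step 2).

    image is a grayscale image given as a list of rows of intensities;
    returns (row, col) candidate positions."""
    pts = []
    for row_idx, row in enumerate(image):
        col = max(range(len(row)), key=lambda c: row[c])
        if row[col] > min_brightness:
            pts.append((row_idx, col))
    return pts


def split_into_segments(pts, max_jump):
    """Break the candidate chain where adjacent rows jump more than
    max_jump columns (step 3)."""
    segments, current = [], [pts[0]]
    for prev, cur in zip(pts, pts[1:]):
        if abs(cur[1] - prev[1]) > max_jump:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments
```

The longest resulting segment would play the role of LP1, with the remaining segments chained on by endpoint distance as steps 4 to 6 describe.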
In step 3), the laser stripe points reconstructed under the left and right camera coordinate systems are fused as follows: 1. transforming the stripe points reconstructed under one camera's coordinate system into the other camera's coordinate system through the transition matrix calibrated in advance; 2. for each reconstructed data point under one camera, finding its closest point under the other camera; if the distance between the two points is less than the scanner's ultimate resolution, adding both points to the final scan data; otherwise computing the mean of their coordinates and pushing the mean into the final scan data as the fused left-right laser stripe point.
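The nearest-neighbor fusion of step 2 can be sketched as below, following the keep-both/average rule exactly as stated above; the brute-force nearest-point search and the point format are illustrative simplifications of whatever the board actually implements.

```python
import math

def fuse_left_right(pts_left, pts_right, resolution):
    """Fuse stripe points already expressed in a common camera frame.

    For each left point, find its closest right point: if the pair is
    closer than the scanner's ultimate resolution, both samples are kept;
    otherwise their coordinate mean is pushed as the fused point."""
    fused = []
    for p in pts_left:
        q = min(pts_right, key=lambda r: math.dist(p, r))
        if math.dist(p, q) < resolution:
            fused.extend([p, q])
        else:
            fused.append(tuple((a + b) / 2 for a, b in zip(p, q)))
    return fused
```

A k-d tree would replace the linear `min` scan in a real-time implementation, but the pairing rule itself is unchanged.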
The fused laser stripe coordinates in step 3) are registered into the whole scan data of the scanned object as follows: 1. defining a container SignPtSet to hold the reconstructed three-dimensional mark-point coordinates, a container KNNId to record the K-neighborhood indices of each point in SignPtSet, and a container KNNInfo to record the topology information of each point in SignPtSet with every point of its K-neighborhood; denoting by P the set of mark points obtained from each single-frame scan, and by Coord_First the coordinate system of one camera at the first scan, which is taken as the global coordinate system; 2. pushing all mark points of the point set P reconstructed from the first frame into the container SignPtSet; 3. computing the neighborhood information of every point in P, recording the K-neighborhood indices and topology information of every point, and pushing them into KNNId and KNNInfo respectively; 4. pushing the fused laser stripe points of the current first frame into the whole scan data, completing the processing of the first frame's two-dimensional profile information into three-dimensional surface information; 5. for each frame after the first, pushing all mark points of the reconstructed mark-point set P into SignPtSet and computing the topology information and K-neighborhood indices of each mark point in P; 6. using the topology information and K-neighborhood indices of each mark point to find from SignPtSet the points with the same topological structure and forming point pairs with them; 7. if the number of point pairs is less than 2, returning to step 6; otherwise using the found point pairs to solve the registration matrix T from the current coordinate system to the global coordinate system Coord_First; 8. transforming the mark-point set P with the registration matrix T, the transformed set being denoted P1; 9. traversing every point of P1: if other mark points exist within its R-ball neighborhood, computing the mean of these mark points to replace the original mark point; otherwise pushing the point into SignPtSet, computing its topology information and K-neighborhood indices, and pushing them into KNNId and KNNInfo respectively; 10. transforming the fused laser stripe points with the registration matrix T and registering the fused stripe points of the current frame into the whole scan data, completing the processing of the current frame's two-dimensional profile information into three-dimensional surface information.
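The registration matrix T of step 7 is a least-squares rigid transform solved from matched marker pairs. The patent does not name its solver; the closed-form planar (2D) Procrustes solution below illustrates the idea with pure Python, and is an assumed stand-in, since the full 3D case is usually solved the same way via an SVD (Kabsch algorithm).

```python
import math

def rigid_registration_2d(src_pts, dst_pts):
    """Least-squares rigid transform (theta, tx, ty) such that
    dst ≈ R(theta) @ src + t, for matched 2D point pairs."""
    n = len(src_pts)
    csx = sum(p[0] for p in src_pts) / n
    csy = sum(p[1] for p in src_pts) / n
    cdx = sum(p[0] for p in dst_pts) / n
    cdy = sum(p[1] for p in dst_pts) / n
    s_cos = s_sin = 0.0
    for (sx, sy), (dx, dy) in zip(src_pts, dst_pts):
        px, py = sx - csx, sy - csy          # centered source point
        qx, qy = dx - cdx, dy - cdy          # centered destination point
        s_cos += px * qx + py * qy
        s_sin += px * qy - py * qx
    theta = math.atan2(s_sin, s_cos)         # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)           # translation aligning centroids
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

Given at least two non-coincident marker pairs the planar transform is determined; the patent's step 7 imposes the same minimum-pair condition before solving for T.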
A handheld three-dimensional surface information extractor is characterized in that it comprises: a frame; a line laser projector arranged on the frame; two cameras symmetrically arranged on the frame on both sides of the line laser projector; an image acquisition and processing board electrically connected to the left and right cameras; a communication module electrically connected to the image acquisition and processing board; a computer connected to the communication module by a data line; and a power module connected respectively to the left and right cameras, the image acquisition and processing board, and the communication module. The image acquisition and processing board comprises: an image acquisition module for acquiring the left and right two-dimensional profile images captured by the left and right cameras; an image processing module that extracts the elliptical mark points from the left and right images using the elliptical-object image processing method; a stereo-vision matching module that matches the elliptical mark points extracted by the image processing module in the left and right images, reconstructs their three-dimensional coordinates and topological structure, and uses the topology of the reconstructed three-dimensional mark points to compute the registration matrix transforming the scan data under the current pose into the whole scan data; a laser stripe extraction module that extracts the laser stripe points from the left and right images and reconstructs their three-dimensional coordinates; and a fusion and registration module that uses the registration matrix computed by the stereo-vision matching module to fuse and register the three-dimensional coordinates of the laser stripe points reconstructed by the laser stripe extraction module, obtaining the three-dimensional surface information of the object to be scanned.
It further comprises two ambient-light projectors, symmetrically arranged between the left and right cameras and the line laser projector respectively.
The distance from the left and right cameras to the scanned object is 20 cm to 30 cm, and the distance between the two optical centers is 25 cm; the optical axes of the left and right cameras form an angle of 20° with the laser stripe projection direction of the line laser projector.
By adopting the above technical scheme, the present invention has the following advantages. 1. When the image processing module extracts mark points with the elliptical-object image processing method, it first identifies connected regions and then searches the regions' edge points to extract elliptical boundary points; because the area criterion removes outer elliptical boundaries, only inner boundaries participate in subsequent computation, which effectively improves computational efficiency. 2. By adopting a line laser projector, the invention projects a single-line laser stripe onto the mark points of the scanned surface more precisely, and can thus obtain high-accuracy three-dimensional profile information. 3. Because the two cameras are symmetrically arranged on the left and right sides of the line laser projector, the left and right two-dimensional profile images of the object to be scanned captured by the two cameras are very close, which strongly guarantees accurate acquisition of the object's three-dimensional surface information. 4. The invention can work in natural light, overcoming the shortcoming of some prior-art products that require ambient lighting support. 5. Because a number of mark points on the surface of the object to be scanned are used for positioning, the speed of data registration is improved. 6. Because the distance from the cameras to the scanned object is kept at 20 cm to 30 cm during shooting and the distance between the optical centers of the two cameras is 25 cm, the precision of the acquired left and right two-dimensional profile information is maximized. 7. Because the optical axes of the two cameras form an angle of 20° with the laser stripe projection direction of the line laser projector, the two cameras have a common field of view of maximum extent. 8. Because the communication module adopts a 1394 board, the requirement of high-efficiency transmission is satisfied. The invention can be applied in all fields that need to obtain three-dimensional profile information of objects, for example whole-vehicle scanning, aircraft shape scanning, mold making, consumer-goods manufacturing, police criminal investigation, and digitization of cultural relics.
Description of drawings
Fig. 1 is a schematic plan view of the structure of the extractor of the present invention
Fig. 2 is a structural principle diagram of the extractor of the present invention
Fig. 3 is a structural block diagram of the image acquisition and processing board in the extractor of the present invention
Fig. 4 is a workflow diagram of the image acquisition and processing board in the extractor of the present invention
Fig. 5 is a schematic diagram of the edges of the left and right two-dimensional profile images extracted by the image processing module in the extractor of the present invention
Fig. 6 is a schematic diagram of the regions of the left and right two-dimensional profile images identified by the image processing module in the extractor of the present invention
Fig. 7 is a schematic diagram of the elliptical regions preliminarily determined by the image processing module in the extractor of the present invention
Fig. 8 is a schematic diagram of the ellipse-fitting result of the image processing module in the extractor of the present invention
Fig. 9 is a schematic diagram of typical mark points fitted by the image processing module in the extractor of the present invention
Fig. 10 is a schematic diagram of the epipolar matching of the stereo-vision matching module in the extractor of the present invention
Embodiment
The present invention is described in detail below in conjunction with the drawings and embodiments.
As shown in Fig. 1 and Fig. 2, the extractor of the present invention comprises a frame 1 on which a line laser projector 2 is arranged, with left and right cameras 3 symmetrically arranged on the frame 1 on both sides of the line laser projector 2. The left and right cameras 3 are electrically connected to an image acquisition and processing board 4, which is electrically connected to a communication module 5. The communication module 5 is connected to a computer 6 by a data line, and the two cameras 3, the image acquisition and processing board 4, and the communication module 5 are each connected to a power module 7. The line laser projector 2 emits laser light to scan the object to be scanned; the cameras 3 capture the light reflected from the object, i.e. the two-dimensional profile information of the object. The image acquisition and processing board 4, electrically connected to the two cameras 3, acquires the two-dimensional profile information of the object, processes it into three-dimensional surface information, and transmits the three-dimensional surface information to the computer 6 through the communication module 5 and the data line; the computer 6 synchronously displays the images from the cameras 3 and the scan image processed by the image acquisition and processing board 4. The power module 7 supplies power to the two cameras 3, the image acquisition and processing board 4, and the communication module 5. In the present embodiment, the communication module 5 adopts a 1394 board.
The line laser projector 2 of the extractor is used to project a single-line laser stripe onto the surface of the object to be scanned, because a single-line laser stripe can satisfy the requirements of high efficiency and high precision. The laser stripe projection direction of the line laser projector 2 points toward the object to be scanned.
The two cameras 3 of the extractor capture the two-dimensional profile information of the object to be scanned; their symmetric arrangement keeps the image positions of the laser stripe in the two cameras 3 close. The image taken by a single camera 3 can only provide two-dimensional profile information, so left and right cameras 3 are adopted to imitate human binocular positioning, capturing the left and right two-dimensional profile images of the scanned surface and thereby obtaining the three-dimensional surface information of the object. When the distance from the cameras 3 to the scanned object is 20 cm to 30 cm and the distance between the optical centers of the two cameras 3 is 25 cm, the precision of the acquired left and right two-dimensional profile information is highest. The optical axes of the two cameras 3 form an angle of 20° with the laser stripe projection direction of the line laser projector 2, which gives the two cameras 3 a common field of view of maximum extent. In the present embodiment, the cameras 3 may be CCD cameras or other imaging devices, as long as the resolution is not less than about 1.3 megapixels.
To improve the speed of data registration, the mark points used by the invention are circular mark points of different colors, including both non-coded and coded types. The mark points are stuck randomly on or around the object to be scanned, which guarantees the uniqueness of the mark-point topology; the topological structure in the present invention refers to the colors and types of a mark point's neighboring points and the distance information to its neighbors (described in detail later).
As shown in Fig. 3 and Fig. 4, the image acquisition and processing board 4 of the extractor comprises two concurrently working threads: one is the image acquisition module 41, used to acquire the left and right two-dimensional profile images captured by the two cameras 3; the other comprises an image processing module 42, a stereo-vision matching module 43, a laser stripe extraction module 44, and a fusion and registration module 45. The image acquisition module 41 acquires the left and right two-dimensional profile images of the scanned surface captured by the cameras 3. The image processing module 42 uses the elliptical-object image processing method to extract the mark points, which are elliptical, from the left and right images. The stereo-vision matching module 43 matches the elliptical mark points extracted by the image processing module 42 in the left and right images, reconstructs the three-dimensional coordinates and topological structure of the mark points, and finally uses the topology of the reconstructed three-dimensional mark points to compute the registration matrix T that transforms the scan data under the current pose into the whole scan data. The laser stripe extraction module 44 extracts the laser stripe points from the left and right images captured by the two cameras 3 and reconstructs their three-dimensional coordinates. The fusion and registration module 45 fuses the left and right three-dimensional coordinates of the laser stripe points and, using the registration matrix under the current pose, registers the fused stripe coordinates into the whole scan data of the scanned object, obtaining the three-dimensional point cloud of the object to be scanned, i.e. its three-dimensional surface information.
The steps by which the image processing module 42 of the present invention processes the left and right two-dimensional profile information using the image processing method for elliptical objects are as follows:
1. Edge extraction: among the various edge extraction operators, the Susan operator is fast and requires few parameter settings, and can therefore satisfy the real-time requirement. The present invention accordingly adopts the Susan operator to perform edge extraction on the left and right two-dimensional profile information; the extracted edges are shown in Figure 5.
2. Image region labeling: the different regions in the left and right two-dimensional profile information are identified by connected-region determination; the labeling result is shown in Figure 6.
3. Preliminarily determine the possible elliptical regions, as shown in Figure 7. The elliptical regions are determined as follows:
A. Traverse the pixels of the left and right two-dimensional profile information and, in this process, find the minimum rectangle enclosing each labeled region:
[StRow_i, StCol_i] → [EndRow_i, EndCol_i], and record the area Area_i of each rectangle, expressed in pixel count.
B. Remove any labeled region that satisfies one of the following conditions:
a) The aspect ratio of the rectangle is out of proportion:
(EndRow_i − StRow_i) / (EndCol_i − StCol_i) > Tol_Max or (EndRow_i − StRow_i) / (EndCol_i − StCol_i) < Tol_Min
In the above formula, the subscript i denotes the i-th region, St denotes start, End denotes end, Row denotes row, and Col denotes column. StRow_i is the first row of the i-th region, StCol_i is the first column of the i-th region, EndRow_i is the last row of the i-th region, and EndCol_i is the last column of the i-th region. Tol_Max and Tol_Min denote the maximum and minimum tolerances of the aspect ratio.
b) The area Area_i of the i-th region is too large or too small:
Area_i > Tol_Max or Area_i < Tol_Min
C. Using the minimum rectangle [StRow_i, StCol_i] → [EndRow_i, EndCol_i] that encloses each labeled region, find the edge Edge_i of each labeled region.
4. Fit the possible elliptical regions preliminarily determined in step 3 directly by the least-squares method, and further determine the ellipses, which serve as the marker points. The finally determined ellipse-fitting result is shown in Figure 8. The ellipse-fitting steps are as follows:
A. For the edge points Edge_i of each labeled region, fit an ellipse Ellip(C_x, C_y, A_l, A_s, θ), where C_x, C_y are the pixel coordinates of the ellipse center, A_l and A_s are the major and minor axis lengths of the ellipse, and θ is the orientation angle of the ellipse.
B. Using the fitted ellipse parameters C_x, C_y, A_l, A_s, θ, compute the ellipse area Area′_i and compare it with the area Area_i obtained in step 3A; if the difference between Area′_i and Area_i exceeds a tolerance, remove the ellipse.
C. Compute the mean distance from the edge points to the fitted ellipse to obtain the mean fitting error MeanErr; if MeanErr > Tol, remove the ellipse, where Tol denotes the tolerance of the mean error.
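Step 4 above can be illustrated with a simple algebraic least-squares conic fit. This is only a sketch, not the patent's own solver: it fits Ax² + Bxy + Cy² + Dx + Ey = 1 to the edge points and recovers the center from the vanishing gradient of the conic; a full implementation would also extract the axis lengths and orientation needed for the area and error checks of steps 4B and 4C. The function name is illustrative.

```python
import numpy as np

def fit_ellipse_center(edge_pts):
    """Least-squares fit of the conic A x^2 + B xy + C y^2 + D x + E y = 1
    to edge points, then recovery of the ellipse center (C_x, C_y)."""
    x, y = edge_pts[:, 0], edge_pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones(len(x)), rcond=None)[0]
    # Center: the gradient of the conic vanishes -> [[2A, B], [B, 2C]] c = [-D, -E]
    cx, cy = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return cx, cy

# Synthetic edge points on an ellipse centered at (3, 2) with semi-axes 4 and 2
t = np.linspace(0.0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([3 + 4 * np.cos(t), 2 + 2 * np.sin(t)])
print(fit_ellipse_center(pts))  # approximately (3.0, 2.0)
```

In practice the fitted conic would be normalized and checked for being a genuine ellipse (B² − 4AC < 0) before the center is trusted.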
It should be pointed out that, when extracting the elliptical boundary points, the present invention does not adopt the traditional boundary-tracing algorithm; instead, it first labels the connected regions and then seeks the edge points of each connected region to extract the elliptical boundary points. This has the following advantage: in actual scanning, to increase the contrast between the marker points and the surrounding environment and to reduce the influence of ambient light on ellipse extraction, circular marker points of the pattern shown in Figure 4 are often adopted. A traditional edge-tracing algorithm would yield two boundaries C1 and C2, whereas only the circle forming C1 is needed for localization; introducing the redundant boundary increases the amount of computation, with the fitting result shown at a in Figure 9. If the region-connectivity approach of the present invention is adopted instead, the white region A1 and the black region A2 shown at b in Figure 10 are formed. Since the pixel count (area) of region A2 is very small, the outer elliptical boundary is removed by the area criterion of step 3B, which guarantees that only the inner boundary participates in the computation in subsequent processing and improves computational efficiency. This is essential for real-time computation.
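The region screening of steps 3A and 3B (bounding box, aspect ratio, area) can be sketched as follows. Regions are assumed to be given as arrays of (row, col) pixel coordinates produced by the connected-region labeling; the tolerance values are illustrative, not taken from the patent.

```python
import numpy as np

def screen_regions(regions, ar_min=0.5, ar_max=2.0, area_min=20, area_max=5000):
    """Keep only regions whose enclosing rectangle has a plausible aspect
    ratio and whose bounding-box pixel area is neither too small nor too
    large (this is what discards the thin outer ring A2)."""
    kept = []
    for pts in regions:  # pts: (N, 2) array of (row, col) pixels
        st = pts.min(axis=0)            # [StRow_i, StCol_i]
        end = pts.max(axis=0)           # [EndRow_i, EndCol_i]
        h, w = (end - st + 1)           # bounding-box height and width
        aspect = h / w
        area = h * w                    # Area_i in pixel count
        if ar_min <= aspect <= ar_max and area_min <= area <= area_max:
            kept.append(pts)
    return kept

# A plausible round blob and a thin streak; only the blob should survive
blob = np.array([(r, c) for r in range(10) for c in range(10)])
streak = np.array([(0, c) for c in range(60)])
print(len(screen_regions([blob, streak])))  # 1
```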
The stereoscopic vision matching module 43 of the present invention matches the elliptical marker points extracted by the image processing module 42 between the left and right two-dimensional profile information, reconstructs the three-dimensional coordinates and topological structure of the marker points in the left and right two-dimensional profile information, and then uses the topology of the marker points to compute the registration matrix T. Corresponding-point matching is the core problem in stereoscopic vision. The most important principle for corresponding-point matching is the epipolar constraint: through the epipolar constraint, the points in one image that correspond to a point in the other image can be restricted to a straight line, which greatly reduces the search range. The epipolar constraint alone, however, still cannot guarantee the uniqueness of the match. To solve this problem, the present invention exploits the imaging characteristics of circular targets to achieve unique matching of corresponding points between the left and right two-dimensional profile information. As shown in Figure 10, the marker points are matched as follows:
1. Extract the marker points from the left and right two-dimensional profile information respectively, and fit the ellipse parameters.
2. Using the ellipse parameters computed in step 1, estimate the positions of the marker points under the left and right cameras 3.
3. Transform the elliptical marker points whose positions are based on the right view into the coordinate system of the left camera 3, through the transformation relation between the two cameras 3 calibrated in advance.
4. For each marker point in the left two-dimensional profile information, compute its epipolar line equation in the right two-dimensional profile information, search for elliptical marker points near that epipolar line in the right two-dimensional profile information, and establish initial matches. After epipolar-constraint matching, one marker point in the left two-dimensional profile information may still correspond to several right marker points. Denote by Si the set of corresponding candidates in the right view for any marker point Ei in the left two-dimensional profile information.
5. For any marker point Ei in the left two-dimensional profile information, traverse every point in the set Si, and take as the corresponding point the one whose spatial position differs least from that of the marker point Ei and by less than the error tolerance, which completes the matching of the marker points.
In the above embodiment, step 3 may instead transform the elliptical marker points whose positions are based on the left view into the coordinate system of the right camera 3, and the corresponding points in the left and right two-dimensional profile information may be matched in the same way.
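The epipolar search of step 4 can be sketched with the standard point-to-epipolar-line distance. The fundamental matrix F below corresponds to an idealized rectified stereo pair and is purely illustrative; in the patent's setup the relation between the two views comes from the prior calibration of the two cameras 3.

```python
import numpy as np

def epipolar_distance(F, p_left, p_right):
    """Distance from a right-image point to the epipolar line of a left-image
    point. Points are (u, v) pixel coordinates; F is the 3x3 fundamental matrix."""
    l = F @ np.array([p_left[0], p_left[1], 1.0])  # epipolar line a*u + b*v + c = 0
    u, v = p_right
    return abs(l[0] * u + l[1] * v + l[2]) / np.hypot(l[0], l[1])

def candidates_near_epiline(F, p_left, right_pts, tol=1.5):
    """Initial match set Si: right-image marker points close to the epipolar line."""
    return [p for p in right_pts if epipolar_distance(F, p_left, p) < tol]

# Fundamental matrix of an ideal rectified pair: epipolar lines are v' = v
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
right = [(40.0, 5.0), (70.0, 5.4), (33.0, 20.0)]
print(candidates_near_epiline(F, (10.0, 5.0), right))  # the two points with v near 5
```

Step 5 would then pick, among these candidates, the one whose reconstructed spatial position agrees best with Ei within the error tolerance.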
The laser stripe extraction module 44 of the present invention extracts the laser stripe points from the original left and right two-dimensional profile information captured by the cameras 3 and reconstructs the three-dimensional coordinates of the laser stripe points. The laser stripe points are extracted as follows:
1. Smooth the image to filter out noise.
2. Traverse each row of data in the image and take the maximum brightness of that row as the laser stripe candidate point; denote by Li the laser stripe candidate point of row i of the image.
3. Starting from the first row, compute the distance between the laser stripe candidate points of adjacent rows; if the distance is greater than a certain threshold, save the laser stripe segment found so far and start tracking a new laser stripe segment from the next row. Repeat this process until all rows of the image have been traversed.
4. Find the longest of all laser stripe segments and take this segment to be part of the laser stripe; denote this segment LP1.
5. For all laser stripe segments, compute the distance from their endpoints to the endpoints of LP1, and choose the segment with the minimum distance, provided it is less than a certain threshold, as the next laser stripe segment LP2.
6. For all remaining laser stripe segments, compute the distance from their endpoints to the endpoints of LP1 and LP2, and choose the segment with the minimum distance, provided it is less than a certain threshold, as the next laser stripe segment LP3.
7. Return to step 2 and find all the laser stripe points.
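Steps 2 to 4 above amount to a per-row brightness peak search followed by segment splitting. The sketch below uses a synthetic image and an illustrative jump threshold; a real implementation would operate on the smoothed camera images from step 1.

```python
import numpy as np

def stripe_segments(img, jump_thresh=5):
    """Per row, take the brightest column as the stripe candidate (step 2),
    then split the candidates into segments wherever the column jumps by
    more than jump_thresh between adjacent rows (step 3)."""
    cols = img.argmax(axis=1)               # candidate column per row
    segments, current = [], [(0, int(cols[0]))]
    for r in range(1, len(cols)):
        if abs(int(cols[r]) - int(cols[r - 1])) > jump_thresh:
            segments.append(current)        # save the segment found so far
            current = []
        current.append((r, int(cols[r])))
    segments.append(current)
    return segments

# Synthetic image: a stripe near column 5, with one outlier peak in row 4
img = np.zeros((10, 64))
for r in range(10):
    img[r, 5] = 1.0
img[4, 5] = 0.0
img[4, 50] = 1.0
segs = stripe_segments(img)
longest = max(segs, key=len)                # step 4: LP1 is the longest segment
print(len(segs), len(longest))              # 3 segments; the longest spans 5 rows
```

The outlier row produces its own one-point segment, which the endpoint-distance rule of steps 5 and 6 would never attach to LP1.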
The fusion and registration module 45 of the present invention uses the registration matrix T to register and fuse the laser stripes extracted by the laser stripe extraction module 44, so as to obtain the final scan data, i.e. the three-dimensional surface information, of the object to be scanned.
To increase the reliability of the data and expand the range of a single scan, the laser stripe points reconstructed under the coordinate systems of the left and right cameras 3 need to be fused between the left and right views. The fusion steps are as follows:
1. Transform the laser stripe points reconstructed under the coordinate system of the right camera 3 into the coordinate system of the left camera 3 through the transformation matrix calibrated in advance.
2. For each data point reconstructed under the left camera 3, find its closest point under the right camera 3. If the distance between the two points is less than the ultimate resolution of the scanner, compute the mean of their coordinates and press this mean into the final scan data as a laser stripe point fused between the left and right views; otherwise, add both points to the final scan data.
In the above embodiment, step 1 may instead transform the laser stripe points reconstructed under the coordinate system of the left camera 3 into the coordinate system of the right camera 3 through the transformation matrix calibrated in advance, and the laser stripe points reconstructed under the two camera coordinate systems may be fused between the left and right views in the same way.
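A sketch of the left-right fusion, assuming the right-camera points have already been transformed into the left camera's coordinate system (step 1), and assuming the reading of step 2 in which reconstructions closer than the scanner resolution are merged into their mean while all other points are kept as-is. The function name and test data are illustrative.

```python
import numpy as np

def fuse_left_right(left_pts, right_pts, resolution):
    """Merge two stripe reconstructions given in a common coordinate system.
    Point pairs closer than `resolution` are averaged; the rest are kept."""
    fused, consumed = [], set()
    for p in left_pts:
        d = np.linalg.norm(right_pts - p, axis=1)   # brute-force nearest neighbor
        j = int(d.argmin())
        if d[j] < resolution and j not in consumed:
            fused.append((p + right_pts[j]) / 2.0)  # same physical point: average
            consumed.add(j)
        else:
            fused.append(p)
    for j, q in enumerate(right_pts):               # keep unmatched right points
        if j not in consumed:
            fused.append(q)
    return np.array(fused)

left = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
right = np.array([[0.02, 0.0, 0.0], [5.0, 5.0, 5.0]])
out = fuse_left_right(left, right, resolution=0.1)
print(len(out))  # 3: one merged point plus the two unmatched points
```

For realistic stripe sizes, the brute-force nearest-neighbor search would be replaced by a spatial index (e.g. a k-d tree).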
The fused laser stripe points are registered into the data as follows:
1. Definitions: a container SignPtSet holds the reconstructed three-dimensional marker point coordinates; a container KNNId records the K-neighborhood sequence numbers of each point in SignPtSet; a container KNNInfo records the topology information between each point in SignPtSet and every point of its K-neighborhood. Denote by P the point set of marker points obtained from each single-frame scan, and by Coord_First the coordinate system of the left camera at the first scan; Coord_First is taken as the global coordinate system.
2. Press all marker points of the marker point set P reconstructed from the first frame image into the container SignPtSet.
3. Compute the topology information of each marker point in the marker point set P, comprising its color, type, and distances to its neighboring points, and record the K-neighborhood sequence numbers of each marker point at the same time.
4. Press the K-neighborhood sequence number information of each point into KNNId, and the topology information of each marker point into KNNInfo.
5. Press the fused laser stripe points of the current first frame into the whole scan data, which completes the processing of the two-dimensional profile information of the first frame into three-dimensional surface information.
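The K-neighborhood bookkeeping of steps 3 and 4 can be sketched as follows. The distance signature used here as the topology information is a simplification: the patent's topology also comprises the color and type of each marker point. The names are illustrative.

```python
import numpy as np

def knn_topology(points, k):
    """For each marker point, record the indices of its k nearest neighbors
    (the KNNId entry) and the sorted distances to them (a simplified KNNInfo
    entry, usable as a pose-invariant topology signature)."""
    pts = np.asarray(points, dtype=float)
    knn_id, knn_info = [], []
    for i, p in enumerate(pts):
        d = np.linalg.norm(pts - p, axis=1)
        order = np.argsort(d)[1:k + 1]          # skip the point itself
        knn_id.append(order.tolist())
        knn_info.append(np.sort(d[order]).tolist())
    return knn_id, knn_info

# Unit square: each corner's two nearest neighbors are at distance 1
ids, info = knn_topology([[0, 0], [1, 0], [0, 1], [1, 1]], k=2)
print(info[0])  # [1.0, 1.0]
```

Because the signature is built from inter-point distances, it is unchanged by the camera pose, which is what makes it usable for finding the same physical marker across frames.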
To guarantee the consistency of the data coordinates, subsequent scans need to transform the three-dimensional coordinates of the marker points and laser stripe points under the current camera coordinate system into Coord_First. The processing procedure is as follows:
1. Press all marker points of the marker point set P reconstructed from the current frame image after the first frame into the container SignPtSet.
2. Compute the topology information of each marker point in the marker point set P, and record the K-neighborhood sequence numbers of each marker point at the same time.
3. For each marker point in the marker point set P reconstructed from the current frame image, use the topology information and K-neighborhood sequence numbers of each marker point to find the point in SignPtSet that has the same topological structure, and form point pairs from these marker points.
4. If the number of point pairs is less than 2, return to step 3; if the number of point pairs is greater than 2, use the point pairs found to solve for the transformation matrix T from the current coordinate system to the global coordinate system Coord_First, i.e. the registration matrix T.
5. Transform each marker point in the marker point set P with the registration matrix T; denote the transformed point set by P1.
6. Traverse every point in the point set P1. If another marker point exists in SignPtSet within the R-ball neighborhood of the point, compute the mean of these marker points and replace the corresponding marker point in the original SignPtSet with the mean; otherwise, press the point into SignPtSet, compute the topology information of the point, record its K-neighborhood sequence numbers at the same time, and press the topology information and K-neighborhood sequence numbers of the point into KNNInfo and KNNId respectively.
7. Transform the laser stripe points that have completed left-right fusion with the registration matrix T, and register the fused laser stripe points of the current frame into the whole scan data, which completes the processing of the two-dimensional profile information of the current frame into three-dimensional surface information.
The above whole scan data is the final three-dimensional surface information of the object to be scanned.
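Step 4 above solves for a rigid transformation from matched point pairs. A common way to do this, shown here as an illustrative sketch rather than the patent's own solver, is the SVD-based (Kabsch) absolute-orientation method; note that a unique three-dimensional rotation requires at least three non-collinear pairs.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t, computed
    from matched 3-D point pairs via SVD (Kabsch method)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

# Matched pairs generated by a known pose: 90-degree rotation about z, then a shift
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```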
In the above embodiment, as shown in Figure 1, the extractor of the present invention further comprises two ambient light projectors 8, which are symmetrically arranged between the left and right cameras 3 and the line laser projector 2, so as to guarantee that the ambient light irradiation range they form is distributed uniformly within the field of view of the cameras. The brightness of the ambient light is adjusted by voltage regulation: when the surface color of the scanned object is brighter or darker, the brightness of the ambient light is reduced, guaranteeing that the laser stripe points can be extracted accurately on surfaces of different materials. In the present embodiment, the ambient light source 7 adopts two light emitting diodes with a power of 1 W.
The steps of the extraction method of the present invention are as follows:
1) A number of circular marker points of multiple colors and types are set at random on the scanned object; two cameras are symmetrically arranged on the left and right sides of a line laser projector; and an image acquisition and processing board with two concurrently working threads is provided.
2) One worker thread of the image acquisition and processing board acquires the left and right two-dimensional profile information captured by the left and right cameras.
3) The other thread of the image acquisition and processing board extracts the marker points from the left and right two-dimensional profile information using the image processing method for elliptical objects, performs stereoscopic vision matching, and reconstructs the three-dimensional coordinates and topological structure of the marker points in the left and right two-dimensional profile information; uses the topology of the reconstructed three-dimensional marker points to compute the registration matrix of the scan data under the current pose with respect to the whole scan data; then extracts the laser stripe points from the left and right two-dimensional profile information, reconstructs the three-dimensional coordinates of the laser stripe points, and fuses the two sets of three-dimensional laser stripe coordinates between the left and right views; finally, uses the registration matrix under the current pose to register the fused three-dimensional laser stripe coordinates into the whole scan data of the scanned object.
4) Steps 2) to 3) are repeated until the whole scan data of the object to be scanned is obtained.
In the above step 3), the image processing procedure for the object to be scanned has already been described clearly in the extractor of the present invention and is not repeated here.
Claims (8)
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN2010101738490A CN101853528B (en)  20100510  20100510  Handheld threedimensional surface information extraction method and extractor thereof 
Publications (2)
Publication Number  Publication Date 

CN101853528A CN101853528A (en)  20101006 
CN101853528B true CN101853528B (en)  20111207 
Family
ID=42804993
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

CN2010101738490A CN101853528B (en)  20100510  20100510  Handheld threedimensional surface information extraction method and extractor thereof 
Country Status (1)
Country  Link 

CN (1)  CN101853528B (en) 
Families Citing this family (20)
Publication number  Priority date  Publication date  Assignee  Title 

DE102012008905A1 (en) *  20120508  20131114  Airbus Operations Gmbh  Optical measuring device and displacement device and optical measuring method 
CN102779344B (en) *  20120702  20140827  济南大学  Registering block for space exchange and use method thereof 
CN102968820A (en) *  20121204  20130313  上海无线电设备研究所  Method for establishing external surface geometric model of space target based on highprecision scanning 
CN103033145B (en) *  20130108  20150902  天津锋时互动科技有限公司  For identifying the method and system of the shape of multiple object 
CN103256896B (en) *  20130419  20150624  大连理工大学  Position and posture measurement method of highspeed rolling body 
CN103632384B (en) *  20131025  20160601  大连理工大学  The rapid extracting method of builtup type mark point and mark dot center 
CN104517280B (en) *  20131114  20170412  广东朗呈医疗器械科技有限公司  Threedimensional imaging method 
TWI509566B (en) *  20140724  20151121  Etron Technology Inc  Attachable threedimensional scan module 
CN104268930B (en) *  20140910  20180501  芜湖林一电子科技有限公司  A kind of coordinate pair is than 3D scanning method 
CN107004278B (en) *  20141205  20201117  曼蒂斯影像有限公司  Tagging in 3D data capture 
CN104501740B (en) *  20141218  20170510  杭州鼎热科技有限公司  Handheld laser threedimension scanning method and handheld laser threedimension scanning equipment based on mark point trajectory tracking 
CN105091782A (en) *  20150529  20151125  南京邮电大学  Multilane laser light plane calibration method based on binocular vision 
CN204988183U (en) *  20150805  20160120  杭州思看科技有限公司  Handheld scanning apparatus skeleton texture 
CN105203046B (en) *  20150910  20180918  北京天远三维科技股份有限公司  Multithread array laser 3 D scanning system and multithread array laser 3D scanning method 
CN105300310A (en) *  20151109  20160203  杭州讯点商务服务有限公司  Handheld laser 3D scanner with no requirement for adhesion of target spots and use method thereof 
CN106500628B (en) *  20161019  20190219  杭州思看科技有限公司  A kind of 3D scanning method and scanner containing multiple and different long wavelength lasers 
CN108151671B (en) *  20161205  20191025  先临三维科技股份有限公司  A kind of 3 D digital imaging sensor, 3 D scanning system and its scan method 
CN107202554B (en) *  20170706  20180706  杭州思看科技有限公司  It is provided simultaneously with photogrammetric and 3D scanning function handheld large scale threedimensional measurement beam scanner system 
CN109029292A (en) *  20180821  20181218  孙傲  A kind of inner surface of container threedimensional appearance nondestructive testing device and detection method 
CN109341591A (en) *  20181112  20190215  杭州思看科技有限公司  A kind of edge detection method and system based on handheld threedimensional scanner 
Family Cites Families (4)
Publication number  Priority date  Publication date  Assignee  Title 

JP3614935B2 (en) *  19950620  20050126  オリンパス株式会社  3D image measuring device 
WO2004088245A1 (en) *  20030327  20041014  Zanen Pieter O  Method of solving the correspondence problem in convergent stereophotogrammetry 
CN102112845B (en) *  20080806  20130911  形创有限公司  System for adaptive threedimensional scanning of surface characteristics 
CN101504275A (en) *  20090311  20090812  华中科技大学  Handhold line laser threedimensional measuring system based on spacing wireless location 

Legal Events
Date  Code  Title  Description 

C06  Publication  
PB01  Publication  
C10  Entry into substantive examination  
SE01  Entry into force of request for substantive examination  
C14  Grant of patent or utility model  
GR01  Patent grant 