CN104374374A - Active omni-directional vision-based 3D (three-dimensional) environment duplication system and 3D omni-directional display drawing method


Info

Publication number
CN104374374A
CN104374374A (application CN201410632152.3A)
Authority
CN
China
Prior art keywords
viewpoint
laser
beta
point
panorama
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410632152.3A
Other languages
Chinese (zh)
Other versions
CN104374374B (en)
Inventor
汤一平
周伟敏
鲁少辉
韩国栋
吴挺
陈麒
韩旺明
胡克钢
王伟羊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201410632152.3A priority Critical patent/CN104374374B/en
Publication of CN104374374A publication Critical patent/CN104374374A/en
Application granted granted Critical
Publication of CN104374374B publication Critical patent/CN104374374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying

Abstract

The invention discloses an active omni-directional vision-based 3D (three-dimensional) environment duplication system. The system comprises an omnidirectional vision sensor, a moving-body laser light source, and a microprocessor that performs 3D panoramic reconstruction and 3D panoramic display drawing output from the omnidirectional images. The microprocessor comprises a calibration part, a 3D reconstruction part and a 3D panoramic display drawing part. The 3D panoramic display drawing part mainly comprises an object-centered panorama drawing module, a human-centered perspective display drawing module, a panoramic perspective cyclic display drawing module, a stereo perspective display drawing module, a panoramic stereogram cyclic display drawing module and a viewing-distance-change stereo perspective display drawing module. The invention further discloses an active omni-directional vision-based 3D panoramic display drawing method. The method unifies the geometric accuracy and realism of 3D panoramic model reconstruction, immersive panoramic 3D scene drawing and display, and automation of the reconstruction process.

Description

3D environment duplication system based on active panoramic vision, and 3D panorama display drawing method
Technical field
The present invention relates to applications of laser light sources, omnidirectional vision sensors and computer vision technology in stereo vision measurement and 3D drawing, and particularly to a 3D environment duplication system based on active panoramic vision and a 3D panorama display drawing method.
Background technology
Three-dimensional reconstruction comprises three-dimensional measurement and stereo reconstruction. It is an emerging application technology with development potential and practical value. The reconstruction of three-dimensional models mainly concerns three aspects: 1) geometric accuracy; 2) realism; 3) automation of the reconstruction process. The data required to reconstruct a three-dimensional model mainly comprise two kinds: depth image data from laser scanning, and image data collected by an image sensor.
Current three-dimensional laser scanners still leave much room for improvement. 1) As precision hardware, they require high-quality integration of CCD technology, laser technology, and precision mechanical sensing technology, which results in expensive manufacturing and maintenance costs. 2) Existing three-dimensional laser scanning is a surface-scan imaging technique: a single scan point cloud cannot capture the whole of a building, especially a building interior; point clouds obtained from different scanning stations (viewing angles) each use their own local coordinate system and therefore must be registered into one unified coordinate system. Registration involves repeated conversions between multiple coordinate systems, which introduces various errors and burdens computing speed and resources. 3) Point cloud acquisition picks up considerable interference, so the point cloud data require preprocessing and similar steps. 4) The point cloud software bundled with each manufacturer's scanner lacks a unified data standard, making data sharing difficult; this is particularly acute in digital city construction. 5) The geometry and the color information of spatial target points are obtained by two different devices, and the registration quality of geometry and color data between the devices directly affects the quality of texture and texture synthesis. 6) Three-dimensional modeling requires repeated manual intervention, so modeling efficiency is low; it demands operators with substantial professional knowledge and limits the degree of automation.
Chinese invention patent application No. 201210137201.7 discloses an omnidirectional three-dimensional modeling system based on an active panoramic vision sensor. The system mainly comprises an omnidirectional vision sensor, a moving-body laser light source and a microprocessor for 3D panoramic reconstruction of omnidirectional images. The moving-body laser light source completes one scan in the vertical direction to obtain slice point clouds at different heights; these data are indexed by the height value of the moving-body laser light source, accumulated in slice generation order, and finally assembled into a panoramic 3D model with geometric information and color information. However, this technical scheme has two main problems. First, the point cloud obtained by scanning with the moving-body laser light source cannot capture planes perpendicular to the light source, such as desktops, indoor floors and ceilings. Second, current three-dimensional reconstruction software cannot provide human-centered 3D drawing and display: existing 3D software mainly targets object-centered 3D display, whereas human-centered 3D display must let the observer feel immersed in the reconstructed 3D scene. Take the three-dimensional digitization of the Longmen Grottoes as an example: its final goal is precisely to let anyone visit the Longmen Grottoes remotely over the Internet and appreciate their artistic charm from many angles as a comprehensive, immersive experience. In ergonomics, visual displays and related components are generally designed on the basis of the static field of view; to give people an immersive visual sense when visiting a displayed scene over the Internet, a human-centered 3D volume rendering display must be realized by ergonomic methods.
Summary of the invention
To overcome the deficiencies of existing passive panoramic stereo vision measurement devices, namely high computing resource usage, poor real-time performance, weak practicality and low robustness, and the deficiency of active panoramic stereo vision measurement devices with full-color panoramic LED light sources, which are easily disturbed by ambient light, the invention provides an omnidirectional three-dimensional modeling system based on an active panoramic vision sensor. It directly obtains the geometric position information and color information of spatial three-dimensional points, reduces computing resource usage, completes measurement quickly, and offers good real-time performance, practicality and high robustness. It adopts a drawing display mode that fuses object-centered 3D macro vision with human-centered immersive 3D vision, increasing the user's sense of immersive experience.
To realize the above, several key problems must be solved: (1) realizing a moving-body laser light source that can cover the whole reconstructed scene; (2) realizing an active panoramic vision sensor that can quickly obtain the depth information of real objects; (3) a method for rapidly fusing the laser-scanned spatial data points with the corresponding pixels in the panoramic image; (4) a highly automated three-dimensional scene reconstruction method based on regular point cloud data; (5) a human-centered 3D panorama drawing display technique; (6) a drawing display technique that fuses object-centered 3D macro vision with human-centered 3D macro vision; (7) automation of the 3D reconstruction process, reducing manual intervention so that the whole scanning, processing, generation, drawing and display procedure proceeds without interruption.
The technical solution adopted by the present invention to solve its technical problems is:
A 3D environment duplication system based on active panoramic vision comprises an omnidirectional vision sensor, a moving-body laser light source and a microprocessor for performing 3D panoramic reconstruction and 3D panoramic drawing output on the omnidirectional images; the omnidirectional vision sensor is mounted on the guiding support bar of the moving-body laser light source.
The moving-body laser light source comprises a volumetric laser light source that moves up and down along the guiding support bar. This volumetric laser light source has a first omnidirectional laser perpendicular to the guiding support bar, a second omnidirectional laser tilted at θ_C to the axis of the guiding support bar, and a third omnidirectional laser tilted at θ_A to the axis of the guiding support bar.
The microprocessor is divided into a calibration part, a 3D reconstruction part and a 3D panorama display drawing part.
The calibration part determines the calibration parameters of the omnidirectional vision sensor and parses, from the panoramic image captured by the omnidirectional vision sensor, the laser projection information corresponding to the first, second and third omnidirectional lasers.
The 3D reconstruction part calculates the point cloud geometric information of the moving surface from the position of the moving-body laser light source and the pixel coordinates of the laser projection information, fuses the point cloud geometric information of the moving surface with the color information for each omnidirectional laser, and builds the panoramic 3D model.
The 3D panorama display drawing part comprises:
a human-centered perspective display drawing module, which draws a human-centered perspective view according to the panoramic 3D model and the observer's viewing angle and field of view within the reconstructed 3D environment.
The 3D panorama display drawing part further comprises:
a panoramic perspective cyclic display drawing module, which draws a human-centered panoramic perspective cyclic display according to the panoramic 3D model and the cyclic variation of the observer's viewing angle and field of view within the reconstructed 3D environment;
a stereo perspective display drawing module, which generates a right-viewpoint image, a left-viewpoint image and a left-right viewpoint image from the perspective view and draws the human-centered stereo perspective view;
a panoramic stereogram cyclic display drawing module, which, according to the panoramic 3D model and the cyclic variation of the observer's viewing angle and field of view within the reconstructed 3D environment, continuously changes the azimuth angle β, generates the left-right stereogram at each azimuth β, and draws the human-centered panoramic stereo perspective cyclic display;
a viewing-distance-change stereo perspective display drawing module, which, according to the panoramic 3D model and the change of the observer's viewing distance and field of view within the reconstructed 3D environment, draws the human-centered panoramic stereo perspective display while the viewing distance is continuously changed.
A 3D panorama display drawing method based on the above 3D environment duplication system comprises the steps of:
1) using the omnidirectional vision sensor to capture the panoramic image formed by the projection of the moving-body laser light source;
2) determining, from the panoramic image, the calibration parameters of the omnidirectional vision sensor, and parsing the laser projection information corresponding to the first, second and third omnidirectional lasers;
3) calculating the point cloud geometric information of the moving surface from the position of the moving-body laser light source and the pixel coordinates of the laser projection information, fusing it with the color information for each omnidirectional laser, and building the panoramic 3D model;
4) drawing a human-centered perspective view according to the panoramic 3D model and the observer's viewing angle and field of view within the reconstructed 3D environment; the concrete steps are as follows:
STEP 1) taking the single viewpoint of the omnidirectional vision sensor as the coordinate origin O_m(0,0,0), establish a three-dimensional cylindrical coordinate system;
STEP 2) determine the size of the see-through window according to the visual range of the human eye, with azimuth angle β and height h as the window variables, and obtain the point cloud data (h, β, r) corresponding to the see-through window;
STEP 3) generate a data matrix from the step length of azimuth β, the range of height h, and the point cloud data (h, β, r);
STEP 4) connect all three-dimensional coordinates in the data matrix into triangular facets; the color of each connecting line is the average color of the two points it connects;
STEP 5) output and display all connected triangular facets, completing the human-centered perspective view drawing.
The 3D panorama display drawing method also comprises drawing a human-centered panoramic perspective cyclic display: by continuously changing the azimuth angle β and generating the display data matrix at each β, the panoramic perspective cyclic display drawing is completed.
The 3D panorama display drawing method also comprises drawing a human-centered stereo perspective view; the concrete rendering algorithm is as follows:
5.1: determine the initial azimuth angle β_1, read the data between the minimum distance h_min and the maximum distance h_max, and determine the viewpoint: median eye = 0, left viewpoint = 1, right viewpoint = 2;
5.2: h = h_min;
5.3: read the data of distance value h from the initial azimuth β_1 to β_1+300; if β_1+300 ≥ 1000, then β_1+300 = β_1+300−1000. According to the determined viewpoint: for the median eye (0), compute the cylindrical coordinates of the spatial object point for both eyes; for the left viewpoint (1), compute the cylindrical coordinates for the right eye and use the original coordinate data as the left eye; for the right viewpoint (2), compute the cylindrical coordinates for the left eye and use the original coordinate data as the right eye. Append these values as a new data row of the left-viewpoint matrix and the right-viewpoint matrix respectively; h = h + Δh;
5.4: judge whether h ≥ h_max; if not, go to 5.3, until the display matrices of the left and right viewpoints are generated;
5.5: connect all three-dimensional coordinates in each display matrix into triangular facets: first connect the data of each row with straight lines, then connect the data of each column with straight lines, and finally connect the point cloud data of (i, j) and (i+1, j+1) in the matrix with a straight line; the color of each connecting line is the average color of the two points it connects;
5.6: perform binocular stereo display on the stereogram generated by the above process, completing the human-centered stereo perspective view drawing.
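As a concrete illustration of the viewpoint recomputation in step 5.3, the following C++ sketch re-expresses a point's cylindrical coordinates for an eye displaced from the central viewpoint. The patent does not give the eye-offset formula, so the interpupillary distance IPD, the displacement along the x axis, and all function names here are assumptions made purely for illustration.

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

struct CylPoint { double h, beta_deg, r; };  // height, azimuth (deg), radius

// Re-express a point, seen from the central viewpoint, in the cylindrical
// coordinates of an eye displaced by eyeOffsetX along the x axis
// (x = r*sin(beta), y = r*cos(beta), matching formulas (10)-(12)).
CylPoint shiftToEye(const CylPoint& p, double eyeOffsetX) {
    double beta = p.beta_deg * kPi / 180.0;
    double x = p.r * std::sin(beta) - eyeOffsetX;  // origin moved to the eye
    double y = p.r * std::cos(beta);
    return { p.h, std::atan2(x, y) * 180.0 / kPi, std::hypot(x, y) };
}

int main() {
    const double IPD = 65.0;               // assumed eye separation, in mm
    CylPoint p{1500.0, 36.0, 2000.0};
    // Left viewpoint (1): the original data serve as the left eye, and the
    // right-eye coordinates are recomputed; right viewpoint (2) mirrors this.
    CylPoint rightEye = shiftToEye(p, +IPD);
    (void)rightEye;
    return 0;
}
```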
The 3D panorama display drawing method also comprises drawing a human-centered panoramic stereo perspective cyclic display: by continuously changing the azimuth angle β, saving the display data matrix of the current β, and connecting the triangular facets of the left-viewpoint matrix and the right-viewpoint matrix respectively, the generated stereogram is output as a stereo display.
The 3D panorama display drawing method also comprises drawing a human-centered viewing-distance-change stereo perspective view: as the observer continuously changes his or her spatial position within the 3D environment, the field of view is determined from an ergonomic standpoint and the panoramic stereogram is drawn.
The first, second and third omnidirectional lasers are a blue line laser, a red line laser and a green line laser respectively; the blue and green line lasers are arranged above and below the red line laser, and the axes of all the line lasers intersect at one point on the axis of the guiding support bar.
The 3D panorama display drawing method also comprises an object-centered panorama drawing display: from the point cloud data of the 3D panorama model with the single viewpoint O_m of the omnidirectional vision sensor as the coordinate origin, and the scan slices produced while the moving-body laser light source scans the panoramic scene, the point cloud data produced by the blue, red and green omnidirectional laser projections are extracted and a point cloud data matrix is generated, realizing the object-centered panorama drawing display.
The beneficial effects of the present invention are mainly:
(1) a brand-new stereo vision acquisition method is provided, in which the combined characteristics of omnidirectional laser scanning and omnidirectional vision give the reconstructed three-dimensional model both high precision and good texture information;
(2) computing resource usage is effectively reduced; the method has good real-time performance, practicality, high robustness and a high degree of automation, and the whole 3D reconstruction requires no manual intervention;
(3) omnidirectional laser detection guarantees geometric accuracy, while high-resolution panoramic image acquisition gives each pixel on the panoramic image both geometric and color information, guaranteeing the realism of the 3D reconstruction; the whole process scans, parses and computes automatically, the ill-posed computation problems of three-dimensional reconstruction do not arise, and the reconstruction process is automated, achieving the unification of geometric accuracy, realism and process automation in 3D panorama model reconstruction;
(4) a human-centered 3D panorama drawing display technique is realized, fusing object-centered 3D vision with human-centered 3D vision, so that the drawn panoramic 3D scene gives a stronger immersive visual sense.
Brief description of the drawings
Fig. 1 is a structural drawing of the omnidirectional vision sensor;
Fig. 2 is the imaging model of the single-viewpoint catadioptric omnidirectional vision sensor: Fig. 2(a) the perspective imaging process, Fig. 2(b) the sensor plane, Fig. 2(c) the image plane;
Fig. 3 is a structural diagram of the moving-body laser light source;
Fig. 4 is a calibration explanatory diagram of the active panoramic vision sensor;
Fig. 5 is the hardware structure diagram of the omnidirectional three-dimensional modeling system based on the active panoramic vision sensor;
Fig. 6 is a structural drawing of the omnidirectional laser generator components: Fig. 6(a) the front elevation, Fig. 6(b) the top view;
Fig. 7 is an imaging schematic diagram of the omnidirectional vision sensor;
Fig. 8 is a perspective imaging schematic diagram;
Fig. 9 is a schematic diagram of binocular stereo imaging with changing viewing distance: Fig. 9(a) before the change of viewing distance, Fig. 9(b) after the change of viewing distance;
Fig. 10 is the software architecture diagram of the omnidirectional three-dimensional modeling and 3D panorama drawing based on the active panoramic vision sensor;
Fig. 11 is an explanatory diagram of the point cloud spatial geometric information calculation in the omnidirectional three-dimensional modeling based on the active panoramic vision sensor;
Fig. 12 is a schematic diagram of the slice panoramic images obtained when the omnidirectional three-dimensional modeling based on the active panoramic vision sensor acquires three-dimensional point cloud data;
Fig. 13 is a process explanatory diagram of parsing the point cloud spatial geometric information calculation on the panoramic image;
Fig. 14 is an explanatory diagram of the horizontal field of view of the human eye;
Fig. 15 is an explanatory diagram of the vertical field of view of the human eye;
Fig. 16 is an explanatory diagram of parsing the green, red and blue laser projection lines respectively on a panoramic slice image.
Embodiment
Embodiment 1
With reference to Figs. 1 to 16, a 3D environment duplication system based on active panoramic vision and its 3D panorama display drawing method comprise an omnidirectional vision sensor, a moving-body laser light source and a microprocessor for performing 3D panoramic reconstruction and 3D panoramic drawing output on the omnidirectional images.
The center of the omnidirectional vision sensor and the center of the moving-body laser light source are arranged on the same axis. As shown in Fig. 1, the omnidirectional vision sensor comprises a hyperboloid mirror 2, an upper cover 1, a transparent semicircular outer cover 3, a lower fixed seat 4, an imaging unit holder 5, an imaging unit 6, a connection unit 7 and a cover 8. The hyperboloid mirror 2 is fixed on the upper cover 1; the lower fixed seat 4 and the transparent semicircular outer cover 3 are joined into one body by the connection unit 7; the transparent semicircular outer cover 3 is fixed by screws to the upper cover 1 and the cover 8; the imaging unit 6 is screwed onto the imaging unit holder 5, which is screwed onto the lower fixed seat 4; and the output of the imaging unit 6 in the omnidirectional vision sensor is connected with the microprocessor.
As shown in Fig. 3, the moving-body laser light source, which generates the three-dimensional structured projection, comprises: a guiding support bar 2-1, a laser generation combination unit 2-2, a chassis 2-3, a linear motor moving rod 2-4, a linear motor assembly 2-5, blue line laser generating units 2-6, red line laser generating units 2-7 and green line laser generating units 2-8.
The laser generation combination unit 2-2 has 12 holes in total, in groups of 4: a blue line laser unit mounting hole group, a red line laser unit mounting hole group and a green line laser unit mounting hole group. The axes of the 4 red mounting holes are each orthogonal to the cylindrical axis of the laser generation combination unit 2-2; the axes of the 4 blue mounting holes are each inclined at θ_C to that axis; and the axes of the 4 green mounting holes are each inclined at θ_A to that axis. The 12 holes are evenly distributed around the circumference of the cylinder of the unit 2-2 at 90° intervals, which guarantees that the axes of the 12 holes intersect at the same point on the cylindrical axis of the laser generation combination unit 2-2, as shown in Fig. 6.
The blue line laser generating units 2-6 are fixed in the holes of the blue mounting hole group of the laser generation combination unit 2-2, as shown in Fig. 6; the combined blue line lasers thus form an omnidirectional blue laser source. The red line laser generating units 2-7 are fixed in the holes of the red mounting hole group, forming an omnidirectional red laser source; the green line laser generating units 2-8 are fixed in the holes of the green mounting hole group, forming an omnidirectional green laser source. Once corresponding line laser generating units are fixed in all 12 holes of the laser generation combination unit 2-2, the volumetric laser light source is constituted. This volumetric laser light source can project blue, red and green omnidirectional lasers in turn: the red omnidirectional laser is perpendicular to the guiding support bar, the blue omnidirectional laser is inclined at θ_C to the axis of the guiding support bar, and the green omnidirectional laser is inclined at θ_A to the axis of the guiding support bar.
In the assembly of the moving-body laser light source, the volumetric laser light source is nested on the guiding support bar 2-1 to form a sliding pair; the guiding support bar 2-1 is fixed vertically on the chassis 2-3; the linear motor assembly 2-5 is fixed on the chassis 2-3; and the upper end of the linear motor moving rod 2-4 is fixedly connected with the volumetric laser light source. Controlling the linear motor assembly 2-5 moves the rod 2-4 up and down, driving the volumetric laser light source up and down under the guidance of the guiding support bar 2-1 and forming a moving volumetric laser light source that scans the whole panorama to be reconstructed. The linear motor assembly 2-5 is a miniature AC linear reciprocating reducer with a reciprocating range of 700 mm, model 4IK25GNCMZ15S500, a linear reciprocating speed of 15 mm/s and a maximum thrust of 625 N.
The omnidirectional vision sensor is mounted by a connecting plate on the guiding support bar 2-1 of the moving-body laser light source, forming an active omnidirectional vision sensor, as shown in Fig. 4; the omnidirectional vision sensor is connected with the microprocessor through a USB interface.
The application software in the microprocessor consists of three parts: calibration, 3D reconstruction and 3D panorama display drawing. The calibration part mainly comprises: a video image reading module, an omnidirectional vision sensor calibration module, an omnidirectional laser information parsing module and a combined calibration module. The 3D reconstruction part mainly comprises: the video image reading module, a linear motor position estimation module for the moving-body laser light source, the omnidirectional laser information parsing module, a computation module for the point cloud geometric information of the moving surface, a fusion module for point cloud geometric and color information, a panoramic 3D model construction module built from the position information of the moving surface, a 3D panorama model generation module and a storage unit. The 3D panorama display drawing part mainly comprises: the object-centered panorama drawing module, the human-centered perspective display drawing module, the panoramic perspective cyclic display drawing module, the stereo perspective display drawing module, the panoramic stereogram cyclic display drawing module and the viewing-distance-change stereo perspective display drawing module.
The video image reading module reads the video image of the omnidirectional vision sensor and saves it in the storage unit; its output is connected with the omnidirectional vision sensor calibration module and the omnidirectional laser information parsing module.
The omnidirectional vision sensor calibration module determines the parameters of the mapping relation between three-dimensional space points and the two-dimensional image points on the camera imaging plane, as shown in Fig. 2. The concrete calibration process is to move a calibration board around the omnidirectional vision sensor for one revolution, capture several groups of panoramic images, establish the equations between spatial points and imaging-plane pixels, and obtain the optimal solution with an optimization algorithm. The calculation results, the calibration parameters of the omnidirectional vision sensor used in the present invention, are shown in Table 1.
Table 1. Calibration results of the ODVS
After the intrinsic and extrinsic parameters of the omnidirectional vision sensor have been calibrated, the correspondence between a pixel on the imaging plane and its incident ray, i.e. the incident angle, can be established, as expressed by formula (1):
$$\tan\alpha = \frac{\|u''\|}{f(\|u''\|)} = \frac{\|u''\|}{a_0 + a_1\|u''\| + a_2\|u''\|^2 + \cdots + a_N\|u''\|^N} \qquad (1)$$

where α is the incident angle of a point cloud, ||u''|| is the distance from a point on the imaging plane to the center of that plane, and a_0, a_1, a_2, ..., a_N are the calibrated intrinsic and extrinsic parameters of the omnidirectional vision sensor. Formula (1) establishes the mapping table between any imaging-plane pixel and its incident angle.
After calibration of the omnidirectional vision sensor adopted in the present invention, the relation between an imaging-plane point ||u''|| and the incident angle α of a point cloud can be expressed by the equation:

$$\tan\alpha = \frac{\|u''\|}{-75.12 + 0.0027\,\|u''\|^2} \qquad (2)$$
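To make the mapping of formulas (1) and (2) concrete, here is a minimal C++ sketch that computes the incident angle from the pixel radius using the calibrated parameters a_0 = −75.12 and a_2 = 0.0027 given above; the table-building helper and its name are illustrative assumptions, not part of the patent.

```cpp
#include <cmath>
#include <vector>

// tan(alpha) = u / (a0 + a2 * u^2), where u = ||u''|| is the distance of an
// imaging-plane pixel from the image centre, per formula (2).
double incidentAngle(double u, double a0 = -75.12, double a2 = 0.0027) {
    return std::atan(u / (a0 + a2 * u * u));
}

// Precompute alpha for every integer pixel radius up to maxRadius, so that a
// laser pixel can be mapped to its incident angle by table lookup.
std::vector<double> buildAngleTable(int maxRadius) {
    std::vector<double> table(maxRadius + 1);
    for (int u = 0; u <= maxRadius; ++u) table[u] = incidentAngle(u);
    return table;
}
```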
In the present invention, the two limit positions of the moving-body laser light source are determined by the travel range of its linear motor assembly and the projection angles of the volumetric laser light source. The upper limit position is set with reference to the eye height of a standing adult looking straight ahead, with an initial value of 1500 mm; the lower limit position is set with reference to the eye height of a squatting adult looking straight ahead, with an initial value of 800 mm. The travel range of the linear motor assembly is 700 mm; at the upper limit position there is additionally an upward viewing angle of 30°, and at the lower limit position a downward viewing angle of 30°. The omnidirectional vision sensor adopted in the present invention has an upward viewing angle of 28° and a downward viewing angle of 65°, covering nearly 93° of vertical field of view and 360° of horizontal field of view. According to the design of the present invention, the distance between the moving-body laser light source and the single viewpoint O_m of the omnidirectional vision sensor is calculated by formula (3):

$$h(z) = h_{O_m} - h_{uplimit} + h_{LaserMD} \qquad (3)$$

where h_{O_m} is the height of the single viewpoint O_m of the omnidirectional vision sensor above the ground, h_{uplimit} is the upper limit position of the moving-body laser light source, h_{LaserMD} is the displacement of the moving-body laser light source, and h(z) is the distance between the moving-body laser light source and the single viewpoint O_m, as shown in Fig. 4.
Here the image acquisition rate of the omnidirectional vision sensor is specified as 15 frames/s, and the linear reciprocating speed of the moving-body laser light source in the vertical direction is set to 15 mm/s, so the light source moves 1 mm vertically between two frames. Since the spacing of the two limit positions is 700 mm, one scan in the vertical direction takes about 47 s and generates 700 panoramic slice images. A vertical scan therefore processes 700 frames, each frame containing the three projection lines of the blue, red and green lasers; the 1st and the 700th frames are the scanned panoramic slice images at the two limit positions.
The omnidirectional laser information parsing module parses the laser projection information on the panoramic image. The method of resolving the blue, red and green laser projection points on the panorama relies on the fact that the brightness of those pixels is greater than the mean brightness of the imaging plane. First the RGB color space of the panorama is converted into HSI color space; then 1.2 times the mean brightness of the imaging plane is used as the threshold for extracting the blue, green and red laser projection points. After extraction the three colors must be distinguished, which the present invention does by the hue value H in HSI space: a hue H between (225, 255) is judged a blue laser projection point, between (0, 30) a red laser projection point, and between (105, 135) a green laser projection point; the remaining pixels are judged interference. To obtain the accurate position of a laser projection line, the present invention uses a Gaussian approximation to extract the center of the laser projection line; the concrete algorithm is:
Step 1: set the initial azimuth angle β = 0;
Step 2: retrieve the green, red and blue laser projection points along azimuth β, starting from the center point of the panoramic image. For the several consecutive pixels of a given laser color found on azimuth β, take the three neighboring pixels whose I component in HSI space (the brightness value) is closest to the maximum, and estimate the center of the laser projection line by Gaussian approximation; the calculation is given by formula (4):

$$d = \frac{\ln(f(i-1)) - \ln(f(i+1))}{2\,[\ln(f(i-1)) - 2\ln(f(i)) + \ln(f(i+1))]} \qquad (4)$$

where f(i−1), f(i) and f(i+1) are the brightness values of the three neighboring pixels closest to the highest brightness, d is the correction value, and i is the index of the i-th pixel counted from the image center. The estimated center of the laser projection line is therefore (i + d), which corresponds to ||u''|| in formula (1). Since the order in which the colored laser projections appear on the panoramic image is green, red, blue, this order is used to exclude other projection noise from the laser information parsing;
Step 3: change the azimuth angle and continue retrieving laser projection points, i.e. β = β + Δβ, with Δβ = 0.36°;
Step 4: judge whether β = 360°; if so, retrieval ends; otherwise go to Step 2.
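The following is a direct C++ transcription of formula (4): given the brightness values of the three neighbouring pixels around the maximum along one azimuth ray, it returns the sub-pixel centre i + d of the laser projection line. The function name is an illustrative assumption.

```cpp
#include <cmath>

double laserCentre(int i, double fPrev, double fPeak, double fNext) {
    // fPrev = f(i-1), fPeak = f(i), fNext = f(i+1); all must be positive.
    double d = (std::log(fPrev) - std::log(fNext)) /
               (2.0 * (std::log(fPrev) - 2.0 * std::log(fPeak) + std::log(fNext)));
    return i + d;   // corresponds to ||u''|| in formula (1)
}
```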
Another omnidirectional laser information parsing method is a laser projection point extraction algorithm based on inter-frame difference. This algorithm uses the panoramic slice images at two adjacent heights as the operands of a difference operation: as the moving laser plane scans up or down, clearly visible differences appear between frames in the vertical direction, i.e. between different slices. The two frames are subtracted, the absolute value of the luminance difference of the two frames is obtained, and comparison against a threshold determines the laser projection points extracted from the panoramic slice image; then the green, red and blue laser projection lines at a given azimuth β are distinguished by the order in which the colored laser projections appear on the panoramic image.
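A minimal sketch of this inter-frame-difference extraction follows. OpenCV is an assumption here (the patent names no image library): two panoramic slice images at adjacent heights are converted to grey, subtracted, and thresholded, leaving the pixels swept by the moving laser planes; hue classification then separates green, red and blue as described above.

```cpp
#include <opencv2/opencv.hpp>

cv::Mat laserMask(const cv::Mat& slicePrev, const cv::Mat& sliceCur,
                  double thresh) {
    cv::Mat greyPrev, greyCur, diff, mask;
    cv::cvtColor(slicePrev, greyPrev, cv::COLOR_BGR2GRAY);
    cv::cvtColor(sliceCur,  greyCur,  cv::COLOR_BGR2GRAY);
    cv::absdiff(greyCur, greyPrev, diff);      // |luminance difference|
    cv::threshold(diff, mask, thresh, 255.0, cv::THRESH_BINARY);
    return mask;                               // laser projection candidates
}
```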
The combined calibration module calibrates the active omnidirectional vision sensor. Since the omnidirectional vision sensor and the moving-body laser light source inevitably carry various assembly errors from the assembly process, combined calibration minimizes these errors. The concrete practice is: first, place the active omnidirectional vision sensor inside a hollow cylinder of 1000 mm diameter, with the axis of the sensor coinciding with the axis of the cylinder, as shown in Fig. 4. Then switch the moving-body laser light source ON to emit its lasers, move it to the upper limit position h_uplimit and capture a panoramic image; check whether the circle centers of the blue, red and green light rings on the panoramic image coincide with the center of the panoramic image, and check the circularity of the blue, red and green rings; if the centers do not coincide or the circularity does not meet requirements, the connection between the omnidirectional vision sensor and the moving-body laser light source must be adjusted. Further, move the moving-body laser light source to the lower limit position h_downlimit, capture a panoramic image, and perform the same center and circularity checks, adjusting the connection if necessary. Finally, store the upper limit position h_uplimit, the lower limit position h_downlimit, the maximum moving distance h_LaserMD of the moving-body laser light source, and the calibration parameters of the omnidirectional vision sensor in the combined calibration database, to be called during three-dimensional reconstruction.
The omnidirectional vision sensor in the present invention adopts a high-definition imaging chip with 4096 × 2160 resolution. The moving step of the moving-body laser light source is 1 mm over a vertical scanning range of 700 mm, so the slice panoramic images produced by the moving-body laser light source have a resolution of 700 slices. One complete vertical scan thus samples the geometric and color information of each pixel on the panoramic image and carries it through fusion, three-dimensional reconstruction and drawing display output, as shown in Fig. 10.
The processing flow of the three-dimensional reconstruction part is:
StepA: read the panoramic video image through the video image reading module;
StepB: estimate the position of the linear motor of the moving-body laser light source from its moving speed and the times of arrival at the two limit points;
StepC: parse the omnidirectional laser information on the panoramic image and calculate the moving-surface point cloud geometric information;
StepD: read from memory the panoramic video image captured without laser projection and, according to the results of StepC, fuse the moving-surface geometric information with the color information;
StepE: progressively build the panoramic 3D model;
StepF: judge whether a limit position has been reached; if so go to StepG, otherwise go to StepA;
StepG: switch the moving-body laser light source OFF, read the panoramic video image without laser projection and keep it in the memory unit, export the 3D panorama model and save it to the storage unit, switch the moving-body laser light source ON, and go to StepA.
The processing flow of the three-dimensional reconstruction is elaborated below. In StepA, a dedicated thread reads the panoramic video image at a rate of 15 frames/s, and the captured panoramic image is kept in a memory unit for subsequent processing to call.
StepB mainly estimates the current position of the moving-body laser light source. At the start of reconstruction the initial position of the moving-body laser light source is fixed at the upper limit position h_uplimit and the initial step control value is z_move(j) = 0; the moving step of the moving-body laser light source between two adjacent frames is Δz, giving the relation

$$z_{move}(j+1) = z_{move}(j) + \Delta z \qquad (5)$$

where z_move(j) is the step control value at frame j, z_move(j+1) is the step control value at frame j+1, and Δz is the moving step of the moving-body laser light source. It is specified that Δz = 1 mm when moving downward from the upper limit position h_uplimit, and Δz = −1 mm when moving upward from the lower limit position h_downlimit; the program decides by the relation

$$\Delta z = \begin{cases} 1 & \text{if } z_{move}(j) = 0 \\ -1 & \text{if } z_{move}(j) = h_{LaserMD} \\ \Delta z & \text{otherwise} \end{cases} \qquad (6)$$

Substituting the result z_move(j+1) of formula (5) for h_LaserMD in formula (3) gives the distance h(z) between the moving-body laser light source and the single viewpoint O_m of the omnidirectional vision sensor.
In StepC, the panoramic image in the memory unit is read, the omnidirectional laser information parsing module parses the omnidirectional laser information from it, and the moving-surface point cloud geometric information is then calculated.
The spatial position information of a point cloud is represented in a Gaussian coordinate system: the spatial coordinates of each point cloud relative to the single viewpoint O_m of the omnidirectional vision sensor, taken as the origin, are determined by three values (R, α, β), where R is the distance from the point to O_m, α is its incident angle at O_m, and β is its azimuth angle at O_m. For the point clouds of the three laser projections in Fig. 13, the point cloud data in Gaussian coordinates are computed by formulas (7), (8) and (9):

$$R_a = \frac{h(z)\cos\theta_G}{\sin(\alpha_a - \theta_G)}, \qquad \alpha_a = \arctan\!\left(\frac{\|u''\|(\beta)_{green}}{f(\|u''\|(\beta)_{green})}\right) = \arctan\!\left(\frac{\|u''\|(\beta)_{green}}{a_0 + a_2\,\|u''\|(\beta)_{green}^2}\right) \qquad (7)$$

$$R_b = \frac{h(z)}{\sin\alpha_b}, \qquad \alpha_b = \arctan\!\left(\frac{\|u''\|(\beta)_{red}}{a_0 + a_2\,\|u''\|(\beta)_{red}^2}\right) \qquad (8)$$

$$R_c = \frac{h(z)\cos\theta_B}{\sin(\alpha_c + \theta_B)}, \qquad \alpha_c = \arctan\!\left(\frac{\|u''\|(\beta)_{blue}}{a_0 + a_2\,\|u''\|(\beta)_{blue}^2}\right) \qquad (9)$$

where (β)_green, (β)_red and (β)_blue are the azimuth angles of the green, red and blue laser point cloud projections at the single viewpoint O_m; θ_B is the angle between the blue incident line and the Z axis; θ_G is the angle between the green incident line and the Z axis; h(z) is the distance from the moving-body laser light source to the single viewpoint O_m; α_a, α_b and α_c are the incident angles of the green, red and blue laser point cloud projections at O_m; R_a, R_b and R_c are the distances of the green, red and blue laser point cloud projections from O_m; and ||u''||(β)_green, ||u''||(β)_red and ||u''||(β)_blue are the distances from the imaging-plane points of the green, red and blue laser projections to the center of the panoramic imaging plane.
If the point clouds are to be expressed in Cartesian coordinates (x_a, y_a, z_a), (x_b, y_b, z_b) and (x_c, y_c, z_c), then with reference to Fig. 11 they are computed by formulas (10), (11) and (12):

$$x_a = R_a\cos\alpha_a\sin(\beta)_{green}, \quad y_a = R_a\cos\alpha_a\cos(\beta)_{green}, \quad z_a = R_a\sin\alpha_a \qquad (10)$$

$$x_b = R_b\cos\alpha_b\sin(\beta)_{red}, \quad y_b = R_b\cos\alpha_b\cos(\beta)_{red}, \quad z_b = R_b\sin\alpha_b \qquad (11)$$

$$x_c = R_c\cos\alpha_c\sin(\beta)_{blue}, \quad y_c = R_c\cos\alpha_c\cos(\beta)_{blue}, \quad z_c = R_c\sin\alpha_c \qquad (12)$$

with R_a, R_b, R_c, α_a, α_b, α_c, (β)_green, (β)_red and (β)_blue as defined for formulas (7) to (9).
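As a worked sketch of formulas (8) and (11) for the red (horizontal) laser plane, the C++ below recovers the incident angle and range from the calibrated pixel radius u and converts them to Cartesian coordinates about the single viewpoint O_m. The green and blue planes differ only in the range term, R = h(z)·cos(θ)/sin(α ∓ θ), per formulas (7) and (9); the function name is an illustrative assumption.

```cpp
#include <cmath>

struct Point3 { double x, y, z; };

// u: ||u''|| at azimuth beta; hz: the distance h(z) of formula (3);
// a0, a2: the calibration parameters of formula (2).
Point3 redLaserPoint(double u, double betaRad, double hz,
                     double a0 = -75.12, double a2 = 0.0027) {
    double alpha = std::atan(u / (a0 + a2 * u * u));   // incident angle, (2)
    double R = hz / std::sin(alpha);                   // range, formula (8)
    return { R * std::cos(alpha) * std::sin(betaRad),  // formula (11)
             R * std::cos(alpha) * std::cos(betaRad),
             R * std::sin(alpha) };
}
```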
The computation of StepC traverses the point cloud data produced by the full 360° of the blue, red and green omnidirectional laser projections. Since the present invention uses a high-definition imaging chip, and to match the vertical scanning precision, the azimuth step used to traverse the full 360° is Δβ = 0.36°. Fig. 16 shows the panorama of a moving-body laser light source scan at a certain height position: the green short-dashed line marks the point cloud data produced by the green omnidirectional laser projection, the red long-dashed line the red, and the blue short-dashed line the blue. The traversal algorithm is explained below:
Step I: set the initial azimuth angle β = 0;
Step II: using the omnidirectional laser information parsing module, follow the ray direction to access the point clouds and obtain the three imaging-plane values ||u''||(β)_green, ||u''||(β)_red and ||u''||(β)_blue corresponding to the point cloud data; compute the point cloud distance R_a and incident angle α_a with formula (7), R_b and α_b with formula (8), and R_c and α_c with formula (9); then compute the Cartesian coordinates of the point clouds with formulas (10), (11) and (12), substituting the traversal azimuth β for (β)_green, (β)_red and (β)_blue respectively; keep the computed data in the memory unit;
Step III: β ← β + Δβ, Δβ = 0.36°; judge whether β = 360° holds; if so, end the computation, otherwise go to Step II.
In StepD, the panoramic video image without laser projection is first read from memory, and the geometric information of each point cloud from StepC is fused with its color information. The fused point cloud data contain both the geometric and the color information of the point cloud, i.e. each point cloud is expressed as (R, α, β, r, g, b). The fusion algorithm is explained below:
Step ①: set the initial azimuth angle β = 0;
Step ②: according to azimuth β and the information of the two sensor-plane points ||u''||(β)_red and ||u''||(β)_green corresponding to the point cloud data, read the (r, g, b) color data of the related pixels on the laser-free panoramic video image and merge them with the corresponding (R, α, β) obtained from StepC processing, giving the point cloud geometric and color information (R, α, β, r, g, b);
Step ③: β ← β + Δβ, Δβ = 0.36°; judge whether β = 360° holds; if so, end the computation and save the results in the storage unit; otherwise go to Step ②.
In StepE, the panoramic 3D model is progressively built from the computation results of StepD. In the present invention, the moving-body laser light source completing one scan in the vertical direction, i.e. from one limit position to the other, completes the construction of the panoramic 3D model. During scanning, each moving step produces the slice point cloud at some height, as shown in Fig. 12. These data are indexed by the height value of the moving-body laser light source, so they can be accumulated in the order of slice generation and finally used to build the panoramic 3D model with geometric and color information. From the above description, the present invention has two different modes: downward panoramic 3D reconstruction and upward panoramic 3D reconstruction.
In StepF, whether the moving-body laser light source has reached a limit position is judged, i.e. whether z_move(j) = 0 or z_move(j) = h_LaserMD holds; if so, go to StepG; otherwise go to StepA.
In StepG, the main work is to export the reconstruction result and make preparations for the next reconstruction. The concrete practice is: first switch the moving-body laser light source OFF, read the panoramic video image without laser projection and keep it in the memory unit; then export the 3D reconstructed panorama model and save it to the storage unit. Since the present invention uses high-resolution acquisition both in slice point cloud generation and in the omnidirectional point cloud generation within each slice, every pixel on the imaging plane possesses geometric and color information corresponding to an actual point cloud, which effectively avoids the correspondence, tiling and branching problems of three-dimensional reconstruction. Finally, switch the moving-body laser light source ON and go to StepA to carry out a new 3D panorama model reconstruction.
The above processing yields the point cloud data of the 3D panorama model with the single viewpoint O_m of the omnidirectional vision sensor as the coordinate origin. Scan slices are created one by one as the panoramic moving-surface laser projection source scans the panoramic scene, as shown in Fig. 12; in the figure, the blue portions are the point cloud data produced by the blue omnidirectional laser projection, the red portions by the red, and the green portions by the green.
These scan slice images form automatically as the panoramic moving-surface laser projection source moves during scanning. Each time a panoramic slice image is obtained, the whole image is traversed by azimuth angle and the point cloud data produced by the blue, red and green omnidirectional laser projections are extracted. The point cloud data are stored in matrix form: rows 1-700 of the matrix store the point cloud data produced by the blue omnidirectional laser projection, rows 701-1400 the red, and rows 1401-2100 the green; the columns represent the samples of the azimuth sweep from 0° to 359.64°, 1000 columns in total. The point cloud storage matrix is therefore a 2100 × 1000 matrix. Each point cloud carries the 6 attributes (x, y, z, R, G, B), forming an ordered point cloud data set; the advantage of an ordered data set is that, the relation of neighboring points being known in advance, neighborhood operations are more efficient.
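A minimal sketch of this ordered point cloud store follows: a 2100 × 1000 matrix, rows 0..699 for the blue laser slices, 700..1399 for the red and 1400..2099 for the green, one column per 0.36° azimuth step, each cell holding the six attributes (x, y, z, R, G, B). The type names and indexing helper are illustrative assumptions.

```cpp
#include <cstddef>
#include <vector>

struct CloudPoint { float x, y, z; unsigned char r, g, b; };

struct CloudMatrix {
    static constexpr int kLasers = 3, kSlices = 700, kCols = 1000;
    std::vector<CloudPoint> data =
        std::vector<CloudPoint>(std::size_t(kLasers) * kSlices * kCols);

    // laser: 0 = blue, 1 = red, 2 = green; slice: height index 0..699;
    // col: azimuth index 0..999 (0 to 359.64 degrees).
    CloudPoint& at(int laser, int slice, int col) {
        return data[(std::size_t(laser) * kSlices + slice) * kCols + col];
    }
};
```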
The object-centered panorama drawing module reads the data in the point cloud storage matrix and calls the pcl_visualization library in PCL; through this library, the obtained point cloud data can be rapidly modeled in three dimensions and displayed visually, realizing the object-centered panorama drawing display.
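A minimal PCL sketch of this object-centered display: the ordered matrix (the CloudMatrix sketch above) is copied into a pcl::PointCloud<pcl::PointXYZRGB> and shown with the PCLVisualizer from the pcl_visualization library named in the text. This is a sketch only; the patent gives no concrete PCL code.

```cpp
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

void showCloud(const CloudMatrix& m) {
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(
        new pcl::PointCloud<pcl::PointXYZRGB>);
    for (const CloudPoint& p : m.data) {
        pcl::PointXYZRGB q;
        q.x = p.x; q.y = p.y; q.z = p.z;
        q.r = p.r; q.g = p.g; q.b = p.b;
        cloud->push_back(q);
    }
    pcl::visualization::PCLVisualizer viewer("object-centred panorama");
    pcl::visualization::PointCloudColorHandlerRGBField<pcl::PointXYZRGB> rgb(cloud);
    viewer.addPointCloud<pcl::PointXYZRGB>(cloud, rgb, "cloud");
    viewer.spin();   // blocks until the window is closed
}
```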
Skeleton view focusing on people display drafting module, be in 3D for person according to the observation and reconstruct visual angle in environment and field range drafting 3 D skeleton view, its rendering algorithm step is as follows:
STEP 1) Take the single viewpoint of the panoramic vision sensor as the coordinate origin O_m(0,0,0) and set up a three-dimensional cylindrical coordinate system;
STEP 2) Determine the size of the see-through window: taking the visual range of the human eye as the benchmark, the width direction spans about 108° and varies with the azimuth β, and the height direction varies with h;
STEP 3) From the detection result of the active panoramic vision sensor, obtain the point-cloud data (h, β, r) with the single viewpoint O_m(0,0,0) of the panoramic vision sensor as the coordinate origin. The height range is determined by the minimum distance h_min and the maximum distance h_max, giving N rows of data in total. In the width direction the azimuth β is sampled continuously with a step of 0.36°, so each slice scan produces 1000 point-cloud samples. Taking the left edge of the see-through window as the initial azimuth β_1, the right edge is then β_1 + 300 (300 columns × 0.36° = 108°), giving M = 300 columns of data;
STEP 4) Determine the first data according to the selected initial azimuth β_1. For example, for β_1 = 36°, i.e. column 100, items 100 to 400 are selected as the first row of data at the minimum distance h_min, and items 1100 to 1400 (the same 300 columns of the next slice) as the second row at h_min + Δh;
STEP 5) Build the display data matrix of the see-through window from the data between the currently obtained minimum distance h_min and maximum distance h_max (drawn from the 2100-row point store); the specific algorithm is as follows:
STEP51: determine the initial azimuth β_1 and read the data between the minimum distance h_min and the maximum distance h_max;
STEP52: h = h_min;
STEP53: read the data at distance h from the initial azimuth β_1 to β_1 + 300; if β_1 + 300 ≥ 1000, then β_1 + 300 = β_1 + 300 − 1000; take these values as a new row of the matrix; h = h + Δh;
STEP54: judge whether h ≥ h_max; if not, go to STEP53;
STEP55: end.
The above algorithm yields an 80 × 300 display matrix (80 height levels by 300 azimuth columns); a code sketch of this window extraction is given below.
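A minimal sketch of STEP51 to STEP55 under the stated assumptions (Python/NumPy; cloud_rows is a hypothetical array holding one 1000-sample azimuth row per height level starting at h_min):

    import numpy as np

    def perspective_window(cloud_rows, beta1, n_rows=80, width=300):
        """STEP51-55: cut the see-through window out of the panoramic point data.

        cloud_rows : (n_heights, 1000) array of point records, one row per
                     height level h_min, h_min + dh, ..., one column per 0.36 deg.
        beta1      : column index of the window's left edge (initial azimuth).
        """
        cols = (beta1 + np.arange(width)) % 1000   # wrap around past 359.64 deg
        return cloud_rows[:n_rows][:, cols]        # e.g. an 80 x 300 window

For example, beta1 = 100 corresponds to the initial azimuth of 36° used above, and 80 height levels between h_min and h_max give the 80 × 300 display matrix.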
STEP 6) Connect all three-dimensional coordinate points in the matrix into triangular facets: first connect the data of every row with straight lines, then the data of every column, and finally connect point (i, j) with point (i+1, j+1) with a straight line; the colour of each connecting line is the average colour of the two connected points;
STEP 7) Display all connected triangular facets on the output device; a sketch of the facet connection follows.
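The connection of STEP 6) amounts to triangulating the regular grid of the display matrix; the sketch below (Python/NumPy, an illustrative helper rather than the patent's code) produces the two triangles per grid cell implied by the row, column and (i, j)-(i+1, j+1) diagonal edges:

    import numpy as np

    def grid_triangles(n_rows, n_cols):
        """Split every cell (i,j),(i,j+1),(i+1,j),(i+1,j+1) into two triangles
        along the (i,j)-(i+1,j+1) diagonal, as described in STEP 6)."""
        idx = np.arange(n_rows * n_cols).reshape(n_rows, n_cols)
        a = idx[:-1, :-1].ravel()   # (i, j)
        b = idx[:-1, 1:].ravel()    # (i, j+1)
        c = idx[1:, :-1].ravel()    # (i+1, j)
        d = idx[1:, 1:].ravel()     # (i+1, j+1)
        # Row edge a-b, column edge a-c and diagonal a-d yield the facets:
        return np.concatenate([np.stack([a, b, d], 1), np.stack([a, d, c], 1)])

Each edge colour is then the average of its two endpoint colours, as stated above.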
The working principle of the omnidirectional vision sensor is as follows: light directed towards the centre of the hyperbolic mirror is reflected towards its virtual focus in accordance with the hyperboloid's specular properties. The real scene is reflected by the hyperbolic mirror and imaged through the collecting lens; a point P(x, y) on the imaging plane corresponds to the coordinates A(X, Y, Z) of a point in space.
In Figure 7: 2: hyperbolic mirror; 12: incident ray; 13: real focus O_m(0, 0, c) of the hyperbolic mirror; 14: virtual focus of the hyperbolic mirror, i.e. the centre O_c(0, 0, −c) of the imaging unit 6; 15: reflected ray; 16: imaging plane; 17: spatial coordinates A(X, Y, Z) of the real object; 18: spatial coordinates of the image incident on the hyperboloid mirror surface; 19: point P(x, y) reflected onto the imaging plane.
The optical system formed by the hyperbolic mirror shown in Figure 7 can be represented by the five equations below:
$$\frac{X^2+Y^2}{a^2}-\frac{(Z-c)^2}{b^2}=-1 \quad (Z>0) \qquad (20)$$
$$c=\sqrt{a^2+b^2} \qquad (21)$$
$$\beta=\tan^{-1}(Y/X) \qquad (22)$$
$$\alpha=\tan^{-1}\!\left[\frac{(b^2+c^2)\sin\gamma-2bc}{(b^2+c^2)\cos\gamma}\right] \qquad (23)$$
$$\gamma=\tan^{-1}\!\left[\frac{f}{\sqrt{x^2+y^2}}\right] \qquad (24)$$
In the formulas, X, Y, Z are spatial coordinates; c denotes the focal parameter of the hyperbolic mirror and 2c the distance between its two foci; a and b are the lengths of the real and imaginary semi-axes of the hyperbolic mirror; β is the angle between the projection of the incident ray on the XY plane and the X axis, i.e. the azimuth; α is the angle between the projection of the incident ray on the XZ plane and the X axis, called here the incidence angle (a depression angle when α ≥ 0 and an elevation angle when α < 0); f is the distance from the imaging plane to the virtual focus of the hyperbolic mirror; γ is the angle between the catadioptric ray and the Z axis; and x, y denote a point on the imaging plane.
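A worked sketch of equations (21) to (24) (Python; the mirror parameters a, b and the distance f are assumed known from calibration):

    import math

    def pixel_to_ray(x, y, a, b, f):
        """Map an imaging-plane point P(x, y) to the azimuth beta and the
        incidence angle alpha of its incident ray, per equations (21)-(24)."""
        c = math.sqrt(a * a + b * b)                   # (21)
        beta = math.atan2(y, x)                        # (22), atan2 keeps the quadrant
        gamma = math.atan2(f, math.hypot(x, y))        # (24)
        alpha = math.atan2((b * b + c * c) * math.sin(gamma) - 2.0 * b * c,
                           (b * b + c * c) * math.cos(gamma))  # (23)
        return beta, alpha

Together with the known position of the laser plane, this mapping is what allows a single image point of a laser projection to be converted into the cylindrical point-cloud coordinates (h, β, r) used throughout.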
Embodiment 2
The other structures and the working process of this embodiment are identical to Embodiment 1; the difference is that, to meet the demands of different reconstruction scenes, the moving-body laser light source hardware module is exchanged, i.e. the projection angle θ_a of the green line laser and the projection angle θ_c of the blue line laser in Figure 4 are changed so as to alter the vertical scanning range.
Embodiment 3
The rest is identical to Embodiment 1. To meet the demands of different 3D scene rendering displays, the human-centred panoramic-perspective cyclic display rendering module continuously changes the observer's viewing angle in the reconstructed 3D environment and draws the 3D perspective view with the field of view determined from ergonomics. Its core is to change the azimuth β slowly and continuously and to generate the display data matrix at each azimuth β; the key algorithm is as follows (a compact sketch follows the steps):
STEP1: determine the initial azimuth β_1 and read the minimum-distance h_min and maximum-distance h_max data;
STEP2: β = β_1; determine the number of display cycles N; n = 0;
STEP3: h = h_min;
STEP4: read the data at distance h from azimuth β to β + 300; if β + 300 ≥ 1000, then β + 300 = β + 300 − 1000; take these values as a new row of the matrix; h = h + Δh;
STEP5: judge whether h ≥ h_max; if not, go to STEP4;
STEP6: save the display data matrix at the current β, connect its triangular facets and output the display; β = β + Δβ;
STEP7: judge whether β ≥ 3600;
STEP8: if so, β = β − 3600 and n = n + 1;
STEP9: judge whether n ≥ N; if not, go to STEP3;
STEP10: end.
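Under the same assumptions as the window sketch above, the cyclic display reduces to a slow sweep of the azimuth. A hedged sketch (the patent counts β in 0.1° units and wraps at 3600; column units wrapping at 1000 are used here for brevity, and render stands for the facet-connection and display step):

    def cyclic_display(cloud_rows, beta1, n_cycles, d_beta, render):
        """Embodiment 3, STEP1-10: sweep the azimuth and redraw each window."""
        beta = beta1
        for _ in range(n_cycles):              # N display cycles
            while True:
                render(perspective_window(cloud_rows, int(beta) % 1000))
                beta += d_beta                 # slow, continuous azimuth change
                if beta >= beta1 + 1000:       # one full revolution completed
                    beta -= 1000
                    break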
Embodiment 4
The other structures and the working process of this embodiment are identical to Embodiment 1; the difference is that, to meet the demands of different 3D scene rendering displays, the human-centred stereo-perspective display rendering module continuously changes the observer's viewing angle in the reconstructed 3D environment and draws the 3D stereo perspective with the field of view determined from ergonomics. Concretely: if the perspective view generated by the human-centred perspective display rendering module is taken as the left-viewpoint image, a right-viewpoint image is generated; if it is taken as the right-viewpoint image, a left-viewpoint image is generated; and if it is taken as the median-eye image, both left and right viewpoint images are generated, thereby realizing stereo-pair generation. To this end, the coordinates of the new viewpoint are computed from the geometric relationship between the single-viewpoint coordinate O_m(0,0,0) of the ODVS and the spatial target point P(h, β, r).
Taking the single-viewpoint coordinate O_m(0,0,0) of the ODVS as the right-eye viewpoint, the coordinates of the spatial point P in the left-eye viewpoint are computed with formula (13):
$$h_L = h_R,\qquad r_L=\sqrt{B^2+r_R^2+2\,B\,r_R\cos\beta_R},\qquad \beta_L=\arcsin\!\left(\frac{r_R}{r_L}\sin\beta_R\right)\qquad(13)$$
In the formulas, h_L, r_L and β_L are the height, radial distance and azimuth of the spatial point P in the left-eye viewpoint, and h_R, r_R and β_R the corresponding quantities in the right-eye viewpoint.
Taking the single-viewpoint coordinate O_m(0,0,0) of the ODVS as the left-eye viewpoint, the coordinates of the spatial point P in the right-eye viewpoint are computed with formula (14):
$$h_R = h_L,\qquad r_R=\sqrt{B^2+r_L^2+2\,B\,r_L\cos\beta_L},\qquad \beta_R=\arcsin\!\left(\frac{r_L}{r_R}\sin\beta_L\right)\qquad(14)$$
The symbols are as defined for formula (13).
Taking the single-viewpoint coordinate O_m(0,0,0) of the ODVS as the median-eye (cyclopean) coordinate, the coordinates of the spatial point P in the left- and right-eye viewpoints are computed with formula (15):
$$h_L=h,\quad r_L=\sqrt{(B/2)^2+r^2-B\,r\cos\beta},\quad \beta_L=\arcsin\!\left(\frac{r}{r_L}\sin\beta\right)$$
$$h_R=h,\quad r_R=\sqrt{(B/2)^2+r^2+B\,r\cos\beta},\quad \beta_R=\arcsin\!\left(\frac{r}{r_R}\sin\beta\right)\qquad(15)$$
The symbols are again as defined for formula (13). B is the interocular distance: about 56 to 64 mm for women and 60 to 70 mm for men, so the value of 60 mm acceptable to both is adopted here, i.e. B = 60.
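A hedged sketch of formulas (13) to (15) (Python; the azimuth is taken in radians here, whereas the text counts it in 0.36° column steps, so a unit conversion is assumed):

    import math

    B = 60.0  # interocular distance in mm

    def other_eye(h, beta, r, B=B):
        """Formulas (13)/(14): given P(h, beta, r) at one eye, compute its
        cylindrical coordinates at the other eye (the two formulas are mirror
        images of each other)."""
        r2 = math.sqrt(B * B + r * r + 2.0 * B * r * math.cos(beta))
        return h, math.asin(r / r2 * math.sin(beta)), r2

    def eyes_from_median(h, beta, r, B=B):
        """Formula (15): given P at the median eye, compute both eye viewpoints."""
        r_l = math.sqrt((B / 2) ** 2 + r * r - B * r * math.cos(beta))
        r_r = math.sqrt((B / 2) ** 2 + r * r + B * r * math.cos(beta))
        beta_l = math.asin(r / r_l * math.sin(beta))
        beta_r = math.asin(r / r_r * math.sin(beta))
        return (h, beta_l, r_l), (h, beta_r, r_r)

The asin arguments stay within [−1, 1] because, for example, r_l² − r² sin²β = (B/2 − r cos β)² ≥ 0, so the functions are well defined for any geometry.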
The stereo-perspective rendering and display algorithm is as follows:
STEP1: determine the initial azimuth β_1; read the minimum-distance h_min and maximum-distance h_max data; select the viewpoint: median eye = 0, left viewpoint = 1, right viewpoint = 2;
STEP2: h = h_min;
STEP3: read the data at distance h from the initial azimuth β_1 to β_1 + 300; if β_1 + 300 ≥ 1000, then β_1 + 300 = β_1 + 300 − 1000. According to the selected viewpoint: if 0 (median eye) is selected, compute the cylindrical coordinates of the spatial point at the left and right eyes with formula (15); if 1 (left viewpoint) is selected, compute the right-eye cylindrical coordinates with formula (14), taking the original coordinate data as the left eye; if 2 (right viewpoint) is selected, compute the left-eye cylindrical coordinates with formula (13), taking the original coordinate data as the right eye. For the left and right viewpoints, take these values as new rows of the left-viewpoint matrix and the right-viewpoint matrix respectively; h = h + Δh;
STEP4: judge whether h ≥ h_max; if not, go to STEP3;
STEP5: end.
The above algorithm yields two 80 × 300 matrices, the display matrices of the left and right viewpoints.
All three-dimensional coordinate points in each matrix are then connected into triangular facets, as before: first the data of every row are connected with straight lines, then the data of every column, and finally point (i, j) is connected with point (i+1, j+1); the colour of each connecting line is the average colour of its two endpoints.
The stereo pair generated by this processing is then shown with binocular stereo display.
On graphics cards that support binocular stereo display, for example under an OpenGL environment with stereo support, the stereo display mode is enabled at the stage where the device handle is created, and the generated stereo pair is delivered to the left and right buffers respectively to realize the stereo display.
On graphics cards that do not support stereo display, the stereo pair is synthesized into a single red-green complementary-colour image: the red channel is extracted from one image of the pair and the green and blue channels from the other, and the extracted channels are merged to form a complementary-colour stereo image.
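A minimal sketch of the complementary-colour synthesis for cards without stereo support (Python/NumPy; left and right are H × W × 3 RGB arrays, and taking red from the left image is one common convention, assumed here):

    import numpy as np

    def anaglyph(left, right):
        """Merge a stereo pair into one red-green complementary-colour image:
        the red channel from one view, green and blue from the other."""
        out = np.empty_like(left)
        out[..., 0] = left[..., 0]     # R from the left view
        out[..., 1] = right[..., 1]    # G from the right view
        out[..., 2] = right[..., 2]    # B from the right view
        return out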
Embodiment 5
The other structures and the working process of this embodiment are identical to Embodiment 1; the difference is that, to meet the demands of different 3D scene rendering displays, the human-centred panoramic-stereogram cyclic display rendering module continuously changes the observer's viewing angle in the reconstructed 3D environment and draws the panoramic stereogram with the field of view determined from ergonomics. Its core is to change the azimuth β continuously and to generate the left-right stereo pair at each azimuth β; the key algorithm is as follows:
STEP1: determine the initial azimuth β_1; read the minimum-distance h_min and maximum-distance h_max data; select the viewpoint: median eye = 0, left viewpoint = 1, right viewpoint = 2;
STEP2: β = β_1; determine the number of display cycles N; n = 0;
STEP3: h = h_min;
STEP4: read the data at distance h from the initial azimuth β_1 to β_1 + 300; if β_1 + 300 ≥ 1000, then β_1 + 300 = β_1 + 300 − 1000. According to the selected viewpoint: if 0 (median eye) is selected, compute the cylindrical coordinates of the spatial point at the left and right eyes with formula (15); if 1 (left viewpoint) is selected, compute the right-eye cylindrical coordinates with formula (14), taking the original coordinate data as the left eye; if 2 (right viewpoint) is selected, compute the left-eye cylindrical coordinates with formula (13), taking the original coordinate data as the right eye. For the left and right viewpoints, take these values as new rows of the left-viewpoint matrix and the right-viewpoint matrix respectively; h = h + Δh;
STEP5: judge whether h ≥ h_max; if not, go to STEP4;
STEP6: save the display data matrices at the current β, connect the triangular facets of the left-viewpoint and right-viewpoint matrices respectively, and output the generated stereo pair in stereo display; β = β + Δβ;
STEP7: judge whether β ≥ 3600;
STEP8: if so, β = β − 3600 and n = n + 1;
STEP9: judge whether n ≥ N; if not, go to STEP3;
STEP10: end.
Embodiment 6
The other structures and the working process of this embodiment are identical to Embodiment 1; the difference is that, to meet the demands of different 3D scene rendering displays, the human-centred stereo-perspective rendering for a changing viewing distance continuously changes the observer's own spatial position in the 3D environment and draws the panoramic stereogram with the field of view determined from ergonomics. As the observer roams the scene from far to near or from near to far, the relative position between the observer and the reconstructed space changes; this involves both a change of the point-cloud coordinates and a multi-level display technique, and the amount of point-cloud data used for display changes correspondingly.
First, consider how the point-cloud spatial positions change when the observer roams the scene from far to near. The median-eye formulation is still adopted, i.e. the midpoint between the two eyes serves as the observer's viewpoint.
When the observer roams from far to near, a point P(h, β, r) in space relative to the observer becomes P(h_d, β_d, r_d). Because this computation is cumbersome in cylindrical coordinates, the direction of observation is taken as the Y axis, so that roaming from far to near is simply a translation by a distance D along Y. The cylindrical point P(h, β, r) is therefore first converted with formula (16)
$$x = r\sin\beta,\qquad y = r\cos\beta,\qquad z = h\qquad(16)$$
into the Cartesian point P(x, y, z). Without loss of generality, the displacement between the new viewpoint O'_m(0,0,0) and the former viewpoint O_m(0,0,0) is D, expressed by formulas (17) and (18):
$$D=\sqrt{X_0^2+Y_0^2+Z_0^2}\qquad(17)$$
$$\begin{bmatrix}X'\\Y'\\Z'\end{bmatrix}=\begin{bmatrix}1&0&0&X_0\\0&1&0&Y_0\\0&0&1&Z_0\end{bmatrix}\begin{bmatrix}X\\Y\\Z\\1\end{bmatrix}\qquad(18)$$
This yields the point-cloud coordinates P(x', y', z') under the new-viewpoint O'_m(0,0,0) coordinate system; the Cartesian point P(x', y', z') is then converted back to the cylindrical point P(h', β', r'). For approaching the roaming scene along the Y axis, a single simple operation suffices, namely y' = y + y_0. Taking the moved viewpoint O'_m(0,0,0) as the median-eye coordinate, the coordinates of the spatial point P(h', β', r') in the left- and right-eye viewpoints are computed with formula (19). For the moved viewpoint O'_m(0,0,0), the field of view changes relative to the former viewpoint O_m(0,0,0); the horizontal field of view was specified above as 108° and the vertical field of view as 66°. The field of view at the new viewpoint O'_m(0,0,0) must therefore be determined from the incidence angle α' and azimuth β' of the new viewpoint in the result of formula (19); the perspective view at the new viewpoint O'_m(0,0,0) is then drawn with the ASODVS-based human-centred stereo-perspective display rendering technique, and the human-centred panoramic stereo cyclic display at the new viewpoint O'_m(0,0,0) is drawn with the ASODVS-based human-centred panoramic-stereogram cyclic display rendering technique:
$$h'_L=h',\quad r'_L=\sqrt{(B/2)^2+{r'}^2-B\,r'\cos\beta'},\quad \beta'_L=\arcsin\!\left(\frac{r'}{r'_L}\sin\beta'\right)$$
$$h'_R=h',\quad r'_R=\sqrt{(B/2)^2+{r'}^2+B\,r'\cos\beta'},\quad \beta'_R=\arcsin\!\left(\frac{r'}{r'_R}\sin\beta'\right)\qquad(19)$$
In the formulas, h'_L, r'_L and β'_L are the height, radial distance and azimuth of the spatial point P at the left-eye viewpoint under the new-viewpoint O'_m(0,0,0) coordinate system, and h'_R, r'_R and β'_R the corresponding quantities at the right-eye viewpoint.
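A sketch of the roaming transform of formulas (16) to (18) (Python; the observation direction is taken as +Y as in the text, and the sign convention of the move y' = y + y_0 follows the text):

    import math

    def move_viewpoint(h, beta, r, y0):
        """Translate a cylindrical point P(h, beta, r) by a viewpoint move of
        y0 along the observation (Y) axis and return P(h', beta', r') at the
        new viewpoint O'_m, ready to be fed into formula (19)."""
        x = r * math.sin(beta)       # (16): cylindrical -> Cartesian
        y = r * math.cos(beta) + y0  # (17)/(18) reduce to a Y translation here
        r2 = math.hypot(x, y)        # back to cylindrical at O'_m
        beta2 = math.atan2(x, y)     # beta is measured from +Y, matching (16)
        return h, beta2, r2          # the height h is unchanged by the move

The left- and right-eye coordinates at the new viewpoint then follow from formula (19), exactly as in the eyes_from_median sketch above but with the primed quantities.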
When displaying at the new viewpoint, the distance from the new viewpoint to the root node is first judged; when it is small, that is, when the point-cloud data available for display fall below some threshold, lower-level point-cloud data are further generated by interpolating from the adjacent root-node data. Generating lower-level point-cloud data, i.e. interpolation, is a basic operation of volume rendering; because of its large computational load it is realized with the fast interpolation of volume rendering.

Claims (10)

1. A 3D environment duplication system based on active panoramic vision, comprising an omnidirectional vision sensor, a moving-body laser light source, and a microprocessor for performing 3D panoramic reconstruction and 3D panoramic rendering output on the omnidirectional images, the omnidirectional vision sensor being mounted on the guiding support bar of the moving-body laser light source, characterized in that:
the moving-body laser light source further comprises a volumetric laser light source that moves up and down along the guiding support bar, the laser light source having a first omnidirectional laser perpendicular to the guiding support bar, a second omnidirectional laser inclined at θ_c to the axis of the guiding support bar, and a third omnidirectional laser inclined at θ_a to the axis of the guiding support bar;
the microprocessor is divided into a calibration part, a 3D reconstruction part and a 3D panoramic display rendering part;
the calibration part determines the calibration parameters of the omnidirectional vision sensor and parses, on the panoramic image captured by the omnidirectional vision sensor, the laser projection information corresponding to the first, second and third omnidirectional lasers;
the 3D reconstruction part computes the point-cloud geometric information of the moving surface from the position of the moving-body laser light source and the pixel coordinates associated with said laser projection information, and fuses the point-cloud geometric information of the moving surface with the colour information of each omnidirectional laser to build the panoramic 3D model;
the 3D panoramic display rendering part comprises:
a human-centred perspective-view display rendering module, which draws a human-centred perspective view according to said panoramic 3D model and the observer's viewing angle and field of view in the reconstructed 3D environment.
2. The 3D environment duplication system based on active panoramic vision of claim 1, characterized in that the 3D panoramic display rendering part further comprises:
a panoramic-perspective cyclic display rendering module, which draws a human-centred panoramic-perspective cyclic display according to said panoramic 3D model and the cyclic variation of the observer's viewing angle and field of view in the reconstructed 3D environment;
a stereo-perspective display rendering module, which generates right-viewpoint, left-viewpoint or left-and-right viewpoint images from said perspective view and draws a human-centred stereo perspective view;
a panoramic-stereogram cyclic display rendering module, which, according to said panoramic 3D model and the cyclic variation of the observer's viewing angle and field of view in the reconstructed 3D environment, continuously changes the azimuth β and creates the left-right stereo pair at each azimuth β, drawing a human-centred panoramic stereo-perspective cyclic display;
a viewing-distance-change stereo-perspective display rendering module, which, according to said panoramic 3D model and the change of the observer's viewing distance and field of view in the reconstructed environment, draws the human-centred panoramic stereo-perspective display while the viewing distance changes continuously.
3. A 3D panoramic display rendering method based on the 3D environment duplication system of claim 1 or 2, characterized by comprising the steps of:
1) capturing, with the omnidirectional vision sensor, the panoramic image formed by the projections of the moving-body laser light source;
2) determining the calibration parameters of the omnidirectional vision sensor from said panoramic image and parsing the laser projection information corresponding to the first, second and third omnidirectional lasers;
3) computing the point-cloud geometric information of the moving surface from the position of the moving-body laser light source and the pixel coordinates associated with said laser projection information, and fusing the point-cloud geometric information of the moving surface with the colour information of each omnidirectional laser to build the panoramic 3D model;
4) drawing a human-centred perspective view according to said panoramic 3D model and the observer's viewing angle and field of view in the reconstructed 3D environment; the concrete steps are as follows:
STEP1) take the single viewpoint of the omnidirectional vision sensor as the coordinate origin O_m(0,0,0) and set up a three-dimensional cylindrical coordinate system;
STEP2) determine the size of the see-through window according to the visual range of the human eye, the azimuth β and the height h being the window's variables, and obtain the point-cloud data (h, β, r) corresponding to the window;
STEP3) generate the data matrix from the step length of the azimuth β, the range of the height h and said point-cloud data (h, β, r);
STEP4) connect all three-dimensional coordinate points in the data matrix into triangular facets, the colour of each connecting line being the average colour of the two connected points;
STEP5) output and display all connected triangular facets, completing the human-centred perspective-view drawing.
4. The 3D panoramic display rendering method of claim 3, characterized in that the method further comprises drawing a human-centred panoramic-perspective cyclic display: by continuously changing the azimuth β, the display data matrix at each azimuth β is created, completing the panoramic-perspective cyclic display drawing; the algorithm for the display data matrix is as follows:
4.1) determine the initial azimuth β_1 and read the minimum-distance h_min and maximum-distance h_max data;
4.2) β = β_1; determine the number of display cycles N; n = 0;
4.3) h = h_min;
4.4) read the data at distance h from azimuth β to β + 300; if β + 300 ≥ 1000, then β + 300 = β + 300 − 1000; take these values as a new row of the matrix; h = h + Δh;
4.5) judge whether h ≥ h_max; if not, go to 4.4);
4.6) save the display data matrix at the current β, connect its triangular facets and output the display; β = β + Δβ;
4.7) judge whether β ≥ 3600;
4.8) if so, β = β − 3600 and n = n + 1;
4.9) judge whether n ≥ N; if not, go to 4.3).
5. The 3D panoramic display rendering method of claim 3, characterized in that the method further comprises drawing a human-centred stereo perspective view, the concrete rendering algorithm being as follows:
5.1: determine the initial azimuth β_1; read the minimum-distance h_min and maximum-distance h_max data; select the viewpoint: median eye = 0, left viewpoint = 1, right viewpoint = 2;
5.2: h = h_min;
5.3: read the data at distance h from the initial azimuth β_1 to β_1 + 300; if β_1 + 300 ≥ 1000, then β_1 + 300 = β_1 + 300 − 1000. According to the selected viewpoint: for the median eye 0, compute the cylindrical coordinates of the spatial point at the left and right eyes; for the left viewpoint 1, compute the right-eye cylindrical coordinates of the spatial point, taking the original coordinate data as the left eye; for the right viewpoint 2, compute the left-eye cylindrical coordinates, taking the original coordinate data as the right eye; for the left and right viewpoints, take these values as new rows of the left-viewpoint matrix and the right-viewpoint matrix respectively; h = h + Δh;
5.4: judge whether h ≥ h_max; if not, go to 5.3, until the display matrices of the left and right viewpoints are generated;
5.5: connect all three-dimensional coordinate points in each display matrix into triangular facets: first connect the data of every row with straight lines, then the data of every column, and finally connect point (i, j) with point (i+1, j+1); the colour of each connecting line is the average colour of the two connected points;
5.6: show the stereo pair generated by the above processing with binocular stereo display, completing the human-centred stereo-perspective drawing.
6. The 3D panoramic display rendering method of claim 3, characterized in that the method further comprises drawing a human-centred panoramic stereo-perspective cyclic display: by continuously changing the azimuth β, the left-right stereo pair at each azimuth β is created; the algorithm is as follows:
6.1: determine the initial azimuth β_1; read the minimum-distance h_min and maximum-distance h_max data; select the viewpoint: median eye = 0, left viewpoint = 1, right viewpoint = 2;
6.2: β = β_1; determine the number of display cycles N; n = 0;
6.3: h = h_min;
6.4: read the data at distance h from the initial azimuth β_1 to β_1 + 300; if β_1 + 300 ≥ 1000, then β_1 + 300 = β_1 + 300 − 1000. According to the selected viewpoint: for the median eye 0, compute the cylindrical coordinates of the spatial point at the left and right eyes; for the left viewpoint 1, compute the right-eye cylindrical coordinates, taking the original coordinate data as the left eye; for the right viewpoint 2, compute the left-eye cylindrical coordinates, taking the original coordinate data as the right eye; for the left and right viewpoints, take these values as new rows of the left-viewpoint matrix and the right-viewpoint matrix respectively; h = h + Δh;
6.5: judge whether h ≥ h_max; if not, go to 6.4;
6.6: save the display data matrices at the current β, connect the triangular facets of the left-viewpoint and right-viewpoint matrices respectively, and output the generated stereo pair in stereo display; β = β + Δβ;
6.7: judge whether β ≥ 3600;
6.8: if so, β = β − 3600 and n = n + 1;
6.9: judge whether n ≥ N; if not, go to 6.3.
7. The 3D panoramic display rendering method of claim 5 or 6, characterized in that computing the coordinates of the new viewpoint from the geometric relationship between the single-viewpoint coordinate O_m(0,0,0) and the spatial target point P(h, β, r) comprises:
taking the single-viewpoint coordinate O_m(0,0,0) as the right-eye viewpoint and computing the coordinates of the spatial point P in the left-eye viewpoint with formula (13):
$$h_L = h_R,\qquad r_L=\sqrt{B^2+r_R^2+2\,B\,r_R\cos\beta_R},\qquad \beta_L=\arcsin\!\left(\frac{r_R}{r_L}\sin\beta_R\right)\qquad(13)$$
where h_L, r_L and β_L are the height, radial distance and azimuth of the spatial point P in the left-eye viewpoint, and h_R, r_R and β_R the corresponding quantities in the right-eye viewpoint;
taking the single-viewpoint coordinate O_m(0,0,0) as the left-eye viewpoint and computing the coordinates of the spatial point P in the right-eye viewpoint with formula (14):
$$h_R = h_L,\qquad r_R=\sqrt{B^2+r_L^2+2\,B\,r_L\cos\beta_L},\qquad \beta_R=\arcsin\!\left(\frac{r_L}{r_R}\sin\beta_L\right)\qquad(14)$$
taking the single-viewpoint coordinate O_m(0,0,0) as the median-eye coordinate and computing the coordinates of the spatial point P in the left- and right-eye viewpoints with formula (15):
$$h_L=h,\quad r_L=\sqrt{(B/2)^2+r^2-B\,r\cos\beta},\quad \beta_L=\arcsin\!\left(\frac{r}{r_L}\sin\beta\right)$$
$$h_R=h,\quad r_R=\sqrt{(B/2)^2+r^2+B\,r\cos\beta},\quad \beta_R=\arcsin\!\left(\frac{r}{r_R}\sin\beta\right)\qquad(15)$$
wherein B is the distance between the two eyes.
8. The 3D panoramic display rendering method of claim 3, characterized in that the method further comprises drawing a human-centred viewing-distance-change stereo perspective view: the observer's own spatial position in the 3D environment is changed continuously, and the panoramic stereogram is drawn with the field of view determined from ergonomics; specifically:
first, when the observer roams the scene from far to near, a spatial point P(h, β, r) relative to the observer becomes P(h_d, β_d, r_d); taking the direction of observation as the Y axis, roaming from far to near is simply a translation by a distance D along Y, so the cylindrical point P(h, β, r) is first converted with formula (16)
$$x = r\sin\beta,\qquad y = r\cos\beta,\qquad z = h\qquad(16)$$
into the Cartesian point P(x, y, z); the displacement between the moved new viewpoint O'_m(0,0,0) and the former viewpoint O_m(0,0,0) is D, expressed by formulas (17) and (18):
$$D=\sqrt{X_0^2+Y_0^2+Z_0^2}\qquad(17)$$
$$\begin{bmatrix}X'\\Y'\\Z'\end{bmatrix}=\begin{bmatrix}1&0&0&X_0\\0&1&0&Y_0\\0&0&1&Z_0\end{bmatrix}\begin{bmatrix}X\\Y\\Z\\1\end{bmatrix}\qquad(18)$$
this yields the point-cloud coordinates P(x', y', z') under the new-viewpoint O'_m(0,0,0) coordinate system, and the Cartesian point P(x', y', z') is then converted back to the cylindrical point P(h', β', r'); taking the moved viewpoint O'_m(0,0,0) as the median-eye coordinate, the coordinates of the spatial point P(h', β', r') in the left- and right-eye viewpoints are computed with formula (19); the field of view at the new viewpoint O'_m(0,0,0) is determined from the incidence angle α' and azimuth β' in the result of formula (19); the perspective view at the new viewpoint O'_m(0,0,0) is then drawn with the human-centred stereo-perspective display rendering method, and the human-centred panoramic stereo cyclic display at the new viewpoint O'_m(0,0,0) is drawn with the human-centred panoramic-stereogram cyclic display rendering method;
$$h'_L=h',\quad r'_L=\sqrt{(B/2)^2+{r'}^2-B\,r'\cos\beta'},\quad \beta'_L=\arcsin\!\left(\frac{r'}{r'_L}\sin\beta'\right)$$
$$h'_R=h',\quad r'_R=\sqrt{(B/2)^2+{r'}^2+B\,r'\cos\beta'},\quad \beta'_R=\arcsin\!\left(\frac{r'}{r'_R}\sin\beta'\right)\qquad(19)$$
where h'_L, r'_L and β'_L are the height, radial distance and azimuth of the spatial point P at the left-eye viewpoint under the new-viewpoint O'_m(0,0,0) coordinate system, and h'_R, r'_R and β'_R the corresponding quantities at the right-eye viewpoint.
9. The 3D panoramic display rendering method of claim 3, characterized in that the first, second and third omnidirectional lasers are a blue line laser, a red line laser and a green line laser respectively; the blue and green line lasers are arranged above and below the red line laser, and the axes of all the line lasers intersect at a single point on the axis of the guiding support bar.
10. The 3D panoramic display rendering method of claim 9, characterized in that the method further comprises an object-centred panoramic rendering display: from the point-cloud data of the 3D panorama model with the single viewpoint O_m of the omnidirectional vision sensor as the coordinate origin, and from the scan slices produced while the moving-body laser light source scans the panoramic scene, the point-cloud data produced by the blue, red and green omnidirectional laser projections are extracted and the point-cloud data matrix is generated, realizing the object-centred panoramic rendering display.
CN201410632152.3A 2014-11-11 2014-11-11 3D environment dubbing system and 3D panoramas display method for drafting based on active panoramic vision Active CN104374374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410632152.3A CN104374374B (en) 2014-11-11 2014-11-11 3D environment dubbing system and 3D panoramas display method for drafting based on active panoramic vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410632152.3A CN104374374B (en) 2014-11-11 2014-11-11 3D environment dubbing system and 3D panoramas display method for drafting based on active panoramic vision

Publications (2)

Publication Number Publication Date
CN104374374A true CN104374374A (en) 2015-02-25
CN104374374B CN104374374B (en) 2017-07-07

Family

ID=52553430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410632152.3A Active CN104374374B (en) 2014-11-11 2014-11-11 3D environment dubbing system and 3D panoramas display method for drafting based on active panoramic vision

Country Status (1)

Country Link
CN (1) CN104374374B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006094409A1 (en) * 2005-03-11 2006-09-14 Creaform Inc. Auto-referenced system and apparatus for three-dimensional scanning
JP2011008772A (en) * 2009-05-28 2011-01-13 Honda Research Inst Europe Gmbh Driver assistance system or robot with dynamic attention module
CN101619962A (en) * 2009-07-30 2010-01-06 浙江工业大学 Active three-dimensional panoramic view vision sensor based on full color panoramic view LED light source
CN102289144A (en) * 2011-06-30 2011-12-21 浙江工业大学 Intelligent three-dimensional (3D) video camera equipment based on all-around vision sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TANG Yiping et al.: "Design of a stereo omnidirectional vision sensor" (立体全方位视觉传感器的设计), Chinese Journal of Scientific Instrument (仪器仪表学报), vol. 31, no. 7, 31 July 2010 (2010-07-31) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204733A (en) * 2016-07-22 2016-12-07 青岛大学附属医院 Three-dimensional construction system combining liver and kidney CT images
CN106204733B (en) * 2016-07-22 2024-04-19 青岛大学附属医院 Liver and kidney CT image combined three-dimensional construction system
CN107958489A (en) * 2016-10-17 2018-04-24 杭州海康威视数字技术股份有限公司 Curved surface reconstruction method and device
CN107958489B (en) * 2016-10-17 2021-04-02 杭州海康威视数字技术股份有限公司 Curved surface reconstruction method and device
CN109631799A (en) * 2019-01-09 2019-04-16 王红军 An intelligent measurement and labeling method
CN109508755A (en) * 2019-01-22 2019-03-22 中国电子科技集团公司第五十四研究所 A psychological assessment method based on image cognition
CN109508755B (en) * 2019-01-22 2022-12-09 中国电子科技集团公司第五十四研究所 Psychological assessment method based on image cognition
CN110659440A (en) * 2019-09-25 2020-01-07 云南电网有限责任公司曲靖供电局 Method for rapidly and dynamically displaying different levels of detail of a large point-cloud scene

Also Published As

Publication number Publication date
CN104374374B (en) 2017-07-07

Similar Documents

Publication Publication Date Title
CN102679959B Omnidirectional 3D (three-dimensional) modeling system based on an active omnidirectional vision sensor
US11677920B2 (en) Capturing and aligning panoramic image and depth data
CN109615652B (en) Depth information acquisition method and device
CN105243637B Panoramic image stitching method based on three-dimensional laser point clouds
CN108648232B (en) Binocular stereoscopic vision sensor integrated calibration method based on precise two-axis turntable
CN105023275B (en) Super-resolution optical field acquisition device and its three-dimensional rebuilding method
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
CN104406539A (en) All-weather active type panoramic sensing device and 3D (three dimensional) panoramic modeling approach
CN108053476B (en) Human body parameter measuring system and method based on segmented three-dimensional reconstruction
CN103903222B (en) Three-dimensional sensing method and three-dimensional sensing device
CN111028340B (en) Three-dimensional reconstruction method, device, equipment and system in precise assembly
CN110827392B (en) Monocular image three-dimensional reconstruction method, system and device
CN110243307A An automated three-dimensional colour imaging and measuring system
CN104374374A (en) Active omni-directional vision-based 3D (three-dimensional) environment duplication system and 3D omni-directional display drawing method
WO2009120073A2 (en) A dynamically calibrated self referenced three dimensional structured light scanner
CN104567818A (en) Portable all-weather active panoramic vision sensor
CN205451195U Real-time three-dimensional point-cloud reconstruction system based on multiple cameras
CN110230979A A stereoscopic target and a method for calibrating a three-dimensional colour digitization system with it
CN109242898A A three-dimensional modeling method and system based on image sequences
CN103337069A (en) A high-quality three-dimensional color image acquisition method based on a composite video camera and an apparatus thereof
CN114543787B (en) Millimeter-scale indoor map positioning method based on fringe projection profilometry
CN103868500A (en) Spectral three-dimensional imaging system and method
Liu et al. The applications and summary of three dimensional reconstruction based on stereo vision
Holdener et al. Design and implementation of a novel portable 360 stereo camera system with low-cost action cameras
CN109089100B (en) Method for synthesizing binocular stereo video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant