CN104374374B - 3D environment replication system and 3D panorama rendering display method based on active panoramic vision - Google Patents


Info

Publication number
CN104374374B
Authority
CN
China
Prior art keywords
laser
display
view
viewpoint
data
Prior art date
Legal status
Active
Application number
CN201410632152.3A
Other languages
Chinese (zh)
Other versions
CN104374374A
Inventor
汤平
汤一平
周伟敏
鲁少辉
韩国栋
吴挺
陈麒
韩旺明
胡克钢
王伟羊
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201410632152.3A priority Critical patent/CN104374374B/en
Publication of CN104374374A publication Critical patent/CN104374374A/en
Application granted granted Critical
Publication of CN104374374B publication Critical patent/CN104374374B/en


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a 3D environment replication system based on active panoramic vision, comprising an omnidirectional vision sensor, a moving volumetric laser light source, and a microprocessor that performs 3D panorama reconstruction and 3D panorama rendering output from omnidirectional images. The microprocessor consists of three parts: calibration, 3D reconstruction, and 3D panorama rendering display. The 3D panorama rendering display part mainly includes: an object-centered panorama rendering module, a person-centered perspective-view rendering module, a panoramic perspective cyclic-display rendering module, a stereoscopic perspective-view rendering module, a panoramic stereogram cyclic-display rendering module, and a viewing-distance-varying stereoscopic perspective-view rendering module. The invention also discloses a 3D panorama rendering display method based on active panoramic vision. The invention achieves a unity of geometric accuracy and realism in the reconstructed panoramic 3D model, immersive rendering display of the panoramic 3D scene, and automation of the reconstruction process.

Description

3D environment replication system and 3D panorama rendering display method based on active panoramic vision
Technical field
The present invention relates to the application of laser light sources, omnidirectional vision sensors and computer vision technology in stereoscopic vision measurement and 3D rendering, and more particularly to a 3D environment replication system and a 3D panorama rendering display method based on active panoramic vision.
Background technology
Three-dimensional reconstruction comprises three-dimensional measurement and stereo reconstruction; it is an emerging application technology with great development potential and practical value. The reconstruction of three-dimensional models mainly concerns three aspects: 1) geometric accuracy; 2) realism; 3) automation of the reconstruction process. The data required for reconstructing a three-dimensional model mainly comprise depth image data from laser scanning and image data collected by an image sensor.
Current three-dimensional laser scanners still leave much room for improvement. 1) Their precise hardware construction requires high-quality integration of CCD technology, laser technology and precision machinery sensing technology, which gives such instruments high manufacturing and maintenance costs. 2) Existing three-dimensional laser scanning is a surface-scan imaging technique: a single scanned point cloud cannot capture the whole of a building, especially a building interior; the point clouds obtained from different scanning stations (viewing angles) each use their own local coordinate system and must therefore be registered into one unified coordinate system. Registration involves repeated conversion among multiple coordinate systems, which introduces various errors and consumes computing speed and resources. 3) Point cloud acquisition picks up considerable interference, so the point cloud data must be preprocessed. 4) The point cloud software bundled with each manufacturer's scanner lacks a unified data standard, making data sharing difficult; this problem is especially prominent in digital-city construction. 5) The geometry and color information of a spatial target point are obtained by two different devices, and the registration quality between their data directly affects texture mapping and texture synthesis. 6) Three-dimensional modeling requires repeated manual intervention, so modeling efficiency is low; it demands operators with substantial professional knowledge and limits the degree of automation.
Chinese invention patent application No. 201210137201.7 discloses an omnidirectional three-dimensional modeling system based on an active panoramic vision sensor. The system mainly comprises an omnidirectional vision sensor, a moving volumetric laser light source and a microprocessor for 3D panorama reconstruction of omnidirectional images. The moving volumetric laser light source completes one vertical scan, obtaining slice point clouds at different heights; these data are saved with the height of the moving volumetric laser light source as index, so the slice point clouds can be accumulated in generation order to finally construct a panoramic 3D model with both geometric and color information. That technical solution has two main problems. First, the point cloud obtained by scanning with the moving volumetric laser light source cannot capture planes perpendicular to the source, such as indoor desktops, floors and ceilings. Second, current 3D reconstruction software cannot satisfy person-centered 3D rendering display: existing 3D software mainly provides object-centered 3D display, whereas person-centered 3D display must give the observer an immersive rendering of the reconstructed scene. For example, the final goal of digitizing the Longmen Grottoes in 3D is to let anyone tour them remotely over the internet and appreciate their artistic charm from multiple angles in an all-round experience. In ergonomics, visual displays are designed around the static visual field and related parameters; to give people an immersive feeling when touring a displayed scene over the internet, a person-centered stereoscopic 3D rendering display must be realized with ergonomic methods.
The content of the invention
To overcome the shortcomings of existing passive panoramic stereo vision measurement devices, namely heavy computer resource usage, poor real-time performance, weak practicality and low robustness, and the susceptibility to ambient-light interference of active panoramic 3D vision measurement devices with full-color panoramic LED light sources, the present invention provides an omnidirectional three-dimensional modeling system based on an active panoramic vision sensor that directly obtains the geometric position information and color information of spatial 3D points, reduces computer resource usage, completes measurement rapidly, and offers good real-time performance, practicality and robustness; it uses a rendering display mode that blends object-centered 3D macro vision with person-centered 3D micro vision, increasing the user's immersive experience.
To realize the foregoing, several key problems must be solved: (1) a moving volumetric laser light source that can cover the whole reconstructed scene; (2) an active panoramic vision sensor that can rapidly obtain the depth information of real objects; (3) a method for rapidly fusing laser-scanned spatial data points with the corresponding pixels of the panoramic image; (4) a highly automated 3D scene reconstruction method based on regular point cloud data; (5) a person-centered 3D panorama rendering display technique; (6) a rendering display technique blending object-centered 3D macro vision with person-centered 3D micro vision; (7) automation of the 3D reconstruction process with reduced manual intervention, so that scanning, processing, generation and rendering display proceed without interruption.
The technical solution adopted by the present invention to solve these technical problems is as follows:
A 3D environment replication system based on active panoramic vision, comprising an omnidirectional vision sensor, a moving volumetric laser light source, and a microprocessor for performing 3D panorama reconstruction and 3D panorama rendering output on omnidirectional images; the omnidirectional vision sensor is mounted on the guiding support bar of the moving volumetric laser light source.
The moving volumetric laser light source further includes a volumetric laser source that moves up and down along the guiding support bar; the volumetric laser source has a first omnidirectional plane laser perpendicular to the guiding support bar, a second omnidirectional plane laser inclined at θc to the axis of the guiding support bar, and a third omnidirectional plane laser inclined at θa to that axis.
The microprocessor is divided into a calibration part, a 3D reconstruction part and a 3D panorama rendering display part.
The calibration part determines the calibration parameters of the omnidirectional vision sensor and parses, on the panoramic images captured by the omnidirectional vision sensor, the laser projection information corresponding to the first, second and third omnidirectional plane lasers.
The 3D reconstruction part calculates the point cloud geometry of the moving plane from the position of the moving volumetric laser light source and the pixel coordinates associated with the laser projection information, fuses that geometry with the color information of each omnidirectional plane laser, and builds the panoramic 3D model.
The 3D panorama rendering display part includes:
a person-centered perspective-view rendering module, which renders a person-centered perspective view from the panoramic 3D model and the observer's viewing angle and field of view within the 3D reconstructed environment.
The 3D panorama rendering display part further includes:
a panoramic perspective cyclic-display rendering module, which renders a person-centered panoramic perspective cyclic display from the panoramic 3D model and the cyclic variation of the observer's viewing angle and field of view within the 3D reconstructed environment;
a stereoscopic perspective-view rendering module, which generates a right-viewpoint image, a left-viewpoint image and a left-right viewpoint image pair from the perspective view, and renders a person-centered stereoscopic perspective view;
a panoramic stereogram cyclic-display rendering module, which, from the panoramic 3D model and the cyclic variation of the observer's viewing angle and field of view within the 3D reconstructed environment, continuously changes the azimuth angle β and renders a person-centered panoramic stereoscopic perspective cyclic display from the left-right stereo image pair generated at each β;
a viewing-distance-varying stereoscopic perspective-view rendering module, which, from the panoramic 3D model and the changes of the observer's viewing distance and field of view within the 3D reconstructed environment, renders a person-centered panoramic stereogram display while the viewing distance is continuously changed.
A 3D panorama rendering display method based on the above 3D environment replication system, comprising the steps of:
1) capturing, with the omnidirectional vision sensor, the panoramic images formed under the projection of the moving volumetric laser light source;
2) determining the calibration parameters of the omnidirectional vision sensor from the panoramic images, and parsing the laser projection information corresponding to the first, second and third omnidirectional plane lasers;
3) calculating the point cloud geometry of the moving plane from the position of the moving volumetric laser light source and the pixel coordinates associated with the laser projection information, fusing it with the color information of each omnidirectional plane laser, and building the panoramic 3D model;
4) rendering a person-centered perspective view from the panoramic 3D model and the observer's viewing angle and field of view in the 3D reconstructed environment; the specific steps are as follows:
STEP 1) establish a three-dimensional cylindrical space coordinate system with the single viewpoint of the omnidirectional vision sensor as the coordinate origin Om(0, 0, 0);
STEP 2) determine the size of the see-through window from the visual range of the human eye, with the azimuth angle β and the height h as window variables, and obtain the point cloud data (h, β, r) corresponding to the see-through window;
STEP 3) generate a data matrix from the step sizes and ranges of the azimuth angle β and the height h and from the point cloud data (h, β, r);
STEP 4) connect all three-dimensional coordinates in the data matrix with triangular facets, the color of each connecting line being the average color of its two endpoints;
STEP 5) output and display all connected triangular facets, completing the person-centered perspective-view rendering.
The 3D panorama rendering display method also includes rendering a person-centered panoramic perspective cyclic display: the azimuth angle β is changed continuously and the display data matrix at each β is generated, completing the panoramic perspective cyclic display rendering.
The 3D panorama rendering display method also includes rendering a person-centered stereoscopic perspective view; the specific rendering algorithm is as follows:
5.1: determine the initial azimuth angle β1, read the data between the minimum distance hmin and the maximum distance hmax, and select the viewpoint mode: middle eye = 0, left viewpoint = 1, right viewpoint = 2;
5.2: h = hmin;
5.3: read the data of distance value h from the initial azimuth β1 to β1 + 300; if β1 + 300 ≥ 1000, then β1 + 300 = β1 + 300 − 1000. According to the selected viewpoint mode: for middle eye 0, compute the cylindrical coordinates of the spatial target point in both eyes; for left viewpoint 1, compute the right-eye cylindrical coordinates of the spatial target point and use the original coordinate data as the left-eye cylindrical coordinates; for right viewpoint 2, compute the left-eye cylindrical coordinates and use the original coordinate data as the right-eye cylindrical coordinates. Append these values as a new data row of the left-viewpoint matrix and of the right-viewpoint matrix respectively; h = h + Δh;
5.4: if h ≥ hmax is not satisfied, go to 5.3, until the left- and right-viewpoint display matrices are generated;
5.5: connect all three-dimensional coordinates in each display matrix with triangular facets: first connect the data of each row with straight lines, then the data of each column, and finally connect the point cloud data at (i, j) and (i+1, j+1) with straight lines; the color of each connecting line is the average color of its two endpoints;
5.6: display the stereogram pair generated by the above processing with binocular stereo output, completing the person-centered stereoscopic perspective-view rendering.
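The per-eye coordinate computation of step 5.3 is carried in figures not reproduced in this text; the sketch below shows one plausible geometry under stated assumptions: the two eyes are displaced by half the interocular baseline perpendicular to the viewing azimuth, and each cylindrical point (h, β, r) is re-expressed about the displaced eye. All names are illustrative, not the patent's.

```python
import numpy as np

def eye_cylindrical(h, beta, r, eye_offset, view_azimuth):
    """Re-express one cylindrical point (h, beta, r) about an eye displaced
    by eye_offset (signed half-baseline, e.g. +/-32 mm) perpendicular to
    view_azimuth. The baseline geometry is an assumption; the patent's
    exact per-eye formulas are not reproduced here."""
    x, y = r * np.cos(beta), r * np.sin(beta)
    ex = -eye_offset * np.sin(view_azimuth)   # eye position in the h-plane
    ey = eye_offset * np.cos(view_azimuth)
    return h, np.arctan2(y - ey, x - ex), np.hypot(x - ex, y - ey)

# Viewpoint mode 1 (left viewpoint): the original data serve as the left-eye
# coordinates and the right eye is computed with the opposite offset,
# mirroring the selection logic of step 5.3.
```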
The 3D panorama rendering display method also includes rendering a person-centered panoramic stereoscopic perspective cyclic display: the azimuth angle β is changed continuously, the display data matrix of the current β is saved, the left-viewpoint and right-viewpoint matrices are each connected with triangular facets, and the generated stereogram is output on a stereoscopic display.
The 3D panorama rendering display method also includes rendering a person-centered stereoscopic perspective view under changing viewing distance: as the observer continuously changes position in the 3D environment, the field of view is determined from an ergonomic standpoint and the panoramic stereogram is rendered.
The first, second and third omnidirectional plane lasers are respectively a blue line laser, a red line laser and a green line laser; the blue and green line lasers are arranged above and below the red line laser, and the axes of all the line lasers intersect at one point on the axis of the guiding support bar.
The 3D panorama rendering display method also includes object-centered panorama rendering display: from the point cloud data of the 3D panorama model, whose coordinate origin is the single viewpoint Om of the omnidirectional vision sensor, and from the scan slices produced while the moving volumetric laser light source scans the panoramic scene, the point cloud data produced by the blue, red and green omnidirectional plane laser projections are extracted and the point cloud data matrix is generated, realizing the object-centered panorama rendering display.
The beneficial effects of the present invention are mainly:
(1) a brand-new stereoscopic vision acquisition method: the combined characteristics of omnidirectional laser scanning and omnidirectional vision give the reconstructed three-dimensional model both high precision and good texture information;
(2) computer resource usage is effectively reduced; the method offers good real-time performance, practicality, high robustness and a high degree of automation, and the whole 3D reconstruction needs no manual intervention;
(3) omnidirectional laser detection guarantees geometric accuracy, and high-resolution panoramic image acquisition gives each pixel of the panoramic image both geometric information and color information, guaranteeing the realism of the 3D reconstruction; the whole process of scanning, parsing and calculation is automatic, the ill-posed computation problems of 3D reconstruction do not arise, and the 3D reconstruction process is automated, achieving a perfect unity of geometric accuracy, realism and automation in reconstructing the panoramic 3D model;
(4) a person-centered 3D panorama rendering display technique is realized: object-centered 3D vision and person-centered 3D vision are merged so that the rendered panoramic 3D scene gives a stronger immersive impression.
Brief description of the drawings
Fig. 1 is a structural diagram of the omnidirectional vision sensor;
Fig. 2 is the imaging model of the single-viewpoint catadioptric omnidirectional vision sensor: Fig. 2(a) the perspective imaging process, Fig. 2(b) the sensor plane, Fig. 2(c) the image plane;
Fig. 3 is a schematic diagram of the moving volumetric laser light source;
Fig. 4 illustrates the calibration of the active panoramic vision sensor;
Fig. 5 is the hardware configuration of the omnidirectional three-dimensional modeling system based on the active panoramic vision sensor;
Fig. 6 shows the structure of the omnidirectional laser generator component: Fig. 6(a) is its front view, Fig. 6(b) its top view;
Fig. 7 is the imaging schematic of the omnidirectional vision sensor;
Fig. 8 illustrates the perspective imaging principle;
Fig. 9 illustrates binocular stereo imaging under changing viewing distance: Fig. 9(a) before the change of viewing distance, Fig. 9(b) after the change;
Fig. 10 is the software architecture of the omnidirectional three-dimensional modeling and 3D panorama rendering based on the active panoramic vision sensor;
Fig. 11 illustrates the calculation of point cloud spatial geometry in the omnidirectional three-dimensional modeling based on the active panoramic vision sensor;
Fig. 12 is a schematic of the slice panoramic images obtained while acquiring three-dimensional point cloud data;
Fig. 13 illustrates the process of parsing point cloud spatial geometry on the panoramic image;
Fig. 14 illustrates the horizontal field of view of the human eye;
Fig. 15 illustrates the vertical field of view of the human eye;
Fig. 16 illustrates the extraction of the green, red and blue laser projection lines from a parsed panoramic slice image.
Specific embodiment
Embodiment 1
Referring to Figs. 1-16, a 3D environment replication system and 3D panorama rendering display method based on active panoramic vision comprise an omnidirectional vision sensor, a moving volumetric laser light source, and a microprocessor for performing 3D panorama reconstruction and 3D panorama rendering output on omnidirectional images.
The center of the omnidirectional vision sensor and the center of the moving volumetric laser light source are arranged on the same axis. As shown in Fig. 1, the omnidirectional vision sensor comprises a hyperboloid mirror 2, an upper lid 1, a transparent semicircular outer cover 3, a lower fixing seat 4, an imaging unit fixing seat 5, an imaging unit 6, a connection unit 7 and an upper cover 8. The hyperboloid mirror 2 is fixed on the upper lid 1; the connection unit 7 joins the lower fixing seat 4 and the transparent semicircular outer cover 3 into one body; the transparent semicircular outer cover 3 is screwed together with the upper lid 1 and the upper cover 8; the imaging unit 6 is screwed into the imaging unit fixing seat 5, which is screwed into the lower fixing seat 4; the output of the imaging unit 6 in the omnidirectional vision sensor is connected to the microprocessor.
As shown in Fig. 3, the moving volumetric laser light source produces the three-dimensional body-structured projection light and comprises: a guiding support bar 2-1, a laser-generator assembly unit 2-2, a chassis 2-3, a linear motor moving rod 2-4, a linear motor unit 2-5, blue line laser generating units 2-6, red line laser generating units 2-7 and green line laser generating units 2-8.
The laser-generator assembly unit 2-2 has 12 holes in total, in groups of 4: a blue line laser generating unit mounting hole group, a red line laser generating unit mounting hole group and a green line laser generating unit mounting hole group. The axes of the 4 red mounting holes are orthogonal to the axis of the cylinder of the assembly unit 2-2; the axes of the 4 blue mounting holes are inclined at θc to the cylinder axis; and the axes of the 4 green mounting holes are inclined at θa to the cylinder axis. The 12 holes are evenly distributed at 90° intervals around the circumference of the cylinder, which guarantees that their axes intersect at the same point on the cylinder axis of the assembly unit 2-2, as shown in Fig. 6.
The blue line laser generating units 2-6 are fixed in the holes of the blue mounting hole group of the assembly unit 2-2, as shown in Fig. 6; after this combination the blue line lasers form one omnidirectional plane laser source emitting blue light. Likewise, the red line laser generating units 2-7 fixed in the red mounting hole group form an omnidirectional plane laser source emitting red light, and the green line laser generating units 2-8 fixed in the green mounting hole group form an omnidirectional plane laser source emitting green light. Once the corresponding line laser generating units are fixed in all 12 holes of the assembly unit 2-2, the volumetric laser source is complete: it projects in turn the omnidirectional plane lasers of the three colors blue, red and green, the red plane laser perpendicular to the guiding support bar, the blue plane laser inclined at θc to the axis of the guiding support bar, and the green plane laser inclined at θa to that axis.
The moving volumetric laser light source is assembled by nesting the volumetric laser source on the guiding support bar 2-1 to form a moving pair; the guiding support bar 2-1 is fixed vertically on the chassis 2-3, the linear motor unit 2-5 is fixed on the chassis 2-3, and the upper end of the linear motor moving rod 2-4 is fixedly connected with the volumetric laser source. Controlling the linear motor unit 2-5 moves the rod 2-4 up and down, driving the volumetric laser source up and down under the guidance of the support bar 2-1 and forming a moving volumetric laser source that scans the panorama to be reconstructed. The linear motor unit 2-5 is a miniature AC reciprocating gear motor, model 4IK25GNCMZ15S500, with a reciprocating travel of 700 mm, a reciprocating speed of 15 mm/s and a maximum thrust of 625 N.
The omnidirectional vision sensor is mounted by a connecting plate on the guiding support bar 2-1 of the moving volumetric laser light source, forming an active omnidirectional vision sensor, as shown in Fig. 4; the omnidirectional vision sensor is connected to the microprocessor through a USB interface.
The application software of the microprocessor consists of three parts: calibration, 3D reconstruction and 3D panorama rendering display. The calibration part mainly includes: a video image reading module, an omnidirectional vision sensor calibration module, an omnidirectional plane laser information parsing module and a combined calibration module. The 3D reconstruction part mainly includes: the video image reading module, a position estimation module for the linear motor of the moving volumetric laser light source, the omnidirectional plane laser information parsing module, a computing module for the point cloud geometry of the moving plane, a module fusing the point cloud geometry and color information with the position of the moving plane, a panoramic 3D model construction module, a 3D panorama model generation module and a storage unit. The 3D panorama rendering display part mainly includes: the object-centered panorama rendering module, the person-centered perspective-view rendering module, the panoramic perspective cyclic-display rendering module, the stereoscopic perspective-view rendering module, the panoramic stereogram cyclic-display rendering module and the viewing-distance-varying stereoscopic perspective-view rendering module.
The video image reading module reads the video images of the omnidirectional vision sensor and stores them in the storage unit; its output is connected to the omnidirectional vision sensor calibration module and the omnidirectional plane laser information parsing module.
The omnidirectional vision sensor calibration module determines the parameters of the mapping between points in three-dimensional space and the two-dimensional image points on the camera imaging plane, as shown in Fig. 2. The calibration procedure is to shoot several groups of panoramic images of a calibration board placed around the omnidirectional vision sensor, set up equations between the spatial points and the imaging-plane pixels, and solve for the optimum with an optimization algorithm; the results, the calibration parameters of the omnidirectional vision sensor used in the present invention, are shown in Table 1.
Table 1 Calibration results of the ODVS
After the intrinsic and extrinsic parameters of the omnidirectional vision sensor are calibrated, the correspondence between a pixel of the imaging plane and its incident ray, i.e. the incidence angle, can be established, as expressed by formula (1);
where α is the incidence angle of a point cloud, ||u″|| is the distance from the point on the imaging plane to the center of that plane, and a0, a1, a2 and aN are the calibrated intrinsic and extrinsic parameters of the omnidirectional vision sensor; formula (1) establishes the mapping table between any pixel of the imaging plane and its incidence angle.
After the calibration of the omnidirectional vision sensor employed in the present invention, the relation between a point ||u″|| on the imaging plane and the incidence angle α of the point cloud can be expressed by an equation;
In the present invention the two limit positions of the moving volumetric laser light source are determined by the travel of the linear motor unit and the projection angles of the volumetric laser source. The upper limit position is set by the eye height of a standing adult looking straight ahead, with an initial value of 1500 mm; the lower limit position is set by the eye height of a squatting adult looking straight ahead, with an initial value of 800 mm. The travel of the linear motor unit is 700 mm; at the upper limit position there is also a 30° upward viewing angle, and at the lower limit position a 30° downward viewing angle. The omnidirectional vision sensor used in the present invention covers nearly 93° of vertical field of view, with a 28° elevation angle and a 65° depression angle, and a 360° horizontal field of view. By the design of the invention, the distance between the moving volumetric laser light source and the single viewpoint Om of the omnidirectional vision sensor is calculated by formula (3);
where the height of the single viewpoint Om of the omnidirectional vision sensor above the ground, the upper limit position hup limit of the moving volumetric laser light source and the displacement hLaserMD of the moving volumetric laser light source determine h(z), the distance between the moving volumetric laser light source and the single viewpoint Om of the omnidirectional vision sensor, as shown in Fig. 4.
Here the image acquisition rate of the omnidirectional vision sensor is set at 15 frames/s, and in the present invention the vertical reciprocating speed of the moving volumetric laser source is set at 15 mm/s; between two adjacent frames the moving volumetric laser light source moves 1 mm vertically, and the distance between the two limit positions is 700 mm, so one vertical scan takes 47 s and generates 700 panoramic slice images in total. During one vertical scan 700 frames are processed, each frame containing the three projection lines of the blue laser line, the red laser line and the green laser line; the 1st and the 700th frame are the scan slice panoramas at the two limit positions.
The omnidirectional plane laser information parsing module parses the laser projection information on the panoramic image. The method for parsing the blue, red and green laser projection points on the panorama relies on the fact that the brightness of the pixels at the laser projection points is greater than the average brightness of the imaging plane: the RGB color space of the panorama is first converted into the HSI color space, and 1.2 times the average brightness of the imaging plane is used as the threshold for extracting the blue, green and red laser projection points. The extracted points must then be distinguished by color, which in the present invention is judged from the hue value H of the HSI color space: a point with H in (225, 255) is judged a blue laser projection point, H in (0, 30) a red one, and H in (105, 135) a green one; the remaining pixels are judged to be interference. To obtain the accurate location of the laser projection line, the present invention extracts the center of the projection line by the Gaussian approximation method; the algorithm is:
Step1: set the initial azimuth angle β = 0;
Step2: starting from the center point of the panoramic image, retrieve the green, red and blue laser projection points along azimuth β; where several consecutive pixels of a given laser color occur at azimuth β, select the I component (brightness) of the HSI color space and estimate the center of the laser projection line by the Gaussian approximation method from the three contiguous pixels nearest the brightness peak; the calculation is given by formula (4),
where f(i−1), f(i) and f(i+1) are the brightness values of the three adjacent pixels nearest the peak brightness, d is the correction value, and i is the index of the pixel counted from the image center. The estimated center of the color laser projection line is therefore (i + d), which corresponds to ||u″|| in formula (1). Since the color laser projections appear on the panoramic image in the order green, red, blue, this order is used in turn to exclude other projection noise from the laser information parsing;
Step3: change the azimuth and continue retrieving laser projection points, i.e. β = β + Δβ, Δβ = 0.36°;
Step4: if β = 360° holds, the retrieval ends; otherwise go to Step2.
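Formula (4) itself is carried in a figure not reproduced here, so the sketch below uses the standard three-point Gaussian interpolation for the sub-pixel correction d, together with the hue thresholds and the 1.2x mean-brightness rule quoted above; treat the Gaussian formula as an assumption about what formula (4) contains.

```python
import numpy as np

def classify_laser_pixel(H, I, mean_I):
    """Hue/intensity test from the parsing module: a pixel is a candidate
    only if its brightness exceeds 1.2x the imaging-plane mean, and its
    color is then decided from the HSI hue value H."""
    if I <= 1.2 * mean_I:
        return "noise"
    if 225 < H < 255:
        return "blue"
    if 0 < H < 30:
        return "red"
    if 105 < H < 135:
        return "green"
    return "noise"

def subpixel_center(f_prev, f_peak, f_next):
    """Sub-pixel correction d from the three brightness values nearest the
    peak; the line center is then i + d.  This is the common three-point
    Gaussian fit, assumed here to stand in for formula (4)."""
    lp, lc, ln = np.log(f_prev), np.log(f_peak), np.log(f_next)
    return 0.5 * (lp - ln) / (lp - 2.0 * lc + ln)
```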
Another omnidirectional plane laser parsing method is a laser projection point extraction algorithm based on inter-frame difference: laser projection points are obtained by differencing the panoramic slice images captured at two adjacent height positions. While the moving laser plane scans up or down, frames on different cut planes differ markedly in the vertical direction; subtracting two frames gives the absolute value of their brightness difference, and thresholding it extracts the laser projection points in the panoramic slice image. The green, red and blue projection lines at a given azimuth β are then distinguished according to the order in which the color laser projections appear on the panoramic image.
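A minimal sketch of this inter-frame difference alternative, assuming 8-bit grayscale slice panoramas; the threshold choice is left open, as in the text.

```python
import numpy as np

def frame_diff_mask(slice_a, slice_b, thresh):
    """Absolute brightness difference of two adjacent-height slice panoramas;
    pixels above the threshold are candidate laser projection points."""
    diff = np.abs(slice_a.astype(np.int16) - slice_b.astype(np.int16))
    return diff > thresh
```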
The combined calibration module calibrates the active omnidirectional vision sensor. Since the assembly of the omnidirectional vision sensor and the moving volumetric laser light source inevitably introduces assembly errors, combined calibration minimizes them. The procedure is: first, place the active omnidirectional vision sensor inside a hollow cylinder of 1000 mm diameter with the sensor axis coinciding with the cylinder axis, as shown in Fig. 4. Then switch the moving volumetric laser light source ON to emit the lasers, move it to the upper limit position hup limit, capture a panoramic image, check whether the centers of the blue, red and green light circles on the panoramic image coincide with the center of the panoramic image, and check the circularity of the three circles; if a center is inconsistent or a circularity requirement is not met, the connection between the omnidirectional vision sensor and the moving volumetric laser light source must be adjusted. Next, move the moving volumetric laser light source to the lower limit position hdown limit, capture a panoramic image, and repeat the same center and circularity checks and adjustments. Finally, store the upper limit position hup limit, the lower limit position hdown limit, the maximum travel hLaserMD of the moving volumetric laser light source and the calibration parameters of the omnidirectional vision sensor in the combined calibration database, to be called during 3D reconstruction.
The omnidirectional vision sensor of the present invention uses a high-definition imaging chip with 4096 × 2160 resolution; the moving step of the moving volumetric laser light source is 1 mm and the vertical scanning range is 700 mm, so the slice panoramic images produced by the moving volumetric laser light source have a resolution of 700 slices. One complete vertical scan thus samples the geometric information and color information of every pixel on the panoramic image, through fusion up to 3D reconstruction and rendering display output, as shown in Fig. 10.
The processing flow of the 3D reconstruction part is:
StepA: read the panoramic video image through the video image reading module;
StepB: estimate the position of the linear motor of the moving volumetric laser light source from the motor's moving speed and the time of arrival at the two limit points;
StepC: parse the omnidirectional plane laser information on the panoramic image and calculate the point cloud geometry of the moving plane;
StepD: read from memory the panoramic video image captured without laser projection, and fuse the moving-plane geometry obtained in StepC with the color information;
StepE: progressively build the panoramic 3D model;
StepF: judge whether a limit position has been reached; if so, go to StepG, otherwise go to StepA;
StepG: switch the moving volumetric laser light source OFF, read the panoramic video image without laser projection and store it in the memory unit, output the 3D panorama model and save it to the storage unit, switch the moving volumetric laser light source ON, and go to StepA.
The processing flow of the 3D reconstruction is elaborated below. In StepA a dedicated thread reads the panoramic video at a rate of 15 frames/s, and the captured panoramic images are stored in a memory unit for subsequent processing to call;
StepB mainly estimates the current position of the moving volumetric laser light source. When reconstruction starts, the initial position of the moving volumetric laser light source is specified at the upper limit position hup limit and the initial step control value is zmove(j) = 0; the moving step of the moving volumetric laser light source between two adjacent frames is Δz, i.e. the following relation holds,
zmove(j+1) = zmove(j) + Δz (5)
where zmove(j) is the step control value at frame j, zmove(j+1) that at frame j+1, and Δz the moving step of the moving volumetric laser light source; it is specified that Δz = 1 mm when moving downward from the upper limit position hup limit, and Δz = −1 mm when moving upward from the lower limit position hdown limit; when the program runs, the direction is decided by the following relation,
Substituting the calculated value zmove(j+1) of formula (5) for hLaserMD in formula (3) gives the distance h(z) between the moving volumetric laser light source and the single viewpoint Om of the omnidirectional vision sensor.
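A sketch of this StepB bookkeeping: the step control value accumulates Δz per frame, and the sign of Δz flips at the two limit positions; the flip rule reconstructs the direction test of the relation following formula (5), whose original formula image is not reproduced in this text.

```python
def advance_laser_position(z_move, dz, travel=700.0):
    """One frame of formula (5): z_move(j+1) = z_move(j) + dz.
    z_move = 0 corresponds to the upper limit position and z_move = travel
    to the lower one; dz = +1 mm while moving down, -1 mm while moving up."""
    if z_move >= travel:
        dz = -1.0          # reached the lower limit position, move back up
    elif z_move <= 0.0:
        dz = +1.0          # at the upper limit position, move down
    return z_move + dz, dz
```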
In StepC, the panoramic picture and the comprehensive face laser intelligence parsing module of use in internal storage location are read from complete Comprehensive face laser intelligence is parsed on scape image, mobile millet cake cloud geological information is then calculated.
If the spatial positional information of point cloud is represented with Gauss coordinate system, the space coordinates of each point cloud is relative to complete The single view O of orientation vision sensormDetermined with 3 values for the Gauss coordinate of Gauss coordinate origin, i.e., (R, α, β), R is certain Single view O of one cloud to omnibearing vision sensormDistance, α is some point cloud to omnibearing vision sensor Single view OmIncidence angle, β be some point cloud to omnibearing vision sensor single view OmAzimuth, for accompanying drawing 13 In point cloudWithPoint, the computational methods of the cloud data under Gauss coordinate are given by formula (7), (8), (9),
In formula, (β)greenIt is the single view O of green laser point cloud projection to omnibearing vision sensormAzimuth, (β)redIt is the single view O of red laser point cloud projection to omnibearing vision sensormAzimuth, (β)blueFor blue laser is thrown Single view O of the shadow point cloud to omnibearing vision sensormAzimuth, θBIt is the angle between the blue incident line and Z axis, θGFor Angle between the green incident line and Z axis, h (z) is single view O of the moving body LASER Light Source to omnibearing vision sensorm's Distance, αaIt is the single view O of green laser point cloud projection to omnibearing vision sensormIncidence angle, αbFor red laser is projected Single view O of the point cloud to omnibearing vision sensormIncidence angle, αcIt is blue laser point cloud projection to all-directional vision sensing The single view O of devicemIncidence angle, RaIt is the single view O of green laser point cloud projection to omnibearing vision sensormDistance, Rb It is the single view O of red laser point cloud projection to omnibearing vision sensormDistance, RcIt is blue laser point cloud projection to complete The single view O of orientation vision sensormDistance, | | u " | | (β)greenIt is correspondence of the green laser subpoint on imaging plane Point arrives the distance between panoramic imagery planar central, | | u " | | (β)redIt is correspondence of the red laser subpoint on imaging plane Point arrives the distance between panoramic imagery planar central, | | u " | | (β)blueIt is correspondence of the blue laser subpoint on imaging plane Point arrives the distance between panoramic imagery planar central.
If cloud will be putWithPoint cartesian coordinate systemWithIf representing, refer to the attached drawing 11, its computational methods are given by formula (10), (11), (12),
In formula, RaIt is the single view O of green laser point cloud projection to omnibearing vision sensormDistance, RbFor red swashs Single view O of the light point cloud projection to omnibearing vision sensormDistance, RcIt is blue laser point cloud projection to omni-directional visual The single view O of sensormDistance, αaIt is the single view O of green laser point cloud projection to omnibearing vision sensormIncidence Angle, αbIt is the single view O of red laser point cloud projection to omnibearing vision sensormIncidence angle, αcIt is blue laser subpoint Single view O of the cloud to omnibearing vision sensormIncidence angle, (β)greenFor green laser point cloud projection to omni-directional visual is passed The single view O of sensormAzimuth, (β)redIt is the single view O of red laser point cloud projection to omnibearing vision sensormSide Parallactic angle, (β)blueIt is the single view O of blue laser point cloud projection to omnibearing vision sensormAzimuth.
Comprehensive 360 ° of blue, red and green comprehensive face laser projection institute has been traveled through in StepC calculating process The cloud data of generation;Due to using high definition imager chip in the present invention, in order to be agreed with vertical scanning precision, adopt here With material calculation being Δ β=0.36 travels through whole 360 ° of azimuth, and accompanying drawing 16 is moving body LASER Light Source high at some Scanning result panorama sketch on degree position, green short dash line is the point cloud produced by green comprehensive face laser projection on panorama sketch DataRed dotted line long is the cloud data produced by red comprehensive face laser projection Blue short dash line is the cloud data produced by blue comprehensive face laser projectionTraversal is specifically described below Algorithm;
StepⅠ:Initial orientation angle beta=0 is set;
StepⅡ:Using comprehensive face laser intelligence parsing module, along directions of rays Access Points cloudWith To corresponding with cloud data on imaging plane | | u " | | (β)green、||u″||(β)redWith | | u " | | (β)blueThree points, Point cloud is calculated with formula (7)Distance value RaAnd incident angle αa, point cloud is calculated with formula (8)Distance value RbAnd incidence angle αb, point cloud is calculated with formula (9)Distance value RcAnd incident angle αc;Then point cloud is calculated with formula (10) againIn Descartes Under coordinate systemPoint cloud is calculated with formula (11)Under cartesian coordinate systemWith public affairs Formula (12) calculates point cloudUnder cartesian coordinate systemIn this calculation procedure, traversal azimuthal angle beta difference It is updated to (β) in formula (10), (11), (12)green、(β)red、(β)blue;Above-mentioned calculating data are stored in interior deposit receipt In unit;
StepⅢ:β ← β+Δ β, Δ β=0.36 judges whether β=360 set up, if set up to terminate to calculate, otherwise turns To Step II.
In StepD, the panoramic video image without laser projection is first read from memory, and the geometric information and color information of the point clouds are fused according to the StepC results; the fused point cloud data contain both the geometry and the color of each point cloud, i.e. a point cloud is expressed as (R, α, β, r, g, b). The fusion algorithm is described below,
Step①: set the initial azimuth angle β = 0;
Step②: according to the azimuth β and the two points ||u″||(β)red and ||u″||(β)green on the sensor plane corresponding to the point cloud data, read the (r, g, b) color data of the related pixels on the panoramic video image without laser projection and merge them with the corresponding (R, α, β) obtained by the processing of StepC, giving the point cloud geometry and color information (R, α, β, r, g, b);
Step③: β ← β + Δβ, Δβ = 0.36°; if β = 360° holds, the calculation ends and the results are saved in the storage unit; otherwise go to Step②.
In StepE the panoramic 3D model is progressively built from the StepD results. In the present invention, one vertical scan of the moving volumetric laser light source, i.e. from one limit position to the other, completes the construction of the panoramic 3D model; each moving step of the scan produces the slice point cloud at one height, as shown in Fig. 12. These data are saved with the height of the moving volumetric laser light source as index, so the slice point cloud data can be accumulated in generation order into the final panoramic 3D model with geometric and color information. By this description, the present invention has two different modes: downward panoramic 3D reconstruction and upward panoramic 3D reconstruction;
StepF judges whether the moving volumetric laser light source has reached a limit position, i.e. whether zmove(j) = 0 or zmove(j) = hLaserMD holds; if so, go to StepG, otherwise go to StepA.
In StepG the main work is to output the reconstruction result and prepare for the next reconstruction. The procedure is: first switch the moving volumetric laser light source OFF, read the panoramic video image without laser projection and save it in the memory unit; then output the 3D reconstructed panorama model and save it to the storage unit. Because high-resolution acquisition is employed both in generating the slice point cloud data and in generating the omnidirectional point cloud data of each slice, every pixel on the imaging plane possesses the geometric information and color information corresponding to an actual point cloud, which effectively avoids the correspondence, tiling and branching problems of 3D reconstruction. Finally, switch the moving volumetric laser light source ON and go to StepA for a new 3D panorama model reconstruction.
The above processing yields the point cloud data of the 3D panorama model with the single viewpoint Om of the omnidirectional vision sensor as the coordinate origin. Scan slices are generated one by one as the moving panoramic plane laser projection source scans the panoramic scene, as shown in Fig. 12: the blue part of the figure is the point cloud data produced by the blue omnidirectional plane laser projection, the red part that produced by the red projection, and the green part that produced by the green projection.
These scan slice images form naturally as the moving panoramic plane laser projection source moves during scanning; after each panoramic slice image is scanned, the whole slice is traversed by azimuth and the point cloud data produced by the blue, red and green omnidirectional plane laser projections are extracted. The point cloud data are stored as a matrix: rows 1-700 store the point clouds produced by the blue plane laser projection, rows 701-1400 those produced by the red, and rows 1401-2100 those produced by the green; the columns represent the azimuth scan from 0° to 359.64°, 1000 columns in total. The point cloud storage matrix is therefore a 2100 × 1000 matrix, each point cloud containing the 6 attributes (x, y, z, R, G, B), which forms an ordered point cloud data set; the advantage of an ordered data set is that, the relation of adjacent points being known in advance, neighborhood operations are more efficient.
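The ordered storage described above maps directly onto a 2100 × 1000 × 6 array; a minimal sketch follows (0-based indexing, unlike the 1-based row ranges in the text).

```python
import numpy as np

# rows 0-699: blue plane laser; 700-1399: red; 1400-2099: green;
# 1000 azimuth columns at 0.36 deg; 6 attributes (x, y, z, R, G, B) per cell
cloud_matrix = np.zeros((2100, 1000, 6), dtype=np.float32)

def store_point(band, slice_idx, beta_idx, xyzrgb):
    """band: 0 = blue, 1 = red, 2 = green; slice_idx: 0..699 height slice."""
    cloud_matrix[band * 700 + slice_idx, beta_idx] = xyzrgb
```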
The object-centered panorama rendering module reads the data in the point cloud storage matrix and calls the pcl_visualization library in PCL; through this library the acquired point cloud data file can be rapidly modeled in 3D and displayed visually, realizing the object-centered panorama rendering display.
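The module above calls PCL's pcl_visualization from C++; as an analogous illustration only, the sketch below shows the same object-centered display of the storage matrix with Open3D in Python. It is not the patent's implementation.

```python
import numpy as np
import open3d as o3d

def show_object_centered(cloud_matrix):
    """Flatten the 2100 x 1000 x 6 storage matrix into a colored point
    cloud and display it, mirroring the pcl_visualization usage."""
    pts = cloud_matrix.reshape(-1, 6)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts[:, :3])
    pcd.colors = o3d.utility.Vector3dVector(pts[:, 3:] / 255.0)
    o3d.visualization.draw_geometries([pcd])
```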
Perspective view focusing on people shows drafting module, for the visual angle that is according to the observation in 3D reconstruct environment and Field range draws 3D perspective views, and its rendering algorithm step is as follows:
STEP1 it is) origin of coordinates O with the single view of panoramic vision sensorm(0,0,0), sets up three-dimensional column space seat Mark system;
STEP 2) determine the size of see-through window, on the basis of the visual range of human eye, about 108 ° of width, with side Parallactic angle β is variable;Short transverse h is variable;
STEP 3) when the testing result of active panoramic vision sensor is obtained, obtain with panoramic vision sensor Single view be origin of coordinates Om(h, β, r), the scope of short transverse is just with minimum range h for the cloud data of (0,0,0)minWith Ultimate range hmaxIt is determined that, have N row data;In view of in width, azimuthal angle beta is connected with 0.36 ° of angle as step-length It is continuous to obtain cloud data, can produce 1000 cloud datas in each section scanning process;Here with a left side for see-through window Lateral edges are initial orientation angle beta1, then see-through window right side edge be β1+ 300, M column datas are had, here M=300;
STEP 4) according to selected initial orientation angle beta1Determine first data, such as initial orientation angle beta1=36 °, that The 100th data are just selected to start to 400 data to be in minimum range hminThe first column data, opened with 1100 data It is h to begin to 1400 dataminSecond column data ... of+Δ h
STEP 5) see-through window display data matrix processing, such as the minimum range h for obtaining at presentminWith maximum away from From hmaxIt is determined that, have 2100 row data;Specific algorithm is as follows:
STEP51:Determine initial orientation angle beta1, read minimum range hminWith ultimate range hmaxData;
STEP52:H=hmin
STEP53:Read the initial orientation angle beta apart from h1Start to β1+ 300 data, if β1+ 300 >=1000, then β1 + 300=β1+300-1000;Using these values as matrix new data line, h=h+ Δs h;
STEP54:Judge h >=hmaxIf being unsatisfactory for going to STEP53;
STEP55:Terminate.
The matrix of one 80 × 300 is obtained by above-mentioned algorithm;
STEP 6) all of three-dimensional coordinate in matrix is coupled together with triangle surface, connection method is first will Be attached with straight line per the data of a line, then the data of each row be attached with straight line, finally by matrix (i, J) cloud data with (i+1, j+1) is attached with straight line;The color of connecting line is average using the color of two points of connection Value;
STEP 7) Display all the connected triangular facets on an output device.
The working principle of the omnidirectional vision sensor is: light directed at the focal centre of the hyperbolic mirror is reflected towards its virtual focus according to the mirror characteristics of the hyperboloid. The real object is reflected by the hyperbolic mirror into the collector lens and imaged there; a point P(x, y) on the imaging plane corresponds to a point A(X, Y, Z) of the real object in space.
In accompanying figure 7: 2 — hyperboloid mirror; 12 — incident ray; 13 — real focus Om(0,0,c) of the hyperbolic mirror; 14 — virtual focus of the hyperbolic mirror, i.e. the centre Oc(0,0,−c) of the camera unit 6; 15 — reflected ray; 16 — imaging plane; 17 — space coordinates A(X,Y,Z) of the real object; 18 — space coordinates of the image incident on the hyperboloid mirror surface; 19 — point P(x,y) reflected onto the imaging plane.
The optical system formed by the hyperbolic mirror shown in accompanying figure 7 can be represented by the following equations:
$$\frac{X^2+Y^2}{a^2}-\frac{(Z-c)^2}{b^2}=-1 \quad (Z>0) \tag{20}$$

$$\beta=\tan^{-1}(Y/X) \tag{22}$$

$$\alpha=\tan^{-1}\frac{(b^2+c^2)\sin\gamma-2bc}{(b^2+c^2)\cos\gamma} \tag{23}$$
In these formulas X, Y, Z denote space coordinates; c denotes the focal parameter of the hyperbolic mirror, and 2c the distance between the two foci; a and b are respectively the lengths of the real and imaginary semi-axes of the hyperbolic mirror; β is the angle between the projection of the incident ray on the XY plane and the X axis, i.e. the azimuth; α is the angle between the projection of the incident ray on the XZ plane and the X axis, called here the incidence angle (a depression angle when α ≥ 0 and an elevation angle when α < 0); f denotes the distance from the imaging plane to the virtual focus of the hyperbolic mirror; γ denotes the angle between the catadioptric ray and the Z axis; and x, y denote a point on the imaging plane.
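For illustration, the back-projection of an image point into an azimuth/incidence-angle pair could be coded as below; equations (22) and (23) are taken from the text, while the expression for γ is our assumption, since its numbered equation is not reproduced above:

```cpp
#include <cmath>

struct Ray { double beta; double alpha; };   // azimuth and incidence angle

// b, c: mirror parameters; f: imaging-plane-to-virtual-focus distance.
Ray pixelToRay(double x, double y, double b, double c, double f) {
    double gamma = std::atan2(f, std::sqrt(x * x + y * y));  // assumed form of the missing equation
    double beta  = std::atan2(y, x);                         // azimuth, eq. (22); same for pixel and world point
    double alpha = std::atan2((b * b + c * c) * std::sin(gamma) - 2.0 * b * c,
                              (b * b + c * c) * std::cos(gamma));  // incidence angle, eq. (23)
    return {beta, alpha};
}
```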
Embodiment 2
The other structures and working process of this embodiment are the same as those of Embodiment 1, except that: to meet the demands of different reconstruction scenes, the hardware module of the moving volumetric laser light source is changed, i.e. the projection angle θa of the green laser line and the projection angle θc of the blue laser line in accompanying figure 4 are changed, so as to change the range of the vertical scan.
Embodiment 3
The rest is the same as Embodiment 1. To meet the demands of different 3D scene renderings, the human-centered panoramic perspective view cyclic display drawing module continuously changes the viewing angle in the 3D reconstructed environment according to the observer, determining the field of view from an ergonomic standpoint to draw 3D perspective views. Its core is to let the azimuth β change slowly and continuously, generating the display data matrix under each azimuth β. The key algorithm is as follows, with a code sketch after the steps:
STEP 1: Determine the initial azimuth β1 and read the minimum distance hmin and maximum distance hmax;
STEP 2: β = β1; determine the number of display cycles N; n = 0;
STEP 3: h = hmin;
STEP 4: Read the data at distance h from azimuth β to β + 300; if β + 300 ≥ 1000, then β + 300 = β + 300 − 1000. Take these values as a new data row of the matrix; h = h + Δh;
STEP 5: Judge whether h ≥ hmax; if not, go to STEP 4;
STEP 6: Store the display data matrix for the current β, connect the display data matrix with triangular facets and display the output; β = β + Δβ;
STEP 7: Judge whether β ≥ 3600;
STEP 8: If so, β = β − 3600 and n = n + 1;
STEP 9: Judge whether n ≥ N; if not, go to STEP 3;
STEP 10: End.
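A sketch of this cyclic display loop, assuming hypothetical helpers buildDisplayMatrix() (STEP 3 through STEP 5) and drawTriangleMesh() (the facet connection and output of STEP 6):

```cpp
#include <vector>

// Hypothetical helpers standing in for STEP 3 - STEP 6 above.
using DisplayMatrix = std::vector<std::vector<int>>;   // placeholder element type
DisplayMatrix buildDisplayMatrix(int beta);            // 80 x 300 window at azimuth beta
void drawTriangleMesh(const DisplayMatrix& m);         // connect facets and display

void cycleDisplay(int beta1, int N, int dBeta) {
    // Azimuth handled in 0.1-degree units, so one revolution is 3600 (cf. STEP 7 - STEP 8).
    int beta = beta1;
    for (int n = 0; n < N; ) {
        drawTriangleMesh(buildDisplayMatrix(beta));
        beta += dBeta;
        if (beta >= 3600) { beta -= 3600; ++n; }
    }
}
```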
Embodiment 4
The other structures and working process of this embodiment are the same as those of Embodiment 1, except that: to meet the demands of different 3D scene renderings, the human-centered stereo perspective view display drawing module continuously changes the viewing angle in the 3D reconstructed environment according to the observer, determining the field of view ergonomically to draw 3D stereo perspective views. That is, taking the perspective view generated by the human-centered perspective view display drawing module as the left-viewpoint image, a right-viewpoint image is generated; taking it as the right-viewpoint image, a left-viewpoint image is generated; and taking it as the median-eye image, both left and right viewpoint images are generated, thus producing a stereo pair. Accordingly, the coordinates of the new viewpoint must be calculated from the geometric relation between the single-viewpoint coordinates Om(0,0,0) of the ODVS and the space object point P(h, β, r);
Taking the single-viewpoint coordinates Om(0,0,0) of the ODVS as the coordinates of the right-eye viewpoint, the coordinates of the space point P at the left-eye viewpoint are calculated with formula (13):

$$h_R = h_L,\qquad r_R=\sqrt{B^2+r_L^2+2\,B\,r_L\cos\beta_L},\qquad \beta_R=\arcsin\!\left(\frac{r_L}{r_R}\sin\beta_L\right) \tag{13}$$

where hL and hR are the heights of the space point P at the left-eye and right-eye viewpoints, rL and rR are its distances to the left-eye and right-eye viewpoints, and βL and βR are its azimuths at the left-eye and right-eye viewpoints;
Taking the single-viewpoint coordinates Om(0,0,0) of the ODVS as the coordinates of the left-eye viewpoint, the coordinates of the space point P at the right-eye viewpoint are calculated with formula (14):

$$h_L = h_R,\qquad r_L=\sqrt{B^2+r_R^2+2\,B\,r_R\cos\beta_R},\qquad \beta_L=\arcsin\!\left(\frac{r_R}{r_L}\sin\beta_R\right) \tag{14}$$

where the symbols are as in formula (13);
Taking the single-viewpoint coordinates Om(0,0,0) of the ODVS as the coordinates of the median eye, the coordinates of the space point P at the left-eye and right-eye viewpoints are calculated with formula (15):

$$h_L = h,\quad r_L=\sqrt{(B/2)^2+r^2-B\,r\cos\beta},\quad \beta_L=\arcsin\!\left(\frac{r}{r_L}\sin\beta\right)$$
$$h_R = h,\quad r_R=\sqrt{(B/2)^2+r^2+B\,r\cos\beta},\quad \beta_R=\arcsin\!\left(\frac{r}{r_R}\sin\beta\right) \tag{15}$$

where the symbols are as in formula (13);
B is the interpupillary distance: for women it lies in the range 56~64 mm and for men 60~70 mm; here we take 60 mm, a distance acceptable for both sexes, i.e. B = 60;
The stereo perspective view rendering and display algorithm is as follows:
STEP 1: Determine the initial azimuth β1, read the minimum distance hmin and maximum distance hmax, and select the viewpoint: median eye = 0, left viewpoint = 1, right viewpoint = 2;
STEP 2: h = hmin;
STEP 3: Read the data at distance h from the initial azimuth β1 to β1 + 300; if β1 + 300 ≥ 1000, then β1 + 300 = β1 + 300 − 1000. According to the selected viewpoint: if 0 (median eye) is selected, formula (15) calculates the cylindrical coordinates of the space target point at both eyes; if 1 (left viewpoint) is selected, formula (14) calculates the cylindrical coordinates at the right eye and the original coordinate data serve as the left-eye cylindrical coordinates; if 2 (right viewpoint) is selected, formula (13) calculates the cylindrical coordinates at the left eye and the original coordinate data serve as the right-eye cylindrical coordinates. Take these values as new data rows of the left-viewpoint and right-viewpoint matrices respectively; h = h + Δh;
STEP 4: Judge whether h ≥ hmax; if not, go to STEP 3;
STEP 5: End;
The above algorithm yields two 80 × 300 matrices, the display matrices of the left and right viewpoints.
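A minimal sketch of the median-eye case of formula (15) as reconstructed above; the names are ours, angles are in radians, and B is in the same unit as r:

```cpp
#include <cmath>
#include <utility>

struct CylPoint { double h, beta, r; };   // height, azimuth, distance to viewpoint

// Formula (15): from a median-eye point (h, beta, r), derive the left- and
// right-eye cylindrical coordinates for an interpupillary distance B.
std::pair<CylPoint, CylPoint> medianToStereo(const CylPoint& p, double B) {
    CylPoint L{p.h, 0.0, 0.0}, R{p.h, 0.0, 0.0};   // height is unchanged
    L.r = std::sqrt(B * B / 4.0 + p.r * p.r - B * p.r * std::cos(p.beta));
    R.r = std::sqrt(B * B / 4.0 + p.r * p.r + B * p.r * std::cos(p.beta));
    L.beta = std::asin(p.r / L.r * std::sin(p.beta));
    R.beta = std::asin(p.r / R.r * std::sin(p.beta));
    return {L, R};
}
```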
All the three-dimensional coordinates in each matrix are connected with triangular facets: first the data of each row are joined with straight lines, then the data of each column, and finally the point cloud data at (i, j) and (i+1, j+1) in the matrix; the colour of each connecting line is the average of the colours of the two points it joins.
The stereo pair generated by the above processing is then displayed binocularly in stereo.
For binocular stereo display supported by the video card, e.g. in an OpenGL environment with stereo support, the OpenGL stereo display mode is enabled at the device-handle creation stage and the generated stereo pair is delivered to the left and right buffers respectively, realizing stereoscopic display.
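A minimal quad-buffered sketch under OpenGL/GLUT (GLUT is our choice for brevity; drawMesh() stands for a hypothetical renderer of one viewpoint matrix):

```cpp
#include <GL/glut.h>

void display() {
    glDrawBuffer(GL_BACK_LEFT);     // left-eye buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // drawMesh(leftViewpointMatrix);   // hypothetical: render the left-viewpoint mesh
    glDrawBuffer(GL_BACK_RIGHT);    // right-eye buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // drawMesh(rightViewpointMatrix);  // hypothetical: render the right-viewpoint mesh
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    // GLUT_STEREO requests the quad-buffered stereo mode of the video card.
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
    glutCreateWindow("panoramic stereo display");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```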
For video cards that do not support stereo display, the stereo pair is synthesized into a red-green complementary-colour stereo image: the red channel is extracted from one image of the stereo pair, the green and blue channels from the other, and the extracted channels are merged to form a complementary-colour stereo image.
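Using OpenCV (our choice; any image library with channel access would do), the complementary-colour merge could be sketched as:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Red channel from the left image, green and blue from the right image.
cv::Mat makeAnaglyph(const cv::Mat& left, const cv::Mat& right) {
    std::vector<cv::Mat> l, r;
    cv::split(left, l);            // OpenCV stores channels as B, G, R
    cv::split(right, r);
    std::vector<cv::Mat> channels = { r[0], r[1], l[2] };
    cv::Mat anaglyph;
    cv::merge(channels, anaglyph);
    return anaglyph;
}
```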
Embodiment 5
The other structures and working process of this embodiment are the same as those of Embodiment 1, except that: to meet the demands of different 3D scene renderings, the human-centered panoramic stereo view cyclic display drawing module continuously changes the viewing angle in the 3D reconstructed environment according to the observer, determining the field of view ergonomically to draw panoramic stereo views. Its core is to change the azimuth β continuously, generating the left and right stereo pair under each azimuth β. The key algorithm is as follows:
STEP 1: Determine the initial azimuth β1, read the minimum distance hmin and maximum distance hmax, and select the viewpoint: median eye = 0, left viewpoint = 1, right viewpoint = 2;
STEP 2: β = β1; determine the number of display cycles N; n = 0;
STEP 3: h = hmin;
STEP 4: Read the data at distance h from azimuth β to β + 300; if β + 300 ≥ 1000, then β + 300 = β + 300 − 1000. According to the selected viewpoint: if 0 (median eye) is selected, formula (15) calculates the cylindrical coordinates of the space target point at both eyes; if 1 (left viewpoint) is selected, formula (14) calculates the cylindrical coordinates at the right eye and the original coordinate data serve as the left-eye cylindrical coordinates; if 2 (right viewpoint) is selected, formula (13) calculates the cylindrical coordinates at the left eye and the original coordinate data serve as the right-eye cylindrical coordinates. Take these values as new data rows of the left-viewpoint and right-viewpoint matrices respectively; h = h + Δh;
STEP 5: Judge whether h ≥ hmax; if not, go to STEP 4;
STEP 6: Store the display data matrix for the current β, connect the left-viewpoint and right-viewpoint matrices with triangular facets respectively, generate the stereo pair and output the stereoscopic display; β = β + Δβ;
STEP 7: Judge whether β ≥ 3600;
STEP 8: If so, β = β − 3600 and n = n + 1;
STEP 9: Judge whether n ≥ N; if not, go to STEP 3;
STEP 10: End.
Embodiment 6
The other structures and working process of this embodiment are the same as those of Embodiment 1, except that: to meet the demands of different 3D scene renderings, the human-centered perspective view display rendering with changing viewing distance continuously changes the observer's own spatial position in the 3D environment, determining the field of view ergonomically to draw panoramic stereo views. As the observer roams the scene from far to near or from near to far, the position between the observer and the reconstructed space changes; this involves a change of the point cloud coordinates and multi-level display techniques, and the amount of point cloud data used for display changes accordingly.
First, we observe how the spatial positions of the point cloud change when the observer roams the scene from far to near. When considering the point cloud spatial positions, the median-eye calculation is still used here, i.e. the midpoint between the two eyes is taken as the observer's viewpoint;
When the observer approaches the scene, a space point P(h, β, r) relative to the observer's viewpoint becomes P(hD, βD, rD). Because this computation is cumbersome in cylindrical coordinates, we take the viewing direction as the Y axis; approaching the scene is then a movement of distance D along the Y axis. Therefore the point cloud P(h, β, r) in cylindrical coordinates is first transformed with formula (16)
into the point cloud P(x, y, z) in the Cartesian coordinate system. Without loss of generality, the displacement between the new viewpoint O'm(0,0,0) and the former viewpoint Om(0,0,0) is D, represented by formulas (17) and (18).
The point cloud coordinates P(x', y', z') in the coordinate system of the new viewpoint O'm(0,0,0) are calculated with formula (19), and the point cloud P(x', y', z') in the Cartesian coordinate system is then converted back into the point cloud P(h', β', r') in the cylindrical coordinate system. When approaching the scene along the Y axis, a simple operation suffices, i.e. y' = y + y0. Taking the moved viewpoint O'm(0,0,0) as the median eye, the coordinates of the space point P(h', β', r') at the left and right eye viewpoints are calculated with formula (19). For the moved viewpoint O'm(0,0,0), the field of view changes relative to the former viewpoint Om(0,0,0); the horizontal field of view was specified above as 108° and the vertical field of view as 66°. Therefore the field of view at the new viewpoint O'm(0,0,0) is determined from the result of formula (19) by the incidence angle α' and azimuth β' at the new viewpoint; the ASODVS-based human-centered perspective view display rendering is then used to draw the perspective view at the new viewpoint O'm(0,0,0), and the ASODVS-based human-centered panoramic stereo view cyclic display rendering is used for the human-centered panoramic stereo cyclic display at the new viewpoint O'm(0,0,0);
In the formula, h'L, r'L and β'L are the height, distance and azimuth of the space point P at the left-eye viewpoint under the new viewpoint O'm(0,0,0) coordinate system, and h'R, r'R and β'R are the height, distance and azimuth of the space point P at the right-eye viewpoint under the new viewpoint O'm(0,0,0) coordinate system.
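Since formulas (16)–(19) are not reproduced in this text, the following is only a hedged sketch of the coordinate round trip they describe, assuming r is the slant distance to the single viewpoint so that the horizontal radius is √(r² − h²):

```cpp
#include <cmath>

struct CylPt  { double h, beta, r; };
struct CartPt { double x, y, z; };

CartPt toCartesian(const CylPt& p) {                 // role of formula (16)
    double rho = std::sqrt(p.r * p.r - p.h * p.h);   // horizontal radius
    return { rho * std::cos(p.beta), rho * std::sin(p.beta), p.h };
}

// Role of formulas (17)-(19): shift the viewpoint by D along the viewing (Y) axis
// and re-express the point in the new viewpoint's cylindrical coordinates.
CylPt viewFromMovedViewpoint(const CylPt& p, double D) {
    CartPt c = toCartesian(p);
    c.y -= D;                                        // observer advances D toward the scene
    double r = std::sqrt(c.x * c.x + c.y * c.y + c.z * c.z);
    return { c.z, std::atan2(c.y, c.x), r };         // back to (h', beta', r')
}
```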
When displaying at the new viewpoint, the distance from the new viewpoint to the root node is first judged; when the distance is small, i.e. when the point cloud data used for display fall below some threshold, lower-level point cloud data are further generated by interpolation from the adjacent root node data. Generating lower-level point cloud data, i.e. interpolation, is a basic operation in volume rendering; because of its large computational load, it is realized by fast interpolation in the volume rendering system.
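As a simple illustration of generating denser, lower-level cloud data, the midpoint of two neighbouring points can be interpolated as below; the patent's fast volume-rendering interpolation is not reproduced here, so this is only an assumption-level sketch reusing the CloudPoint type from the earlier storage sketch:

```cpp
// Linear midpoint interpolation between two neighbouring cloud points;
// positions and colours are both averaged.
CloudPoint midpoint(const CloudPoint& a, const CloudPoint& b) {
    CloudPoint m;
    m.x = 0.5f * (a.x + b.x);
    m.y = 0.5f * (a.y + b.y);
    m.z = 0.5f * (a.z + b.z);
    m.R = static_cast<std::uint8_t>((a.R + b.R) / 2);
    m.G = static_cast<std::uint8_t>((a.G + b.G) / 2);
    m.B = static_cast<std::uint8_t>((a.B + b.B) / 2);
    return m;
}
```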

Claims (9)

1. A 3D environment dubbing system based on active panoramic vision, comprising an omnidirectional vision sensor, a moving volumetric laser light source and a microprocessor for performing 3D panorama reconstruction and 3D panorama drawing output on omnidirectional images, the omnidirectional vision sensor being mounted on the guiding support bar of the moving volumetric laser light source, characterised in that:
said moving volumetric laser light source comprises a volumetric laser source moving up and down along the guiding support bar, the volumetric laser source having a first omnidirectional plane laser perpendicular to the guiding support bar, a second omnidirectional plane laser inclined at an angle θc to the axis of the guiding support bar, and a third omnidirectional plane laser inclined at an angle θa to the axis of the guiding support bar;
said microprocessor is divided into a demarcation part, a 3D reconstruction part and a 3D panorama display drawing part;
said demarcation part determines the calibration parameters of the omnidirectional vision sensor, and parses the laser projection information corresponding to the first, second and third omnidirectional plane lasers on the panoramic image captured by the omnidirectional vision sensor;
said 3D reconstruction part calculates the point cloud geometric information of the moving plane according to the position of the moving volumetric laser light source and the pixel coordinate values related to the laser projection information, fuses the point cloud geometric information of the moving plane with the colour information of each omnidirectional plane laser, and builds the panorama 3D model;
said 3D panorama display drawing part includes:
a human-centered perspective view display drawing module, which draws a human-centered perspective view according to said panorama 3D model and the observer's viewing angle and field of view in the 3D reconstructed environment.
2. The 3D environment dubbing system based on active panoramic vision as claimed in claim 1, characterised in that said 3D panorama display drawing part further includes:
a panoramic perspective view cyclic display drawing module, which draws a human-centered panoramic perspective cyclic display view according to said panorama 3D model and the cyclic change of the observer's viewing angle and field of view in the 3D reconstructed environment;
a stereo perspective view display drawing module, which, from said human-centered perspective view, generates right-viewpoint, left-viewpoint and left-and-right viewpoint images to draw a human-centered stereo perspective view;
a panoramic stereo view cyclic display drawing module, which, according to said panorama 3D model and the cyclic change of the observer's viewing angle and field of view in the 3D reconstructed environment, generates the left and right stereo image pair under each azimuth β by continuously changing the azimuth β, to draw a human-centered panoramic stereo perspective cyclic display view;
and a viewing-distance-change stereo perspective view display drawing module, which, according to said panorama 3D model and the changes of the observer's viewing distance and field of view in the 3D reconstructed environment, draws a human-centered panoramic stereo perspective display view while the viewing distance changes continuously.
3. A 3D panorama display drawing method based on the 3D environment dubbing system of claim 1 or 2, characterised by comprising the steps of:
1) shooting the panoramic image formed by the projections of the moving volumetric laser light source with the omnidirectional vision sensor;
2) determining the calibration parameters of the omnidirectional vision sensor according to said panoramic image, and parsing the laser projection information corresponding to the first, second and third omnidirectional plane lasers;
3) calculating the point cloud geometric information of the moving plane according to the position of the moving volumetric laser light source and the pixel coordinate values related to the laser projection information, fusing the point cloud geometric information of the moving plane with the colour information of each omnidirectional plane laser, and building the panorama 3D model;
4) drawing the human-centered perspective view according to said panorama 3D model and the observer's viewing angle and field of view in the 3D reconstructed environment; the specific steps are as follows:
STEP 1) taking the single viewpoint of the omnidirectional vision sensor as the coordinate origin Om(0,0,0) and establishing a three-dimensional cylindrical coordinate system;
STEP 2) determining the size of the see-through window according to the visual range of the human eye, with the azimuth β and the height h as the window variables, and obtaining the point cloud data (h, β, r) corresponding to the see-through window, r being the distance from the corresponding space point to the single viewpoint;
STEP 3) generating the data matrix according to the step of the azimuth β, the range of the height h and said point cloud data (h, β, r);
STEP 4) connecting all the three-dimensional coordinates in the data matrix with triangular facets, the colour of each connecting line being the average of the colours of the two points it joins;
STEP 5) outputting and displaying all the connected triangular facets, completing the drawing of the human-centered perspective view.
4. The 3D panorama display drawing method as claimed in claim 3, characterised in that said 3D panorama display drawing method further includes drawing a human-centered panoramic perspective cyclic display view: by continuously changing the azimuth β, the display data matrix under each azimuth β is generated, completing the drawing of the panoramic perspective cyclic display view; the algorithm for the display data matrix is as follows:
4.1) determine the initial azimuth β1 and read the minimum distance hmin and maximum distance hmax;
4.2) assign β1 to β, determine the number of display cycles N, and initialize n to 0;
4.3) assign hmin to h;
4.4) read the data at distance h from azimuth β to β + 300; if β + 300 ≥ 1000, then β + 300 = β + 300 − 1000; take these values as a new data row of the matrix, and assign h + Δh to h;
4.5) judge whether h ≥ hmax; if not, go to 4.4);
4.6) store the display data matrix of the current β, connect the display data matrix with triangular facets and display the output; assign β + Δβ to β;
4.7) judge whether β ≥ 3600;
4.8) if so, assign β − 3600 to β and n + 1 to n;
4.9) judge whether n ≥ N; if not, go to 4.3).
5. The 3D panorama display drawing method as claimed in claim 3, characterised in that said 3D panorama display drawing method further includes drawing a human-centered stereo perspective view, the specific rendering algorithm being as follows:
5.1: determine the initial azimuth β1, read the minimum distance hmin and maximum distance hmax, and determine the viewpoint;
5.2: assign hmin to h;
5.3: read the data at distance h from the initial azimuth β1 to β1 + 300; if β1 + 300 ≥ 1000, then β1 + 300 = β1 + 300 − 1000; according to the determined viewpoint: if the median eye is selected, calculate the cylindrical coordinates of the space target point at both eyes; if the left viewpoint is selected, calculate the cylindrical coordinates of the space target point at the right eye and take the original coordinate data as the left-eye cylindrical coordinates; if the right viewpoint is selected, calculate the cylindrical coordinates of the space target point at the left eye and take the original coordinate data as the right-eye cylindrical coordinates; take these values as new data rows of the left-viewpoint and right-viewpoint matrices respectively, and assign h + Δh to h;
5.4: judge whether h ≥ hmax; if not, go to 5.3, until the display matrices of the left and right viewpoints are generated;
5.5: connect all the three-dimensional coordinates in each display matrix with triangular facets respectively: first join the data of each row with straight lines, then the data of each column, and finally the point cloud data at (i, j) and (i+1, j+1) in the matrix; the colour of each connecting line is the average of the colours of the two points it joins;
5.6: perform binocular stereo display on the stereo pair generated by the above processing, completing the drawing of the human-centered stereo perspective view.
6. The 3D panorama display drawing method as claimed in claim 3, characterised in that said 3D panorama display drawing method further includes drawing a human-centered panoramic stereo perspective cyclic display view: by continuously changing the azimuth β, the left and right stereo pair under each azimuth β is generated; the algorithm is as follows:
6.1: determine the initial azimuth β1, read the minimum distance hmin and maximum distance hmax, and determine the viewpoint from among the median eye, the left viewpoint and the right viewpoint;
6.2: assign β1 to β, determine the number of display cycles N, and initialize n to 0;
6.3: assign hmin to h;
6.4: read the data at distance h from the initial azimuth β1 to β1 + 300; if β1 + 300 ≥ 1000, then β1 + 300 = β1 + 300 − 1000; according to the determined viewpoint: if the median eye is selected, calculate the cylindrical coordinates of the space target point at both eyes; if the left viewpoint is selected, calculate the cylindrical coordinates at the right eye and take the original coordinate data as the left-eye cylindrical coordinates; if the right viewpoint is selected, calculate the cylindrical coordinates at the left eye and take the original coordinate data as the right-eye cylindrical coordinates; take these values as new data rows of the left-viewpoint and right-viewpoint matrices respectively, and assign h + Δh to h;
6.5: judge whether h ≥ hmax; if not, go to 6.4;
6.6: save the display data matrix of the current β, connect the left-viewpoint and right-viewpoint matrices with triangular facets respectively, generate the stereo pair and output the stereoscopic display; assign β + Δβ to β;
6.7: judge whether β ≥ 3600;
6.8: if β ≥ 3600, assign β − 3600 to β and n + 1 to n;
6.9: judge whether n ≥ N; if not, go to 6.3.
7. The 3D panorama display drawing method as claimed in claim 5 or 6, characterised in that calculating the coordinates of the new viewpoint according to the geometric relation between the single-viewpoint coordinates Om(0,0,0) and the space target point P(h, β, r) includes:
taking the single-viewpoint coordinates Om(0,0,0) as the coordinates of the right-eye viewpoint, calculating the coordinates of the space point P at the left-eye viewpoint with formula (13);
$$h_R = h_L,\qquad r_R=\sqrt{B^2+r_L^2+2\,B\,r_L\cos\beta_L},\qquad \beta_R=\arcsin\!\left(\frac{r_L}{r_R}\sin\beta_L\right) \tag{13}$$
where hL and hR are the heights of the space point P at the left-eye and right-eye viewpoints, rL and rR are its distances to the left-eye and right-eye viewpoints, and βL and βR are its azimuths at the left-eye and right-eye viewpoints;
taking the single-viewpoint coordinates Om(0,0,0) as the coordinates of the left-eye viewpoint, calculating the coordinates of the space point P at the right-eye viewpoint with formula (14);
$$h_L = h_R,\qquad r_L=\sqrt{B^2+r_R^2+2\,B\,r_R\cos\beta_R},\qquad \beta_L=\arcsin\!\left(\frac{r_R}{r_L}\sin\beta_R\right) \tag{14}$$
taking the single-viewpoint coordinates Om(0,0,0) as the coordinates of the median eye, calculating the coordinates of the space point P at the left-eye and right-eye viewpoints with formula (15);
$$h_L = h,\quad r_L=\sqrt{(B/2)^2+r^2-B\,r\cos\beta},\quad \beta_L=\arcsin\!\left(\frac{r}{r_L}\sin\beta\right)$$
$$h_R = h,\quad r_R=\sqrt{(B/2)^2+r^2+B\,r\cos\beta},\quad \beta_R=\arcsin\!\left(\frac{r}{r_R}\sin\beta\right) \tag{15}$$
where B is the interpupillary distance.
8. The 3D panorama display drawing method as claimed in claim 3, characterised in that said first, second and third omnidirectional plane lasers are respectively a blue line laser, a red line laser and a green line laser, the blue and green line lasers being mounted above and below the red line laser respectively, and the axes of all the line lasers intersecting at one point on the axis of said guiding support bar.
9. The 3D panorama display drawing method as claimed in claim 8, characterised in that said 3D panorama display drawing method further includes object-centered panorama display drawing: according to the point cloud data of the 3D panorama model with the single viewpoint Om of the omnidirectional vision sensor as the coordinate origin, the point cloud data produced by the blue, red and green omnidirectional plane laser projections are extracted from the scan slices produced when the moving volumetric laser light source scans the panoramic scene, and the point cloud data matrix is generated, realizing object-centered panorama display drawing.
CN201410632152.3A 2014-11-11 2014-11-11 3D environment dubbing system and 3D panoramas display method for drafting based on active panoramic vision Active CN104374374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410632152.3A CN104374374B (en) 2014-11-11 2014-11-11 3D environment dubbing system and 3D panoramas display method for drafting based on active panoramic vision


Publications (2)

Publication Number Publication Date
CN104374374A CN104374374A (en) 2015-02-25
CN104374374B true CN104374374B (en) 2017-07-07

Family

ID=52553430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410632152.3A Active CN104374374B (en) 2014-11-11 2014-11-11 3D environment dubbing system and 3D panoramas display method for drafting based on active panoramic vision

Country Status (1)

Country Link
CN (1) CN104374374B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204733B (en) * 2016-07-22 2024-04-19 青岛大学附属医院 Liver and kidney CT image combined three-dimensional construction system
CN107958489B (en) * 2016-10-17 2021-04-02 杭州海康威视数字技术股份有限公司 Curved surface reconstruction method and device
CN109631799B (en) * 2019-01-09 2021-03-26 浙江浙大万维科技有限公司 Intelligent measuring and marking method
CN109508755B (en) * 2019-01-22 2022-12-09 中国电子科技集团公司第五十四研究所 Psychological assessment method based on image cognition
CN110659440B (en) * 2019-09-25 2023-04-18 云南电网有限责任公司曲靖供电局 Method for rapidly and dynamically displaying different detail levels of point cloud data large scene

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006094409A1 (en) * 2005-03-11 2006-09-14 Creaform Inc. Auto-referenced system and apparatus for three-dimensional scanning
CN101619962A (en) * 2009-07-30 2010-01-06 浙江工业大学 Active three-dimensional panoramic view vision sensor based on full color panoramic view LED light source
CN102289144A (en) * 2011-06-30 2011-12-21 浙江工业大学 Intelligent three-dimensional (3D) video camera equipment based on all-around vision sensor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2256667B1 (en) * 2009-05-28 2012-06-27 Honda Research Institute Europe GmbH Driver assistance system or robot with dynamic attention module


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a stereo omnidirectional vision sensor; Tang Yiping et al.; Chinese Journal of Scientific Instrument (仪器仪表学报); July 2010; Vol. 31, No. 7; pp. 1520-1527 *


Similar Documents

Publication Publication Date Title
CN111060024B (en) 3D measuring and acquiring device with rotation center shaft intersected with image acquisition device
CN105023275B (en) Super-resolution optical field acquisition device and its three-dimensional rebuilding method
CN104374374B (en) 3D environment dubbing system and 3D panoramas display method for drafting based on active panoramic vision
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
KR101265667B1 (en) Device for 3d image composition for visualizing image of vehicle around and method therefor
CN108513123B (en) Image array generation method for integrated imaging light field display
CN1238691C (en) Combined stereovision, color 3D digitizing and motion capture system
CN102679959B (en) Omnibearing 3D (Three-Dimensional) modeling system based on initiative omnidirectional vision sensor
CN104406539B (en) Round-the-clock active panorama sensing device and 3D full-view modeling methods
CN206563985U (en) 3-D imaging system
CN110243307A (en) A kind of automatized three-dimensional colour imaging and measuring system
CN110246186A (en) A kind of automatized three-dimensional colour imaging and measurement method
WO2009120073A2 (en) A dynamically calibrated self referenced three dimensional structured light scanner
CN111060006A (en) Viewpoint planning method based on three-dimensional model
CN110827392A (en) Monocular image three-dimensional reconstruction method, system and device with good scene usability
CN110230979A (en) A kind of solid target and its demarcating three-dimensional colourful digital system method
CN109242898A (en) A kind of three-dimensional modeling method and system based on image sequence
CN114359406A (en) Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method
KR20190019059A (en) System and method for capturing horizontal parallax stereo panoramas
CN112907647B (en) Three-dimensional space size measurement method based on fixed monocular camera
CN112082486B (en) Handheld intelligent 3D information acquisition equipment
CN108510537B (en) 3D modeling method and device
CN111735414A (en) Area metering system and metering method based on panoramic three-dimensional imaging
CN112435080A (en) Virtual garment manufacturing equipment based on human body three-dimensional information
JPWO2020121406A1 (en) 3D measuring device, mobile robot, push wheel type moving device and 3D measurement processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant