CN109102537A - Three-dimensional modeling method and system combining a laser radar and a dome camera - Google Patents
Three-dimensional modeling method and system combining a laser radar and a dome camera
- Publication number
- CN109102537A (application CN201810663053.XA)
- Authority
- CN
- China
- Prior art keywords
- laser radar
- data
- dome camera
- mobile terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Processing Or Creating Images (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention discloses a three-dimensional modeling method and system combining a laser radar and a dome camera, and relates to the field of three-dimensional imaging and modeling. The method includes the following steps: scanning the current scene in real time with the laser radar to obtain data; triggering the dome camera to take panoramic photographs; uploading the radar data and the panoramic photographs to a mobile terminal, which computes a path from the radar data and buffers the panoramic photographs; and uploading the processed path data and panoramic photograph data to a server for three-dimensional modeling. Because the invention localizes by real-time laser radar scanning, it is more stable and accurate than vision-based SLAM on a video stream, which loses tracking when feature points are scarce (for example on white walls or glass); the created three-dimensional model does not suffer from model distortion, and the reconstructed scene is more accurate and reliable.
Description
Technical field
The present invention relates to the field of three-dimensional imaging and modeling, and in particular to a three-dimensional modeling method and system combining a laser radar and a dome camera.
Background art
When three-dimensional modeling is carried out with a dome camera, simultaneous localization and mapping (SLAM) is involved: the dome camera (typically binocular or multi-view) continuously shoots a video stream, and the volume of shooting data to be processed is large. This places a heavy burden on the hardware, generates considerable heat, and drains the battery in roughly a few minutes to a little over ten minutes. Furthermore, if spatial localization is performed directly with the dome camera, SLAM must be run on the frames of the video stream shot by the camera; this is computationally intensive, occupies a large amount of CPU resources, and therefore greatly increases power consumption. In addition, with this localization approach the frames of the video stream shot by the dome camera must be stitched before SLAM localization, which introduces distortion; and the dome camera must return data to the processor during SLAM localization, and the latency of this data return delays the live preview.
For this reason, visual simultaneous localization and mapping (VSLAM) has been developed on the basis of SLAM. The advantage of VSLAM is that it exploits rich texture information. For example, two billboards of the same size but with different content cannot be distinguished by point-cloud-based laser SLAM, whereas vision can tell them apart easily; this brings an unrivalled advantage in relocalization and scene classification. Moreover, visual information can be used relatively easily to track and predict dynamic objects in the scene, such as pedestrians and vehicles, which is essential in complex dynamic scenes. Third, the projection model of vision in theory allows objects at unlimited distance to enter the field of view, so that under a reasonable configuration (for example a long-baseline binocular camera) localization and mapping of large scenes can be carried out.
However, vision-based simultaneous localization and mapping (VSLAM) loses tracking when feature points are scarce, for example on white walls or glass. Although loop-closure detection can re-optimize the path, a sizeable error still remains after repeated optimization. Laser radar localization, by contrast, is more stable than VSLAM localization because it scans in real time. The role of the laser radar is to perform the initial localization and at the same time provide initial positions for the back-end modeling; its advantages are stable and accurate localization.
Summary of the invention
In view of the deficiencies in the prior art, the present invention aims to provide a three-dimensional modeling method and system combining a laser radar and a dome camera, applicable to large scenes.
To achieve the above goal, the technical solution adopted by the invention is as follows:
A method for three-dimensional modeling of a large scene with a laser radar and a dome camera, the method comprising the following steps:
S1, scanning the current scene in real time with the laser radar to obtain data;
S2, triggering the dome camera to take panoramic photographs;
S3, uploading the radar data and the panoramic photographs to a mobile terminal, which computes a path from the radar data and buffers the panoramic photographs;
S4, uploading the path data and panoramic photograph data processed in step S3 to a server for three-dimensional modeling.
Further, in step S1, the data obtained by scanning the current scene in real time with the laser radar include position information, distance information and rotation information.
Further, in step S2, at least one panoramic photograph is taken.
Further, in step S2, the number of panoramic photographs taken by the dome camera is proportional to the area of the scene to be shot; one panoramic photograph is taken every 1.5 metres across the scene being photographed.
Further, in step S3, computing the path from the radar data specifically includes the steps:
S31, extracting feature points of the scanned image to obtain environment information;
S32, comparing the data obtained by the current laser radar scan with the data and features already present in the map and updating them, to determine whether a feature originates from the same object in the environment;
S33, describing the environment with an occupancy grid map.
Further, in step S32, a set of random particles is constructed in the state space according to the conditional probability distribution of the system state; the pose and weight of each particle are continuously adjusted according to the observations, and the prior conditional probability distribution of the system state is corrected according to the adjusted particle information.
Further, in step S3, the panoramic photographs buffered on the mobile terminal are presented as preview pictures.
Further, in step S4, the server constructs the three-dimensional model in the background from the path data and panoramic photograph data processed on the mobile terminal, and generates a link that is returned to the mobile terminal.
Further, in step S4, the server feeds back operation information and scene modeling information through the mobile terminal.
The beneficial effects of the invention are as follows:
1. Localization is performed by real-time laser radar scanning. Vision-based SLAM loses tracking when feature points are scarce, for example on white walls or glass; the real-time scanning of the invention localizes more stably and accurately than vision-based SLAM on a video stream. Because localization is based on the real-time scanning of the laser radar, the three-dimensional model created with the invention does not suffer from model distortion, and the reconstructed scene is more accurate and reliable.
2. The dome camera is only called when it takes a panoramic photograph or when some of its internal parameters are modified; the rest of the time it stays in a semi-dormant state, which saves the energy consumption of the dome camera to the greatest extent.
Brief description of the drawings
Fig. 1 is a schematic diagram of the device that performs the front-end scanning operation using the method of the invention;
Fig. 2 is a 2D map from the laser radar scan of the invention;
Fig. 3 is the path diagram of the laser radar localization of the invention;
Fig. 4 is a schematic diagram of matching the feature points extracted from the scene in the invention;
Fig. 5 is a schematic diagram of the three-dimensional spatial position of each feature point and of the camera position after feature point extraction and matching on the two-dimensional pictures in the invention;
Fig. 6 is a schematic diagram of the rough model obtained by structured modeling after sparse point cloud processing in the invention;
Fig. 7 is a schematic diagram of the virtual space model constructed by the invention after texturing.
Specific embodiment
The invention is further described below. It should be noted that the following embodiments are premised on the technical solution and give detailed implementation methods and specific operation processes, but the protection scope of the invention is not limited to these embodiments.
The present invention is a method for three-dimensional modeling of a large scene with a laser radar 3 and a dome camera 1; the method comprises the following steps:
S1, scanning the current scene in real time with the laser radar 3 to obtain data;
S2, triggering the dome camera 1 to take panoramic photographs;
S3, uploading the radar data and the panoramic photographs to the mobile terminal 2, which computes a path from the radar data and buffers the panoramic photographs;
S4, uploading the path data and panoramic photograph data processed in step S3 to the server for three-dimensional modeling.
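For illustration only, the following is a minimal sketch of the S1–S4 data flow. The device and network interfaces (`ScanningLidar`, `DomeCamera`, `upload_to_server`) are hypothetical stand-ins, not APIs described in the patent; they are stubbed here so the flow is runnable.

```python
import time

class ScanningLidar:
    def scan(self):
        return {"t": time.time(), "ranges": [2.0, 2.1, 1.9]}   # S1: real-time scan data

class DomeCamera:
    def capture_panorama(self):
        return b"<jpeg bytes>"                                  # S2: one panoramic photograph

def compute_path(radar_frames):
    # S3 (mobile terminal): in the patent this is the SLAM path computation;
    # here only the timestamps are collected as a placeholder.
    return [f["t"] for f in radar_frames]

def upload_to_server(path, panoramas):
    print(f"S4: uploading {len(path)} poses and {len(panoramas)} panoramas for 3D modeling")

lidar, camera = ScanningLidar(), DomeCamera()
radar_frames, panoramas = [], []
for _ in range(3):                                  # walk through the scene
    radar_frames.append(lidar.scan())               # S1
    panoramas.append(camera.capture_panorama())     # S2, triggered by the user
path = compute_path(radar_frames)                   # S3, on the mobile terminal
upload_to_server(path, panoramas)                   # S4, three-dimensional modeling on the server
```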
It should be noted that the mobile terminal 2 referred to in the invention includes, but is not limited to, terminal devices with a camera, such as mobile phones and tablet computers.
Preferably, in step S1, the data obtained by scanning the current scene in real time with the laser radar 3 include position information, distance information and rotation information.
Preferably, in step S2, at least one panoramic photograph is taken.
Preferably, in step S2, the number of panoramic photographs taken by the dome camera 1 is proportional to the area of the scene to be shot; one panoramic photograph is taken at intervals of a predetermined distance across the scene being photographed, and the predetermined distance may be 1 to 2 metres, preferably 1.5 metres.
Preferably, in step S3, computing the path from the radar data specifically includes the steps:
S31, extracting feature points of the scanned image to obtain environment information. Feature points are locations where the curvature or the normal direction changes abruptly. Methods for extracting feature points from point cloud data include curvature-based boundary edge extraction, eigenvalue-based boundary edge extraction and neighbourhood-information-based boundary edge extraction; each of the three methods has its own advantages and disadvantages. In the method proposed by Demarsin for extracting closed feature lines from three-dimensional point cloud data, point normals are computed by principal component analysis, and the points are then grouped into clusters based on the normal variation within local neighbourhoods. During feature point determination, the normal angle between two points is compared with an acceptable maximum angle threshold, and feature points are judged cluster by cluster. Clustering is a data mining tool that is effective for unclassified samples; its goal is to group objects into natural classes, or clusters, based on similarity or distance, and it is often most effective when the object classes are unknown. Such techniques have therefore been widely applied in simultaneous localization and mapping (SLAM).
S32, comparing the data obtained by the current laser radar 3 scan with the data and features already present in the map and updating them, to determine whether a feature originates from the same object in the environment. Data association (DA), also called the consistency problem, refers to determining, when the current sensor observations are compared with and used to update the data and features already present in the map, whether they originate from the same object in the environment.
S33, environment description is carried out using grating map.The concept of grating map is in 1985 by Elfeshe and Moravec
It is put forward for the first time, is mainly used in robot field later.Ambient enviroment is divided into network by grating map, is usually divided into
Then equal-sized square assigns an attribute value to each grid, indicates the occupied state of the grid.Generally for two
Dimensional plane grating map has 0 and 1 liang of attribute value.0, which represents the grid, does not have occupied, and 1 represents the grid, and there are barriers.
And each grid attribute value of 3 d grid map will also include the elevation information of barrier.The advantages of grating map is to be easy to create
With maintenance, and independent of environment terrain.
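A minimal illustrative occupancy grid in this spirit is sketched below; the map size, resolution and 0/1 convention are assumptions for illustration, not values from the patent.

```python
import numpy as np

class OccupancyGrid:
    def __init__(self, size_m=20.0, resolution=0.05):
        self.resolution = resolution
        n = int(size_m / resolution)
        self.grid = np.zeros((n, n), dtype=np.uint8)        # 0: free, 1: obstacle
        self.origin = np.array([size_m / 2, size_m / 2])    # world (0, 0) at grid centre

    def world_to_cell(self, xy):
        return tuple(((np.asarray(xy) + self.origin) / self.resolution).astype(int))

    def mark_hit(self, xy):
        i, j = self.world_to_cell(xy)
        if 0 <= i < self.grid.shape[0] and 0 <= j < self.grid.shape[1]:
            self.grid[i, j] = 1                              # the cell contains an obstacle

# Usage: convert each lidar return (range, bearing) at robot pose (x, y, theta)
# into a world point and mark the corresponding cell.
grid = OccupancyGrid()
x, y, theta = 1.0, 0.5, np.pi / 6
for rng, bearing in [(2.0, 0.1), (2.5, -0.2)]:
    hit = (x + rng * np.cos(theta + bearing), y + rng * np.sin(theta + bearing))
    grid.mark_hit(hit)
print(grid.grid.sum(), "occupied cells")
```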
Preferably, in step S32, a set of random particles is constructed in the state space according to the conditional probability distribution of the system state; the pose and weight of each particle are continuously adjusted according to the observations, and the prior conditional probability distribution of the system state is corrected according to the adjusted particle information. In the preferred embodiment of the invention, the laser radar 3 uses a particle filter. The idea of particle filtering is to represent the posterior probability density of the robot pose with N weighted particles {x_t^i, w_t^i}, i = 1, ..., N, where x_t^i is the pose of the i-th particle and w_t^i is its weight.
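The following is an illustrative particle-filter sketch for a 2D pose (x, y, theta) in the spirit of the weighted-particle representation described above; the motion noise, measurement model and particle count are assumptions, not values from the patent.

```python
import numpy as np

N = 500
particles = np.zeros((N, 3))                  # each row: pose (x, y, theta)
weights = np.full(N, 1.0 / N)                 # uniform initial weights

def predict(particles, control, noise=(0.02, 0.02, 0.01)):
    """Propagate each particle with the odometry/control input plus noise."""
    dx, dy, dtheta = control
    particles[:, 0] += dx + np.random.randn(N) * noise[0]
    particles[:, 1] += dy + np.random.randn(N) * noise[1]
    particles[:, 2] += dtheta + np.random.randn(N) * noise[2]

def update(particles, weights, measured_range, landmark, sigma=0.1):
    """Reweight particles by how well they explain a range measurement to a landmark."""
    expected = np.linalg.norm(particles[:, :2] - landmark, axis=1)
    weights *= np.exp(-0.5 * ((expected - measured_range) / sigma) ** 2)
    weights += 1e-300                          # avoid an all-zero weight vector
    weights /= weights.sum()

def resample(particles, weights):
    """Resample particles in proportion to their weights (correcting the distribution)."""
    idx = np.random.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

predict(particles, control=(0.1, 0.0, 0.0))
update(particles, weights, measured_range=2.0, landmark=np.array([2.0, 0.0]))
particles, weights = resample(particles, weights)
print("pose estimate:", particles.mean(axis=0))
```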
Preferably, in step S3, the panoramic photographs buffered on the mobile terminal 2 are presented as preview pictures. The mobile terminal 2 visualizes the data already shot by the dome camera 1, i.e. presents it as preview pictures so that the operator can observe it in real time, and it can communicate with the server. In step S4, the server constructs the three-dimensional model from the path data and panoramic photograph data processed on the mobile terminal 2 and generates a link that is returned to the mobile terminal 2. The server feeds back operation information and scene modeling information through the mobile terminal, which is convenient for the operator to use and adjust flexibly.
As a preferred embodiment of the invention, the mobile terminal 2 is a smartphone, and the laser radar 3 is preferably a Hokuyo laser radar (Hokuyo, Japan), model URG-04LX or UTM-30LX.
It should be noted that the mobile terminal 2 remains in operation throughout the scanning and localization by the laser radar 3, receiving the data of the laser radar 3 and the panoramic photographs taken by the dome camera 1. The mobile terminal 2 and the dome camera 1 communicate using existing technology such as WIFI or Bluetooth. Communication between the laser radar 3 and the mobile terminal 2 may be wireless or over a data cable; to guarantee distortion-free data, in the preferred embodiment of the invention the data of the laser radar 3 are transmitted over a USB cable.
It should be further noted that an APP application is installed on the mobile terminal 2; the dome camera 1 is connected to the mobile terminal 2 over WIFI and transmits the captured panoramic images in real time to the APP on the mobile terminal 2. After photographing of the entire scene is finished, the photographs are uploaded together to the server for three-dimensional modeling. The laser radar 3 sends the scanned map data to the mobile terminal 2 over the USB cable, and the APP on the mobile terminal 2 processes and computes them. When the dome camera 1 shoots the scene, the trigger points are defined manually, i.e. the user decides when the dome camera 1 starts shooting, so the trigger points are user-defined. When the dome camera 1 shoots, the layout formed by the multiple trigger points (how far apart consecutive shots are taken) affects the model: with a moderately dense layout, the constructed model is better both for browsing and in its transition effects.
Throughout the shooting process the laser radar 3 keeps scanning, and the mobile terminal 2 is not used to take photographs or shoot video; the dome camera 1 is only called when acquiring data (taking a photograph) or when some of its internal parameters are modified, and the rest of the time it stays semi-dormant while keeping WIFI transmission alive. This division of labour between the mobile terminal 2 and the dome camera 1 keeps the energy consumption of the dome camera 1 low.
In the present embodiment, the offline three-dimensional reconstruction algorithm is the SFM (structure from motion) algorithm. In other embodiments, other offline three-dimensional reconstruction algorithms may be used.
In step S4, three-dimensional modeling from the photographs taken by the dome camera 1 further includes the following steps:
S41, identifying and matching feature points in at least one group of photographs obtained by the dome camera 1;
S42, automatic closed-loop detection for the three-dimensional digital modeling of the dome camera 1;
S43, after detection, performing digital modeling;
S44, texturing the structured model.
It should be noted that for each single photograph in a group of photographs or a video stream, feature points (pixels in the picture) are detected and described with the SIFT descriptor; the neighbourhood of each feature point is extracted and analysed at the same time, and the feature point is characterized according to its neighbourhood.
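One common way to realise this step is SIFT detection and matching in OpenCV, sketched below for two photographs. The patent does not name a specific library; the file names and the ratio-test threshold are assumptions for illustration.

```python
import cv2

img1 = cv2.imread("pano_001.jpg", cv2.IMREAD_GRAYSCALE)   # assumed example file names
img2 = cv2.imread("pano_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)              # keypoints + 128-D SIFT descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(len(kp1), len(kp2), "keypoints;", len(good), "good matches")
```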
It should be noted that closed-loop detection means comparing the currently computed position of the dome camera 1 with the positions the dome camera 1 has already passed and checking whether any of them are close; if the distance between the two is within a certain threshold, the dome camera 1 is considered to have returned to a place it has already visited, and closed-loop detection is started at that point.
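A minimal sketch of this distance-threshold loop check follows; the 0.5 m threshold and the flat list of past positions are assumptions for illustration (in practice the most recent poses would be excluded from the comparison).

```python
import numpy as np

def detect_loop(current_pos, past_positions, threshold=0.5):
    """Return the index of the first previously visited position within `threshold`
    metres of the current camera position, or None if there is no loop."""
    for i, p in enumerate(past_positions):
        if np.linalg.norm(np.asarray(current_pos) - np.asarray(p)) < threshold:
            return i
    return None

trajectory = [(0.0, 0.0), (1.5, 0.0), (1.5, 1.5), (0.1, 1.4)]
print(detect_loop((0.2, 0.1), trajectory))   # -> 0: the camera is back near the start
```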
It should be further noted that filtering means the following: after the three-dimensional coordinates corresponding to a point in a two-dimensional picture have been determined, that three-dimensional point is projected back onto the original dome photograph to reconfirm whether it is still the same point. The reason is that a point of the two-dimensional picture and its position as a point in the three-dimensional world are in one-to-one correspondence, so after the three-dimensional coordinates of a point in the two-dimensional picture have been determined, the three-dimensional point can be projected back to verify whether the two-dimensional point is still at its original position; this determines whether the pixel is noise and whether it needs to be filtered out. It should be noted that for each point the optimal picture from the dome camera 1 is selected among the photographs. The photographs mentioned above also include, as candidates, the frames of the video stream shot by the mobile terminal 2 at the current location.
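The following is an illustrative reprojection check for a spherical photograph, as one assumed concrete form of the filtering step above: the reconstructed 3D point is projected back into the panorama and compared with the originally observed pixel. The equirectangular projection model and the pixel tolerance are assumptions, not details given in the patent.

```python
import numpy as np

def project_to_equirect(point_cam, width, height):
    """Project a 3D point given in the camera frame onto an equirectangular image."""
    d = point_cam / np.linalg.norm(point_cam)
    lon = np.arctan2(d[0], d[2])                    # longitude in [-pi, pi]
    lat = np.arcsin(d[1])                           # latitude  in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return np.array([u, v])

def is_noise(point_cam, observed_uv, width, height, tol_px=2.0):
    """A reconstructed point is treated as noise if its reprojection deviates
    from the originally observed pixel by more than `tol_px` pixels."""
    return np.linalg.norm(project_to_equirect(point_cam, width, height) - observed_uv) > tol_px

print(is_noise(np.array([0.0, 0.0, 5.0]), observed_uv=np.array([2048.0, 1024.0]),
               width=4096, height=2048))            # panorama centre -> False (kept)
```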
It should be noted that when the dome camera 1 at multiple positions all sees a certain target and captures it in its pictures, the optimal one among them is chosen for texturing.
It should be further noted that the corresponding dome camera 1 and the image colour it photographed are computed with the formula:

V1 = normalize(CameraMatrix_i × V0)

where V0 is the coordinate (x, y, z, 1) of any spatial point to be sampled (all points of a model need to be rasterized); V1 is the new position coordinate of V0 transformed into camera space, normalized onto the unit sphere; Tx and Ty are the texture coordinates (x, y) corresponding to V0, in the OPENGL texture coordinate system; aspect_i is the aspect ratio of the i-th sampled panoramic picture; and CameraMatrix_i is the transformation matrix of the i-th sampled panoramic picture, which transforms the position of dome camera 1 to the origin and re-orients dome camera 1 to face forward.
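The sketch below illustrates this sampling step: a model point is transformed into the i-th panorama's camera space, normalized onto the unit sphere, and converted to texture coordinates. The mapping from the unit sphere to (Tx, Ty) is an assumed equirectangular mapping, since the patent does not spell it out.

```python
import numpy as np

def sample_panorama_uv(v0_xyz, camera_matrix):
    """v0_xyz: 3D model point; camera_matrix: 4x4 matrix that moves dome camera i to
    the origin and re-orients it to face forward (the patent's CameraMatrix_i)."""
    v0 = np.append(np.asarray(v0_xyz, dtype=float), 1.0)      # homogeneous (x, y, z, 1)
    v1 = (camera_matrix @ v0)[:3]
    v1 = v1 / np.linalg.norm(v1)                               # normalize onto the unit sphere
    tx = np.arctan2(v1[0], v1[2]) / (2 * np.pi) + 0.5          # assumed equirectangular Tx
    ty = np.arcsin(np.clip(v1[1], -1.0, 1.0)) / np.pi + 0.5    # assumed equirectangular Ty
    return tx, ty                                              # OpenGL-style texture coords in [0, 1]

# Usage: identity matrix = camera already at the origin facing forward.
print(sample_panorama_uv([0.0, 0.0, 3.0], np.eye(4)))          # -> (0.5, 0.5), panorama centre
```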
Based on the foregoing, it should be pointed out that closed-loop detection is a dynamic process that continues throughout the shooting of the dome photographs.
As shown in Fig. 1, the invention also proposes a three-dimensional modeling system combining a laser radar and a dome camera. The system comprises a dome camera 1, a laser radar 3, a mobile terminal 2 and a server. A program that computes the path from the radar data and buffers the panoramic photographs is stored in and can run on the mobile terminal 2, and a three-dimensional modeling program is stored in and can run on the server. The dome camera 1 and the laser radar 3 are each connected to the mobile terminal 2 by a signal connection; in practice the laser radar 3 can be attached through the USB interface of the mobile terminal 2, and the mobile terminal 2 and the laser radar 3 exchange data over that USB interface. The mobile terminal 2 is connected to the server by wireless communication. The preliminary localization and calibration program, when executed by the mobile terminal, implements the steps described in step S3 of claim 1, and the three-dimensional modeling program, when executed by the server, implements the steps described in step S4 of claim 1. The laser radar 3, the dome camera 1 and the mobile terminal 2 form a single unit, that is, their geographical locations are regarded as identical in the invention, and the distance between them is negligible for the location information of the invention. As shown in Fig. 1, the dome camera 1 is arranged at the upper end of an upright pole, the mobile terminal 2 is rotatably fixed at the middle of the pole with an adjustable screen angle, and the laser radar 3 is connected to the mobile terminal 2 through the USB interface of the mobile terminal 2.
The simultaneous localization and mapping (SLAM) based on the laser radar 3, which is combined in the invention and belongs to the prior art, is briefly described as follows: the laser radar 3 acquires the distances between the mobile terminal and the surrounding visible objects and the point cloud information of the surrounding objects; by matching and comparing the point clouds, the change in position and attitude of the laser radar 3 between adjacent moments is computed on the mobile terminal, thereby localizing the mobile terminal; the output is then optimized using filtering theory or optimization theory to obtain the optimal pose estimate, i.e. the current absolute or relative position is computed, so that the current location of the mobile terminal is known.
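One way to realise the scan matching between adjacent moments described above is a rigid alignment of two consecutive 2D lidar point clouds; the sketch below shows the SVD-based (Kabsch) solution for known point correspondences. Real scan matchers iterate this with nearest-neighbour data association (ICP); the correspondence step and the synthetic test data are assumptions for illustration.

```python
import numpy as np

def rigid_transform_2d(prev_pts, curr_pts):
    """Return rotation R and translation t that map prev_pts onto curr_pts (least squares)."""
    cp, cc = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    H = (prev_pts - cp).T @ (curr_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cc - R @ cp
    return R, t

prev_scan = np.random.rand(100, 2)
theta = 0.1                                   # ground-truth motion for the synthetic test
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
curr_scan = prev_scan @ R_true.T + np.array([0.3, 0.0])
R, t = rigid_transform_2d(prev_scan, curr_scan)
print("estimated rotation (rad):", np.arctan2(R[1, 0], R[0, 0]), "translation:", t)
```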
The invention performs digital three-dimensional modeling of large scenes based on the laser radar 3 and SLAM. By scanning and localizing in real time with the laser radar 3, the invention localizes more stably and accurately than vision-based SLAM on a video stream, because vision-based SLAM loses tracking when feature points are scarce, for example on white walls or glass, whereas the invention does not. Because localization is based on the real-time scanning of the laser radar 3, the three-dimensional model created with the invention does not suffer from model distortion, and the reconstructed scene is more accurate and reliable.
The invention is further illustrated with reference to the drawings. As shown in Fig. 2, which is the 2D map of the laser radar scan, the current scene is scanned in real time by the laser radar, and the profile and boundary of the currently scanned scene are displayed on the resulting 2D map.
Further, as shown in Fig. 3, which is the path diagram of the laser radar localization of the invention, the positions of the movement are recorded as the scanning position of the laser radar changes, forming a complete movement path diagram.
Further, as shown in Fig. 4, which is a schematic diagram of matching the feature points extracted from the scene, feature points are automatically extracted from a dome photograph (the original picture) and are shown mainly as points on the picture, i.e. the extracted feature points are matched. It should be noted that in actual operation the features of all photographs of a given scene can be matched.
Further, as shown in Fig. 5, which is a schematic diagram of the three-dimensional spatial position of each feature point and of the camera position after feature point extraction and matching on the two-dimensional pictures, further processing based on Fig. 4 yields the three-dimensional spatial position of each feature point in the two-dimensional pictures and the camera positions, forming sparse points: the smaller points in the picture are the sparse point cloud, and the larger areas are the camera positions.
As shown in Fig. 6, which is the rough model obtained by structured modeling after sparse point cloud processing, a point cloud is obtained after the processing of Fig. 5 and structured modeling is carried out to generate the three-dimensional model entity of the scene. After modeling is completed, automatic texturing is performed based on the spatial structure of Fig. 6, forming a virtual space model that coincides with the real world, i.e. the virtual space model shown in Fig. 7; the textures are matched automatically using the texture data on the server.
The invention is further described above. It should be noted that the embodiment is premised on the technical solution and gives the detailed implementation method and specific operation process, but the protection scope of the invention is not limited to this embodiment.
Claims (10)
1. A three-dimensional modeling method combining a laser radar and a dome camera, the method comprising the following steps:
S1, scanning the current scene in real time with the laser radar to obtain data;
S2, triggering the dome camera to take panoramic photographs;
S3, uploading the radar data and the panoramic photographs to a mobile terminal, which computes a path from the radar data and buffers the panoramic photographs;
S4, uploading the path data and panoramic photograph data processed in step S3 to a server for three-dimensional modeling.
2. The three-dimensional modeling method combining a laser radar and a dome camera according to claim 1, characterized in that in step S1 the data obtained by scanning the current scene in real time with the laser radar include position information, distance information and rotation information.
3. The three-dimensional modeling method combining a laser radar and a dome camera according to claim 1, characterized in that in step S2 at least one panoramic photograph is taken.
4. The three-dimensional modeling method combining a laser radar and a dome camera according to claim 1 or 3, characterized in that in step S2 the number of panoramic photographs taken by the dome camera is proportional to the area of the scene to be shot, and one panoramic photograph is taken at intervals of a predetermined distance across the scene being photographed.
5. The three-dimensional modeling method combining a laser radar and a dome camera according to claim 1, characterized in that in step S3 computing the path from the radar data specifically includes the steps:
S31, extracting feature points of the scanned image to obtain environment information;
S32, comparing the data obtained by the current laser radar scan with the data and features already present in the map and updating them, to determine whether a feature originates from the same object in the environment;
S33, describing the environment with an occupancy grid map.
6. The three-dimensional modeling method combining a laser radar and a dome camera according to claim 5, characterized in that in step S32 a set of random particles is constructed in the state space according to the conditional probability distribution of the system state, the pose and weight of each particle are continuously adjusted according to the observations, and the prior conditional probability distribution of the system state is corrected according to the adjusted particle information.
7. The three-dimensional modeling method combining a laser radar and a dome camera according to claim 1, characterized in that in step S3 the panoramic photographs buffered on the mobile terminal are presented as preview pictures.
8. The three-dimensional modeling method combining a laser radar and a dome camera according to claim 1, characterized in that in step S4 the server constructs the three-dimensional model in the background from the path data and panoramic photograph data processed on the mobile terminal and generates a link that is returned to the mobile terminal.
9. The three-dimensional modeling method combining a laser radar and a dome camera according to claim 1, characterized in that in step S4 the server feeds back operation information and scene modeling information through the mobile terminal.
10. A three-dimensional modeling system combining a laser radar and a dome camera, characterized in that the system comprises: a dome camera, a laser radar, a mobile terminal and a server; a program that computes a path from the radar data and buffers panoramic photographs is stored in and can run on the mobile terminal, and a three-dimensional modeling program is stored in and can run on the server; the dome camera and the laser radar are each connected to the mobile terminal by a signal connection, and the mobile terminal is connected to the server by wireless communication; the preliminary localization and calibration program, when executed by the mobile terminal, implements the step described in step S3 of claim 1, and the three-dimensional modeling program, when executed by the server, implements the step described in step S4 of claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810663053.XA CN109102537B (en) | 2018-06-25 | 2018-06-25 | Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810663053.XA CN109102537B (en) | 2018-06-25 | 2018-06-25 | Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109102537A true CN109102537A (en) | 2018-12-28 |
CN109102537B CN109102537B (en) | 2020-03-20 |
Family
ID=64844942
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810663053.XA Active CN109102537B (en) | 2018-06-25 | 2018-06-25 | Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109102537B (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109878926A (en) * | 2019-04-17 | 2019-06-14 | 上海振华重工(集团)股份有限公司 | The localization method and device of the fixed cage knob of container |
CN110148216A (en) * | 2019-05-24 | 2019-08-20 | 中德(珠海)人工智能研究院有限公司 | A kind of method of double ball curtain cameras and double ball curtain camera three-dimensional modelings |
CN110223297A (en) * | 2019-04-16 | 2019-09-10 | 广东康云科技有限公司 | Segmentation and recognition methods, system and storage medium based on scanning point cloud data |
CN110276774A (en) * | 2019-06-26 | 2019-09-24 | Oppo广东移动通信有限公司 | Drawing practice, device, terminal and the computer readable storage medium of object |
CN110288650A (en) * | 2019-05-27 | 2019-09-27 | 盎锐(上海)信息科技有限公司 | Data processing method and end of scan for VSLAM |
CN110298136A (en) * | 2019-07-05 | 2019-10-01 | 广东金雄城工程项目管理有限公司 | Application based on BIM technology scene method of construction and system and in garden landscape digital modeling |
CN110421557A (en) * | 2019-06-21 | 2019-11-08 | 国网安徽省电力有限公司淮南供电公司 | Environmental perspective perception and the safe early warning of distribution network live line work robot protect system and method |
CN110910498A (en) * | 2019-11-21 | 2020-03-24 | 大连理工大学 | Method for constructing grid map by using laser radar and binocular camera |
CN110969696A (en) * | 2019-12-19 | 2020-04-07 | 中德人工智能研究院有限公司 | Method and system for three-dimensional modeling rapid space reconstruction |
CN110992468A (en) * | 2019-11-28 | 2020-04-10 | 贝壳技术有限公司 | Point cloud data-based modeling method, device and equipment, and storage medium |
DE102019202304A1 (en) * | 2019-02-20 | 2020-08-20 | Siemens Schweiz Ag | Method and arrangement for creating a digital building model |
CN111739158A (en) * | 2020-06-29 | 2020-10-02 | 成都信息工程大学 | Erasure code based three-dimensional scene image recovery method |
CN113177975A (en) * | 2021-05-07 | 2021-07-27 | 中德(珠海)人工智能研究院有限公司 | Depth calculation method and three-dimensional modeling method based on dome camera and laser radar |
CN113496545A (en) * | 2020-04-08 | 2021-10-12 | 阿里巴巴集团控股有限公司 | Data processing system, method, sensor, mobile acquisition backpack and equipment |
CN114440928A (en) * | 2022-01-27 | 2022-05-06 | 杭州申昊科技股份有限公司 | Combined calibration method for laser radar and odometer, robot, equipment and medium |
CN117351140A (en) * | 2023-09-15 | 2024-01-05 | 中国科学院自动化研究所 | Three-dimensional reconstruction method, device and equipment integrating panoramic camera and laser radar |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103995264A (en) * | 2014-04-17 | 2014-08-20 | 北京金景科技有限公司 | Vehicle-mounted mobile laser radar mapping system |
US8817018B1 (en) * | 2011-06-13 | 2014-08-26 | Google Inc. | Using photographic images to construct a three-dimensional model with a curved surface |
CN104573646A (en) * | 2014-12-29 | 2015-04-29 | 长安大学 | Detection method and system, based on laser radar and binocular camera, for pedestrian in front of vehicle |
CN105678783A (en) * | 2016-01-25 | 2016-06-15 | 西安科技大学 | Data fusion calibration method of catadioptric panorama camera and laser radar |
US20160314593A1 (en) * | 2015-04-21 | 2016-10-27 | Hexagon Technology Center Gmbh | Providing a point cloud using a surveying instrument and a camera device |
CN106443687A (en) * | 2016-08-31 | 2017-02-22 | 欧思徕(北京)智能科技有限公司 | Piggyback mobile surveying and mapping system based on laser radar and panorama camera |
CN206095238U (en) * | 2016-10-11 | 2017-04-12 | 广州正度数据处理服务有限公司 | Dynamic testing that moves who uses 360 degrees panoramic camera paints device |
CN107392944A (en) * | 2017-08-07 | 2017-11-24 | 广东电网有限责任公司机巡作业中心 | Full-view image and the method for registering and device for putting cloud |
US20180061055A1 (en) * | 2016-08-30 | 2018-03-01 | The Boeing Company | 2d vehicle localizing using geoarcs |
WO2018066352A1 (en) * | 2016-10-06 | 2018-04-12 | 株式会社アドバンスド・データ・コントロールズ | Image generation system, program and method, and simulation system, program and method |
CN107948501A (en) * | 2017-10-30 | 2018-04-20 | 深圳市易成自动驾驶技术有限公司 | Automatic ring vision method, device and computer-readable recording medium |
CN108053473A (en) * | 2017-12-29 | 2018-05-18 | 北京领航视觉科技有限公司 | A kind of processing method of interior three-dimensional modeling data |
CN108171780A (en) * | 2017-12-28 | 2018-06-15 | 电子科技大学 | A kind of method that indoor true three-dimension map is built based on laser radar |
-
2018
- 2018-06-25 CN CN201810663053.XA patent/CN109102537B/en active Active
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8817018B1 (en) * | 2011-06-13 | 2014-08-26 | Google Inc. | Using photographic images to construct a three-dimensional model with a curved surface |
CN103995264A (en) * | 2014-04-17 | 2014-08-20 | 北京金景科技有限公司 | Vehicle-mounted mobile laser radar mapping system |
CN104573646A (en) * | 2014-12-29 | 2015-04-29 | 长安大学 | Detection method and system, based on laser radar and binocular camera, for pedestrian in front of vehicle |
US20160314593A1 (en) * | 2015-04-21 | 2016-10-27 | Hexagon Technology Center Gmbh | Providing a point cloud using a surveying instrument and a camera device |
CN105678783A (en) * | 2016-01-25 | 2016-06-15 | 西安科技大学 | Data fusion calibration method of catadioptric panorama camera and laser radar |
US20180061055A1 (en) * | 2016-08-30 | 2018-03-01 | The Boeing Company | 2d vehicle localizing using geoarcs |
CN106443687A (en) * | 2016-08-31 | 2017-02-22 | 欧思徕(北京)智能科技有限公司 | Piggyback mobile surveying and mapping system based on laser radar and panorama camera |
WO2018066352A1 (en) * | 2016-10-06 | 2018-04-12 | 株式会社アドバンスド・データ・コントロールズ | Image generation system, program and method, and simulation system, program and method |
CN206095238U (en) * | 2016-10-11 | 2017-04-12 | 广州正度数据处理服务有限公司 | Dynamic testing that moves who uses 360 degrees panoramic camera paints device |
CN107392944A (en) * | 2017-08-07 | 2017-11-24 | 广东电网有限责任公司机巡作业中心 | Full-view image and the method for registering and device for putting cloud |
CN107948501A (en) * | 2017-10-30 | 2018-04-20 | 深圳市易成自动驾驶技术有限公司 | Automatic ring vision method, device and computer-readable recording medium |
CN108171780A (en) * | 2017-12-28 | 2018-06-15 | 电子科技大学 | A kind of method that indoor true three-dimension map is built based on laser radar |
CN108053473A (en) * | 2017-12-29 | 2018-05-18 | 北京领航视觉科技有限公司 | A kind of processing method of interior three-dimensional modeling data |
Non-Patent Citations (1)
Title |
---|
HUANG Mingwei et al.: "Three-dimensional geometric modeling of the Gutian Conference site using laser scanning", Journal of Huaqiao University (《华侨大学学报》) *
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102019202304A1 (en) * | 2019-02-20 | 2020-08-20 | Siemens Schweiz Ag | Method and arrangement for creating a digital building model |
DE102019202304B4 (en) * | 2019-02-20 | 2021-01-28 | Siemens Schweiz Ag | Method and arrangement for creating a digital building model |
CN110223297A (en) * | 2019-04-16 | 2019-09-10 | 广东康云科技有限公司 | Segmentation and recognition methods, system and storage medium based on scanning point cloud data |
CN109878926A (en) * | 2019-04-17 | 2019-06-14 | 上海振华重工(集团)股份有限公司 | The localization method and device of the fixed cage knob of container |
CN110148216A (en) * | 2019-05-24 | 2019-08-20 | 中德(珠海)人工智能研究院有限公司 | A kind of method of double ball curtain cameras and double ball curtain camera three-dimensional modelings |
CN110148216B (en) * | 2019-05-24 | 2023-03-24 | 中德(珠海)人工智能研究院有限公司 | Three-dimensional modeling method of double-dome camera |
CN110288650B (en) * | 2019-05-27 | 2023-02-10 | 上海盎维信息技术有限公司 | Data processing method and scanning terminal for VSLAM |
CN110288650A (en) * | 2019-05-27 | 2019-09-27 | 盎锐(上海)信息科技有限公司 | Data processing method and end of scan for VSLAM |
CN110421557A (en) * | 2019-06-21 | 2019-11-08 | 国网安徽省电力有限公司淮南供电公司 | Environmental perspective perception and the safe early warning of distribution network live line work robot protect system and method |
CN110276774A (en) * | 2019-06-26 | 2019-09-24 | Oppo广东移动通信有限公司 | Drawing practice, device, terminal and the computer readable storage medium of object |
CN110298136A (en) * | 2019-07-05 | 2019-10-01 | 广东金雄城工程项目管理有限公司 | Application based on BIM technology scene method of construction and system and in garden landscape digital modeling |
CN110910498B (en) * | 2019-11-21 | 2021-07-02 | 大连理工大学 | Method for constructing grid map by using laser radar and binocular camera |
CN110910498A (en) * | 2019-11-21 | 2020-03-24 | 大连理工大学 | Method for constructing grid map by using laser radar and binocular camera |
CN110992468A (en) * | 2019-11-28 | 2020-04-10 | 贝壳技术有限公司 | Point cloud data-based modeling method, device and equipment, and storage medium |
US11935186B2 (en) | 2019-11-28 | 2024-03-19 | Realsee (Beijing) Technology Co., Ltd. | Point cloud data based modeling method and apparatus, device and storage medium |
CN110969696A (en) * | 2019-12-19 | 2020-04-07 | 中德人工智能研究院有限公司 | Method and system for three-dimensional modeling rapid space reconstruction |
CN113496545A (en) * | 2020-04-08 | 2021-10-12 | 阿里巴巴集团控股有限公司 | Data processing system, method, sensor, mobile acquisition backpack and equipment |
CN113496545B (en) * | 2020-04-08 | 2022-05-27 | 阿里巴巴集团控股有限公司 | Data processing system, method, sensor, mobile acquisition backpack and equipment |
CN111739158B (en) * | 2020-06-29 | 2023-04-25 | 成都信息工程大学 | Three-dimensional scene image recovery method |
CN111739158A (en) * | 2020-06-29 | 2020-10-02 | 成都信息工程大学 | Erasure code based three-dimensional scene image recovery method |
CN113177975A (en) * | 2021-05-07 | 2021-07-27 | 中德(珠海)人工智能研究院有限公司 | Depth calculation method and three-dimensional modeling method based on dome camera and laser radar |
CN113177975B (en) * | 2021-05-07 | 2023-12-05 | 中德(珠海)人工智能研究院有限公司 | Depth calculation method and three-dimensional modeling method based on spherical screen camera and laser radar |
CN114440928A (en) * | 2022-01-27 | 2022-05-06 | 杭州申昊科技股份有限公司 | Combined calibration method for laser radar and odometer, robot, equipment and medium |
CN117351140A (en) * | 2023-09-15 | 2024-01-05 | 中国科学院自动化研究所 | Three-dimensional reconstruction method, device and equipment integrating panoramic camera and laser radar |
CN117351140B (en) * | 2023-09-15 | 2024-04-05 | 中国科学院自动化研究所 | Three-dimensional reconstruction method, device and equipment integrating panoramic camera and laser radar |
Also Published As
Publication number | Publication date |
---|---|
CN109102537B (en) | 2020-03-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109102537A (en) | Three-dimensional modeling method and system combining a laser radar and a dome camera | |
KR102524422B1 (en) | Object modeling and movement method and device, and device | |
JP5093053B2 (en) | Electronic camera | |
Kawai et al. | Diminished reality based on image inpainting considering background geometry | |
CN108470370B (en) | Method for jointly acquiring three-dimensional color point cloud by external camera of three-dimensional laser scanner | |
CN113052835B (en) | Medicine box detection method and system based on three-dimensional point cloud and image data fusion | |
WO2019161813A1 (en) | Dynamic scene three-dimensional reconstruction method, apparatus and system, server, and medium | |
US9747668B2 (en) | Reconstruction of articulated objects from a moving camera | |
CN109461210B (en) | Panoramic roaming method for online home decoration | |
CN108958469B (en) | Method for adding hyperlinks in virtual world based on augmented reality | |
JP2016537901A (en) | Light field processing method | |
CN107862718B (en) | 4D holographic video capture method | |
CN110544273B (en) | Motion capture method, device and system | |
CN113379901A (en) | Method and system for establishing house live-action three-dimension by utilizing public self-photographing panoramic data | |
CN109147027A (en) | Monocular image three-dimensional rebuilding method, system and device based on reference planes | |
CN108629828B (en) | Scene rendering transition method in the moving process of three-dimensional large scene | |
CN112270736A (en) | Augmented reality processing method and device, storage medium and electronic equipment | |
CN111739080A (en) | Method for constructing 3D space and 3D object by multiple depth cameras | |
CN107610219A (en) | Geometry-cue-aware pixel-level point cloud densification method in three-dimensional scene reconstruction | |
CN111862278A (en) | Animation obtaining method and device, electronic equipment and storage medium | |
CN108564654B (en) | Picture entering mode of three-dimensional large scene | |
CN108566545A (en) | Method for three-dimensional modeling of a large scene with a mobile terminal and a dome camera | |
CN113763544A (en) | Image determination method, image determination device, electronic equipment and computer-readable storage medium | |
WO2023098737A1 (en) | Three-dimensional reconstruction method, electronic device, and computer-readable storage medium | |
CN115497029A (en) | Video processing method, device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||