CN109102537B - Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera

Info

Publication number
CN109102537B
Authority
CN
China
Prior art keywords
dimensional
mobile terminal
dome camera
data
panoramic
Prior art date
Legal status
Active
Application number
CN201810663053.XA
Other languages
Chinese (zh)
Other versions
CN109102537A (en)
Inventor
崔岩
Current Assignee
Sino-German Institute Of Artificial Intelligence Ltd
Wuyi University
Original Assignee
Sino-German Institute Of Artificial Intelligence Ltd
Wuyi University
Priority date
Filing date
Publication date
Application filed by Sino-German Institute Of Artificial Intelligence Ltd, Wuyi University filed Critical Sino-German Institute Of Artificial Intelligence Ltd
Priority to CN201810663053.XA priority Critical patent/CN109102537B/en
Publication of CN109102537A publication Critical patent/CN109102537A/en
Application granted granted Critical
Publication of CN109102537B publication Critical patent/CN109102537B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10024 Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a three-dimensional modeling method and system combining a laser radar and a dome camera, and relates to the technical field of three-dimensional imaging and modeling. The method comprises the following steps: scanning the current scene in real time with a laser radar to acquire data; triggering a dome camera to take pictures to obtain panoramic photos; uploading the radar data and the panoramic photos to a mobile terminal, which calculates a path from the radar data and buffers the panoramic photos; and uploading the resulting path data and panoramic photo data to a server for three-dimensional modeling. Unlike visual SLAM, which loses tracking in scenes with few feature points (such as white walls and glass), the real-time scanning and positioning of the laser radar is more stable and accurate than the video-stream positioning of visual SLAM; the resulting three-dimensional model is free of distortion, and the modeled scene is more accurate and reliable.

Description

Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera
Technical Field
The invention relates to the technical field of three-dimensional imaging modeling, in particular to a three-dimensional modeling method and a three-dimensional modeling system combining a two-dimensional laser radar and a dome camera.
Background
In the process of three-dimensional modeling with a dome camera, because simultaneous localization and mapping (SLAM) technology is involved, the dome camera (generally binocular or multi-view) must continuously shoot video streams, so the amount of data to be processed is large. This places a heavy burden on the hardware, generates significant heat, and drains the battery within several minutes to ten minutes. Secondly, if the dome camera is used directly for spatial positioning, frames of the video stream shot by the dome camera must be used for SLAM positioning; the computation is heavy, occupies a large share of CPU resources, and greatly increases power consumption. In addition, with this positioning approach, the frames of the video stream must be stitched before SLAM positioning, which introduces distortion; and when the dome camera performs SLAM positioning, the data must be transmitted back to the processor, and the time difference of this transmission delays the real-time preview.
For this reason, visual simultaneous localization and mapping (VSLAM) was developed on the basis of SLAM. The advantage of VSLAM is the rich texture information it exploits. For example, two billboards of the same size but different content cannot be distinguished by a point-cloud-based laser SLAM algorithm, yet they are easily distinguished visually; this brings incomparable advantages in relocalization and scene classification. Visual information can also easily be used to track and predict dynamic objects in a scene, such as pedestrians and vehicles, which is essential for applications in complex dynamic scenes. Thirdly, the visual projection model theoretically allows objects at infinity to enter the picture, so with a reasonable configuration (such as a binocular camera with a long baseline) large scenes can be localized and mapped.
However, VSLAM loses tracking when feature points are few, as with white walls and glass. Although loop-closure detection re-optimizes the path, a large error remains even after many optimizations. Two-dimensional lidar positioning, by contrast, is more stable than VSLAM positioning because it scans continuously in real time. The two-dimensional laser radar is used for initial positioning and provides an initial position for back-end modeling, with the advantage of stable and accurate positioning.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a three-dimensional modeling method and system combining a two-dimensional laser radar and a dome camera, applicable to large scenes.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a method for three-dimensional modeling of a large scene by means of a two-dimensional lidar and a dome camera, the method comprising the steps of:
s1, scanning the current scene in real time by adopting a two-dimensional laser radar to acquire data;
s2, triggering the dome camera to take a picture to obtain a panoramic picture;
s3, uploading the radar data and the panoramic photos to the mobile terminal, and calculating a path and buffering the panoramic photos through the mobile terminal according to the radar data;
and S4, uploading the path data and the panoramic photo data processed in the step S3 to a server for three-dimensional modeling.
Further, in step S1, the data obtained by real-time scanning the current scene with the two-dimensional lidar includes position information, distance information, and rotation information.
Further, in step S2, the number of panoramic photographs is at least one.
Further, in step S2, the dome camera takes a number of panoramic photographs proportional to the area of the scene to be photographed, taking one panoramic photograph each time it moves 1.5 meters.
Further, in step S3, the calculating a path according to the radar data specifically includes the steps of:
s31, extracting two-dimensional laser radar scanning data to obtain environmental information;
s32, comparing and updating the data obtained by scanning the current two-dimensional laser radar with the data and the characteristics already existing in the map, and determining whether the characteristics come from the same position in the environment;
and S33, adopting a grid map to describe the environment.
Further, in step S32, a set of random particles is constructed in the state space according to the conditional probability distribution of the system state, the pose and the weight of each particle are continuously adjusted according to the observation information, and the previous conditional probability distribution of the system state is corrected according to the adjusted particle information.
Further, in step S3, the panoramic photograph buffered to the mobile terminal is presented in a picture preview form.
Further, in step S4, the server builds a three-dimensional model in the background from the path data and the panoramic photograph data processed by the mobile terminal and generates a link to return to the mobile terminal.
Further, in step S4, the server feeds back the operation information and the scene modeling information through the mobile terminal.
The invention has the beneficial effects that:
1. Compared with vision-based SLAM, which loses tracking where feature points are few (such as white walls and glass), the real-time scanning and positioning of the two-dimensional laser radar is more stable and accurate than video-stream positioning; the three-dimensional model created by the method is free of distortion, and the modeled scene is more accurate and reliable.
2. The dome camera is triggered and called only when a panoramic photo is taken or internal parameters are modified; the rest of the time it remains in a semi-sleep state, saving its energy consumption to the maximum extent.
Drawings
FIG. 1 is a schematic diagram of an apparatus for performing front end scanning operations using the method of the present invention;
FIG. 2 is a 2D map of a two-dimensional lidar scanning of the present invention;
FIG. 3 is a path diagram of a two-dimensional lidar positioning of the present invention;
FIG. 4 is a schematic diagram illustrating matching of feature points extracted from a scene according to the present invention;
FIG. 5 is a schematic diagram of the three-dimensional spatial position and the camera position of each feature point in a two-dimensional picture after feature point extraction and matching according to the present invention;
FIG. 6 is a schematic diagram of a preliminary model for structured modeling after sparse point cloud processing according to the present invention;
FIG. 7 is a schematic diagram of a virtual space model constructed by mapping according to the present invention.
Detailed Description
The present invention will be further described below. It should be noted that the following examples provide detailed embodiments and specific procedures based on the technical solution, but the scope of the present invention is not limited to these examples.
The invention relates to a method for carrying out three-dimensional modeling on a large scene through a two-dimensional laser radar 3 and a dome camera 1, which comprises the following steps:
s1, scanning the current scene in real time by adopting a two-dimensional laser radar 3 to obtain data;
s2, triggering the dome camera 1 to take a picture to obtain a panoramic picture;
s3, uploading the radar data and the panoramic photo to the mobile terminal 2, and calculating a path and buffering the panoramic photo according to the radar data through the mobile terminal 2;
and S4, uploading the path data and the panoramic photo data processed in the step S3 to a server for three-dimensional modeling.
It should be noted that the mobile terminal 2 referred to in the present invention includes, but is not limited to, a mobile phone, a tablet computer, and other terminal devices with a camera.
Preferably, in step S1, the data obtained by real-time scanning the current scene with the two-dimensional lidar 3 includes position information, distance information, and rotation information.
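For illustration only, one possible representation of a single scan sample carrying these three kinds of information is sketched below; the field names and units are assumptions, not part of the original disclosure:

    from dataclasses import dataclass

    @dataclass
    class LidarSample:
        """One 2D lidar return: scanner position, measured distance, beam rotation."""
        x: float         # scanner position when the sample was taken (meters)
        y: float
        distance: float  # measured range to the nearest obstacle (meters)
        angle: float     # rotation of the beam for this sample (radians)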
Preferably, in step S2, the number of panoramic photographs is at least one.
Preferably, in step S2, the dome camera 1 takes a number of panoramic pictures proportional to the area of the scene to be photographed, taking one panoramic picture each time it moves a preset distance; the preset distance may be 1-2 meters, and is preferably 1.5 meters.
Preferably, in step S3, the calculating a path from the radar data specifically includes the steps of:
and S31, extracting the two-dimensional laser radar scanning data to obtain environmental information. The characteristic point is where the curvature is mutated or the normal mutation is generated. The method for extracting the point cloud data feature points comprises a curvature-based boundary edge extraction method, a feature value-based boundary edge extraction method and a neighborhood information-based boundary edge extraction method, wherein the three methods respectively have advantages and disadvantages. In the extraction method of the three-dimensional point cloud data closed characteristic line provided by Demarsin, the normal direction of points is calculated by utilizing principal component analysis, and then the points are clustered based on the normal transformation of local neighborhood to form different clusters. In the process of judging the characteristic points, a method of comparing the normal included angle of the two points with the acceptable maximum angle threshold value is adopted, and the characteristic points are judged by taking one cluster as a unit. Cluster analysis is a data detection tool that is effective for unclassified cases, where the goal is to group objects into natural classes or clusters based on similarity or distance, and where object classes are unknown, clustering techniques are often more effective. Therefore, such techniques have found wide application in the instant positioning and mapping (SLAM) technique.
And S32, comparing the data obtained by the current scan of the two-dimensional laser radar 3 with the data and features already in the map, updating the map, and determining whether the features come from the same position in the environment. Data association (DA), also known as the consistency problem, refers to determining, during this comparison and update, whether the current sensor observation data originates from the same object in the environment.
And S33, adopting a grid map to describe the environment. The concept of the grid map was first proposed by Elfes and Moravec in 1985 and was later applied mainly in the field of robotics. A grid map divides the surrounding environment into a grid structure, generally into squares of equal size, and then assigns each grid cell an attribute value indicating its occupancy state. A two-dimensional grid map typically uses two attribute values, 0 and 1: 0 means the cell is unoccupied, and 1 means an obstacle exists in the cell. In a three-dimensional grid map, each cell's attribute value also contains the height of the obstacle. The advantage of grid maps is that they are easy to create and maintain and do not depend on the terrain of the environment.
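For illustration only, a minimal occupancy-grid sketch follows; the class layout and the 5 cm resolution are assumptions, not part of the original disclosure:

    import numpy as np

    class OccupancyGrid:
        """Minimal 2D occupancy grid: 0 = cell unoccupied, 1 = obstacle in cell."""
        def __init__(self, width_m, height_m, resolution_m=0.05):
            self.res = resolution_m
            self.grid = np.zeros((int(height_m / resolution_m),
                                  int(width_m / resolution_m)), dtype=np.uint8)

        def mark_hit(self, x_m, y_m):
            """Mark the cell containing world coordinate (x, y) as occupied."""
            row, col = int(y_m / self.res), int(x_m / self.res)
            if 0 <= row < self.grid.shape[0] and 0 <= col < self.grid.shape[1]:
                self.grid[row, col] = 1

    # Each lidar return (range r at bearing theta from scanner pose (px, py)) becomes:
    # grid.mark_hit(px + r * np.cos(theta), py + r * np.sin(theta))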
Preferably, in step S32, a set of random particles is constructed in the state space according to the conditional probability distribution of the system state; the pose and the weight of each particle are continuously adjusted according to the observation information, and the prior conditional probability distribution of the system state is corrected according to the adjusted particle information. In a preferred embodiment of the present invention, the two-dimensional lidar 3 employs a particle filter positioning method. The idea of particle filtering is to represent the posterior probability density distribution of the robot pose with N weighted particles {x_t^i, w_t^i} (i = 1, ..., N), where x_t^i is the pose of the i-th particle and w_t^i is its weight.
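For illustration only, a minimal sketch of one predict-update-resample cycle of such a particle filter follows; the motion_model and likelihood callbacks and the resampling threshold are assumptions, not part of the original disclosure:

    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, control, scan, motion_model, likelihood):
        """One cycle over the weighted particles {x_t^i, w_t^i}."""
        # 1. Predict: propagate each particle pose through the motion model.
        particles = np.array([motion_model(p, control, rng) for p in particles])
        # 2. Update: reweight each particle by the likelihood of the current lidar scan.
        weights = weights * np.array([likelihood(scan, p) for p in particles])
        weights /= weights.sum()
        # 3. Resample when the effective particle count collapses.
        if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights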
Preferably, in step S3, the panoramic photo buffered to the mobile terminal 2 is presented as a picture preview. The mobile terminal 2 visualizes the data shot by the dome camera 1, that is, the picture preview lets the operator observe in real time, and it also communicates with the server. In step S4, the server constructs a three-dimensional model from the path data and the panoramic photo data processed by the mobile terminal 2, and generates a link that is returned to the mobile terminal 2. The server feeds back operation information and scene modeling information through the mobile terminal, which is convenient for the operator and allows the scene modeling to be adjusted flexibly.
As a preferred embodiment of the present invention, the mobile terminal 2 is a smartphone, and the two-dimensional lidar 3 is preferably a two-dimensional lidar manufactured by Hokuyo Automatic Co., Ltd. of Japan, models URG-04LX and UTM-30LX.
It should be noted that, during the scanning and positioning of the two-dimensional laser radar 3, the mobile terminal 2 is always in an operating state, receiving the data of the two-dimensional laser radar 3 and the panoramic photos taken by the dome camera 1. The mobile terminal 2 and the dome camera 1 communicate wirelessly, for example over WIFI or Bluetooth of the prior art. The two-dimensional laser radar 3 and the mobile terminal 2 may communicate wirelessly or over a data cable; to ensure the data is not corrupted, in a preferred embodiment of the invention the data of the two-dimensional laser radar 3 is transmitted over a USB connection line.
It needs to be further explained that an APP application program is loaded on the mobile terminal 2. The dome camera 1 is connected with the mobile terminal 2 through WIFI and transmits each panoramic photo to the APP in real time; after shooting of the whole scene is finished, the photos are uploaded together to the server for three-dimensional modeling. The two-dimensional laser radar 3 transmits the scanned map data to the mobile terminal 2 through the USB connection line, and the APP on the mobile terminal 2 processes and computes it. When the dome camera 1 shoots the scene, the trigger points are defined by the user, i.e., the user decides when the dome camera 1 starts to shoot; when the arrangement of trigger points (that is, how far apart successive shots are taken) is moderately dense, the resulting model browses and transitions better.
Throughout the shooting process, the scanning of the two-dimensional laser radar 3 runs continuously, and the mobile terminal 2 is not used to take pictures or videos. The dome camera 1 is called only when data is collected (a photo is taken) or some internal parameters are modified; at other times it stays in a semi-sleep state in which it only keeps emitting WIFI. This division of labor between the mobile terminal 2 and the dome camera 1 keeps the energy consumption of the dome camera 1 low.
In the present embodiment, the three-dimensional reconstruction offline algorithm refers to the SfM (structure from motion) algorithm. In other embodiments, other three-dimensional reconstruction offline algorithms may also be employed.
In step S4, the three-dimensional modeling based on the picture taken by the dome camera 1 further includes the steps of:
S41, recognizing and matching feature points across at least one group of photos obtained by the dome camera 1;
S42, automatically performing closed-loop detection for the three-dimensional digital modeling based on the dome camera 1;
S43, performing digital modeling after the detection;
S44, structured mapping of the model.
It should be noted that, in the group of photos or video streams, the feature points (pixel points on the picture) of each single photo are extracted with SIFT descriptors; the neighborhood of each feature point is analyzed at the same time, and the feature point is described according to that neighborhood.
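For illustration only, a minimal sketch of such SIFT extraction and matching with OpenCV follows; the wrapper function and the Lowe ratio test are assumptions, not part of the original disclosure:

    import cv2

    def match_features(path_a, path_b, ratio=0.75):
        """Extract SIFT keypoints/descriptors from two photos and match them."""
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)
        # Ratio test: keep a match only if clearly better than the runner-up.
        matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
        good = [m for m, n in matches if m.distance < ratio * n.distance]
        return kp_a, kp_b, good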
It should be noted that closed-loop detection works as follows: the currently computed position of the dome camera 1 is compared with previously computed positions of the dome camera 1 to check whether they are close; if the distance between them falls within a certain threshold, the dome camera 1 is considered to have returned to a place it has already visited, and closed-loop detection is triggered.
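For illustration only, a minimal sketch of this distance-threshold test follows; the threshold value and the skipping of recent poses are assumptions, not part of the original disclosure:

    import numpy as np

    def detect_loop_closure(current_pos, past_positions, threshold_m=0.5, skip_recent=20):
        """Return the index of an earlier camera position within the threshold, else None."""
        # Skip the most recent poses so neighboring shots do not trigger a false loop.
        for i, past in enumerate(past_positions[:-skip_recent]):
            if np.linalg.norm(np.asarray(current_pos) - np.asarray(past)) < threshold_m:
                return i   # the camera has returned near a previously visited place
        return None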
It should be further noted that filtering means: after the three-dimensional coordinate corresponding to a certain point in a two-dimensional picture has been determined, the three-dimensional point is re-projected onto the original dome-screen picture to confirm whether it still lands on the same point. Because a point in the two-dimensional picture and its position in the three-dimensional world correspond one to one, re-projecting the three-dimensional point verifies whether the two-dimensional point is still at its original position, which determines whether the pixel is a noise point and whether it needs to be filtered out. It should also be noted that an optimal picture from a certain dome camera 1 is chosen among the photos; the candidates also include, as alternatives, frames of the video stream taken by the mobile terminal 2 at the current position.
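For illustration only, a minimal sketch of this re-projection check follows; the projection callback and the pixel tolerance are assumptions, not part of the original disclosure:

    import numpy as np

    def is_noise_point(pt3d, observed_px, project_to_pano, max_err_px=2.0):
        """Re-project a triangulated 3D point onto the original panorama and
        treat it as noise if it lands far from the pixel it came from."""
        reprojected_px = np.asarray(project_to_pano(pt3d))
        return np.linalg.norm(reprojected_px - np.asarray(observed_px)) > max_err_px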
It should be noted that, when a plurality of dome cameras 1 all see a certain target and capture it in their pictures, the optimal one of those pictures is selected for mapping.
It should be further explained that the corresponding dome camera 1 and the color of the captured image are calculated with the formulas:

V1 = normalize(CameraMatrix_i * V0)

[The two formulas for Tx and Ty are rendered only as images in the source (GDA0002094911210000101, GDA0002094911210000102); they map the unit-sphere point V1 to the panoramic texture coordinates.]

In the formulas: V0 is the coordinate (x, y, z, 1) of any spatial point to be sampled, i.e., of every point to be rasterized for the model; V1 is the new position coordinate of V0 transformed into camera space and normalized onto the unit sphere; Tx and Ty are the texture coordinates (x, y) corresponding to V0, in the OPENGL texture coordinate system; Aspect_i is the aspect ratio of the i-th panoramic picture used for sampling; CameraMatrix_i is the transformation matrix of the i-th panoramic picture used for sampling, which transforms the position of the dome camera 1 to the origin and resets the facing direction of the dome camera 1.
Based on the above, it should be noted that the closed loop detection is a dynamic process, which is continuously performed during the process of taking the dome photograph.
As shown in fig. 1, the present invention further provides a three-dimensional modeling system combining a two-dimensional lidar and a dome camera. The system comprises the dome camera 1 and the two-dimensional laser radar 3, each in signal connection with the mobile terminal 2; in practical application, the two-dimensional laser radar 3 is connected through the USB interface of the mobile terminal 2, over which the two exchange data. The mobile terminal 2 and the server are in wireless communication; the preliminary positioning and calibration program, when executed by the mobile terminal, implements step S3 of claim 1, and the three-dimensional modeling program, when executed by the server, implements step S4 of claim 1. The two-dimensional laser radar 3, the dome camera 1, and the mobile terminal 2 are integrated into a whole, i.e., their geographic positions coincide for the purposes of the invention, and the distances between them can be neglected. As shown in FIG. 1, the dome camera 1 is arranged at the upper end of a vertical rod, the mobile terminal 2 is rotatably fixed at the middle of the rod so that its screen angle can be adjusted, and the two-dimensional laser radar 3 is connected to the mobile terminal 2 through its USB interface.
The combination of the two-dimensional laser radar 3 with simultaneous localization and mapping (SLAM), which is prior art, is briefly described as follows: the two-dimensional laser radar 3 obtains the distances to surrounding visible objects and their point cloud information; by matching and comparing the point clouds, the pose change of the two-dimensional laser radar 3 between adjacent moments is computed, the mobile terminal is thereby positioned, and the output is optimized with a filtering or optimization theory to obtain the optimal pose estimate, i.e., the current absolute or relative position, so that the current position of the mobile terminal is known.
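For illustration only, a minimal sketch of the point-cloud matching step between adjacent moments follows; it assumes the correspondences are already given (real lidar SLAM iterates this with nearest-neighbor search, as in ICP) and is not part of the original disclosure:

    import numpy as np

    def align_scans(prev_pts, curr_pts):
        """Least-squares rigid 2D transform between two corresponded scans (SVD)."""
        pc, cc = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
        H = (curr_pts - cc).T @ (prev_pts - pc)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = pc - R @ cc
        return R, t                # pose change of the lidar between adjacent moments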
The method performs digital three-dimensional modeling of a large scene based on the two-dimensional laser radar 3 and SLAM. The real-time scanning and positioning of the two-dimensional laser radar 3 is more stable and accurate than the video-stream positioning of vision-based SLAM; the vision-based SLAM problem of losing tracking where feature points are few (such as white walls and glass) does not arise in the invention. Thanks to the real-time scanning and positioning of the two-dimensional laser radar 3, the three-dimensional model created by the method is free of distortion, and the modeled scene is more accurate and reliable.
The invention is described below with reference to the accompanying drawings. As shown in fig. 2, the current scene is scanned by the two-dimensional lidar in real time, and the contour boundary of the scanned scene appears on the 2D map; the figure shows the 2D map data obtained by scanning the current scene in real time.
Further, as shown in fig. 3, a path diagram of the two-dimensional lidar positioning of the present invention is shown. When the scanning position of the two-dimensional laser radar is changed, the moving position of the two-dimensional laser radar is recorded to form a complete moving path diagram.
Further, as shown in fig. 4, a schematic diagram of matching the feature points extracted from the scene is given. The feature points are automatically extracted from a dome-screen photo (sample picture), so that the picture is mainly represented by points on it, and the extracted feature points are then matched. It should be noted that, in actual practice, the feature points of all photos taken of a certain scene may be matched.
Further, as shown in fig. 5, a schematic diagram of the three-dimensional spatial position and the camera position of each feature point in the two-dimensional picture after feature point extraction and matching is given; further processing based on fig. 4 yields the three-dimensional spatial position and the camera position of each feature point in the two-dimensional picture, forming a sparse point cloud. In the figure, the smaller points are the sparse point cloud, and the larger points are the camera positions.
As shown in fig. 6, a schematic diagram of the preliminary model of the structured modeling after sparse point cloud processing is given. The point cloud obtained from the processing in FIG. 5 is structurally modeled to generate a three-dimensional model entity of the scene. After the modeling is completed, automatic mapping is performed based on the spatial structure of FIG. 6, forming a virtual space model consistent with the real world, namely the one shown in FIG. 7; the mapping is automatically matched using the mapping data in the server.
The present invention has been further described above. It should be noted that this embodiment provides detailed implementations and specific operation procedures based on the technical solution, but the protection scope of the present invention is not limited to this embodiment.

Claims (8)

1. A three-dimensional modeling method combining a two-dimensional lidar and a dome camera, the method comprising the steps of:
S1, scanning the current scene in real time by adopting a two-dimensional laser radar to acquire data; the data includes position information, distance information, and rotation information;
S2, triggering the dome camera to take a picture to obtain a panoramic picture;
s3, uploading the radar data and the panoramic photos to the mobile terminal, and calculating a path and buffering the panoramic photos through the mobile terminal according to the radar data; wherein the calculating a path from the radar data specifically comprises the steps of:
s31, extracting two-dimensional laser radar scanning data to obtain environmental information;
s32, comparing and updating the data obtained by scanning the current two-dimensional laser radar with the data and the characteristics already existing in the map, and determining whether the characteristics come from the same position in the environment;
s33, adopting a grid map to describe the environment;
s4, uploading the path data and the panoramic photo data processed in the step S3 to a server for three-dimensional modeling;
wherein the three-dimensional modeling comprises:
s41, recognizing and matching based on the feature points of at least one group of photos obtained by the dome camera 1;
s42, carrying out closed loop detection of three-dimensional digital modeling based on the position change of the dome camera 1;
after S43 detection, carrying out digital modeling;
s44 structuring the model map.
2. The method for three-dimensional modeling by combining two-dimensional lidar and a dome camera according to claim 1, wherein in step S2, the number of the panoramic photo is at least one.
3. The method for three-dimensional modeling by combining two-dimensional lidar and a dome camera according to claim 1 or 2, wherein in step S2, the dome camera takes a number of panoramic photographs proportional to the area of the scene to be photographed, taking one panoramic photograph each time it moves a predetermined distance.
4. The method of claim 1, wherein in step S32, a set of random particles is constructed in the state space according to the conditional probability distribution of the system state, the pose and weight of each particle are continuously adjusted according to the observation information, and the previous conditional probability distribution of the system state is corrected according to the adjusted particle information.
5. The method for three-dimensional modeling by combining two-dimensional lidar and a dome camera according to claim 1, wherein in step S3, the panoramic photo buffered to the mobile terminal is presented in a picture preview form.
6. The method for three-dimensional modeling by combining two-dimensional lidar and a dome camera according to claim 1, wherein in step S4, the server builds a three-dimensional model in the background according to the path data and the panoramic photo data processed on the mobile terminal and generates a link to return to the mobile terminal.
7. The method for three-dimensional modeling by combining two-dimensional lidar and a dome camera according to claim 1, wherein in step S4, the server feeds back the operation information and the scene modeling information through the mobile terminal.
8. A two-dimensional lidar and dome camera combined three-dimensional modeling system, the system comprising: a dome camera, a two-dimensional lidar, a mobile terminal, and a server; a preliminary positioning and calibration program stored in and operable on the mobile terminal for calculating a path from radar data and buffering panoramic photographs; and a three-dimensional modeling program stored in and operable on the server; the dome camera and the two-dimensional lidar each being in signal connection with the mobile terminal, and the mobile terminal and the server being in wireless communication; the preliminary positioning and calibration program, when executed by the mobile terminal, implementing step S3 of claim 1, and the three-dimensional modeling program, when executed by the server, implementing step S4 of claim 1.
CN201810663053.XA 2018-06-25 2018-06-25 Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera Active CN109102537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810663053.XA CN109102537B (en) 2018-06-25 2018-06-25 Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810663053.XA CN109102537B (en) 2018-06-25 2018-06-25 Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera

Publications (2)

Publication Number Publication Date
CN109102537A CN109102537A (en) 2018-12-28
CN109102537B (en) 2020-03-20

Family

ID=64844942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810663053.XA Active CN109102537B (en) 2018-06-25 2018-06-25 Three-dimensional modeling method and system combining two-dimensional laser radar and dome camera

Country Status (1)

Country Link
CN (1) CN109102537B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019202304B4 (en) * 2019-02-20 2021-01-28 Siemens Schweiz Ag Method and arrangement for creating a digital building model
CN110223297A (en) * 2019-04-16 2019-09-10 广东康云科技有限公司 Segmentation and recognition methods, system and storage medium based on scanning point cloud data
CN109878926A (en) * 2019-04-17 2019-06-14 上海振华重工(集团)股份有限公司 The localization method and device of the fixed cage knob of container
CN110148216B (en) * 2019-05-24 2023-03-24 中德(珠海)人工智能研究院有限公司 Three-dimensional modeling method of double-dome camera
CN110288650B (en) * 2019-05-27 2023-02-10 上海盎维信息技术有限公司 Data processing method and scanning terminal for VSLAM
CN110421557A (en) * 2019-06-21 2019-11-08 国网安徽省电力有限公司淮南供电公司 Environmental perspective perception and the safe early warning of distribution network live line work robot protect system and method
CN110276774B (en) * 2019-06-26 2021-07-23 Oppo广东移动通信有限公司 Object drawing method, device, terminal and computer-readable storage medium
CN110298136A (en) * 2019-07-05 2019-10-01 广东金雄城工程项目管理有限公司 Application based on BIM technology scene method of construction and system and in garden landscape digital modeling
CN110910498B (en) * 2019-11-21 2021-07-02 大连理工大学 Method for constructing grid map by using laser radar and binocular camera
CN110992468B (en) 2019-11-28 2020-10-30 贝壳找房(北京)科技有限公司 Point cloud data-based modeling method, device and equipment, and storage medium
CN110969696A (en) * 2019-12-19 2020-04-07 中德人工智能研究院有限公司 Method and system for three-dimensional modeling rapid space reconstruction
CN113496545B (en) * 2020-04-08 2022-05-27 阿里巴巴集团控股有限公司 Data processing system, method, sensor, mobile acquisition backpack and equipment
CN111739158B (en) * 2020-06-29 2023-04-25 成都信息工程大学 Three-dimensional scene image recovery method
CN113177975B (en) * 2021-05-07 2023-12-05 中德(珠海)人工智能研究院有限公司 Depth calculation method and three-dimensional modeling method based on spherical screen camera and laser radar
CN114440928A (en) * 2022-01-27 2022-05-06 杭州申昊科技股份有限公司 Combined calibration method for laser radar and odometer, robot, equipment and medium
CN117351140B (en) * 2023-09-15 2024-04-05 中国科学院自动化研究所 Three-dimensional reconstruction method, device and equipment integrating panoramic camera and laser radar

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104573646A (en) * 2014-12-29 2015-04-29 长安大学 Detection method and system, based on laser radar and binocular camera, for pedestrian in front of vehicle
CN107392944A (en) * 2017-08-07 2017-11-24 广东电网有限责任公司机巡作业中心 Full-view image and the method for registering and device for putting cloud
WO2018066352A1 (en) * 2016-10-06 2018-04-12 株式会社アドバンスド・データ・コントロールズ Image generation system, program and method, and simulation system, program and method
CN108171780A (en) * 2017-12-28 2018-06-15 电子科技大学 A kind of method that indoor true three-dimension map is built based on laser radar

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8817018B1 (en) * 2011-06-13 2014-08-26 Google Inc. Using photographic images to construct a three-dimensional model with a curved surface
CN103995264A (en) * 2014-04-17 2014-08-20 北京金景科技有限公司 Vehicle-mounted mobile laser radar mapping system
EP3086283B1 (en) * 2015-04-21 2019-01-16 Hexagon Technology Center GmbH Providing a point cloud using a surveying instrument and a camera device
CN105678783B (en) * 2016-01-25 2018-10-19 西安科技大学 Refractive and reflective panorama camera merges scaling method with laser radar data
US10402675B2 (en) * 2016-08-30 2019-09-03 The Boeing Company 2D vehicle localizing using geoarcs
CN106443687B (en) * 2016-08-31 2019-04-16 欧思徕(北京)智能科技有限公司 A kind of backpack mobile mapping system based on laser radar and panorama camera
CN206095238U (en) * 2016-10-11 2017-04-12 广州正度数据处理服务有限公司 Dynamic testing that moves who uses 360 degrees panoramic camera paints device
CN107948501A (en) * 2017-10-30 2018-04-20 深圳市易成自动驾驶技术有限公司 Automatic ring vision method, device and computer-readable recording medium
CN108053473A (en) * 2017-12-29 2018-05-18 北京领航视觉科技有限公司 A kind of processing method of interior three-dimensional modeling data

Also Published As

Publication number Publication date
CN109102537A (en) 2018-12-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant