CN110223223A - Street scan method, device and scanner - Google Patents

Street scan method, device and scanner

Info

Publication number
CN110223223A
CN110223223A CN201910349750.2A
Authority
CN
China
Prior art keywords
data
point cloud
image
street
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910349750.2A
Other languages
Chinese (zh)
Inventor
苏腾
Current Assignee
Beijing Qingcheng Tongheng Intelligence Park High-Tech Research Institute Co Ltd
Original Assignee
Beijing Qingcheng Tongheng Intelligence Park High-Tech Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qingcheng Tongheng Intelligence Park High-Tech Research Institute Co Ltd filed Critical Beijing Qingcheng Tongheng Intelligence Park High-Tech Research Institute Co Ltd
Priority to CN201910349750.2A priority Critical patent/CN110223223A/en
Publication of CN110223223A publication Critical patent/CN110223223A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/176Urban or other man-made structures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention discloses a street scanning method, device and scanner, relating to the technical field of surveying and mapping, with the main purpose of solving the problem of low measurement efficiency in the existing street scanning process. The main technical scheme of the embodiment of the invention is as follows: acquiring point cloud data, image data and position parameters, the point cloud data, image data and position parameters being data respectively obtained by scanning the street during a moving process; synchronizing the point cloud data with the image data according to time information; and fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information, the image information being a street image fusing the point cloud data and the image data. The embodiment of the invention is mainly used for scanning and mapping urban streets.

Description

Street scanning method, device and scanner
Technical Field
The embodiment of the invention relates to the technical field of surveying and mapping, in particular to a street scanning method, a street scanning device and a scanner.
Background
With the continuous progress of science and technology, smart cities are the trend of modern development. In the construction of a smart city, accurate position information and visual map expression are key links in realizing digitization, so street scanning is of great significance for surveying and mapping, urban renewal and urban planning in the construction of digital urban maps.
At present, urban street surveying and mapping is mainly carried out with mapping-grade laser radars: positions are selected along the street, and the laser radar is set up at each position to scan and map a target area. In practice, however, such a mapping-grade laser radar is mounted on a tripod and has no mobility in use; it must be carried manually, which reduces mapping efficiency. When measuring a large street area, scanning points must be set up at different positions, the laser radar moved to each point in turn, multiple measurements taken, and the measurements then spliced together, which is time-consuming. The existing urban street measurement process therefore suffers from low mapping efficiency.
Disclosure of Invention
In view of this, embodiments of the present invention provide a street scanning method, device and scanner, which mainly aim to improve the measurement efficiency in the urban street mapping process and solve the problem of low measurement efficiency in the existing street scanning process.
In order to achieve the above purpose, the embodiments of the present invention mainly provide the following technical solutions:
in a first aspect, an embodiment of the present invention provides a street scanner, including: a data acquisition component, a data processing component and a mobile carrier, wherein,
the data acquisition assembly is arranged at the top of the mobile carrier and is used for acquiring point cloud data, image data and position parameters;
the data acquisition assembly is connected with the data processing assembly.
Optionally, the data collecting assembly includes: the system comprises a laser radar, a panoramic camera, a differential global positioning system, an inertial sensor and a synchronization module;
the laser radar is used for collecting point cloud data, the panoramic camera is used for collecting image data, the differential global positioning system and the inertial sensor are used for collecting position and attitude information, the laser radar, the panoramic camera, the differential global positioning system and the inertial sensor are respectively connected with the synchronization module, and the synchronization module is used for synchronizing the point cloud data and the image data with the position and attitude information respectively.
In a second aspect, an embodiment of the present invention provides a street scanning method, where the method is implemented based on the scanner described in the first aspect, and includes:
acquiring point cloud data, image data and position parameters, wherein the point cloud data, the image data and the position parameters are respectively data obtained after scanning a street in a moving process;
synchronizing the point cloud data and the image data according to time information, wherein the time information is determined when the point cloud data and the image data are acquired;
and fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information, wherein the image information is a street image fused with the point cloud data and the image data.
Optionally, the point cloud data is collected by a laser radar;
the image data is acquired by a panoramic camera;
the position parameters are position and attitude information, and the position and attitude information is acquired through a differential global positioning system and an inertial sensor.
Optionally, after fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information, the method includes:
and performing vectorization operation on the fused image information to obtain a vectorized image, wherein the vectorized image is a line image of the street obtained by extracting lines from the street after non-target objects are removed.
Optionally, the performing vectorization operation on the fused image information to obtain a vectorized image includes:
carrying out feature extraction on the fused image information;
identifying objects in the street according to the extracted features on the fused image information;
classifying the objects in the street into target objects and non-target objects according to preset screening conditions, wherein the preset screening conditions comprise the target objects set by a user and their corresponding features;
and deleting the non-target object from the fused image information, and extracting lines to obtain the vectorized image.
Optionally, the fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information includes:
superposing the point cloud data after synchronization and the image data after synchronization to obtain single-frame images corresponding to different moments;
and determining the relative relationship between the single-frame images according to the position information in the position parameters, and splicing all the single-frame images according to the relative relationship to obtain the fused image information containing the point cloud data and the image data.
In a third aspect, an embodiment of the present invention further provides a street scanning apparatus, where the apparatus includes:
an acquisition unit, configured to acquire point cloud data, image data and position parameters, wherein the point cloud data, the image data and the position parameters are respectively data obtained by scanning a street during a moving process;
the synchronization unit is used for synchronizing the point cloud data and the image data according to the time information;
and the fusion unit is used for fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information, wherein the image information is a street image fused with the point cloud data and the image data.
Optionally, the point cloud data is collected by a laser radar;
the image data is acquired by a panoramic camera;
the position parameters are position and attitude information, and the position and attitude information is acquired through a differential global positioning system and an inertial sensor.
Optionally, the apparatus further comprises:
and the vectorization operation unit is used for performing vectorization operation on the fused image information to obtain a vectorized image, wherein the vectorized image is a line image of the street obtained by extracting lines from the street after the non-target objects are removed.
Optionally, the vectorization operation unit includes:
the extraction module is used for extracting the characteristics of the fused image information;
the recognition module is used for recognizing the object in the street according to the extracted features;
the classification module is used for classifying the objects in the street into target objects and non-target objects according to preset screening conditions, wherein the preset screening conditions comprise the target objects set by a user and their corresponding features;
and the vectorization module is used for deleting the non-target objects from the fused image information and extracting lines to obtain the vectorized image.
Optionally, the fusion unit includes:
the superposition module is used for superposing according to the synchronized point cloud data and the synchronized image data to obtain single-frame images corresponding to different moments;
and the splicing module is used for determining the relative relationship between the single-frame images according to the position information in the position parameters, and splicing all the single-frame images according to the relative relationship to obtain the fused image information of the whole street, containing the point cloud data and the image data.
Compared with the low mapping efficiency of the existing street mapping process, the street scanning method, device and scanner provided by the embodiments of the present invention first acquire point cloud data, image data and position parameters, then synchronize the point cloud data and the image data according to time information, and finally fuse the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information. Since the point cloud data, image data and position parameters are all obtained by scanning the street during a moving process, street scanning is realized while moving, avoiding both the need to select a set-up position each time and the time-consuming relocation of a mapping-grade laser radar, thereby improving scanning efficiency. In addition, by fusing the point cloud data and the image data during scanning, the image data obtained by street scanning is made more comprehensive.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments may be more clearly understood and implemented according to the content of the description, and in order that the above and other objects, features and advantages of the embodiments may be more clearly understandable, the detailed description of the embodiments of the present invention is provided below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the embodiments of the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 illustrates a street scanner provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a street scanning method according to an embodiment of the present invention;
FIG. 3 is a flow chart of another street scanning method provided by an embodiment of the invention;
FIG. 4 is a block diagram of a street scanning apparatus according to an embodiment of the present invention;
fig. 5 is a block diagram of another street scanning apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
An embodiment of the present invention first provides a street scanner, as shown in fig. 1, including:
a data acquisition component 11, a data processing component 12 and a mobile carrier 13,
wherein,
in the embodiment of the present invention, the data acquisition component 11 is disposed on the top of the mobile carrier 13 and is configured to acquire point cloud data, image data and position parameters.
The data acquisition component 11 is connected with the data processing component 12, and the data processing component 12 is used for processing the data output by the data acquisition component 11. The data processing component 12 may be disposed in the mobile carrier 13 as shown in fig. 1; of course, it may also be disposed elsewhere, for example in a remote machine room remotely connected to the data acquisition component 11. The disposition described in the embodiment of the present invention is exemplary, and the specific position may be selected according to actual needs.
In this way, the street scanner can move on its mobile carrier and scan while moving. This solves the problems of the existing scanning process, which requires manual carrying and thus reduces mapping efficiency; when measuring a large street area it is no longer necessary to set up scanning points at multiple positions, measure many times and then splice the measurements, so measurement efficiency is improved.
Wherein, in the above street scanner, the data acquisition assembly 11 comprises: a laser radar 111, a panoramic camera 112, a differential global positioning system and inertial sensor 113, and a synchronization module 114;
the laser radar 111 is used for collecting point cloud data, the panoramic camera 112 is used for collecting image data, the differential global positioning system and the inertial sensor 113 are used for collecting position and attitude information, the laser radar 111, the panoramic camera 112, the differential global positioning system and the inertial sensor 113 are respectively connected with the synchronization module 114, and the synchronization module is used for synchronizing the point cloud data and the image data with the position and attitude information respectively. In addition, the differential global positioning system and inertial sensor 113 is provided with an antenna so as to be connected to an external satellite through the antenna.
Further, an embodiment of the present invention further provides a street scanning method, as shown in fig. 2, the method includes:
201. and acquiring point cloud data, image data and position parameters.
The point cloud data, the image data and the position parameters are respectively data obtained after scanning a street in the moving process.
In the embodiment of the present invention, street scanning is performed with the scanner described in the foregoing embodiment. Specifically, the point cloud data can be collected by a laser radar, the image data by a preset camera, and the position parameters by a positioning module such as a global positioning system. The acquisition method may include, but is not limited to, any of the above, and is not limited herein.
202. And synchronizing the point cloud data and the image data according to the time information.
The time information is determined at the moment the point cloud data and the image data are acquired. Since the point cloud data, the image data and the position parameters obtained in the above step are all acquired during movement, the data must be synchronized before a street image can be generated, i.e. the point cloud data and image data acquired at the same moment must be matched. In addition, in actual operation the acquisition devices for the point cloud data and the image data may be physically separated, so that the acquired data lie in different coordinate systems. To ensure accuracy in the subsequent fusion, in embodiments of the present invention the coordinate systems of the acquisition devices may be initialized before the point cloud data and the image data are acquired, ensuring that both are positioned in the same coordinate system and thereby ensuring accuracy.
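The time synchronization described above can be sketched as a nearest-timestamp matching step. The function below is illustrative only (its name, the tolerance value and the list-based interface are assumptions, not part of the disclosure); it pairs each lidar frame with the closest camera frame in time:

```python
from bisect import bisect_left

def synchronize(lidar_stamps, camera_stamps, tolerance=0.05):
    """Pair each lidar timestamp with the nearest camera timestamp.

    `camera_stamps` must be sorted ascending. Returns (lidar_index,
    camera_index) pairs whose time difference is within `tolerance`
    seconds; frames with no close-enough partner are dropped.
    """
    pairs = []
    for i, t in enumerate(lidar_stamps):
        j = bisect_left(camera_stamps, t)
        # candidates: the camera frame just before and just after t
        best = None
        for k in (j - 1, j):
            if 0 <= k < len(camera_stamps):
                if best is None or abs(camera_stamps[k] - t) < abs(camera_stamps[best] - t):
                    best = k
        if best is not None and abs(camera_stamps[best] - t) <= tolerance:
            pairs.append((i, best))
    return pairs
```

A hardware synchronization module (as in the scanner above) makes these timestamps comparable in the first place; the matching itself then reduces to the search shown here.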
203. And fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information.
The image information is a street image fused with point cloud data and image data.
After the synchronization in step 202, synchronized point cloud data and synchronized image data are obtained. To obtain a street image containing both, the two are fused in this step according to the corresponding position parameters: single-frame images can be fused using the position coordinates of the data, and when fusing multiple frames, the single-frame images are spliced according to the position parameters.
The specific fusion may, for example, proceed as follows: first, determine the correspondence between the position parameters of the point cloud data at each moment and those of the image data at each moment; then splice the point cloud data and image data of the same moment according to the position parameters to obtain the corresponding street scanning image.
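Fusing the point cloud and the image at a single moment can be illustrated by projecting lidar points into the camera image and attaching pixel colors to them. This is a minimal sketch under an assumed pinhole model; the function name and intrinsics are hypothetical, and the fixed lidar-to-camera extrinsic is assumed to have been applied beforehand:

```python
import numpy as np

def colorize_points(points, image, fx, fy, cx, cy):
    """Attach RGB values from a camera image to lidar points.

    `points` is an (N, 3) array already in the camera coordinate
    system; `image` is (H, W, 3). Returns an (M, 6) array of
    [x, y, z, r, g, b] for the points that project into the image.
    """
    h, w = image.shape[:2]
    out = []
    for x, y, z in points:
        if z <= 0:                      # point behind the camera
            continue
        u = int(fx * x / z + cx)        # pinhole projection
        v = int(fy * y / z + cy)
        if 0 <= u < w and 0 <= v < h:
            out.append([x, y, z, *image[v, u]])
    return np.array(out)
```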
Compared with the low mapping efficiency of the existing street mapping process, the street scanning method provided by the embodiment of the present invention first acquires point cloud data, image data and position parameters, then synchronizes the point cloud data and the image data according to time information, and finally fuses the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information. Since the point cloud data, image data and position parameters are all obtained by scanning the street during a moving process, street scanning is realized while moving, avoiding both the need to select a set-up position each time and the time-consuming relocation of a mapping-grade laser radar, thereby improving scanning efficiency. In addition, by fusing the point cloud data and the image data during scanning, the image data obtained by street scanning is made more comprehensive.
For purposes of more detailed description, another street scanning method is provided in accordance with an embodiment of the present invention, and as particularly shown in fig. 3, the method includes:
301. and acquiring point cloud data, image data and position parameters.
The point cloud data, the image data and the position parameters are respectively data obtained after scanning a street in the moving process. Specifically, the point cloud data is collected by a laser radar; the image data is acquired by a panoramic camera; the position parameters are position and attitude information, and the position and attitude information is acquired through a differential global positioning system and an inertial sensor.
In the embodiment of the invention, the laser radar can be a mechanically rotating multi-line 3D laser radar of the kind commonly used in intelligent driving. Such a laser radar has a 360-degree horizontal field of view with 0.1-0.2-degree horizontal resolution, a 20-40-degree vertical field of view with 0.3-2-degree vertical resolution, and centimeter-level ranging accuracy, and can produce accurate 3D point cloud data of the surrounding scene.
The panoramic camera can be formed by stitching the fields of view of four ten-megapixel industrial cameras, and is widely used in street view collection at present. It likewise has a 360-degree horizontal field of view, together with a 270-300-degree vertical field of view, and can capture high-definition RGB images of streets.
The differential global positioning system acquires high-precision position information by applying differential corrections to GNSS (Global Navigation Satellite System) fixes. However, in a complex dynamic environment, especially in a large city, multipath reflection is significant, so the obtained GNSS positioning information can easily be several meters in error. In addition, since the GNSS update frequency is low (about 10 Hz), it is difficult to give accurate real-time positioning when the vehicle is traveling fast; navigation relying solely on GNSS could even lead to traffic accidents. Therefore, in the embodiment of the present invention the GNSS is generally assisted by an inertial sensor, i.e. an INS (Inertial Navigation System), to enhance positioning accuracy. The INS is a high-frequency (1 kHz) sensor that measures acceleration and rotational motion, but it suffers from bias accumulation and noise, which affect the result. Using Kalman-filter-based sensor fusion, the GNSS and INS data can be combined, pairing the GNSS's high positioning accuracy and freedom from error accumulation with the INS's autonomy and real-time response, to obtain higher positioning accuracy. When the GPS signal is good, the positioning and attitude information of the data acquisition unit is taken from the fused GNSS and INS data.
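The Kalman-filter-based GNSS/INS fusion described above can be sketched in one dimension. The function below and its noise parameters are illustrative assumptions, not the patented implementation; a production system would run a full 3-D error-state filter including attitude, but the predict/update structure is the same:

```python
import numpy as np

def fuse_gnss_ins(gnss_positions, ins_accels, dt=0.1,
                  gnss_var=4.0, accel_var=0.01):
    """1-D Kalman filter fusing GNSS position fixes with INS acceleration.

    State x = [position, velocity]. The INS acceleration drives the
    prediction at each step; each GNSS fix then corrects the drift.
    Returns the fused position estimate at every step.
    """
    F = np.array([[1, dt], [0, 1]])      # constant-velocity model
    B = np.array([0.5 * dt**2, dt])      # acceleration input mapping
    H = np.array([[1.0, 0.0]])           # GNSS observes position only
    Q = np.outer(B, B) * accel_var       # process noise from accel noise
    R = np.array([[gnss_var]])           # GNSS measurement noise

    x = np.array([gnss_positions[0], 0.0])
    P = np.eye(2)
    track = []
    for z, a in zip(gnss_positions, ins_accels):
        # predict using the INS measurement
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # update using the GNSS fix
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return track
```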
However, in some special scenarios the GNSS signal may be lost for a long time, for example in narrow streets or other areas with severe occlusion, and the positioning accuracy of the INS gradually degrades over time. In this case, SLAM (Simultaneous Localization And Mapping) based on the laser point cloud is combined with the INS and GNSS for fused positioning: the positions before the differential GNSS signal was lost and after it recovered are used as endpoint constraints, and the intermediate section is mapped with laser SLAM. To avoid the accumulated error of SLAM, nonlinear optimization methods such as graph optimization are used to optimize the SLAM result.
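The endpoint-constrained optimization over a GNSS outage can be sketched as a small least-squares pose graph. The 1-D simplification, the function name and the anchor weight are assumptions for illustration only; real laser SLAM optimizes full 6-DOF poses:

```python
import numpy as np

def optimize_trajectory(odometry, start, end):
    """Least-squares pose-graph sketch for a GNSS outage (1-D positions).

    `odometry` holds the relative SLAM/INS displacements between
    consecutive poses; `start` and `end` are the GNSS fixes before the
    signal was lost and after it recovered, used as endpoint constraints.
    Solves for the absolute positions best satisfying both.
    """
    n = len(odometry) + 1                # number of poses
    rows, rhs = [], []
    big = 1000.0                         # strong weight on the GNSS anchors
    # odometry constraints: x[i+1] - x[i] = d
    for i, d in enumerate(odometry):
        r = np.zeros(n); r[i] = -1; r[i + 1] = 1
        rows.append(r); rhs.append(d)
    # endpoint constraints from GNSS
    for idx, val in ((0, start), (n - 1, end)):
        r = np.zeros(n); r[idx] = big
        rows.append(r); rhs.append(big * val)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

With odometry that slightly underestimates the true distance, the residual is distributed evenly over the outage, which is exactly the role the endpoint constraints play in the description above.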
302. And synchronizing the point cloud data and the image data according to the time information.
In the embodiment of the present invention, the synchronization may be performed by the synchronization module after the sensors in the street scanner are triggered. This module mainly generates the trigger signal of the panoramic camera and synchronizes the data of the laser radar, the panoramic camera and the GNSS + INS to millisecond order, i.e. the point cloud data and the image data in this step are synchronized by time. Any existing synchronization method may be selected; the specific method is not limited, provided the accuracy of the synchronization result in this step is ensured.
303. And fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information.
During the fusion, single-frame images are fused first, and multiple frames are then spliced based on the fused single-frame images. Specifically, the process from single-frame fusion to multi-frame splicing may include: first, according to the correspondence between the synchronized point cloud data and the position and attitude information, and the correspondence between the synchronized image data and the position and attitude information, determine the point cloud data and corresponding image data in the same position coordinate system and superpose them to obtain a single-frame image; then splice the single-frame images according to the positional relationships between them given by the position parameters, to obtain the fused image information containing the point cloud data and the image data. In this way the point cloud data and the image data are fused according to the position parameters, ensuring the accuracy of the fused image.
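The multi-frame splicing step can be sketched by transforming each frame's points into a global frame using its GNSS/INS pose and concatenating the results. The function below is illustrative and reduces the full attitude to a yaw angle to keep the sketch short:

```python
import numpy as np

def stitch_frames(frames):
    """Stitch per-frame point clouds into one global cloud.

    `frames` is a list of (points, position, yaw) tuples, where `points`
    is an (N, 3) array in the sensor frame, `position` a 3-vector from
    GNSS/INS and `yaw` the heading in radians.
    """
    clouds = []
    for points, position, yaw in frames:
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        clouds.append(points @ R.T + position)   # sensor -> global frame
    return np.vstack(clouds)
```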
304. And carrying out vectorization operation on the fused image information to obtain a vectorized image.
The vectorized image is a line image of the street obtained after non-target objects are removed. This step can include: performing feature extraction on the fused image information; identifying objects in the street according to the features extracted from the fused image information; classifying the objects in the street into target objects and non-target objects according to preset screening conditions, wherein the preset screening conditions comprise the target objects set by a user and their corresponding features; and deleting the non-target objects from the fused image information and extracting lines to obtain the vectorized image. Specifically, the implementation may include, but is not limited to, the following steps. First, relevant objects in the high-precision map, such as buildings, vehicles, trees, roads, and pedestrians, are identified and classified. Then, the uninteresting data are removed, and only the point cloud of the parts of interest is retained. For example, when vector information of streets is desired, information on fixed facilities such as buildings and roads is required, while information on vehicles, trees, pedestrians, and the like is not of interest and can be eliminated. On the one hand, this reduces the data volume, lowers the pressure of subsequent data processing, and improves the data processing speed; on the other hand, it also reduces the related interference data and facilitates the extraction of the required feature information.
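The screening step above — keeping only the target classes set by the user — can be sketched as a filter over classified points. A minimal sketch, assuming an upstream classifier has already labeled each point; the class names and the `(point, label)` pairing are illustrative.

```python
def filter_targets(labeled_points, target_classes=frozenset({"building", "road"})):
    """Keep only points whose semantic label matches a user-set target
    class, discarding vehicles, trees, pedestrians, etc.
    labeled_points: iterable of (point, label) pairs produced by an
    upstream object classifier; target_classes plays the role of the
    preset screening condition."""
    return [p for p, label in labeled_points if label in target_classes]
```

Dropping the non-target points before line extraction is what yields the data-volume and interference reductions the description mentions.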
In addition, the image data acquired by the panoramic camera has high resolution and rich detail information such as color and texture, so features such as lines and surfaces can be extracted easily. By combining the 3D position information of the point cloud, specific equations of the related lines and surfaces, i.e., vectorization equations, can be obtained and displayed, yielding the corresponding vectorized image, which can then be conveniently opened and edited in tools such as CAD.
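Obtaining a surface's vectorization equation from the 3D point positions can be sketched as a least-squares plane fit. This is one possible technique, not the patent's stated method; robust pipelines would typically run RANSAC before the fit, and the function name is an assumption.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point set, returned as the
    vectorization equation n . x = d with unit normal n (e.g. for a
    classified building facade or road patch)."""
    centroid = points.mean(axis=0)
    # The right singular vector of the smallest singular value is the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = normal @ centroid
    return normal, d
```

The resulting (n, d) pairs are compact analytic descriptions that export naturally to CAD, in contrast to the raw point cloud.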
Further, as an implementation of the methods shown in fig. 2 and fig. 3, an embodiment of the present invention provides a street scanning apparatus. The apparatus embodiment corresponds to the method embodiment; for ease of reading, details of the method embodiment are not repeated one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the method embodiment. As shown in detail in fig. 4, the apparatus includes:
the acquiring unit 41 may be configured to acquire point cloud data, image data, and position parameters, where the point cloud data, the image data, and the position parameters are respectively data obtained after scanning a street in a moving process;
a synchronization unit 42, configured to synchronize the position parameters acquired by the acquisition unit 41 with the point cloud data and the image data, respectively;
the fusion unit 43 may be configured to fuse the point cloud data synchronized by the synchronization unit 42 and the synchronized image data according to the corresponding position parameters to obtain fused image information, where the image information is a street image fused with the point cloud data and the image data.
Further, as shown in fig. 5, the point cloud data is collected by a laser radar;
the image data is acquired by a panoramic camera;
the position parameters are position and attitude information, and the position and attitude information is acquired through a differential global positioning system and an inertial sensor.
Further, as shown in fig. 5, the apparatus further includes:
the vectorization operation unit 44 may be configured to perform vectorization operation on the image information fused by the fusion unit 43 to obtain a vectorized image, where the vectorized image is a line image of the street obtained after non-target objects are removed.
Further, as shown in fig. 5, the vectoring operation unit 44 includes:
an extraction module 441, configured to perform feature extraction on the fused image information;
an identifying module 442, configured to identify an object in the street from the fused image information according to the features extracted by the extracting module 441;
the classifying module 443 may be configured to classify the objects in the street identified by the identifying module 442 according to a target object and a non-target object according to a preset screening condition, where the preset screening condition includes the target object and a corresponding feature set by a user;
the vectorization module 444 may be configured to delete the non-target object classified by the classification module 443 from the fused image information, and extract a line to obtain the vectorized image.
Further, as shown in fig. 5, the fusion unit 43 includes:
the superimposing module 431 may be configured to superimpose the synchronized point cloud data and the synchronized image data to obtain single-frame images corresponding to different times;
the stitching module 432 may be configured to determine a relative relationship between the single-frame images according to the position information in the position parameter, and stitch all the single-frame images according to the relative relationship to obtain the fused image information including the point cloud data and the image data.
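The superposition and stitching modules above amount to placing each single-frame fused cloud in a common coordinate system via its pose and concatenating the results. A minimal sketch, assuming the GNSS/INS poses are already expressed as frame-to-global rotations and translations; the function name and frame tuple layout are illustrative.

```python
import numpy as np

def stitch_frames(frames):
    """Merge single-frame fused clouds into one street map by applying
    each frame's pose (R, t: frame -> global) and concatenating.
    frames: list of (points_xyz, R, t), where points_xyz is (N, k)
    with the first three columns being x, y, z."""
    merged = []
    for pts, R, t in frames:
        xyz = (R @ pts[:, :3].T).T + t   # move the frame into the global system
        merged.append(np.hstack([xyz, pts[:, 3:]]))  # keep color columns, if any
    return np.vstack(merged)
```

The relative relationship between single frames is thus carried entirely by the position information in the position parameters, which is why no image-based registration is required here.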
Compared with the low mapping efficiency of the existing street mapping process, the street scanning method, device, and scanner provided by the embodiments of the invention obtain fused image information by first acquiring point cloud data, image data, and position parameters, then synchronizing the point cloud data and the image data according to time information, and finally fusing the synchronized point cloud data and image data according to the corresponding position parameters. Because the point cloud data, the image data, and the position parameters are each obtained while scanning a street in motion, street scanning during movement is realized, which avoids the need to select an arrangement position each time and the time-consuming process of moving a surveying-and-mapping laser radar, thereby improving scanning efficiency. In addition, because the point cloud data and the image data are fused during the scanning process, their integration can be realized, so that the image data obtained by street scanning is more comprehensive.
The street scanning device comprises a processor and a memory, wherein the acquisition unit, the synchronization unit, the fusion unit and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor comprises one or more kernels, and a kernel calls the corresponding program unit from the memory. By adjusting the kernel parameters, the problem of low measurement efficiency in the existing street scanning process is solved, and the measurement efficiency in urban street surveying and mapping is improved.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the street scanning method described in the above embodiments.
The present application further provides a computer program product adapted to execute, when run on a data processing device, program code initialized with the following method steps: acquiring point cloud data, image data and position parameters, wherein the point cloud data, the image data and the position parameters are respectively data obtained after scanning a street in a moving process; synchronizing the point cloud data with the image data according to the time information; and fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information, wherein the image information is a street image fused with the point cloud data and the image data.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A street scanner, comprising: a data acquisition component, a data processing component and a mobile carrier, wherein,
the data acquisition assembly is arranged at the top of the mobile carrier and is used for acquiring point cloud data, image data and position parameters;
the data acquisition assembly is connected with the data processing assembly, and the data processing assembly is used for processing the data output by the acquisition assembly.
2. The scanner of claim 1, wherein the data acquisition assembly comprises: the system comprises a laser radar, a panoramic camera, a differential global positioning system, an inertial sensor and a synchronization module;
the laser radar is used for collecting point cloud data, the panoramic camera is used for collecting image data, the differential global positioning system and the inertial sensor are used for collecting position and attitude information, the laser radar, the panoramic camera, the differential global positioning system and the inertial sensor are respectively connected with the synchronization module, and the synchronization module is used for synchronizing the point cloud data and the image data with the position and attitude information respectively.
3. A street scanning method implemented on the basis of a street scanner according to claim 1 or 2, characterized in that it comprises:
acquiring point cloud data, image data and position parameters, wherein the point cloud data, the image data and the position parameters are respectively data obtained after scanning a street in a moving process;
synchronizing the point cloud data and the image data according to time information, wherein the time information is determined when the point cloud data and the image data are acquired;
and fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information, wherein the image information is a street image fused with the point cloud data and the image data.
4. The method of claim 3,
the point cloud data is collected through a laser radar;
the image data is acquired by a panoramic camera;
the position parameters are position and attitude information, and the position and attitude information is acquired through a differential global positioning system and an inertial sensor.
5. The method of claim 4, wherein after fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information, the method comprises:
and performing vectorization operation on the fused image information to obtain a vectorized image, wherein the vectorized image is a line image of a street obtained after non-target objects are removed from the street.
6. The method according to claim 5, wherein performing vectorization operation on the fused image information to obtain a vectorized image comprises:
carrying out feature extraction on the fused image information;
identifying objects in the street according to the extracted features on the fused image information;
classifying the objects in the street according to target objects and non-target objects according to preset screening conditions, wherein the preset screening conditions comprise the target objects set by a user and corresponding characteristics;
and deleting the non-target object from the fused image information, and extracting lines to obtain the vectorized image.
7. The method of claim 6, wherein the fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information comprises:
superposing the point cloud data after synchronization and the image data after synchronization to obtain single-frame images corresponding to different moments;
and determining the relative relationship between the single-frame images according to the position information in the position parameters, and splicing all the single-frame images according to the relative relationship to obtain the fused image information containing the point cloud data and the image data.
8. A street scanning apparatus, characterized in that the apparatus comprises:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring point cloud data, image data and position parameters, and the point cloud data, the image data and the position parameters are respectively data obtained after scanning a street in the moving process;
the synchronization unit is used for synchronizing the point cloud data and the image data according to time information, and the time information is determined when the point cloud data and the image data are acquired;
and the fusion unit is used for fusing the synchronized point cloud data and the synchronized image data according to the corresponding position parameters to obtain fused image information, wherein the image information is a street image fused with the point cloud data and the image data.
9. The apparatus of claim 8,
the point cloud data is collected through a laser radar;
the image data is acquired by a panoramic camera;
the position parameters are position and attitude information, and the position and attitude information is acquired through a differential global positioning system and an inertial sensor.
10. The apparatus of claim 9, further comprising:
and the vectorization operation unit is used for carrying out vectorization operation on the fused image information to obtain a vectorized image, wherein the vectorized image is a line image of a street obtained after non-target objects are removed from the street.
11. The apparatus according to claim 10, wherein the vectoring operation unit comprises:
the extraction module is used for extracting the characteristics of the fused image information;
the recognition module is used for recognizing the object in the street according to the extracted features;
the classification module is used for classifying the objects in the street according to target objects and non-target objects according to preset screening conditions, wherein the preset screening conditions comprise the target objects and corresponding characteristics set by a user;
and the vectorization module is used for deleting the non-target object from the fused image information and extracting lines to obtain the vectorized image.
12. The apparatus of claim 11, wherein the fusion unit comprises:
the superposition module is used for superposing the synchronized point cloud data and the synchronized image data to obtain single-frame images corresponding to different moments;
and the splicing module is used for determining the relative relationship between the single-frame images according to the position information in the position parameters, and splicing all the single-frame images according to the relative relationship to obtain the fused image information containing the point cloud data and the image data.
CN201910349750.2A 2019-04-28 2019-04-28 Street scan method, device and scanner Pending CN110223223A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910349750.2A CN110223223A (en) 2019-04-28 2019-04-28 Street scan method, device and scanner

Publications (1)

Publication Number Publication Date
CN110223223A true CN110223223A (en) 2019-09-10

Family

ID=67820150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910349750.2A Pending CN110223223A (en) 2019-04-28 2019-04-28 Street scan method, device and scanner

Country Status (1)

Country Link
CN (1) CN110223223A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110531377A * 2019-10-08 2019-12-03 北京邮电大学 Data processing method, device, electronic equipment and the storage medium of radar system
CN110531377B * 2019-10-08 2022-02-25 北京邮电大学 Data processing method and device of radar system, electronic equipment and storage medium
CN110942514A * 2019-11-26 2020-03-31 三一重工股份有限公司 Method, system and device for generating point cloud data and panoramic image
CN113678136A * 2019-12-30 2021-11-19 深圳元戎启行科技有限公司 Obstacle detection method and device based on unmanned technology and computer equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140253689A1 (en) * 2013-03-08 2014-09-11 Kabushiki Kaisha Topcon Measuring Instrument
CN104268935A (en) * 2014-09-18 2015-01-07 华南理工大学 Feature-based airborne laser point cloud and image data fusion system and method
CN107121064A (en) * 2017-04-27 2017-09-01 上海华测导航技术股份有限公司 A kind of laser scanner
CN107421507A (en) * 2017-04-28 2017-12-01 上海华测导航技术股份有限公司 Streetscape data acquisition measuring method
CN109297510A (en) * 2018-09-27 2019-02-01 百度在线网络技术(北京)有限公司 Relative pose scaling method, device, equipment and medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谢波 (XIE Bo) et al.: "Design of a Vehicle-Mounted Mobile Laser Scanning and Mapping System", 《压电与声光》 (Piezoelectrics & Acoustooptics) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190910