CN110823211A - Multi-sensor map construction method, device and chip based on visual SLAM


Info

Publication number
CN110823211A
Authority
CN
China
Prior art keywords
target object, point cloud, initial, distance, cloud information
Prior art date
Legal status
Pending
Application number
CN201911040508.3A
Other languages
Chinese (zh)
Inventor
肖刚军
Current Assignee
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd
Priority to CN201911040508.3A
Publication of CN110823211A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 - Systems determining position data of a target
    • G01S17/08 - Systems determining position data of a target for measuring distance only
    • G01S17/10 - Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to a method, a device and a chip for constructing a multi-sensor map based on visual SLAM. The distance between the laser radar sensor and the target object is obtained according to the emission time of the initial detection signal and the reception time of the echo detection signal of the laser radar sensor, and the orientation of the target object is obtained according to the initial detection signal and the echo detection signal. First point cloud information is acquired according to the distance to the target object and the orientation of the target object, the first point cloud information is matched against the initial point cloud information based on that distance and orientation, and the first point cloud information and the initial point cloud information are fused according to the matching result to construct a point cloud map. Because the newly acquired point cloud information is fused with the original point cloud information, the resulting map has high precision, enabling the robot to perform autonomous obstacle avoidance and route planning.

Description

Multi-sensor map construction method, device and chip based on visual SLAM
Technical Field
The invention relates to the technical field of intelligent robots, in particular to a method, a device and a chip for constructing a multi-sensor map based on visual SLAM.
Background
Although a map constructed directly from feature points generally has a small data volume and is convenient to store, extract and use, it cannot reflect some necessary information about the environment, such as the positions of obstacles, so such a map can basically be used only for positioning and does not allow a robot to perform autonomous obstacle avoidance and path planning. In addition, most existing robots use either a visual sensor or a laser sensor alone when constructing a map, so the data obtained are limited and the map precision is low.
Disclosure of Invention
The invention provides a method, a device and a chip for constructing a multi-sensor map based on visual SLAM (simultaneous localization and mapping), and mainly aims to improve the accuracy with which a robot constructs a map, solve the robot positioning problem, and enable autonomous obstacle avoidance and route planning.
In order to achieve the above object, the present invention provides a method for constructing a multi-sensor map based on visual SLAM, including: controlling a laser radar sensor to transmit an initial detection signal towards a target object, and recording the emission time of the initial detection signal; receiving, in real time, the echo detection signal reflected from the target object and the reception time of the echo detection signal; obtaining the distance to the target object according to the emission time of the initial detection signal and the reception time of the echo detection signal; obtaining the orientation of the target object according to the initial detection signal and the echo detection signal; acquiring first point cloud information according to the distance to the target object and the orientation of the target object; and performing information matching between the first point cloud information and initial point cloud information based on the distance to the target object and the orientation of the target object, and fusing the first point cloud information and the initial point cloud information according to the matching result to construct a point cloud map.
Optionally, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: controlling a vision acquisition sensor to acquire initial image information of the current position and first image information of the target object; and extracting image features from the initial image information and the first image information, and acquiring the distance to the target object and the orientation of the target object according to the image features.
Optionally, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: controlling a time-of-flight ranging sensor to send an initial laser signal to the target object, and recording the emission time of the initial laser signal; receiving, in real time, the reflected laser signal reflected from the target object and the reception time of the reflected laser signal; obtaining the distance to the target object according to the emission time of the initial laser signal and the reception time of the reflected laser signal; and obtaining the orientation of the target object according to the initial laser signal and the reflected laser signal.
Optionally, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: controlling an ultrasonic sensor to send an initial ultrasonic signal to the target object, and recording the emission time of the initial ultrasonic signal; receiving, in real time, the reflected ultrasonic signal reflected from the target object and the reception time of the reflected ultrasonic signal; obtaining the distance to the target object according to the emission time of the initial ultrasonic signal and the reception time of the reflected ultrasonic signal; and obtaining the orientation of the target object according to the initial ultrasonic signal and the reflected ultrasonic signal.
Optionally, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: controlling a triangulation ranging laser radar to send an initial light spot signal to the target object, and recording the emission time of the initial light spot signal; receiving, in real time, the first light spot signal reflected from the target object and the reception time of the first light spot signal; obtaining the distance to the target object according to the emission time of the initial light spot signal and the reception time of the first light spot signal; and obtaining the orientation of the target object according to the initial light spot signal and the first light spot signal.
Optionally, after the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: preprocessing the acquired first point cloud information.
Optionally, after the step of performing information matching between the first point cloud information and the initial point cloud information based on the distance to the target object and the orientation of the target object and fusing the first point cloud information and the initial point cloud information according to the matching result to construct a point cloud map, the method further includes: presetting a walking route according to the specific position of the target object on the point cloud map; and controlling the robot to move to the target object according to the preset walking route.
Optionally, after the step of presetting the walking route according to the specific position of the target object on the point cloud map, the method further includes: receiving a walking obstacle signal; and adjusting the walking route.
In addition, to achieve the above object, the present invention also provides a multi-sensor map building apparatus, including: a memory, a processor, and a visual SLAM-based multi-sensor mapping program stored on the memory and executable on the processor, the visual SLAM-based multi-sensor mapping program when executed by the processor implementing the steps of the method of visual SLAM-based multi-sensor mapping as described above.
In addition, to achieve the above object, the present application also proposes a chip having a visual SLAM-based multi-sensor mapping program stored thereon, which when executed by a processor implements the steps of the method for visual SLAM-based multi-sensor mapping as described above.
The invention provides a method, a device and a chip for constructing a multi-sensor map based on visual SLAM. A laser radar sensor is controlled to transmit an initial detection signal towards a target object and the emission time of the initial detection signal is recorded; the echo detection signal reflected from the target object and its reception time are received in real time; the distance between the laser radar sensor and the target object is obtained from the emission time of the initial detection signal and the reception time of the echo detection signal; the orientation of the target object is obtained from the initial detection signal and the echo detection signal; first point cloud information is acquired according to the distance to the target object and the orientation of the target object; the first point cloud information is matched against the initial point cloud information based on that distance and orientation; and the first point cloud information and the initial point cloud information are fused according to the matching result to construct a point cloud map. Because the newly acquired point cloud information is fused with the original point cloud information, the resulting map has high precision, enabling the robot to perform autonomous obstacle avoidance and route planning.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required in the description of the embodiments will be briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of the first embodiment of the multi-sensor map construction method of the present invention;
FIG. 3 is a schematic flowchart of the second embodiment of the multi-sensor map construction method of the present invention;
FIG. 4 is a schematic flowchart of the third embodiment of the multi-sensor map construction method of the present invention;
FIG. 5 is a schematic flowchart of the fourth embodiment of the multi-sensor map construction method of the present invention;
FIG. 6 is a schematic flowchart of the fifth embodiment of the multi-sensor map construction method of the present invention;
FIG. 7 is a schematic flowchart of the sixth embodiment of the multi-sensor map construction method of the present invention;
FIG. 8 is a schematic flowchart of the seventh embodiment of the multi-sensor map construction method of the present invention;
FIG. 9 is a schematic diagram of the triangulation method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiments of the invention is as follows: the distance to a target is acquired, the point cloud information obtained from that distance is matched against the original point cloud information, and the newly acquired point cloud information is fused with the original point cloud information, so that the resulting map has high precision and the robot can perform autonomous obstacle avoidance and route planning.
Although the map constructed directly from feature points in the prior art has a small data volume and is convenient to store and extract, it cannot reflect some necessary information about the environment, such as the positions of obstacles, so it can basically be used only for positioning and cannot enable a robot to perform autonomous obstacle avoidance and path planning.
The invention provides a solution in which the distance to a target is acquired, the point cloud information obtained from that distance is matched against the original point cloud information, and the newly acquired point cloud information is fused with the original point cloud information, so that the robot can conveniently perform autonomous obstacle avoidance and route planning.
As shown in fig. 1, fig. 1 is a schematic diagram of a hardware operating environment of a terminal according to an embodiment of the present invention.
As shown in fig. 1, the terminal may include: a processor 1001 (for example a CPU), a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard) or a remote controller, and the optional user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface. The memory 1005 may be a non-volatile memory (e.g., disk storage), and may optionally also be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration of the terminal shown in fig. 1 is not intended to be limiting, and that the terminal may include more or fewer components than those shown, may combine some components, or may arrange the components differently.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a multi-sensor map building program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke the multi-sensor mapping program stored in the memory 1005 and perform the steps of the method of visual SLAM-based multi-sensor mapping.
Further, the processor 1001 may invoke a network operation control application stored in the memory 1005 and also perform the steps of the method of visual SLAM-based multi-sensor mapping.
The invention provides a multi-sensor map construction method based on visual SLAM.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of the steps of the method for constructing a multi-sensor map based on visual SLAM according to the present invention.
The embodiment provides a method for constructing a multi-sensor map based on visual SLAM, which includes: step S100, controlling a laser radar sensor to transmit an initial detection signal towards a target object, and recording the emission time of the initial detection signal; step S200, receiving, in real time, the echo detection signal reflected from the target object and the reception time of the echo detection signal; step S300, obtaining the distance to the target object according to the emission time of the initial detection signal and the reception time of the echo detection signal; step S400, obtaining the orientation of the target object according to the initial detection signal and the echo detection signal; step S500, acquiring first point cloud information according to the distance to the target object and the orientation of the target object; and step S600, performing information matching between the first point cloud information and the initial point cloud information based on the distance to the target object and the orientation of the target object, and fusing the first point cloud information and the initial point cloud information according to the matching result to construct a point cloud map.
A single point cloud only captures information about the unknown environment at one moment, so it only covers part of the environment in which the sensor is located; to obtain a map from point clouds, the core process must be carried out together with a SLAM algorithm. The SLAM core steps in the main framework of a complete SLAM and navigation system are as follows. The first part is preprocessing, in which the raw laser radar data are optimized and problematic data are removed or filtered out. Next comes matching, in which the point cloud data of the current local environment are located at their corresponding position on the map already built; the quality of this step directly affects the accuracy of the map constructed by SLAM. The current point cloud information of the laser radar is then matched and stitched against the original map; without this matching process the constructed map would be cluttered. After this part is finished, map fusion is performed, that is, the new round of measurement data is stitched into the original map.
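As an illustration of the preprocess/match/fuse pipeline described above, the following Python sketch projects a filtered 2D lidar scan into world coordinates and stitches it into an occupancy grid. It is a minimal sketch and not the patent's implementation: the function names, the occupancy-grid representation and the assumption that the robot pose has already been estimated by the matching step (for example by scan matching) are all illustrative choices.
```python
import numpy as np

def preprocess_scan(ranges, angles, max_range=8.0):
    # Preprocessing: drop invalid or out-of-range returns from the raw scan.
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    keep = np.isfinite(ranges) & (ranges > 0.05) & (ranges < max_range)
    return ranges[keep], angles[keep]

def scan_to_points(ranges, angles, pose):
    # Project polar returns into world coordinates given a robot pose (x, y, theta),
    # which a full system would obtain from the matching (scan-to-map) step.
    x, y, theta = pose
    return np.stack([x + ranges * np.cos(angles + theta),
                     y + ranges * np.sin(angles + theta)], axis=1)

def fuse_into_map(grid, points, resolution=0.05, origin=(0.0, 0.0)):
    # Map fusion: mark the grid cells hit by the new measurements as occupied.
    idx = np.floor((points - np.asarray(origin)) / resolution).astype(int)
    ok = (idx[:, 0] >= 0) & (idx[:, 1] >= 0) & \
         (idx[:, 0] < grid.shape[1]) & (idx[:, 1] < grid.shape[0])
    grid[idx[ok, 1], idx[ok, 0]] = 1
    return grid

# Example: one scan fused into an empty 10 m x 10 m grid at 5 cm resolution.
grid = np.zeros((200, 200), dtype=np.uint8)
r, a = preprocess_scan([1.0, 2.0, np.inf], [0.0, 0.5, 1.0])
grid = fuse_into_map(grid, scan_to_points(r, a, pose=(5.0, 5.0, 0.0)))
```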
A lidar sensor typically incorporates a laser and a receiver. The laser generates and emits light pulses, which strike the target, are reflected back and are finally received by the receiver. The receiver accurately measures the travel time of each light pulse from emission to return. Since a light pulse travels at the speed of light, and the receiver always receives the previous reflected pulse before the next pulse is emitted, the travel time can be converted into a distance measurement. Combining the height of the laser, the laser scanning angle, the position of the laser obtained from GPS and the laser emission direction obtained from the INS, the coordinates X, Y, Z of each ground spot can be accurately calculated. The pulse emission frequency of the laser beam can range from a few pulses per second to tens of thousands of pulses per second; for example, in a system emitting ten thousand pulses per second, the receiver records six hundred thousand points in one minute. In general, the ground spot spacing of a lidar sensor varies from 2 to 4 m.
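A minimal sketch of the pulse time-of-flight calculation described above, in Python; the constant and function names are illustrative rather than taken from the patent, and a 2D beam angle is assumed for simplicity.
```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def pulse_distance(t_emit, t_receive):
    # Round-trip travel time to one-way range: d = c * (t_receive - t_emit) / 2.
    return SPEED_OF_LIGHT * (t_receive - t_emit) / 2.0

def lidar_point(t_emit, t_receive, beam_angle_rad, sensor_xy=(0.0, 0.0)):
    # One return converted to a 2D point using the measured range and the beam angle.
    d = pulse_distance(t_emit, t_receive)
    return (sensor_xy[0] + d * math.cos(beam_angle_rad),
            sensor_xy[1] + d * math.sin(beam_angle_rad))

# Example: an echo received 66.7 ns after emission lies roughly 10 m away.
print(pulse_distance(0.0, 66.7e-9))  # ~10.0
```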
The working principle of a laser radar is very similar to that of a radar, with a laser used as the signal source. Pulsed laser light emitted by the laser strikes trees, roads, bridges and buildings on the ground and is scattered; part of the light wave is reflected back to the receiver of the laser radar, and the distance from the laser radar to the target point is obtained by calculation according to the laser ranging principle. The pulsed laser continuously scans the target object to obtain the data of all target points on it, and after imaging processing of these data, an accurate three-dimensional image can be obtained.
This embodiment fuses the acquired point cloud information with the original point cloud information, so that the robot can conveniently perform autonomous obstacle avoidance and route planning.
Further, referring to fig. 3, a second embodiment of the present invention is proposed based on the first embodiment. In this embodiment, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: step S700, controlling a vision acquisition sensor to acquire initial image information of the current position and first image information of the target object; and step S800, extracting image features from the initial image information and the first image information, and acquiring the distance to the target object and the orientation of the target object according to the image features.
This embodiment fuses the acquired point cloud information with the original point cloud information, so that the robot can conveniently perform autonomous obstacle avoidance and route planning.
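The patent does not name a particular feature type for the vision acquisition sensor; purely as an illustration, the sketch below uses ORB features from OpenCV to match the initial image against the target-object image. The matched pixel pairs are the kind of image features from which distance and orientation could subsequently be estimated (for example by triangulating between the two views); that estimation step is not shown, and the function name and parameters are assumptions.
```python
import cv2

def match_image_features(initial_image_path, first_image_path, max_matches=50):
    # Extract ORB features in both images and match them; the matched pixel
    # coordinates can feed a later distance / orientation estimate.
    img0 = cv2.imread(initial_image_path, cv2.IMREAD_GRAYSCALE)
    img1 = cv2.imread(first_image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    kp0, des0 = orb.detectAndCompute(img0, None)
    kp1, des1 = orb.detectAndCompute(img1, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)
    return [(kp0[m.queryIdx].pt, kp1[m.trainIdx].pt) for m in matches[:max_matches]]
```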
Further, referring to fig. 4, a third embodiment of the present invention is proposed based on the above embodiments. In this embodiment, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: step S900, controlling a time-of-flight ranging sensor to send an initial laser signal to the target object, and recording the emission time of the initial laser signal; step S1000, receiving, in real time, the reflected laser signal reflected from the target object and the reception time of the reflected laser signal; step S1100, obtaining the distance to the target object according to the emission time of the initial laser signal and the reception time of the reflected laser signal; and step S1200, obtaining the orientation of the target object according to the initial laser signal and the reflected laser signal.
The time-of-flight ranging sensor is also called a ToF sensor. The ToF ranging method is a two-way ranging technique that mainly measures the distance between nodes by using the time of flight of a signal travelling back and forth between two asynchronous transceivers (or reflecting surfaces). Under conditions of good signal level modulation or in a non-line-of-sight environment, the estimation result of the ranging method based on RSSI (Received Signal Strength Indication) is closer to ideal; in a line-of-sight environment, the ToF-based distance estimation method can make up for the shortcomings of the RSSI-based distance estimation method.
The ToF ranging method has two key constraints: first, the sending device and the receiving device must always be synchronized; second, the receiving device must know the transmission time of the signal. To achieve this, the ToF ranging method uses clock offsets to solve the clock synchronization problem. The Intersil ToF signal processing IC ISL29501 is a typical ToF scheme; because Intersil's patented signal processor technology uses phase-based ToF to cope with the effects of ambient light around the object under test, it can be used under all lighting conditions and achieves the low power consumption needed for miniaturized and battery-powered applications.
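Since the ISL29501 example above is phase-based, the following sketch shows the standard phase-shift ToF relation rather than any vendor API (which is not reproduced here); the modulation frequency, constant and function name are illustrative assumptions.
```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def phase_tof_distance(phase_shift_rad, modulation_freq_hz):
    # Phase-based ToF: d = c * delta_phi / (4 * pi * f_mod);
    # the result is unambiguous only within c / (2 * f_mod).
    return SPEED_OF_LIGHT * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

# Example: a pi/2 phase shift at 10 MHz modulation corresponds to about 3.75 m.
print(phase_tof_distance(math.pi / 2, 10e6))
```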
Further, referring to fig. 5, a fourth embodiment of the present invention is proposed based on the above embodiments. In this embodiment, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: step S1300, controlling an ultrasonic sensor to send an initial ultrasonic signal to the target object, and recording the emission time of the initial ultrasonic signal; step S1400, receiving, in real time, the reflected ultrasonic signal reflected from the target object and the reception time of the reflected ultrasonic signal; step S1500, obtaining the distance to the target object according to the emission time of the initial ultrasonic signal and the reception time of the reflected ultrasonic signal; and step S1600, obtaining the orientation of the target object according to the initial ultrasonic signal and the reflected ultrasonic signal.
An ultrasonic sensor mainly uses the direct-reflection detection mode: a target located in front of the sensor reflects part of the emitted sound wave back to the receiver of the sensor, so that the sensor detects the object. A through-beam (opposed) detection mode is also used: a set of opposed ultrasonic sensors comprises a transmitter and a receiver that are kept continuously listening, and a target located between the receiver and the transmitter blocks the receiver from receiving the emitted sound wave, so that the ultrasonic sensor generates a switching signal.
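For the ultrasonic case, the same round-trip calculation applies with the speed of sound instead of the speed of light. The sketch below adds a simple temperature correction (approximately 331.3 + 0.606 * T m/s), which the patent does not mention and is included only as a common refinement; names and defaults are assumptions.
```python
def ultrasonic_distance(t_emit, t_receive, air_temp_c=20.0):
    # Echo round-trip time to distance, with a temperature-corrected speed of sound.
    v_sound = 331.3 + 0.606 * air_temp_c  # m/s, approximate
    return v_sound * (t_receive - t_emit) / 2.0

# Example: a 10 ms round trip at 20 degrees C is roughly 1.72 m.
print(ultrasonic_distance(0.0, 0.010))
```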
Further, referring to fig. 6, a fifth embodiment of the present invention is proposed based on the above embodiments. In this embodiment, before the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: step S1700, controlling a triangulation ranging laser radar to send an initial light spot signal to the target object, and recording the emission time of the initial light spot signal; step S1800, receiving, in real time, the first light spot signal reflected from the target object and the reception time of the first light spot signal; step S1900, obtaining the distance to the target object according to the emission time of the initial light spot signal and the reception time of the first light spot signal; and step S2000, obtaining the orientation of the target object according to the initial light spot signal and the first light spot signal.
The laser emits a beam at a certain angle beta; the beam travels along the laser direction and is reflected by an object at distance d. The reflected light is received by an Imager, generally a long strip of CMOS (which can be regarded as a long, narrow camera), onto which the laser light reflected by the object is imaged through a pinhole. The focal length is f, the perpendicular distance of the object from the imaging plane is q, and the distance between the laser and the focal point is s. A virtual line drawn through the focal point parallel to the laser direction intersects the Imager at a position that is generally known in advance (it is fixed once beta is determined), and the point at which the laser light reflected by the object is imaged on the Imager lies at a distance X from that intersection.
From FIG. 9 it can be seen that the triangle formed by q, d and beta is similar to the triangle formed by X and f, and thus:
f/X = q/s, so q = f·s/X;
also, since sin(beta) = q/d, d = q/sin(beta);
and finally: d = f·s/(X·sin(beta)). Since f, s and beta are known in advance, the only quantity to be measured is X; thus d, the distance of the object from the laser, is obtained by measuring X.
As can also be seen from fig. 9, the shorter the distance d, the larger X becomes, and the larger d, the smaller X. When X is measured with the Imager, it suffices to calculate the centre of the received light spot to obtain X.
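The relation derived above translates directly into code. The sketch below is a minimal illustration, assuming a 1D spot position estimated as the intensity-weighted centroid of the imager pixels (as suggested by the text) and example parameter values that are not taken from the patent.
```python
import math

def spot_centroid(pixel_intensities, pixel_pitch=1.0):
    # Estimate the spot position X on the imager as the intensity-weighted centroid.
    total = sum(pixel_intensities)
    return pixel_pitch * sum(i * v for i, v in enumerate(pixel_intensities)) / total

def triangulation_distance(f, s, x, beta_rad):
    # d = f * s / (x * sin(beta)): f focal length, s laser-to-focal-point baseline,
    # x spot offset on the imager, beta the laser emission angle.
    return f * s / (x * math.sin(beta_rad))

# Example with assumed values: f = 4 mm, s = 30 mm, beta = 80 degrees, x = 0.5 mm.
print(triangulation_distance(0.004, 0.030, 0.0005, math.radians(80)))  # ~0.24 m
```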
Further, referring to fig. 7, a sixth embodiment of the present invention is proposed based on the above embodiments. In this embodiment, after the step of acquiring the first point cloud information according to the distance to the target object and the orientation of the target object, the method further includes: step S2100, preprocessing the acquired first point cloud information. Preprocessing means optimizing the raw laser radar data and removing or filtering out problematic data so as to obtain more accurate point cloud information.
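As one possible form of the preprocessing mentioned above, the sketch below removes out-of-range returns and statistical outliers from a small point cloud; the thresholds and the brute-force neighbour search are illustrative only (a real system would typically use a KD-tree or a point cloud library), and the function name is an assumption.
```python
import numpy as np

def preprocess_point_cloud(points, min_range=0.1, max_range=10.0, k=8, std_ratio=2.0):
    # Drop returns outside the valid range band, then remove statistical outliers
    # whose mean distance to their k nearest neighbours is unusually large.
    pts = np.asarray(points, dtype=float)
    r = np.linalg.norm(pts, axis=1)
    pts = pts[(r > min_range) & (r < max_range)]
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    dists.sort(axis=1)
    mean_knn = dists[:, 1:k + 1].mean(axis=1)  # skip column 0 (distance to self)
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return pts[keep]
```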
Further, referring to fig. 8, a seventh embodiment of the present invention is proposed based on the above embodiments. In this embodiment, after the step of performing information matching between the first point cloud information and the initial point cloud information based on the distance to the target object and the orientation of the target object and fusing the first point cloud information and the initial point cloud information according to the matching result to construct the point cloud map, the method further includes: step S2200, presetting a walking route according to the specific position of the target object on the point cloud map; and step S2300, controlling the robot to move to the target object according to the preset walking route.
After the first point cloud information and the original point cloud information have been fused, a map is constructed, and a walking route is preset and planned on the map according to the current position of the robot and the target position; that is, the point cloud data along the current walking route are looked up on the established map to find the corresponding positions.
This embodiment plans a global path for the robot: specifically, the position coordinates of the robot in a network map are determined in real time according to the set target location and the acquired geographic position information of the robot, and the urban road navigation path by which the robot reaches the target location is planned and determined in the network map.
After the step of presetting a walking route according to the specific position of the target object on the point cloud map, the method further comprises the following steps: receiving a walking obstacle signal; and adjusting the walking route.
To realize autonomous obstacle avoidance and path planning, the robot only needs to identify the obstacle and adjust the preset walking route; this increases the reaction efficiency of the robot, enhances its avoidance capability and improves the user experience.
Planning the global path of the robot specifically includes performing planar two-dimensional processing on the regional three-dimensional scene map produced by the three-dimensional scene reconstruction module to obtain a corresponding regional two-dimensional scene map, which presents the two-dimensional plane occupancy of environmental objects in the area where the robot is located, and then determining the position state information of the robot in that regional two-dimensional scene map. A correspondence is established between the position coordinates of the robot in the network map and in the regional two-dimensional scene map, so that local obstacle-avoidance path optimization can be performed, according to the two-dimensional plane occupancy of environmental objects in the area where the robot is located in the two-dimensional scene map, on the urban road navigation path by which the robot reaches the target location in the network map.
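A minimal sketch of the route presetting and obstacle-triggered replanning described above, using breadth-first search over a 2D occupancy grid; the patent does not specify a planner, so BFS, the grid representation and the function names are illustrative assumptions.
```python
from collections import deque

def plan_route(grid, start, goal):
    # Breadth-first search over an occupancy grid (0 = free, 1 = occupied);
    # returns a list of cells from start to goal, or None if no route exists.
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None

def replan_on_obstacle(grid, current_cell, goal, obstacle_cell):
    # On a walking-obstacle signal, mark the blocked cell and recompute the route.
    grid[obstacle_cell[0]][obstacle_cell[1]] = 1
    return plan_route(grid, current_cell, goal)
```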
The invention also provides a multi-sensor map construction device, which comprises: a memory, a processor, and a visual SLAM-based multi-sensor mapping program stored on the memory and executable on the processor, the visual SLAM-based multi-sensor mapping program when executed by the processor implementing the steps of the method of visual SLAM-based multi-sensor mapping as described above.
The present application also provides a chip having a visual SLAM-based multi-sensor mapping program stored thereon, where the visual SLAM-based multi-sensor mapping program, when executed by a processor, implements the steps of the method for visual SLAM-based multi-sensor mapping as described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be substantially or partially embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a cloud server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for multi-sensor map construction based on visual SLAM is characterized in that the method for multi-sensor map construction based on visual SLAM comprises the following steps:
controlling a laser radar sensor to transmit an initial detection signal towards a target object, and recording the emission time of the initial detection signal;
receiving, in real time, the echo detection signal reflected from the target object and the reception time of the echo detection signal;
obtaining the distance to the target object according to the emission time of the initial detection signal and the reception time of the echo detection signal;
obtaining the orientation of the target object according to the initial detection signal and the echo detection signal;
acquiring first point cloud information according to the distance to the target object and the orientation of the target object;
and performing information matching between the first point cloud information and initial point cloud information based on the distance to the target object and the orientation of the target object, and fusing the first point cloud information and the initial point cloud information according to the matching result to construct a point cloud map.
2. The method of visual SLAM-based multi-sensor mapping of claim 1, wherein the step of obtaining first point cloud information based on the distance to the target object and the orientation of the target object is preceded by the step of:
controlling a vision acquisition sensor to acquire initial image information of the current position and first image information of the target object;
and extracting image features from the initial image information and the first image information, and acquiring the distance to the target object and the orientation of the target object according to the image features.
3. The method of visual SLAM-based multi-sensor mapping of claim 1, wherein the step of obtaining first point cloud information based on the distance to the target object and the orientation of the target object is preceded by the step of:
controlling a time-of-flight ranging sensor to send an initial laser signal to the target object, and recording the emission time of the initial laser signal;
receiving, in real time, the reflected laser signal reflected from the target object and the reception time of the reflected laser signal;
obtaining the distance to the target object according to the emission time of the initial laser signal and the reception time of the reflected laser signal;
and obtaining the orientation of the target object according to the initial laser signal and the reflected laser signal.
4. The method of visual SLAM-based multi-sensor mapping of claim 1, wherein the step of obtaining first point cloud information based on the distance to the target object and the orientation of the target object is preceded by the step of:
controlling an ultrasonic sensor to send an initial ultrasonic signal to the target object, and recording the emission time of the initial ultrasonic signal;
receiving, in real time, the reflected ultrasonic signal reflected from the target object and the reception time of the reflected ultrasonic signal;
obtaining the distance to the target object according to the emission time of the initial ultrasonic signal and the reception time of the reflected ultrasonic signal;
and obtaining the orientation of the target object according to the initial ultrasonic signal and the reflected ultrasonic signal.
5. The method of visual SLAM-based multi-sensor mapping of claim 1, wherein the step of obtaining first point cloud information based on the distance to the target object and the orientation of the target object is preceded by the step of:
controlling a triangulation ranging laser radar to send an initial light spot signal to the target object, and recording the emission time of the initial light spot signal;
receiving, in real time, the first light spot signal reflected from the target object and the reception time of the first light spot signal;
obtaining the distance to the target object according to the emission time of the initial light spot signal and the reception time of the first light spot signal;
and obtaining the orientation of the target object according to the initial light spot signal and the first light spot signal.
6. The method of visual SLAM-based multi-sensor mapping of claim 1, wherein the step of obtaining first point cloud information as a function of distance from the target object and orientation of the target object is followed by:
and preprocessing the acquired first point cloud information.
7. The method of visual SLAM-based multi-sensor map construction of claim 1, wherein the step of information matching the first point cloud information and initial point cloud information based on the distance to the target object and the orientation of the target object, fusing the first point cloud information and the initial point cloud information according to the matching result to construct a point cloud map comprises:
presetting a walking route according to the specific position of the target object on the point cloud map;
and controlling the robot to move to the target object according to the preset walking route.
8. The method of visual SLAM-based multi-sensor mapping of claim 7, wherein the step of pre-setting a travel route based on the specific location of the target object on the point cloud map further comprises, after the step of:
receiving walking obstacle signals;
and adjusting the walking route.
9. A multi-sensor mapping device, the multi-sensor mapping device comprising: a memory, a processor, and a visual SLAM-based multi-sensor mapping program stored on the memory and executable on the processor, the visual SLAM-based multi-sensor mapping program when executed by the processor implementing the steps of the visual SLAM-based multi-sensor mapping method of any of claims 1 to 8.
10. A chip having stored thereon a visual SLAM-based multi-sensor mapping program which, when executed by a processor, performs the steps of the method of visual SLAM-based multi-sensor mapping as claimed in any one of claims 1 to 8.
CN201911040508.3A 2019-10-29 2019-10-29 Multi-sensor map construction method, device and chip based on visual SLAM Pending CN110823211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911040508.3A CN110823211A (en) 2019-10-29 2019-10-29 Multi-sensor map construction method, device and chip based on visual SLAM


Publications (1)

Publication Number Publication Date
CN110823211A true CN110823211A (en) 2020-02-21

Family

ID=69551202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911040508.3A Pending CN110823211A (en) 2019-10-29 2019-10-29 Multi-sensor map construction method, device and chip based on visual SLAM

Country Status (1)

Country Link
CN (1) CN110823211A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106094836A (en) * 2016-08-08 2016-11-09 成都希德电子信息技术有限公司 A kind of microrobot control system based on two-dimensional laser radar and method
CN106681330A (en) * 2017-01-25 2017-05-17 北京航空航天大学 Robot navigation method and device based on multi-sensor data fusion
CN107301654A (en) * 2017-06-12 2017-10-27 西北工业大学 A kind of positioning immediately of the high accuracy of multisensor is with building drawing method
GB2566443A (en) * 2017-09-05 2019-03-20 Nokia Technologies Oy Cross-source point cloud registration
CN107966989A (en) * 2017-12-25 2018-04-27 北京工业大学 A kind of robot autonomous navigation system
CN108710376A (en) * 2018-06-15 2018-10-26 哈尔滨工业大学 The mobile chassis of SLAM and avoidance based on Multi-sensor Fusion
CN109443369A (en) * 2018-08-20 2019-03-08 北京主线科技有限公司 The method for constructing sound state grating map using laser radar and visual sensor
CN110261870A (en) * 2019-04-15 2019-09-20 浙江工业大学 It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
刘洞波 et al.: 《移动机器人粒子滤波定位与地图创建》 (Mobile Robot Particle Filter Localization and Map Building), Xiangtan University Press, 30 September 2016 *
张国良 et al.: 《移动机器人的SLAM与VSLAM方法》 (SLAM and VSLAM Methods for Mobile Robots), Xi'an Jiaotong University Press, 31 October 2018 *
蔡来良 et al.: 《SLAM在室内测绘仪器研发中的应用综述》 (A Review of SLAM Applications in the Development of Indoor Surveying and Mapping Instruments), 《矿山测量》 (Mine Surveying) *
谢宏全 et al.: 《激光雷达测绘技术与应用》 (Lidar Surveying and Mapping Technology and Applications), Wuhan University Press, 31 December 2018 *
赵卫 et al.: 《超高时空分辨多维信息获取技术及其应用》 (Ultra-high Spatio-temporal Resolution Multi-dimensional Information Acquisition Technology and Its Applications), National Defense Industry Press, 31 December 2016 *
陈宗海: 《系统仿真技术及其应用 第19卷》 (System Simulation Technology and Its Applications, Vol. 19), University of Science and Technology of China Press, 31 August 2018 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112415522A (en) * 2020-10-27 2021-02-26 衡阳市智谷科技发展有限公司 Semantic SLAM method for real-time and perception of dynamic object information
CN113640802A (en) * 2021-07-30 2021-11-12 国网上海市电力公司 Robot space positioning method and system based on multiple fusion sensors
CN113640802B (en) * 2021-07-30 2024-05-17 国网上海市电力公司 Robot space positioning method and system based on multiple fusion sensors
CN114019953A (en) * 2021-10-08 2022-02-08 中移(杭州)信息技术有限公司 Map construction method, map construction device, map construction equipment and storage medium
CN114019953B (en) * 2021-10-08 2024-03-19 中移(杭州)信息技术有限公司 Map construction method, device, equipment and storage medium
CN114131629A (en) * 2021-12-10 2022-03-04 北京东方计量测试研究所 Ground detection robot
CN114131629B (en) * 2021-12-10 2024-02-06 北京东方计量测试研究所 Ground detection robot
WO2023109649A1 (en) * 2021-12-13 2023-06-22 深圳市道通智能航空技术股份有限公司 Data processing method, device, and system, and medium

Similar Documents

Publication Publication Date Title
JP7398506B2 (en) Methods and systems for generating and using localization reference data
CN110823211A (en) Multi-sensor map construction method, device and chip based on visual SLAM
US11670193B2 (en) Extrinsic parameter of on-board sensor
EP3540464B1 (en) Ranging method based on laser radar system, device and readable storage medium
US10240934B2 (en) Method and system for determining a position relative to a digital map
CN105579811B (en) Method for the drawing of external mix photo
CN109490825B (en) Positioning navigation method, device, equipment, system and storage medium
US11828604B2 (en) Method and apparatus for positioning vehicle, electronic device, and storage medium
JP6330471B2 (en) Wireless positioning device
CN105513132A (en) Real-time map construction system, method and device
JP2020148694A (en) Self-position estimating device, automatic drive system equipped therewith, and self-generated map sharing device
CN111796315A (en) Indoor and outdoor positioning method and device for unmanned aerial vehicle
CN114545918A (en) Robot inspection system and inspection method capable of accessing mobile terminal
US20230065727A1 (en) Vehicle and vehicle control method
CN112230211A (en) Vehicle positioning method and device, storage medium and vehicle
CN110988795A (en) Mark-free navigation AGV global initial positioning method integrating WIFI positioning
WO2019151110A1 (en) Road surface information acquisition method
TW201140123A (en) Locating electromagnetic signal sources
CN110412613B (en) Laser-based measurement method, mobile device, computer device, and storage medium
CN112985438A (en) Positioning method and device of intelligent blind guiding stick, electronic equipment and storage medium
WO2021031126A1 (en) Unmanned aerial vehicle positioning method and device, unmanned aerial vehicle, and computer-readable medium
Jaya et al. Identifying the Available Parking Area by the Assisstance of Parked-Vehicle
KR20210032694A (en) Method and apparatus for wireless localization based on image
EP3961263A1 (en) Proximity-based navigation method
WO2022256976A1 (en) Method and system for constructing dense point cloud truth value data and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200221