WO2020168742A1 - Method and device for vehicle body positioning - Google Patents
- Publication number: WO2020168742A1 (application PCT/CN2019/115903)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle body
- point cloud
- cloud data
- line lidar
- lidar
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
Definitions
- the present disclosure relates to the technical field of radar positioning, in particular to a vehicle body positioning method and device.
- An unmanned (driverless) vehicle is a type of smart car, also known as a wheeled mobile robot, that relies primarily on an in-vehicle, computer-based intelligent driving system to operate without a human driver.
- In unmanned driving technology, positioning and path planning are fundamental problems of the robotic system: without positioning, a path cannot be planned.
- Unmanned vehicle positioning mostly relies on multi-line lidar (also called three-dimensional lidar), single-line lidar (also called two-dimensional lidar), or GPS (Global Positioning System) positioning.
- Three-dimensional lidar (also called multi-line lidar) is relatively expensive and has limited applications.
- Two-dimensional lidar (also known as single-line lidar) positions poorly in environments with few surrounding features, such as relatively open deserts or suburbs.
- GPS positioning, which is based on radio signals, is insufficiently accurate in environments such as areas with tall buildings, tunnels, and indoors. Therefore, how to achieve accurate positioning in such special environments, such as open areas, indoors, or tunnels, while controlling cost has become an urgent technical problem to be solved.
- the present disclosure provides a vehicle body positioning method and device.
- a vehicle body positioning method including:
- using at least one single-line lidar to scan and obtain three-dimensional point cloud data of the surrounding environment of the vehicle body, where the directions in which the at least one single-line lidar emits laser light each have at least one included-angle relationship with the plane where the bottom of the vehicle body is located;
- the point cloud feature information is registered with a preset high-precision map to determine the position information of the vehicle body.
- the included angle relationship includes: the single-line lidar is arranged on the vehicle body at a position above the plane of the bottom of the vehicle body, and the laser pulse emitted by the single-line lidar is directed obliquely upward or obliquely downward.
- the included angle relationship includes: the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward direction, so that the laser pulse emitted by the single-line lidar is in an obliquely downward direction.
- the scanning and acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body by using at least one single-line lidar includes:
- the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions are fused to generate three-dimensional point cloud data based on the same coordinate system.
- the fusing three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions to generate three-dimensional point cloud data based on the same coordinate system includes:
- the point cloud data corresponding to the observation point is converted into the same coordinate system.
- the converting point cloud data corresponding to the observation point into the same coordinate system based on the coordinate position of the observation point includes:
- the point cloud data corresponding to the observation point in the previous scan is sequentially converted to the coordinate system corresponding to the next scan, until the point cloud data corresponding to the observation point is transferred to the coordinate system corresponding to the last scan.
- the description form of the point cloud feature information includes one or more of normal estimation, point feature histogram, fast point feature histogram, and spin image.
- the scanning and acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body using at least one single-line lidar includes:
- the point cloud data corresponding to the observation point is converted into the same coordinate system.
- the relative position relationship includes:
- At least one of the single-line lidar is arranged on the top of the vehicle body, so that the laser pulse emitted by the single-line lidar is obliquely downward;
- At least one of the single-line lidar is arranged at the bottom of the vehicle body, so that the laser pulse emitted by the single-line lidar is parallel to the direction of the bottom of the vehicle body.
- a vehicle body positioning device including:
- the lidar device includes at least one single-line lidar for scanning and acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body, and the direction in which the at least one single-line lidar emits laser light has at least one included-angle relationship with the plane where the bottom of the vehicle body is located;
- an extraction module for extracting point cloud feature information from the three-dimensional point cloud data;
- the registration module is used to register the point cloud feature information with a preset high-precision map to determine the position information of the vehicle body.
- the included angle relationship includes: the single-line lidar is arranged on the vehicle body at a position above the plane of the bottom of the vehicle body, and the laser pulse emitted by the single-line lidar is directed obliquely upward or obliquely downward.
- the included angle relationship includes: the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward direction, so that the laser pulse emitted by the single-line lidar is in an obliquely downward direction.
- the lidar device includes:
- the first acquisition module is configured to use at least one single-line lidar to perform multiple scans to acquire three-dimensional point cloud data of the surrounding environment of the vehicle body at different scanning positions;
- the processing module is used to fuse the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions to generate three-dimensional point cloud data based on the same coordinate system.
- the processing module includes:
- the determining sub-module is used to determine the observation point in the surrounding environment scanned by the single-line lidar in multiple scans, and obtain the coordinate position of the observation point;
- the conversion sub-module is used to convert the point cloud data corresponding to the observation point into the same coordinate system based on the coordinate position of the observation point.
- the conversion submodule includes:
- the determining unit is used to determine the coordinate system information of the observation point during multiple scans
- the conversion unit is configured to, based on the coordinate system information and the relative pose information, sequentially convert the point cloud data corresponding to the observation point in the previous scan to the coordinate system corresponding to the next scan, until the point cloud data corresponding to the observation point is transferred to the coordinate system corresponding to the last scan.
- the description form of the point cloud feature information includes one or more of normal estimation, point feature histogram, fast point feature histogram, and spin image.
- the included angle relationship includes: the emission direction being parallel to the plane where the bottom of the vehicle body is located.
- the lidar device includes:
- the second acquisition module is configured to acquire the relative position relationship between the single-line laser radars when at least two single-line laser radars are provided on the vehicle body;
- the conversion module is configured to convert the point cloud data corresponding to the observation point by the single-line lidar into the same coordinate system according to the relative position relationship.
- the relative position relationship includes:
- At least one of the single-line lidar is arranged on the top of the vehicle body, so that the laser pulse emitted by the single-line lidar is obliquely downward;
- At least one of the single-line lidar is arranged at the bottom of the vehicle body, so that the laser pulse emitted by the single-line lidar is parallel to the direction of the bottom of the vehicle body.
- a vehicle body positioning device including:
- a memory for storing processor-executable instructions;
- the processor is configured to execute the method described in any embodiment of the present disclosure.
- a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor, the processor can execute the method described in any embodiment of the present disclosure.
- In the present disclosure, at least one single-line lidar is disposed on the vehicle body above the plane where the bottom of the vehicle body is located, and the direction in which each single-line lidar emits laser light has at least one included-angle relationship with that plane, so that the scanning plane of the single-line lidar changes continuously as the vehicle travels, yielding three-dimensional point cloud data of the surrounding environment of the vehicle body.
- Compared with the two-dimensional point cloud data obtained by a single-line lidar in the prior art, this is more reliable when surrounding environmental features are scarce. At the same time, single-line lidars located at other parts of the vehicle body, for example oriented parallel to the plane of the vehicle bottom, can obtain long-distance two-dimensional point cloud data, which is then fused with the three-dimensional point cloud data to further enhance the accuracy and stability of positioning.
- Fig. 1 is a flow chart showing a method for positioning a vehicle body according to an exemplary embodiment.
- Fig. 2 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment.
- Fig. 3 is a flow chart showing a vehicle body positioning method according to an exemplary embodiment.
- Fig. 4 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment.
- Fig. 5 is a flow chart showing a method for positioning a vehicle body according to an exemplary embodiment.
- Fig. 6 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
- Fig. 7 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
- Fig. 8 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
- Fig. 9 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
- Fig. 10 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
- Fig. 11 is a block diagram showing a device according to an exemplary embodiment.
- Fig. 12 is a block diagram showing a device according to an exemplary embodiment.
- Lidar is an active sensor that produces data in the form of a point cloud; it mainly consists of a transmitter, a receiver, measurement control circuitry, and a power supply.
- When the lidar is working, it first emits a laser beam toward the measured target, and then measures the time for the reflected or scattered signal to return, along with its signal strength and frequency change, thereby determining the distance, movement speed, and orientation of the measured target.
- It can also measure the dynamics of targets invisible to the naked eye, such as particles in the atmosphere, determining their distance and angle, shape and size, and speed and attitude.
- lidar includes single-line lidar and multi-line lidar.
- the line beam emitted by the laser source in single-line lidar is single-line, and point cloud information is obtained on a fixed scanning plane, such as the horizontal plane;
- In a multi-line lidar, laser pulses of multiple beams are emitted: multiple laser sources are arranged and distributed in a vertical direction, the rotation of a motor sweeps the beams to form the scan, and there are multiple scanning planes.
- Fig. 1 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment. Referring to Fig. 1, the method includes the following steps.
- In step S101, at least one single-line lidar is used to scan and obtain three-dimensional point cloud data of the surrounding environment of the vehicle body, and the direction in which the at least one single-line lidar emits laser light has at least one included-angle relationship with the plane at the bottom of the vehicle body.
- The direction in which the at least one single-line lidar emits laser light has at least one included-angle relationship with the plane of the bottom of the vehicle body: the included angle may be some number of degrees below that plane, or some number of degrees above it.
- For example, when the single-line lidar is installed at the headlights, its laser emission direction may be 45 degrees obliquely upward or 45 degrees obliquely downward; when the single-line lidar is installed on the roof of the vehicle, its laser emission direction may be 45 degrees obliquely downward.
- step S102 point cloud feature information in the three-dimensional point cloud data is extracted.
- The amount of three-dimensional point cloud data obtained in step S101 is relatively large and contains redundancy and noise.
- Point cloud feature information refers to special points in the point cloud data, such as sharp edges, smooth edges, ridges, valleys, and corners; these feature points reflect the most basic geometric shape of the model. Extracting point cloud feature information involves detecting feature points in the three-dimensional point cloud data and retaining the salient shapes of the model in preparation for subsequent registration.
- step S103 the point cloud feature information is registered with a preset high-precision map to determine the position information of the vehicle body.
- The GPS of the vehicle can be used to make an approximate location judgment, and then a pre-prepared high-precision map can be registered against the point cloud feature information. After registration succeeds, the location information of the vehicle body is confirmed.
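As an illustrative sketch only (the disclosure does not specify a registration algorithm), aligning scanned points against a preset map can be done with a minimal point-to-point ICP iteration; the function names and the brute-force nearest-neighbour matching below are assumptions made for clarity, not the patented method:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP step: match each source point to its
    nearest map point, then solve the best-fit rigid transform (Kabsch)."""
    # Brute-force nearest-neighbour correspondences, for clarity only.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Best-fit rotation/translation via SVD of the cross-covariance.
    sc, mc = src.mean(0), matched.mean(0)
    H = (src - sc).T @ (matched - mc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mc - R @ sc
    return R, t

def register(scan, map_pts, iters=20):
    """Iteratively align a scan to map points; returns the accumulated
    rigid transform, i.e. the vehicle pose relative to the map."""
    R_total, t_total = np.eye(3), np.zeros(3)
    pts = scan.copy()
    for _ in range(iters):
        R, t = icp_step(pts, map_pts)
        pts = pts @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The GPS prior mentioned above matters here: ICP only converges when the initial guess is already near the true pose, which the approximate GPS fix provides.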
- In the present disclosure, at least one single-line lidar is arranged on the vehicle body above the plane of the bottom of the vehicle body, with its laser emission direction at an included angle to that plane, so that the scanning plane of the single-line lidar changes continuously as the vehicle travels and three-dimensional point cloud data of the surrounding environment is obtained. Compared with the two-dimensional point cloud data obtained by a single-line lidar in the prior art, reliability is enhanced when surrounding environmental features are scarce.
- the included angle relationship includes: the single-line lidar is arranged on the vehicle body at a position above the plane of the bottom of the vehicle body, and the laser pulse emitted by the single-line lidar is directed obliquely upward or obliquely downward.
- Arranging the single-line lidar on the vehicle body above the plane of the vehicle bottom includes placing it at the front middle of the vehicle body, for example at the headlights, or on the upper part of the front of the vehicle body, for example on the roof.
- When the single-line lidar is at the headlights, its laser emission direction may be obliquely upward or obliquely downward; when the single-line lidar is set on the roof, its laser emission direction may be obliquely downward.
- the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward direction, so that the laser pulse emitted by the single-line lidar is in an obliquely downward direction.
- Because the single-line lidar is mounted higher than other parts of the vehicle body, it can scan and obtain more comprehensive point cloud information; emitting laser pulses obliquely downward yields point cloud information for objects at heights comparable to the vehicle body, meeting the driving needs of unmanned driving.
- When the laser pulse emitted by the single-line lidar is directed obliquely downward, the downward angle can be adjusted so that the three-dimensional point cloud data is acquired within a small range, for example within 20 meters.
- Fig. 2 is a flowchart of a vehicle body positioning method according to an exemplary embodiment.
- Referring to Fig. 2, in step S101, using at least one single-line lidar to scan and acquire three-dimensional point cloud data of the surrounding environment of the vehicle body includes step S104 and step S105.
- step S104 at least one single-line lidar is used to perform multiple scans to obtain three-dimensional point cloud data of the surrounding environment of the vehicle body at different scan positions.
- The scanning frequency of the single-line lidar can be set according to time or distance parameters. For example, the single-line lidar can be set to scan once every 2 seconds; alternatively, using the driving speed, acceleration, and other information of the vehicle to obtain the distance traveled, it can be set to scan once every 1 m.
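The two triggering policies above can be sketched as follows; the class and method names are illustrative assumptions, with the parameter values taken from the examples in the text (2 seconds, 1 m):

```python
class ScanTrigger:
    """Trigger a single-line lidar scan every `period` seconds or
    every `interval` metres travelled, depending on which policy
    the caller polls. Illustrative sketch only."""

    def __init__(self, period=2.0, interval=1.0):
        self.period, self.interval = period, interval
        self.last_t, self.last_d = 0.0, 0.0

    def by_time(self, now):
        """Time-based policy: fire once `period` seconds have elapsed."""
        if now - self.last_t >= self.period:
            self.last_t = now
            return True
        return False

    def by_distance(self, odometry_m):
        """Distance-based policy: `odometry_m` is the distance travelled,
        e.g. integrated from wheel speed and acceleration."""
        if odometry_m - self.last_d >= self.interval:
            self.last_d = odometry_m
            return True
        return False
```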
- step S105 the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions are merged to generate three-dimensional point cloud data based on the same coordinate system.
- the position of the single-line lidar can be the origin of the coordinate, and the coordinate system can be defined according to the sequence of data acquisition and the direction of motor rotation.
- The X axis is defined to lie in the horizontal scanning plane with its positive direction to the right; the Y axis lies in the horizontal scanning plane perpendicular to the X axis; and the Z axis is perpendicular to the XY plane.
- the coordinate system of the three-dimensional point cloud data obtained from two adjacent scans is not the same, and the three-dimensional point cloud data needs to be fused to generate the data based on the same Three-dimensional point cloud data of the coordinate system.
- Fig. 3 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment. Referring to Fig. 3, the difference from the above embodiment is that in step S105, the vehicle body is positioned at the different scanning positions. The three-dimensional point cloud data of the surrounding environment is merged to generate three-dimensional point cloud data based on the same coordinate system, including step S107 and step S108.
- step S107 determine the observation point in the surrounding environment scanned by the single-line lidar in multiple scans, and obtain the coordinate position of the observation point;
- A precise clock can be used to obtain the time difference between the moment the single-line lidar emits a laser pulse and the moment the reflected signal is received, from which the distance S between the observation point and the single-line lidar is calculated.
- The precision encoding device built into the scanning instrument records the vertical scanning angle φ and the horizontal scanning angle θ between adjacent pulses in the encoder, and the polar-coordinate method converts the distance and angles into the coordinates of the observation point P, where the X axis lies in the horizontal scanning plane, positive to the right; the Y axis lies in the horizontal scanning plane perpendicular to the X axis; and the Z axis is perpendicular to the XY plane.
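The time-of-flight ranging and the polar-to-Cartesian conversion described above can be sketched in code. The spherical-coordinate formulas below follow the standard convention for the axes defined in the text; they are my reconstruction, since the disclosure does not write them out:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_tof(dt_seconds):
    """Distance S from the round-trip time of flight of the laser pulse."""
    return C * dt_seconds / 2.0

def polar_to_xyz(S, theta, phi):
    """Convert range S, horizontal scan angle theta, and vertical scan
    angle phi (radians) to Cartesian coordinates of observation point P:
    X right in the horizontal scanning plane, Y in that plane
    perpendicular to X, Z perpendicular to the XY plane."""
    x = S * math.cos(phi) * math.cos(theta)
    y = S * math.cos(phi) * math.sin(theta)
    z = S * math.sin(phi)
    return x, y, z
```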
- step S108 based on the coordinate position of the observation point, the information corresponding to the observation point is converted into the same coordinate system.
- Since the coordinate system used in each scan is different, the coordinate system used by the observation point in the previous scan, such as O1-X1Y1Z1, can be converted by coordinate transformation into the coordinate system used by the observation point in the next scan, such as O2-X2Y2Z2. The coordinate conversion may include: first making the two coordinate origins O1 and O2 coincide through translation, and then aligning the coordinate axes through rotation.
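The translation-then-rotation conversion between two scan frames can be sketched as below, under the simplifying assumption (mine, not the disclosure's) that the heading change between scans is a planar rotation about the Z axis:

```python
import numpy as np

def rot_z(yaw):
    """Rotation about Z for a planar heading change of the vehicle."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def to_next_frame(points, t_12, yaw_12):
    """Express points measured in frame O1-X1Y1Z1 in frame O2-X2Y2Z2,
    where t_12 is O2's origin in frame 1 and yaw_12 the heading change.
    Translation aligns the two origins; rotation aligns the axes."""
    R = rot_z(yaw_12)
    # Row-vector form: (p - t) @ R equals R.T @ (p - t) per point.
    return (points - t_12) @ R
```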
- Fig. 4 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment.
- The difference from the above-mentioned embodiment is that step S108, converting the information corresponding to the observation point into the same coordinate system based on the coordinate position of the observation point, includes step S109, step S110, and step S111.
- step S109 determine the coordinate system information of the observation point during multiple scans
- step S110 obtain relative pose information of the vehicle body between two adjacent scans in multiple scans
- step S111 based on the coordinate system information and the relative pose information, the information corresponding to the observation point in the previous scan is sequentially converted to the coordinate system corresponding to the next scan, until the information corresponding to the observation point is transferred to the last One scan corresponds to the coordinate system.
- The scanning frequency of the single-line lidar can be set according to time or distance parameters. For example, with a distance-based setting, the vehicle body travels 1 meter forward from its starting position, and during this period the single-line lidar scans once every 0.1 meters, for a total of 10 scans.
- The first scan is started and the first coordinate system information of the observation point is determined, with the position of the single-line lidar as the coordinate origin; when the vehicle body has continued to travel forward 0.2 meters, the second scan is started and the second coordinate system information of the observation point is determined, again with the position of the single-line lidar as the coordinate origin. The two scans use different coordinate systems.
- the point cloud data of the observation point is converted into the coordinate system used in the second scan.
- The wheel speed sensor can be used to measure the distance the vehicle body travels between the first scan and the second scan, and the rotation angle of the vehicle body can be obtained by an inertial measurement unit (IMU). According to the obtained distance and angle, the same translation and rotation are applied to the first coordinate system, converting the point cloud data of the observation point acquired in the first scan into the second coordinate system. By analogy, the point cloud data of the observation point acquired in the ninth scan is converted into the coordinate system used in the tenth scan.
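The scan-by-scan chaining just described can be sketched as repeated application of the per-step relative pose; the function name and the planar-yaw simplification are illustrative assumptions:

```python
import numpy as np

def chain_to_last_frame(cloud0, rel_poses):
    """Carry a point cloud measured in the first scan's frame through
    each successive scan frame to the last one. `rel_poses` is a list
    of (t, yaw) pairs: translation between consecutive scans (e.g. from
    the wheel-speed sensor) and heading change (e.g. from the IMU)."""
    pts = cloud0
    for t, yaw in rel_poses:
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        pts = (pts - t) @ R   # previous frame -> next frame
    return pts
```

With ten scans, the list holds nine relative poses, and the loop reproduces the "first to second, ..., ninth to tenth" conversion from the text.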
- the description form of the point cloud feature information includes one or more of normal estimation, point feature histogram, fast point feature histogram, and spin image.
- the normal estimation method can be used to extract point cloud features.
- The advantage of the normal estimation method is its faster calculation speed; for more complex scenes, the point feature histogram method can be used, which parameterizes the spatial differences between a query point and its neighboring points and forms a multi-dimensional histogram describing the geometric attributes of the point's k-neighborhood.
- The high-dimensional hyperspace in which the histogram lies provides a measurable information space for the feature representation; it is invariant to the 6-dimensional pose of the corresponding surface of the point cloud and is robust under different sampling densities and noise levels in the neighborhood. The fast point feature histogram, which requires less computation, and the spin image method, which is resistant to resolution changes, can also be used to extract point cloud feature information.
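Of the descriptors above, normal estimation is the simplest to illustrate. A common approach (my sketch, not necessarily the disclosure's implementation) takes the normal at each point as the smallest-eigenvalue eigenvector of its k-neighborhood covariance:

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a surface normal at each point as the eigenvector of
    the k-neighbourhood covariance with the smallest eigenvalue
    (the standard PCA approach to normal estimation)."""
    normals = np.empty_like(points)
    # Brute-force pairwise distances, for clarity on small clouds.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(len(points)):
        nbrs = points[np.argsort(d2[i])[:k]]
        cov = np.cov((nbrs - nbrs.mean(0)).T)
        w, v = np.linalg.eigh(cov)    # eigenvalues in ascending order
        normals[i] = v[:, 0]          # smallest-variance direction
    return normals
```

Point feature histograms and FPFH then build on these normals, binning angular relations between each point's normal and those of its neighbours.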
- Fig. 5 is a flowchart of a vehicle body positioning method according to an exemplary embodiment. Referring to Fig. 5, the difference from the above-mentioned embodiment is that in step S101, using at least one single-line lidar to scan and acquire three-dimensional point cloud data of the surrounding environment of the vehicle body includes step S112 and step S113.
- step S112 when at least two single-line lidars are provided on the vehicle body, the relative position relationship between the single-line lidars is acquired;
- step S113 the point cloud data corresponding to the observation point is converted into the same coordinate system according to the relative position relationship.
- Multiple lidars may be installed on the vehicle body, for example a single-line lidar arranged obliquely downward on the roof, a single-line lidar arranged at the bottom of the vehicle parallel to the plane of the vehicle bottom, or a single-line lidar oriented obliquely upward at the headlights, where "obliquely downward" and "obliquely upward" refer to the direction of the laser pulses emitted by the single-line lidar.
- When the multiple single-line lidars are installed, their relative positional relationship is already determined.
- The multiple lidars are used to scan and acquire three-dimensional point cloud data of the surrounding environment of the vehicle body, and the point cloud data corresponding to the observation points of each single-line lidar is converted into a unified coordinate system by means of translation and rotation.
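Since the mounting poses are fixed once installed, the fusion into one body-frame cloud is a straightforward application of each sensor's extrinsic transform; the function below is an illustrative sketch with assumed names:

```python
import numpy as np

def fuse_clouds(clouds, extrinsics):
    """Merge point clouds from several single-line lidars into one
    vehicle-body frame. `extrinsics` gives each sensor's fixed mounting
    pose as (R, t): rotation and translation from the sensor frame to
    the body frame, known once the lidars are installed."""
    merged = []
    for pts, (R, t) in zip(clouds, extrinsics):
        merged.append(pts @ R.T + t)   # p_body = R @ p_sensor + t
    return np.vstack(merged)
```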
- the relative position relationship includes:
- At least one of the single-line lidar is arranged on the top of the vehicle body, so that the laser pulse emitted by the single-line lidar is obliquely downward;
- At least one of the single-line lidar is arranged at the bottom of the vehicle body, so that the laser pulse emitted by the single-line lidar is parallel to the direction of the bottom of the vehicle body.
- For the single-line lidar arranged at the bottom of the vehicle body, the scanned plane is horizontal; if the vehicle body travels smoothly, the scanning plane does not change, so the acquired point cloud data is two-dimensional.
- This single-line lidar can scan relatively distant observation points, for example up to 80 meters away, and is used in conjunction with the obliquely downward single-line lidar, which scans close observation points, for example within 20 meters; the point cloud data obtained by the two kinds of lidar is fused to achieve more stable positioning.
- Fig. 6 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment. Referring to Fig. 6, the device includes a lidar device 11, an extraction module 12, and a registration module 13.
- The lidar device 11 includes at least one single-line lidar for scanning and acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body, and the direction in which the at least one single-line lidar emits laser light has at least one included-angle relationship with the plane of the bottom of the vehicle body;
- the extraction module 12 is used to extract point cloud feature information in the three-dimensional point cloud data
- the registration module 13 is configured to register the point cloud feature information with a preset high-precision map to determine the position information of the vehicle body.
- the included angle relationship includes: the single-line lidar is arranged on the vehicle body at a position above the plane of the bottom of the vehicle body, and the laser pulse emitted by the single-line lidar is directed obliquely upward or obliquely downward.
- the included angle relationship includes: the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward direction, so that the laser pulse emitted by the single-line lidar is in an obliquely downward direction.
- Fig. 7 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
- the lidar device 11 includes:
- the first acquisition module 14 is configured to use at least one single-line lidar to perform multiple scans to acquire three-dimensional point cloud data of the surrounding environment of the vehicle body at different scanning positions;
- the processing module 15 is used for fusing the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions to generate three-dimensional point cloud data based on the same coordinate system.
- Fig. 8 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
- the processing module 15 includes:
- the determining sub-module 16 is used to determine the observation point in the surrounding environment scanned by the single-line lidar in multiple scans, and obtain the coordinate position of the observation point;
- the conversion sub-module 17 is configured to convert the point cloud data corresponding to the observation point into the same coordinate system based on the coordinate position of the observation point.
- Fig. 9 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
- the conversion sub-module 17 includes:
- the determining unit 18 is used to determine the coordinate system information of the observation point during multiple scans
- the acquiring unit 19 is used to acquire the relative pose information of the vehicle body between two adjacent scans in multiple scans;
- the conversion unit 20 is configured to, based on the coordinate system information and the relative pose information, sequentially convert the point cloud data corresponding to the observation point in the previous scan to the coordinate system corresponding to the next scan, until the point cloud data corresponding to the observation point is transferred to the coordinate system corresponding to the last scan.
- the description form of the point cloud feature information includes one or more of normal estimation, point feature histogram, fast point feature histogram, and spin image.
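Of these description forms, normal estimation is the simplest to sketch: the normal at a point is the eigenvector of its neighbourhood covariance with the smallest eigenvalue. A minimal NumPy sketch under that standard definition (the brute-force neighbour search and the value of k are illustrative choices, not specified by the patent):

```python
import numpy as np

def estimate_normal(points, idx, k=8):
    """Estimate the surface normal at points[idx] from its k nearest
    neighbours: the normal is the eigenvector of the neighbourhood
    covariance matrix with the smallest eigenvalue."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(d)[:k]]          # brute-force k-NN (includes the point)
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return eigvecs[:, 0]                      # direction of least variance

# Points sampled on the z = 0 plane: the estimated normal is ±z.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(0, 1, 20),
                         rng.uniform(0, 1, 20),
                         np.zeros(20)])
print(np.round(np.abs(estimate_normal(plane, 0)), 3))  # → [0. 0. 1.]
```

The richer descriptors named above (point feature histograms, fast point feature histograms, spin images) are all built on top of such per-point normals.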
- Fig. 10 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
- the lidar device 11 includes:
- the second acquisition module 21 is configured to acquire the relative position relationship between the single-line lidars when at least two single-line lidars are provided on the vehicle body;
- the conversion module 22 is configured to convert the point cloud data corresponding to the observation points acquired by the single-line lidars into the same coordinate system according to the relative position relationship.
- the relative position relationship includes:
- At least one of the single-line lidars is arranged on the top of the vehicle body, so that the laser pulses emitted by that single-line lidar are directed obliquely downward;
- At least one of the single-line lidars is arranged at the bottom of the vehicle body, so that the laser pulses emitted by that single-line lidar are parallel to the plane of the vehicle body bottom.
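Given such a mounting, points from the tilted top lidar can be expressed in the bottom lidar's frame through a fixed extrinsic transform derived from the known relative mounting. A minimal sketch, assuming the top unit is described by a known pitch angle and mounting height (both values below are illustrative, not from the patent):

```python
import numpy as np

def extrinsic(pitch_rad, height_m):
    """Rigid transform taking points in the tilted top-lidar frame to the
    bottom-lidar frame: rotate by the mounting pitch, then translate up.
    Positive pitch tilts the forward axis downward in this convention."""
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    R = np.array([[c, 0, s],
                  [0, 1, 0],
                  [-s, 0, c]])               # pitch about the y axis
    t = np.array([0.0, 0.0, height_m])       # top lidar sits height_m higher
    return R, t

def to_common_frame(points_top, pitch_rad, height_m):
    """Express top-lidar points in the bottom (common) frame."""
    R, t = extrinsic(pitch_rad, height_m)
    return points_top @ R.T + t

# Illustrative mounting: 30 degree downward pitch, 1.5 m above the bottom lidar.
pt = np.array([[2.0, 0.0, 0.0]])  # a return 2 m straight ahead of the top lidar
print(np.round(to_common_frame(pt, np.radians(30), 1.5), 3))
```

With a 30° downward pitch and a 1.5 m mount, a return 2 m ahead of the top lidar lands at height 1.5 − 2·sin 30° = 0.5 m in the common frame, which the example reproduces.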
- Fig. 11 is a block diagram showing a vehicle body positioning device 800 according to an exemplary embodiment.
- the device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
- the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the device 800. Examples of these data include instructions for any application or method operating on the device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
- the power supply component 806 provides power to various components of the device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the device 800.
- the multimedia component 808 includes a screen that provides an output interface between the device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC), and when the device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive external audio signals.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the device 800 with various aspects of status assessment.
- the sensor component 814 can detect the on/off status of the device 800 and the relative positioning of components, for example, the display and the keypad of the device 800.
- the sensor component 814 can also detect a position change of the device 800 or a component of the device 800, the presence or absence of contact between the user and the device 800, the orientation or acceleration/deceleration of the device 800, and temperature changes of the device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices.
- the device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components to perform the above methods.
- non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, which can be executed by the processor 820 of the device 800 to complete the foregoing method.
- the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
- Fig. 12 is a block diagram showing a vehicle body positioning device 1900 according to an exemplary embodiment.
- the device 1900 may be provided as a server. Referring to Fig. 12,
- the device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
- the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958.
- the device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- non-transitory computer-readable storage medium including instructions, such as the memory 1932 including instructions, which may be executed by the processing component 1922 of the device 1900 to complete the foregoing method.
- the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Abstract
Description
Claims (20)
- A vehicle body positioning method, comprising: using at least one single-line lidar to scan and acquire three-dimensional point cloud data of the surrounding environment of a vehicle body, wherein the direction in which each of the at least one single-line lidar emits laser light forms at least one angle relationship with the plane in which the bottom of the vehicle body lies; extracting point cloud feature information from the three-dimensional point cloud data; and registering the point cloud feature information with a preset high-precision map to determine position information of the vehicle body.
- The method according to claim 1, wherein the angle relationship comprises: the single-line lidar is arranged on the vehicle body at a position above the plane of the vehicle body bottom, and the laser pulses emitted by the single-line lidar are directed obliquely upward or obliquely downward.
- The method according to claim 1, wherein the angle relationship comprises: the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward orientation, so that the laser pulses emitted by the single-line lidar are directed obliquely downward.
- The method according to claim 1, wherein using at least one single-line lidar to scan and acquire the three-dimensional point cloud data of the surrounding environment of the vehicle body comprises: using at least one single-line lidar to perform multiple scans to acquire three-dimensional point cloud data of the surrounding environment of the vehicle body at different scanning positions; and fusing the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions to generate three-dimensional point cloud data based on the same coordinate system.
- The method according to claim 4, wherein fusing the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions to generate three-dimensional point cloud data based on the same coordinate system comprises: determining observation points in the surrounding environment scanned by the single-line lidar over the multiple scans, and acquiring coordinate positions of the observation points; and converting the point cloud data corresponding to the observation points into the same coordinate system based on the coordinate positions of the observation points.
- The method according to claim 5, wherein converting the point cloud data corresponding to the observation points into the same coordinate system based on the coordinate positions of the observation points comprises: determining coordinate system information of the observation point during the multiple scans; acquiring relative pose information of the vehicle body between two adjacent scans in the multiple scans; and based on the coordinate system information and the relative pose information, sequentially converting the point cloud data corresponding to the observation point in the previous scan into the coordinate system corresponding to the next scan, until the point cloud data corresponding to the observation point is transferred into the coordinate system corresponding to the last scan.
- The method according to claim 1, wherein the description form of the point cloud feature information comprises one or more of normal estimation, point feature histograms, fast point feature histograms, and spin images.
- The method according to claim 1, wherein using at least one single-line lidar to scan and acquire the three-dimensional point cloud data of the surrounding environment of the vehicle body comprises: in a case where at least two single-line lidars are provided on the vehicle body, acquiring a relative position relationship between the single-line lidars; and converting the point cloud data corresponding to the observation points into the same coordinate system according to the relative position relationship.
- The method according to claim 8, wherein the relative position relationship comprises: at least one of the single-line lidars is arranged on the top of the vehicle body, so that the laser pulses emitted by that single-line lidar are directed obliquely downward; and at least one of the single-line lidars is arranged at the bottom of the vehicle body, so that the laser pulses emitted by that single-line lidar are parallel to the plane of the vehicle body bottom.
- A vehicle body positioning device, comprising: a lidar device comprising at least one single-line lidar, configured to scan and acquire three-dimensional point cloud data of the surrounding environment of a vehicle body, wherein the direction in which each of the at least one single-line lidar emits laser light forms at least one angle relationship with the plane in which the bottom of the vehicle body lies; an extraction module configured to extract point cloud feature information from the three-dimensional point cloud data; and a registration module configured to register the point cloud feature information with a preset high-precision map to determine position information of the vehicle body.
- The device according to claim 10, wherein the angle relationship comprises: the single-line lidar is arranged on the vehicle body at a position above the plane of the vehicle body bottom, and the laser pulses emitted by the single-line lidar are directed obliquely upward or obliquely downward.
- The device according to claim 10, wherein the angle relationship comprises: the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward orientation, so that the laser pulses emitted by the single-line lidar are directed obliquely downward.
- The device according to claim 10, wherein the lidar device comprises: a first acquisition module configured to use at least one single-line lidar to perform multiple scans to acquire three-dimensional point cloud data of the surrounding environment of the vehicle body at different scanning positions; and a processing module configured to fuse the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions to generate three-dimensional point cloud data based on the same coordinate system.
- The device according to claim 13, wherein the processing module comprises: a determining sub-module configured to determine observation points in the surrounding environment scanned by the single-line lidar over the multiple scans, and acquire coordinate positions of the observation points; and a conversion sub-module configured to convert the point cloud data corresponding to the observation points into the same coordinate system based on the coordinate positions of the observation points.
- The device according to claim 14, wherein the conversion sub-module comprises: a determining unit configured to determine coordinate system information of the observation point during the multiple scans; an acquiring unit configured to acquire relative pose information of the vehicle body between two adjacent scans in the multiple scans; and a conversion unit configured to, based on the coordinate system information and the relative pose information, sequentially convert the point cloud data corresponding to the observation point in the previous scan into the coordinate system corresponding to the next scan, until the point cloud data corresponding to the observation point is transferred into the coordinate system corresponding to the last scan.
- The device according to claim 10, wherein the description form of the point cloud feature information comprises one or more of normal estimation, point feature histograms, fast point feature histograms, and spin images.
- The device according to claim 10, wherein the lidar device comprises: a second acquisition module configured to, in a case where at least two single-line lidars are provided on the vehicle body, acquire a relative position relationship between the single-line lidars; and a conversion module configured to convert the point cloud data corresponding to the observation points into the same coordinate system according to the relative position relationship.
- The device according to claim 17, wherein the relative position relationship comprises: at least one of the single-line lidars is arranged on the top of the vehicle body, so that the laser pulses emitted by that single-line lidar are directed obliquely downward; and at least one of the single-line lidars is arranged at the bottom of the vehicle body, so that the laser pulses emitted by that single-line lidar are parallel to the plane of the vehicle body bottom.
- A vehicle body positioning device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the vehicle body positioning method according to claims 1 to 8.
- A non-transitory computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor, the processor is enabled to execute the vehicle body positioning method according to claims 1 to 8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910127565.9A CN109725330A (en) | 2019-02-20 | 2019-02-20 | A kind of Location vehicle method and device |
CN201910127565.9 | 2019-02-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020168742A1 true WO2020168742A1 (en) | 2020-08-27 |
Family
ID=66301526
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/115903 WO2020168742A1 (en) | 2019-02-20 | 2019-11-06 | Method and device for vehicle body positioning |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109725330A (en) |
WO (1) | WO2020168742A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112731434A (en) * | 2020-12-15 | 2021-04-30 | 武汉万集信息技术有限公司 | Positioning method and system based on laser radar and marker |
CN113534097A (en) * | 2021-08-27 | 2021-10-22 | 北京工业大学 | Optimization method suitable for rotating-axis laser radar |
CN114506212A (en) * | 2022-02-15 | 2022-05-17 | 国能神东煤炭集团有限责任公司 | Space positioning auxiliary driving system and method for shuttle car |
CN115937069A (en) * | 2022-03-24 | 2023-04-07 | 北京小米移动软件有限公司 | Part detection method, device, electronic device and storage medium |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109725330A (en) * | 2019-02-20 | 2019-05-07 | 苏州风图智能科技有限公司 | A kind of Location vehicle method and device |
CN110276834B (en) * | 2019-06-25 | 2023-04-11 | 达闼科技(北京)有限公司 | Construction method of laser point cloud map, terminal and readable storage medium |
CN110376595B (en) * | 2019-06-28 | 2021-09-24 | 湖北大学 | Vehicle measuring system of automatic loading machine |
CN110789533B (en) * | 2019-09-25 | 2021-08-13 | 华为技术有限公司 | Data presentation method and terminal equipment |
WO2021056339A1 (en) * | 2019-09-26 | 2021-04-01 | 深圳市大疆创新科技有限公司 | Positioning method and system, and movable platform |
CN110677491B (en) * | 2019-10-10 | 2021-10-19 | 郑州迈拓信息技术有限公司 | Method for estimating position of vehicle |
CN110827542A (en) * | 2019-11-11 | 2020-02-21 | 江苏中路工程技术研究院有限公司 | Highway safety vehicle distance early warning system |
CN110988848B (en) * | 2019-12-23 | 2022-04-26 | 潍柴动力股份有限公司 | Vehicle-mounted laser radar relative pose monitoring method and device |
CN111257903B (en) * | 2020-01-09 | 2022-08-09 | 广州微牌智能科技有限公司 | Vehicle positioning method and device, computer equipment and storage medium |
CN111273270A (en) * | 2020-03-17 | 2020-06-12 | 北京宸控科技有限公司 | Positioning and orienting method of heading machine |
CN111693043B (en) * | 2020-06-18 | 2023-04-07 | 北京四维图新科技股份有限公司 | Map data processing method and apparatus |
CN112082484A (en) * | 2020-09-11 | 2020-12-15 | 武汉理工大学 | Device and method for detecting engineering vehicle body deviation based on single line laser radar |
CN114252883B (en) * | 2020-09-24 | 2022-08-23 | 北京万集科技股份有限公司 | Target detection method, apparatus, computer device and medium |
CN112904841B (en) * | 2021-01-12 | 2023-11-03 | 北京布科思科技有限公司 | Non-horizontal single-line positioning obstacle avoidance method, device, equipment and storage medium |
CN115586511B (en) * | 2022-11-25 | 2023-03-03 | 唐山百川工业服务有限公司 | Laser radar two-dimensional positioning method based on array stand column |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4953502B2 (en) * | 2000-10-02 | 2012-06-13 | 日本信号株式会社 | Two-dimensional scanning optical radar sensor |
CN104808192A (en) * | 2015-04-15 | 2015-07-29 | 中国矿业大学 | Three-dimensional laser scanning swing device and coordinate conversion method thereof |
CN106323269A (en) * | 2015-12-10 | 2017-01-11 | 上海思岚科技有限公司 | Self-positioning and navigation device, positioning and navigation method and self-positioning and navigation system |
CN106371105A (en) * | 2016-08-16 | 2017-02-01 | 长春理工大学 | Vehicle targets recognizing method, apparatus and vehicle using single-line laser radar |
KR20170116305A (en) * | 2016-04-08 | 2017-10-19 | 한국전자통신연구원 | Objects recognition device for co-pilot vehicle based on tracking information from different multiple sensors |
CN107957583A (en) * | 2017-11-29 | 2018-04-24 | 江苏若博机器人科技有限公司 | A kind of round-the-clock quick unmanned vehicle detection obstacle avoidance system of Multi-sensor Fusion |
CN108072880A (en) * | 2018-01-17 | 2018-05-25 | 上海禾赛光电科技有限公司 | The method of adjustment of laser radar field of view center direction, medium, laser radar system |
CN109725330A (en) * | 2019-02-20 | 2019-05-07 | 苏州风图智能科技有限公司 | A kind of Location vehicle method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108401461B (en) * | 2017-12-29 | 2021-06-04 | 达闼机器人有限公司 | Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product |
2019
- 2019-02-20 CN CN201910127565.9A patent/CN109725330A/en active Pending
- 2019-11-06 WO PCT/CN2019/115903 patent/WO2020168742A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
WEI, CHONGYANG: "3D Feature Point Clouds-based Research on Mapping and Localization in Urban Environments", ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, no. 1, 15 January 2019 (2019-01-15), ISSN: 1674-022X, DOI: 20191231103617Y * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112731434A (en) * | 2020-12-15 | 2021-04-30 | 武汉万集信息技术有限公司 | Positioning method and system based on laser radar and marker |
CN113534097A (en) * | 2021-08-27 | 2021-10-22 | 北京工业大学 | Optimization method suitable for rotating-axis laser radar |
CN113534097B (en) * | 2021-08-27 | 2023-11-24 | 北京工业大学 | Optimization method suitable for rotation axis laser radar |
CN114506212A (en) * | 2022-02-15 | 2022-05-17 | 国能神东煤炭集团有限责任公司 | Space positioning auxiliary driving system and method for shuttle car |
CN114506212B (en) * | 2022-02-15 | 2023-09-22 | 国能神东煤炭集团有限责任公司 | Space positioning auxiliary driving system and method for shuttle car |
CN115937069A (en) * | 2022-03-24 | 2023-04-07 | 北京小米移动软件有限公司 | Part detection method, device, electronic device and storage medium |
CN115937069B (en) * | 2022-03-24 | 2023-09-19 | 北京小米移动软件有限公司 | Part detection method, device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109725330A (en) | 2019-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020168742A1 (en) | Method and device for vehicle body positioning | |
US11100260B2 (en) | Method and apparatus for interacting with a tag in a wireless communication area | |
WO2019179417A1 (en) | Data fusion method and related device | |
EP3469306B1 (en) | Geometric matching in visual navigation systems | |
US9646384B2 (en) | 3D feature descriptors with camera pose information | |
CN105940429B (en) | For determining the method and system of the estimation of equipment moving | |
CN110967011A (en) | Positioning method, device, equipment and storage medium | |
CN109668551B (en) | Robot positioning method, device and computer readable storage medium | |
WO2022078467A1 (en) | Automatic robot recharging method and apparatus, and robot and storage medium | |
WO2022036980A1 (en) | Pose determination method and apparatus, electronic device, storage medium, and program | |
KR20180044279A (en) | System and method for depth map sampling | |
US20210158560A1 (en) | Method and device for obtaining localization information and storage medium | |
US11893317B2 (en) | Method and apparatus for associating digital content with wireless transmission nodes in a wireless communication area | |
WO2019019819A1 (en) | Mobile electronic device and method for processing tasks in task region | |
CN111272172A (en) | Unmanned aerial vehicle indoor navigation method, device, equipment and storage medium | |
WO2022110653A1 (en) | Pose determination method and apparatus, electronic device and computer-readable storage medium | |
CN109696173A (en) | A kind of car body air navigation aid and device | |
US20220237533A1 (en) | Work analyzing system, work analyzing apparatus, and work analyzing program | |
US20120002044A1 (en) | Method and System for Implementing a Three-Dimension Positioning | |
US20200164508A1 (en) | System and Method for Probabilistic Multi-Robot Positioning | |
CN115407355B (en) | Library position map verification method and device and terminal equipment | |
CN116385528A (en) | Method and device for generating annotation information, electronic equipment, vehicle and storage medium | |
US20130155211A1 (en) | Interactive system and interactive device thereof | |
CN113433566B (en) | Map construction system and map construction method | |
CN113065392A (en) | Robot tracking method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19916418 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19916418 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/03/2022) |
|