WO2020168742A1 - Vehicle body positioning method and device - Google Patents

Vehicle body positioning method and device

Publication number: WO2020168742A1
Authority: WIPO (PCT)
Prior art keywords: vehicle body, point cloud, point cloud data, single-line lidar, lidar
Application number: PCT/CN2019/115903
Other languages: English (en), Chinese (zh)
Inventor: 杜新新
Original Assignee: 苏州风图智能科技有限公司
Application filed by 苏州风图智能科技有限公司
Publication of WO2020168742A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G01S 17/93: Lidar systems specially adapted for specific applications for anti-collision purposes

Definitions

  • the present disclosure relates to the technical field of radar positioning, in particular to a vehicle body positioning method and device.
  • Unmanned driving is a form of smart car, also known as a wheeled mobile robot, which relies mainly on an in-vehicle, computer-based intelligent driving system to drive without a human operator.
  • in unmanned driving technology, positioning and path planning are core problems of the robotic system: without positioning, no path can be planned.
  • unmanned vehicle positioning mostly relies on multi-line lidar (also called three-dimensional lidar), single-line lidar (also called two-dimensional lidar), or GPS (Global Positioning System).
  • Three-dimensional lidar (also called multi-line lidar) is relatively expensive, which limits its applications.
  • Two-dimensional lidar (also known as single-line lidar) positions poorly in environments with few surrounding features, such as relatively open deserts or suburbs.
  • GPS positioning, which is based on radio technology, is not accurate enough in environments such as areas with tall buildings, tunnels, or indoors. Therefore, how to achieve accurate positioning in such special environments (open areas, indoors, tunnels, and the like) while saving costs has become an urgent technical problem to be solved.
  • the present disclosure provides a vehicle body positioning method and device.
  • a vehicle body positioning method including:
  • at least one single-line lidar is used to scan and obtain three-dimensional point cloud data of the surrounding environment of the vehicle body, the direction in which each single-line lidar emits laser light forming at least one included angle with the plane where the bottom of the vehicle body is located;
  • the point cloud feature information is registered with a preset high-precision map to determine the position information of the vehicle body.
  • the included angle relationship includes: the single-line lidar is arranged on the vehicle body above the plane of the bottom of the vehicle body, and the laser pulse emitted by the single-line lidar is directed obliquely upward or obliquely downward.
  • the included angle relationship includes: the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward direction, so that the laser pulse emitted by the single-line lidar is in an obliquely downward direction.
  • the scanning and acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body by using at least one single-line lidar includes:
  • the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions are fused to generate three-dimensional point cloud data based on the same coordinate system.
  • the fusing three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions to generate three-dimensional point cloud data based on the same coordinate system includes:
  • the point cloud data corresponding to the observation point is converted into the same coordinate system.
  • the converting point cloud data corresponding to the observation point into the same coordinate system based on the coordinate position of the observation point includes:
  • the point cloud data corresponding to the observation point in the previous scan is sequentially converted to the coordinate system corresponding to the next scan, until the point cloud data corresponding to the observation point has been converted into the coordinate system corresponding to the last scan.
  • the description form of the point cloud feature information includes one or more of normal estimation, point feature histogram, fast point feature histogram, and spin image.
  • the scanning and acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body using at least one single-line lidar includes:
  • the point cloud data corresponding to the observation point is converted into the same coordinate system.
  • the relative position relationship includes:
  • At least one single-line lidar is arranged on the top of the vehicle body, so that the laser pulses it emits are directed obliquely downward;
  • At least one single-line lidar is arranged at the bottom of the vehicle body, so that the laser pulses it emits are parallel to the plane of the bottom of the vehicle body.
  • a vehicle body positioning device including:
  • the lidar device includes at least one single-line lidar for scanning and acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body, the direction in which the at least one single-line lidar emits laser light forming at least one included angle with the plane where the bottom of the vehicle body is located;
  • An extraction module for extracting point cloud feature information in the three-dimensional point cloud data
  • the registration module is used to register the point cloud feature information with a preset high-precision map to determine the position information of the vehicle body.
  • the included angle relationship includes: the single-line lidar is arranged on the vehicle body above the plane of the bottom of the vehicle body, and the laser pulse emitted by the single-line lidar is directed obliquely upward or obliquely downward.
  • the included angle relationship includes: the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward direction, so that the laser pulse emitted by the single-line lidar is in an obliquely downward direction.
  • the lidar device includes:
  • the first acquisition module is configured to use at least one single-line lidar to perform multiple scans to acquire three-dimensional point cloud data of the surrounding environment of the vehicle body at different scanning positions;
  • the processing module is used to fuse the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions to generate three-dimensional point cloud data based on the same coordinate system.
  • the processing module includes:
  • the determining sub-module is used to determine the observation point in the surrounding environment scanned by the single-line lidar in multiple scans, and obtain the coordinate position of the observation point;
  • the conversion sub-module is used to convert the point cloud data corresponding to the observation point into the same coordinate system based on the coordinate position of the observation point.
  • the conversion submodule includes:
  • the determining unit is used to determine the coordinate system information of the observation point during multiple scans
  • the conversion unit based on the coordinate system information and the relative pose information, sequentially converts the point cloud data corresponding to the observation point in the previous scan to the coordinate system corresponding to the next scan, until the point cloud data corresponding to the observation point is transferred To the coordinate system corresponding to the last scan.
  • the description form of the point cloud feature information includes one or more of normal estimation, point feature histogram, fast point feature histogram, and spin image.
  • the included angle relationship includes the laser emission direction being parallel to the plane where the bottom of the vehicle body is located.
  • the lidar device includes:
  • the second acquisition module is configured to acquire the relative position relationship between the single-line laser radars when at least two single-line laser radars are provided on the vehicle body;
  • the conversion module is configured to convert the point cloud data corresponding to the observation point by the single-line lidar into the same coordinate system according to the relative position relationship.
  • the relative position relationship includes:
  • At least one single-line lidar is arranged on the top of the vehicle body, so that the laser pulses it emits are directed obliquely downward;
  • At least one single-line lidar is arranged at the bottom of the vehicle body, so that the laser pulses it emits are parallel to the plane of the bottom of the vehicle body.
  • a vehicle body positioning device including:
  • a memory for storing processor executable instructions
  • the processor is configured to execute the method described in any embodiment of the present disclosure.
  • a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor, the processor can execute the method.
  • the present disclosure arranges at least one single-line lidar on the vehicle body above the plane of the bottom of the vehicle body, with the direction in which the lidar emits laser light forming at least one included angle with that plane, so that the scanning plane of the single-line lidar changes continuously as the vehicle travels, yielding three-dimensional point cloud data of the surrounding environment of the vehicle body.
  • compared with the two-dimensional point cloud data obtained by a single-line lidar in the prior art, this is more reliable when surrounding features are scarce; at the same time, single-line lidars located elsewhere on the vehicle body, for example mounted parallel to the plane of the vehicle bottom, can obtain long-distance two-dimensional point cloud data, which is then fused with the three-dimensional point cloud data to further enhance the accuracy and stability of positioning.
  • Fig. 1 is a flow chart showing a method for positioning a vehicle body according to an exemplary embodiment.
  • Fig. 2 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment.
  • Fig. 3 is a flow chart showing a vehicle body positioning method according to an exemplary embodiment.
  • Fig. 4 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment.
  • Fig. 5 is a flow chart showing a method for positioning a vehicle body according to an exemplary embodiment.
  • Fig. 6 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
  • Fig. 7 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
  • Fig. 8 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
  • Fig. 9 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
  • Fig. 10 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
  • Fig. 11 is a block diagram showing a device according to an exemplary embodiment.
  • Fig. 12 is a block diagram showing a device according to an exemplary embodiment.
  • Lidar is an active sensor whose output takes the form of a point cloud; it mainly consists of a transmitter, a receiver, measurement control, and a power supply.
  • When the lidar is working, it first emits a laser beam toward the measured target, then measures the time for the reflected or scattered signal to return, together with its strength and frequency change, to determine the distance, movement speed, and orientation of the measured target.
  • it can also measure properties invisible to the naked eye, such as the distance and angle, shape and size, and speed and attitude of particles in the atmosphere.
  • lidar includes single-line lidar and multi-line lidar.
  • the beam emitted by the laser source in a single-line lidar is a single line, and point cloud information is obtained on a fixed scanning plane, such as the horizontal plane;
  • in a multi-line lidar, the laser sources can emit multiple beams of laser pulses.
  • the multiple laser sources are arranged in a vertical direction, and motor rotation sweeps the beams so that scanning takes place over multiple scanning planes.
  • Fig. 1 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment. Referring to Fig. 1, the method includes the following steps.
  • in step S101, at least one single-line lidar is used to scan and obtain three-dimensional point cloud data of the surrounding environment of the vehicle body, the direction in which the at least one single-line lidar emits laser light forming at least one included angle with the plane at the bottom of the vehicle body.
  • the included angle between the laser emission direction and the plane of the bottom of the vehicle body may open below that plane or above it.
  • for example, when the single-line lidar is installed at the headlights of the vehicle, its laser emission direction may be 45 degrees obliquely upward or 45 degrees obliquely downward; when the single-line lidar is installed on the roof, its laser emission direction may be 45 degrees obliquely downward.
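The geometry above can be sketched numerically: with the lidar tilted at a fixed angle, each sweep lands on a different strip of the environment as the vehicle advances, so accumulating sweeps turns a 2-D sensor into a 3-D cloud. The function name, the 45-degree tilt, and the 1.8 m mount height below are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def tilted_scan_points(ranges, beam_angles, tilt_deg, mount_height, vehicle_x):
    """Project one sweep of a single-line lidar, tilted tilt_deg below
    horizontal, into the world frame. beam_angles are the in-plane scan
    angles; vehicle_x is the vehicle's forward position at scan time."""
    tilt = np.radians(tilt_deg)
    a = np.asarray(beam_angles, dtype=float)
    r = np.asarray(ranges, dtype=float)
    x_s = r * np.cos(a)                     # along the tilted forward axis
    y_s = r * np.sin(a)                     # lateral, unchanged by the tilt
    x_w = vehicle_x + x_s * np.cos(tilt)    # forward component in the world
    z_w = mount_height - x_s * np.sin(tilt) # tilted-down beams lose height
    return np.stack([x_w, y_s, z_w], axis=1)

# Two sweeps taken 1 m apart cover different ground strips, so stacking
# them yields 3-D structure from a 2-D sensor.
scan1 = tilted_scan_points([2.0, 2.0], [-0.1, 0.1], 45.0, 1.8, 0.0)
scan2 = tilted_scan_points([2.0, 2.0], [-0.1, 0.1], 45.0, 1.8, 1.0)
cloud = np.vstack([scan1, scan2])
```

The essential point is that `vehicle_x` differs between sweeps, so identical sensor readings land at different world coordinates.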
  • step S102 point cloud feature information in the three-dimensional point cloud data is extracted.
  • the amount of three-dimensional point cloud data obtained in step S101 is relatively large and contains redundancy and noise.
  • point cloud feature information refers to information about distinctive points in the point cloud data, such as sharp edges, smooth edges, ridges, valleys, and corners; these feature points reflect the basic geometric shape of the model. Extracting point cloud feature information means detecting feature points in the three-dimensional point cloud data and retaining the salient shapes of the model in preparation for subsequent registration.
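As one concrete and commonly used reading of "normal estimation" (named later in the text as one description form of the feature information), the normal at each point can be taken as the smallest-eigenvalue eigenvector of its k-neighborhood covariance. This is a sketch under that assumption; the disclosure does not fix a particular algorithm, and the brute-force neighbor search below is for clarity only.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a surface normal per point: fit a plane to the k nearest
    neighbors via PCA and take the direction of least variance."""
    pts = np.asarray(points, dtype=float)
    normals = np.empty_like(pts)
    for i, p in enumerate(pts):
        d = np.linalg.norm(pts - p, axis=1)
        nbrs = pts[np.argsort(d)[:k]]      # k nearest neighbors (incl. p)
        cov = np.cov(nbrs.T)               # 3x3 neighborhood covariance
        w, v = np.linalg.eigh(cov)         # eigenvalues in ascending order
        normals[i] = v[:, 0]               # least-variance direction
    return normals

# Points sampled on the z = 0 plane: every estimated normal is (0, 0, ±1).
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(0, 1, 50),
                         rng.uniform(0, 1, 50),
                         np.zeros(50)])
n = estimate_normals(plane)
```

A real pipeline would use a spatial index (k-d tree) instead of the O(n²) search, and orient the normals consistently toward the sensor.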
  • step S103 the point cloud feature information is registered with a preset high-precision map to determine the position information of the vehicle body.
  • the vehicle's GPS can first be used to make an approximate location estimate, and a pre-built high-precision map can then be registered against the point cloud feature information. Once registration succeeds, the position information of the vehicle body is confirmed.
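The registration step can be illustrated with a minimal rigid alignment: given matched feature points from the scan and the high-precision map, a least-squares rotation and translation can be recovered with the SVD-based Kabsch method. This is an assumed stand-in for the registration described above; a production system would iterate correspondence search and alignment (e.g. ICP) rather than assume known matches.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping matched points src
    onto dst, via SVD of the cross-covariance (Kabsch algorithm)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: keep det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known pose: rotate map features 30 degrees about z and shift.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
map_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], float)
scan_pts = map_pts @ R_true.T + [2.0, -1.0, 0.5]
R, t = rigid_align(map_pts, scan_pts)
```

The recovered (R, t) is the vehicle pose relative to the map frame implied by the matches.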
  • in the present disclosure, at least one single-line lidar is arranged on the vehicle body above the plane of the bottom of the vehicle body, with the direction in which the lidar emits laser light forming at least one included angle with that plane.
  • this makes the scanning plane of the single-line lidar change continuously as the vehicle travels, so that three-dimensional point cloud data of the surrounding environment of the vehicle body is obtained; compared with the two-dimensional point cloud data obtained by a single-line lidar in the prior art, reliability is enhanced when surrounding features are scarce.
  • the included angle relationship includes: the single-line lidar is arranged on the vehicle body above the plane of the bottom of the vehicle body, and the laser pulse emitted by the single-line lidar is directed obliquely upward or obliquely downward.
  • arranging the single-line lidar on the vehicle body above the plane of the bottom of the vehicle body includes placing it at the middle front of the vehicle body, such as at the headlights, or on the upper part of the vehicle body, such as the roof.
  • when the single-line lidar is set at the headlights, its laser emission direction may be obliquely upward or obliquely downward; when the single-line lidar is set on the roof, its laser emission direction may be obliquely downward.
  • the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward direction, so that the laser pulse emitted by the single-line lidar is in an obliquely downward direction.
  • mounted higher than other parts of the vehicle body, the single-line lidar scans and obtains more comprehensive point cloud information; emitting laser pulses obliquely captures point cloud information of objects at roughly the height of the vehicle body, meeting the driving needs of unmanned driving.
  • when the laser pulse emitted by the single-line lidar is directed obliquely downward, the downward angle can be adjusted so that three-dimensional point cloud data is obtained within a small range, such as within 20 meters.
  • Fig. 2 is a flowchart of a vehicle body positioning method according to an exemplary embodiment.
  • the step S101 uses at least one single-line lidar to scan and acquire the surrounding environment of the vehicle body
  • the 3D point cloud data includes step S104 and step S105.
  • step S104 at least one single-line lidar is used to perform multiple scans to obtain three-dimensional point cloud data of the surrounding environment of the vehicle body at different scan positions.
  • the scanning frequency of the single-line lidar can be set according to time or distance parameters. For example, the single-line lidar can be set to scan once every 2 seconds, with the distance traveled derived from information such as the vehicle's speed and acceleration; alternatively, the single-line lidar can be set to scan once every 1 m.
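The two triggering policies just described, scan every N seconds or scan every N meters with distance integrated from speed, might be sketched as follows; the class and parameter names are assumptions for illustration, not from the disclosure.

```python
class ScanTrigger:
    """Trigger a scan every period_s seconds and/or every interval_m
    metres of travel, with distance integrated from measured speed."""

    def __init__(self, period_s=None, interval_m=None):
        self.period_s = period_s
        self.interval_m = interval_m
        self._t_prev = None        # time of the previous update
        self._t_last_fire = 0.0    # time of the last triggered scan
        self._dist = 0.0           # metres travelled since the last scan

    def update(self, t, speed_mps):
        """Return True if a scan should be triggered at time t (seconds)."""
        if self._t_prev is not None:
            self._dist += speed_mps * (t - self._t_prev)
        self._t_prev = t
        fire = (self.period_s is not None
                and t - self._t_last_fire >= self.period_s) \
            or (self.interval_m is not None
                and self._dist >= self.interval_m)
        if fire:
            self._t_last_fire = t
            self._dist = 0.0
        return fire
```

For example, `ScanTrigger(interval_m=1.0)` reproduces the "scan once every 1 m" policy regardless of how the vehicle's speed varies.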
  • step S105 the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions are merged to generate three-dimensional point cloud data based on the same coordinate system.
  • the position of the single-line lidar can be taken as the coordinate origin, and the coordinate system can be defined according to the order of data acquisition and the direction of motor rotation.
  • the X axis is defined to lie in the horizontal scanning plane, positive to the right; the Y axis is perpendicular to the X axis within the horizontal scanning plane; and the Z axis is perpendicular to the XY plane.
  • because the vehicle moves between scans, the coordinate systems of the three-dimensional point cloud data obtained from two adjacent scans are not the same, and the three-dimensional point cloud data must be fused to generate three-dimensional point cloud data based on the same coordinate system.
  • Fig. 3 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment. Referring to Fig. 3, the difference from the above embodiment is that in step S105, the vehicle body is positioned at the different scanning positions. The three-dimensional point cloud data of the surrounding environment is merged to generate three-dimensional point cloud data based on the same coordinate system, including step S107 and step S108.
  • step S107 determine the observation point in the surrounding environment scanned by the single-line lidar in multiple scans, and obtain the coordinate position of the observation point;
  • a precise clock can be used to obtain the time difference between emitting a laser pulse and receiving the reflected signal, from which the distance S between the observation point and the single-line lidar is calculated.
  • the precision encoder built into the scanner records the vertical scanning angle θ and the horizontal scanning angle α between adjacent pulses, and the polar-coordinate method converts the distance and angles into the coordinates of the observation point P. The X axis lies in the horizontal scanning plane, positive to the right; the Y axis is perpendicular to the X axis within the horizontal scanning plane; and the Z axis is perpendicular to the XY plane.
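Under the usual spherical-coordinate convention, the distance S (from the round-trip time of flight) and the two encoder angles determine the observation point P as X = S·cosθ·cosα, Y = S·cosθ·sinα, Z = S·sinθ. The disclosure does not spell the formulas out, so the following is a sketch of that standard conversion.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_tof(dt_seconds):
    """Range from round-trip time of flight: S = c * dt / 2."""
    return C * dt_seconds / 2.0

def polar_to_xyz(S, theta, alpha):
    """Convert a return (range S, vertical angle theta, horizontal angle
    alpha) to Cartesian coordinates in the sensor frame described in the
    text: X and Y in the horizontal scanning plane, Z perpendicular."""
    x = S * math.cos(theta) * math.cos(alpha)
    y = S * math.cos(theta) * math.sin(alpha)
    z = S * math.sin(theta)
    return x, y, z
```

For instance, a pulse whose echo returns after 200 ns corresponds to a target roughly 30 m away, and a return with theta = alpha = 0 lies straight along the X axis.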
  • step S108 based on the coordinate position of the observation point, the information corresponding to the observation point is converted into the same coordinate system.
  • the coordinate system used in each scan is different; by coordinate conversion, the observation point can be converted from the coordinate system used in the previous scan, O1-X1Y1Z1, into the coordinate system used in the next scan, such as O2-X2Y2Z2. The coordinate conversion may proceed by first bringing the two coordinate origins O1 and O2 into coincidence through translation, and then aligning the axes through rotation.
  • Fig. 4 is a flowchart showing a vehicle body positioning method according to an exemplary embodiment.
  • the difference from the above-mentioned embodiment is that in step S108, based on the coordinate position of the observation point, The information corresponding to the observation point is converted to the same coordinate system, including step S109, step S110, and step S111.
  • step S109 determine the coordinate system information of the observation point during multiple scans
  • step S110 obtain relative pose information of the vehicle body between two adjacent scans in multiple scans
  • step S111 based on the coordinate system information and the relative pose information, the information corresponding to the observation point in the previous scan is sequentially converted to the coordinate system corresponding to the next scan, until the information corresponding to the observation point is transferred to the last One scan corresponds to the coordinate system.
  • the scanning frequency of the single-line lidar can be set according to time or distance parameters. Taking a distance parameter as an example, suppose the vehicle body travels 1 meter forward from its starting position, and the single-line lidar scans once every 0.1 meters, for a total of 10 scans.
  • when the first scan starts, the first coordinate system of the observation point is determined, with the position of the single-line lidar as the coordinate origin; when the vehicle body has traveled further forward, the second scan starts and the second coordinate system of the observation point is determined, again with the position of the single-line lidar as the coordinate origin, so the two scans use different coordinate systems.
  • the point cloud data of the observation point is then converted into the coordinate system used in the second scan.
  • the distance traveled by the vehicle body between the first and second scans can be measured with a wheel speed sensor, and the rotation angle of the vehicle body can be obtained from an inertial measurement unit (IMU). According to the obtained distance and angle, the first coordinate system is translated and rotated accordingly, and the point cloud data of the observation point acquired in the first scan is converted into the second coordinate system. By analogy, the point cloud data of the observation point acquired in the ninth scan is converted into the coordinate system used in the tenth scan.
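The chain of per-scan conversions can be sketched in the plane: each relative pose (distance from the wheel speed sensor, yaw from the IMU) is a rigid transform, and composing the transforms from any scan forward carries its points into the last scan's coordinate system. The planar SE(2) treatment and the function names below are simplifying assumptions; a full system would work in SE(3).

```python
import numpy as np

def se2_matrix(dx, dy, dtheta):
    """Homogeneous 2-D transform for a coordinate change of (dx, dy)
    and rotation dtheta between two scan frames."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

def to_last_frame(points_per_scan, relative_poses):
    """Convert every scan's 2-D points into the final scan's frame.
    relative_poses[i] maps coordinates in frame i into frame i+1."""
    out = []
    n = len(points_per_scan)
    for i, pts in enumerate(points_per_scan):
        T = np.eye(3)
        for j in range(i, n - 1):        # compose i -> i+1 -> ... -> n-1
            T = relative_poses[j] @ T
        homog = np.column_stack([pts, np.ones(len(pts))])
        out.append((homog @ T.T)[:, :2])
    return np.vstack(out)

# A static landmark seen from three scans as the vehicle moves +1 m in x
# each step: in frame 0 it sits at x=5, in frame 1 at x=4, in frame 2 at
# x=3, so frame i -> i+1 is a translation by -1.
scans = [np.array([[5.0, 0.0]]), np.array([[4.0, 0.0]]), np.array([[3.0, 0.0]])]
poses = [se2_matrix(-1.0, 0.0, 0.0), se2_matrix(-1.0, 0.0, 0.0)]
merged = to_last_frame(scans, poses)
```

After conversion, all three observations of the landmark coincide in the last scan's frame, which is exactly the fusion the step describes.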
  • the description form of the point cloud feature information includes one or more of normal estimation, point feature histogram, fast point feature histogram, and spin image.
  • the normal estimation method can be used to extract point cloud features.
  • the advantage of normal estimation is its computation speed; for more complex scenes, the point feature histogram method can be used.
  • this method parameterizes the spatial differences between a query point and its neighboring points into a multi-dimensional histogram that describes the k-neighborhood geometric attributes of the point.
  • the high-dimensional hyperspace in which the histogram lives provides a measurable space for feature representation; it is invariant to the 6-dimensional pose of the corresponding surface of the point cloud and is robust under different sampling densities and noise levels in the neighborhood. The fast point feature histogram, which requires less computation, and the spin image method, which is robust to resolution changes and rotation, can also be used to extract point cloud feature information.
  • Fig. 5 is a flowchart of a vehicle body positioning method according to an exemplary embodiment. Referring to Fig. 5, the difference from the above-mentioned embodiment is that in step S101, at least one single-line lidar is used to scan and acquire the vehicle body
  • the three-dimensional point cloud data of the surrounding environment includes step S112 and step S113.
  • step S112 when at least two single-line lidars are provided on the vehicle body, the relative position relationship between the single-line lidars is acquired;
  • step S113 the point cloud data corresponding to the observation point is converted into the same coordinate system according to the relative position relationship.
  • multiple lidars may be installed on the vehicle body, for example a single-line lidar arranged obliquely downward on the roof, a single-line lidar at the bottom of the vehicle arranged parallel to the plane of the vehicle bottom, or a single-line lidar at the headlights arranged obliquely upward, where obliquely downward and obliquely upward refer to the direction of the laser pulses emitted by the single-line lidar.
  • when the multiple single-line lidars are installed, their relative positional relationships are already determined.
  • the multiple lidars scan and acquire three-dimensional point cloud data of the surrounding environment of the vehicle body, and the point cloud data corresponding to the observation points of each single-line lidar is converted into a unified coordinate system by translation and rotation according to these relative positions.
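With the mounting (extrinsic) pose of each single-line lidar fixed at installation time, fusing their clouds is one rotation and translation per sensor into the common vehicle frame. The sensor names, tilt, and mount offsets below are illustrative assumptions for a roof lidar tilted down and a bumper lidar level with the vehicle bottom.

```python
import numpy as np

def fuse_clouds(clouds, extrinsics):
    """Merge point clouds from several single-line lidars into one
    vehicle-frame cloud. extrinsics[name] is the fixed (R, t) of each
    sensor in the vehicle frame, known from how it was mounted."""
    merged = []
    for name, pts in clouds.items():
        R, t = extrinsics[name]
        merged.append(np.asarray(pts, float) @ np.asarray(R).T + np.asarray(t))
    return np.vstack(merged)

# Roof lidar tilted 45 degrees down (rotation about the y axis), mounted
# 1.8 m up; bumper lidar level with the vehicle bottom, 0.3 m up, 2 m
# forward of the vehicle origin.
tilt = np.radians(45)
R_roof = np.array([[np.cos(tilt), 0.0, np.sin(tilt)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(tilt), 0.0, np.cos(tilt)]])
extrinsics = {
    "roof":   (R_roof, np.array([0.0, 0.0, 1.8])),
    "bumper": (np.eye(3), np.array([2.0, 0.0, 0.3])),
}
clouds = {"roof": [[1.0, 0.0, 0.0]],      # 1 m along the tilted axis
          "bumper": [[10.0, 0.0, 0.0]]}   # 10 m straight ahead
fused = fuse_clouds(clouds, extrinsics)
```

The roof point lands below its 1.8 m mount (the beam points down), while the bumper point simply shifts by the mount offset, and both now share one frame.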
  • the relative position relationship includes:
  • At least one single-line lidar is arranged on the top of the vehicle body, so that the laser pulses it emits are directed obliquely downward;
  • At least one single-line lidar is arranged at the bottom of the vehicle body, so that the laser pulses it emits are parallel to the plane of the bottom of the vehicle body.
  • the plane scanned by a single-line lidar mounted parallel to the vehicle bottom is the horizontal plane; if the vehicle body runs smoothly, this scanning plane does not change, so the acquired point cloud data is two-dimensional point cloud data.
  • this single-line lidar can scan relatively distant observation points, up to about 80 meters away, and is used together with the obliquely downward single-line lidar, which scans close observation points, such as those within 20 meters; merging the point cloud data obtained by the two lidars achieves more stable positioning.
  • Fig. 6 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment. 6, the device includes a lidar device 11, an extraction module 12 and a registration module 13.
  • the lidar device 11 includes at least one single-line lidar for scanning and acquiring three-dimensional point cloud data of the surrounding environment of the vehicle body, and the direction in which the at least one single-line lidar emits laser light is at least on the plane of the bottom of the vehicle body. An angle relationship;
  • the extraction module 12 is used to extract point cloud feature information in the three-dimensional point cloud data
  • the registration module 13 is configured to register the point cloud feature information with a preset high-precision map to determine the position information of the vehicle body.
  • the included angle relationship includes: the single-line lidar is arranged on the vehicle body above the plane of the bottom of the vehicle body, and the laser pulse emitted by the single-line lidar is directed obliquely upward or obliquely downward.
  • the included angle relationship includes: the single-line lidar is arranged on the upper half of the vehicle body in an obliquely downward direction, so that the laser pulse emitted by the single-line lidar is in an obliquely downward direction.
  • Fig. 7 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
  • the lidar device 11 includes:
  • the first acquisition module 14 is configured to use at least one single-line lidar to perform multiple scans to acquire three-dimensional point cloud data of the surrounding environment of the vehicle body at different scanning positions;
  • the processing module 15 is used for fusing the three-dimensional point cloud data of the surrounding environment of the vehicle body at the different scanning positions to generate three-dimensional point cloud data based on the same coordinate system.
  • Fig. 8 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
  • the processing module 15 includes:
  • the determining sub-module 16 is used to determine the observation point in the surrounding environment scanned by the single-line lidar in multiple scans, and obtain the coordinate position of the observation point;
  • the conversion sub-module 17 is configured to convert the point cloud data corresponding to the observation point into the same coordinate system based on the coordinate position of the observation point.
  • Fig. 9 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
  • the conversion sub-module 17 includes:
  • the determining unit 18, configured to determine the coordinate system information of the observation point during the multiple scans;
  • the acquiring unit 19, configured to acquire the relative pose information of the vehicle body between each pair of adjacent scans;
  • the conversion unit 20, configured to, based on the coordinate system information and the relative pose information, sequentially convert the point cloud data corresponding to the observation point from the coordinate system of one scan to that of the next scan, until the point cloud data corresponding to the observation point has been converted to the coordinate system of the last scan.
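The sequential conversion performed by conversion unit 20 — carrying an observation from the frame of one scan to the frame of the next, using the relative pose between adjacent scans, until the last scan's frame is reached — amounts to chaining homogeneous transforms. A sketch under the assumption that each relative pose is given as a 4x4 matrix (pure translations here, for simplicity; a real relative pose also carries rotation):

```python
import numpy as np

def translation(dx, dy, dz):
    """4x4 homogeneous transform for a pure translation (illustration only;
    a real relative pose would also fill in the rotation block)."""
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def to_last_frame(point, relative_poses):
    """Carry a point observed in the frame of the first scan through the
    chain of relative poses T_k (frame k -> frame k+1) until it is
    expressed in the frame of the last scan."""
    p = np.append(point, 1.0)        # homogeneous coordinates
    for T in relative_poses:         # one hop per pair of adjacent scans
        p = T @ p
    return p[:3]

# A point seen in scan 0, converted through two adjacent-scan relative poses.
p_last = to_last_frame(np.array([1.0, 0.0, 0.0]),
                       [translation(1.0, 0.0, 0.0),   # scan 0 -> scan 1
                        translation(0.0, 2.0, 0.0)])  # scan 1 -> scan 2
```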
  • the description form of the point cloud feature information includes one or more of normal estimation, point feature histogram, fast point feature histogram, and spin image.
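Of the descriptor forms listed, normal estimation is the simplest to illustrate: the normal at a point is the direction of least spread of its neighborhood, i.e. the eigenvector of the neighborhood covariance with the smallest eigenvalue. (Point feature histograms and fast point feature histograms then build histograms over angles between such normals.) An illustrative sketch with synthetic data:

```python
import numpy as np

def estimate_normal(neighbors):
    """PCA-based normal estimation: the surface normal at a point is the
    eigenvector of its neighborhood's covariance matrix with the smallest
    eigenvalue (the direction of least variance)."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # unit vector; sign is arbitrary

# Neighbors sampled from the z = 0 plane: the normal should be +/-(0, 0, 1).
rng = np.random.default_rng(1)
patch = np.column_stack([rng.normal(size=20),
                         rng.normal(size=20),
                         np.zeros(20)])
n = estimate_normal(patch)
```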
  • Fig. 10 is a block diagram showing a vehicle body positioning device according to an exemplary embodiment.
  • the lidar device 11 includes:
  • the second acquisition module 21 is configured to acquire the relative position relationship between the single-line lidars when at least two single-line lidars are provided on the vehicle body;
  • the conversion module 22 is configured to convert the point cloud data acquired by each single-line lidar for the observation point into the same coordinate system according to the relative position relationship.
  • the relative position relationship includes:
  • at least one of the single-line lidars is arranged on the top of the vehicle body, so that the laser pulse it emits is directed obliquely downward;
  • at least one of the single-line lidars is arranged at the bottom of the vehicle body, so that the laser pulse it emits is parallel to the plane of the vehicle body bottom.
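With two single-line lidars mounted as above (one on top angled downward, one at the bottom scanning horizontally), the relative position relationship reduces to a fixed extrinsic rotation and translation per sensor; applying each lidar's extrinsics maps both clouds into one vehicle-body frame. The mounting heights and pitch angle below are hypothetical, chosen only to illustrate the transform:

```python
import numpy as np

def to_vehicle_frame(points, R_ext, t_ext):
    """Apply a lidar's fixed mounting extrinsics (rotation R_ext,
    translation t_ext) to express its points in the vehicle body frame."""
    return points @ R_ext.T + t_ext

# Hypothetical mounting: a top lidar at 1.6 m height pitched 30 degrees
# downward, and a bottom lidar at 0.3 m height scanning horizontally.
pitch = np.radians(30.0)   # positive pitch tilts the forward axis downward here
R_top = np.array([[ np.cos(pitch), 0.0, np.sin(pitch)],
                  [ 0.0,           1.0, 0.0          ],
                  [-np.sin(pitch), 0.0, np.cos(pitch)]])
t_top = np.array([0.0, 0.0, 1.6])
R_bot = np.eye(3)
t_bot = np.array([0.0, 0.0, 0.3])

# Each lidar reports a return 5 m ahead along its own x axis; after the
# extrinsics, both returns live in the same vehicle-body coordinate system.
top_cloud = np.array([[5.0, 0.0, 0.0]])
bot_cloud = np.array([[5.0, 0.0, 0.0]])
merged = np.vstack([to_vehicle_frame(top_cloud, R_top, t_top),
                    to_vehicle_frame(bot_cloud, R_bot, t_bot)])
```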
  • Fig. 11 is a block diagram showing a vehicle body positioning device 800 according to an exemplary embodiment.
  • the device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • the processing component 802 generally controls the overall operations of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support operations in the device 800. Examples of these data include instructions for any application or method operating on the device 800, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 806 provides power to various components of the device 800.
  • the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), and when the device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 814 includes one or more sensors for providing the device 800 with various aspects of status assessment.
  • the sensor component 814 can detect the on/off status of the device 800 and the relative positioning of components, for example, the display and the keypad of the device 800; the sensor component 814 can also detect a change in the position of the device 800 or one of its components, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and temperature changes of the device 800.
  • the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices.
  • the device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 804 including instructions, which can be executed by the processor 820 of the device 800 to complete the foregoing method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • Fig. 12 is a block diagram showing a vehicle body positioning device 1900 according to an exemplary embodiment.
  • the device 1900 may be provided as a server.
  • the device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
  • the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to the network, and an input output (I/O) interface 1958.
  • the device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1932 including instructions, which may be executed by the processing component 1922 of the device 1900 to complete the foregoing method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.

Abstract

The present disclosure relates to a method and device for positioning a vehicle body. At least one single-line lidar is used to scan and acquire three-dimensional point cloud data of the environment surrounding a vehicle body, where there is at least one included angle between the laser-beam emission direction of each single-line lidar and the plane of the vehicle body bottom (S101); point cloud feature information is extracted from the three-dimensional point cloud data (S102); and the point cloud feature information is registered with a preset high-precision map to determine position information of the vehicle body (S103). Compared with the two-dimensional point cloud data acquired by a single-line radar in the prior art, reliability is improved when environmental features are sparse.
PCT/CN2019/115903 2019-02-20 2019-11-06 Method and device for positioning a vehicle body WO2020168742A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910127565.9A CN109725330A (zh) 2019-02-20 2019-02-20 一种车体定位方法及装置
CN201910127565.9 2019-02-20

Publications (1)

Publication Number Publication Date
WO2020168742A1 (fr)

Family

ID=66301526

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/115903 WO2020168742A1 (fr) 2019-02-20 2019-11-06 Procédé et dispositif de positionnement d'un corps de véhicule

Country Status (2)

Country Link
CN (1) CN109725330A (fr)
WO (1) WO2020168742A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112731434A (zh) * 2020-12-15 2021-04-30 武汉万集信息技术有限公司 Positioning method and system based on lidar and markers
CN113534097A (zh) * 2021-08-27 2021-10-22 北京工业大学 Optimization method for rotating-axis lidar
CN113970756A (zh) * 2021-11-01 2022-01-25 中国海洋大学 Laser wave measurement device and spatio-temporal inversion reconstruction method for three-dimensional wave fields
CN114506212A (zh) * 2022-02-15 2022-05-17 国能神东煤炭集团有限责任公司 Spatial positioning assisted driving system and method for a shuttle car
CN115937069A (zh) * 2022-03-24 2023-04-07 北京小米移动软件有限公司 Part detection method and apparatus, electronic device, and storage medium

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109725330A (zh) * 2019-02-20 2019-05-07 苏州风图智能科技有限公司 Vehicle body positioning method and device
CN110276834B (zh) * 2019-06-25 2023-04-11 达闼科技(北京)有限公司 Laser point cloud map construction method, terminal, and readable storage medium
CN110376595B (zh) * 2019-06-28 2021-09-24 湖北大学 Vehicle measurement system for an automatic loading machine
CN110789533B (zh) * 2019-09-25 2021-08-13 华为技术有限公司 Data presentation method and terminal device
CN112219225A (zh) * 2019-09-26 2021-01-12 深圳市大疆创新科技有限公司 Positioning method and system, and movable platform
CN110677491B (zh) * 2019-10-10 2021-10-19 郑州迈拓信息技术有限公司 Method for estimating the position of an adjacent vehicle
CN110827542A (zh) * 2019-11-11 2020-02-21 江苏中路工程技术研究院有限公司 Expressway safe following-distance early-warning system
CN110988848B (zh) * 2019-12-23 2022-04-26 潍柴动力股份有限公司 Method and device for monitoring the relative pose of a vehicle-mounted lidar
CN111257903B (zh) * 2020-01-09 2022-08-09 广州微牌智能科技有限公司 Vehicle positioning method and apparatus, computer device, and storage medium
CN111273270A (zh) * 2020-03-17 2020-06-12 北京宸控科技有限公司 Positioning and orientation method for a roadheader
CN111693043B (zh) * 2020-06-18 2023-04-07 北京四维图新科技股份有限公司 Map data processing method and device
CN112082484A (zh) * 2020-09-11 2020-12-15 武汉理工大学 Device and method for detecting engineering-vehicle body offset based on single-line lidar
CN114252883B (zh) * 2020-09-24 2022-08-23 北京万集科技股份有限公司 Target detection method and apparatus, computer device, and medium
CN112904841B (zh) * 2021-01-12 2023-11-03 北京布科思科技有限公司 Non-horizontally oriented single-line positioning and obstacle-avoidance method, apparatus, device, and storage medium
CN115586511B (zh) * 2022-11-25 2023-03-03 唐山百川工业服务有限公司 Two-dimensional lidar positioning method based on an array of pillars

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4953502B2 (ja) * 2000-10-02 2012-06-13 日本信号株式会社 Two-dimensional scanning optical radar sensor
CN104808192A (zh) * 2015-04-15 2015-07-29 中国矿业大学 Swing device for three-dimensional laser scanning and coordinate conversion method thereof
CN106323269A (zh) * 2015-12-10 2017-01-11 上海思岚科技有限公司 Autonomous positioning and navigation device, positioning and navigation method, and autonomous positioning and navigation system
CN106371105A (zh) * 2016-08-16 2017-02-01 长春理工大学 Single-line lidar vehicle target recognition method, device, and automobile
KR20170116305A (ko) * 2016-04-08 2017-10-19 한국전자통신연구원 Apparatus for recognizing surrounding obstacles based on fused tracking information from heterogeneous multiple sensors for a co-pilot vehicle
CN107957583A (zh) * 2017-11-29 2018-04-24 江苏若博机器人科技有限公司 Multi-sensor-fusion all-weather rapid unmanned-vehicle detection and obstacle-avoidance system
CN108072880A (zh) * 2018-01-17 2018-05-25 上海禾赛光电科技有限公司 Method for adjusting the center pointing of a lidar field of view, medium, and lidar system
CN109725330A (zh) * 2019-02-20 2019-05-07 苏州风图智能科技有限公司 Vehicle body positioning method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019127445A1 (fr) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 Three-dimensional mapping method, apparatus, and system, cloud platform, electronic device, and computer program product


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEI, CHONGYANG: "3D Feature Point Clouds-based Research on Mapping and Localization in Urban Environments", ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, no. 1, 15 January 2019 (2019-01-15), ISSN: 1674-022X, DOI: 20191231103617Y *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112731434A (zh) * 2020-12-15 2021-04-30 武汉万集信息技术有限公司 Positioning method and system based on lidar and markers
CN113534097A (zh) * 2021-08-27 2021-10-22 北京工业大学 Optimization method for rotating-axis lidar
CN113534097B (zh) * 2021-08-27 2023-11-24 北京工业大学 Optimization method for rotating-axis lidar
CN113970756A (zh) * 2021-11-01 2022-01-25 中国海洋大学 Laser wave measurement device and spatio-temporal inversion reconstruction method for three-dimensional wave fields
CN114506212A (zh) * 2022-02-15 2022-05-17 国能神东煤炭集团有限责任公司 Spatial positioning assisted driving system and method for a shuttle car
CN114506212B (zh) * 2022-02-15 2023-09-22 国能神东煤炭集团有限责任公司 Spatial positioning assisted driving system and method for a shuttle car
CN115937069A (zh) * 2022-03-24 2023-04-07 北京小米移动软件有限公司 Part detection method and apparatus, electronic device, and storage medium
CN115937069B (zh) * 2022-03-24 2023-09-19 北京小米移动软件有限公司 Part detection method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN109725330A (zh) 2019-05-07

Similar Documents

Publication Publication Date Title
WO2020168742A1 (fr) Method and device for positioning a vehicle body
US11100260B2 (en) Method and apparatus for interacting with a tag in a wireless communication area
WO2019179417A1 (fr) Procédé de fusion de données et dispositif associé
EP3469306B1 (fr) Mise en correspondance géométrique dans des systèmes de navigation visuels
US9646384B2 (en) 3D feature descriptors with camera pose information
CN109668551B (zh) 机器人定位方法、装置及计算机可读存储介质
CN110967011A (zh) 一种定位方法、装置、设备及存储介质
WO2022036980A1 (fr) Procédé et appareil de détermination de pose, dispositif électronique, support de stockage et programme
US9304970B2 (en) Extended fingerprint generation
WO2022078467A1 (fr) Procédé et appareil de recharge automatique de robot, robot et support de stockage
KR20180044279A (ko) 깊이 맵 샘플링을 위한 시스템 및 방법
CN105940429A (zh) 用于确定设备运动的估计的方法和系统
US20210158560A1 (en) Method and device for obtaining localization information and storage medium
US20210390223A1 (en) Method and apparatus for display of digital content associated with a location in a wireless communications area
WO2019019819A1 (fr) Dispositif électronique mobile et procédé de traitement de tâches dans une région de tâche
CN111272172A (zh) 无人机室内导航方法、装置、设备和存储介质
WO2022110653A1 (fr) Procédé et appareil de détermination de pose, dispositif électronique et support de stockage lisible par ordinateur
CN109696173A (zh) 一种车体导航方法和装置
US20220237533A1 (en) Work analyzing system, work analyzing apparatus, and work analyzing program
US20120002044A1 (en) Method and System for Implementing a Three-Dimension Positioning
US11331801B2 (en) System and method for probabilistic multi-robot positioning
CN115407355B (zh) 库位地图的验证方法、装置及终端设备
CN112835021B (zh) 定位方法、装置、系统及计算机可读存储介质
US20130155211A1 (en) Interactive system and interactive device thereof
CN113433566B (zh) 地图建构系统以及地图建构方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19916418

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19916418

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/03/2022)
