CN115964446B - Radar data interaction processing method based on mobile terminal - Google Patents

Radar data interaction processing method based on mobile terminal

Info

Publication number
CN115964446B
CN115964446B
Authority
CN
China
Prior art keywords
data
point cloud
vehicle
obstacle
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211628850.7A
Other languages
Chinese (zh)
Other versions
CN115964446A (en)
Inventor
马楠
姚永强
张欢
徐成
郭聪
吴祉璇
许根宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202211628850.7A
Publication of CN115964446A
Application granted
Publication of CN115964446B
Legal status: Active


Abstract

A radar data interaction processing method based on a mobile terminal belongs to the field of automatic driving. In the method, the raw point cloud data is processed rapidly and accurately at the computing end, and the processed radar data is sent to the mobile terminal for display. This solves the problems that display equipment occupies space and is difficult to move, offers high flexibility, and lets more testers participate in debugging at the same time. In addition, the mobile terminal renders the radar data and represents obstacle information with simple models, so that more passengers can understand the radar data and fully trust each decision the unmanned vehicle makes while riding in it. The method thus addresses the problems that the computing end is difficult to move and that ordinary users do not understand radar data.

Description

Radar data interaction processing method based on mobile terminal
Technical Field
The invention belongs to the field of automatic driving.
Background
An autonomous vehicle makes correct decisions and control actions from perceived data, and perception depends on sensors such as radar and cameras. A lidar emits detection signals (laser beams) toward a target and compares the received signals (target echoes) reflected from the target with the emitted signals; after appropriate processing, relevant information about the target can be obtained, such as distance, azimuth, altitude, speed, attitude, and even shape, so that the target can be detected, tracked, and identified. However, the raw radar output is complex point cloud data: it is not difficult to understand for workers in the automatic driving field, but a simple and easily understood presentation of radar data is very important for the many passengers riding when an autonomous vehicle runs on the road. This application therefore proposes a radar data interaction processing method based on a mobile terminal, in which the computing end processes the raw radar data, transmits it to the mobile terminal through the UDP protocol, and displays the obstacles with corresponding algorithms; the method is easy to understand, flexible, and convenient for interactive display.
In a conventional unmanned vehicle, raw data is typically displayed on the central control screen, so a tester must sit in the vehicle to know in real time what the vehicle perceives and whether its decisions are correct. Regarding display interfaces, a lidar display interface is designed in the display screen panel with a lidar data graphical user interface (publication number CN307549670S); that interface is suitable for computers, mobile phones, and tablets, but it can only play back recorded lidar data. In unmanned operation the vehicle travels fast and must make decisions from data perceived in real time, so that method cannot meet the requirements of real-time radar data processing and display. Regarding radar data processing, the method, system, device, and storage medium for filtering lidar ground point clouds (publication number CN115166700A) structurally encodes the point cloud, retains environment points through neighbor-point conditional queries, and filters out the ground points and noise points in the point cloud.
Aiming at these defects, the invention provides a radar data interaction processing method based on a mobile terminal, which processes the raw point cloud data rapidly and accurately at the computing end and sends the processed radar data to the mobile terminal for display. This solves the problems that display equipment occupies space and is difficult to move, offers high flexibility, and lets more testers participate in debugging at the same time. In addition, the mobile terminal renders the radar data and represents obstacle information with simple models, so that more passengers can understand the radar data and fully trust each decision the unmanned vehicle makes while riding in it. The purpose of the invention is to solve the problems that the computing end is difficult to move and that ordinary users do not understand radar data.
Disclosure of Invention
The whole system comprises a data acquisition unit, a processing unit, and a display unit. The data acquisition unit and the processing unit run on the computing end, and the display unit runs on the mobile terminal. The data acquisition unit collects two kinds of data: a set of unordered raw point cloud data from the vehicle-mounted three-dimensional lidar, and vehicle information from the CAN bus. The processing unit clusters the point cloud data into several independent subsets and performs target classification and identification on that basis; the display unit shows the vehicle information and the identified obstacles on the mobile terminal.
1. Acquisition of point cloud data
A set of unordered raw point cloud data is obtained through the vehicle-mounted three-dimensional lidar. The point cloud data at least comprises the points of the road area scanned by the vehicle while driving; each point's data comprises coordinate information and carries a time stamp and the direction of the beam.
2. Processing of point cloud data
The processing unit processes the raw point cloud data: the preprocessing stage filters out ground point cloud data and performs target clustering segmentation, and the recognition stage calculates geometric data of the clustered point clouds to identify obstacles.
1) The raw point cloud data is converted into a depth image; each pixel of the depth image stores the measured distance from the sensor to the object.
2) Filtering out ground point cloud data
(1) Traverse all the points in the point cloud set and obtain the road surface height of the driving road corresponding to each point;
(2) Determine the height coordinate of each point relative to the driving road based on its coordinate information in the point cloud map;
(3) If the height coordinate of any point is less than or equal to the corresponding road surface height, remove the point from the point cloud set, reducing the amount of point cloud data.
3) Clustering point cloud data
(1) Point cloud data clustering
The Euclidean distance from each neighborhood point to the target point is calculated through a KD-Tree neighbor query, and points are clustered according to this distance. The process is repeated until all new points have been processed.
(2) Obstacle identification
For each cluster, a minimum-area rectangle surrounding the obstacle is obtained based on the minimum convex hull method, yielding a cuboid bounding box. Features of the box region are extracted and classified to identify the target obstacle. The obstacle information includes corner coordinates, type, and id.
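The clustering and box-fitting steps can be illustrated with a short sketch. The following is a minimal example rather than the patent's implementation: it assumes numpy and scipy are available, uses scipy's cKDTree for the neighbor queries, and searches convex-hull edge directions for the minimum-area rectangle; the distance threshold and minimum cluster size are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

def euclidean_cluster(points, radius=0.5, min_size=5):
    """KD-Tree region growing: group points whose mutual distance is below `radius`.
    `points` is an (N, 3) numpy array."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:  # repeat until the cluster stops gaining new points
            neighbors = tree.query_ball_point(points[frontier.pop()], r=radius)
            fresh = [i for i in neighbors if i in unvisited]
            unvisited.difference_update(fresh)
            cluster.extend(fresh)
            frontier.extend(fresh)
        if len(cluster) >= min_size:
            clusters.append(points[cluster])
    return clusters

def min_area_rect(xy):
    """Minimum-area enclosing rectangle of 2D points, searched over convex-hull edge directions."""
    hull = xy[ConvexHull(xy).vertices]
    best_area, best_corners = np.inf, None
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        ang = np.arctan2(edge[1], edge[0])
        rot = np.array([[np.cos(-ang), -np.sin(-ang)],
                        [np.sin(-ang),  np.cos(-ang)]])
        r = hull @ rot.T                      # rotate so this edge is axis-aligned
        lo, hi = r.min(axis=0), r.max(axis=0)
        area = (hi - lo).prod()
        if area < best_area:
            corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                                [hi[0], hi[1]], [lo[0], hi[1]]]) @ rot
            best_area, best_corners = area, corners
    return best_corners                       # four (x, y) corner coordinates
```

Extending the rectangle vertically over the cluster's z-range then gives the cuboid bounding box described above.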
3. Vehicle state information acquisition
Vehicle information is obtained through the CAN bus; the vehicle state information comprises steering wheel angle, battery level, speed, gear, accelerator opening, and brake opening.
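As a hedged illustration of this step, the sketch below polls the bus with the python-can library; the arbitration IDs, byte offsets, and scale factors are invented for the example, since the patent does not specify the vehicle's CAN matrix.

```python
import can  # python-can

# Hypothetical CAN IDs and scalings; the real message layout is not given in the patent.
SIGNALS = {
    0x101: ("steering_wheel_angle_deg",
            lambda d: int.from_bytes(d[0:2], "big", signed=True) * 0.1),
    0x102: ("speed_kmh", lambda d: int.from_bytes(d[0:2], "big") * 0.01),
    0x103: ("battery_percent", lambda d: d[0]),
}

def read_vehicle_state(channel="can0", timeout=1.0):
    """Poll the CAN bus once and decode whichever known frames arrive."""
    state = {}
    with can.interface.Bus(channel=channel, bustype="socketcan") as bus:
        for _ in range(len(SIGNALS)):
            msg = bus.recv(timeout=timeout)
            if msg is None:  # bus quiet; stop early
                break
            if msg.arbitration_id in SIGNALS:
                name, decode = SIGNALS[msg.arbitration_id]
                state[name] = decode(msg.data)
    return state
```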
4. Data communication
1) UDP protocol transmitted data
(1) Frame selection is performed on the obstacles identified from the radar point cloud data. To avoid the huge data volume of the point cloud itself, only the corner coordinates of each obstacle's rectangular box and the obstacle category are transmitted.
(2) Transmitting byte data via UDP protocol
Every eight bytes are specified as one piece of obstacle information or vehicle information. Transmission occupies two ports, each carrying 1498 bytes of data per packet. One port transmits vehicle state information, the other transmits radar data information, and the two ports transmit cyclically.
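A minimal sketch of the sending side of this two-port scheme, using Python's standard socket and struct modules; the mobile terminal's address and the two port numbers are placeholders, and each value is packed as an 8-byte field as described above.

```python
import socket
import struct

MOBILE_IP = "192.168.1.50"             # placeholder address of the mobile terminal
RADAR_PORT, VEHICLE_PORT = 9001, 9002  # placeholder ports

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_obstacles(obstacles):
    """Each obstacle: 4 corner (x, y) pairs plus a type code, 9 x 8 = 72 bytes."""
    payload = b"".join(
        struct.pack("<8dq", *(c for xy in ob["corners"] for c in xy), ob["type"])
        for ob in obstacles)
    sock.sendto(payload, (MOBILE_IP, RADAR_PORT))

def send_vehicle_state(angle, battery, speed, gear, throttle, brake):
    """Six vehicle-state values, 8 bytes each, on the second port."""
    payload = struct.pack("<6d", angle, battery, speed, gear, throttle, brake)
    sock.sendto(payload, (MOBILE_IP, VEHICLE_PORT))

# Example: one obstacle with four corner coordinates and type code 0
send_obstacles([{"corners": [(1.0, 2.0), (3.0, 2.0), (3.0, 5.0), (1.0, 5.0)], "type": 0}])
```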
5. Display interaction of data
1) The mobile terminal receives the obstacle data and the vehicle data.
2) Coordinate system conversion of obstacle information
The radar obstacle data takes the center point of the vehicle tail as the coordinate origin, in meters, forming the vehicle coordinate system. The mobile terminal display uses an image coordinate system: a control is drawn in the middle of the interface to display the lane information, with the upper-left corner of the displayed lane as the coordinate origin, in pixels, so the vehicle coordinate system must be converted into the image coordinate system. In the vehicle coordinate system, the X axis points right and the Y axis points up from the origin; in the image coordinate system, the x axis points right and the y axis points down from the origin. The coordinate conversion formulas are
x’=w/2+X’*(w/m)
y’=h–Y’*(h/n)
where w is the pixel width of the displayed lane on the screen, h is the pixel height of the displayed lane, m is the full lateral span (unit: m) over which obstacles around the host vehicle are displayed, n is the maximum longitudinal distance (unit: m) of displayed obstacles from the host vehicle, X’ is the actual lateral distance (unit: m) of the obstacle from the host vehicle, Y’ is the actual longitudinal distance (unit: m), x’ is the computed lateral position (unit: pixel) of the obstacle in the interface, and y’ is the computed longitudinal position (unit: pixel) in the interface (a code sketch of this conversion follows this list);
3) A mobile terminal interface is developed to display the vehicle information (vehicle speed, battery level, steering wheel angle, accelerator opening, brake opening, and gear) and the obstacle information.
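The conversion above is small enough to check in a few lines; a minimal sketch, with defaults taken from the worked example in Embodiment 3 below (w = 1800, h = 1600, m = 12, n = 50).

```python
def vehicle_to_image(X, Y, w=1800, h=1600, m=12, n=50):
    """Map vehicle coordinates (meters, origin at the tail center, Y forward)
    to image coordinates (pixels, origin at the lane's upper-left, y down)."""
    x = w / 2 + X * (w / m)
    y = h - Y * (h / n)
    return x, y

assert vehicle_to_image(2, 3) == (1200.0, 1504.0)  # matches Embodiment 3
```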
Drawings
FIG. 1 is a workflow diagram of the mobile-terminal-based radar data interaction processing method of the present invention;
FIG. 2 is a schematic diagram of the coordinate conversion of the method;
FIG. 3 shows the mobile terminal interface of the method.
Detailed Description
The following detailed description of embodiments of the present invention, covering for example the shape and construction of the components, their mutual positions and connections, their roles and working principles, the manufacturing process, and the methods of operation and use, is given by way of example to help those skilled in the art understand the invention more completely, accurately, and thoroughly.
In this embodiment, the computing end is a Jetson AGX Orin, described by NVIDIA as its smallest, most powerful, and most energy-efficient AI supercomputer, capable of 200 trillion operations per second (200 TOPS). The mobile terminal is an M6 tablet, which is compact and basically meets the display requirements.
Embodiment 1. FIG. 1 is a workflow diagram of the radar data interaction processing method based on a mobile terminal. As shown in the figure, raw point cloud data is first acquired through the vehicle-mounted three-dimensional lidar, and vehicle information including steering wheel angle, battery level, speed, gear, accelerator opening, and brake opening is acquired through the CAN bus. The ground point cloud data is then filtered out to reduce the amount of point cloud data, the filtered point cloud is clustered so that points belonging to the same obstacle are grouped together, and obstacles are identified from the clustered data. Finally, the vehicle information and the processed, identified point cloud data are transmitted in real time to the mobile terminal for display using a UDP-based data transmission protocol.
Most lidars provide raw data as a single range reading for each laser beam, together with a time stamp and the beam direction, which can be directly converted into a depth image; each pixel of such a depth image stores the measured distance from the sensor to the object. When filtering out ground points, all points in the point cloud set are traversed and the road surface height of the driving road corresponding to each point is queried; the height coordinate of each point relative to the driving road is determined from its coordinate information; and any point whose height coordinate is less than or equal to the corresponding road surface height is removed from the set, which completes the ground filtering task and reduces the data volume (a sketch of this filter follows). Euclidean clustering is then performed on the point cloud data: the Euclidean distance from each neighborhood point to the target point is calculated through a KD-Tree neighbor query, points are clustered according to this distance, and the process repeats until all new points have been processed. After the point cloud of each obstacle is obtained, the points around the obstacle are taken with the minimum convex hull method, the minimum-area enclosing rectangle is computed from them, and for a 3D object this yields a cuboid bounding box. Finally, the geometric relationships of the clusters within each box are calculated and the target obstacle is identified from the results.
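A hedged sketch of the ground filter just described, assuming the road height under each point can be queried; here the lookup is stubbed as a flat road, since the patent leaves the height-query implementation open.

```python
import numpy as np

def road_height(x, y):
    """Road surface height beneath (x, y). Stub: flat road at z = 0;
    a real implementation would query a map or a fitted ground plane."""
    return 0.0

def filter_ground(points):
    """Drop every point whose height is <= the road surface height below it.
    `points` is an (N, 3) array of (x, y, z) coordinates."""
    keep = np.array([z > road_height(x, y) for x, y, z in points])
    return points[keep]
```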
Embodiment 2. UDP protocol data transmission procedure
Data is transmitted through the UDP protocol. Transmission occupies two ports, each carrying 1498 bytes per packet; one port transmits vehicle state information, the other transmits radar data information, and the two transmit cyclically. Bytes 1-6 hold the source device MAC address, bytes 7-14 the destination MAC address, bytes 15-24 the source device IP address, bytes 25-34 the destination device IP address, bytes 35-38 the source data transmission port, and bytes 39-42 the destination data reception port; the remaining bytes are data bytes. Each obstacle record transmitted by the computing end on the first port occupies 72 bytes: the first 64 bytes (4 corners times 2 coordinates, 8 bytes per value) hold the corner coordinate information, each 8-byte value an x or y coordinate, and the last 8 bytes hold the type information. To transmit more obstacle data, only four corner coordinates are sent (for example front upper left, front upper right, rear lower left, and rear lower right), from which the obstacle's rectangular box can be restored. The computing end transmits the vehicle information on the second port, 8 bytes per value; for example, bytes 43-50 hold the steering wheel angle. Since the transmitted vehicle information is limited, unused byte positions can later carry obstacle data or additional vehicle information.
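On the receiving side, the 72-byte obstacle records can be decoded with the mirror of the packing shown earlier; a sketch under the same assumptions (little-endian 8-byte fields, header bytes already stripped from the payload).

```python
import struct

RECORD = struct.Struct("<8dq")  # 4 corners x (x, y) as doubles + a type code = 72 bytes

def parse_obstacles(payload):
    """Split a radar-port payload into obstacle dicts."""
    obstacles = []
    for off in range(0, len(payload) - RECORD.size + 1, RECORD.size):
        *coords, obj_type = RECORD.unpack_from(payload, off)
        corners = list(zip(coords[0::2], coords[1::2]))  # [(x1, y1), ..., (x4, y4)]
        obstacles.append({"corners": corners, "type": obj_type})
    return obstacles
```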
Embodiment 3. FIG. 2 is a schematic diagram of the coordinate conversion of the radar data interaction processing method based on a mobile terminal. The origin (0, 0) of the image coordinate system is the upper-left corner of the lane displayed on the screen, with the x axis to the right and the y axis downward. Assume the lane displayed by the mobile terminal is w pixels wide and h pixels high, i.e., the pixel coordinate of the lane's lower-right corner is (w, h); taking the M6 tablet as an example, the lane is displayed with pixel width w = 1800 and pixel height h = 1600. The origin of the actual vehicle coordinate system is the center point of the vehicle tail, with the X axis to the right and the Y axis toward the vehicle head. For example, with 3 lanes, each 4 meters wide and 50 meters long, and the vehicle centered at the bottom of the middle lane, the displayed obstacles lie in the lateral range X ∈ [-6, 6] and the longitudinal range Y ∈ [0, 50], so the coordinate conversion formulas are:
x’=1800/2+X’*(1800/12)
y’=1600–Y’*(1600/50)
For example, if the obstacle coordinates received by the vehicle are (2, 3), i.e., the actual lateral distance from the vehicle is X’ = 2 m and the longitudinal distance is Y’ = 3 m, the position displayed in the interface lane is
x’=1800/2+2*(1800/12)=1200px
y’=1600–3*(1600/50)=1504px
Embodiment 4. FIG. 3 shows the mobile terminal interface of the radar data interaction processing method based on a mobile terminal. Three lanes are displayed in the interface, each 4 m wide and 50 m long, with the vehicle at the bottom of the middle lane; the lane plane is rotated 30 degrees in the interface for visual appeal. The top of the interface shows, from left to right, the battery level, vehicle speed, and steering wheel angle, displayed as integers. The left side shows the accelerator opening and brake opening as percentage progress bars; for example, at 34% throttle, 34% of the bar is blue. The right side shows the gear information. Obstacles within roughly ±6 meters laterally and 50 meters ahead of the vehicle are displayed in the interface. The corresponding obstacle model is shown according to the obstacle type sent by the computing end, and obstacle models are translated smoothly by associating obstacle ids across transmitted data frames (a sketch of this id-based smoothing follows).
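The id association that keeps obstacle motion smooth can be sketched as simple per-id interpolation between frames; a minimal illustration in which the interpolation factor and data structures are assumptions, not specified by the patent.

```python
class ObstacleSmoother:
    """Ease each obstacle's displayed position toward its latest reported
    position, matching obstacles across frames by their id."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha  # 0..1; higher means snappier motion
        self.shown = {}     # id -> (x, y) currently displayed

    def update(self, frame):
        """frame: dict of id -> (x, y) from the latest radar packet."""
        for oid, (tx, ty) in frame.items():
            x0, y0 = self.shown.get(oid, (tx, ty))  # new ids appear in place
            self.shown[oid] = (x0 + self.alpha * (tx - x0),
                               y0 + self.alpha * (ty - y0))
        # Drop ids that vanished from the stream.
        self.shown = {k: v for k, v in self.shown.items() if k in frame}
        return self.shown
```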

Claims (1)

1. A radar data interaction processing method based on a mobile terminal is characterized by comprising the following steps of:
Acquisition of Point cloud data
Acquiring a set of unordered raw point cloud data through a vehicle-mounted three-dimensional lidar, wherein the point cloud data at least comprises points of the road area scanned by the vehicle while driving, and each point's data comprises coordinate information and carries a time stamp and the direction of the beam;
second, processing point cloud data
The processing unit processes the raw point cloud data, wherein the preprocessing stage filters out ground point cloud data and performs target clustering segmentation, and the recognition stage calculates geometric data of the clustered point cloud to identify obstacles;
1) Converting the raw point cloud data into a depth image, each pixel of such depth image storing a measured distance from the sensor to the object;
2) Filtering out ground point cloud data
(1) Traversing all the point cloud points in the point cloud set in the point cloud chart, and acquiring the road surface height of the driving road corresponding to each point cloud point;
(2) Determining the height coordinate of each point cloud point relative to the driving road based on the point cloud coordinate information of each point cloud point in the point cloud diagram;
(3) If the height coordinate of any point cloud point is less than or equal to the corresponding road surface height, removing the point from the point cloud set and reducing the data volume of the point cloud;
3) Clustering point cloud data
(1) Point cloud data clustering
Calculating Euclidean distance from the neighborhood point to the target point through a KD-Tree neighbor query algorithm, clustering according to the distance, and repeating the calculation process until all new points are calculated;
(2) Obstacle identification
For each cluster, obtaining a minimum-area rectangle surrounding the obstacle based on the minimum convex hull method, yielding a cuboid bounding box; extracting and classifying features of the box region to identify the target obstacle; the obstacle information comprises corner coordinates, type, and id;
third, vehicle status information acquisition
Vehicle information is obtained through the CAN bus, and the vehicle state information comprises steering wheel angle, battery level, speed, gear, accelerator opening, and brake opening;
Fourth, data communication
1) UDP protocol transmitted data
(1) Performing frame selection on the obstacles identified from the radar point cloud data; during data transmission, transmitting the corner coordinates of each obstacle's rectangular box and the obstacle category;
(2) Transmitting byte data via UDP protocol
Specifying every eight bytes as one piece of obstacle information or vehicle information; transmission occupies two ports, each carrying 1498 bytes of data; one port transmits vehicle state information, the other transmits radar data information, and the two ports transmit cyclically;
Fifth, data display interaction
1) The mobile terminal receives the obstacle data and the vehicle data;
2) Coordinate system conversion of obstacle information
The radar obstacle data takes the center point of the vehicle tail as the coordinate origin, in meters, forming the vehicle coordinate system; the mobile terminal display uses an image coordinate system, in which a control drawn in the middle of the interface displays the lane information, with the upper-left corner of the displayed lane as the coordinate origin, in pixels, so the vehicle coordinate system must be converted into the image coordinate system; in the vehicle coordinate system, the X axis points right and the Y axis points up from the origin; in the image coordinate system, the x axis points right and the y axis points down from the origin; the coordinate conversion formulas are
x’=w/2+X’*(w/m)
y’=h–Y’*(h/n)
wherein w is the pixel width of the displayed lane on the screen, h is the pixel height of the displayed lane, m is the full lateral span over which obstacles around the vehicle are displayed, n is the maximum longitudinal distance of displayed obstacles from the vehicle, X’ is the actual lateral distance of the obstacle from the vehicle, Y’ is the actual longitudinal distance of the obstacle from the vehicle, x’ is the computed lateral pixel position of the obstacle in the interface, and y’ is the computed longitudinal pixel position of the obstacle in the interface;
3) And developing a mobile terminal interface, and displaying vehicle information and obstacle information.
CN202211628850.7A 2022-12-18 Radar data interaction processing method based on mobile terminal Active CN115964446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211628850.7A CN115964446B (en) 2022-12-18 Radar data interaction processing method based on mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211628850.7A CN115964446B (en) 2022-12-18 Radar data interaction processing method based on mobile terminal

Publications (2)

Publication Number Publication Date
CN115964446A (en) 2023-04-14
CN115964446B (en) 2024-07-02


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114488073A (en) * 2022-02-14 2022-05-13 中国第一汽车股份有限公司 Method for processing point cloud data acquired by laser radar

Similar Documents

Publication Publication Date Title
EP4283515A1 (en) Detection method, system, and device based on fusion of image and point cloud information, and storage medium
CN108932736B (en) Two-dimensional laser radar point cloud data processing method and dynamic robot pose calibration method
US11734935B2 (en) Transferring synthetic lidar system data to real world domain for autonomous vehicle training applications
CN112894832A (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN109840448A (en) Information output method and device for automatic driving vehicle
CN111563450B (en) Data processing method, device, equipment and storage medium
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
Xu et al. Object detection based on fusion of sparse point cloud and image information
CN113569958A (en) Laser point cloud data clustering method, device, equipment and medium
CN115079143A (en) Multi-radar external parameter rapid calibration method and device for double-axle steering mine card
CN113160292B (en) Laser radar point cloud data three-dimensional modeling device and method based on intelligent mobile terminal
CN114092778A (en) Radar camera data fusion system and method based on characterization learning
CN115964446B (en) Radar data interaction processing method based on mobile terminal
US11348261B2 (en) Method for processing three-dimensional point cloud data
CN111638487B (en) Automatic parking test equipment and method
WO2021189420A1 (en) Data processing method and device
Li et al. Feature point extraction and tracking based on a local adaptive threshold
CN115267756A (en) Monocular real-time distance measurement method based on deep learning target detection
CN115964446A (en) Radar data interaction processing method based on mobile terminal
TWI843116B (en) Moving object detection method, device, electronic device and storage medium
JP7131612B2 (en) Object recognition device, object recognition system, and program
CN116051629B (en) Autonomous navigation robot-oriented high-precision visual positioning method
EP4279954A1 (en) Dual sensing method of object and computing apparatus for object sensing
KR102618951B1 (en) Method for visual mapping, and computer program recorded on record-medium for executing method therefor
WO2024040964A1 (en) Recognition model training method and apparatus, and movable intelligent device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant