CN114049767B - Edge computing method and device and readable storage medium - Google Patents

Edge computing method and device and readable storage medium

Info

Publication number
CN114049767B
CN114049767B
Authority
CN
China
Prior art keywords
target
data
targets
field
precision map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111328580.3A
Other languages
Chinese (zh)
Other versions
CN114049767A (en)
Inventor
刘鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202111328580.3A
Publication of CN114049767A
Application granted
Publication of CN114049767B
Legal status: Active

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 - Traffic data processing
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/09 - Arrangements for giving variable traffic instructions
    • G08G 1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G 1/096708 - Systems involving transmission of highway information where the received information might be used to generate an automatic action on the vehicle control
    • G08G 1/096725 - Systems involving transmission of highway information where the received information generates an automatic action on the vehicle control
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/09 - Arrangements for giving variable traffic instructions
    • G08G 1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G 1/0968 - Systems involving transmission of navigation instructions to the vehicle
    • G08G 1/0969 - Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 - Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides an edge computing method, an edge computing device, and a readable storage medium. The method first layers the perception data of traffic elements, then performs target perception computation, fuses the resulting targets to form a high-precision dynamic map of the local area, and finally generates cooperative control instructions for the traffic elements according to scene triggering. Accurate and reliable real-time dynamic data on traffic elements are acquired through roadside sensing and an edge computing unit, and cooperative control instructions for strong coordination and strong control of traffic elements are issued, realizing a "road controls vehicle" cooperative control function and ensuring the reliability and safety of multi-vehicle cooperative control in complex traffic scenes.

Description

Edge computing method and device and readable storage medium
Technical Field
The present disclosure relates to the field of edge computing technologies, and in particular, to an edge computing method, an edge computing device, and a readable storage medium.
Background
In recent years, automatic driving technology has developed rapidly and is well suited for popularization in closed traffic environments such as airports, parks, and ports. Replacing manned driving there can save considerable labor cost, make effective use of an autonomous vehicle's advantages in automatic sensing, real-time communication, and accurate positioning, and improve the safety and operating efficiency of traffic services. However, the single-vehicle intelligence mode currently adopted relies on the sensing capability and computing power of the autonomous vehicle itself, both of which are limited; in addition, the sight distance and viewing angle of on-board sensing equipment while the vehicle is moving create perception blind areas. The single-vehicle intelligence mode therefore suffers from low reliability and poor safety.
Vehicle-road cooperation builds on single-vehicle intelligent automatic driving: perception and detection equipment deployed along the road (such as cameras and radars) senses the road traffic environment in real time with high-precision positioning, while the roadside RSU and the on-board OBU exchange data, achieving varying degrees of information sharing among vehicles, between vehicles and roads, vehicles and networks, and vehicles and people (network interconnection). Safety warnings are sent to vehicles in real time, expanding each vehicle's field of view and improving safety. However, in multi-vehicle interaction scenes with heavy traffic flows, such as intersections, merges, and route entrances and exits, the safety-warning mode of vehicle-road cooperation can hardly achieve efficient cooperative passage of multiple vehicles.
Disclosure of Invention
The embodiments of the present application provide an edge computing method, an edge computing device, and a readable storage medium, which at least solve the problems in the related art of poor reliability and poor safety of multi-vehicle cooperative control in multi-vehicle interaction scenes.
A first aspect of an embodiment of the present application provides an edge computing method, including:
receiving vehicle perception data sent by on-board sensing equipment and road traffic situation perception data sent by road sensing equipment of the sub-area to which the vehicle belongs;
calculating target feature data based on the vehicle perception data and the road traffic situation perception data, the target feature data comprising the in-field targets of the sub-area and their corresponding feature values;
fusing the target feature data to obtain fused data, and generating high-precision map dynamic data corresponding to the sub-area based on the fused data;
generating cooperative control instructions for the in-field targets according to the high-precision map dynamic data and the job tasks of the in-field targets;
and sending the corresponding cooperative control instruction to each in-field target respectively.
A second aspect of an embodiment of the present application provides an edge computing device, including:
a receiving module, configured to receive vehicle perception data sent by on-board sensing equipment and road traffic situation perception data sent by road sensing equipment of the sub-area to which the vehicle belongs;
a calculating module, configured to calculate target feature data based on the vehicle perception data and the road traffic situation perception data, the target feature data comprising the in-field targets of the sub-area and their corresponding feature values;
a fusion module, configured to fuse the target feature data to obtain fused data and to generate high-precision map dynamic data corresponding to the sub-area based on the fused data;
a generating module, configured to generate cooperative control instructions for each in-field target according to the high-precision map dynamic data and the job tasks of the in-field targets;
and a sending module, configured to send the corresponding cooperative control instruction to each in-field target respectively.
A third aspect of an embodiment of the present application provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the edge computing method provided by the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements each step in the edge computing method provided in the first aspect of the embodiments of the present application.
As can be seen from the above, with the edge computing method, edge computing device, and readable storage medium provided by the present application, target perception computation is performed after the perception data of traffic elements are layered; the targets are then fused to form a high-precision dynamic map of the local area; finally, cooperative control instructions for the traffic elements are generated according to scene triggering. Accurate and reliable real-time dynamic data on traffic elements are acquired through roadside sensing and the edge computing unit, and cooperative control instructions for strong coordination and strong control of traffic elements are issued, realizing the "road controls vehicle" cooperative control function and ensuring the reliability and safety of multi-vehicle cooperative control in complex traffic scenes.
Drawings
FIG. 1 is a schematic diagram of an overall system architecture of an edge computing device according to a first embodiment of the present application;
fig. 2 is a flow chart of an edge computing method according to a first embodiment of the present application;
fig. 3 is a schematic diagram of the sensing period of the on-site sensing equipment for in-field targets according to the first embodiment of the present application;
FIG. 4 is a schematic diagram of a space-time analysis of the on-site sensing equipment sensing and sampling in-field target features according to the first embodiment of the present application;
FIG. 5 is a schematic diagram of a program module of an edge computing device according to a second embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In a complex traffic scene with heavy traffic flow and multiple intersections, it is necessary to construct a cooperative traffic system centered on the roadside intelligent system, operating in a strong-coordination, strong-control mode, covering vehicles with different degrees of automation (vehicle automation) and realizing the cooperative control function between vehicles and roads (traffic cooperation). The roadside sensing and edge computing unit acquires accurate and reliable real-time dynamic data on traffic elements and issues cooperative control instructions for strong coordination and strong control of traffic elements, thereby realizing the "road controls vehicle" cooperative control function.
Edge computing is the core unit of the roadside intelligent system. It carries a series of complex and fast roadside computing functions such as perception computation, data fusion, scene triggering, cooperative control, and instruction sending, and is the key equipment of the roadside intelligent system.
In order to solve the problem of poor reliability and safety of multi-vehicle cooperative control in multi-vehicle interaction scenes in the related art, a first embodiment of the present application provides an edge computing method applied to an edge computing device. Fig. 1 is a schematic diagram of the overall system in which the edge computing device of this embodiment resides. Different edge computing devices of this embodiment are responsible for different scene areas; the road sensing equipment and the roadside communication unit RSU are peripheral devices of the edge computing system formed by a plurality of edge computing devices, and the three together form the roadside intelligent system.
The road sensing equipment is intelligent sensing equipment fixed at the roadside. It may include various devices such as lidar, millimeter-wave radar, microwave radar, cameras, in-road (buried) vehicle detectors, the V2X roadside communication unit RSU, and the on-board communication unit OBU, or combinations such as a microwave-radar-plus-video or millimeter-wave-radar-plus-video all-in-one unit. The sensing equipment perceives the surrounding traffic situation: a camera generates a visual image or video stream using optical or thermal imaging; lidar, millimeter-wave radar, and similar devices perceive surrounding objects using the signal-reflection principle and generate point-cloud signals to sense the surrounding environment. Other data acquisition modes also exist, such as sensing vehicle passage through geomagnetism, or the RSU receiving carrier information broadcast by the OBU.
The in-field targets of this embodiment are traffic participants such as vehicles, pedestrians, bicycles, and obstacles; in practical applications they are mainly vehicles, including autonomous vehicles and manned vehicles. A vehicle is equipped with on-board sensing equipment, an on-board communication unit OBU, a controller, and so on; V2X vehicle-road cooperative communication takes place between the OBU and the RSU.
The central platform system is the global control system. It connects to multiple edge computing devices; the real-time high-precision map dynamic data formed by each edge computing device are aggregated to the higher-level central platform system, which computes high-precision map dynamic data over a larger range, such as an entire park, airport, or port, to serve as the basis for traffic scheduling and command over that range.
Fig. 2 is a flow chart of an edge computing method according to the present embodiment, where the edge computing method includes the following steps:
step 201, receiving vehicle sensing data sent by a vehicle sensing device and receiving road traffic situation sensing data sent by a road sensing device of a zone to which a vehicle belongs.
Specifically, in practical applications, the vehicle sends its own state (position, attitude, speed, angular velocity, acceleration, battery level, etc.) and its surrounding perception information to the RSU through the OBU, and the RSU forwards this information to the edge computing device. The on-site sensing equipment perceives road traffic situation information at the roadside and outputs it digitally to the edge computing device.
Fig. 3 is a schematic diagram of the sensing period of the on-site sensing equipment for in-field targets provided in this embodiment. It describes the time difference between the actual occurrence of a target and the series of processes in which the on-site sensing equipment (five kinds of sensors: microwave radar, millimeter-wave radar, lidar, camera, and RSU module) monitors the traffic elements on site and performs target feature extraction, identification, and tracking on the monitored signals.
Fig. 4 is a schematic diagram of the space-time analysis of the on-site sensing equipment sensing and sampling in-field target features provided in this embodiment. It describes the periodic analysis of monitoring, identifying, and tracking in-field targets by the on-site sensors. The detection period of millimeter-wave and microwave radar is about 20 ms; that of lidar is about 60 ms; the monitoring period of the camera is about 70 ms; and the RSU period is about 100 ms.
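To make this period mismatch concrete, the following sketch aligns per-sensor reports onto the edge device's fusion tick. The sensor periods follow the figures above; the function and structure names are illustrative assumptions, not part of the patent.

```python
SENSOR_PERIOD_MS = {"microwave_radar": 20, "mmwave_radar": 20,
                    "lidar": 60, "camera": 70, "rsu": 100}
FUSION_TICK_MS = 100  # the edge device publishes at 10 Hz

def latest_report_at(reports, t_ms):
    """Newest report with timestamp <= t_ms; reports are (ts_ms, payload), sorted by time."""
    chosen = None
    for ts, payload in reports:
        if ts <= t_ms:
            chosen = (ts, payload)
        else:
            break
    return chosen

def align(reports_by_sensor, t_ms):
    """Per sensor, pick the most recent report and record its age at fusion time."""
    aligned = {}
    for sensor, reports in reports_by_sensor.items():
        hit = latest_report_at(reports, t_ms)
        if hit is not None:
            ts, payload = hit
            aligned[sensor] = {"age_ms": t_ms - ts, "payload": payload}
    return aligned
```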
Step 202, calculating target feature data based on the vehicle perception data and the road traffic situation perception data.
Specifically, the target feature data in this embodiment include the in-field targets of the sub-area and their corresponding feature values. The edge computing device computes the in-field targets and their feature attribute vectors O_(i,j) from the scene perception data, where i denotes the i-th edge sensing device and j denotes the j-th in-field target detected by the i-th edge sensing device. Traffic elements and their feature attributes computed by different edge sensing devices carry different errors.
In some implementations of the present embodiment, the step of calculating the target feature data based on the vehicle perception data and the road traffic situation perception data includes: identifying in-field targets based on the vehicle perception data and the road traffic situation perception data to obtain a target set; transforming the coordinates of all in-field targets in the target set into the actual coordinates of the field; performing time synchronization processing on the target set based on the data sensing duration and the target identification duration, where the data sensing duration is the time the sensor needs to perceive the data and the target identification duration is the time needed to identify the in-field targets; simulating the motion trajectory of each in-field target based on Kalman filtering and constructing a corresponding first target trajectory function; and performing timed feature sampling on each in-field target according to the first target trajectory function to obtain the target feature data.
Specifically, for target recognition, because of the diversity of sensing devices, the format and content of the perceived information vary, and so does the method of target recognition applied to the acquired data. For image and video data, an algorithm such as YOLO-tiny can be used for target recognition; for point-cloud data output by lidar and similar radars, clustering algorithms can be used; the vehicle time-series information received by the RSU can be classified and recognized with the help of Kalman filtering. Different edge computing devices may employ different target recognition algorithms. After recognition, the identified in-field targets form a target set {O_(i,t)}, where i denotes the i-th in-field target identified at time t. In addition, through continuous detection by the sensing equipment, the edge computing device obtains continuous monitoring values of each in-field target, and then performs target classification, ranging, positioning, and similar processing on them to obtain an observation set {(O_(i,1), O_(i,2), …, O_(i,t))}. The observations of each identified target i are then denoised and filtered, after which a Kalman filter is used to simulate, track, and predict the trajectory of target i.
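As a rough illustration of this per-modality dispatch, the sketch below routes each sensor's frame to a recognizer of the kind named above. The recognizer bodies are stand-in stubs, since the text does not specify any concrete detector API.

```python
def detect_yolo_tiny(image):
    # Stand-in for a YOLO-tiny style detector over image/video frames.
    return [{"cls": "car", "conf": 0.9, "bbox": (120, 40, 180, 90)}]

def cluster_point_cloud(points):
    # Stand-in for a clustering pass over lidar/radar point clouds.
    return [{"cls": "obstacle", "centroid": (12.3, -4.1)}]

def classify_obu_series(series):
    # Stand-in for Kalman-aided classification of OBU time-series data.
    return [{"cls": "vehicle", "obu_id": series.get("obu_id")}]

def recognize(sensor_type, frame):
    """Dispatch target recognition by sensor modality."""
    if sensor_type == "camera":
        return detect_yolo_tiny(frame)
    if sensor_type in ("lidar", "mmwave_radar", "microwave_radar"):
        return cluster_point_cloud(frame)
    if sensor_type == "rsu":
        return classify_obu_series(frame)
    raise ValueError(f"unknown sensor type: {sensor_type}")
```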
In this embodiment, since features such as the distance and position of all identified targets are computed relative to the sensor, the target coordinates must be transformed into the actual coordinates of the field.
In addition, the time from the moment the sensor perceives an in-field target (sensor imaging) to the moment it is recognized is denoted Δt. If the features (position, speed, etc.) of the in-field target O_(i,t) have changed during Δt, time synchronization is required to eliminate the error, where

Δt = Δt1 + Δt2,

Δt1 is the time the sensor needs to sample the data and produce output, and Δt2 is the time needed to identify the target from the output data and compute its features.
Further, based on Kalman filtering, the target O_i is simulated and a target trajectory function T_i(t) is constructed; T_i(t+dt) is then the predicted feature value of target O_i after the current moment. The motion trajectories of all identified targets are simulated in this way.
Finally, timed feature sampling is performed on all identified targets: fixed-point sampling according to the target trajectory function T_i(t) yields a series of timed samples of target O_i.
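A minimal sketch of these steps follows, with a constant-velocity Kalman filter standing in for the filter (whose form the text leaves unspecified): the track is updated at measurement time, the Δt = Δt1 + Δt2 latency is compensated by forward prediction, and T_i(t) is sampled at fixed points. The matrices and noise values are illustrative assumptions.

```python
import numpy as np

class CVTrack:
    """Constant-velocity track for one in-field target O_i; state is [x, y, vx, vy]."""
    def __init__(self, x0, y0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4)
        self.H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])
        self.R = np.eye(2) * 0.5    # measurement noise (illustrative)
        self.Q = np.eye(4) * 0.01   # process noise (illustrative)

    @staticmethod
    def F(dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        return F

    def predict(self, dt):
        """Advance the track by dt; repeated calls realize T_i(t + dt)."""
        F = self.F(dt)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x.copy()

    def update(self, z):
        """Standard Kalman update with a position measurement z = (x, y)."""
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def synchronized_state(self, latency):
        """Time synchronization: the recognized features describe the target
        Δt = Δt1 + Δt2 ago, so propagate the posterior forward by that latency."""
        return self.F(latency) @ self.x

def timed_samples(track, step, n):
    """Fixed-point sampling of the simulated trajectory T_i(t)."""
    return [track.predict(step) for _ in range(n)]
```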
Step 203, fusing the target feature data to obtain fused data, and generating high-precision map dynamic data corresponding to the sub-area based on the fused data.
In some implementations of this embodiment, the step of fusing the target feature data to obtain the fused data includes: grouping and associating the target feature data from different sensor types to obtain new target feature data; calibrating the identity information of the grouped targets in the new target feature data; simulating the motion trajectories of the grouped targets based on Kalman filtering and constructing a corresponding second target trajectory function; predicting the motion trajectories of the grouped targets based on the second target trajectory function; and performing timed feature sampling on the grouped simulated trajectories according to the second target trajectory function to obtain the fused data.
Specifically, in this embodiment the perception information of different sensors (such as the RSU, camera, lidar, and millimeter-wave radar) is processed separately for target identification, and target features are extracted at fixed times; the purpose is to handle the time synchronization of target identification and feature sampling among the different sensors. Let O_(i,j,t) denote the target identified by the i-th MEC front-end processor together with its feature sample set, where j indexes the j-th identified target and t denotes sampling of the target's features at time t. In practical application, cluster analysis is performed on the target feature data O_(i,j,t) of the different sensors, grouping the samples that belong to the same physical target; a new target set is thus formed, denoted {O'_(j,t)}.
Then, identity calibration is performed on the target feature data formed after the regrouping: the identity of each in-field target is determined and an ID is assigned.
Further, trajectory modeling is performed for the grouped targets: based on the time-series feature sets of the in-field targets O'_(j,t), Kalman filtering and fitting are applied again to the in-field target trajectories to construct a trajectory function T'_j(t).
Still further, trajectory prediction is performed for the grouped targets: the trajectory function T'_j(t) constructed in the previous step is used to predict the motion of each target O'_(j,t).
Finally, timed sampling is performed on the grouped simulated trajectories: fixed-point prediction sampling according to the grouped trajectory function yields the fused feature samples of each target O'_(j,t).
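The grouping step can be pictured with the sketch below, where a simple greedy distance gate stands in for the clustering used in the original (whose objective function is given only as an image in the source): samples from different sensors that fall within the gate at the same aligned timestamp are merged into one fused target O'_(j,t) and given an ID. The gate value and field names are assumptions.

```python
import itertools
import math

GATE_M = 2.0  # association gate in metres (illustrative assumption)

def dist(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def group_samples(samples):
    """samples: per-sensor dicts {sensor, x, y, v} taken at one aligned timestamp."""
    groups = []
    for s in samples:
        for g in groups:
            if all(dist(s, member) < GATE_M for member in g):
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

_ids = itertools.count(1)

def fuse_group(group):
    """Average the grouped samples into one fused target and assign an ID."""
    n = len(group)
    return {"id": next(_ids),
            "x": sum(m["x"] for m in group) / n,
            "y": sum(m["y"] for m in group) / n,
            "v": sum(m["v"] for m in group) / n,
            "sensors": sorted({m["sensor"] for m in group})}

# Usage: fused = [fuse_group(g) for g in group_samples(samples_at_t)]
```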
In other implementations of this embodiment, generating the high-precision map dynamic data corresponding to the sub-area based on the fused data includes: acquiring the target type of each in-field target based on its identity information; determining the corresponding target attribute values according to the target type; constructing the geometry of the in-field target in the current planar space based on the target attribute values; spatially transforming the geometry into a planar spatial layer referenced to the scene digital map; and fusing the planar spatial layer with the scene base layer and labeling the layer elements to generate the high-precision map dynamic data corresponding to the sub-area.
Specifically, in this embodiment the attribute values of a target's type can be determined from the traffic knowledge base based on the target type; a traffic element of type "car", for example, carries a series of attribute values such as the car's dimensions. Then coordinate transformation of the target area is performed: the geometry G_t of the traffic target in the current planar space is constructed from the attribute values computed in the previous step, such as size and position. Further, the target area is projected: the geometry G_t is spatially transformed into a planar spatial layer L_t referenced to the scene digital map. Finally, the layer L_t generated in the previous step is fused with the scene base layer L_0, the layer elements are labeled, and the vector high-precision map M_t is constructed.
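A sketch of this layer construction follows, under the assumption of a small key-value traffic knowledge base and a translation-only transform into the map frame (a real deployment would apply full calibration); all names and values are illustrative.

```python
KNOWLEDGE_BASE = {"car": {"length": 4.5, "width": 1.8},
                  "pedestrian": {"length": 0.5, "width": 0.5}}

def footprint(target):
    """Axis-aligned footprint G_t of a target in the local sensing plane."""
    attrs = KNOWLEDGE_BASE[target["type"]]
    half_l, half_w = attrs["length"] / 2, attrs["width"] / 2
    x, y = target["x"], target["y"]
    return [(x - half_l, y - half_w), (x + half_l, y - half_w),
            (x + half_l, y + half_w), (x - half_l, y + half_w)]

def to_map_frame(polygon, dx, dy):
    """Planar transform into the scene digital map frame (translation only
    here; a real deployment would also rotate and scale per calibration)."""
    return [(px + dx, py + dy) for px, py in polygon]

def dynamic_layer(targets, dx, dy):
    """Build the dynamic layer L_t from the fused, identity-calibrated targets."""
    return [{"id": t["id"], "type": t["type"],
             "geometry": to_map_frame(footprint(t), dx, dy)} for t in targets]

def build_map(base_layer, layer_t):
    """High-precision dynamic map M_t = static base layer L_0 plus dynamic L_t."""
    return {"L0": base_layer, "Lt": layer_t}
```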
Step 204, generating cooperative control instructions for each in-field target according to the high-precision map dynamic data and the job tasks of the in-field targets.
Specifically, according to the high-precision map dynamic data and the job tasks of the in-field targets of the sub-area, the edge computing device of this embodiment generates a cooperative control instruction for each in-field target according to the scene-triggering principle; the instruction period may be 0.1 s, so that the action behavior of each traffic element is cooperatively controlled. The job task list distributed to the edge computing device by the central platform system is denoted {T_i}, where i denotes the i-th in-field target. The cooperative control instruction set generated by the edge computing device is denoted {C_(i,j)}, where i denotes the i-th in-field target and j the j-th time segment for that traffic element, each time segment being 0.1 s.
In practical application, the edge computing devices of different sub-areas send their high-precision map dynamic data to the central platform system and then receive the job tasks planned by the central platform system according to the global high-precision map dynamic data, where the global high-precision map dynamic data are obtained by fusing the high-precision map dynamic data corresponding to the different sub-areas.
In some implementations of the present embodiment, the step of generating the cooperative control instructions for each in-field target according to the high-precision map dynamic data and the job tasks of the in-field targets includes: simulating the motion trajectories of the in-field targets according to the fused data and constructing a corresponding third target trajectory function; generating target motion planning data by combining the third target trajectory function, the high-precision map dynamic data, and the job tasks of the in-field targets; judging, based on the target motion planning data, whether trajectory conflicts exist between different in-field targets; if a trajectory conflict exists, returning to the step of generating target motion planning data by combining the third target trajectory function, the high-precision map dynamic data, and the job tasks of the in-field targets; and if no trajectory conflict exists, generating the cooperative control instructions for each in-field target according to the target motion planning data.
Specifically, this embodiment first performs motion trajectory simulation on the in-field targets from the fused target feature set, generating a target trajectory function T''_i(t) used to predict the trajectory of each in-field target. Then the target trajectory function T''_i(t), the scene high-precision map M_t, and the task path P_i(t) already planned for the target are combined to generate a new target motion plan Plan_i(t); the set of target motion plans of the in-field targets at time t is denoted {Plan_(i,t)}. Further, each pair of plans Plan_(i,t) and Plan_(k,t) is checked one by one in time and space for space-time conflicts, i.e., whether the planning is reasonable and needs readjustment; the judging function examines, over a horizon of length T0 (the maximum duration of trajectory planning checked), whether the two planned trajectories overlap in space-time.
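One hedged reading of this judging function follows; the source gives the formula only as an image, so the distance-threshold form and the safety distance d_safe below are assumptions: two plans conflict if, at any common 0.1 s tick within the horizon T0, they come closer than d_safe.

```python
import math

def has_conflict(plan_a, plan_b, t0_steps, d_safe=3.0):
    """plan_x: [(x, y), ...] sampled every 0.1 s; horizon T0 = t0_steps ticks."""
    for k in range(min(t0_steps, len(plan_a), len(plan_b))):
        ax, ay = plan_a[k]
        bx, by = plan_b[k]
        if math.hypot(ax - bx, ay - by) < d_safe:
            return True   # the two plans occupy nearby space at the same time
    return False
```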
It should be noted that, if Plan_(i,t) and Plan_(k,t) have a space-time conflict, the plans of the corresponding in-field targets are readjusted according to their task weights or priorities, until the trajectories of all in-field targets satisfy the no-conflict condition, finally yielding the path plans {Plan_(i,t)}. Finally, from the planned {Plan_(i,t)} of the previous step, a target control command C_(i,t) is output for each target; all target commands constitute the target control instruction set {C_(i,t)}.
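The readjust-until-conflict-free loop and the command emission can be sketched as follows; the priority handling (delay the lower-priority plan by one tick), the round bound, and the command fields are all illustrative assumptions rather than the patent's prescribed strategy.

```python
import math

def conflict(pa, pb, d_safe=3.0):
    return any(math.hypot(a[0] - b[0], a[1] - b[1]) < d_safe for a, b in zip(pa, pb))

def resolve(plans, priority, max_rounds=50):
    """plans: {target_id: [(x, y), ...]} sampled at 0.1 s ticks."""
    for _ in range(max_rounds):
        changed = False
        ids = sorted(plans, key=lambda i: -priority[i])  # high priority first
        for a_idx in range(len(ids)):
            for b_idx in range(a_idx + 1, len(ids)):
                a, b = ids[a_idx], ids[b_idx]
                if conflict(plans[a], plans[b]):
                    # Delay the lower-priority target by one tick (hold in place).
                    plans[b] = [plans[b][0]] + plans[b][:-1]
                    changed = True
        if not changed:
            break
    return plans

def commands(plans, t):
    """Emit C_(i,t): the next planned waypoint of each in-field target at tick t."""
    return {i: {"t": t, "goto": plan[0]} for i, plan in plans.items()}
```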
Step 205, sending the corresponding cooperative control instruction to each in-field target respectively.
Specifically, in this embodiment the edge computing device issues the cooperative control instruction to the in-field target through the RSU. The cooperative control instruction may be encapsulated in an information body according to the RSU interface protocol, and the information body may further include high-precision map dynamic data, job tasks, and the like.
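For the downlink encapsulation, a minimal sketch of an information body is given below; the field names and the JSON encoding are assumptions, since the text does not specify the RSU interface protocol.

```python
import json
import time

def build_message(target_id, command, map_patch=None, task=None):
    """Assemble the information body for one in-field target's downlink."""
    body = {"target": target_id,
            "timestamp": time.time(),
            "command": command}              # the cooperative control instruction
    if map_patch is not None:
        body["hd_map_dynamic"] = map_patch   # optional high-precision map dynamic data
    if task is not None:
        body["job_task"] = task              # optional job task
    return json.dumps(body)
```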
In addition, it should be noted that in this embodiment the road sensing equipment may further monitor the instruction execution effect of the in-field targets in real time; the edge computing device performs perception computation again on the execution effect, updates the high-precision map dynamic data, and repeats the precise control, forming a closed loop. After the job tasks of all traffic elements in the venue are completed, the system enters a servo waiting state until a new traffic element enters the venue and activates it. The sensing equipment involved in this embodiment is diverse and monitors the whole field without blind spots; positioning accuracy can reach the sub-meter level, the data refresh rate is 10 Hz (10 times per second), and for some vehicles positioning accuracy can reach the centimeter level.
With the technical solution of the embodiments of the present application, target perception computation is performed after the perception data of traffic elements are layered; the targets are then fused to form a high-precision dynamic map of the local area; finally, cooperative control instructions for the traffic elements are generated according to scene triggering. Accurate and reliable real-time dynamic data on traffic elements are acquired through roadside sensing and the edge computing unit, and cooperative control instructions for strong coordination and strong control of traffic elements are issued, realizing the "road controls vehicle" cooperative control function and ensuring the reliability and safety of multi-vehicle cooperative control in complex traffic scenes.
Fig. 5 is a schematic diagram of an edge computing device according to a second embodiment of the present application. The edge computing device can be used to implement the edge computing method of the foregoing embodiment. As shown in fig. 5, the edge computing device mainly includes:
the receiving module 501, configured to receive vehicle perception data sent by the on-board sensing equipment and road traffic situation perception data sent by the road sensing equipment of the sub-area to which the vehicle belongs;
the calculating module 502, configured to calculate target feature data based on the vehicle perception data and the road traffic situation perception data; the target feature data comprise the in-field targets of the sub-area and their corresponding feature values;
the fusion module 503, configured to fuse the target feature data to obtain fused data and to generate high-precision map dynamic data corresponding to the sub-area based on the fused data;
the generating module 504, configured to generate cooperative control instructions for each in-field target according to the high-precision map dynamic data and the job tasks of the in-field targets;
and the sending module 505, configured to send the corresponding cooperative control instruction to each in-field target respectively.
In some implementations of this embodiment, the calculating module is specifically configured to: identify in-field targets based on the vehicle perception data and the road traffic situation perception data to obtain a target set; transform the coordinates of all in-field targets in the target set into the actual coordinates of the field; perform time synchronization processing on the target set based on the data sensing duration and the target identification duration, where the data sensing duration is the time the sensor needs to perceive the data and the target identification duration is the time needed to identify the in-field targets; simulate the motion trajectory of each in-field target based on Kalman filtering and construct a corresponding first target trajectory function; and perform timed feature sampling on each in-field target according to the first target trajectory function to obtain the target feature data.
In some implementations of this embodiment, when performing the function of fusing the target feature data to obtain the fused data, the fusion module is specifically configured to: group and associate the target feature data from different sensor types to obtain new target feature data; calibrate the identity information of the grouped targets in the new target feature data; simulate the motion trajectories of the grouped targets based on Kalman filtering and construct a corresponding second target trajectory function; predict the motion trajectories of the grouped targets based on the second target trajectory function; and perform timed feature sampling on the grouped simulated trajectories according to the second target trajectory function to obtain the fused data.
Further, in some implementations of this embodiment, when performing the function of generating the high-precision map dynamic data corresponding to the sub-area based on the fused data, the fusion module is specifically configured to: acquire the target type of each in-field target based on its identity information; determine the corresponding target attribute values according to the target type; construct the geometry of the in-field target in the current planar space based on the target attribute values; spatially transform the geometry into a planar spatial layer referenced to the scene digital map; and fuse the planar spatial layer with the scene base layer and label the layer elements to generate the high-precision map dynamic data corresponding to the sub-area.
Further, in some implementations of this embodiment, the generating module is specifically configured to: simulate the motion trajectories of the in-field targets according to the fused data and construct a corresponding third target trajectory function; generate target motion planning data by combining the third target trajectory function, the high-precision map dynamic data, and the job tasks of the in-field targets; judge, based on the target motion planning data, whether trajectory conflicts exist between different in-field targets; if a trajectory conflict exists, return to the step of generating target motion planning data by combining the third target trajectory function, the high-precision map dynamic data, and the job tasks of the in-field targets; and if no trajectory conflict exists, generate cooperative control instructions for each in-field target according to the target motion planning data.
In some implementations of this embodiment, the sending module is further configured to send the high-precision map dynamic data to the central platform system, and the receiving module is further configured to receive the job tasks planned by the central platform system according to the global high-precision map dynamic data; the global high-precision map dynamic data are obtained by fusing the high-precision map dynamic data corresponding to different sub-areas.
It should be noted that the edge computing method of the first embodiment may be implemented based on the edge computing device provided in this embodiment. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the edge computing device described in this embodiment may refer to the corresponding process in the foregoing method embodiment and is not repeated here.
With the edge computing device provided by this embodiment, target perception computation is performed after the perception data of traffic elements are layered; the targets are then fused to form a high-precision dynamic map of the local area; finally, cooperative control instructions for the traffic elements are generated according to scene triggering. Accurate and reliable real-time dynamic data on traffic elements are acquired through roadside sensing and the edge computing unit, and cooperative control instructions for strong coordination and strong control of traffic elements are issued, realizing the "road controls vehicle" cooperative control function and ensuring the reliability and safety of multi-vehicle cooperative control in complex traffic scenes.
Referring to fig. 6, fig. 6 shows an electronic device according to a third embodiment of the present application. The electronic device can be used to implement the edge computing method of the foregoing embodiments. As shown in fig. 6, the electronic device mainly includes:
memory 601, processor 602, bus 603, and a computer program stored on memory 601 and executable on processor 602, the memory 601 and processor 602 being connected by bus 603. The processor 602, when executing the computer program, implements the edge calculation method in the foregoing embodiment. Wherein the number of processors may be one or more.
The memory 601 may be a high-speed random access memory (RAM) or a non-volatile memory, such as a disk memory. The memory 601 is used to store executable program code, and the processor 602 is coupled to the memory 601.
Further, an embodiment of the present application also provides a computer-readable storage medium, which may be provided in the electronic device of the foregoing embodiments; the computer-readable storage medium may be the memory in the embodiment shown in fig. 6.
The computer-readable storage medium stores a computer program which, when executed by a processor, implements the edge computing method of the foregoing embodiments. Further, the computer-readable medium may be any medium capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules is merely a logical function division, and there may be additional divisions of actual implementation, e.g., multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a readable storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned readable storage medium includes: a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or other media capable of storing program code.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in another order or simultaneously according to the present application. Further, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The foregoing describes the edge computing method, device, and readable storage medium provided by the present application; this description should not be construed as limiting the scope of the embodiments or applications of the present application.

Claims (8)

1. An edge computing method, comprising:
receiving vehicle perception data sent by on-board sensing equipment and road traffic situation perception data sent by road sensing equipment of the sub-area to which the vehicle belongs;
calculating target feature data based on the vehicle perception data and the road traffic situation perception data, the target feature data comprising the in-field targets of the sub-area and their corresponding feature values;
grouping and associating the target feature data corresponding to different sensor types to obtain new target feature data after combination of the grouped targets;
calibrating the identity information of the grouped targets in the new target feature data;
simulating the motion trajectories of the grouped targets based on Kalman filtering and constructing a corresponding second target trajectory function;
predicting, according to the new target feature data, the motion trajectories of the grouped targets based on the second target trajectory function;
performing timed feature sampling on the grouped simulated trajectories according to the second target trajectory function to obtain fused data, and generating high-precision map dynamic data corresponding to the sub-area based on the fused data;
simulating the motion trajectories of the in-field targets according to the fused data and constructing a corresponding third target trajectory function;
generating target motion planning data by combining the third target trajectory function, the high-precision map dynamic data, and the job tasks of the in-field targets;
judging, based on the target motion planning data, whether trajectory conflicts exist between different in-field targets;
if a trajectory conflict exists, returning to the step of generating target motion planning data by combining the third target trajectory function, the high-precision map dynamic data, and the job tasks of the in-field targets;
if no trajectory conflict exists, generating cooperative control instructions for the in-field targets according to the target motion planning data;
and sending the corresponding cooperative control instruction to each in-field target respectively.
2. The edge computing method according to claim 1, wherein the step of calculating target feature data based on the vehicle perception data and the road traffic situation perception data comprises:
identifying in-field targets based on the vehicle perception data and the road traffic situation perception data to obtain a target set;
transforming the coordinates of all in-field targets in the target set into the actual coordinates of the field;
performing time synchronization processing on the target set based on the data sensing duration and the target identification duration, the data sensing duration being the time a sensor needs to perceive data and the target identification duration being the time needed to identify the in-field targets;
simulating the motion trajectory of each in-field target based on Kalman filtering and constructing a corresponding first target trajectory function;
and performing timed feature sampling on each in-field target according to the first target trajectory function to obtain the target feature data.
3. The edge computing method according to claim 1, wherein the step of generating the high-precision map dynamic data corresponding to the sub-area based on the fused data comprises:
acquiring the target type of each in-field target based on the identity information;
determining corresponding target attribute values according to the target type;
constructing the geometry of the in-field target in the current planar space based on the target attribute values;
spatially transforming the geometry into a planar spatial layer referenced to the scene digital map;
and fusing the planar spatial layer with the scene base layer and labeling the layer elements to generate the high-precision map dynamic data corresponding to the sub-area.
4. The method according to any one of claims 1 to 3, wherein before the step of generating the cooperative control instructions according to the high-precision map dynamic data and the job tasks of the in-field targets, the method further comprises:
sending the high-precision map dynamic data to a central platform system;
and receiving the job tasks planned by the central platform system according to global high-precision map dynamic data, the global high-precision map dynamic data being obtained by fusing the high-precision map dynamic data corresponding to different sub-areas.
5. An edge computing device, comprising:
a receiving module, configured to receive vehicle perception data sent by on-board sensing equipment and road traffic situation perception data sent by road sensing equipment of the sub-area to which the vehicle belongs;
a calculating module, configured to calculate target feature data based on the vehicle perception data and the road traffic situation perception data, the target feature data comprising the in-field targets of the sub-area and their corresponding feature values;
a fusion module, configured to: group and associate the target feature data corresponding to different sensor types to obtain new target feature data after combination of the grouped targets; calibrate the identity information of the grouped targets in the new target feature data; simulate the motion trajectories of the grouped targets based on Kalman filtering and construct a corresponding second target trajectory function; predict, according to the new target feature data, the motion trajectories of the grouped targets based on the second target trajectory function; and perform timed feature sampling on the grouped simulated trajectories according to the second target trajectory function to obtain fused data, and generate high-precision map dynamic data corresponding to the sub-area based on the fused data;
a generating module, configured to: simulate the motion trajectories of the in-field targets according to the fused data and construct a corresponding third target trajectory function; generate target motion planning data by combining the third target trajectory function, the high-precision map dynamic data, and the job tasks of the in-field targets; judge, based on the target motion planning data, whether trajectory conflicts exist between different in-field targets; if a trajectory conflict exists, return to the step of generating target motion planning data by combining the third target trajectory function, the high-precision map dynamic data, and the job tasks of the in-field targets; and if no trajectory conflict exists, generate cooperative control instructions for the in-field targets according to the target motion planning data;
and a sending module, configured to send the corresponding cooperative control instruction to each in-field target respectively.
6. The edge computing device of claim 5, wherein the sending module is further configured to send the high-precision map dynamic data to a central platform system, and the receiving module is further configured to receive the job tasks planned by the central platform system according to global high-precision map dynamic data; the global high-precision map dynamic data are obtained by fusing the high-precision map dynamic data corresponding to different sub-areas.
7. An electronic device, comprising: a processor, a memory, and a bus;
the bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the edge computing method according to any one of claims 1 to 4.
8. A computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of the edge computing method of any of claims 1-4.
CN202111328580.3A 2021-11-10 2021-11-10 Edge computing method and device and readable storage medium Active CN114049767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111328580.3A CN114049767B (en) 2021-11-10 2021-11-10 Edge computing method and device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111328580.3A CN114049767B (en) 2021-11-10 2021-11-10 Edge computing method and device and readable storage medium

Publications (2)

Publication Number Publication Date
CN114049767A CN114049767A (en) 2022-02-15
CN114049767B true CN114049767B (en) 2023-05-12

Family

ID=80208144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111328580.3A Active CN114049767B (en) 2021-11-10 2021-11-10 Edge computing method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN114049767B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615241A (en) * 2022-03-03 2022-06-10 智道网联科技(北京)有限公司 Dynamic road network display method based on high-precision map and related equipment
CN117275232B (en) * 2023-09-28 2024-05-31 广东省电信规划设计院有限公司 Dynamic sensing method and device based on vehicle-road cooperation

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326719A (en) * 2020-02-28 2021-08-31 华为技术有限公司 Method, equipment and system for target tracking

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022450B (en) * 2017-10-31 2020-07-21 华为技术有限公司 Auxiliary driving method based on cellular network and traffic control unit
CN108919803A (en) * 2018-07-04 2018-11-30 北京踏歌智行科技有限公司 A kind of cooperative control method and device of mining automatic driving vehicle
US11334090B2 (en) * 2019-02-13 2022-05-17 GM Global Technology Operations LLC Method and system for determining autonomous vehicle (AV) action based on vehicle and edge sensor data
CN111601266B (en) * 2020-03-31 2022-11-22 浙江吉利汽车研究院有限公司 Cooperative control method and system
CN111554088B (en) * 2020-04-13 2022-03-22 重庆邮电大学 Multifunctional V2X intelligent roadside base station system
CN111818189B (en) * 2020-09-09 2020-12-25 浙江吉利控股集团有限公司 Vehicle road cooperative control system, method and medium
CN112289059A (en) * 2020-10-22 2021-01-29 中电智能技术南京有限公司 Vehicle-road cooperative road traffic system
CN112562314B (en) * 2020-11-02 2022-06-24 福瑞泰克智能系统有限公司 Road end sensing method and device based on deep fusion, road end equipment and system
CN112435504B (en) * 2020-11-11 2022-07-08 清华大学 Centralized collaborative track planning method and device under vehicle-road collaborative environment
CN112906777A (en) * 2021-02-05 2021-06-04 北京邮电大学 Target detection method and device, electronic equipment and storage medium
CN113485319A (en) * 2021-06-08 2021-10-08 中兴智能汽车有限公司 Automatic driving system based on 5G vehicle-road cooperation
CN113581211B (en) * 2021-08-30 2022-11-29 深圳清航智行科技有限公司 Vehicle driving control method, system and device and readable storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326719A (en) * 2020-02-28 2021-08-31 华为技术有限公司 Method, equipment and system for target tracking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Review of edge computing resource allocation and task scheduling optimization; 王凌 et al.; 系统仿真学报 (Journal of System Simulation); Vol. 33, No. 3; pp. 509-520 *

Also Published As

Publication number Publication date
CN114049767A (en) 2022-02-15

Similar Documents

Publication Publication Date Title
US11593950B2 (en) System and method for movement detection
KR102434580B1 (en) Method and apparatus of dispalying virtual route
US10310087B2 (en) Range-view LIDAR-based object detection
Suhr et al. Sensor fusion-based low-cost vehicle localization system for complex urban environments
US11874119B2 (en) Traffic boundary mapping
US11693409B2 (en) Systems and methods for a scenario tagger for autonomous vehicles
CN112700470B (en) Target detection and track extraction method based on traffic video stream
US20190310651A1 (en) Object Detection and Determination of Motion Information Using Curve-Fitting in Autonomous Vehicle Applications
US20180349746A1 (en) Top-View Lidar-Based Object Detection
US10369993B2 (en) Method and device for monitoring a setpoint trajectory to be traveled by a vehicle for being collision free
CN114049767B (en) Edge computing method and device and readable storage medium
Rawashdeh et al. Collaborative automated driving: A machine learning-based method to enhance the accuracy of shared information
CN111983936B (en) Unmanned aerial vehicle semi-physical simulation system and evaluation method
US20220194412A1 (en) Validating Vehicle Sensor Calibration
Wang et al. Realtime wide-area vehicle trajectory tracking using millimeter-wave radar sensors and the open TJRD TS dataset
US11754415B2 (en) Sensor localization from external source data
US20190235506A1 (en) Methods, Devices, and Systems For Analyzing Motion Plans of Autonomous Vehicles
Ma et al. Left-turn conflict identification at signal intersections based on vehicle trajectory reconstruction under real-time communication conditions
Bai et al. Cyber mobility mirror: A deep learning-based real-world object perception platform using roadside LiDAR
US20230103178A1 (en) Systems and methods for onboard analysis of sensor data for sensor fusion
US20220198714A1 (en) Camera to camera calibration
CN115471526A (en) Automatic driving target detection and tracking method based on multi-source heterogeneous information fusion
CN116229703A (en) Method, system and storage medium for detecting traffic signals
CN113771845A (en) Method, device, vehicle and storage medium for predicting vehicle track
Duan et al. Multi-sensor fusion detection method for vehicle target based on kalman filter and data association filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant