CN117917881A - Traffic scheduling method and device and vehicle

Info

Publication number: CN117917881A
Application number: CN202211296669.0A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventors: 余振波, 徐艳琴, 马松君
Assignee: Huawei Technologies Co Ltd
Prior art keywords: queue, data stream, data, bandwidth, traffic
Classification: Mobile Radio Communication Systems
Abstract

The application discloses a traffic scheduling method, a traffic scheduling device and a vehicle. The traffic scheduling method can be used in an autonomous vehicle and comprises the following steps: a newly generated first data stream is acquired and the type of the first data stream is identified. If the type of the first data stream is determined to be burst traffic, queue allocation information corresponding to a queue allocation request is acquired, wherein the queue allocation information is determined based on the idle bandwidth of at least one queue in the gating list, the idle bandwidth corresponds to the total bandwidth available to each queue in the gating list, and the queue allocation information indicates that a first queue in the at least one queue is allocated to the first data stream. After the queue allocation information is obtained, the first data stream and the queue allocation information are sent to a time-sensitive network (TSN) protocol stack. In this way the bandwidth resources of the Ethernet are fully utilized, the waste of bandwidth resources is reduced, and the utilization efficiency of the bandwidth is improved.

Description

Traffic scheduling method and device and vehicle
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a traffic scheduling method and apparatus, and a vehicle.
Background
With the rapid development of intelligent driving technologies such as automated driving and connected vehicles, more and more functions are carried on vehicles, and the data transmission requirements of vehicle-mounted networks keep growing. To meet these increasing data transmission demands, it is important to reduce the bandwidth waste of the vehicle-mounted network.
In an autonomous vehicle, the data distribution service (Data Distribution Service, DDS) is a technical specification for distributed real-time communication middleware. DDS defines a data-centric publish/subscribe model, provides a cross-platform middleware framework, and defines a rich set of QoS (Quality of Service) policies. It solves the problem of efficient, real-time data distribution and provides a unified standard for the interfaces and behaviors of data publication, transmission and reception in a real-time system.
Time-sensitive networking (Time-Sensitive Networking, TSN) is a set of protocol standards that define time-sensitive mechanisms for Ethernet data transmission, adding determinism and reliability to standard Ethernet to ensure real-time, deterministic and reliable data transmission. In the TSN, in order to transmit a data stream accurately and thereby provide a delay guarantee for deterministic service flows, time-aware scheduling (TAS) is used to implement traffic scheduling. TAS decides which queue is selected for transmission through transmission gates and a gate control list (Gate Control List, GCL). The GCL maintains the gate state and the time interval of each transmission gate, so that the TSN switch controls the opening and closing of each of its transmission gates according to the GCL, thereby realizing transmission control of the data stream.
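For illustration, the gate control list can be pictured as a cyclic table of gate-state bitmaps and time intervals. The following is a minimal Python sketch of such a table; the field names and the eight-queue assumption follow common 802.1Qbv practice but are illustrative and are not the exact GCL format used by the embodiment.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GclEntry:
    # Bitmask over the eight traffic-class queues: bit i == 1 means the
    # transmission gate of queue i is open during this entry.
    gate_states: int
    # Duration of this entry in nanoseconds.
    interval_ns: int

def open_queues(gcl: List[GclEntry], t_ns: int) -> List[int]:
    """Return the queues whose gates are open at time t_ns within one cycle."""
    cycle_ns = sum(e.interval_ns for e in gcl)
    t = t_ns % cycle_ns
    for entry in gcl:
        if t < entry.interval_ns:
            return [q for q in range(8) if entry.gate_states & (1 << q)]
        t -= entry.interval_ns
    return []

# Example cycle: queue 7 alone for 200 us, then queues 0-6 for 800 us.
gcl = [GclEntry(0b10000000, 200_000), GclEntry(0b01111111, 800_000)]
print(open_queues(gcl, 150_000))   # [7]
print(open_queues(gcl, 500_000))   # [0, 1, 2, 3, 4, 5, 6]
```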
One of the key points for using DDS in combination with TSN is scheduling planning. After the scheduling plan is determined, a gating list is generated in the TSN based on the plan, and the gating list is issued to each node in the communication network. After the DDS obtains the gating list from the network node, the DDS sends the data stream to the corresponding queue of the TSN switch according to the gating list, so that transmission of the data stream is realized.
Because DDS has rich QoS policies, the data flows of DDS may not be as "orderly" as expected when the scheduling plan was drawn up. For example, a reliability (Reliability) policy causes the DDS to retransmit data in packet-loss scenarios; a durability (Durability) policy causes the DDS to resend historical data packets when a new data subscriber comes online. These behaviors produce unplanned burst traffic. Once generated, unplanned burst traffic needs to be handled in time and sent out.
The problem of handling burst traffic is generally addressed by reserving dedicated bandwidth for burst traffic in the gating list. However, since burst traffic is generated irregularly, the bandwidth reserved for it is wasted whenever no burst traffic occurs.
Disclosure of Invention
The application provides a traffic scheduling method, a traffic scheduling device and a vehicle, which can schedule generated burst traffic in time.
In a first aspect, the present application provides a traffic scheduling method applied to an autonomous vehicle, in particular to data distribution service (DDS) type communication middleware of the communication system of the autonomous vehicle. The method comprises:
A newly generated first data stream is acquired and the type of the first data stream is identified. If the type of the first data stream is determined to be burst traffic, queue allocation information corresponding to a queue allocation request is acquired, wherein the queue allocation information is determined based on the idle bandwidth of at least one queue in the gating list, the idle bandwidth corresponds to the total bandwidth available to each queue in the gating list, and the queue allocation information indicates that a first queue in the at least one queue is allocated to the first data stream. After the queue allocation information is obtained, the first data stream and the queue allocation information are sent to a time-sensitive network (TSN) protocol stack.
In the present application, when the first data stream is determined to be burst traffic, an attempt is made to select a transmission queue meeting the conditions from the idle bandwidth corresponding to the gating list. Specifically, according to the queue allocation request, the queue allocation information is determined based on the idle bandwidth of at least one queue in the gating list. This avoids the problem of burst traffic missing its time slot and delaying the sending time, makes full use of the Ethernet bandwidth resources, reduces the waste of bandwidth resources, and improves bandwidth utilization efficiency. After the queue allocation information corresponding to the first data stream is obtained, the queue allocation information and the first data stream are sent to the time-sensitive network TSN protocol stack, so that the TSN protocol stack sends the first data stream out through the corresponding queue according to the queue allocation information, thereby achieving dynamic scheduling and deterministic communication of burst traffic.
In one possible implementation, the queue allocation request includes a transmission bandwidth required by the first data flow and QoS requirements of the first data flow;
The first queue satisfies the following condition:
the free bandwidth of the first queue meets the transmission bandwidth required by the first data flow, and the QoS performance of the first queue meets the QoS requirement of the first data flow.
In this possible implementation, when bandwidth is allocated for the first data stream, it is necessary to consider whether the allocated free bandwidth can meet the transmission bandwidth required by the first data stream. Meanwhile, QoS requirements of different types of data streams may differ, so whether the QoS performance of the allocated queue can meet the requirement must also be considered. By setting these screening conditions, when the unplanned first data stream is matched with a queue meeting the conditions, the first data stream can be successfully transmitted through the TSN protocol stack. This avoids the problem of burst traffic missing its time slot and delaying the sending time, makes full use of the Ethernet bandwidth resources, and reduces the waste of bandwidth resources.
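As an illustration of this screening condition, the following sketch checks both constraints for each candidate queue. The field names (free_bandwidth, max_latency_us) are assumptions made here for clarity, and only latency is modelled as the QoS dimension.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class QueueState:
    queue_id: int
    free_bandwidth: float      # remaining bandwidth in this queue's time slots, bit/s
    max_latency_us: float      # worst-case latency the queue can guarantee

@dataclass
class QueueAllocationRequest:
    required_bandwidth: float  # transmission bandwidth needed by the burst stream, bit/s
    max_latency_us: float      # QoS latency requirement of the burst stream

def select_first_queue(request: QueueAllocationRequest,
                       queues: Iterable[QueueState]) -> Optional[QueueState]:
    """Return the first queue whose free bandwidth and QoS performance satisfy
    the queue allocation request, or None if no queue qualifies."""
    for q in queues:
        if (q.free_bandwidth >= request.required_bandwidth
                and q.max_latency_us <= request.max_latency_us):
            return q
    return None
```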
In a possible implementation manner, the method further includes:
acquiring a first bandwidth allocated to each queue in at least one queue in a gating list;
acquiring a second bandwidth actually used by each queue in at least one queue in a gating list;
And determining the idle bandwidth of at least one queue in the gating list according to the difference value between the first bandwidth and the second bandwidth of each queue in the at least one queue.
In this possible implementation, for each queue of the at least one queue, the free bandwidth of the queue may be obtained as the difference between the rated bandwidth allocated to the queue and the bandwidth actually used, so that the remaining bandwidth resources can be calculated quantitatively. Whether the bandwidth resources remaining in the queue can meet the transmission requirement of the first data stream can then be judged according to the idle bandwidth of the queue.
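A minimal sketch of this difference calculation, assuming per-queue bandwidth figures in bit/s keyed by queue identifier (the names are illustrative):

```python
def free_bandwidth(allocated_bps: dict, used_bps: dict) -> dict:
    """Free bandwidth per queue = allocated (first) bandwidth - actually used (second) bandwidth."""
    return {qid: max(allocated_bps[qid] - used_bps.get(qid, 0.0), 0.0)
            for qid in allocated_bps}

# Example: queue 5 was planned with 40 Mbit/s but its periodic streams only use
# 25 Mbit/s, so 15 Mbit/s remain available for burst traffic.
print(free_bandwidth({5: 40e6, 6: 20e6}, {5: 25e6, 6: 20e6}))  # {5: 15000000.0, 6: 0.0}
```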
In a possible implementation manner, the method further includes:
acquiring a clock reference source and performing clock synchronization according to the clock reference source, wherein the clock reference source is the same as that used by the TSN protocol stack.
In this possible implementation, each node in the network of the TSN protocol stack is clock synchronized. In order to ensure the synchronous data transmission between the DDS type communication middleware and the TSN protocol stack, the clock reference source of the DDS type communication middleware is the same as that of the TSN protocol stack so as to ensure the clock synchronization between the DDS type communication middleware and the TSN protocol stack, thereby further realizing the deterministic communication of data traffic.
In a possible implementation manner, determining the type of the first data stream as burst traffic includes:
Acquiring a DDS topic corresponding to the first data stream, and if the data stream type corresponding to the DDS topic is burst traffic, determining that the first data stream is burst traffic; and/or,
If the first data stream is a retransmission data stream, determining the type of the first data stream as burst traffic.
In this possible implementation, by specifying concrete means of determining that data traffic is burst traffic, unplanned burst traffic can be processed and sent out in time when it is generated, thereby ensuring normal operation of the vehicle.
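The two criteria can be combined as in the following sketch. The topic names are hypothetical; a real implementation would read the burst/periodic classification from the configuration of the DDS topic.

```python
BURST_TOPIC_TYPES = {"/diagnostics/event", "/camera/history_replay"}  # illustrative topics

def is_burst_traffic(topic: str, is_retransmission: bool,
                     burst_topics=BURST_TOPIC_TYPES) -> bool:
    """A stream is treated as burst traffic if its DDS topic is registered as a
    burst-type topic, or if it is a retransmission (e.g. triggered by the
    Reliability or Durability QoS policy)."""
    return topic in burst_topics or is_retransmission
```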
In a possible implementation manner, the method further includes:
Acquiring a second data stream;
And if the type of the second data stream is determined to be the periodic flow, acquiring a second queue of the second data stream in the gating list, and sending the second data stream and information of the second queue to the TSN protocol stack.
In this possible implementation, when the generated data traffic is identified as periodic traffic, it is sent directly to the corresponding TSN queue, so that static scheduling of the periodic traffic is realized.
In a possible implementation, the gating list is generated based on QoS configuration information of at least one data flow, the data flow comprising a first data flow;
The QoS configuration information includes at least one of:
The data type of the data stream, the transmission period requirement of the data stream, the delay requirement of the data stream, the size of the data stream and the priority identification of the data stream.
In this possible implementation, the TSN protocol stack generates the gating list according to the QoS configuration information of at least one data stream, and sends the basic information of the gating list to each node in the vehicle-mounted network, including the DDS type communication middleware, so that the DDS type communication middleware schedules data streams in time according to the gating list and normal communication of the vehicle-mounted network is ensured.
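A sketch of what the per-stream QoS configuration information could look like as input to gating-list generation; the field names mirror the items listed above but are illustrative, and the offline scheduling algorithm itself is not shown.

```python
from dataclasses import dataclass

@dataclass
class StreamQosConfig:
    data_type: str         # e.g. "camera_image", "radar_point_cloud"
    period_us: int         # transmission period requirement (0 for aperiodic)
    max_latency_us: int    # delay requirement
    frame_size_bytes: int  # size of one data frame (worst case)
    priority: int          # priority identification, e.g. a value in 0-7

# The gating list would be computed offline from a set of such entries;
# this only illustrates the inputs, not the scheduling algorithm.
configs = [
    StreamQosConfig("camera_image", 33_333, 10_000, 1500, 5),
    StreamQosConfig("control_command", 10_000, 1_000, 128, 7),
]
```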
In one possible implementation, the free bandwidth in the gating list is updated after the first data stream and the queue allocation information are sent to the time sensitive network TSN protocol stack.
In this possible implementation, after the transmission of one burst traffic flow is completed, the idle bandwidth in the gating list needs to be updated and maintained in real time, so that the correct idle bandwidth is obtained the next time bandwidth is allocated for burst traffic.
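A minimal sketch of such an update, assuming the same per-queue free-bandwidth map as in the earlier sketch:

```python
def commit_burst_transmission(free_bps: dict, queue_id: int, consumed_bps: float) -> None:
    """After the burst stream has been handed to the TSN protocol stack,
    deduct the bandwidth it occupies so the next allocation sees fresh values."""
    free_bps[queue_id] = max(free_bps[queue_id] - consumed_bps, 0.0)

free_bps = {5: 15e6, 6: 0.0}
commit_burst_transmission(free_bps, 5, 4e6)
print(free_bps)  # {5: 11000000.0, 6: 0.0}
```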
In a second aspect, the present application provides a traffic scheduling device; its beneficial effects can be seen in the description of the first aspect and are not repeated here. The apparatus has the functionality to implement the actions in the method example of the first aspect described above. The functions can be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above. In one possible design, the device is applied to an autonomous vehicle and comprises:
The sending adaptation module is used for acquiring the first data stream, and for sending a queue allocation request to the traffic scheduling module if the type of the first data stream is determined to be burst traffic.
And the traffic scheduling module is used for generating queue allocation information according to the queue allocation request and sending the queue allocation information to the sending adaptation module, wherein the queue allocation information is determined based on the idle bandwidth of at least one queue in the gating list, and the queue allocation information indicates that a first queue in the at least one queue is allocated to the first data stream.
The transmission adaptation module is further configured to transmit the first data stream and the queue allocation information to a TSN protocol stack of the time sensitive network.
In one possible implementation, the queue allocation request includes a transmission bandwidth required by the first data flow and QoS requirements of the first data flow;
The first queue satisfies the following condition:
the free bandwidth of the first queue meets the transmission bandwidth required by the first data flow, and the QoS performance of the first queue meets the QoS requirement of the first data flow.
In a possible implementation manner, the traffic scheduling module is further configured to obtain a first bandwidth allocated to each queue in at least one queue in the gating list;
The traffic scheduling module is also used for acquiring a second bandwidth actually used by each queue in at least one queue in the gating list;
And the traffic scheduling module is also used for determining the idle bandwidth of at least one queue in the gating list according to the difference value between the first bandwidth and the second bandwidth of each queue in the at least one queue.
In a possible implementation manner, the apparatus further includes a clock management module, configured to:
acquiring a clock reference source and performing clock synchronization according to the clock reference source, wherein the clock reference source is the same as that used by the TSN protocol stack.
In a possible implementation manner, the sending adapting module is specifically configured to:
Acquiring a DDS topic corresponding to the first data stream, and if the data stream type corresponding to the DDS topic is burst traffic, determining that the first data stream is burst traffic; and/or,
If the first data stream is a retransmission data stream, determining the type of the first data stream as burst traffic.
In a possible implementation manner, the sending adapting module is further configured to obtain a second data stream;
And the sending adapting module is further used for acquiring a second queue of the second data stream in the gating list if the type of the second data stream is determined to be the periodic flow, and sending the second data stream and the information of the second queue to the TSN protocol stack.
In a possible implementation, the gating list is generated based on QoS configuration information of at least one data flow, the data flow comprising a first data flow;
The QoS configuration information includes at least one of:
The data type of the data stream, the transmission period requirement of the data stream, the delay requirement of the data stream, the size of the data stream and the priority identification of the data stream.
In a possible implementation manner, after the first data stream and the queue allocation information are sent to the TSN protocol stack of the time sensitive network, the sending adaptation module is further configured to update the idle bandwidth in the gating list.
In a third aspect, the present application provides a traffic scheduling device comprising at least one memory storing code and a processor configured to execute the code to cause the traffic scheduling device to perform the method of the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, the present application provides an autonomous vehicle that may include a memory for storing a program, a processor for executing the program in the memory, and a bus system, wherein the bus system is used to connect the memory and the processor so that the memory and the processor can communicate. The processor is configured to perform the method of the first aspect or any of the possible implementation manners of the first aspect, or the autonomous vehicle comprises the traffic scheduling device of the second aspect or the third aspect.
In a fifth aspect, the present application provides a computer storage medium storing a computer program which, when executed by a computer, causes the computer to carry out the method of the first aspect or any of the possible implementations of the first aspect.
In a sixth aspect, the present application provides circuitry comprising processing circuitry configured to perform the method of the first aspect or any of the possible implementations of the first aspect.
In a seventh aspect, the application provides a computer program product which, when executed by a computer, implements the method of the first aspect or any of the possible implementations of the first aspect.
In an eighth aspect, the present application provides a chip system, which includes a processor for implementing the functions of the autonomous vehicle in the methods of the above aspects. In one possible design, the chip system further includes a memory for holding program instructions and/or data. The chip system can be composed of chips, and can also comprise chips and other discrete devices.
The solutions of the second to eighth aspects are used to implement, or cooperate to implement, the method in the first aspect or any possible implementation manner of the foregoing first aspect, so the same or corresponding beneficial effects as those of the first aspect can be achieved and are not described here again.
Drawings
Fig. 1 is a schematic structural diagram of an autonomous vehicle according to an embodiment of the present application;
Fig. 2a is a schematic diagram of a network architecture of an on-vehicle communication system according to an embodiment of the present application;
Fig. 2b is a schematic flow chart of a traffic scheduling method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the architecture of a DCPS model according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the communication flow between a publisher and a subscriber in a DDS according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a gating mechanism according to an embodiment of the present application;
Fig. 6 is another schematic flow chart of a traffic scheduling method according to an embodiment of the present application;
Fig. 7a is a schematic diagram of the first bandwidth allocated to a queue in a gating list of a traffic scheduling method according to an embodiment of the present application;
Fig. 7b is a schematic diagram of the second bandwidth actually used by a queue in a gating list of a traffic scheduling method according to an embodiment of the present application;
Fig. 8 is a schematic diagram of the idle bandwidth of a queue in a gating list of a traffic scheduling method according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an 802.1Q protocol frame of a traffic scheduling method according to an embodiment of the present application;
Fig. 10 is another schematic flow chart of a traffic scheduling method according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a traffic scheduling device according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of DDS type communication middleware according to an embodiment of the present application;
Fig. 13 is a schematic diagram of the hierarchical structure of a traffic scheduling module according to an embodiment of the present application;
Fig. 14 is a schematic diagram of a traffic detection module according to an embodiment of the present application;
Fig. 15 is a schematic structural diagram of another traffic scheduling device according to an embodiment of the present application;
Fig. 16 is another schematic structural diagram of an autonomous vehicle according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application. As a person skilled in the art can know, with the development of technology and the appearance of new scenes, the technical scheme provided by the embodiment of the application is applicable to similar technical problems.
The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules that are expressly listed or inherent to such process, method, article, or apparatus.
The term "and/or" appearing in the present application may be an association relationship describing an associated object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In the present application, the character "/" generally indicates that the front and rear related objects are an or relationship.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or the reverse order may sometimes be executed, depending upon the functionality/acts involved.
In the embodiments of the present application, unless otherwise indicated, "at least one" means one or more, and "a plurality" means two or more. It is to be understood that in the present application, terms such as "when …" and "if" are used to indicate that the device performs the corresponding processing under some objective condition; they are not intended to limit the time, do not require a judging action in the implementation of the device, and do not imply other limitations. In addition, the word "exemplary" means "serving as an example, embodiment, or illustration". Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The scheme provided by the embodiments of the present application can be applied to the Automotive Open System Architecture (AUTOSAR) of an autonomous vehicle, particularly to Adaptive Platform (AP) AUTOSAR, and specifically can be applied to the middleware layer of the communication system of the autonomous vehicle. To locate and illustrate this middleware layer of the communication system of the vehicle, the following description is made with reference to fig. 1 and fig. 2a. Referring to fig. 1, fig. 1 is a schematic structural diagram of an autonomous vehicle according to an embodiment of the present application, and an autonomous vehicle is taken as an example.
The autonomous vehicle 10 is configured in a fully or partially autonomous mode, for example, the autonomous vehicle 10 may control itself while in the autonomous mode, and may determine the current state of the vehicle and its surroundings by human operation, determine possible behaviors of at least one other vehicle in the surroundings, and determine a confidence level corresponding to the likelihood of the other vehicle performing the possible behaviors, and control the autonomous vehicle 10 based on the determined information. While the autonomous vehicle 10 is in the autonomous mode, the autonomous vehicle 10 may also be configured to operate without human interaction.
The autonomous vehicle 10 may include various subsystems such as a travel system 102, a sensor system 104, a control system 106, one or more peripherals 108, as well as a power source 110, a computer system 112, and a user interface 116. Alternatively, the autonomous vehicle 10 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the subsystems and components of the autonomous vehicle 10 may be interconnected by wires or wirelessly.
The travel system 102 may include components that provide powered movement of the autonomous vehicle 10. In one embodiment, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine of a gasoline engine and an electric motor, or a hybrid engine of an internal combustion engine and an air compression engine. Engine 118 converts energy source 119 into mechanical energy. Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. The energy source 119 may also provide energy to other systems of the autonomous vehicle 10. The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 120 may also include other devices, such as a clutch. Wherein the drive shaft may comprise one or more axles that may be coupled to one or more wheels 121.
The sensor system 104 may include a number of sensors that sense information about the environment surrounding the autonomous vehicle 10. For example, the sensor system 104 may include a positioning system 122 (which may be a global positioning GPS system, or a Beidou system or other positioning system), an inertial measurement unit (inertial measurement unit, IMU) 124, radar 126, laser rangefinder 128, and camera 130. The sensor system 104 may also include sensors (e.g., in-vehicle air quality monitors, fuel gauges, oil temperature gauges, etc.) that are monitored for internal systems of the autonomous vehicle 10. The sensed data from one or more of these sensors may be used to detect the object and its corresponding characteristics (location, shape, direction, speed, etc.). Such detection and identification is a critical function of the safe operation of the autonomous vehicle 10.
Wherein the positioning system 122 may be used to estimate the geographic location of the autonomous vehicle 10. The IMU 124 is configured to sense changes in the position and orientation of the autonomous vehicle 10 based on inertial acceleration. In one embodiment, the IMU 124 may be a combination of an accelerometer and a gyroscope. The radar 126 may utilize radio signals to sense objects within the surrounding environment of the autonomous vehicle 10, which may embody millimeter wave radar or lidar in particular. In some embodiments, radar 126 may be used to sense the speed and/or heading of an object in addition to sensing the object. The laser rangefinder 128 may utilize a laser to sense objects in the environment in which the autonomous vehicle 10 is located. In some embodiments, laser rangefinder 128 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components. The camera 130 may be used to capture a plurality of images of the surroundings of the autonomous vehicle 10. The camera 130 may be a still camera or a video camera.
The control system 106 is configured to control the operation of the autonomous vehicle 10 and its components. The control system 106 may include various components, including a steering system 132, a throttle 134, a brake unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
Wherein the steering system 132 is operable to adjust the heading of the autonomous vehicle 10. For example, in one embodiment may be a steering wheel system. The throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the autonomous vehicle 10. The brake unit 136 is used to control the speed of the autonomous vehicle 10. The brake unit 136 may use friction to slow the wheel 121. In other embodiments, the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current. The brake unit 136 may take other forms to slow the rotational speed of the wheels 121 to control the speed of the autonomous vehicle 10. The computer vision system 140 may be operable to process and analyze images captured by the camera 130 to identify objects and/or features in the environment surrounding the autonomous vehicle 10. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, in-motion restoration structure (Structure from Motion, SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 140 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The route control system 142 is used to determine the travel route and the travel speed of the autonomous vehicle 10. In some embodiments, the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, the lateral planning module 1421 and the longitudinal planning module 1422 being configured to determine a travel route and a travel speed for the autonomous vehicle 10 in conjunction with data from the obstacle avoidance system 144, the GPS 122, and one or more predetermined maps, respectively. The obstacle avoidance system 144 is operable to identify, evaluate, and avoid or otherwise override obstacles in the environment of the autonomous vehicle 10 that may embody, in particular, actual obstacles and virtual mobiles that may collide with the autonomous vehicle 10. In one example, control system 106 may additionally or alternatively include components other than those shown and described. Or some of the components shown above may be eliminated.
The autonomous vehicle 10 interacts with external sensors, other vehicles, other computing systems, or users through peripheral devices 108. Peripheral devices 108 may include a wireless communication system 146, a vehicle computer 148, a microphone 150, and/or a speaker 152. In some embodiments, the peripheral device 108 provides a means for a user of the autonomous vehicle 10 to interact with the user interface 116. For example, the vehicle computer 148 may provide information to a user of the autonomous vehicle 10. The user interface 116 is also operable with the vehicle computer 148 to receive user input. The vehicle computer 148 may be operated by a touch screen. In other cases, the peripheral device 108 may provide a means for the autonomous vehicle 10 to communicate with other devices located within the vehicle. For example, microphone 150 may receive audio (e.g., voice commands or other audio inputs) from a user of autonomous vehicle 10. Similarly, the speaker 152 may output audio to a user of the autonomous vehicle 10.
The wireless communication system 146 communicates wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 146 may use 3G, 4G or 5G cellular communication, or wireless local area network (WLAN) communication, etc. In some embodiments, the wireless communication system 146 may use an infrared link, Bluetooth, or ZigBee to communicate directly with a device. Other wireless protocols, such as various vehicle communication systems, may also be used; for example, the wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
The power source 110 may provide power to various components of the autonomous vehicle 10. In one embodiment, the power source 110 may be a rechargeable lithium ion or lead acid battery. One or more battery packs of such batteries may be configured as a power source to provide power to the various components of the autonomous vehicle 10. In some embodiments, the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
Some or all of the functions of the autonomous vehicle 10 are controlled by a computer system 112. The autonomous vehicle 10 may include one or more computer systems 112, each computer system 112 may include at least one processor 113, the processor 113 executing instructions 115 stored in a non-transitory computer readable medium such as memory 114. If multiple computer systems 112 are present in the autonomous vehicle 10, the multiple computer systems 112 may be configured to control various components or subsystems of the autonomous vehicle 10 in a distributed manner, i.e., various components or subsystems of the autonomous vehicle 10 may each have their own processor, and some components, such as the steering and retarding components, may each have their own processor that only performs calculations related to component-specific functions.
Each processor 113 may be any conventional processor, such as a central processing unit (central processing unit, CPU). Alternatively, the processor 113 may be a special purpose device such as an application-specific integrated circuit (ASIC) or another hardware-based processor.
Although FIG. 1 functionally illustrates a processor, memory, and other components of computer system 112 in the same block, one of ordinary skill in the art will appreciate that the processor, or memory, may in fact comprise multiple processors, or memories, that are not stored within the same physical housing. For example, memory 114 may be a hard disk drive or other storage medium located in a different housing than computer system 112. Thus, references to processor 113 or memory 114 will be understood to include references to a collection of processors or memories that may or may not operate in parallel.
In some embodiments, the memory 114 may contain instructions 115 (e.g., program logic) that the instructions 115 may be executed by the processor 113 to perform various functions of the autonomous vehicle 10, including those described above. The memory 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripherals 108. In addition to instructions 115, memory 114 may store data such as road maps, route information, vehicle location, direction, speed, and other such vehicle data, as well as other information. Such information may be used by the autonomous vehicle 10 and the computer system 112 during operation of the autonomous vehicle 10 in autonomous, semi-autonomous, and/or manual modes. A user interface 116 for providing information to or receiving information from a user of the autonomous vehicle 10. Optionally, the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, such as a wireless communication system 146, a vehicle computer 148, a microphone 150, and a speaker 152.
The computer system 112 may control the functions of the autonomous vehicle 10 based on inputs received from various subsystems (e.g., the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. If there are multiple computer systems 112 in the autonomous vehicle 10, the computer systems 112 may perform data interaction by using a wired communication manner, so as to realize control over the autonomous vehicle 10. For example, the computer system 112 may utilize inputs from the control system 106 to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the autonomous vehicle 10 and its subsystems.
Alternatively, one or more of these components may be mounted separately from or associated with the autonomous vehicle 10. For example, the memory 114 may exist partially or completely separate from the autonomous vehicle 10. The above components may be communicatively coupled together in a wired and/or wireless manner.
Alternatively, the above components are only an example, and in practical applications, components in the above modules may be added or deleted according to actual needs, and fig. 1 should not be construed as limiting the embodiments of the present application. An autonomous vehicle traveling on a roadway, such as autonomous vehicle 10 above, may identify objects within its surrounding environment to determine adjustments to the current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently and based on its respective characteristics, such as its current speed, acceleration, spacing from the vehicle, etc., may be used to determine the speed at which the autonomous vehicle is to adjust.
Alternatively, the autonomous vehicle 10 or a computing device associated with the autonomous vehicle 10, such as the computer system 112, computer vision system 140, memory 114 of fig. 1, may predict the behavior of the identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Alternatively, each identified object depends on each other's behavior, so all of the identified objects can also be considered together to predict the behavior of a single identified object. The autonomous vehicle 10 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle 10 is able to determine what steady state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the autonomous vehicle 10, such as the lateral position of the autonomous vehicle 10 in the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so forth. In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the autonomous vehicle 10 such that the autonomous vehicle 10 follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the autonomous vehicle 10 (e.g., cars in adjacent lanes on a roadway).
The autonomous vehicle 10 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, or the like, and the embodiment of the present application is not particularly limited.
With reference to fig. 2a in conjunction with the above description, fig. 2a is a schematic diagram of a network architecture of an on-vehicle communication system according to an embodiment of the present application. Illustratively, the communication system of the vehicle may be a wired communication system in the computer system 112 in the autonomous vehicle 10 or may be a wireless communication system 146 in the autonomous vehicle 10.
As shown in fig. 2a, the communication system in the vehicle may include an application layer (Application Layer), a DDS layer, a TSN layer, and a physical layer. It should be noted that the communication system of the vehicle may include more or fewer layers, and the example in fig. 2a is merely for convenience of locating the position of the intermediate layer, and is not limited to this scheme.
(1) Application layer
The application layer deploys one or more applications. The multiple applications of the application layer may involve one or more communication services (services), i.e. part of the applications may need to configure the communication services involved, each of which needs to bind communication middleware specified by a communication protocol. Different communication services can bind the same kind of communication middleware, and also bind different kinds of communication middleware. Which types of communication services are present in the vehicle may be predefined, and which communication services an application involves may also be predefined. For example, a communication service may be defined as transmitting one or more types of data. By way of example, the sensor system 104 may obtain location information, motion information, radar data, laser data, and image data, and there may be one processor in the processor 113 for controlling the sensor system 104, and applications deployed on the processor's operating system may be related to three communication services. One of the aforementioned three communication services may be defined as transmitting position information, radar data, or laser data, another communication service may be defined as transmitting motion information, and another communication service may be defined as transmitting image data.
It should be understood that the examples herein are merely for convenience of understanding the present solution, and the scope of a specific communication service may be flexibly configured in connection with an actual application scenario, which is not limited herein. It should be noted that, since the application layer may deploy a plurality of applications, different applications may relate to the same communication service or may relate to different communication services.
(2) DDS layer
The DDS layer is a DDS type communication middleware (hereinafter simply referred to as "DDS middleware"). Unlike communication middleware defined by the SOME/IP protocol (Scalable service-Oriented Middleware over IP, an IP-based extensible service-oriented middleware), or other types of communication middleware, the most important feature of DDS middleware is data-centric. The DDS middleware provides distributed data transceiving services for upper-layer applications so that application developers can easily concentrate on defining and using a structured data model for managing QoS of data exchange and security policies around data objects.
In the present application, referring to fig. 2b, fig. 2b is a flow chart of a flow scheduling method according to an embodiment of the present application. Wherein, A1, DDS middleware acquires first data stream. A2, if the DDS middleware determines that the type of the first data stream is burst traffic, the DDS middleware acquires queue allocation information corresponding to the queue allocation request, wherein the queue allocation information is determined based on the idle bandwidth of at least one queue in the gating list, and the queue allocation information indicates that the first queue in the at least one queue is allocated to the first data stream. A3, the DDS middleware transmits the first data stream and the queue allocation information to a time sensitive network TSN protocol stack. Specifically, when the DDS middleware determines that the first data stream is burst traffic, it may attempt to select a transmit queue meeting a condition in the idle bandwidth corresponding to the gating list. The specific selection mode is that according to the queue allocation request, the queue allocation information is determined based on the idle bandwidth of at least one queue in the gating list. And then, after the DDS middleware acquires the queue allocation information corresponding to the first data stream, the queue allocation information and the first data stream are sent to a time sensitive network TSN protocol stack.
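Putting steps A1-A3 together, a schematic send path of the DDS middleware could look like the sketch below. It reuses the helpers sketched in the summary above (is_burst_traffic, QueueAllocationRequest) and assumes illustrative interfaces scheduler.allocate, gating_list.planned_queue and tsn_stack.send; these names are not taken from the embodiment.

```python
def handle_outgoing_stream(stream, gating_list, scheduler, tsn_stack):
    """Sketch of steps A1-A3: classify the newly generated stream, obtain queue
    allocation information for burst traffic (or the statically planned queue
    for periodic traffic), then hand the stream and its queue to the TSN stack."""
    if is_burst_traffic(stream.topic, stream.is_retransmission):       # A1/A2: burst traffic
        request = QueueAllocationRequest(stream.required_bandwidth,
                                         stream.max_latency_us)
        queue = scheduler.allocate(request, gating_list)               # based on free bandwidth
        if queue is None:
            raise RuntimeError("no queue has enough free bandwidth / QoS headroom")
    else:                                                              # periodic traffic
        queue = gating_list.planned_queue(stream.topic)                # statically planned queue
    tsn_stack.send(stream, queue)                                      # A3: hand over to the TSN stack
```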
(3) TSN (Time-Sensitive Networking) layer
The TSN layer uses TSN hardware, and its TSN protocol stack exchanges data with the DDS layer. TSN is a standardized technology based on standard Ethernet that provides deterministic information transmission, and it can provide reliable quality of service (Quality of Service, QoS) guarantees for time-sensitive services.
Specifically, an application program is run on the terminal node, and the TSN layer issues or updates configuration information such as a gating list to each network node, including the terminal node. The terminal node runs the application program, the terminal node sends the data stream generated by the application program to the TSN layer through the DDS layer, and the TSN layer sends the data to the network node, so that the high-reliability low-delay delivery of the data from end to end is realized.
(4) Physical layer
In the open systems interconnection (Open System Interconnection, OSI) reference model, the physical layer is the lowest layer of the reference model and is also the first layer of the OSI model. The main functions of the physical layer are: the transmission medium is used for providing physical connection for the data link layer, so that transparent transmission of the bit stream (the transmission unit is bits) is realized, and correct transmission of the bit stream through the transmission medium is ensured. The physical layer is used for realizing transparent transmission of bit streams between adjacent computer nodes, and shielding the difference between specific transmission media and physical equipment as much as possible, so that the data link layer does not need to consider what the specific transmission media of the network are. In the embodiment of the application, after the TSN layer acquires the data of the DDS layer, the data is sent to the network node through the physical layer.
In summary, the reasons why the DDS layer and the TSN layer can be well combined mainly include:
the TSN layer defines the timing requirements for each data stream and configures the network paths to ensure that the requirements are met. The TSN also provides isolation for different data streams so that real-time traffic is not interfered with by other communications occurring on the same network. But since the technology is at a lower level in the configuration stack, applications must configure flows, packet sizes, frequencies, priorities, network endpoints, etc. While this can be done for simple applications of several nodes and flows, it becomes tricky for more complex systems.
The DDS layer is located closer to the application layer and can provide a higher level of interface in terms of theme, application data type, qoS (e.g., reliability, persistence, priority, expiration date) associated with the application, and handle lower levels of detail such as discovering endpoints and establishing communications.
By placing the DDS layer above the TSN layer, the DDS layer is able to pass data streams generated by an application to the TSN layer based on a data-centric publish/subscribe topic-based mechanism. Therefore, application program developers can easily utilize the DDS software data bus with TSN advanced network function to create a strong distributed data center software integration framework with high certainty, reliability, expandability and usability characteristics.
It should be noted that, to place DDS on top of TSN, a scheduling plan is made at the TSN layer before the vehicle network runs, a gating list is generated from it, and the list is sent to the terminal nodes in advance. Scheduling for TSN has long been an important research direction, and its ultimate purpose is to generate a compact, efficient and timely scheduling plan. A good scheduling plan should meet the following requirements:
(1) Within the constraint range of QoS, the transmission bandwidth can be allocated to various data traffic in time, so that the data traffic can be transmitted in time.
(2) Bandwidth resources can be maximally utilized.
But creating an effective compact scheduling plan is not easy for the following reasons:
(1) The size of the data frames may be different. Scheduling must take into account the worst case, not the most frequent case. If a data frame of a certain priority is maximally 1500 bytes in size, then the allocated queue must be able to send a data frame of 1500 bytes in size when allocating a queue for that priority.
(2) Data frames may appear aperiodically because unplanned burst traffic (e.g., rare commands, retransmitted data, etc.) has to be sent.
(3) The data streams with different cycle times need to coexist on one line. One approach to solving such problems is to use a slower TSN schedule and open the gate multiple times for faster traffic during the TSN period.
The gating list is a carrier of the dispatch plan. After the scheduling plan is determined, the gating list is then issued to nodes in the network. Specifically, the TSN layer issues or updates configuration information such as a gating list to routing nodes and end nodes on the network. And running an application program on the terminal node, sending the data stream generated by the application program to the DDS layer by the terminal node, performing gate control queue scheduling shaping on the generated data stream by the DDS layer, and sending the data stream out by the TSN layer to realize high-reliability low-delay delivery of the data from end to end.
Illustratively, during a transmission period of one TSN, the data interaction process of the communication system of the vehicle is specifically:
The transmission process comprises the following steps: in the application layer, an application running on the terminal node sends the corresponding type of data to the DDS layer. In the DDS layer, according to the acquired gating list, the data traffic under the same topic is packaged and sent to the TSN layer. After the TSN layer acquires the data of the DDS layer, it reads the QoS settings of the data, searches for a matching control policy according to the QoS, and uses the matched control policy to send the data to the network node through the physical layer using a dedicated queue.
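The mapping from a stream's priority onto a dedicated queue is typically carried in the 802.1Q VLAN tag (see fig. 9): the 3-bit PCP field of the Tag Control Information selects the traffic class. A small sketch of building that 2-byte field follows (this is the standard 802.1Q layout, not a detail specific to this embodiment):

```python
def vlan_tci(pcp: int, dei: int, vlan_id: int) -> bytes:
    """Build the 2-byte 802.1Q Tag Control Information: PCP (3 bits),
    DEI (1 bit), VLAN ID (12 bits). The PCP value is what maps a frame
    onto one of the eight transmission queues."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 0xFFF
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return tci.to_bytes(2, "big")

print(vlan_tci(pcp=5, dei=0, vlan_id=100).hex())  # 'a064'
```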
The receiving process comprises the following steps: when the network node receives the data, the TSN layer parses the data and passes it to the DDS layer. The listener corresponding to the data reader monitors the DDS layer for new data at any time; when subscribed data is found, the message is received, parsed and delivered to the corresponding data reader through the publish/subscribe middleware, and after the data reader obtains the data a notification is sent to the application program, so that the data exchange is completed.
The scenario of the application of the traffic scheduling method provided by the embodiment of the present application is described above, and the specific implementation process of the traffic scheduling method provided by the embodiment of the present application will be described in detail below.
First, in order to better understand the aspects of the embodiments of the present application, related terms and concepts that may be related to the embodiments of the present application are described below.
(1) DDS (Data Distribution Service)
The DDS describes the interfaces and behaviors of data publication, transmission and reception in a distributed real-time system, defines a data-centric publish/subscribe mechanism, and provides platform-independent data model and communication basic services to complete the publication and subscription of data. The DDS provides a Data-Centric Publish-Subscribe (DCPS) model; refer to fig. 3, which is a schematic diagram of the architecture of the DCPS model according to an embodiment of the present application. Data interaction takes four forms: one-to-one, one-to-many, many-to-one, and many-to-many.
In the DCPS model, the following entities are included: Domain, data writer (DataWriter), publisher (Publisher), data reader (DataReader), subscriber (Subscriber), and topic (Topic). In this model, the party that writes data into the "global data space" is the publisher together with its data writer, and the party that reads data from the "global data space" is the subscriber together with its data reader. Data sharing in the DDS takes the Topic as its unit, and the application program can determine the data type contained in it through the Topic without depending on other context information. The details are as follows:
Domain and domain participants: the basic structure in the DDS is the Domain, which binds the various applications together for communication and divides the subspaces of data communication. Any two entities (Entity) in the DDS must be in the same Domain to communicate, i.e., the Domain IDs with which they are initialized must be the same, and the Domain IDs of different domains must be unique. The domain participant (Domain Participant) within a Domain is the entry point of the services; any DDS application first needs to acquire a domain participant and then acquires other services, such as the Publisher, Subscriber and Topic, through the domain participant. Only components within the same domain can communicate with each other, and data cannot be exchanged between different domains. Domains allow applications in the same communicable network segment to be logically isolated, and each application has a domain participant acting as its proxy in the "global data space".
Publisher and data writer: there may be multiple publishers and subscribers within the same domain. One publisher/subscriber may have multiple data readers/data writers. The publisher is a manager of data transmission, is responsible for the actual transmission of data, and creates and manages data writers (one publisher may have multiple data writers, which are in one-to-one correspondence with topics).
Subscribers and data readers: subscribers are recipients of the data transmission, responsible for the actual receipt of the data, and create and manage the data readers (a subscriber may have multiple data readers, one for each topic).
Topic: the Topic is an abstraction of a data set in the global data space and is the channel through which publishers and subscribers exchange data. One Topic can only encapsulate one data type. Data sharing in the DDS takes the Topic as the unit, and an application program can determine the type of data contained in the DDS through the Topic without depending on other context information. Taking the autonomous vehicle 10 shown in fig. 1 as an example, the autonomous vehicle 10 may need to acquire a large amount of surrounding information through the sensor system 104, and the vehicle information can be abstracted into a plurality of Topics, including an obstacle position Topic, a current vehicle speed Topic, and so on. The data is then delivered by the DDS to the subscribers that subscribe to the Topic.
The DDS also provides a rich set of QoS (Quality of Service) policies, which can meet various performance and resource-control requirements of the application system. The user can adjust the QoS at any time according to different application requirements, which greatly enhances the controllability and tailorability of the system. Every communicating entity can set corresponding QoS policies, but communication succeeds only if the QoS of the two communicating parties match. If they do not match, the data distribution service provides a corresponding fault-tolerance mechanism.
For a clearer explanation of a DDS publish-subscribe mechanism, refer to fig. 4, and fig. 4 is a schematic communication flow diagram of a publisher and subscriber in a DDS according to an embodiment of the present application. The method specifically comprises the following steps:
The issuing party:
domain participants, publishers, publishing topics and data writers are created in sequence, and information topics are published to the global data space.
When receiving the confirmation information sent by the global data space, the publisher confirms whether a subscriber has subscribed to the published topic, and if so, sends the data corresponding to the information topic to the subscriber;
subscriber:
Domain participants, subscribers, subscription topics, and data readers are created in sequence, and subscription information topics are sent to the global data space.
Waiting for receiving data corresponding to the information subject sent by the publisher and reading the data.
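For illustration only, the following sketch models the publish-subscribe flow described above with a hypothetical in-process "global data space"; the class and method names are illustrative assumptions and do not correspond to any specific DDS vendor API.

    class GlobalDataSpace:
        def __init__(self):
            self.subscriptions = {}          # topic name -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscriptions.setdefault(topic, []).append(callback)

        def publish(self, topic, sample):
            for callback in self.subscriptions.get(topic, []):
                callback(sample)             # deliver to every matched data reader


    class DomainParticipant:
        def __init__(self, domain_id, space):
            # participants must share the same domain ID to communicate (not enforced in this sketch)
            self.domain_id = domain_id
            self.space = space

        def create_data_writer(self, topic):
            return DataWriter(self.space, topic)

        def create_data_reader(self, topic, on_data):
            self.space.subscribe(topic, on_data)


    class DataWriter:
        def __init__(self, space, topic):
            self.space, self.topic = space, topic

        def write(self, sample):
            self.space.publish(self.topic, sample)


    # Publisher side: create participant, writer for a topic, publish a sample.
    space = GlobalDataSpace()
    publisher_side = DomainParticipant(domain_id=0, space=space)
    speed_writer = publisher_side.create_data_writer("current_vehicle_speed")

    # Subscriber side: create participant and reader; the callback plays the role of
    # the listener that notifies the application when subscribed data arrives.
    subscriber_side = DomainParticipant(domain_id=0, space=space)
    subscriber_side.create_data_reader("current_vehicle_speed",
                                       lambda s: print("received:", s))

    speed_writer.write({"speed_kmh": 62.5})   # delivered to the matched subscriber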
(2) TSN (Time-Sensitive Networking)
The TSN is a protocol cluster consisting of a series of protocol standards, each protocol being used to implement a different function. The IEEE 802.1 working group defines a time-aware shaping mechanism (Time-Aware Shaper, TAS) in the 802.1Qbv standard, through which the determinism of the transmission delay of critical traffic on end systems and switches is guaranteed. IEEE 802.1Qbv also introduces a gate-operation scheduling mechanism to achieve orderly scheduling of data frames by switches and terminals. Each switch port is configured with a gating list (Gate Control List, GCL) comprising gate states and time slots; each queue is associated with a gate, and each entry of the gating list corresponds to a transmission gate operation. The gating list is executed cyclically, and data can be sent only when the gate is open; the sending of other data can be closed before and during the sending of critical data to ensure that the critical data is not affected, which is also the key to how TAS guarantees determinism. Typically, the switch has 8 queues at the egress of each port.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a gating mechanism according to an embodiment of the present application. Eight gates are set at the output ends of the standard eight queues; a gate state of "open" indicates that the queue is allowed to output messages, and a gate state of "closed" indicates that the queue is not allowed to output messages. The gate state is determined by the gating list and the current time. The gating list describes the gate state of each time period within the gating period, and the gate states are applied cyclically. Each entry defines the gate state of every queue in one time slot and the effective time of that entry. When the gates of several queues are open, the output queue selection algorithm selects a specific queue to output messages according to its policy. When the state of a transmission gate is "o" (meaning "open"), a data frame may be selected from that queue for transmission; when the state of the transmission gate is "C" (meaning "closed"), no frame can be selected from that queue for transmission. T00-T79 refer to 80 different time slots, and each time slot corresponds to the gate states of the eight queues. For example, the gate states of the 8 queues in time slot T05 are "CoCCoCCC", and after time slot T05 passes, the gate states become "oCooCooo" until T06.
The state of each gate is determined by the gating list shown in table 1 below.

Sequence number    Time slot (ms)    Queue 1    Queue 2
1                  0-229             Closed     Closed
2                  229-230           Open       Closed
3                  230-409           Closed     Closed
4                  409-410           Closed     Open
...                ...               ...        ...
127                9429-9430         Closed     Closed

TABLE 1
For the configuration of the gate status and time slots in the gating list, the transmission period of the data frames in each queue needs to be considered. In table 1, it is assumed that queue 1 in the gating list is open every 230ms for 1ms, i.e., the period of queue 1 is 230ms. Queue 2 opens every 410ms for 1ms, i.e., the period of queue 2 is 410ms. The period of the gating list is the least common multiple of the period of queue 1 and the period of queue 2, 9430ms.
It should be understood that the gating list shown in table 1 is merely an example, and the gating list is not limited to what is shown in table 1. In actual operation, the content of the gating list may be configured according to actual requirements, which is not limited herein.
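As an illustration of how such a gating list can be interpreted, the following sketch computes the gating period as the least common multiple of the queue periods and looks up the gate state of a queue at a given moment; the entries are taken from table 1 and the data structures are illustrative assumptions.

    from math import lcm

    QUEUE_PERIODS_MS = {"queue1": 230, "queue2": 410}
    GATING_PERIOD_MS = lcm(*QUEUE_PERIODS_MS.values())     # 9430 ms, as in the text

    # Each entry: (start_ms, end_ms, {queue: "open"/"closed"}).
    GATING_LIST = [
        (0,   229, {"queue1": "closed", "queue2": "closed"}),
        (229, 230, {"queue1": "open",   "queue2": "closed"}),
        (230, 409, {"queue1": "closed", "queue2": "closed"}),
        (409, 410, {"queue1": "closed", "queue2": "open"}),
        # ... remaining entries up to 9430 ms ...
    ]

    def gate_state(queue, t_ms):
        """Return the gate state of a queue at time t_ms within the gating period."""
        t = t_ms % GATING_PERIOD_MS
        for start, end, states in GATING_LIST:
            if start <= t < end:
                return states[queue]
        return "closed"                      # slots not listed in this sketch default to closed

    print(GATING_PERIOD_MS)                  # 9430
    print(gate_state("queue1", 229.5))       # open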
(3) Quality of service (Quality of Service, QoS)
Quality of service means that the network can use various underlying technologies to provide better service capability for specified network traffic; it is a guarantee mechanism of the network and a technology for solving problems such as network delay and congestion.
The user can control the way data is shared among applications by setting QoS policies, and each DCPS entity, including the Topic, DataWriter, Publisher, DataReader, Subscriber, etc., can independently configure the corresponding QoS policies. See table 2 below, which lists a few commonly used QoS policies.
TABLE 2
It should be appreciated that, in order to ensure that messages of different priorities are received with different QoS treatments, including delay, bandwidth, etc., when the network is congested, messages of different priorities may be allocated to different queues, where the different queues correspond to different scheduling priorities. In addition, only a portion of the QoS policies are shown in table 2, which is not limited herein.
(4) RTPS (Real-Time Publish-Subscribe) protocol
The DDS technique uses the RTPS protocol for transmission. The RTPS protocol divides communication ranges by the concept of the domain; each domain may include many participants. After each participant comes online, it can be matched with the other online participants in the domain through the participant discovery protocol (Participant Discovery Phase, PDP) and the endpoint discovery protocol (Endpoint Discovery Phase, EDP), and the matched participants can communicate with the other participants through the subscribers and publishers in the domain.
It will be appreciated that the transport layer protocol is not included in the DDS standard, so different DDS implementations may use different message interaction methods, or even different transport protocols, which may make DDS implementations from different vendors non-interoperable. As DDS is applied more and more widely in large-scale distributed systems, the need for a unified transport layer standard has become more and more pressing. The RTPS protocol was created against this background; its main purpose is to meet the requirements of large-scale distributed systems in the field of industrial automation, and it fits the characteristics of the DDS protocol well. The specification defines the message format, the message interaction modes in various usage scenarios, and so on.
(5) Bandwidth of a communication device
Bandwidth refers to the amount of data that a link can carry in a fixed time, i.e., the data transfer capability of a transmission pipe. In digital devices, bandwidth is typically expressed in bps, the number of bits that can be transmitted per second.
In the embodiment of the application, the bandwidth represents the amount of data that each queue in the gating list can transmit; it is calculated by multiplying the transmission speed of the network card by the time slot corresponding to the queue.
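As a small illustration of this calculation, the following sketch multiplies an assumed network card speed by an assumed time slot length; the numbers are examples only.

    def queue_bandwidth_bits(link_speed_bps, slot_ms):
        """Data volume (in bits) that one queue can send within its time slot."""
        return link_speed_bps * (slot_ms / 1000.0)

    # Example: a 1 Gbit/s network card and a 1 ms slot give 1,000,000 bits per slot.
    print(queue_bandwidth_bits(1_000_000_000, 1))   # 1000000.0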
(6) Terminal node
Nodes on the network are core components of network activity, including end nodes and intermediate nodes. Terminal nodes are typically network-connected applications and devices that send or receive information using a network; the intermediate node provides network service functions such as information transfer service and information flow control. For example, the end node may represent a control device such as a computer, while the non-end node is a device such as a switch, a router, or the like that provides a data forwarding function. The non-terminal nodes provide network connectivity services for the terminal nodes.
In order to facilitate understanding, a flow scheduling method provided by the embodiment of the application is specifically described below with reference to the accompanying drawings and application scenarios.
Referring to fig. 6, fig. 6 is a schematic flow chart of another flow scheduling method according to an embodiment of the present application. In one alternative embodiment, the flow scheduling method shown in FIG. 6 may be applied to the autonomous vehicle shown in FIG. 1, and specifically to an intermediate layer of the communication system of the autonomous vehicle. It will be appreciated that the autonomous vehicle shown in fig. 1 is only one application scenario, and is not specifically limited herein.
As shown in fig. 6, the traffic scheduling method includes the following steps 601 to 605.
In step 601, the DDS middleware acquires a first data stream.
In the embodiment of the application, when a new data stream is generated, the DDS middleware acquires the data stream to identify the flow type of the data stream.
In step 602, the DDS middleware determines the type of the first data stream; if the determination result is burst traffic, step 603 is entered, and if the determination result is periodic traffic, step 605 is entered.
In the embodiment of the application, it can be understood that the types of data traffic can be divided into periodic traffic, burst traffic, best-effort traffic and the like. Specifically, traffic types with QoS requirements are collectively referred to as QoS traffic, including periodic traffic, burst traffic, and the like. Some other traffic types, such as downloads, emails, etc., do not require any strict QoS support. Traffic types without QoS requirements are collectively referred to as best-effort (BE) traffic. Burst traffic refers to extra traffic generated during normal operation of the network, for example emergency braking information, fault information and the like, and is characterized by aperiodic generation, high real-time requirements and so on. Illustratively, at the in-vehicle network operation stage, the DDS has rich QoS capabilities. For example, a RELIABILITY QoS may cause the DDS layer to generate retransmissions when packets are lost, and a DURABILITY (persistence) QoS allows the DDS layer to retransmit historical data packets when a new data subscriber comes online. These behaviors all produce unplanned burst traffic. Therefore, it is necessary to first judge the type of the data stream in order to perform the corresponding operation.
In step 603, the DDS middleware obtains queue allocation information corresponding to the queue allocation request, where the queue allocation information is determined based on the free bandwidth of at least one queue in the gating list, and the queue allocation information indicates that a first queue in the at least one queue is allocated to the first data stream.
In the embodiment of the application, if the DDS middleware identifies that the data type of the acquired first data stream is burst traffic, the first data stream needs to be scheduled in time. Because the first data stream is an unplanned burst traffic, the DDS middleware distributes the first data stream to a first queue meeting the condition based on the idle bandwidth of at least one queue in the gating list configured in advance, and generates queue distribution information.
It will be appreciated that when unplanned bursty traffic is generated, the bursty traffic is typically sent out in a timely manner by reserving a specified bandwidth (i.e., corresponding to a specified queue and time slot) for the bursty traffic in a gating list. But there are two problems with this approach. 1. If unscheduled bursty traffic is not generated, the reserved bandwidth is unused and wasted. 2. After the designated queue and the designated time slot are allocated to the burst traffic, if the burst traffic misses the designated transmission time slot, the burst traffic can only be transmitted through the designated time slot in the next period, so that the burst traffic cannot be processed in time. In order to avoid the problems, the embodiment of the application can realize the dynamic scheduling of the burst traffic under the condition of not occupying extra bandwidth resources based on the idle bandwidth in the gating list.
In a possible implementation manner, the DDS middleware may determine that the type of the first data stream is burst traffic by:
Acquiring the DDS Topic corresponding to the first data stream, and if the data stream type corresponding to the DDS Topic is burst traffic, determining that the first data stream is burst traffic; and/or,
If the first data stream is a retransmission data stream, determining the type of the first data stream as burst traffic.
In this possible implementation, two implementations are used to identify the type of the first data stream. In the first method, all traffic under the specified Topic is determined as burst traffic based on the Topic. In the second method, the retransmitted data streams are each determined as burst traffic. The two implementations may be implemented in combination or separately.
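As an illustration of the two identification methods, which may be combined or used separately, the following sketch treats a data stream as burst traffic if its DDS Topic is configured as a burst-traffic Topic or if it is a retransmitted stream; the Topic names and stream fields are illustrative assumptions.

    BURST_TOPICS = {"emergency_brake", "fault_report"}   # Topics configured as burst traffic

    class Stream:
        def __init__(self, topic, is_retransmission=False):
            self.topic, self.is_retransmission = topic, is_retransmission

    def is_burst_traffic(stream):
        """A stream is burst traffic if its Topic is a burst Topic or it is retransmitted."""
        return stream.topic in BURST_TOPICS or stream.is_retransmission

    print(is_burst_traffic(Stream("emergency_brake")))        # True
    print(is_burst_traffic(Stream("camera_image", True)))     # True (retransmitted)
    print(is_burst_traffic(Stream("camera_image")))           # False -> periodic/BE path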
Further, after the DDS middleware determines that the type of the first data stream is burst traffic, the queue allocation request is first acquired. In one possible implementation, the queue allocation request includes a transmission bandwidth required by the first data flow and QoS requirements of the first data flow;
The first queue satisfies the following condition:
the free bandwidth of the first queue meets the transmission bandwidth required by the first data flow, and the QoS performance of the first queue meets the QoS requirement of the first data flow.
In this possible implementation manner, before the vehicle network operates, qoS requirements corresponding to each data flow may be configured to the data writer by QoS configuration information, so that QoS requirements corresponding to each data type may be directly obtained. In order to allocate the first data stream in the gating list to the available free bandwidth, the free bandwidth needs to meet the transmission requirements of the first data stream. The DDS middleware configures transmission requirements of the first data stream in the queue allocation request of the first data stream. Specifically, the transmission requirement of the first data stream may include a transmission bandwidth required by the first data stream and a QoS requirement of the first data stream, and the transmission requirement of the first data stream is matched with a queue having an idle bandwidth in the gating list, so as to obtain a first queue meeting the transmission requirement of the first data stream, so that the first data stream can be successfully sent out through the first queue.
It can be appreciated that, since the DDS technology uses the RTPS protocol to perform data transmission, the transmission bandwidth required for the first data stream may be the packet length corresponding to encapsulating the first data stream into the RTPS packet. The first data stream may be transmitted only if the free bandwidth of the first queue is greater than or equal to the transmission bandwidth required by the first data stream. In addition, when the gating list is configured, the QoS configuration information corresponding to different data flows can be configured in different queues of the gating list, so that the QoS performance of different queues in the gating list is different. And the data flows can also correspond to multiple QoS requirements, and the QoS requirements corresponding to different data flows can be different. Therefore, it is necessary to determine whether the QoS performance of the first queue meets the QoS requirement of the first data flow, and only if so, the first data flow can be successfully sent out through the first queue.
Illustratively, the QoS requirements of the first data flow are assumed to include a packet length requirement, a latency requirement (delay) and a priority requirement (TRANSPORT_PRIORITY), and the gating list is assumed to further include the latency requirement of each queue.
(1) Packet length requirements. I.e. the free bandwidth of the first queue meets the transmission bandwidth required by the first data stream.
(2) Time delay requirements. Assuming that the delay requirement of the first data stream is t, if the DDS middleware receives the queue allocation request at the time t1 in the current TSN period, it is required to find an idle bandwidth meeting the condition at the time t1+t.
(3) Priority requirements. Matching is generally performed according to a rule that prefers a queue of the same priority; for example, assuming the priority level of the first data stream is 3, finding priority level 3 in the gating list can satisfy the priority requirement of the first data stream. However, it should be noted that the priority levels of the first data stream and the priority levels in the gating list are not necessarily in one-to-one correspondence, that is, priority level 3 of the first data stream and priority level 3 in the gating list are not equivalent. In addition, the same-priority selection rule is only one implementation; other priority selection methods may also be adopted, which are not limited herein.
In addition, if the QoS requirement of the first data flow further includes DURABILITY (persistence), the priority requirement of the first data flow does not need to be considered; after determining that the packet length requirement is satisfied, a queue may be selected directly from the available free bandwidth (preferably starting from the low priorities). That is, if the first data stream has a DURABILITY QoS requirement, the priority requirement does not need to be considered. It can thus be appreciated that, in actual operation, if the first data flow contains multiple QoS requirements, the nature of each QoS requirement itself needs to be taken into account when combining the conditions, and different conditions need to be applied accordingly. That is, the specific content of the conditions to be satisfied by the first queue needs to be set according to the QoS requirements of the data flow, which is not limited herein.
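As an illustration of the conditions discussed above, the following sketch checks, for one candidate queue, the packet length requirement, the delay requirement and the priority requirement (skipping the priority check when the stream carries a DURABILITY QoS requirement); the dictionary fields are illustrative assumptions.

    def queue_satisfies(queue, request, now_ms):
        """queue: {'free_bits', 'open_at_ms', 'priority'};
        request: {'packet_bits', 'deadline_ms', 'priority', 'durability'}."""
        if queue["free_bits"] < request["packet_bits"]:
            return False                                  # packet length requirement
        if queue["open_at_ms"] > now_ms + request["deadline_ms"]:
            return False                                  # delay requirement
        if not request.get("durability") and queue["priority"] != request["priority"]:
            return False                                  # priority requirement
        return True

    candidate = {"free_bits": 120_000, "open_at_ms": 5, "priority": 3}
    burst_req = {"packet_bits": 96_000, "deadline_ms": 10, "priority": 3, "durability": False}
    print(queue_satisfies(candidate, burst_req, now_ms=0))   # True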
Optionally, there may be multiple sub-functional modules in the DDS middleware. The type of the data stream may be identified by a first module in the DDS middleware, and a queue allocation request may be sent to a second module in the DDS middleware, where the second module receives the queue allocation request and generates queue allocation information. In the embodiment of the application, after generating the queue allocation request, the DDS middleware obtains and generates the queue allocation information according to the queue allocation request, and when explaining the method embodiment, the source of the queue allocation request is not limited.
Further, after the DDS middleware obtains the queue allocation request, since the queue allocation information is determined based on the free bandwidth of at least one queue in the gating list, the gating list needs to be obtained first before the corresponding queue allocation information is generated according to the queue allocation request. Wherein, in one possible implementation manner, the gating list acquired by the DDS middleware is generated based on QoS configuration information of at least one data stream, and the data stream includes a first data stream;
The QoS configuration information includes at least one of:
The data type of the data stream, the transmission period requirement of the data stream, the delay requirement of the data stream, the size of the data stream and the priority identification of the data stream.
In this possible implementation manner, optionally, the central network controller of the TSN layer calculates the gating list based on QoS configuration information of each data flow configured in advance, and issues the gating list to each network node (including the terminal nodes). For example, referring to table 3 below, table 3 gives QoS configuration information of data flows, with different data flow types classified into different priority levels.
TABLE 3
The TSN protocol stack generates a gating list according to QoS configuration information of at least one data stream, and sends basic information of the gating list to each node in the vehicle-mounted network, wherein the node comprises DDS type communication middleware, so that the DDS type communication middleware timely schedules the data stream according to the gating list, and normal communication of the vehicle-mounted network is ensured.
It should be noted that the types and parameters of the respective data streams shown in table 3 are only examples. The QoS configuration information of each data flow is not limited to the respective indices shown in table 3, and is not limited thereto.
In one possible implementation, the free bandwidth in the gating list is updated after the first data stream and the queue allocation information are sent to the time sensitive network TSN protocol stack.
In this possible implementation manner, in a transmission period of one TSN, the idle bandwidth in the gating list acquired by the DDS middleware may decrease with the generation of the burst traffic, so after each burst traffic is transmitted, the idle bandwidth in the gating list needs to be updated and maintained in real time, so that when the idle bandwidth is used for allocating bandwidth for the burst traffic next time, the correct idle bandwidth can be acquired.
Further, after the DDS middleware obtains the gating list, queue allocation information is determined according to the idle bandwidth of at least one queue in the gating list. Wherein in one possible implementation, the idle bandwidth may be determined by:
acquiring a first bandwidth allocated to each queue in at least one queue in a gating list;
acquiring a second bandwidth actually used by each queue in at least one queue in a gating list;
And determining the idle bandwidth of at least one queue in the gating list according to the difference value between the first bandwidth and the second bandwidth of each queue in the at least one queue.
In this possible implementation, when the gating list is generated, bandwidth resources are allocated to each queue in the gating list, where the bandwidth allocated to each queue is referred to as a rated bandwidth, i.e., a first bandwidth. During the running process of the vehicle-mounted network, the bandwidth actually used by each queue of the gating list is the second bandwidth. For each queue in the gating list, the idle bandwidth of the queue can be determined by calculating the difference between the first bandwidth and the second bandwidth of the queue.
It can be appreciated that in the actual running process, the idle bandwidth may be calculated in advance in the current period, and after the current period is finished, the idle bandwidth of each queue in the gating list needs to be recalculated and updated.
For example, please refer to fig. 7a, fig. 7b and fig. 8. Fig. 7a is a schematic diagram of the first bandwidth allocated to the queues in the gating list of a traffic scheduling method according to an embodiment of the present application, fig. 7b is a schematic diagram of the second bandwidth actually used by the queues in the gating list of a traffic scheduling method according to an embodiment of the present application, and fig. 8 is a schematic diagram of the free bandwidth of the queues in the gating list of a traffic scheduling method according to an embodiment of the present application. The abscissa represents the time slot of each queue, and the ordinate represents the priority level corresponding to each queue. In the DDS middleware, different data streams correspond to different Topics, and the priority levels and delay requirements corresponding to different Topics differ. Assume that the data stream corresponding to Topic A is allocated in the time slot ranges (0, 1) and (2, 3) with priority level 3, and the data stream corresponding to Topic B is allocated in the time slot ranges (2, 3) and (3, 4) with priority level 1. The free bandwidth of each queue in the gating list is determined by calculating the difference between the first bandwidth and the second bandwidth of that queue. As shown in fig. 8, in the schematic diagram of the finally generated free bandwidth, the proportion of the width of the white square to the current time slot represents the available free bandwidth of that time slot. For example, within the time slot range t = (0, 1), half of the bandwidth is free.
It should be noted that, for simplicity of explanation, only six queues are shown in fig. 7a and fig. 7b, and the priority level of each queue is only an example, which is not limited herein.
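As an illustration of this calculation, the following sketch derives the free bandwidth of each queue as the difference between the allocated first bandwidth and the actually used second bandwidth; the queue identifiers and figures are illustrative assumptions.

    def free_bandwidth(rated_bits, used_bits):
        """Both arguments map queue id -> bits per TSN period."""
        return {q: max(rated_bits[q] - used_bits.get(q, 0), 0) for q in rated_bits}

    rated = {3: 1_000_000, 1: 500_000}      # first bandwidth, taken from the gating list
    used = {3: 600_000, 1: 500_000}         # second bandwidth, measured in this period
    print(free_bandwidth(rated, used))       # {3: 400000, 1: 0}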
Step 604, a first data stream and queue allocation information are sent to a time sensitive network TSN protocol stack.
In the embodiment of the application, the queue allocation information comprises priority information of the first queue meeting the condition and the sending time corresponding to the first queue. Specifically, after generating queue allocation information meeting the conditions according to the queue allocation request, priority information of a first queue and a first data stream are sent to the TSN protocol stack in corresponding sending time slots according to corresponding time information in the queue allocation information. It should be understood that different queues in the gating list correspond to different time slots, and that the queues corresponding to the time slots can only be used to send data out in a designated time slot.
In one possible implementation manner, the method for sending the first data flow and the queue allocation information to the time sensitive network TSN protocol stack specifically includes:
And sending an Ethernet frame carrying a virtual local area network tag to the time sensitive network TSN, wherein the Ethernet frame comprises a first data stream and priority information of a first queue in the queue allocation information.
In this possible implementation, specifically, the IEEE 802.1Q standard inserts a 4-byte virtual local area network (Virtual Local Area Network, VLAN) TAG between the source MAC (Media Access Control) address (6 bytes) and the type/length field (2 bytes) of a standard Ethernet frame to define its characteristics; such a frame may be referred to as an 802.1Q protocol frame or a TSN frame. The transmission of the first data stream and the queue allocation information is completed by carrying the queue allocation information in the PCP field of the VLAN TAG and carrying the first data stream in the data portion of the 802.1Q protocol frame.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an 802.1Q protocol frame of a traffic scheduling method according to an embodiment of the present application. The VLAN TAG field of the 802.1Q protocol frame is 4 bytes, and specifically comprises:
(1) TPID (Tag Protocol Identifier). The TPID consists of 2 bytes and is used to distinguish VLAN from non-VLAN data frames; for a VLAN frame its value is 0x8100.
(2) TCI (Tag Control Information). The TCI consists of 2 bytes and includes three parts: the PCP (Priority Code Point), the CFI (Canonical Format Indicator) and the virtual local area network number VLAN ID. The PCP represents the priority of the frame and consists of 3 bits, with a value range generally from 0 to 7, i.e., 8 priorities in total. The CFI identifies whether the MAC address is encapsulated in a standard format in different transmission media; it consists of 1 bit and is typically 0 (standard format). The VLAN ID marks the VLAN number to which the message belongs; it consists of 12 bits with a value range of 0-4095, of which 0 and 4095 are generally reserved.
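As an illustration of the VLAN TAG layout described above, the following sketch assembles the 4-byte tag from the TPID (0x8100) and the TCI, in which the PCP occupies the upper 3 bits, the CFI 1 bit and the VLAN ID the lower 12 bits; the example PCP and VLAN ID values are illustrative assumptions.

    import struct

    def build_vlan_tag(pcp, vlan_id, cfi=0):
        """Pack the 4-byte 802.1Q VLAN TAG: 2-byte TPID followed by 2-byte TCI."""
        assert 0 <= pcp <= 7 and 0 <= vlan_id <= 4095
        tci = (pcp << 13) | (cfi << 12) | vlan_id
        return struct.pack("!HH", 0x8100, tci)     # network byte order

    tag = build_vlan_tag(pcp=3, vlan_id=100)       # queue priority 3, VLAN 100
    print(tag.hex())                               # 81006064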
It can be understood that the data transmission with the TSN protocol stack is completed by sending an ethernet frame carrying a virtual local area network tag, whether it is a periodic traffic or a burst traffic. After receiving the 802.1Q protocol frame, the TSN protocol stack parses the PCP field and the data field in the 802.1Q protocol frame to obtain priority information of the first data stream and the first queue. And then, according to the mapping relation between the priority of the first queue and the priority of the queue in the TSN protocol stack, distributing the first data stream to the corresponding queue in the TSN protocol stack, and sending the first data stream out through the queue so as to complete the data transmission of the first data stream. In the embodiment of the application, the VLAN PCP flag bit corresponding to the data stream is dynamically modified, so that the data stream can be indirectly sent out through different queues in the TSN protocol stack.
Further, after receiving the 802.1Q protocol frame, the TSN protocol stack parses the PCP field and the data field in the 802.1Q protocol frame to obtain information of the second data stream and the second queue. And then, distributing the second data stream to the corresponding queue in the TSN protocol stack according to the mapping relation between the priority of the second queue and the priority of the queue in the TSN protocol stack, and sending the second data stream out through the queue so as to complete the data transmission of the second data stream.
It should be noted that services with different priorities correspond to different PCP codes. The 3-bit PCP code defines 8 priorities, 0 to 7. In the embodiment of the application, it is assumed that the mapping from the VLAN PCP flag bit to the queues of the TSN protocol stack is a simple linear mapping, i.e., a PCP of 7 maps to PCP 7 in the TSN protocol stack. The present application assumes such a mapping relationship but is not limited to it. The VLAN PCP is only one expression of the priority of the data stream; the priority of the data stream may also be expressed in other ways, which are not limited herein.
Step 605, a second queue of the first data stream in the gating list is obtained, and information of the first data stream and the second queue is sent to the TSN protocol stack.
In this possible implementation manner, before the vehicle network operates, the TSN layer generates a gating list according to QoS configuration information corresponding to each data traffic type, and issues the gating list to each network node. Therefore, if the DDS middleware determines that the type of the first data stream is the periodic traffic, the DDS middleware may directly obtain the second queue corresponding to the first data stream in the gating list, and send the first data stream to the TSN protocol stack through the second queue when the first data stream arrives in the time slot corresponding to the second queue.
In one possible implementation manner, the method for sending the first data stream and the information of the second queue to the time sensitive network TSN protocol stack specifically includes:
Sending an Ethernet frame carrying a virtual local area network tag to the time sensitive network TSN, where the Ethernet frame includes the first data stream and priority information of the second queue.
With respect to the specific content of the ethernet frame, reference may be made to the schematic diagram shown in fig. 9. It can be understood that the data transmission with the TSN protocol stack is completed by sending an ethernet frame carrying a virtual local area network tag, whether it is a periodic traffic or a burst traffic. In addition, it should be understood that there is a certain mapping relationship between the priority of each queue in the gating list and the priority of the transmission queue of the TSN protocol stack. After the periodic traffic is sent to the TSN protocol stack through the second queue, the TSN protocol stack sends the periodic traffic out through a mapping queue corresponding to the second queue.
Step 606, a clock reference source is obtained, and clock synchronization is performed according to the clock reference source, where the source of the clock reference source is the same as the source of the TSN protocol stack.
In the embodiment of the application, each node in the network of the TSN protocol stack is synchronized by a clock. In order to ensure the synchronous data transmission between the DDS type communication middleware and the TSN protocol stack, the clock reference source of the DDS type communication middleware is the same as that of the TSN protocol stack so as to ensure the clock synchronization between the DDS type communication middleware and the TSN protocol stack, thereby further realizing the deterministic communication of data traffic. Alternatively, the clock reference source may be determined using a precision time protocol (Precision Time Protocol, PTP) technique. The clock reference source may be selected according to actual requirements or experiments, and is not limited herein.
It should be noted that, the step numbers shown in fig. 6 do not represent the actual execution sequence of the traffic scheduling method according to the embodiment of the present application. In the actual operation process, optionally, according to actual requirements, step 605 may be executed first, and then other steps are executed on the basis that the DDS middleware and the TSN protocol stack keep clock synchronization.
Referring to fig. 10, fig. 10 is a schematic flow chart of a flow scheduling method according to an embodiment of the present application. The flow rate scheduling method shown in fig. 10 may be applied to the intelligent driving apparatus shown in fig. 1. As shown in fig. 10, the traffic scheduling method includes the following steps 1001 to 1002.
In step 1001, a first data stream is acquired.
It should be noted that, step 1001 is similar to step 601 described above, and specific embodiments and technical details are not described herein with reference to step 601.
Step 1002, if the type of the first data stream is determined to be burst traffic, a first queue corresponding to the first data stream in the gating list is obtained, and information of the first data stream and the first queue is sent to the TSN protocol stack of the time sensitive network, where the first queue corresponds to a queue allocated for burst traffic in the gating list.
In the embodiment of the present application, if the type of the first data flow is determined to be burst traffic, the operation is performed according to the steps shown in fig. 6, but if the first queue meeting the condition is not selected according to the queue allocation request, the queue allocation information corresponding to the first data flow cannot be obtained. In this case, the bursty traffic can only be handled by other methods. As an alternative embodiment, bandwidth resources are reserved for the burst traffic in the gating list, and when the newly generated first data stream is determined to be the burst traffic, the first data stream is sent out through the corresponding first queue in the time slot corresponding to the reserved bandwidth through the reserved bandwidth.
It can be appreciated that, before the vehicle network operates, the TSN layer generates a gating list according to QoS configuration information corresponding to each data traffic type, and issues the gating list to each network node. Therefore, when the newly generated first data stream is identified as burst traffic and the matched queue allocation information is not found, the first data stream is directly transmitted to the TSN protocol stack through the corresponding first queue in the gating list, and when the first data stream arrives in the time slot corresponding to the first queue, the first data stream is transmitted to the TSN protocol stack through the first queue.
It should be noted that, if the first data stream generated in the current period has missed the transmission time slot corresponding to the burst traffic, the next period needs to be waited, and the first data stream is transmitted in the time slot corresponding to the next period.
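As an illustration of this fallback, the following sketch computes the next moment at which the reserved queue's time slot opens, waiting for the next period if the slot in the current period has already been missed; the time values are illustrative assumptions.

    def reserved_slot_send_time(now_ms, slot_start_ms, period_ms):
        """Return the next moment the reserved queue's slot opens at or after now_ms."""
        if now_ms % period_ms <= slot_start_ms:
            return now_ms - now_ms % period_ms + slot_start_ms              # later in this period
        return now_ms - now_ms % period_ms + period_ms + slot_start_ms      # wait for the next period

    print(reserved_slot_send_time(now_ms=120, slot_start_ms=400, period_ms=1000))   # 400
    print(reserved_slot_send_time(now_ms=650, slot_start_ms=400, period_ms=1000))   # 1400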
In order to better implement the above-described scheme of the embodiment of the present application on the basis of the embodiments corresponding to fig. 1 to 10, the following provides a related apparatus for implementing the above-described scheme. Referring to fig. 11, fig. 11 is a schematic structural diagram of a flow dispatching device according to an embodiment of the present application, where the flow dispatching device is deployed on the intelligent driving apparatus shown in fig. 1. The traffic scheduling device 1100 includes:
The transmit adaptation module 1101 is configured to acquire the first data flow, and if it is determined that the type of the first data flow is burst traffic, send a queue allocation request to the traffic scheduling module 1102.
The traffic scheduling module 1102 is configured to generate queue allocation information according to the queue allocation request, and send the queue allocation information to the sending adapting module 1101, where the queue allocation information is determined based on an idle bandwidth of at least one queue in the gating list, and the queue allocation information indicates that a first queue in the at least one queue is allocated to the first data stream.
The transmit adaptation module 1101 is further configured to transmit the first data stream and the queue allocation information to a TSN protocol stack of the time sensitive network.
For a specific description of the transmission adaptation module 1101 and the traffic scheduling module 1102, reference may be made to descriptions of steps 601 to 603 in the above embodiments, which are not repeated herein.
In one possible implementation, the queue allocation request includes a transmission bandwidth required by the first data flow and QoS requirements of the first data flow;
The first queue satisfies the following condition:
the free bandwidth of the first queue meets the transmission bandwidth required by the first data flow, and the QoS performance of the first queue meets the QoS requirement of the first data flow.
In a possible implementation manner, the traffic scheduling module is further configured to:
acquiring a first bandwidth allocated to each queue in at least one queue in a gating list;
acquiring a second bandwidth actually used by each queue in at least one queue in a gating list;
And determining the idle bandwidth of at least one queue in the gating list according to the difference value between the first bandwidth and the second bandwidth of each queue in the at least one queue.
In this possible implementation manner, the traffic scheduling device 1100 optionally further includes a traffic detection module, where the traffic detection module is configured to detect the second bandwidth actually used by each queue in at least one queue in the gating list, and send the second bandwidth actually used by each queue to the traffic scheduling module. During operation of the on-board network, the flow detection module monitors the actual bandwidth used by each DATAWRITER during a TSN period. Where the TSN period represents the sum of the time slots of the various queues in the gating list, and also corresponds to the deadline shown in table 3.
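As an illustration of this monitoring, the following sketch accumulates, within one TSN period, the bytes actually sent per queue and returns the counters when the period ends; the class and field names are illustrative assumptions.

    from collections import defaultdict

    class TrafficDetector:
        def __init__(self):
            self.used_bits = defaultdict(int)       # queue id -> bits sent in this period

        def on_packet_sent(self, queue_id, packet_len_bytes):
            self.used_bits[queue_id] += packet_len_bytes * 8

        def end_of_period(self):
            """Return the second (actually used) bandwidth and start a new period."""
            snapshot = dict(self.used_bits)
            self.used_bits.clear()
            return snapshot

    detector = TrafficDetector()
    detector.on_packet_sent(queue_id=3, packet_len_bytes=1500)
    detector.on_packet_sent(queue_id=3, packet_len_bytes=400)
    print(detector.end_of_period())                 # {3: 15200}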
In a possible implementation manner, the apparatus further includes a clock management module, configured to:
and acquiring a clock reference source, and performing clock synchronization according to the clock reference source, wherein the source of the clock reference source is the same as the source of the TSN protocol stack.
In a possible implementation manner, the sending adapting module 1101 is specifically configured to:
Acquiring the DDS Topic corresponding to the first data stream, and if the data stream type corresponding to the DDS Topic is burst traffic, determining that the first data stream is burst traffic; and/or,
If the first data stream is a retransmission data stream, determining the type of the first data stream as burst traffic.
In a possible implementation manner, the sending adaptation module 1101 is further configured to obtain a second data stream;
The sending adapting module 1101 is further configured to, if it is determined that the type of the second data stream is periodic traffic, obtain a second queue of the second data stream in the gating list, and send information of the second data stream and the second queue to the TSN protocol stack.
In a possible implementation, the gating list is generated based on QoS configuration information of at least one data flow, the data flow comprising a first data flow;
The QoS configuration information includes at least one of:
The data type of the data stream, the transmission period requirement of the data stream, the delay requirement of the data stream, the size of the data stream and the priority identification of the data stream.
In this embodiment, the operations performed by each unit in the flow scheduling apparatus 1100 are similar to those described in the foregoing method embodiment shown in fig. 6, and may be used to implement the functions of the foregoing method embodiment, and also may implement the beneficial effects of the foregoing method embodiment, which are not described herein again.
To facilitate understanding of the traffic scheduling method provided by the embodiment of the application, an end-to-end transmission scenario in the in-vehicle network is taken as an example below to describe the implementation of the functional modules in the traffic scheduling device. Suppose that end node 1 needs to transmit data to end node 2, and the DDS service is deployed on the end nodes.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a DDS type communication middleware according to an embodiment of the present application. In order to timely and accurately send the data stream to the TSN protocol stack through the DDS middleware, four virtual function modules are newly added in the DDS middleware, namely a clock management module 1201, a sending adaptation module 1202, a flow scheduling module 1203 and a flow detection module 1204. The diagram assumes that only one DDS process is running on one end node. If a plurality of DDS processes are to be run on the terminal node, the same function can be achieved after the four main modules are adapted. Alternatively, the above components are only an example, and in practical applications, components in the above modules may be added or deleted according to actual needs, and fig. 12 should not be construed as limiting the embodiments of the present application. In addition, only the data stream transmission relationship between the newly added functional modules is shown in the schematic diagram, and other specific connection relationships and other modules included in the DDS type communication middleware are not described here too much.
The communication process is specifically as follows:
(1) Before the network communication system of the vehicle operates:
The TSN protocol stack generates a gating list according to the QoS configuration information (table 3 above) of at least one data flow, where the QoS configuration items control the packet sending frequency, priority, life cycle, etc. of each DataWriter, and sends the basic information of the gating list to each network node in the vehicle-mounted network, including the terminal nodes and the DDS type communication middleware.
(2) An initialization stage:
The DDS service is configured using the transmission priority (TRANSPORT_PRIORITY) of the DDS middleware. Taking the data types corresponding to the four Topics shown in table 3 above as an example, four DataWriters are generated inside the DDS service, namely DataWriter1, DataWriter2, DataWriter3 and DataWriter4. These four DataWriters correspond to Topics A, B, C and D in table 3 above respectively and are configured with the priority levels corresponding to those Topics, so that, in accordance with the gating list, each data stream is sent out using the corresponding priority level in the gating list.
(3) During the operation process:
a) If it is detected that a new first data stream is generated, the first data stream is acquired by the transmit adaptation module 1202 and traffic identification is performed. If the first data stream is identified as the periodic traffic, the first data stream is directly sent to a corresponding network card queue in the TSN protocol stack. If the first data stream is identified as burst traffic, a current free bandwidth in the gating list is acquired and an available queue is selected from the free bandwidth. The specific manner of queue selection is that the sending adaptation module 1202 sends a queue allocation request to the traffic scheduling module 1203 to obtain the available free bandwidth. Wherein the queue allocation request includes a transmission bandwidth required by the first data stream and a QoS requirement of the first data stream.
B) The traffic scheduling module 1203 receives the queue allocation request, generates queue allocation information according to the queue allocation request, and then sends the queue allocation information to the transmission adaptation module 1202. Wherein the queue allocation information includes priority information (PCP) of the first queue satisfying the condition and a corresponding slot (t). The specific way of generating the queue allocation information is that the traffic scheduling module 1203 obtains the actually used bandwidth of each queue in the gating list from the traffic detecting module 1204 in advance, obtains the idle bandwidth of each queue according to the actually used bandwidth of each queue in the gating list and the allocated rated bandwidth, and then generates the queue allocation information when the following conditions are satisfied. The conditions specifically include: the free bandwidth of the first queue meets the transmission bandwidth required by the first data flow, and the QoS performance of the first queue meets the QoS requirement of the first data flow.
Optionally, referring to fig. 13, fig. 13 is a schematic diagram of a hierarchical structure of a traffic scheduling module according to an embodiment of the present application. As shown in fig. 13, the traffic scheduling module 1203 includes at least a buffer layer 1301, a free bandwidth layer 1302, a resource pool layer 1303, and a queue allocation layer 1304. The traffic scheduling module 1203 calculates, in the buffer layer 1301, a quota bandwidth allocated to each queue according to the acquired gating list. The idle bandwidth layer 1302 receives the bandwidth actually used by each queue in the gating list sent by the traffic detection module 1204, calculates the difference between the rated bandwidth of each queue and the actually used bandwidth, obtains the idle bandwidth corresponding to each queue, and stores the result in the resource pool layer 1303. If a queue allocation request sent from the sending adaptation module 1202 is received in the resource pool layer 1303, the available free bandwidth of each queue is traversed in the resource pool layer 1303 according to the scheduling order of each queue in the gating list, and if a queue satisfying the condition is found, information of PCP (representing a queue sequence number) and t (representing when the queue is available) is returned to the sending adaptation module 1202 in the queue allocation layer 1304.
It should be noted that the above layering manner adopted in the traffic scheduling module 1203 is only an alternative embodiment, and is not limited herein. By arranging the layered structure in the flow scheduling module 1203, the dependency relationship among different layers is reduced, and the maintenance is convenient.
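As an illustration of the layered flow described above, the following sketch keeps the rated bandwidth per queue, subtracts the measured usage, and traverses the queues in scheduling order to return (PCP, t) for the first queue that fits; the data structures and figures are illustrative assumptions.

    class TrafficScheduler:
        def __init__(self, gating_list):
            # gating_list entries: {'pcp', 'slot_start_ms', 'rated_bits'}, in scheduling order
            self.gating_list = gating_list                      # buffer layer
            self.resource_pool = {}                             # resource pool layer

        def update_free_bandwidth(self, used_bits):
            for entry in self.gating_list:                      # free bandwidth layer
                used = used_bits.get(entry["pcp"], 0)
                self.resource_pool[entry["pcp"]] = max(entry["rated_bits"] - used, 0)

        def allocate(self, packet_bits, deadline_ms, now_ms):
            for entry in self.gating_list:                      # queue allocation layer
                fits_bandwidth = self.resource_pool.get(entry["pcp"], 0) >= packet_bits
                fits_deadline = entry["slot_start_ms"] <= now_ms + deadline_ms
                if fits_bandwidth and fits_deadline:
                    self.resource_pool[entry["pcp"]] -= packet_bits
                    return entry["pcp"], entry["slot_start_ms"]  # (PCP, t)
            return None                                          # fall back to the reserved queue

    scheduler = TrafficScheduler([
        {"pcp": 3, "slot_start_ms": 2, "rated_bits": 1_000_000},
        {"pcp": 1, "slot_start_ms": 4, "rated_bits": 500_000},
    ])
    scheduler.update_free_bandwidth({3: 900_000, 1: 0})
    print(scheduler.allocate(packet_bits=200_000, deadline_ms=10, now_ms=0))   # (1, 4)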
C) The transmit adaptation module 1202 receives the queue allocation information, encapsulates the queue allocation information into an ethernet frame carrying a VLAN TAG, and transmits the ethernet frame to the TSN protocol stack.
It should be noted that, before the current period starts, the historical data of the actual bandwidth used by each DataWriter, as detected by the traffic detection module 1204, is generally obtained in advance, and after the current period ends, the free bandwidth in the gating list is recalculated and updated based on the traffic detection module 1204. For the way the traffic detection module 1204 detects the actual bandwidth used by each DataWriter, refer to fig. 14, which is a schematic detection diagram of the traffic detection module according to an embodiment of the present application. In fig. 14, the actual bandwidth used by each DataWriter is detected by the traffic detection module; the hatched area corresponding to each Topic in the figure indicates the queue depth corresponding to that Topic in the gating list, and the queue depth indicates the transmission buffer size corresponding to the DataWriter. The queue depth is pre-configured and is related to the QoS configuration items of the data flow. Taking Topic A as an example, assume that the data type corresponding to Topic A includes brake data. A1 represents the most recently received brake data, and A2-A4 represent historical brake data produced earlier. The main reason for storing the historical data in the transmission buffer is to facilitate retransmission of the data and to transmit the historical data when data is delayed.
It will be appreciated that, since the most recent brake data is the most useful, there is no need to buffer too much brake data in the queue. For data whose historical values are more valuable, for example Topic C, which is assumed to correspond to image data, the demand for historical image data is larger, so the corresponding queue depth is larger and more historical data is stored in the queue.
D) Clock synchronization to each module in the DDS type communication middleware is completed periodically through the clock management module 1201.
Referring to fig. 15, fig. 15 is a schematic diagram of another structure of a flow scheduling device according to an embodiment of the present application. As shown in fig. 15, the traffic scheduler 1500 is implemented by a general bus architecture.
The traffic scheduler 1500 includes at least one processor 1501, a communication bus 1502, a memory 1503, and at least one communication interface 1504.
The processor 1501, the memory 1503, and the communication interface 1504 communicate via the communication bus 1502, or may communicate via other means such as wireless transmission. The memory 1503 is used for storing instructions, and the processor 1501 is used for executing the instructions stored in the memory 1503. The memory 1503 stores program codes, and the processor 1501 may call the program codes stored in the memory 1503 to execute steps 601 to 603 in the embodiment shown in fig. 6, and the detailed description of steps 601 to 603 in the embodiment shown in fig. 6 will be omitted herein.
The processor 1501 is optionally a general-purpose central processing unit (central processing unit, CPU), but may also be another general-purpose processor, a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device (programmable logic device, PLD), a transistor logic device, a hardware component or any combination thereof. The PLD is a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (FPGA), a generic array logic (generic array logic, GAL), or any combination thereof.
A communication bus 1502 is used to transfer information between the processor 1501, memory 1503, and communication interface 1504. The communication bus 1502 is classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, the figures are shown with only one line, but not with only one bus or one type of bus.
Alternatively, the memory 1503 is a read-only memory (ROM) or other type of static storage device that can store static information and instructions. The memory 1503 is alternatively a random access memory (random access memory, RAM) or other type of dynamic storage device that can store information and instructions. Alternatively, the memory 1503 is an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disk storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. Alternatively, the memory 1503 is independent and is coupled to the processor 1501 via the communication bus 1502. Optionally, the memory 1503 and the processor 1501 are integrated.
The communication interface 1504 uses any transceiver-like device for communicating with other devices or communication networks. The communication interface 1504 includes a wired communication interface. Optionally, the communication interface 1504 further includes a wireless communication interface. The wired communication interface is, for example, an ethernet interface. The ethernet interface is an optical interface, an electrical interface, or a combination thereof. The wireless communication interface is a wireless local area network (wireless local area networks, WLAN) interface, a cellular network communication interface, a combination thereof, or the like.
In a specific implementation, as one embodiment, processor 1501 includes one or more CPUs, such as CPU0 and CPU1 shown in FIG. 15.
In a specific implementation, as an embodiment, the traffic scheduling device 1500 includes a plurality of processors, such as the processor 1501 and the processor 1505 shown in fig. 15. Each of these processors is a single-core processor (single-CPU) or a multi-core processor (multi-CPU). A processor herein refers to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In some embodiments, the memory 1503 is used to store program code for performing aspects of the present application, and the processor 1501 executes the program code stored in the memory 1503. That is, the flow scheduling apparatus 1500 implements the above-described embodiments of the flow scheduling method by the processor 1501 and the program code in the memory 1503.
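For illustration only, the sketch below outlines how program code executed by the processor 1501 could realize steps 601 to 603: each queue's idle bandwidth is taken as the difference between its allocated bandwidth and the bandwidth it actually uses, a queue whose idle bandwidth and QoS performance satisfy the flow is selected for burst traffic, and the stream together with the queue allocation information is handed to the TSN protocol stack. The dictionary fields, function names, and the tsn_send callback are assumptions, not the embodiment's actual interfaces.

def idle_bandwidth(allocated_mbps: float, used_mbps: float) -> float:
    # Idle bandwidth = allocated (first) bandwidth minus actually used (second) bandwidth.
    return max(allocated_mbps - used_mbps, 0.0)

def pick_queue(gating_list, required_mbps, qos_ok):
    # Return a queue whose idle bandwidth covers the required transmission bandwidth
    # and whose QoS performance satisfies the flow's QoS requirement.
    for q in gating_list:
        if idle_bandwidth(q["allocated_mbps"], q["used_mbps"]) >= required_mbps and qos_ok(q):
            return q
    return None

def schedule(stream, gating_list, tsn_send, qos_ok):
    # Steps 601-603 in outline: classify the stream, allocate a queue for burst
    # traffic, and send the stream plus queue allocation information to the TSN stack.
    if stream["type"] == "burst":
        queue = pick_queue(gating_list, stream["required_mbps"], qos_ok)
        if queue is not None:
            tsn_send(stream, {"queue_id": queue["id"]})
    else:
        # Periodic traffic already has a queue configured in the gating list.
        tsn_send(stream, {"queue_id": stream["configured_queue_id"]})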
It will be appreciated that the method steps of embodiments of the application may be implemented in hardware, or in software instructions executable by the processor 1501. The software instructions may be comprised of corresponding software modules that may be stored in random access memory, flash memory, read only memory, programmable read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art. In addition, the scope of the device described in the present application is not limited thereto, and the structure of the device may not be limited by fig. 15. The apparatus may be a stand-alone device or may be part of a larger device. For example, the device may be:
(1) A stand-alone integrated circuit IC, or chip, or a system-on-a-chip or subsystem;
(2) Having a set of one or more ICs, which may optionally also include storage means for storing data and/or instructions;
(3) Modules that may be embedded within other devices;
(4) A receiver, terminal, intelligent terminal, wireless device, handset, mobile unit, vehicle-mounted device, artificial intelligent device, machine device, home device, medical device, industrial device, etc.;
(5) Others, and so on.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the apparatus described above, which is not described herein again.
The embodiment of the application also provides an autonomous vehicle. Referring to fig. 16 in combination with the above description of fig. 1, fig. 16 is a schematic diagram of another structure of an autonomous vehicle according to an embodiment of the present application. The autonomous vehicle 1600 may be provided with the traffic scheduling device 1100 described in the embodiment shown in fig. 11, or the traffic scheduling device 1500 described in the embodiment shown in fig. 15. Since in some embodiments the autonomous vehicle 1600 may also include communication functions, the autonomous vehicle 1600 may include, in addition to the components shown in fig. 1: a receiver 1601 and a transmitter 1602, wherein the processor 113 may include an application processor 1131 and a communication processor 1132. In some embodiments of the application, the receiver 1601, transmitter 1602, processor 113, and memory 114 may be connected by a bus or other means.
The processor 113 controls the operation of the autonomous vehicle. In a particular application, various components of autonomous vehicle 1600 are coupled together by a bus system that may include, in addition to a data bus, a power bus, a control bus, a status signal bus, and the like. For clarity of illustration, however, the various buses are referred to in the figures as bus systems.
The receiver 1601 may be used to receive input numeric or character information and to generate signal inputs related to the relevant settings and function control of the autonomous vehicle. The transmitter 1602 is operable to output numeric or character information via a first interface; the transmitter 1602 may also be used to send instructions to the disk group through the first interface to modify data in the disk group; the transmitter 1602 may also include a display device such as a display screen.
In an embodiment of the present application, the application processor 1131 is configured to perform any of the method embodiments described above. It should be noted that, for the specific implementation manner and the beneficial effects of the application processor 1131 executing the traffic scheduling method, reference may be made to any of the method embodiments described above, and details are not repeated here.
The embodiment of the application also provides a computer storage medium storing a computer program which, when executed by a computer, causes the computer to implement any of the method embodiments described above.
The present application also provides a computer program product which, when executed by a computer, implements the functions of any of the method embodiments described above.
In an embodiment of the present application, a circuit system is further provided, where the circuit system includes a processing circuit configured to perform any of the method embodiments described above.
The apparatus provided by the embodiment of the present application may also be a chip, and the chip comprises: a processing unit, which may be, for example, a processor, and a communication unit, which may be, for example, an input/output interface, pins or circuitry, etc. The processing unit may execute the computer-executable instructions stored by the storage unit to cause the chip to perform any of the method embodiments described above. Optionally, the storage unit is a storage unit in the chip, such as a register, a cache, etc., and the storage unit may also be a storage unit on the wireless access device side located outside the chip, such as a read-only memory or another type of static storage device that can store static information and instructions, a random access memory, etc. The processor mentioned in any of the above may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the program of the method of the first aspect.
It will be appreciated that the apparatus and methods described in this application may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are referred to each other, and each embodiment is mainly described as a difference from other embodiments.
The naming or numbering of the steps in the present application does not mean that the steps in the method flow must be executed according to the time/logic sequence indicated by the naming or numbering, and the execution sequence of the steps in the flow that are named or numbered may be changed according to the technical purpose to be achieved, so long as the same or similar technical effects can be achieved. The division of the units in the present application is a logical division, and may be implemented in another manner in practical application, for example, a plurality of units may be combined or integrated in another system, or some features may be omitted or not implemented, and in addition, coupling or direct coupling or communication connection between the units shown or discussed may be through some interfaces, and indirect coupling or communication connection between the units may be electrical or other similar manners, which are not limited in the present application. The units or sub-units described as separate components may be physically separated or not, may be physical units or not, or may be distributed in a plurality of circuit units, and some or all of the units may be selected according to actual needs to achieve the purpose of the present application.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. The various numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application. The sequence number of each process does not mean the sequence of the execution sequence, and the execution sequence of each process should be determined according to the function and the internal logic.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (18)

1. A method of flow scheduling, the method being applied to an autonomous vehicle, the method comprising:
Acquiring a first data stream;
if the type of the first data flow is determined to be burst flow, queue allocation information corresponding to a queue allocation request is acquired, wherein the queue allocation information is determined based on idle bandwidth of at least one queue in a gating list, and the queue allocation information indicates that a first queue in the at least one queue is allocated to the first data flow;
And sending the first data stream and the queue allocation information to a time sensitive network TSN protocol stack.
2. The method of claim 1, wherein the queue allocation request comprises a transmission bandwidth required by the first data flow and a QoS requirement of the first data flow;
The first queue satisfies the following condition:
The free bandwidth of the first queue meets the transmission bandwidth required by the first data flow, and the QoS performance of the first queue meets the QoS requirement of the first data flow.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
Acquiring a first bandwidth allocated to each queue in the at least one queue in the gating list;
acquiring a second bandwidth actually used by each queue in the at least one queue in the gating list;
and determining the idle bandwidth of at least one queue in the gating list according to the difference value between the first bandwidth and the second bandwidth of each queue in the at least one queue.
4. A method according to any one of claims 1 to 3, further comprising:
And acquiring a clock reference source, and performing clock synchronization according to the clock reference source, wherein the source of the clock reference source is the same as the source of the TSN protocol stack.
5. The method according to any one of claims 1 to 4, wherein said determining that the type of the first data stream is burst traffic comprises:
Acquiring a DDS topic corresponding to the first data stream, and if the data stream type corresponding to the DDS topic is the burst traffic, determining the type of the first data stream as the burst traffic; and/or,
And if the first data stream is a retransmission data stream, determining the type of the first data stream as the burst traffic.
6. The method according to any one of claims 1 to 5, further comprising:
Acquiring a second data stream;
And if the type of the second data stream is determined to be the periodic traffic, acquiring a second queue of the second data stream in the gating list, and sending information of the second data stream and the second queue to the TSN protocol stack.
7. The method according to any of claims 1 to 6, wherein the gating list is generated based on QoS configuration information of the at least one data flow, the data flow comprising the first data flow;
The QoS configuration information includes at least one of:
The data type of the data stream, the transmission period requirement of the data stream, the time delay requirement of the data stream, the size of the data stream and the priority identification of the data stream.
8. A flow dispatching device for use in an autonomous vehicle, the device comprising:
The transmission adaptation module is used for acquiring a first data stream, and if the type of the first data stream is determined to be burst traffic, a queue allocation request is transmitted to the traffic scheduling module;
The traffic scheduling module is configured to generate queue allocation information according to the queue allocation request, and send the queue allocation information to the sending adaptation module, where the queue allocation information is determined based on an idle bandwidth of at least one queue in a gating list, and the queue allocation information indicates that a first queue in the at least one queue is allocated to the first data flow;
The sending adapting module is further configured to send the first data stream and the queue allocation information to a TSN protocol stack of a time sensitive network.
9. The apparatus of claim 8, wherein the queue allocation request comprises a transmission bandwidth required by the first data flow and a QoS requirement of the first data flow;
The first queue satisfies the following condition:
The free bandwidth of the first queue meets the transmission bandwidth required by the first data flow, and the QoS performance of the first queue meets the QoS requirement of the first data flow.
10. The apparatus according to claim 8 or 9, wherein the traffic scheduling module is further configured to obtain a first bandwidth allocated by each queue of the at least one queue in the gating list;
The traffic scheduling module is further configured to obtain a second bandwidth actually used by each queue in the at least one queue in the gating list;
The traffic scheduling module is further configured to determine an idle bandwidth of each queue in the gating list according to a difference between the first bandwidth and the second bandwidth of the at least one queue.
11. The apparatus according to any of claims 8 to 10, further comprising a clock management module for:
And acquiring a clock reference source, and performing clock synchronization according to the clock reference source, wherein the source of the clock reference source is the same as the source of the TSN protocol stack.
12. The apparatus according to any one of claims 8 to 11, wherein the transmission adaptation module is specifically configured to:
Acquiring a DDS topic corresponding to the first data stream, and if the data stream type corresponding to the DDS topic is the burst traffic, determining the type of the first data stream as the burst traffic; and/or,
And if the first data stream is a retransmission data stream, determining the type of the first data stream as the burst traffic.
13. The apparatus according to any of claims 8 to 12, wherein the transmit adaptation module is further configured to obtain a second data stream;
And the sending adapting module is further configured to, if the type of the second data stream is determined to be the periodic traffic, obtain a second queue of the second data stream in the gating list, and send information of the second data stream and the second queue to the TSN protocol stack.
14. The apparatus according to any of claims 8 to 13, wherein the gating list is generated based on QoS configuration information of the at least one data flow, the data flow comprising the first data flow;
The QoS configuration information includes at least one of:
The data type of the data stream, the transmission period requirement of the data stream, the time delay requirement of the data stream, the size of the data stream and the priority identification of the data stream.
15. A traffic scheduling device comprising at least one memory storing code and a processor configured to execute the code to cause the traffic scheduling device to perform the method of any one of claims 1 to 7.
16. An autonomous vehicle comprising a processor coupled with a memory storing program instructions that when executed by the processor implement the method of any of claims 1-7, or the vehicle comprising the apparatus of any of claims 8-15.
17. A computer storage medium, characterized in that the computer storage medium stores a computer program which, when executed by a computer, causes the computer to implement the method of any one of claims 1 to 7.
18. Circuitry, characterized in that it comprises processing circuitry configured to perform the method of any of claims 1 to 7.
CN202211296669.0A 2022-10-21 2022-10-21 Traffic scheduling method and device and vehicle Pending CN117917881A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211296669.0A CN117917881A (en) 2022-10-21 2022-10-21 Traffic scheduling method and device and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211296669.0A CN117917881A (en) 2022-10-21 2022-10-21 Traffic scheduling method and device and vehicle

Publications (1)

Publication Number Publication Date
CN117917881A true CN117917881A (en) 2024-04-23

Family

ID=90729695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211296669.0A Pending CN117917881A (en) 2022-10-21 2022-10-21 Traffic scheduling method and device and vehicle

Country Status (1)

Country Link
CN (1) CN117917881A (en)

Similar Documents

Publication Publication Date Title
Wang et al. Networking and communications in autonomous driving: A survey
WO2022056894A1 (en) Vehicle communication method and vehicle communication device
EP3585078A1 (en) V2x communication device and method for transmitting and receiving v2x message thereof
CN114073108A (en) For implementing collective awareness in a vehicle network
CN109672996A (en) One kind being based on V2X roadside device system and its information dispensing method
WO2019133180A1 (en) Service level agreement-based multi-hardware accelerated inference
Syed et al. Dynamic scheduling and routing for TSN based in-vehicle networks
CN107347030A (en) A kind of message management apparatus and method based on V2X communications
CN103354991A (en) Vehicle communication network
Zhou et al. Edge-facilitated augmented vision in vehicle-to-everything networks
US11202273B2 (en) Aggregating messages into a single transmission
US20170124871A1 (en) System and method for vehicle data communication
CN110036603A (en) Switch, communication control method and communication control program
CN110267228A (en) A kind of V2X car-mounted terminal message adaptive scheduling management system and method
CN115997244A (en) Collective perception service in intelligent transportation system
CN111786862A (en) Control system and control method thereof and all-terrain vehicle
US20230110467A1 (en) Collective perception service reporting techniques and technologies
Johri et al. A multi-scale spatiotemporal perspective of connected and automated vehicles: Applications and wireless networking
CN114884998B (en) Cooperative software defined vehicle-mounted network system, scheduling method and CACC
Hbaieb et al. In-car gateway architecture for intra and inter-vehicular networks
CN106713092A (en) Conversion system for vehicle-mounted CAN bus data and FlexRay bus data and conversion method thereof
CN106357499A (en) Automobile bus heterogeneous network data sharing system and automobile bus heterogeneous network data sharing method
CN117917881A (en) Traffic scheduling method and device and vehicle
Oza et al. Deadline-aware task offloading for vehicular edge computing networks using traffic light data
Pilz et al. Collective perception: A delay evaluation with a short discussion on channel load

Legal Events

Date Code Title Description
PB01 Publication