EP4378205A1 - Apparatus, method and computer program for a vehicle - Google Patents

Apparatus, method and computer program for a vehicle

Info

Publication number
EP4378205A1
Authority
EP
European Patent Office
Prior art keywords
computation
remote server
vehicle
offloading
latency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21773591.9A
Other languages
German (de)
French (fr)
Inventor
Matthias Priebe
Jithin Reju
Wolfgang Theimer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Volkswagen AG
Original Assignee
Volkswagen AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Volkswagen AG filed Critical Volkswagen AG
Publication of EP4378205A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0268 Traffic management, e.g. flow control or congestion control, using specific QoS parameters for wireless networks, e.g. QoS class identifier [QCI] or guaranteed bit rate [GBR]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/44 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P], for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0112 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/164 Centralised systems, e.g. external to vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/28 Flow control; Congestion control in relation to timing considerations

Definitions

  • the present invention relates to an apparatus, a method, and a computer program for a vehicle, and to a corresponding vehicle comprising such an apparatus or being configured to perform the method or to execute the computer program.
  • the autonomous or semi-autonomous operation of vehicles is a field of research and development.
  • With autonomous driving level 5, the need for offloading computing to a cloud (i.e., a server that is accessible via the internet) or edge cloud (i.e., a server that is located in close proximity to a base station of a mobile communication system) is likely to gain importance.
  • With 5G (i.e., a mobile communication system according to the 5th-Generation networks of the 3rd-Generation Partnership Project, 3GPP) and edge computing, it is possible to offload real-time applications to nearby computing servers (edge or cloud) or even to nearby cars that have compute capabilities.
  • US patent application US 2019/0364492 A1 relates to a system that uses a learning function to determine a user's movement, predict the route of the user and, based on the predicted radio conditions along the route, predict a latency of the communication of the user.
  • the communication latency is only a portion of the overall latency incurred by offloading the computation.
  • US patent application US 2020/0284883 A1 relates to a LIDAR system.
  • a binary exponential back-off is used to determine a contention window size in a LIDAR ranging medium access scheme.
  • the present disclosure relates to the question of how often a nearby computing server has to be queried to check whether the connection meets a latency requirement.
  • the car sends packets to a server being in close proximity to the car. Since cars move, the conditions, and thus also whether the offloading meets the latency requirements, change dynamically.
  • a periodic querying of the nearby compute environment is performed to constantly re-evaluate the resulting latency.
  • the latency limit might not be met.
  • the car will query the edge server again after a pre-determined, fixed interval, in order to verify whether the compute offloading meets the latency requirement.
  • the proposed concept introduces an improved technique for querying the nearby server to check the compute latency between the client and server.
  • the present invention is based on the finding, that a periodic check of the compute latency at a fixed interval results in a high CPU usage and generates unnecessary network traffic.
  • periodically querying the backend can cause network overload in scenarios with many vehicles attempting to offload computation to the backend.
  • a back-off time that is based on a number of subsequent unsuccessful attempts is proposed, e.g., an exponential back-off for congestion control.
  • the method comprises attempting to offload a computation of a vehicular function to a remote server via a wireless communication link.
  • the method comprises determining a performance of the computation being offloaded to the remote server.
  • the method comprises determining, whether the offloading of the computation is deemed to be successful or unsuccessful, based on the performance of the computation being offloaded to the remote server.
  • the method comprises cancelling the offloading if the offloading is deemed to be unsuccessful.
  • the method comprises reattempting to offload the computation of the vehicular function to the remote server after a pre-defined waiting time.
  • the waiting time is based on a number of subsequent unsuccessful attempts. By adjusting the waiting time based on the number of subsequent unsuccessful attempts, the CPU usage and network traffic may be reduced, in particular in scenarios with an increased number of unsuccessful attempts.
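Taken together, the steps above form a control loop: attempt, judge performance, cancel on failure, and reattempt after a waiting time scaled by the number of subsequent unsuccessful attempts. The following is a minimal sketch under assumed names (`try_offload`, `compute_locally`, and the millisecond parameters are illustrative, not from the disclosure); the actual sleeping between attempts is elided and only logged:

```python
# Sketch of the claimed method: attempt offload, judge the measured overall
# latency against a threshold, fall back to local compute on failure, and
# back off exponentially. All names and parameters are illustrative.
def run_offload_loop(try_offload, compute_locally, latency_threshold_ms,
                     base_wait_ms=100, max_attempts=5):
    """Return a log of (attempt_number, wait_before_ms, deemed_successful)."""
    log = []
    failures = 0  # number of subsequent unsuccessful attempts
    for attempt in range(1, max_attempts + 1):
        # after n failures, wait base * 2**(n-1) before reattempting
        wait_ms = 0 if failures == 0 else base_wait_ms * 2 ** (failures - 1)
        latency_ms = try_offload()               # transmit data, await result
        ok = latency_ms <= latency_threshold_ms  # performance check
        if not ok:
            compute_locally()                    # cancel: vehicle CPU takes over
            failures += 1
        else:
            failures = 0                         # success resets the backoff
        log.append((attempt, wait_ms, ok))
    return log
```

In a real system the loop would sleep for `wait_ms` before each reattempt; here the waiting time is only recorded so the schedule can be inspected.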
  • offloading a computation comprises a number of tasks, as vehicular computations often rely on local sensor data, with the result of the computation being required within the vehicle. Accordingly, offloading the computation may comprise transmitting working data to the remote server and receiving a result of the computation from the remote server.
  • the latency or computation latency
  • the term “latency” is not limited to the communication roundtrip time between the vehicle and the remote server. Instead, crucially, the time required by the remote server to obtain the working data, perform the computation, and respond with the result of the computation may constitute the computation latency. Accordingly, the performance may be based at least on a time required for transmitting the working data, a time required for waiting for the computation to be performed by the remote server, and a time required for receiving the result from the remote server.
  • the performance may be based on an overall latency of the computation. This overall latency may compete with the overall latency of the computation when the computation is being performed by a processor of the vehicle.
  • the overall latency may comprise at least a time required for transmitting the working data, a time required waiting for the computation to be performed by the remote server, and a time required for receiving the result from the remote server.
  • the latency of the communication between the vehicle and the remote server may contribute to the overall latency, influencing the time required to transmit the working data and the result of the computation, with the time required for the computation being added on top of the communication latency.
  • the overall latency may be compared with a latency threshold, which may be used to determine, whether the offloading is deemed to be successful or unsuccessful. For example, the attempt may be deemed to be unsuccessful if the overall latency violates a latency threshold.
  • the waiting time may increase with each unsuccessful attempt. The more subsequent unsuccessful attempts, the less likely it is that the next attempt is successful. Thus, an approach with increasing waiting times may reduce the CPU usage and bandwidth usage in particular in cases, where the likelihood of a successful subsequent attempt is low.
  • the waiting time may double with each unsuccessful attempt.
  • the waiting time resets to an initial value after a successful attempt. After a successful attempt, the likelihood is high that a subsequent attempt is successful again (e.g., after a short communication or computation hiccup).
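The doubling-with-reset rule can be kept in a small piece of state; a sketch with illustrative names, where `base_wait_s` corresponds to the initial waiting interval T:

```python
# Minimal backoff state: the waiting time doubles with each subsequent
# unsuccessful attempt and resets to the base interval after a success.
# (Names are illustrative; the disclosure only specifies the doubling/reset rule.)
class OffloadBackoff:
    def __init__(self, base_wait_s=1.0):
        self.base_wait_s = base_wait_s
        self.failures = 0  # subsequent unsuccessful attempts so far

    def next_wait(self):
        """Waiting time before the next reattempt: T, 2T, 4T, ..."""
        return self.base_wait_s * 2 ** self.failures

    def record(self, successful):
        """Update the state after an attempt has been judged."""
        self.failures = 0 if successful else self.failures + 1
```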
  • cancelling the offloading may comprise performing the computation using one or more processors of the vehicle.
  • the computation may be performed by the vehicle instead of by the remote server.
  • the computation may be a computation that is based on working data being provided by the vehicle. This leads to increased overhead when the computation is being offloaded to the remote server, as the working data is being transmitted to the remote server.
  • the computation is being performed by an edge application server as defined by the 3rd Generation Partnership Project, 3GPP.
  • Edge application servers are multipurpose application servers that can be used for various types of computations.
  • the apparatus comprises an interface for communicating with the remote server via a wireless communication link.
  • the apparatus comprises one or more processors, configured to perform the above method.
  • Various aspects of the present disclosure relate to a vehicle comprising the apparatus.
  • Various aspects of the present disclosure relate to a computer program having a program code for performing the above method, when the computer program is executed on a computer, a processor, or a programmable hardware component.
  • Fig. 1 shows a schematic drawing of a vehicular computation being performed by a remote server
  • Figs. 2a and 2b show flow charts of examples of a method for a vehicle
  • Fig. 2c shows a block diagram of an example of an apparatus for a vehicle and of the vehicle comprising the apparatus;
  • Fig. 3 shows a schematic diagram of the performance of the computation during various phases
  • Fig. 4a shows a schematic diagram of a flow of data according to the proposed concept
  • Fig. 4b shows a schematic diagram of a flow of data according to a different concept.
  • Fig. 1 shows a schematic drawing of a vehicular computation being performed by a remote server.
  • Fig. 1 shows a compute environment 100, such as a compute edge cloud environment or a cloud compute environment, which is used as a remote server to remotely perform computations for a vehicle 200.
  • the vehicle 200 transmits a latency check request to the compute environment 100, which transmits a latency check response.
  • as the performance is based on the overall latency, which can be determined by attempting to offload the computation to the remote server, the latency check request may correspond to an attempt to offload the computation to the remote server, and the latency check response may correspond to the result of the computation.
  • this check, or attempt at offloading, is re-attempted after a pre-defined, constant waiting time.
  • the proposed concept in contrast to this approach, uses a waiting time that is dependent on the number of subsequent unsuccessful attempts, in order to save CPU and wireless resources.
  • Figs. 2a and 2b show flow charts of examples of a method for a vehicle.
  • the method comprises attempting 210 to offload a computation of a vehicular function to a remote server via a wireless communication link.
  • the method comprises determining 220 a performance of the computation being offloaded to the remote server.
  • the method comprises determining 230, whether the offloading of the computation is deemed to be successful or unsuccessful, based on the performance of the computation being offloaded to the remote server.
  • the method comprises cancelling 240 the offloading if the offloading is deemed to be unsuccessful.
  • the method comprises reattempting 210 to offload the computation of the vehicular function to the remote server after a pre-defined waiting time. The waiting time is based on a number of subsequent unsuccessful attempts. For example, the method may be performed by the vehicle.
  • Fig. 2c shows a block diagram of an example of a corresponding apparatus 20 for a vehicle 200 and of the vehicle 200 comprising the apparatus.
  • the apparatus 20 comprises an interface 22 for communicating with the remote server 100 via a wireless communication link.
  • the apparatus 20 further comprises one or more processors 24, configured to perform the method of Figs. 2a and/or 2b.
  • the apparatus 20 further comprises a storage device 26.
  • the method may be performed by the one or more processors, with the help of the at least one interface (for communicating via the wireless communication link) and, optionally, with the help of the one or more storage devices (for storing and retrieving information).
  • the one or more processors 24 may be coupled with the at least one interface 22 and with the one or more storage devices 26.
  • the performance of the computation being offloaded to the remote server is determined by attempting to offload the computation to the remote server.
  • the remote server is instructed to perform the computation, with the result being provided to the vehicle.
  • the types of computation that benefit most from being offloaded rely on working data that is available locally in the vehicle.
  • this working data is often based on sensor data of the vehicle, and first has to be transmitted to the remote server if the remote server is to be used for performing the computation.
  • the computation may be a computation that is based on working data being provided by the vehicle.
  • the computation may be a computation that is based on sensor data being collected by the vehicle, e.g., based on environmental perception sensor data, such as camera sensor data, radar sensor data or lidar sensor data. Accordingly, the computation may be based on processing environmental perception sensor data of the vehicle.
  • the sensor data, or more general working data which may comprise additional information, such as a state of an autonomous or semiautonomous driving system, may be provided to the remote server. Accordingly, the remote server may be supplied, by the vehicle, with the working data that is required for performing the computation.
  • offloading the computation may comprise transmitting 212 working data to the remote server (via the wireless communication link) and receiving (via the wireless communication link) 214 a result of the computation from the remote server.
  • the attempt at offloading comprises the tasks of transmitting 212 the working data and receiving 214 the result from the remote server.
  • the vehicle may use a two-pronged strategy - while offloading is being attempted, the same computation may be performed by a processor of the vehicle, so the result of the computation is continuously available in the vehicle.
  • the method may comprise performing the computation using a processor of the vehicle while the offloading is being attempted (e.g., until the offloading is deemed to be successful for a pre-defined amount of time).
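The two-pronged strategy can be sketched with a background offload attempt running alongside a local computation that always completes; `local_compute` and `remote_offload` are hypothetical stand-ins, and a real system would keep reusing the remote result only while it continues to meet the latency budget:

```python
import threading
import queue

# Sketch of the "two-pronged" strategy: the vehicle keeps computing locally
# while an offload attempt runs in the background, so a result is always
# available even if the remote reply misses the latency budget.
def compute_with_fallback(local_compute, remote_offload, budget_s):
    """Return (result, source): remote result if it arrives in time, else local."""
    remote_result = queue.Queue(maxsize=1)

    def offload_worker():
        try:
            remote_result.put(remote_offload())
        except Exception:
            pass  # a failed offload simply never delivers a result

    threading.Thread(target=offload_worker, daemon=True).start()
    local = local_compute()  # the local computation runs regardless
    try:
        return remote_result.get(timeout=budget_s), "remote"
    except queue.Empty:
        return local, "local"  # remote missed the budget: use the local result
```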
  • the computation being performed by the remote server may have several advantages - for one, the remote server may have a higher amount of computational resources, enabling a more precise or a faster computation. Additionally or alternatively, the remote server may have access to working data from multiple vehicles, enabling the inclusion of additional working data in the computation. Furthermore, by offloading the computation, the vehicle may be enabled to perform, or expand, other computations.
  • the computation may be a real-time computation, i.e., a computation being used in a time-sensitive manner, with a delayed computation being (mostly) useless for the vehicle.
  • the performance of the offloading is being determined 220.
  • the performance of the computation primarily relates to an overall latency of the offloading - with the latency going beyond the communication roundtrip time between the vehicle and the remote server, by including the time it takes to transmit the working data to the remote server, the time required by the remote server to perform the computation, and the time required for transmitting the result from the remote server to the vehicle.
  • the performance may be based at least on the time required for transmitting the working data, the time required for waiting for the computation to be performed by the remote server, and the time required for receiving the result from the remote server.
  • the overall latency may comprise at least the time required for transmitting the working data, the time required for waiting for the computation to be performed by the remote server, and the time required for receiving the result from the remote server. Additional latencies may be included as well, such as the time it takes to prepare and encode the working data for transmission and the time it takes to decode the result. Accordingly, the performance may be based on an overall latency of the computation.
  • the performance may be determined by measuring the overall latency of the computation, from a time the working data is being transmitted (or prepared) to a time where the result is received (or decoded). Consequently, the performance may be computed without requiring additional packets being transmitted in addition to the communication required for offloading the computation.
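Measuring the overall latency this way needs no extra packets; a sketch, with `send_working_data` and `receive_result` as illustrative placeholders for the offloading communication:

```python
import time

# Sketch of measuring the overall (end-to-end) latency of one offload round:
# the clock runs from the moment the working data is transmitted until the
# result is available, so it covers the uplink, the remote computation,
# and the downlink. The two helper callables are illustrative stand-ins.
def measure_offload_latency(send_working_data, receive_result, working_data):
    start = time.monotonic()          # monotonic clock: immune to wall-clock jumps
    send_working_data(working_data)   # uplink: working data to the remote server
    result = receive_result()         # blocks for remote computation + downlink
    latency_s = time.monotonic() - start
    return result, latency_s
```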
  • a determination 230 is being made on whether the attempt is deemed to be successful or unsuccessful.
  • the performance, and in particular the overall latency may be compared to a latency threshold.
  • the attempt is deemed to be unsuccessful if the overall latency violates a latency threshold. Consequently, the attempt may be deemed to be successful if the overall latency is in line with the latency threshold.
  • the determination 230 of the performance, and the determination on whether the attempt is deemed to be successful or unsuccessful, may be continuously repeated (as indicated by the arrow in Figs. 2a and 2b leading from block 230 to block 220).
  • the method may comprise continuously monitoring the performance of the offloaded computation and deeming the offloading to be unsuccessful if, or when/once, the performance violates the latency threshold.
  • An example will be shown in connection with Fig. 3.
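The continuous monitoring just described can be sketched as a simple scan over the measured latencies of successive offloaded rounds; the function name and list-based model are illustrative:

```python
# Sketch of continuous performance monitoring: each offloaded round's measured
# latency is checked against the threshold, and the offloading is deemed
# unsuccessful as soon as one round violates it.
def first_violation(latencies_ms, threshold_ms):
    """Return the index of the first round violating the threshold, or None."""
    for i, latency in enumerate(latencies_ms):
        if latency > threshold_ms:   # performance violates the latency threshold
            return i                 # offloading deemed unsuccessful here
    return None                      # all rounds met the latency requirement
```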
  • the offloading is cancelled 240. This may be done by ceasing the transmission of the working data to the remote server, and performing the computation using a processor in the vehicle.
  • cancelling the offloading may comprise performing 242 the computation using one or more processors of the vehicle. For example, some of the calculations may be repeated, e.g., to recover a computation initially being offloaded to the remote server.
  • the method comprises re-attempting 210 to offload the computation of the vehicular function to the remote server after a pre-defined waiting time.
  • the waiting time is based on a number of subsequent unsuccessful attempts (at offloading the computation).
  • an example of the concept is shown in more detail in connection with Fig. 3. The concept is based on increasing the waiting time with each attempt. For example, between the first and second attempt, the waiting time may be a first time interval. Between the second and third attempt, the waiting time may be a second time interval, with the second time interval being longer than the first time interval.
  • Between the third and fourth attempt, the waiting time may be a third time interval, with the third time interval being longer than the second time interval.
  • the waiting time increases with each unsuccessful attempt.
  • the first time interval is 2^0 · T long.
  • the second time interval is 2^1 · T long.
  • the third time interval is 2^2 · T long, etc.
  • the n-th time interval may be 2^(n-1) · T long.
  • the waiting time may be reset to an initial value.
  • the waiting time may reset to an initial value after a successful attempt.
  • any server that is remote from the vehicle (i.e., outside the vehicle) and that can be communicated with via a wireless communication link can be the remote server 100 within the context of the present disclosure.
  • the proposed concept is tailored to the use with so-called edge servers (or edge cloud servers), which are remote servers, which are placed in close proximity to the base stations being used for communication via the wireless communication link.
  • the remote server may be a server that is co-located with a base station of a mobile communication system.
  • the remote server may be located within a core network of a mobile communication system being used for the wireless communication link.
  • the computation is being performed by an edge application server as defined by the 3 rd Generation Partnership Project, 3GPP.
  • Embodiments may be compliant to or even comprised in certain standard specifications, such as those specified by the 3GPP.
  • the remote server may be an edge application server as introduced in 3GPP TR 23.758.
  • provisioning of the remote server to perform the computation may be performed via an edge enabler server, and a corresponding service in the vehicle.
  • the working data and the result may be transmitted, as application data traffic, via the 3GPP mobile communication system, between the vehicle and the remote server.
  • the architecture for support of edge computing of TS 23.548 and TS 23.501 may be used for implementing the remote server, the offloading of the computation and/or the communication between the vehicle and the remote server.
  • the at least one interface 22 may correspond to one or more inputs and/or outputs for receiving and/or transmitting information, which may be in digital (bit) values according to a specified code, within a module, between modules or between modules of different entities.
  • the at least one interface 22 may comprise interface circuitry configured to receive and/or transmit information.
  • the at least one interface 22 may be configured to communicate via the wireless communication using a mobile communication system.
  • the wireless communication link may be a wireless communication link according to a mobile communication system.
  • the wireless communication link may be based on cellular communication.
  • the mobile communication system may, for example, correspond to one of the Third Generation Partnership Project (3GPP)-standardized mobile communication networks, where the term mobile communication system is used synonymously to mobile communication network.
  • the mobile or wireless communication system may correspond to, for example, a 5th Generation system (5G), a Long-Term Evolution (LTE), an LTE-Advanced (LTE-A), High Speed Packet Access (HSPA), a Universal Mobile Telecommunication System (UMTS) or a UMTS Terrestrial Radio Access Network (UTRAN), an evolved-UTRAN (e-UTRAN), a Global System for Mobile communication (GSM) or Enhanced Data rates for GSM Evolution (EDGE) network, a GSM/EDGE Radio Access Network (GERAN), or mobile communication networks with different standards, for example, a Worldwide Interoperability for Microwave Access (WIMAX) network (IEEE 802.16) or Wireless Local Area Network (WLAN) (IEEE 802.11), generally an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Time Division Multiple Access (TDMA) network, etc.
  • the mobile communication system may be a vehicular mobile communication system.
  • Such communication may be carried out using 3GPP systems, such as 3G, 4G, NR and beyond, adapted to vehicular Vehicle-to-Everything (V2X) communication.
  • the one or more processors 24 may be implemented using one or more processing units, one or more processing devices, any means for processing, such as a processor, a computer or a programmable hardware component being operable with accordingly adapted software.
  • the described function of the one or more processors 24 may as well be implemented in software, which is then executed on one or more programmable hardware components / processors.
  • Such hardware components may comprise a general purpose processor, a Digital Signal Processor (DSP), a micro-controller, etc.
  • the one or more storage devices 26 may comprise at least one element of the group of a computer readable storage medium, such as a magnetic or optical storage medium, e.g. a hard disk drive, a flash memory, Floppy-Disk, Random Access Memory (RAM), Programmable Read Only Memory (PROM), Erasable Programmable Read Only Memory (EPROM), an Electronically Erasable Programmable Read Only Memory (EEPROM), or a network storage.
  • the vehicle 200 may be a land vehicle, a road vehicle, a car, an automobile, an off-road vehicle, a motor vehicle, a truck, or a lorry.
  • the method, apparatus, computer program or vehicle may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept or one or more examples described above or below.
  • Fig. 3 shows a schematic diagram of the performance of the computation during various phases.
  • Fig. 3 shows the overall (or end-to-end) latency (E2EL) threshold 302, the overall latency of the computation (E2EL), the exponential count(down) 306 and a counter 308.
  • the E2EL is the end-to-end latency, which may correspond to the overall latency of the computation, which may contain all elements of latency possible from remote computing.
  • the E2EL latency may contain computation delays, network delays, application layer delays, scheduling delays, processor load delays, codec delays, hardware accelerator delays and other unknown delays.
  • the E2EL threshold i.e., the latency threshold
  • the E2EL threshold is the required latency below which the real-time application can be offloaded to the nearby compute environment.
  • the E2EL shown in Fig. 3 is based on the respective computing environment being used. For example, if the nearby server (edge) is computing, the E2EL corresponds to the results computed by the nearby server, and if the car is computing, the E2EL corresponds to the results computed by the car. In this strategy, if an E2EL deviation occurs in the edge compute state, the system falls back into the car compute state (i.e., the computation is being performed by the vehicle). The counter (c) is incremented during this transition.
  • Fig. 3 shows a series of phases.
  • In a second phase 315 following the first phase, an attempt is made at offloading the computation to an edge server, with the overall latency increasing above the overall latency threshold.
  • In a fourth phase 325 following the third phase, an attempt is made at offloading the computation to an edge server, with the overall latency increasing above the overall latency threshold.
  • In a sixth phase 335 following the fifth phase, an attempt is made at offloading the computation to an edge server, with the overall latency increasing above the overall latency threshold.
  • In an eighth phase 345 following the seventh phase, an attempt is made at offloading the computation to an edge server, with the overall latency increasing above the overall latency threshold.
  • In a tenth phase 355 following the ninth phase, an attempt is made at offloading the computation to an edge server, with the overall latency staying under the latency threshold for some time. Thus, the offloading is deemed to be successful for a time.
  • the overall latency increases above the latency threshold again, such that the offloading is cancelled again.
  • The example shown in Fig. 3 is inspired by the exponential backoff concept known from congestion control in TCP.
  • If the vehicle (in the following denoted, without loss of generality, "car") queries a nearby compute environment in a first step and the latency requirement is not met, the car would query again after a stipulated time period "T". In case, at time period T, the latency requirements are still not met, the car would wait another 2T units to query again. If the requirements are still not met at "2T", the next query would happen after "4T". Therefore, in Fig. 3, the time between queries increases exponentially, continuing with "8T", "16T", etc. If the requirements are met in between, the waiting time is reset, and any new query starts again from T time units.
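Under this schedule, the n-th retry is issued (2^n - 1) · T after the first failed query (1T, 3T, 7T, 15T, 31T, ...); a small sketch tabulating the retry times in units of T, with illustrative names:

```python
# Tabulate the query times from the Fig. 3 description: the first retry comes
# after T, the next after a further 2T, then 4T, 8T, 16T, ... so the n-th
# retry happens (2**n - 1) * T after the first failed query.
def query_times(num_retries, T=1):
    """Absolute times (in units of T) at which retries are issued."""
    times, elapsed = [], 0
    for n in range(num_retries):
        elapsed += T * 2 ** n   # wait T, then 2T, then 4T, ...
        times.append(elapsed)
    return times
```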
  • Fig. 4a shows a schematic diagram of a flow of data according to the proposed concept.
  • the apparatus 20 of Fig. 2c is shown, which is used to execute an application client 42, an application server 44 and an aperiodic delta modulation logic 46.
  • the application client 42 and application server 44 are the two blocks available locally.
  • the application client collects the raw data from the sensors and provides them to the app server 44 for computation.
  • the computed results are sent back to the application client 42 which forwards the results to the actuators.
  • If the computation has to be offloaded to the external server 100, the raw data is sent over to the external server, and the latency requirements are checked before a complete offloading to the external server is triggered.
  • the application client uses the exponential back-off logic 46, where it checks aperiodically whether the latency requirements are met or not.
  • Fig. 4b shows a schematic diagram of a flow of data according to a different concept.
  • a periodic querying logic 48 is used instead of the aperiodic logic 46 used in Fig. 4a.
  • the application client checks the network latency in a periodic manner, i.e., the latency is checked at regular time intervals. This results in unnecessary network traffic, and the system may be slowed down, which decreases the efficiency.
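The difference between the periodic logic 48 and the aperiodic back-off logic 46 can be quantified by counting queries within a fixed observation window; a sketch with illustrative helpers, assuming the worst case in which every query misses the requirement (so the back-off never resets):

```python
# Compare query counts over an observation window of `window` time units:
# the periodic scheme queries every T units, while the exponential back-off
# scheme queries after T, then a further 2T, 4T, ... (illustrative model).
def periodic_queries(window, T):
    """Number of queries a fixed-interval scheme issues within the window."""
    return window // T

def backoff_queries(window, T):
    """Number of queries the exponential back-off scheme issues (all failing)."""
    count, elapsed, wait = 0, 0, T
    while elapsed + wait <= window:
        elapsed += wait
        count += 1
        wait *= 2   # each unsuccessful attempt doubles the waiting time
    return count
```

With a window of 31T, the periodic scheme issues 31 queries while the back-off scheme issues only 5, which illustrates the reduction in CPU usage and network traffic claimed above.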
  • the proposed concept may also be used in other devices, such as smartphones, or basically any device that transmits data from a client to a server (over the internet).
  • Examples may further be or relate to a (computer) program including a program code to execute one or more of the above methods when the program is executed on a computer, processor or other programmable hardware component.
  • steps, operations or processes of different ones of the methods described above may also be executed by programmed computers, processors or other programmable hardware components.
  • Examples may also cover program storage devices, such as digital data storage media, which are machine-, processor- or computer-readable and encode and/or contain machine-executable, processor-executable or computer-executable programs and instructions.
  • Program storage devices may include or be digital storage devices, magnetic storage media such as magnetic disks and magnetic tapes, hard disk drives, or optically readable digital data storage media, for example.
  • Other examples may also include computers, processors, control units, (field) programmable logic arrays ((F)PLAs), (field) programmable gate arrays ((F)PGAs), graphics processor units (GPU), application-specific integrated circuits (ASICs), integrated circuits (ICs) or system-on-a-chip (SoCs) systems programmed to execute the steps of the methods described above.
  • (F)PLA (field) programmable logic array
  • (F)PGA (field) programmable gate array
  • GPU graphics processor unit
  • ASIC application-specific integrated circuit
  • IC integrated circuit
  • SoC system-on-a-chip
  • aspects described in relation to a device or system should also be understood as a description of the corresponding method.
  • a block, device or functional aspect of the device or system may correspond to a feature, such as a method step, of the corresponding method.
  • aspects described in relation to a method shall also be understood as a description of a corresponding block, a corresponding element, a property or a functional feature of a corresponding device or a corresponding system.
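The offloading decision described for Fig. 4a and Fig. 4b above can be sketched as follows. This is purely an editorial illustration, not part of the disclosure: the function names and the 50 ms threshold are assumptions for the sake of the example.

```python
def meets_latency_requirement(measured_ms, threshold_ms=50.0):
    """Latency check performed before a complete offloading is triggered.
    The 50 ms default is an illustrative assumption."""
    return measured_ms <= threshold_ms

def route_computation(raw_sensor_data, measured_ms, external_server, local_server):
    """The application client forwards raw sensor data either to the local
    application server or, if the latency requirement is met, to the external
    server; the computed result then goes back to the actuators."""
    if meets_latency_requirement(measured_ms):
        return external_server(raw_sensor_data)  # complete offloading triggered
    return local_server(raw_sensor_data)         # keep computing locally
```

With the aperiodic back-off logic of Fig. 4a, `meets_latency_requirement` would only be re-evaluated at exponentially growing intervals, instead of at every fixed period as in Fig. 4b.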

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments relate to an apparatus, a method and a computer program for a vehicle, and to a corresponding vehicle comprising such an apparatus or being configured to perform the method or to execute the computer program. The method comprises attempting (210) to offload a computation of a vehicular function to a remote server via a wireless communication link. The method comprises determining (220) a performance of the computation being offloaded to the remote server. The method comprises determining (230), whether the offloading of the computation is deemed to be successful or unsuccessful, based on the performance of the computation being offloaded to the remote server. The method comprises cancelling (240) the offloading if the offloading is deemed to be unsuccessful. The method comprises re-attempting (210) to offload the computation of the vehicular function to the remote server after a pre-defined waiting time, wherein the waiting time is based on a number of subsequent unsuccessful attempts.

Description

Apparatus, Method and Computer Program for a Vehicle
The present invention relates to an apparatus, a method, and a computer program for a vehicle, and to a corresponding vehicle comprising such an apparatus or being configured to perform the method or to execute the computer program.
The autonomous or semi-autonomous operation of vehicles is a field of research and development. With autonomous driving level 5, the need for offloading computing to a cloud (i.e., a server that is accessible via the internet) or edge cloud (i.e., a server that is located in close proximity to a base station of a mobile communication system) is likely to gain importance. With the help of 5G (i.e., a mobile communication system according to the 5th generation networks of the 3rd Generation Partnership Project, 3GPP) and edge computing, it is possible to offload real-time applications to nearby computing servers (edge or cloud) or even to nearby cars that have compute capabilities.
However, there are latency requirements that are to be fulfilled to make offloading the computing worthwhile.
US patent application US 2019/0364492 A1 relates to a system that uses a learning function to determine a user's movement, predict a route of the user and, based on the predicted radio conditions along the route, predict a latency of the communication of the user. However, the communication latency is only a portion of the overall latency incurred by offloading the computation.
US patent application US 2020/0284883 A1 relates to a LIDAR system. In the application, a binary exponential back-off is used to determine a contention window size in a LIDAR ranging medium access scheme.
The present disclosure relates to the question of how often it is necessary for a nearby computing server to be queried and to check if the connection meets a latency requirement. To determine if the nearby compute environment is meeting the latency requirement for the real-time application to be offloaded, the car sends packets to a server in close proximity to the car. Since cars move, the conditions, and thus also whether the offloading meets the latency requirements, change dynamically. In general, a periodic querying of the nearby compute environment is performed to constantly re-evaluate the resulting latency. However, such an approach may use CPU (Central Processing Unit) resources inefficiently and increase network traffic. For example, if a car connects to a nearby compute server and checks whether the compute offloading meets the latency requirement, the latency limit might not be met. When using periodic querying, the car will query the edge server again after a pre-determined, fixed interval, in order to verify whether the compute offloading meets the latency requirement.
The proposed concept introduces an improved technique for querying the nearby server to check the compute latency between the client and server.
The present invention is based on the finding that a periodic check of the compute latency at a fixed interval results in a high CPU usage and generates unnecessary network traffic. In particular, periodically querying the backend can cause network overload in scenarios with many vehicles attempting to offload computation to the backend. In the proposed concept, instead of using a constant back-off time, a back-off time that is based on a number of subsequent unsuccessful attempts is proposed, e.g., an exponential back-off for congestion control.
Various aspects of the present disclosure relate to a method for a vehicle. The method comprises attempting to offload a computation of a vehicular function to a remote server via a wireless communication link. The method comprises determining a performance of the computation being offloaded to the remote server. The method comprises determining, whether the offloading of the computation is deemed to be successful or unsuccessful, based on the performance of the computation being offloaded to the remote server. The method comprises cancelling the offloading if the offloading is deemed to be unsuccessful. The method comprises reattempting to offload the computation of the vehicular function to the remote server after a pre-defined waiting time. The waiting time is based on a number of subsequent unsuccessful attempts. By adjusting the waiting time based on the number of subsequent unsuccessful attempts, the CPU usage and network traffic may be reduced, in particular in scenarios with an increased number of unsuccessful attempts.
In general, offloading a computation comprises a number of tasks, as vehicular computations often rely on local sensor data, with the result of the computation being required within the vehicle. Accordingly, offloading the computation may comprise transmitting working data to the remote server and receiving a result of the computation from the remote server. Above, the latency, or computation latency, has been named as a major factor determining the performance of the offloading of the computation. However, the term “latency” is not limited to the communication roundtrip time between the vehicle and the remote server. Instead, crucially, the time required by the remote server to obtain the working data, perform the computation, and respond with the result of the computation may constitute the computation latency. Accordingly, the performance may be based at least on a time required for transmitting the working data, a time required for waiting for the computation to be performed by the remote server, and a time required for receiving the result from the remote server.
As outlined above, the performance may be based on an overall latency of the computation. This overall latency may compete with the overall latency of the computation when the computation is being performed by a processor of the vehicle.
As outlined above, the overall latency may comprise at least a time required for transmitting the working data, a time required waiting for the computation to be performed by the remote server, and a time required for receiving the result from the remote server. As is evident, the latency of the communication between the vehicle and the remote server may contribute to the overall latency, influencing the time required to transmit the working data and the result of the computation, with the time required for the computation being added on top of the communication latency.
Moreover, the overall latency may be compared with a latency threshold, which may be used to determine, whether the offloading is deemed to be successful or unsuccessful. For example, the attempt may be deemed to be unsuccessful if the overall latency violates a latency threshold.
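The composition of the overall latency and the threshold check can be illustrated with the following sketch. It is an editorial example only; in practice the component times would be measured rather than passed in, and all names are assumed.

```python
def overall_latency(t_transmit, t_compute, t_receive):
    """Overall latency of the offloaded computation: time to transmit the
    working data, time waiting for the remote server to perform the
    computation, and time to receive the result."""
    return t_transmit + t_compute + t_receive

def offloading_successful(t_transmit, t_compute, t_receive, latency_threshold):
    """The attempt is deemed unsuccessful if the overall latency violates
    the latency threshold."""
    return overall_latency(t_transmit, t_compute, t_receive) <= latency_threshold
```

Note that the communication roundtrip time only accounts for the first and third terms; the server-side computation time is added on top, which is why the overall latency, not the network latency alone, is compared against the threshold.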
In general, the waiting time may increase with each unsuccessful attempt. The more subsequent unsuccessful attempts, the less likely it is that the next attempt is successful. Thus, an approach with increasing waiting times may reduce the CPU usage and bandwidth usage in particular in cases, where the likelihood of a successful subsequent attempt is low.
For example, the waiting time may double with each unsuccessful attempt. Such an exponential back-off strategy has proven to work in the context of transmission speed control in the Transmission Control Protocol (TCP). In various examples, the waiting time resets to an initial value after a successful attempt. After a successful attempt, the likelihood is high that a subsequent attempt is successful again (e.g., after a short communication or computation hiccup).
In general, cancelling the offloading may comprise performing the computation using one or more processors of the vehicle. In other words, if the performance of the offloaded computation is deemed to be unsatisfactory, the computation may be performed by the vehicle instead of by the remote server.
As outlined above, the computation may be a computation that is based on working data being provided by the vehicle. This leads to increased overhead when the computation is being offloaded to the remote server, as the working data is being transmitted to the remote server.
In various examples, the computation is being performed by an edge application server as defined by the 3rd Generation Partnership Project, 3GPP. Edge application servers are multipurpose application servers that can be used for various types of computations.
Various aspects of the present disclosure relate to a corresponding apparatus for a vehicle. The apparatus comprises an interface for communicating with the remote server via a wireless communication link. The apparatus comprises one or more processors, configured to perform the above method.
Various aspects of the present disclosure relate to a vehicle comprising the apparatus.
Various aspects of the present disclosure relate to a computer program having a program code for performing the above method, when the computer program is executed on a computer, a processor, or a programmable hardware component.
Some examples of apparatuses and/or methods will be described in the following by way of example only, and with reference to the accompanying figures, in which
Fig. 1 shows a schematic drawing of a vehicular computation being performed by a remote server;
Figs. 2a and 2b show flow charts of examples of a method for a vehicle; Fig. 2c shows a block diagram of an example of an apparatus for a vehicle and of the vehicle comprising the apparatus;
Fig. 3 shows a schematic diagram of the performance of the computation during various phases;
Fig. 4a shows a schematic diagram of a flow of data according to the proposed concept; and
Fig. 4b shows a schematic diagram of a flow of data according to a different concept.
Some examples are now described in more detail with reference to the enclosed figures. However, other possible examples are not limited to the features of these embodiments described in detail. Other examples may include modifications of the features as well as equivalents and alternatives to the features. Furthermore, the terminology used herein to describe certain examples should not be restrictive of further possible examples.
Throughout the description of the figures same or similar reference numerals refer to same or similar elements and/or features, which may be identical or implemented in a modified form while providing the same or a similar function. The thickness of lines, layers and/or areas in the figures may also be exaggerated for clarification.
When two elements A and B are combined using an 'or', this is to be understood as disclosing all possible combinations, i.e., only A, only B as well as A and B, unless expressly defined otherwise in the individual case. As an alternative wording for the same combinations, "at least one of A and B" or "A and/or B" may be used. This applies equivalently to combinations of more than two elements.
If a singular form, such as “a”, “an” and “the” is used and the use of only a single element is not defined as mandatory either explicitly or implicitly, further examples may also use several elements to implement the same function. If a function is described below as implemented using multiple elements, further examples may implement the same function using a single element or a single processing entity. It is further understood that the terms "include", "including", "comprise" and/or "comprising", when used, describe the presence of the specified features, integers, steps, operations, processes, elements, components and/or a group thereof, but do not exclude the presence or addition of one or more other features, integers, steps, operations, processes, elements, components and/or a group thereof. As outlined above, the present disclosure relates to the question of how often it is necessary for a nearby computing server to be queried and to check if the connection, and moreover the overall computation latency, meets a performance requirement, such as a latency requirement.
In Fig. 1, an overall setting is shown. Fig. 1 shows a schematic drawing of a vehicular computation being performed by a remote server. Fig. 1 shows a compute environment 100, such as a compute edge cloud environment or a cloud compute environment, which is used as a remote server to remotely perform computations for a vehicle 200. The vehicle 200 transmits a latency check request to the compute environment 100, which transmits a latency check response.
As, in various examples of the present disclosure, the performance is based on the overall latency, which can be determined by attempting to offload the computation to the remote server, the latency check request may correspond to an attempt to offload the computation to the remote server, and the latency check response may correspond to the result of the computation.
In some systems, this check, or attempt at offloading, is re-attempted after a pre-defined, constant waiting time. The proposed concept, in contrast to this approach, uses a waiting time that is dependent on the number of subsequent unsuccessful attempts, in order to save CPU and wireless resources.
Figs. 2a and 2b show flow charts of examples of a method for a vehicle. The method comprises attempting 210 to offload a computation of a vehicular function to a remote server via a wireless communication link. The method comprises determining 220 a performance of the computation being offloaded to the remote server. The method comprises determining 230, whether the offloading of the computation is deemed to be successful or unsuccessful, based on the performance of the computation being offloaded to the remote server. The method comprises cancelling 240 the offloading if the offloading is deemed to be unsuccessful. The method comprises reattempting 210 to offload the computation of the vehicular function to the remote server after a pre-defined waiting time. The waiting time is based on a number of subsequent unsuccessful attempts. For example, the method may be performed by the vehicle.
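The sequence of claimed steps (attempting 210, determining performance 220, determining success 230, cancelling 240, re-attempting after a waiting time) can be sketched as the following loop. This is an illustrative assumption of one possible control flow, not the claimed implementation; all callables are placeholders supplied by the caller.

```python
import time

def run_offloading(attempt_offload, is_successful, compute_locally,
                   waiting_time, max_attempts=5):
    """Attempt to offload (210), determine the performance and whether the
    attempt succeeded (220/230), cancel and fall back to local compute on
    failure (240), and re-attempt after a waiting time that depends on the
    number of subsequent unsuccessful attempts."""
    failures = 0
    for _ in range(max_attempts):
        performance = attempt_offload()        # 210/220: offload and measure
        if is_successful(performance):         # 230: compare to requirement
            return ("offloaded", failures)
        compute_locally()                      # 240: cancel, compute in vehicle
        failures += 1
        time.sleep(waiting_time(failures))     # wait before re-attempting 210
    return ("local", failures)
```

The `waiting_time` callback is where the back-off policy is plugged in; the proposed concept uses an increasing, e.g. doubling, waiting time instead of a constant one.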
Fig. 2c shows a block diagram of an example of a corresponding apparatus 20 for a vehicle 200 and of the vehicle 200 comprising the apparatus. The apparatus 20 comprises an interface 22 for communicating with the remote server 100 via a wireless communication link. The apparatus 20 further comprises one or more processors 24, configured to perform the method of Figs. 2a and/or 2b. Optionally, the apparatus 20 further comprises a storage device 26. In general, the method may be performed by the one or more processors, with the help of the at least one interface (for communicating via the wireless communication link) and, optionally, with the help of the one or more storage devices (for storing and retrieving information). As shown in Fig. 2c, the one or more processors 24 may be coupled with the at least one interface 22 and with the one or more storage devices 26.
As outlined above, in the proposed concept, the performance of the computation being offloaded to the remote server is determined by attempting to offload the computation to the remote server. In a simple example, it may suffice that the remote server is instructed to perform the computation, with the result being provided to the vehicle. However, in general, the types of computation that benefit most from being offloaded rely on working data that is available locally in the vehicle. Moreover, this working data is often based on sensor data of the vehicle, and first has to be transmitted to the remote server if the remote server is to be used for performing the computation. Accordingly, the computation may be a computation that is based on working data being provided by the vehicle. In particular, the computation may be a computation that is based on sensor data being collected by the vehicle, e.g., based on environmental perception sensor data, such as camera sensor data, radar sensor data or lidar sensor data. Accordingly, the computation may be based on processing environmental perception sensor data of the vehicle. In order for the computation to be performed, the sensor data, or more generally the working data, which may comprise additional information, such as a state of an autonomous or semi-autonomous driving system, may be provided to the remote server. Accordingly, the remote server may be supplied, by the vehicle, with the working data that is required for performing the computation.
In effect, as shown in Fig. 2b, offloading the computation may comprise transmitting 212 working data to the remote server (via the wireless communication link) and receiving 214 a result of the computation from the remote server (via the wireless communication link). Thus, the attempt at offloading also comprises the tasks of transmitting 212 the working data and receiving 214 the result from the remote server. When the offloading is being attempted, the vehicle may use a two-pronged strategy - while offloading is being attempted, the same computation may be performed by a processor of the vehicle, so the result of the computation is continuously available in the vehicle. Accordingly, the method may comprise performing the computation using a processor of the vehicle while the offloading is being attempted (e.g., until the offloading is deemed to be successful for a pre-defined amount of time).
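The two-pronged strategy can be sketched as follows, using a background thread as one possible realization. This is a minimal editorial illustration under assumed names; the disclosure does not prescribe a threading model or the 0.1 s join timeout used here.

```python
import threading

def compute_locally(data):
    # placeholder for the vehicle-side computation
    return sum(data)

def attempt_offload_with_local_fallback(data, offload_fn):
    """Run the local computation in parallel with the offload attempt so a
    result is continuously available; prefer the remote result if it arrives."""
    remote_result = {}

    def offload():
        try:
            remote_result["value"] = offload_fn(data)
        except Exception:
            pass  # offload failed; the local result is used instead

    t = threading.Thread(target=offload)
    t.start()
    local = compute_locally(data)   # local result is always produced
    t.join(timeout=0.1)             # bounded wait for the remote result
    return remote_result.get("value", local)
```

The design point illustrated here is that the vehicle never waits indefinitely for the remote server: the local computation proceeds regardless, and the remote result replaces it only when it is actually available.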
In general, the computation being performed by the remote server may have several advantages - for one, the remote server may have a higher amount of computational resources, enabling a more precise or a faster computation. Additionally or alternatively, the remote server may have access to working data from multiple vehicles, enabling the inclusion of additional working data in the computation. Furthermore, by offloading the computation, the vehicle may be enabled to perform, or expand, other computations. For example, the computation may be a real-time computation, i.e., a computation being used in a time-sensitive manner, with a delayed computation being (mostly) useless for the vehicle.
Once the attempt at offloading has been initiated and the first results are being received (or at least expected) by the vehicle, the performance of the offloading is determined 220. As has been outlined above, the performance of the computation primarily relates to an overall latency of the offloading - with the latency going beyond the communication roundtrip time between the vehicle and the remote server, by including the time it takes to transmit the working data to the remote server, the time required by the remote server to perform the computation, and the time required for transmitting the result from the remote server to the vehicle. Thus, the performance may be based at least on the time required for transmitting the working data, the time required for waiting for the computation to be performed by the remote server, and the time required for receiving the result from the remote server. These three time intervals may be part of, or form, the overall latency of the computation, which is larger than the mere communication roundtrip time between the vehicle and the remote server. In other words, the overall latency may comprise at least the time required for transmitting the working data, the time required for waiting for the computation to be performed by the remote server, and the time required for receiving the result from the remote server. Additional latencies may be included as well, such as the time it takes to prepare and encode the working data for transmission and the time it takes to decode the result. Accordingly, the performance may be based on an overall latency of the computation. For example, the performance may be determined by measuring the overall latency of the computation, from the time the working data is transmitted (or prepared) to the time the result is received (or decoded).
Consequently, the performance may be computed without requiring additional packets being transmitted in addition to the communication required for offloading the computation.
Based on the determined performance, a determination 230 is made on whether the attempt is deemed to be successful or unsuccessful. For this purpose, the performance, and in particular the overall latency, may be compared to a latency threshold. In other words, the attempt is deemed to be unsuccessful if the overall latency violates a latency threshold. Consequently, the attempt may be deemed to be successful if the overall latency is in line with the latency threshold. For example, the determination 220 of the performance and the determination 230 of whether the attempt is deemed to be successful or unsuccessful may be continuously repeated (as indicated by the arrow in Figs. 2a and 2b leading from block 230 to block 220). In other words, the method may comprise continuously monitoring the performance of the offloaded computation and deeming the offloading to be unsuccessful if, or when/once, the performance violates the latency threshold. An example will be shown in connection with Fig. 3.
If the offloading is deemed to be unsuccessful (e.g., instantly upon first determination of the performance, or at a later stage, after the computation has been offloaded for some time), the offloading is cancelled 240. This may be done by ceasing the transmission of the working data to the remote server, and performing the computation using a processor in the vehicle. In other words, cancelling the offloading may comprise performing 242 the computation using one or more processors of the vehicle. For example, some of the calculations may be repeated, e.g., to recover a computation initially being offloaded to the remote server.
As indicated by the arrow between blocks 240 and 210 in Figs. 2a and 2b, the method comprises re-attempting 210 to offload the computation of the vehicular function to the remote server after a pre-defined waiting time. In contrast to a periodic approach, the waiting time is based on a number of subsequent unsuccessful attempts (at offloading the computation). Again, an example of the concept is shown in more detail in connection with Fig. 3. The concept is based on increasing the waiting time with each attempt. For example, between the first and second attempt, the waiting time may be a first time interval. Between the second and third attempt, the waiting time may be a second time interval, with the second time interval being longer than the first time interval. Between the third and fourth attempt, the waiting time may be a third time interval, with the third time interval being longer than the second time interval. In consequence, the waiting time increases with each unsuccessful attempt. For example, as shown in Fig. 3, the waiting time may double with each unsuccessful attempt. In the example of Fig. 3, the first time interval is 2⁰ · T long, the second time interval is 2¹ · T long, the third time interval is 2² · T long, and so on. In general, the n-th time interval may be 2ⁿ⁻¹ · T long.
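The doubling schedule just described (T, 2T, 4T, 8T, ...) can be expressed as a one-line helper. This is an illustrative sketch; the base interval T is a free parameter of the scheme.

```python
def nth_waiting_interval(n, base_t=1.0):
    """Length of the n-th waiting interval between attempts:
    2**(n - 1) * T, i.e. T, 2T, 4T, 8T, ... for n = 1, 2, 3, 4, ..."""
    return (2 ** (n - 1)) * base_t
```

After a successful attempt, the attempt counter n would be reset, so the next unsuccessful attempt starts again from the initial interval T.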
After a successful attempt at offloading, e.g., when at least a pre-defined number of computations have been performed without violating the latency threshold, the waiting time may be reset to an initial value. In other words, the waiting time may reset to an initial value after a successful attempt.
In general, any server that is remote from the vehicle (i.e., outside the vehicle) and that can be communicated with via a wireless communication link can be the remote server 100 within the context of the present disclosure. In particular, however, the proposed concept is tailored to the use with so-called edge servers (or edge cloud servers), i.e., remote servers that are placed in close proximity to the base stations being used for communication via the wireless communication link. For example, the remote server may be a server that is co-located with a base station of a mobile communication system. Alternatively, the remote server may be located within a core network of a mobile communication system being used for the wireless communication link. For example, the computation may be performed by an edge application server as defined by the 3rd Generation Partnership Project, 3GPP.
Embodiments may be compliant to or even comprised in certain standard specifications, such as those specified by the 3GPP. For example, the remote server may be an edge application server as introduced in 3GPP TR 23.758. For example, provisioning of the remote server to perform the computation may be performed via an edge enabler server and a corresponding service in the vehicle. The working data and the result may be transmitted, as application data traffic, via the 3GPP mobile communication system, between the vehicle and the remote server. For example, the architecture for support of edge computing of TS 23.548 and TS 23.501 may be used for implementing the remote server, the offloading of the computation and/or the communication between the vehicle and the remote server.
The at least one interface 22 may correspond to one or more inputs and/or outputs for receiving and/or transmitting information, which may be in digital (bit) values according to a specified code, within a module, between modules or between modules of different entities. For example, the at least one interface 22 may comprise interface circuitry configured to receive and/or transmit information. For example, the at least one interface 22 may be configured to communicate via the wireless communication link using a mobile communication system. In other words, the wireless communication link may be a wireless communication link according to a mobile communication system. In particular, the wireless communication link may be based on cellular communication.
In general, the mobile communication system may, for example, correspond to one of the Third Generation Partnership Project (3GPP)-standardized mobile communication networks, where the term mobile communication system is used synonymously to mobile communication network. The mobile or wireless communication system may correspond to, for example, a 5th Generation system (5G), a Long-Term Evolution (LTE), an LTE-Advanced (LTE-A), High Speed Packet Access (HSPA), a Universal Mobile Telecommunication System (UMTS) or a UMTS Terrestrial Radio Access Network (UTRAN), an evolved-UTRAN (e-UTRAN), a Global System for Mobile communication (GSM) or Enhanced Data rates for GSM Evolution (EDGE) network, a GSM/EDGE Radio Access Network (GERAN), or mobile communication networks with different standards, for example, a Worldwide Interoperability for Microwave Access (WiMAX) network IEEE 802.16 or Wireless Local Area Network (WLAN) IEEE 802.11, generally an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Time Division Multiple Access (TDMA) network, a Code Division Multiple Access (CDMA) network, a Wideband-CDMA (WCDMA) network, a Frequency Division Multiple Access (FDMA) network, a Spatial Division Multiple Access (SDMA) network, etc.
In various examples, the mobile communication system may be a vehicular mobile communication system. Such communication may be carried out using 3GPP systems, such as 3G, 4G, NR and beyond, adapted to vehicular Vehicle-to-Everything (V2X) communication.
In embodiments, the one or more processors 24 may be implemented using one or more processing units, one or more processing devices, any means for processing, such as a processor, a computer or a programmable hardware component being operable with accordingly adapted software. In other words, the described function of the one or more processors 24 may also be implemented in software, which is then executed on one or more programmable hardware components / processors. Such hardware components may comprise a general-purpose processor, a Digital Signal Processor (DSP), a micro-controller, etc.
In at least some embodiments, the one or more storage devices 26 may comprise at least one element of the group of a computer readable storage medium, such as a magnetic or optical storage medium, e.g., a hard disk drive, a flash memory, Floppy-Disk, Random Access Memory (RAM), Programmable Read Only Memory (PROM), Erasable Programmable Read Only Memory (EPROM), an Electronically Erasable Programmable Read Only Memory (EEPROM), or a network storage.
For example, the vehicle 200 may be a land vehicle, a road vehicle, a car, an automobile, an off-road vehicle, a motor vehicle, a truck, or a lorry.
More details and aspects of the method, apparatus, computer program or vehicle are mentioned in connection with the proposed concept or one or more examples described above or below (e.g., Fig. 1 to Fig. 4b). The method, apparatus, computer program or vehicle may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept or one or more examples described above or below.
In Fig. 3, a practical example of the proposed concept is given. Fig. 3 shows a schematic diagram of the performance of the computation during various phases. Fig. 3 shows the overall (or end-to-end) latency (E2EL) threshold 302, the overall latency of the computation (E2EL) 304, a counter 306 and the exponential count(down) 308. The E2EL is the end-to-end latency, which may correspond to the overall latency of the computation, which may contain all elements of latency possible from remote computing. The E2EL may contain computation delays, network delays, application layer delays, scheduling delays, processor load delays, codec delays, hardware accelerator delays and other unknown delays. The E2EL threshold (i.e., the latency threshold) is the required latency below which the real-time application can be offloaded to the nearby compute environment.
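The composition of the E2EL and the comparison against the threshold can be illustrated with a short sketch (the component names and all numeric values are illustrative assumptions, not values from the application):

```python
def e2el(delays):
    """End-to-end latency as the sum of all delay components."""
    return sum(delays.values())

def may_offload(delays, threshold):
    """Offloading is admissible only if the E2EL stays below the threshold."""
    return e2el(delays) < threshold

# Hypothetical per-component delays, in milliseconds:
delays = {
    "computation": 12.0,
    "network": 8.5,
    "application_layer": 1.5,
    "scheduling": 2.0,
    "codec": 3.0,
}
print(may_offload(delays, threshold=30.0))  # True: E2EL of 27.0 ms < 30 ms
```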
As seen in Fig. 3, there are two parameters to the waiting time - one is a counter (c) and the other is an exponential count (e). The relation between c and e is given by e = 2^c. The E2EL shown in Fig. 3 is based on the respective computing environment being used. For example, if the nearby server (edge) is computing, the E2EL corresponds to the nearby-server-computed results, and if the car is computing, the E2EL corresponds to the car-computed results. In this strategy, if an E2EL deviation occurs in the edge compute state, the system falls back into the car compute state (i.e., the computation is being performed by the vehicle). The counter (c) is incremented during this transition. The system then waits in the car compute state for e = 2^c units of time (e.g., computation/clock cycles). Once the wait period is over, it queries the edge node again. If the attempt fails again, the counter c is incremented once more, and this time the system waits twice as long as before in the car compute state before it queries the edge again. Successive failures in the edge compute state thus increase the wait time in the car compute state exponentially. In doing so, the number of transitions is reduced significantly. If the edge recovers, the values of c and e are reset, e.g., to zero.
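The relation e = 2^c and the reset-on-recovery behaviour can be sketched as follows (a minimal illustration; the class and method names are assumptions, not taken from the application):

```python
class BackoffState:
    """Tracks the counter c and the exponential count e = 2^c."""

    def __init__(self):
        self.c = 0  # number of successive failed edge-compute attempts

    @property
    def e(self):
        # Wait e = 2^c time units in the car compute state before
        # querying the edge node again.
        return 2 ** self.c

    def edge_failed(self):
        # E2EL deviation in the edge compute state: back off further.
        self.c += 1

    def edge_recovered(self):
        # Edge meets the latency requirement again: reset the counter.
        self.c = 0
```

Three successive failures yield waiting times of 2, 4 and 8 units; a single success resets the wait to one unit.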
Fig. 3 shows a series of phases. In a first phase 310, the computation is performed, for a time interval T (2^C = 1, with the counter C being 0), within the vehicle (car), with the overall latency being below the latency threshold. In a second phase 315 following the first phase, an attempt is made at offloading the computation to an edge server, with the overall latency increasing above the overall latency threshold. In a third phase 320 following the second phase, the computation is performed, for a time interval 2^C·T = 2T (C = 1), in the vehicle, with the overall latency being below the latency threshold. In a fourth phase 325 following the third phase, an attempt is made at offloading the computation to an edge server, with the overall latency increasing above the overall latency threshold. In a fifth phase 330 following the fourth phase, the computation is performed, for a time interval 2^C·T = 4T (C = 2), in the vehicle, with the overall latency being below the latency threshold. In a sixth phase 335 following the fifth phase, an attempt is made at offloading the computation to an edge server, with the overall latency increasing above the overall latency threshold. In a seventh phase 340 following the sixth phase, the computation is performed, for a time interval 2^C·T = 8T (C = 3), in the vehicle, with the overall latency being below the latency threshold. In an eighth phase 345 following the seventh phase, an attempt is made at offloading the computation to an edge server, with the overall latency increasing above the overall latency threshold. In a ninth phase 350 following the eighth phase, the computation is performed, for a time interval 2^C·T = 16T (C = 4), in the vehicle, with the overall latency being below the latency threshold. In a tenth phase 355 following the ninth phase, an attempt is made at offloading the computation to an edge server, with the overall latency staying under the latency threshold for some time.
Thus, the offloading is deemed to be successful for a time. At the end of the tenth phase, the overall latency increases above the latency threshold again, such that the offloading is cancelled again. In an eleventh phase 360 following the tenth phase, the computation is performed, for a time interval 2^C·T = T (C = 0), in the vehicle, with the overall latency being below the latency threshold. It is evident that the counter C has been reset after the successful attempt. In a twelfth phase 365 following the eleventh phase, an attempt is made at offloading the computation to an edge server, with the overall latency increasing above the overall latency threshold. In a thirteenth phase 370 following the twelfth phase 365, the computation is performed, for a time interval 2^C·T = 2T (C = 1), in the vehicle, with the overall latency being below the latency threshold (...).
The example shown in Fig. 3 is inspired by the exponential backoff concept known from congestion control in TCP. Based on the proposed concept, the vehicle (in the following denoted, without loss of generality, car) queries a nearby compute environment in a first step. If the latency requirement is not met, the car queries again after a stipulated time period ‘T’. In case, at time period T, the latency requirements are still not met, the car waits another 2T units to query again. If the requirements are still not met at ‘2T’, the next query happens after ‘4T’. Therefore, in Fig. 3, the time between queries increases exponentially, continuing with ‘8T’, ‘16T’ etc. If the requirements are met in between, the waiting time is reset, and any new query starts again from T time units.
Fig. 4a shows a schematic diagram of a flow of data according to the proposed concept. In Fig. 4a, the apparatus 20 of Fig. 2c is shown, which is used to execute an application client 42, an application server 44 and an aperiodic delta modulation logic 46. The application client 42 and application server 44 are the two blocks available locally. The application client collects the raw data from the sensors and provides it to the application server 44 for computation. The computed results are sent back to the application client 42, which forwards the results to the actuators. In case the computation has to be offloaded to the external server 100, the raw data is sent over to the external server, and the latency requirements are checked before a complete offloading to the external server is triggered. The application client uses the exponential backoff behaviour of the aperiodic delta modulation logic 46, where it checks aperiodically whether the latency requirements are met or not.
Fig. 4b shows a schematic diagram of a flow of data according to a different concept. Compared to the example shown in Fig. 4a, a periodic querying logic 48 is used instead of the aperiodic logic 46 used in Fig. 4a. In this example, the application client checks the network latency in a periodic manner, i.e., the latency is checked at regular time intervals. This results in unnecessary network traffic, and the system may be slowed down, which decreases the efficiency.
In conclusion, the system of Fig. 4a was found to be more efficient, with the frequency of querying the network latency being reduced. Raw data is put on the network aperiodically to check if the latency requirements are met. Hence, the substantial network traffic caused by the transmission of raw data is reduced in a situation where many cars are trying to offload computation.
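The efficiency gain can be illustrated by counting latency probes over a period in which the edge persistently fails to meet the requirement (a hypothetical comparison, assuming one probe per base interval T for the periodic scheme):

```python
def queries_periodic(n_intervals):
    """Periodic querying (Fig. 4b): one latency probe every interval."""
    return n_intervals

def queries_backoff(n_intervals):
    """Aperiodic exponential backoff (Fig. 4a) under persistent failure:
    waiting times of 1, 2, 4, 8, ... intervals between probes."""
    probes, t, wait = 0, 0, 1
    while t < n_intervals:
        probes += 1   # one probe puts raw data on the network once
        t += wait     # then wait in the car compute state
        wait *= 2     # double the wait after each failed probe
    return probes

print(queries_periodic(64), queries_backoff(64))  # 64 vs. 7
```

Over 64 base intervals the periodic scheme transmits raw data 64 times, while the backoff scheme transmits it only 7 times, which matches the reduction in network traffic described above.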
The proposed concept may also be used in other devices, such as smartphones, or basically any device that transmits data from a client to a server over the internet.
The aspects and features described in relation to a particular one of the previous examples may also be combined with one or more of the further examples to replace an identical or similar feature of that further example or to additionally introduce the features into the further example.
Examples may further be or relate to a (computer) program including a program code to execute one or more of the above methods when the program is executed on a computer, processor or other programmable hardware component. Thus, steps, operations or processes of different ones of the methods described above may also be executed by programmed computers, processors or other programmable hardware components. Examples may also cover program storage devices, such as digital data storage media, which are machine-, processor- or computer-readable and encode and/or contain machine-executable, processor-executable or computer-executable programs and instructions. Program storage devices may include or be digital storage devices, magnetic storage media such as magnetic disks and magnetic tapes, hard disk drives, or optically readable digital data storage media, for example. Other examples may also include computers, processors, control units, (field) programmable logic arrays ((F)PLAs), (field) programmable gate arrays ((F)PGAs), graphics processor units (GPU), application-specific integrated circuits (ASICs), integrated circuits (ICs) or system-on-a-chip (SoCs) systems programmed to execute the steps of the methods described above. It is further understood that the disclosure of several steps, processes, operations or functions disclosed in the description or claims shall not be construed to imply that these operations are necessarily dependent on the order described, unless explicitly stated in the individual case or necessary for technical reasons. Therefore, the previous description does not limit the execution of several steps or functions to a certain order. Furthermore, in further examples, a single step, function, process or operation may include and/or be broken up into several sub-steps, - functions, -processes or -operations.
If some aspects have been described in relation to a device or system, these aspects should also be understood as a description of the corresponding method. For example, a block, device or functional aspect of the device or system may correspond to a feature, such as a method step, of the corresponding method. Accordingly, aspects described in relation to a method shall also be understood as a description of a corresponding block, a corresponding element, a property or a functional feature of a corresponding device or a corresponding system.
The following claims are hereby incorporated in the detailed description, wherein each claim may stand on its own as a separate example. It should also be noted that although in the claims a dependent claim refers to a particular combination with one or more other claims, other examples may also include a combination of the dependent claim with the subject matter of any other dependent or independent claim. Such combinations are hereby explicitly proposed, unless it is stated in the individual case that a particular combination is not intended. Furthermore, features of a claim should also be included for any other independent claim, even if that claim is not directly defined as dependent on that other independent claim.
List of reference signs
20 Apparatus
22 Interface
24 Processor
26 Storage device
40 Apparatus
42 Application client
44 Application server
46 Delta modulation logic (aperiodic)
48 Periodic querying logic
100 Remote server
200 Vehicle
210 Attempting to offload a computation of a vehicular function
212 Transmitting working data
214 Receiving a result of the computation
220 Determining a performance of the computation being offloaded
230 Determining, whether the offloading is deemed to be successful
240 Cancelling the offloading
242 Performing the computation using a processor of the vehicle
302 End-to-End Latency threshold
304 End-to-End Latency
306 Counter
308 Exponential count
310-370 Phases

Claims

1. A method for a vehicle, the method comprising:
Attempting (210) to offload a computation of a vehicular function to a remote server via a wireless communication link;
Determining (220) a performance of the computation being offloaded to the remote server;
Determining (230), whether the offloading of the computation is deemed to be successful or unsuccessful, based on the performance of the computation being offloaded to the remote server;
Cancelling (240) the offloading if the offloading is deemed to be unsuccessful; and
Re-attempting (210) to offload the computation of the vehicular function to the remote server after a pre-defined waiting time, wherein the waiting time is based on a number of subsequent unsuccessful attempts.
2. The method according to claim 1, wherein offloading the computation comprises transmitting (212) working data to the remote server and receiving (214) a result of the computation from the remote server.
3. The method according to claim 2, wherein the performance is based at least on a time required for transmitting the working data, a time required for waiting for the computation to be performed by the remote server, and a time required for receiving the result from the remote server.
4. The method according to one of the claims 1 or 3, wherein the performance is based on an overall latency of the computation.
5. The method according to claim 4, wherein the overall latency comprises at least a time required for transmitting the working data, a time required waiting for the computation to be performed by the remote server, and a time required for receiving the result from the remote server.
6. The method according to one of the claims 4 or 5, wherein the attempt is deemed to be unsuccessful if the overall latency violates a latency threshold.
7. The method according to one of the claims 1 to 4, wherein the waiting time increases with each unsuccessful attempt.
8. The method according to claim 5, wherein the waiting time doubles with each unsuccessful attempt.
9. The method according to one of the claims 1 to 6, wherein the waiting time resets to an initial value after a successful attempt.
10. The method according to one of the claims 1 to 9, wherein cancelling the offloading comprises performing (242) the computation using one or more processors of the vehicle.
11. The method according to one of the claims 1 to 10, wherein the computation is a computation that is based on working data being provided by the vehicle.
12. The method according to one of the claims 1 to 11, wherein the computation is being performed by an edge application server as defined by the 3rd Generation Partnership Project, 3GPP.
13. An apparatus (20) for a vehicle, the apparatus comprising:
An interface (22) for communicating with a remote server (100) via a wireless communication link; and
One or more processors (24), configured to perform the method according to one of the claims 1 to 12.
14. A vehicle (200) comprising the apparatus (20) according to claim 13.
15. A computer program having a program code for performing the method of one of the claims 1 to 12, when the computer program is executed on a computer, a processor, or a programmable hardware component.
EP21773591.9A 2021-09-09 2021-09-09 Apparatus, method and computer program for a vehicle Pending EP4378205A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2021/074793 WO2023036417A1 (en) 2021-09-09 2021-09-09 Apparatus, method and computer program for a vehicle

Publications (1)

Publication Number Publication Date
EP4378205A1 true EP4378205A1 (en) 2024-06-05

Family

ID=77864593

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21773591.9A Pending EP4378205A1 (en) 2021-09-09 2021-09-09 Apparatus, method and computer program for a vehicle

Country Status (3)

Country Link
EP (1) EP4378205A1 (en)
CN (1) CN117941410A (en)
WO (1) WO2023036417A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102715376B1 (en) 2016-12-30 2024-10-11 인텔 코포레이션 Methods and devices for radio communications
CA3239810A1 (en) 2019-03-08 2020-09-17 Leddartech Inc. Method, system and computer readable medium for evaluating influence of an action performed by an external entity

Also Published As

Publication number Publication date
WO2023036417A1 (en) 2023-03-16
CN117941410A (en) 2024-04-26


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20240227

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR