WO2022039292A1 - Edge computing method, electronic device, and system for providing cache update and bandwidth allocation for wireless virtual reality

Edge computing method, electronic device, and system for providing cache update and bandwidth allocation for wireless virtual reality

Info

Publication number
WO2022039292A1
WO2022039292A1 (PCT/KR2020/011014)
Authority
WO
WIPO (PCT)
Prior art keywords
fov
cache
communication
electronic device
information
Prior art date
Application number
PCT/KR2020/011014
Other languages
English (en)
Korean (ko)
Inventor
최완
이재덕
Original Assignee
서울대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 서울대학교산학협력단
Priority to PCT/KR2020/011014 priority Critical patent/WO2022039292A1/fr
Publication of WO2022039292A1 publication Critical patent/WO2022039292A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815Cache consistency protocols
    • G06F12/0831Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0868Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0873Mapping of cache memory to specific storage devices or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2183Cache memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2385Channel allocation; Bandwidth allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof

Definitions

  • The present invention relates to cache update and bandwidth allocation for wireless virtual reality, and to an edge computing method capable of providing content with ultra-low latency in wireless virtual reality, as well as an electronic device and an edge computing system that provide content in the same way.
  • the present invention intends to propose a cache update strategy and bandwidth allocation technique for minimizing the average latency of wireless virtual reality by utilizing the cache and computing power of the edge node (base station) and the virtual reality device.
  • An object of the present invention is to provide an electronic device implementing a cache update technique for minimizing the average delay time of wireless virtual reality.
  • An object of the present invention is to provide an edge computing method and system for implementing a bandwidth allocation technique for minimizing the average delay time of wireless virtual reality.
  • The electronic device includes a buffer configured to store current Field of View (FoV) information corresponding to a current viewpoint of a user in a first frame of content including a series of frames; a cache configured to store at least one piece of candidate FoV information corresponding to an expected viewpoint of the user, predicted from the current viewpoint, in a second frame of the content; a communication unit configured to provide a connection to an external edge node through a communication channel; and a processor.
  • The processor performs a first communication for acquiring the current FoV information from the edge node through the communication unit with a first bandwidth determined based on a predetermined bandwidth allocation rate, and performs a second communication for acquiring the at least one piece of candidate FoV information from the edge node through the communication unit with a second bandwidth determined based on the bandwidth allocation rate.
  • The edge computing method may include: obtaining a predetermined bandwidth allocation rate for a communication channel with an edge node; performing a first communication for obtaining, from the edge node, FoV information corresponding to the current viewpoint of the user with a first bandwidth determined based on the bandwidth allocation rate; and performing a second communication for obtaining, from the edge node, at least one piece of candidate FoV information corresponding to an expected viewpoint predicted from the current viewpoint with a second bandwidth determined based on the bandwidth allocation rate.
  • An edge computing system includes an edge node that stores FoV information of VR content, and an electronic device. The electronic device includes a buffer configured to store current FoV information corresponding to the current viewpoint of a user in a first frame of content including a series of frames, a cache, a communication unit configured to provide a connection to the edge node, and a processor. The processor may be configured to perform a first communication for obtaining the current FoV information from the edge node through the communication unit with a first bandwidth determined based on a predetermined bandwidth allocation rate, and to perform a second communication for obtaining at least one piece of candidate FoV information from the edge node through the communication unit with a second bandwidth determined based on the bandwidth allocation rate.
  • Another system and a computer-readable recording medium storing a computer program for executing the method may be further provided.
  • FIG. 1 is an exemplary diagram of an edge computing system environment including an edge node and an electronic device according to an embodiment.
  • FIG. 2 is a block diagram of an electronic device according to an embodiment.
  • FIG. 3 is a diagram for explaining a cache update strategy according to an embodiment.
  • FIG. 4 is a diagram for explaining a cache update strategy according to an embodiment.
  • FIG. 5 is a diagram for explaining a cache update strategy according to an embodiment.
  • FIG. 6 is a diagram for explaining a bandwidth allocation scheme according to an embodiment.
  • FIG. 7A is a diagram for explaining a delay time in case of a cache hit according to an embodiment.
  • FIG. 7B is a diagram for explaining a delay time in case of a cache miss according to an embodiment.
  • FIG. 8 is a flowchart illustrating an edge computing method according to an embodiment.
  • FIG. 9 is a flowchart illustrating an edge computing method according to an embodiment.
  • FIG. 10 is a table showing a simulation environment of an edge computing method according to an embodiment.
  • FIG. 11 is a graph showing a simulation result of a 2D update scenario.
  • FIG. 12 is a graph showing a simulation result of a 3D update scenario.
  • FIG. 13 is pseudo code for finding a value of the bandwidth allocation rate in a 2D update scenario.
  • The wireless virtual reality framework in which the edge computing system according to the embodiment operates first extracts the 2D FoV of the requested viewpoint of a frame through a pre-processing process, and then renders the frame after a post-processing process of projecting the 2D FoV into the 3D FoV.
  • The edge computing method according to the embodiment derives the average delay time in this wireless virtual reality framework in consideration of the probability of data transmission failure over an unstable wireless channel, and proposes a bandwidth allocation technique and a cache update strategy that updates the cache of the virtual reality device through viewpoint prediction.
  • the present invention deals with an efficient method of using wireless communication, cache, and computing resources to reduce latency in virtual reality applications requiring very short latency. Specifically, it is a technology related to a cache update strategy and bandwidth allocation technique for minimizing the average delay time of wireless virtual reality.
  • The cache update strategy updates the cache of the virtual reality device with viewpoints expected to be requested by the user in the next frame; that is, it reconfigures the cache of the virtual reality device for each frame.
  • The bandwidth allocation technique divides the communication bandwidth into two parts: one part for downloading the currently required field of view (FoV) and storing it in the buffer of the virtual reality device, and another part for downloading FoVs of the next frame in advance to update the cache of the virtual reality device.
  • In other words, the bandwidth is divided into a part used to download, from the base station, the FoV of the viewpoint watched by the user of the virtual reality device in the current frame, and a part used to update the cache with the FoVs of viewpoints expected to be watched in the next frame.
  • The edge computing method uses this bandwidth allocation technique to find the bandwidth allocation that minimizes the average latency, both in a scenario in which the cache of the virtual reality device is updated only with 2D FoVs and in a scenario in which it is updated only with 3D FoVs.
  • The edge computing system may set the bandwidth between the edge node and the electronic device according to this optimal bandwidth allocation, and may determine the dimension of the FoV to be transmitted from the edge node to the electronic device (e.g., 2D FoV or 3D FoV). Accordingly, it is possible to provide a cache update strategy based on bandwidth allocation that minimizes the average delay time of wireless virtual reality transmission.
  • the embodiments according to the present invention are applicable to wireless communication, wireless caching, and mobile edge computing fields as well as wireless virtual reality by dealing with communication, caching, and computing techniques in wireless virtual reality.
  • FIG. 1 is an exemplary diagram of an edge computing system environment including an edge node and an electronic device according to an embodiment.
  • the edge computing system 10 includes an electronic device 100 and an edge node 200 .
  • the edge computing system 10 may be modeled as a mobile edge computing network including an edge node 200 having a cache and computing capability and an electronic device 100 having a cache and computing capability.
  • the electronic device 100 is a user-portable terminal, and may receive VR content through wireless communication and reproduce the received VR content in real time.
  • The electronic device 100 includes, for example, virtual reality devices such as a head mounted display (HMD), a smart phone, a tablet, and a laptop, but is not limited thereto and may include various terminal devices supporting wireless communication.
  • The electronic device 100 may be a desktop computer operated by a user, a smart TV, a personal digital assistant (PDA), a media player, a micro server, a global positioning system (GPS) device, an e-book terminal, a digital broadcasting terminal, a navigation system, a kiosk, an MP3 player, a digital camera, a home appliance, or another mobile or non-mobile computing device, but is not limited thereto.
  • the electronic device 100 may be a wearable terminal such as a watch, glasses, a hair band, and a ring having a communication function and a data processing function.
  • the edge node 200 is a base station and may provide VR content to the electronic device 100 located in a service radius of the edge node 200 .
  • the edge node 200 may communicate with a cloud server and other base stations.
  • the edge computing system 10 may provide a network between the electronic device 100 and the edge node 200 .
  • the network may include, for example, wireless networks such as wireless LANs, CDMA, Bluetooth, and satellite communications, but the scope of the present invention is not limited thereto.
  • the network may transmit and receive information using short-distance communication and/or long-distance communication.
  • the short-distance communication may include Bluetooth, radio frequency identification (RFID), infrared data association (IrDA), ultra-wideband (UWB), ZigBee, and wireless fidelity (Wi-Fi) technologies.
  • The long-distance communication may include code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), and single carrier frequency division multiple access (SC-FDMA) technologies.
  • a network may include a connection of network elements such as hubs, bridges, routers, switches, and gateways.
  • a network may include one or more connected networks, eg, a multi-network environment, including a public network such as the Internet and a private network such as a secure enterprise private network. Access to the network may be provided via one or more wired or wireless access networks.
  • the network may support an Internet of Things (IoT) network and/or 5G communication that exchanges and processes information between distributed components such as objects.
  • Virtual reality video is composed of a continuous series of frames, and each frame may be composed of a set of non-overlapping viewpoints. As the user's viewpoint moves while watching the virtual reality video with the electronic device 100, the electronic device 100 renders the Field of View (FoV) corresponding to the user's viewpoint in each frame within a given time and provides it to the user.
  • the edge node 200 may store both the 2D FoV and the 3D FoV of the entire video in the storage (not shown) of the edge node 200 .
  • The electronic device 100 may store only some of the 2D FoVs and 3D FoVs because its cache size is relatively small.
  • the electronic device 100 may request the FoV corresponding to the user's viewpoint from the edge node 200 .
  • the electronic device 100 may determine whether to request the FoV corresponding to the user's viewpoint as 2D FoV or 3D FoV based on a bandwidth allocation rate to be described later.
  • the edge node 200 transmits the FoV requested by the electronic device 100 to the electronic device 100 in a dimension (eg, 2D FoV or 3D FoV) requested by the electronic device 100 .
  • the electronic device 100 may perform a post-processing operation of projecting the received 2D FoV in three dimensions.
  • When the 3D FoV corresponding to the user's viewpoint is stored in the electronic device 100, or the corresponding 3D FoV is downloaded from the edge node 200, no separate post-processing computation is needed.
  • 3D FoV takes up a lot of cache storage space because of the large data size, and takes longer than 2D FoV to transmit.
  • the embodiment will be described in more detail.
  • FIG. 2 is a block diagram of an electronic device according to an embodiment.
  • The electronic device 100 may include a communication unit 110 that communicates with the outside, a buffer 120 that stores current FoV information, a cache 130 that stores candidate FoV information, a processor 140, and a bus 150 serving as a logical/physical communication path connecting them.
  • the buffer 120 may be configured to store current Field of View (FoV) information corresponding to the user's current viewpoint in a first frame of content including a series of frames.
  • The buffer 120 may be a memory, or a portion of a memory, such as random access memory (RAM), flash memory, read only memory (ROM), a hard disk drive (HDD), or a solid state drive (SSD), and may be a storage space that the processor 140 can read from and write to.
  • The cache 130 may be configured to store at least one piece of candidate FoV information corresponding to an expected viewpoint of the user in the second frame of the content, which is predicted from the user's current viewpoint.
  • The cache 130 may likewise be a memory, or a portion of a memory, such as RAM, flash memory, ROM, an HDD, or an SSD, and may be a storage space that the processor 140 can read from and write to.
  • the buffer 120 and the cache 130 may be implemented as separate physical memories, or may be respectively accessed by the processor 140 as separate partitions or independent logical address spaces on the same physical memory.
  • the communication unit 110 may be configured to provide a connection to an external edge node 200 through a communication channel.
  • the communication unit 110 may provide a communication interface necessary to provide a transmission/reception signal between the edge node 200 and the electronic device 100 in the form of packet data.
  • the communication unit 110 may be a device including hardware and software necessary for transmitting and receiving signals such as control signals or data signals through wired/wireless connection with an external network device.
  • The processor 140 may be configured to perform a first communication for obtaining the current FoV information from the edge node 200 through the communication unit 110 with a first bandwidth determined based on a predetermined bandwidth allocation rate, and to perform a second communication for obtaining at least one piece of candidate FoV information from the edge node 200 through the communication unit 110 with a second bandwidth determined based on the bandwidth allocation rate.
  • the processor 140 may include any type of device capable of processing data.
  • the processor may refer to, for example, a data processing device embedded in hardware having a physically structured circuit to perform a function expressed as a code or an instruction included in a program.
  • For example, the processor may include a microprocessor, a central processing unit (CPU), a processor core, a multiprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a processing device such as a graphics processing unit (GPU), but the scope of the present invention is not limited thereto.
  • the processor 140 may provide content rendering to the user of the electronic device 100 based on a series of current FoV information read from the buffer 120 .
  • the processor 140 may be configured to re-determine the current viewpoint of the user according to the actual movement of the user's viewpoint, and search for FoV information corresponding to the re-determined current viewpoint in the cache 130 .
  • When the result of the above-mentioned search is a hit, the processor 140 performs the second communication through the communication unit 110; when the result of the search is a miss, it performs both the first communication and the second communication.
  • When the result of the above-described search is a cache hit, the processor 140 may be configured to read the cache-hit FoV information from the cache 130 into the buffer 120 as the current FoV information.
  • the processor 140 may control the communication unit 110 to perform the second communication using the entire bandwidth of the communication channel with the edge node 200 .
  • When the result of the search is a cache miss, the processor 140 may request the cache-missed FoV information from the edge node 200 through the communication unit 110, receive the cache-missed FoV information by the first communication in response to the request, and store the received cache-missed FoV information in the buffer 120 as the current FoV information.
  • To obtain the cache-missed current FoV, the processor 140 may transmit, to the edge node 200, a FoV request including information indicating the current viewpoint and dimension information, and, in response to the FoV request, may receive from the edge node the current FoV of a dimension corresponding to the dimension information.
  • The processor 140 may request at least one piece of candidate FoV information from the edge node 200 through the communication unit 110 based on the user's expected-viewpoint information, receive the at least one piece of candidate FoV information through the second communication in response to the request, and store the received candidate FoV information in the cache 130.
  • The processor 140 may compare the bandwidth allocation rate with a predetermined reference allocation rate and, according to the result of the comparison, determine the dimension of the cache-missed FoV information to be obtained in the first communication.
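  • As an illustration of this comparison, a minimal sketch is shown below; the function and parameter names, and the assumption that an allocation rate at or above the reference value favors requesting the 3D FoV, are illustrative and not taken from the patent text.

```python
def choose_fov_dimension(beta: float, beta_th: float) -> str:
    """Pick the dimension of the cache-missed FoV requested in the first
    communication by comparing the bandwidth allocation rate with a
    reference (threshold) allocation rate.  The rule direction
    (beta >= beta_th -> '3D') is an illustrative assumption."""
    return "3D" if beta >= beta_th else "2D"

# Example: with beta = 0.6 and beta_th = 0.5 the 3D FoV would be requested.
print(choose_fov_dimension(0.6, 0.5))
```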
  • the electronic device 100 may further include an input interface and an output interface.
  • the input interface may include a keypad, a touch screen, and a button for receiving an input signal from a user.
  • the input interface may include a sensor capable of detecting a user's movement, gesture, biometric information, and the like, and a camera or image sensor that captures an image.
  • the output interface may include a display that visually provides VR content to a user, a speaker that outputs an auditory signal, and the like.
  • FIG. 3 is a diagram for explaining a cache update strategy according to an embodiment.
  • the first frame FRAME_A is a frame of content including a series of frames, and includes a set of FoVs A1 to A9.
  • The nine FoVs are merely an example for description, and the embodiment is not limited thereto.
  • the current FoV corresponding to the current view of the user in the first frame FRAME_A is the FoV A5 located in the center.
  • the second frame FRAME_B is a frame following the first frame. For example, it is assumed that the candidate FoV corresponding to the user's expected viewpoint in the second frame FRAME_B is B4.
  • the FoV information may include a FoV identifier (ID) and FoV data.
  • the FoV data may include coordinate information of the FoV.
  • FIG. 4 is a diagram for explaining a cache update strategy according to an embodiment.
  • The cache update strategy stores, in advance, FoV information corresponding to the user's current viewpoint and candidate FoV information expected from the current viewpoint.
  • the electronic device 100 may store FoV information corresponding to the user's current viewpoint in the buffer 120 , and store at least one piece of candidate FoV information expected from the current viewpoint in the cache 130 .
  • The buffer 120 stores the FoV (A5) corresponding to the user's current viewpoint and its data DATA_A5.
  • The cache 130 may store, as the at least one candidate FoV in the second frame, the FoV (B4) expected from the user's current viewpoint, together with FoVs (B5, B6) selected with uniform probability from among the remaining FoVs of the second frame. The cache 130 may also store information (not shown) on each candidate FoV.
  • The electronic device 100 may obtain, from the edge node 200, the FoV information corresponding to the current viewpoint through a first communication (hereinafter also referred to as 'download') and the at least one piece of candidate FoV information through a second communication (hereinafter also referred to as 'update'), which are described later with reference to FIG. 6.
  • Referring to FIG. 3, the cache 130 preferentially stores the candidate FoV (B4) expected from the user's current viewpoint and, if there is spare space, may additionally store the remaining FoVs (B5, B6) of the second frame FRAME_B.
  • FIG. 5 is a diagram for explaining a cache update strategy according to an embodiment.
  • the user's viewpoint is moved from the first state shown in FIG. 4 , and a new user's current viewpoint is determined in the second state.
  • FIG. 5 exemplarily shows the buffer 120 and the cache 130 in the second state.
  • Among the candidate FoVs (B4, B5, B6) of the first state, assume that there is a FoV that matches the user's current viewpoint in the second state (a cache hit); for example, assume that B4 matches the re-determined current viewpoint of the user.
  • The buffer 120 stores the FoV (B4) corresponding to the user's current viewpoint and its information (DATA_B4).
  • The cache 130 stores candidate FoVs (C1, C2, C3) expected from the newly determined current viewpoint of the user, and information (not shown) on each candidate FoV.
  • the cache 130 may be reset prior to storing the candidate FoVs (C1, C2, C3) in the second state.
  • the cache 130 may flush the candidate FoVs (B4, B5, B6) in the first state prior to storing the candidate FoVs (C1, C2, C3) in the second state.
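  • A minimal sketch of this per-frame cache update order is given below, assuming a simple list-based cache and using the FoV identifiers of FIG. 3; the function name and the fixed nine-FoV frame are illustrative assumptions.

```python
import random

def plan_cache_update(predicted_fov, next_frame_fovs, cache_capacity):
    """Order the cache update for the next frame: the FoV of the predicted
    viewpoint first, then the remaining FoVs of the frame in uniformly
    random order, truncated to the cache capacity.  The cache is assumed
    to be flushed before this update is applied."""
    others = [f for f in next_frame_fovs if f != predicted_fov]
    random.shuffle(others)                 # uniform probability over the rest
    return ([predicted_fov] + others)[:cache_capacity]

# Example for FRAME_B of FIG. 3: predicted FoV B4 and a three-entry cache,
# e.g. ['B4', 'B7', 'B2'] (the two extra entries vary run to run).
print(plan_cache_update("B4", [f"B{i}" for i in range(1, 10)], 3))
```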
  • the edge computing system model according to the embodiment is a mobile edge computing network composed of one base station having cache and computing power and one virtual reality device.
  • the packet duration means the time from when the base station transmits one packet until the virtual reality device receives it.
  • The set of viewpoints in a frame is denoted by $\mathcal{N} = \{1, 2, \ldots, N\}$.
  • Virtual reality device users require one viewpoint $n \in \mathcal{N}$ per frame.
  • the 2D FoV consists of M packets
  • the 3D FoV consists of KM packets.
  • $K$ is a constant representing the ratio of the sizes of the 2D and 3D FoV data, with $K \geq 2$.
  • When the 3D FoV of the viewpoint requested by the user is stored in the virtual reality device, or the 3D FoV is downloaded from the edge node 200, no separate post-processing computation is needed.
  • the 3D FoV occupies a lot of cache storage space because the data size is large, and the transmission time takes longer than the 2D FoV.
  • the virtual reality device performs a post-processing process of projecting the 2D FoV into the 3D FoV to render the video.
  • The projection time is $T_{\mathrm{proj}} = MW / F$ [packet durations].
  • Here, $W$ is the number of CPU cycles required to project one packet of the two-dimensional FoV, and $F$ is the CPU frequency of the processor 140 of the electronic device 100 [CPU cycles/packet duration].
  • the electronic device 100 can start the post-processing process only when it has all packets of the 2D FoV.
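  • As a small numerical illustration of the projection-time expression above (the parameter values are arbitrary examples, not values from the patent):

```python
def projection_time(M: int, W: float, F: float) -> float:
    """Post-processing (2D -> 3D projection) time in packet durations:
    M packets per 2D FoV, W CPU cycles needed per packet, and F the CPU
    frequency of the device in CPU cycles per packet duration."""
    return M * W / F

# Example: 10 packets, 2e6 cycles per packet, 1e7 cycles per packet duration
print(projection_time(10, 2e6, 1e7))   # -> 2.0 packet durations
```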
  • the embodiment proposes a cache update strategy using a bandwidth allocation technique.
  • The electronic device 100 can predict the viewpoint that the user of the virtual reality content will watch in the next frame.
  • The electronic device 100 can predict the user's viewpoint in the next frame using viewpoint-prediction techniques based on the user's head movement data, as proposed in F. Qian, L. Ji, B. Han, and V. Gopalakrishnan, "Optimizing 360 video delivery over cellular networks," in Proc. 5th Workshop All Things Cellular Oper. Appl. Challenges, pp. 1-6, Oct. 2016, and A. Mahzari, A. T. Nasrabadi, A. Samiei, and R. Prakash, "FoV-aware edge caching for adaptive 360° video streaming," in Proc. 26th Int. Conf. Multimedia, pp. 173-181, Oct. 2018.
  • The electronic device 100 resets the cache 130 every frame. When updating the cache 130 for the next frame, it first updates the FoV of the predicted viewpoint and, in case the prediction is wrong, updates the FoVs of the remaining viewpoints as much as possible before the next frame.
  • FIG. 6 is a diagram for explaining a bandwidth allocation scheme according to an embodiment.
  • a bandwidth allocation technique for cache update will be described with reference to FIG. 6 .
  • The bandwidth is divided into two parts: one part is used to download the FoV of the currently requested viewpoint into the buffer 120, and the other part is used to update the cache 130 with FoVs of the next frame.
  • The ratio of the bandwidth allocated to downloading the currently requested viewpoint into the buffer 120 is denoted by the bandwidth allocation rate $\beta$ ($0 \leq \beta \leq 1$).
  • the bandwidth allocation rate is an optimal value set to minimize the average delay time of a communication channel between the edge node 200 and the electronic device 100 .
  • When the result of the search is a cache hit, the electronic device 100 may perform the second communication using the entire bandwidth of the communication channel between the edge node 200 and the electronic device 100, receiving from the edge node 200 at least one candidate FoV expected from the newly determined current viewpoint.
  • the electronic device 100 may update the cache 130 with at least one candidate FoV received as a result of the second communication.
  • the diagram 610 of FIG. 6 shows that the entire bandwidth is allocated to the second communication in case of a cache hit.
  • When the result of the search is a cache miss, the electronic device 100 may divide the bandwidth of the communication channel between the edge node 200 and the electronic device 100, according to the predetermined bandwidth allocation rate $\beta$, between a first communication for receiving the FoV of the current viewpoint from the edge node 200 and a second communication for receiving at least one candidate FoV from the edge node 200.
  • the electronic device 100 may store the FoV of the current time received through the first communication in the buffer 120 and store at least one candidate FoV received through the second communication in the cache 130 .
  • The diagram 620 of FIG. 6 shows the bandwidth $\beta$ allocated for the first communication, and the diagram 630 shows the bandwidth $1-\beta$ allocated for the second communication.
  • the first communication and the second communication may be performed in parallel at the same time.
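  • A minimal sketch of this bandwidth split is shown below; the function name and the tuple return convention are illustrative assumptions.

```python
def split_bandwidth(total_bw: float, beta: float, cache_hit: bool):
    """Return (first-communication bandwidth, second-communication bandwidth).
    On a cache hit the entire bandwidth is used for the cache update; on a
    cache miss a fraction beta downloads the current FoV and the remaining
    1 - beta updates the cache, as in diagrams 610/620/630 of FIG. 6."""
    if cache_hit:
        return 0.0, total_bw
    return beta * total_bw, (1.0 - beta) * total_bw

# Example: 100 MHz channel, beta = 0.6, cache miss -> (60.0, 40.0)
print(split_bandwidth(100.0, 0.6, cache_hit=False))
```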
  • the present invention deals with finding an optimal bandwidth allocation that can minimize the average delay time of wireless virtual reality in each of two scenarios of updating the cache 130 with 2D FoV and 3D FoV.
  • The wireless communication channel considered in the present invention is as follows. First, it is assumed that the transmission and reception of a packet occur within one packet duration.
  • The wireless communication channel is assumed to follow a Rayleigh block fading channel model in which the channel gain is constant during each packet duration and independent and identically distributed across packet durations.
  • the packet transmission failure probability is defined as an outage probability. If the electronic device 100 fails to decode the packet due to the unstable radio channel, it is assumed that the edge node 200 retransmits the packet until the electronic device 100 successfully receives the packet.
  • The packet transmission failure probability when the cache 130 is updated using the entire bandwidth (i.e., in the case of a cache hit) is denoted $P_{e,h}$, and the corresponding received SNR is $\gamma_h$. In the case of a cache miss, $P_{e,d}$ and $P_{e,u}$ denote the packet transmission failure probabilities of the first communication (download) and the second communication (cache update), respectively, and $\gamma_d$ and $\gamma_u$ denote the received SNRs in each case. Given the average received SNR, it is assumed that $P_{e,h}$, $P_{e,d}$, and $P_{e,u}$ can be calculated as outage probabilities of the Rayleigh block fading channel described above.
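  • The patent's own outage-probability expressions are not reproduced in this text. As a hedged stand-in, a textbook Rayleigh-fading outage probability (the probability that the instantaneous channel cannot support a target spectral efficiency) could be sketched as follows; the target-rate parameter and its value are illustrative assumptions.

```python
import math

def rayleigh_outage(avg_snr_linear: float, target_rate: float) -> float:
    """Generic Rayleigh-fading outage probability: with exponentially
    distributed instantaneous SNR of mean avg_snr_linear, the probability
    that log2(1 + SNR) falls below target_rate (bits/s/Hz).  Used here only
    as a stand-in for P_{e,h}, P_{e,d}, P_{e,u}."""
    return 1.0 - math.exp(-(2.0 ** target_rate - 1.0) / avg_snr_linear)

# Example: average received SNR of 5 dB and a 1 bit/s/Hz target
print(rayleigh_outage(10 ** (5 / 10), 1.0))
```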
  • FIG. 7A is a diagram for explaining a delay time in case of a cache hit according to an embodiment.
  • When the cache-hit FoV is stored as a 2D FoV in the cache 130, the electronic device 100 needs to project the cache-hit 2D FoV, and this projection time is required.
  • When the cache-hit FoV is stored as a 3D FoV in the cache 130, it can be rendered immediately without requiring additional time.
  • FIG. 7B is a diagram for explaining a delay time in case of a cache miss according to an embodiment.
  • The analysis is divided into a scenario in which the at least one candidate FoV is updated as a 2D FoV through the second communication and a scenario in which it is updated as a 3D FoV. That is, the present invention considers two scenarios, one in which the cache 130 is updated only with two-dimensional FoVs for the at least one candidate FoV of the next frame and one in which it is updated only with three-dimensional FoVs, and models each of them as a Markov chain.
  • the state of the Markov chain is defined as the number of 2D FoVs stored in the cache 130 .
  • The state space is $S_{\mathrm{2D}} = \{0, 1, \ldots, L\}$.
  • When the state of the cache 130 is $s \in S_{\mathrm{2D}}$, $s$ two-dimensional FoVs are stored in the cache 130 of the electronic device 100.
  • The transition probability of the Markov chain is obtained using the number of packets that can be successfully received from the edge node 200 during the time $T_f$ before the next frame.
  • The number of packets successfully received during $T_f$ is denoted by $X$.
  • $\Pr[X = x \mid H]$ and $\Pr[X = x \mid H^{c}]$ can be calculated as follows.
  • the probability that event H occurs and the probability that event H does not occur in state i can be calculated as follows.
  • The state transition probability $\lambda_{ij,\mathrm{2D}}$ is a function of the packet transmission failure probability $P_{e,h}$ when the result of searching the cache 130 for the current FoV is a cache hit, and of the packet transmission failure probability $P_{e,d}$ of the first communication and the packet transmission failure probability $P_{e,u}$ of the second communication in the case of a cache miss.
  • The average delay time in the steady state is analyzed, and the steady-state distribution in the two-dimensional FoV update scenario is defined as $\{\pi_{i,\mathrm{2D}}\}$.
  • $\{\pi_{i,\mathrm{2D}}\}$ can be obtained by solving the following linear equations.
  • The average delay time may be defined based on the steady-state distribution $\{\pi_{i,\mathrm{2D}}\}$ of the electronic device 100.
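  • A minimal numerical sketch of obtaining such a steady-state distribution is given below; the transition matrix is a toy placeholder, since the actual transition probabilities $\lambda_{ij,\mathrm{2D}}$ of the patent are not reproduced in this text.

```python
import numpy as np

def steady_state(P: np.ndarray) -> np.ndarray:
    """Solve pi = pi P with sum(pi) = 1 for a row-stochastic matrix P by
    stacking the balance equations with the normalization constraint."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state chain (illustrative numbers only)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
print(steady_state(P))   # approximately [0.24, 0.43, 0.33]
```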
  • When the FoV corresponding to the viewpoint requested by the user is stored as a 2D FoV in the cache 130 of the electronic device 100, the electronic device 100 directly computes the projection from the 2D FoV to the 3D FoV. When the FoV corresponding to the requested viewpoint is not stored in the cache 130 of the electronic device 100, the 2D or 3D FoV of the requested viewpoint is downloaded from the edge node 200. In this case, in order to reduce the average delay time, it is necessary to determine whether to download the 2D FoV or the 3D FoV. According to an embodiment, this determination may be made based on a threshold value for the bandwidth allocation rate $\beta$.
  • First, the average delay time when the 2D FoV of the corresponding viewpoint is downloaded from the edge node 200 is analyzed.
  • When the event H occurs, i.e., when the 2D FoV corresponding to the viewpoint requested by the user is stored in the cache 130, the average delay time is the projection time $T_{\mathrm{proj}}$.
  • When the event H does not occur, the electronic device 100 needs time to download the 2D FoV of the viewpoint currently requested by the user from the edge node 200, plus time to project the 2D FoV into the 3D FoV.
  • In this case, the number of transmissions for successful reception of each packet follows an independent and identically distributed geometric distribution with success probability $1 - P_{e,d}$.
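  • As a check on this geometric retransmission model, the expected number of transmissions of a single packet can be written out explicitly (the patent's own formula images are not reproduced here; this derivation is supplied only for clarity):

$$\mathbb{E}[\text{transmissions per packet}] \;=\; \sum_{k=1}^{\infty} k\,(1-P_{e,d})\,P_{e,d}^{\,k-1} \;=\; \frac{1}{1-P_{e,d}},$$

so downloading the $M$ packets of a 2D FoV takes on average $M/(1-P_{e,d})$ packet durations, to which the projection time $T_{\mathrm{proj}} = MW/F$ is added when the 2D FoV must still be projected.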
  • Hence, the average delay time when the requested viewpoint is not stored in the cache 130 of the electronic device 100 is $M/(1 - P_{e,d}) + T_{\mathrm{proj}}$ [packet durations].
  • the average delay time in state i can be obtained as follows.
  • The average delay time may be determined from the average delay times in all states $i$ of the steady-state distribution $\{\pi_{i,\mathrm{2D}}\}$; for example, it may be determined as the average, over the steady-state distribution $\{\pi_{i,\mathrm{2D}}\}$, of the average delay times in all states $i$.
  • Next, the average delay time in the case of downloading the 3D FoV of the corresponding viewpoint from the edge node 200 through the first communication is calculated.
  • The difference from the previous case of downloading the 2D FoV is that, when the 3D FoV is downloaded from the edge node 200, no separate projection process is required. Therefore, when the corresponding viewpoint is not stored in the cache 130, the average delay time for successfully receiving the $KM$ packets from the base station is $KM/(1 - P_{e,d})$ [packet durations]. The average delay time in state $i$ can then be calculated as follows.
  • That is, when the viewpoint currently watched by the user is not stored in the cache 130 and the 3D FoV of that viewpoint is downloaded from the edge node 200, the average delay time is $KM/(1 - P_{e,d})$ [packet durations].
  • The present invention shows that this determination should be made with reference to the threshold $\beta_{\mathrm{th}}$.
  • The delay time in the two-dimensional FoV update scenario is denoted $T_{\mathrm{2D}}$. If the option with the smaller of the two average delay times obtained above (downloading the 2D FoV versus the 3D FoV) is selected so as to reduce the average delay time, the average delay time can be expressed as follows.
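  • A minimal numerical sketch of this selection, using the per-state decomposition reconstructed above, is shown below; the hit/miss delay expressions and all numbers are illustrative, since the patent's exact formulas are given as figures and are not reproduced verbatim in this text.

```python
def avg_delay_2d(pi, p_hit, M, K, T_proj, P_ed):
    """Average delay in the 2D update scenario under the assumptions above:
    a cache hit costs the projection time; a cache miss costs the smaller of
    downloading the 2D FoV (M packets plus projection) or the 3D FoV (K*M
    packets), with geometric retransmissions of success probability 1-P_ed.
    pi[i]: steady-state probability of state i; p_hit[i]: hit probability."""
    t_hit = T_proj
    t_miss = min(M / (1 - P_ed) + T_proj, K * M / (1 - P_ed))
    return sum(p * (h * t_hit + (1 - h) * t_miss) for p, h in zip(pi, p_hit))

# Toy numbers only
print(avg_delay_2d(pi=[0.2, 0.3, 0.5], p_hit=[0.0, 0.4, 0.8],
                   M=10, K=3, T_proj=2.0, P_ed=0.1))
```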
  • In the 3D FoV update scenario, the cache 130 of the electronic device 100 is updated only with 3D FoVs. As in the 2D FoV update scenario, it is expressed using a Markov chain.
  • The transition probability of the Markov chain can likewise be expressed using the number of packets that can be successfully received from the edge node 200 before the next frame. If the transition probability from state $i$ to state $j$ in this scenario is defined as $\lambda_{ij,\mathrm{3D}}$, it can be calculated as follows.
  • When the viewpoint watched by the user in the current frame is stored in the cache 130 (as a 3D FoV), rendering is possible without a separate post-processing process.
  • Otherwise, the 2D or 3D FoV must be downloaded from the edge node 200. Therefore, as in the 2D FoV update scenario, when the viewpoint is not stored in the cache, the average delay times for downloading the 2D FoV and the 3D FoV from the edge node 200 are calculated respectively, and it is then analyzed which FoV should be downloaded to reduce the average delay time.
  • the delay time of downloading the 2D FoV from the edge node 200 and projecting it into the 3D FoV is calculated.
  • When the corresponding viewpoint is stored in the cache 130 as a 3D FoV, a separate post-processing process is not required, so no additional delay time is incurred.
  • When the corresponding viewpoint is not stored in the cache 130 and the 2D FoV is downloaded by the first communication, both downloading the $M$ packets and the projection computation take time, and the average delay time is $M/(1 - P_{e,d}) + T_{\mathrm{proj}}$ [packet durations]. Therefore, the average delay time in state $i$ can be calculated as follows.
  • The overall average delay time is obtained by taking the average over all states of the steady-state distribution.
  • Next, the time taken to download the 3D FoV from the edge node 200 through the first communication, when the viewpoint watched by the user in the current frame is not stored in the cache 130, is analyzed in the same way as in the previous case.
  • When the corresponding viewpoint is not stored in the cache 130 and the 3D FoV is downloaded through the first communication, transmitting the $KM$ packets takes time, and the average delay time is $KM/(1 - P_{e,d})$ [packet durations]. Therefore, the average delay time in state $i$ can be calculated as follows.
  • The average delay time to be obtained can then be expressed by averaging over all states.
  • the average delay time of the 3D FoV update scenario can be expressed as follows.
  • FIGS. 8 and 9 are flowcharts for explaining an edge computing method according to an embodiment.
  • The edge computing method may include: obtaining a predetermined bandwidth allocation rate for the communication channel with the edge node 200 (S810); performing a first communication for obtaining, from the edge node 200, FoV information corresponding to the user's current viewpoint with a first bandwidth determined based on the bandwidth allocation rate (S820); and performing a second communication for obtaining, from the edge node 200, at least one piece of candidate FoV information corresponding to the user's expected viewpoint predicted from the current viewpoint with a second bandwidth determined based on the bandwidth allocation rate (S830).
  • The electronic device 100 may obtain a predetermined bandwidth allocation rate $\beta$ for the communication channel with the edge node 200.
  • The predetermined bandwidth allocation rate $\beta$ may be determined for each of the 2D FoV update scenario and the 3D FoV update scenario based on the aforementioned wireless communication channel model between the edge node 200 and the electronic device 100.
  • The predetermined bandwidth allocation rate $\beta$ may be a value pre-calculated for the wireless communication channel environment between the edge node 200 and the electronic device 100.
  • The electronic device 100 may receive the value of the bandwidth allocation rate $\beta$ from the edge node 200.
  • The electronic device 100 may additionally acquire the threshold value $\beta_{\mathrm{th}}$ for the bandwidth allocation rate together with it.
  • the edge computing method according to the embodiment may further include step S920 with reference to FIG. 9 .
  • In step S920, the processor 140 of the electronic device 100 may re-determine the current viewpoint of the user according to the actual movement of the user's viewpoint, and search the cache 130, which stores the at least one candidate FoV, for a FoV corresponding to the re-determined current viewpoint.
  • The electronic device 100 determines whether there is a FoV matching the FoV corresponding to the re-determined current viewpoint among the at least one candidate FoV stored in the cache 130, and generates the result of the search as a cache hit or a cache miss depending on whether such a match exists. Steps S820 and S830 may then be performed according to the result of the search, and the first bandwidth and the second bandwidth may be determined accordingly.
  • In step S820, the electronic device 100 may perform the first communication for acquiring FoV information corresponding to the user's current viewpoint from the edge node 200 with the first bandwidth determined based on the bandwidth allocation rate obtained in step S810.
  • In step S820, when the result of the search in step S920 is a cache miss, the processor 140 may request the cache-missed FoV information from the edge node 200 through the communication unit 110, receive the cache-missed FoV information by the first communication in response to the request, and store the received cache-missed FoV information in the buffer 120 as the current FoV information.
  • In step S820, to obtain the cache-missed current FoV, the processor 140 may transmit, to the edge node 200, a FoV request including information indicating the user's current viewpoint and dimension information, and, in response to the FoV request, may receive from the edge node the current FoV of a dimension corresponding to the dimension information.
  • Referring to FIG. 9, step S820 may include steps S940, S950, and S960, which are performed when the result of the search in step S920 is a cache miss. That is, when the result of the search is a cache miss, step S820 may include step S950 or step S960 of performing the first communication.
  • In operation S940, the electronic device 100 compares the bandwidth allocation rate $\beta$ obtained in operation S910 with the threshold value $\beta_{\mathrm{th}}$ for the bandwidth allocation rate.
  • The electronic device 100 may determine the dimension information of the cache-missed FoV to be obtained through the first communication in step S950 or S960 according to the result of the comparison in step S940.
  • In step S950, the electronic device 100 may download the FoV corresponding to the re-determined current viewpoint as a 2D FoV (first communication) and store it in the buffer 120.
  • To obtain the cache-missed FoV corresponding to the current viewpoint, the electronic device 100 transmits, to the edge node 200, a FoV request including information indicating the current viewpoint and the dimension information of the FoV (e.g., 2D), and, in response to the FoV request, may receive from the edge node 200 a FoV of the dimension corresponding to the requested dimension information.
  • In step S960, the electronic device 100 may download the FoV corresponding to the re-determined current viewpoint as a 3D FoV (first communication) and store it in the buffer 120.
  • To obtain the cache-missed FoV corresponding to the current viewpoint, the electronic device 100 transmits, to the edge node 200, a FoV request including information indicating the current viewpoint and the dimension information of the FoV (e.g., 3D), and, in response to the FoV request, may receive from the edge node 200 a FoV of the dimension corresponding to the requested dimension information.
  • the electronic device 100 may perform first communication with the first bandwidth determined according to the bandwidth allocation rate.
  • the first bandwidth is a value obtained by multiplying the total bandwidth between the edge node 200 and the electronic device 100 by a bandwidth allocation rate.
  • In step S830, the electronic device 100 may perform the second communication for obtaining, from the edge node 200, at least one piece of candidate FoV information corresponding to the user's expected viewpoint predicted from the current viewpoint, with the second bandwidth determined based on the bandwidth allocation rate obtained in step S810.
  • Referring to FIG. 9, step S830 may include step S930, performed when the result of the search in step S920 is a cache hit, and steps S940 and S970, performed when the result of the search is a cache miss.
  • In step S830, the electronic device 100 receives at least one candidate FoV from the edge node 200 (second communication), and stores (updates) the received at least one piece of FoV information in the cache 130.
  • In the case of a cache hit, the electronic device 100 may perform the second communication using the entire bandwidth between the edge node 200 and the electronic device 100.
  • In the case of a cache miss, the electronic device 100 may perform the second communication with the second bandwidth determined according to the bandwidth allocation rate, i.e., $1-\beta$ of the total bandwidth.
  • The second bandwidth is the bandwidth corresponding to the ratio $(1-\beta)$ of the entire bandwidth.
  • The electronic device 100 may perform the second communication so that the at least one candidate FoV is either a 2D FoV or a 3D FoV according to a predetermined scenario.
  • the electronic device 100 may receive a predetermined scenario from the edge node 200 in step S910 .
  • the electronic device 100 may determine whether to perform the 2D update or the 3D update according to the size of the cache 130 of the electronic device 100 or the computing power of the processor 140 , and transmit it to the edge node 200 .
  • the electronic device 100 may include the update scenario information determined by the electronic device 100 in a message requesting each FoV to the edge node 200 through the communication unit 110 .
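  • Putting the steps of FIGS. 8 and 9 together, a self-contained sketch of one frame of the method could look like the following. The stub edge-node interface, the trivial viewpoint predictor, and the direction of the threshold rule are all assumptions made only so the sketch runs; they are not the patent's API.

```python
import random

class EdgeNodeStub:
    """Stand-in for the edge node 200 (assumed interface, not the patent's)."""
    def download(self, viewpoint, dim):
        return f"{dim}-FoV-data({viewpoint})"
    def candidates(self, viewpoint):
        # predicted viewpoint first, then the rest in uniformly random order
        rest = [v for v in range(1, 10) if v != viewpoint]
        random.shuffle(rest)
        return [viewpoint] + rest          # trivial predictor: view is kept

def run_frame(buffer, cache, capacity, viewpoint, edge, beta, beta_th, use_3d_update):
    """One frame: search the cache (S920), download the current FoV on a miss
    (S940/S950/S960, first communication), then refresh the cache for the
    next frame (S930/S970, second communication).  Returns the bandwidth
    fractions used for the two communications."""
    if viewpoint in cache:                              # cache hit
        buffer[viewpoint] = cache[viewpoint]
        dl_bw, up_bw = 0.0, 1.0                         # whole bandwidth to the update
    else:                                               # cache miss
        dim = "3D" if beta >= beta_th else "2D"         # threshold rule (assumed direction)
        buffer[viewpoint] = edge.download(viewpoint, dim)
        dl_bw, up_bw = beta, 1.0 - beta
    cache.clear()                                       # cache is reset every frame
    dim_next = "3D" if use_3d_update else "2D"          # predetermined update scenario
    for v in edge.candidates(viewpoint)[:capacity]:
        cache[v] = edge.download(v, dim_next)
    return dl_bw, up_bw

# Example run with toy values
buf, cch = {}, {}
print(run_frame(buf, cch, capacity=3, viewpoint=5, edge=EdgeNodeStub(),
                beta=0.6, beta_th=0.5, use_3d_update=False))   # -> (0.6, 0.4)
```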
  • FIG. 10 is a table showing a simulation environment of an edge computing method according to an embodiment.
  • The performance of the present invention was analyzed through computer simulation. The detailed simulation environment can be found in the table of FIG. 10. For each scenario, the average received SNR was set to 3 dB, 5 dB, and 7 dB.
  • FIG. 11 is a graph showing simulation results of the 2D update scenario.
  • FIG. 12 is a graph showing simulation results of the 3D update scenario.
  • the average delay time according to bandwidth allocation in each scenario can be confirmed.
  • In both scenarios, as the average received SNR increases from 3 dB to 5 dB and 7 dB, the packet transmission failure probability decreases, so the average delay time is reduced; it can also be confirmed that the average delay time can be brought below the short delay-time requirement of virtual reality. In this sense, the bandwidth allocation rate is an optimal value set to minimize the average delay time of the communication channel between the edge node 200 and the electronic device 100.
  • FIG. 13 is a pseudo code for finding a value of a bandwidth allocation rate in a 2D update scenario
  • FIG. 14 is a pseudo code for finding a value of a bandwidth allocation ratio in a 3D update scenario.
  • Since some quantities in the problem of finding the optimal bandwidth allocation that minimizes the average delay time in the 2D FoV update scenario and the 3D FoV update scenario are not easy to obtain analytically, the present invention finds the optimal bandwidth allocation using a numerical search.
  • a specific numerical search algorithm in each scenario is presented in FIGS. 13 and 14, respectively.
  • In the pseudo code, one variable holds the smallest (optimal) average delay time found so far, and another variable holds the corresponding bandwidth allocation rate.
  • The reason the initial value of the bandwidth allocation rate is set to 0.1 rather than 0 is that, if it were set to 0, there would be no bandwidth available for downloading the FoV when the viewpoint requested by the user is not stored in the cache 130 of the electronic device 100.
  • Depending on whether the given $\beta$ is above or below the threshold $\beta_{\mathrm{th}}$, one of the two delay expressions becomes the average delay time $E[T_{\mathrm{2D}}]$ at the given $\beta$, and conversely the other expression is used in the opposite case. This corresponds to lines 5-9 of FIG. 13.
  • The process of updating the optimal value of $\beta$ having the smallest average delay time, while increasing the bandwidth allocation rate $\beta$ in steps of 0.1, corresponds to lines 10-14 of FIG. 13. If a more precise optimal bandwidth allocation is desired, the search can be refined by using a smaller increment in line 14 instead of the 0.1 step.
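  • A runnable sketch mirroring the structure of the pseudo code of FIG. 13 is given below; the delay function is a convex toy placeholder standing in for the Markov-chain evaluation of $E[T_{\mathrm{2D}}]$, and the grid upper bound just below 1.0 is an assumption.

```python
import numpy as np

def average_delay(beta):
    """Toy placeholder for E[T_2D] as a function of the bandwidth allocation
    rate; the patent evaluates this from the Markov-chain analysis."""
    return (beta - 0.45) ** 2 + 1.0

def search_allocation(step=0.1):
    """Grid search over the bandwidth allocation rate: start at 0.1 (not 0,
    so a cache-miss download always has some bandwidth), keep the rate with
    the smallest average delay, and refine with a smaller step if desired."""
    best_beta, best_delay = None, float("inf")
    for beta in np.arange(0.1, 1.0, step):
        d = average_delay(beta)
        if d < best_delay:
            best_beta, best_delay = beta, d
    return best_beta, best_delay

print(search_allocation())        # coarse search with step 0.1
print(search_allocation(0.01))    # finer search with a smaller step
```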
  • the above-described embodiment according to the present invention may be implemented in the form of a computer program that can be executed through various components on a computer, and such a computer program may be recorded in a computer-readable medium.
  • The medium includes magnetic media such as hard disks, floppy disks, and magnetic tape, optical recording media such as CD-ROM and DVD, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • the computer program may be specially designed and configured for the present invention, or may be known and used by those skilled in the computer software field.
  • Examples of the computer program may include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the present invention was carried out as part of the following research project.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention relates to cache update and bandwidth allocation for wireless virtual reality. Disclosed are an edge computing method capable of providing content with ultra-low latency in wireless virtual reality, and an electronic device and an edge computing system for providing content by such a method.
PCT/KR2020/011014 2020-08-19 2020-08-19 Procédé d'informatique à la frontière, dispositif électronique et système pour fournir une mise à jour de mémoire cache et une attribution de bande passante pour une réalité virtuelle sans fil WO2022039292A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/011014 WO2022039292A1 (fr) 2020-08-19 2020-08-19 Procédé d'informatique à la frontière, dispositif électronique et système pour fournir une mise à jour de mémoire cache et une attribution de bande passante pour une réalité virtuelle sans fil

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/011014 WO2022039292A1 (fr) 2020-08-19 2020-08-19 Procédé d'informatique à la frontière, dispositif électronique et système pour fournir une mise à jour de mémoire cache et une attribution de bande passante pour une réalité virtuelle sans fil

Publications (1)

Publication Number Publication Date
WO2022039292A1 true WO2022039292A1 (fr) 2022-02-24

Family

ID=80322955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/011014 WO2022039292A1 (fr) 2020-08-19 2020-08-19 Procédé d'informatique à la frontière, dispositif électronique et système pour fournir une mise à jour de mémoire cache et une attribution de bande passante pour une réalité virtuelle sans fil

Country Status (1)

Country Link
WO (1) WO2022039292A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278290A (zh) * 2022-06-30 2022-11-01 华中科技大学 一种基于边缘节点的虚拟现实视频缓存方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130074953A (ko) * 2011-12-27 2013-07-05 한국과학기술정보연구원 동적 가상 머신 배치 장치 및 방법
KR20190068148A (ko) * 2017-12-08 2019-06-18 주식회사 이누씨 Vr 영상 스트리밍 방법 및 장치
KR102100161B1 (ko) * 2014-02-04 2020-04-14 삼성전자주식회사 Gpu 데이터 캐싱 방법 및 그에 따른 데이터 프로세싱 시스템
US10728744B2 (en) * 2018-04-27 2020-07-28 Hewlett Packard Enterprise Development Lp Transmission outside of a home network of a state of a MEC application

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130074953A (ko) * 2011-12-27 2013-07-05 한국과학기술정보연구원 동적 가상 머신 배치 장치 및 방법
KR102100161B1 (ko) * 2014-02-04 2020-04-14 삼성전자주식회사 Gpu 데이터 캐싱 방법 및 그에 따른 데이터 프로세싱 시스템
KR20190068148A (ko) * 2017-12-08 2019-06-18 주식회사 이누씨 Vr 영상 스트리밍 방법 및 장치
US10728744B2 (en) * 2018-04-27 2020-07-28 Hewlett Packard Enterprise Development Lp Transmission outside of a home network of a state of a MEC application

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAPING SUN; ZHIYONG CHEN; MEIXIA TAO; HUI LIU: "Communications, Caching and Computing for Mobile Virtual Reality: Modeling and Tradeoff", arXiv.org, Cornell University Library, Ithaca, NY, 23 June 2018 (2018-06-23), XP080893770 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278290A (zh) * 2022-06-30 2022-11-01 华中科技大学 一种基于边缘节点的虚拟现实视频缓存方法及装置
CN115278290B (zh) * 2022-06-30 2024-04-19 华中科技大学 一种基于边缘节点的虚拟现实视频缓存方法及装置

Similar Documents

Publication Publication Date Title
WO2021085825A1 (fr) Dispositif électronique et procédé de réalisation d'une télémétrie par l'intermédiaire de uwb
WO2020071809A1 (fr) Procédé et appareil de gestion d'assertion améliorée dans un traitement multimédia en nuage
WO2014081264A1 (fr) Procédé de transmission de paquets à partir d'un nœud, et propriétaire de contenu, dans une mise en réseau axée sur le contenu
WO2016039576A2 (fr) Dispositif et procédé d'accès à une pluralité de réseaux dans un système de communications sans fil
WO2014038902A1 (fr) Procédé et dispositif permettant d'exécuter une application
WO2014157886A1 (fr) Procédé et dispositif permettant d'exécuter une application
WO2013151374A1 (fr) Procédé et système de transfert de données entre une pluralité de dispositifs
WO2014038860A1 (fr) Procédé d'exécution d'une application et terminal utilisant le procédé
WO2021107739A1 (fr) Procédé et appareil de délestage de données dans un système de communication sans fil
WO2021150060A1 (fr) Procédé et appareil de service informatique périphérique
WO2014175694A1 (fr) Dispositif électronique pour accès radio multiple et procédé associé
WO2015026058A1 (fr) Procédé, terminal et système de reproduction de contenu
WO2022015020A1 (fr) Procédé et dispositif de réalisation de rendu utilisant une prédiction de pose compensant la latence par rapport à des données multimédias tridimensionnelles dans un système de communication prenant en charge une réalité mixte/réalité augmentée
EP3266201A1 (fr) Procédé et dispositif de synthétisation de contenu d'arrière-plan tridimensionnel
WO2016072721A1 (fr) Procédé de transmission et de réception de données de dispositif électronique et dispositif électronique utilisant ledit procédé
WO2019156506A1 (fr) Système et procédé de fourniture de contenus conversationnels
WO2022039292A1 (fr) Procédé d'informatique à la frontière, dispositif électronique et système pour fournir une mise à jour de mémoire cache et une attribution de bande passante pour une réalité virtuelle sans fil
WO2016111502A1 (fr) Système et procédé d'envoi d'informations concernant une tâche à un dispositif externe
WO2014142532A1 (fr) Système de fourniture d'informations comportant un mécanisme d'annonce et son procédé de fonctionnement
WO2021251694A1 (fr) Procédé et appareil pour échanger des informations de service dans un système à ultra-large bande
WO2022131465A1 (fr) Dispositif électronique et procédé permettant d'afficher un contenu de réalité augmentée
WO2015005718A1 (fr) Procédé de commande d'un mode de fonctionnement et dispositif électronique associé
WO2024025199A1 (fr) Dispositif informatique et procédé de fonctionnement associé
WO2015119361A1 (fr) Système de service de diffusion en continu en nuage, procédé de fourniture de service de diffusion en continu en nuage, et dispositif associé
WO2013125920A1 (fr) Procédé, appareil et système pour effectuer un téléchargement non sollicité basé sur un emplacement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20950373

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20950373

Country of ref document: EP

Kind code of ref document: A1