WO2024120309A1 - Rendering method, device and system - Google Patents

Rendering method, device and system

Info

Publication number
WO2024120309A1
Authority: WO (WIPO PCT)
Prior art keywords: rendering, computing power, information, units, unit
Application number: PCT/CN2023/135811
Other languages: English (en), French (fr)
Inventors: 唐小伟, 范强, 娄崇, 曾清海
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024120309A1

Description

  • the present application relates to image rendering, and more particularly to a rendering method, device and system.
  • Extended reality refers to various environments that combine reality and virtuality and the interaction between people and machines generated by computing technology and wearable devices. Specifically, it includes the following forms: augmented reality (AR), mixed reality (MR), and virtual reality (VR).
  • the process of a VR service from initiation to termination can be described as follows: the VR device (for example, a head-mounted display device) first captures the user's posture or location, or receives the user's instructions; the user information, including the user's posture information, location information, or instructions, is then sent to the server as a user request; the server renders the video content associated with the user request to generate video frames, and transmits the video frames back to the VR device for display.
  • rendering refers to the process of generating images from a model, and the model is a representation of a three-dimensional object or virtual environment defined by a programming language or data structure.
  • VR services require both low latency and high data rates.
  • VR video services require a head motion response latency (motion-to-photon, MTP) of less than 25 milliseconds (ms), that is, the time from a change of user posture or position to the appearance of the corresponding video image on the head-mounted display device, or from the user issuing a command to the appearance of the corresponding video image, must be less than 25 ms. If the MTP latency exceeds 25 ms, the user will feel dizzy.
  • the bandwidth requirement is roughly 30 megabits per second (Mbps).
  • VR services are gradually developing from the initial immersion stage to the deep immersion stage, characterized by the development of resolution from 2K to 4K and 8K, and the development of frame rate from 60FPS to 90FPS and 120FPS.
  • the bandwidth requirement is also developing from Mbps to gigabits per second (Gbps).
  • VR services are gradually evolving from local rendering to cloud rendering.
  • the rendering latency of VR services is also increasing, and how to reduce it is a problem that urgently needs to be solved.
  • the present application provides a rendering method, device and system, which can reduce the interaction between computing nodes responsible for rendering and reduce the rendering delay.
  • the present application provides a rendering method, the method comprising: obtaining information of a rendering task, the rendering task being divided into multiple rendering units, the information of the rendering task including a dependency relationship between at least two of the multiple rendering units; and determining, based on the dependency relationship, to allocate one or more of the multiple rendering units to a first computing power node for rendering.
  • based on the dependency relationship, it is determined that one or more rendering units are allocated to the first computing power node, that is, the first computing power node is determined to be responsible for rendering those rendering units. Since the dependency of the rendering units is considered when allocating them, the above method, compared with a solution that ignores dependencies, can reduce the interaction between the computing power nodes responsible for rendering, thereby reducing the interaction overhead between the computing power nodes and, in turn, the rendering delay.
  • the dependency relationship indicates that a first rendering unit among the multiple rendering units depends on a second rendering unit among the multiple rendering units, it is determined to allocate both the first rendering unit and the second rendering unit to the first computing node.
  • multiple rendering units with a dependency relationship are allocated to the same computing node, avoiding interaction between computing nodes caused by allocation to different computing nodes, thereby reducing interaction between computing nodes.
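  • As a hedged illustration (not part of the patent text), grouping dependency-connected rendering units so that each group lands on a single computing power node can be sketched with a union-find structure; all function and variable names below are hypothetical:

```python
# Sketch: group rendering units that are connected by dependencies, so that
# each group can be allocated to a single computing power node (CPN).
# Illustrative only; identifiers are not from the patent text.

def group_dependent_units(units, dependencies):
    """units: iterable of unit ids; dependencies: (a, b) pairs, 'a depends on b'."""
    parent = {u: u for u in units}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in dependencies:
        union(a, b)

    groups = {}
    for u in units:
        groups.setdefault(find(u), []).append(u)
    return list(groups.values())

# Example from the description: object1 depends on object2, so both end up in
# the same group and can be allocated to the same computing power node.
print(group_dependent_units(
    ["object1", "object2", "object3", "object4"],
    [("object1", "object2")]))
# -> [['object1', 'object2'], ['object3'], ['object4']]
```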
  • the information of the rendering task also includes the computing power required for rendering the third rendering unit
  • the method further includes: receiving computing power resource sizes of one or more computing power nodes from a computing power management function module, the one or more computing power nodes including the first computing power node; and determining, according to the computing power required for rendering the third rendering unit and the computing power resource sizes of the one or more computing power nodes, to allocate the third rendering unit to the first computing power node.
  • the computing power required by the rendering unit and the computing power resource size of the computing node are considered when allocating computing nodes, and a computing node that can meet the computing power required by the rendering unit is selected, thereby meeting the computing power requirements of the rendering task.
  • the information of the rendering task also includes a task type indicating rendering
  • the method further includes: receiving a computing power type of one or more computing power nodes from a computing power management function module, wherein the one or more computing power nodes include the first computing power node, and the computing power type of the first computing power node includes a graphics processing unit; and according to the task type and the computing power type of the first computing power node, determining to allocate the third rendering unit to the first computing power node.
  • that is, when the task type is rendering, the computing power type that the computing power node can provide is considered, and a computing power node that can support rendering (that is, whose computing power type includes a graphics processing unit) is selected to carry out the rendering task.
  • when the execution subject of the method is an access network device, the computing power management function module is a function module in the access network device, or a network element in a core network connected to the access network device; when the execution subject of the method is a head-mounted display device (HMD), the computing power management function module is a function module in the access network device connected to the HMD, or a network element in a core network connected to that access network device.
  • the computing power management function module located on the access network side or the core network side manages the computing power nodes, the number of computing power nodes managed by it can be more than the number of computing power nodes managed by the HMD, thereby realizing the utilization of more computing power.
  • the information of the rendering task also includes the data volume of the third rendering unit
  • the method further includes: obtaining the communication status information of the first computing power node; and according to the data volume of the third rendering unit and the communication status information of the first computing power node, determining to allocate the third rendering unit to the first computing power node.
  • the data volume of the rendering unit and the communication status information of the computing power node are considered, and a computing power node that can meet the data transmission requirements of the rendering unit is selected, thereby meeting the transmission requirements during the rendering task processing.
  • the information of the rendering task also includes the latency requirement of the rendering task. Then, according to the data volume of the third rendering unit, the communication status information of the first computing node and the latency requirement of the rendering task, it is determined to allocate the third rendering unit to the first computing node. That is to say, when allocating computing nodes, the latency requirement of the rendering task is also considered, and a computing node that can support the transmission of the data volume of the rendering unit within the latency requirement is selected, so as to meet the latency requirement of the rendering task.
  • the execution subject of the method is an access network device
  • the information of the rendering task is obtained from the HMD; and the method further includes: sending the identification information of the one or more rendering units and the identification information of the first computing power node to the HMD.
  • the access network device is responsible for allocating computing power nodes and sending the allocation results to the HMD.
  • rendering subtask information is received from the HMD, and the rendering subtask information is used to indicate rendering a subtask including the one or more rendering units; and the rendering subtask information is forwarded to the first computing power node.
  • the access network device forwards the rendering subtask information from the HMD to the first computing power node, so that the first computing power node can render one or more rendering units based on the rendering subtask information.
  • the method when the execution subject of the method is an HMD, the method further includes: sending rendering subtask information to the first computing power node, wherein the rendering subtask information is used to indicate rendering a subtask including the one or more rendering units.
  • the HMD is responsible for allocating computing power nodes and determining rendering subtask information, and sending rendering subtask information to the first computing power node, so that the first computing power node can render one or more rendering units based on the rendering subtask information.
  • the present application provides a device (or a device for allocating rendering tasks), which has the function of implementing the above-mentioned first aspect.
  • the device includes a module or unit or means corresponding to the operation involved in the above-mentioned first aspect.
  • the module or unit or means can be implemented by software, or by hardware, or the corresponding software can be implemented by hardware.
  • the device includes a processing module and a communication module, wherein the communication module can be used to send and receive signals to achieve communication between the device and other devices; the processing module can be used to perform some internal operations of the device.
  • the functions performed by the processing module and the communication module can correspond to the operations involved in the first aspect above.
  • the device includes a processor, which can be coupled to a memory.
  • the memory can store necessary computer programs or instructions for implementing the functions involved in the first aspect.
  • the processor can execute the computer program or instructions stored in the memory, and when the computer program or instructions are executed, the device implements the method in any possible design or implementation of the first aspect.
  • the device includes a processor and a memory, and the memory can store necessary computer programs or instructions for implementing the functions involved in the first aspect.
  • the processor can execute the computer program or instructions stored in the memory, and when the computer program or instructions are executed, the device implements the method in any possible design or implementation of the first aspect.
  • the device includes a processor and an interface circuit, wherein the processor is used to communicate with other devices through the interface circuit and execute the method in any possible design or implementation of the first aspect above.
  • the processor can be implemented by hardware or by software. When implemented by hardware, the processor can be a logic circuit, an integrated circuit, etc.; when implemented by software, the processor can be a general-purpose processor implemented by reading software code stored in the memory.
  • the above processors can be one or more, and the memories can be one or more.
  • the memory can be integrated with the processor, or the memory can be separately set from the processor. In the specific implementation process, the memory can be integrated with the processor on the same chip, or can be set on different chips respectively.
  • the embodiment of the present application does not limit the type of memory and the setting method of the memory and the processor.
  • the present application provides a system (or a rendering system), which may include the device in the second aspect and other devices that interact with the device (such as access network equipment, computing power nodes, computing power management function modules, or HMDs, etc.).
  • the present application provides a computer-readable storage medium, in which computer-readable instructions are stored.
  • when a computer reads and executes the computer-readable instructions, the computer executes the method in any possible design of the first aspect above.
  • the present application provides a computer program product.
  • when a computer reads and executes the computer program product, the computer executes the method in any possible design of the first aspect.
  • the present application provides a chip, comprising a processor, wherein the processor is coupled to a memory and is used to read and execute a software program stored in the memory to implement a method in any possible design of the first aspect above.
  • FIG1 is a schematic diagram of a distributed collaborative rendering architecture provided in an embodiment of the present application.
  • FIG2 is a schematic diagram of the dependency relationship between rendering units in a rendering process provided by an embodiment of the present application
  • FIG3 is a schematic diagram of a flow chart of a rendering method provided in an embodiment of the present application.
  • FIG4 is a schematic diagram of a flow chart of another rendering method provided in an embodiment of the present application.
  • FIG5 is a schematic diagram of the structure of a device provided in an embodiment of the present application.
  • the embodiment of the present application relates to a distributed collaborative rendering architecture for VR services.
  • a distributed collaborative rendering architecture is proposed in the embodiment of the present application, and the devices included in the distributed rendering architecture are introduced below.
  • a head-mounted display (HMD) device 101 (hereinafter referred to as HMD or VR glasses) is responsible for sensing the user's posture or the user's location, or receiving the user's instructions to determine the content to be rendered. HMD 101 can complete tasks such as reporting user information, foreground extraction, foreground content rendering task allocation, and bitstream merging.
  • the nearby computing power node (CPN) (or edge computing power node) 102 can feed back channel state information (CSI) between CPN 102 and other devices or nodes, such as other CPNs, the HMD, or access network equipment, can register computing power resources and feed back computing power status, and can also assist the VR glasses in completing decoding and rendering tasks.
  • a CPN can be a base station computing power board, a dedicated computing power board, a personal computer (PC) in a home network, a mobile phone, a smart watch, a customer premises equipment (CPE), an intelligent gateway (ING), an edge-deployed (sunken) content delivery network (CDN) device, a mobile edge computing (MEC) device, or the terminal of another VR device. It can be understood that a CPN is closer to the HMD than a cloud server and thus has lower latency.
  • CPNs can be connected through sidelink, in which case they can communicate directly without base station relay; or CPNs can be connected through the 5G system (5GS), in which case communication between CPNs is relayed by base stations.
  • the CPN and the HMD can communicate through the radio interface between the terminal and the base station, such as the Uu interface, in which case the communication between the CPN and the HMD is relayed by the base station; alternatively, the CPN and the HMD can communicate in sidelink mode, in which case they can communicate directly without base station relay.
  • the sidelink is an interface for direct communication between nearby terminals, used for example in proximity-based services (ProSe) and vehicle-to-everything (V2X) communication; the CPN and the HMD can both be terminal devices (referred to as terminals).
  • terminals are also known as user equipment (UE), mobile stations (MS), mobile terminals (MT), etc.
  • Access network equipment 103 refers to a radio access network (RAN) node (or device) that connects a terminal to a wireless network, which can also be called a base station, for example, it can be an evolved NodeB (eNodeB), a next generation NodeB (gNB) in a fifth generation (5G) mobile communication system, and other types of base stations.
  • the base station can adopt a centralized unit (CU)-distributed unit (DU) separation architecture, that is, the base station is composed of a CU and a DU.
  • the computing management function (CMF) device 104 refers to a functional module that can be arranged on the base station side or a network element arranged in the core network. Its main function is to manage the registration information of the computing nodes or provide the registration information of the computing nodes to other devices according to the request of other devices.
  • HMD 101 first divides the rendering task into several rendering units. Then, based on the status information of the CPNs, HMD 101 or access network device 103 assigns subtasks, each comprising one or more rendering units, to different CPNs to collaboratively complete the rendering.
  • the status information of CPN may include channel information (such as signal-to-noise ratio (SNR)) between CPN and other devices or nodes, such as other CPNs, HMDs or access network devices, and CPN computing power information (such as GPU computing power).
  • the advantage of the distributed collaborative rendering proposed in the embodiment of the present application is that it aggregates the small computing power of multiple CPNs into large computing power, which not only meets the high computing power requirements of VR services but also reduces latency, thereby improving user experience and saving energy.
  • the rendering unit is the smallest unit after the rendering task is divided, such as the object or tile described below.
  • CMF is responsible for managing CPN. Compared with HMD being responsible for the management of CPN, CMF can manage a wider range of CPNs and a greater number of CPNs.
  • rendering tasks can be divided in the following three ways:
  • Tile-based segmentation model: easy to implement, involves a large amount of interaction data, and can improve the single-frame rendering speed;
  • Object-based segmentation model: complex to implement, involves a medium amount of interaction data, and can significantly improve the single-frame rendering speed.
  • the rendering task can be divided into multiple rendering units, and there may be dependencies between rendering units.
  • Dependencies between rendering units means that the rendering of one rendering unit depends on the rendering result of another rendering unit, that is, the rendering of one rendering unit requires the rendering result of another rendering unit as input (or premise).
  • the following takes the object-based segmentation model as an example to explain the problems caused by the dependency between objects during rendering. If the rendering effect of one object may affect another object, the two objects can be considered to have a dependency. As shown in Figure 2, assume that a video frame contains 4 objects to be rendered, numbered 1, 2, 3, and 4, where object 1 overlaps with object 2; for example, the rendering of object 1 depends on the rendering result of object 2, so object 1 and object 2 can be considered to have a dependency. Therefore, during rendering, the CPNs that render mutually dependent objects need to exchange the rendering results and configuration information of the dependent objects.
  • the CPN that renders object 2 needs to send the rendering results and configuration information of object 2 to the CPN that renders object 1 during the rendering process.
  • the overhead generated by such interactions will be very large, possibly reaching GB level.
  • the tile-based segmentation model also has problems in the rendering process due to the dependencies between tiles. For example, if a video frame is 100*100 pixels, it can be divided into 100 small blocks of 10*10 pixels, each of which is a tile. The rendering of tiles may have dependencies; specifically, a tile may need other tiles to provide corresponding information (such as the objects they contain and their locations) during rendering in order to complete its own rendering.
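  • As a minimal sketch of the tile division just described (illustrative only), a 100*100-pixel frame can be cut into 100 tiles of 10*10 pixels:

```python
# Sketch: divide a frame into fixed-size tiles, matching the tile-based
# segmentation example above (illustrative only).

def split_into_tiles(frame_w, frame_h, tile_w, tile_h):
    """Return (x, y, w, h) rectangles covering the frame."""
    tiles = []
    for y in range(0, frame_h, tile_h):
        for x in range(0, frame_w, tile_w):
            tiles.append((x, y, min(tile_w, frame_w - x), min(tile_h, frame_h - y)))
    return tiles

print(len(split_into_tiles(100, 100, 10, 10)))  # 100 tiles of 10x10 pixels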
  • Distributed collaborative rendering can use multiple CPNs that are closer to the head-mounted display device than the cloud to assist in completing rendering tasks, thereby solving the problem of limited computing power in local rendering due to the limited computing power resources of the rendering device and the latency problem in cloud rendering due to the long distance between the cloud and the HMD.
  • different CPNs may need to interact with the context data of rendering subtasks during the rendering process (such as the rendering results and configuration information of the above-mentioned object, or the corresponding information provided by the above-mentioned other tiles).
  • This context data may reach the GB level, which has a significant impact on the air interface capacity and causes a decrease in rendering speed.
  • an embodiment of the present application provides a rendering method.
  • a rendering task is divided into multiple rendering units, and the multiple rendering units include at least two rendering units with dependencies.
  • the multiple rendering units are assigned to computing nodes (such as CPN) for rendering, the dependency of the at least two rendering units needs to be considered, thereby reducing the interaction of context data of rendering subtasks between computing nodes, thereby achieving faster rendering speed and reduced rendering latency.
  • rendering units with dependencies are assigned to the same computing node. For example, if the first rendering unit among multiple rendering units depends on the second rendering unit, the first rendering unit and the second rendering unit are assigned to the first computing node for rendering. In this way, compared with the solution of assigning the first rendering unit to the first computing node and the second rendering unit to the second computing node, this implementation avoids the exchange of context data of the second rendering unit between the first computing node and the second computing node.
  • the computing power required for rendering a certain rendering unit and the computing power resources that the computing node can provide are also considered, and a computing node whose computing power resources can meet the computing power required for rendering a certain rendering unit is selected. For example, for the third rendering unit among multiple rendering units, if the computing power resources of the first computing node can meet the computing power required for rendering the third rendering unit (for example, the computing power resources of the first computing node are greater than or equal to the computing power required for the third rendering unit), the third rendering unit is assigned to the first computing node.
  • the task is rendering and the computing power type that the computing node can provide supports rendering. For example, for the third rendering unit among multiple rendering units, if the computing power type of the first computing node includes a graphics processing unit, the third rendering unit is assigned to the first computing node.
  • the size of the data volume corresponding to a rendering unit and the communication status of the computing nodes are also considered, and a computing node whose communication status supports the transmission of the data volume corresponding to a rendering unit is selected. For example, for a third rendering unit among multiple rendering units, if the communication status of the first computing node supports the transmission of the data volume of the third rendering unit, the third rendering unit is assigned to the first computing node.
  • the latency requirement range of the rendering task is also considered, and a computing node whose communication state supports the transmission of the data volume corresponding to a certain rendering unit within the latency requirement range is selected. For example, for a third rendering unit among multiple rendering units, if the communication state of the first computing node supports the transmission of the data volume of the third rendering unit within the latency requirement range, the third rendering unit is assigned to the first computing node.
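  • Combining the criteria from the preceding bullets, a per-node feasibility test might be sketched as follows; this is an assumption-laden illustration, and in particular the conversion of communication status into an achievable rate is left abstract (all field names are hypothetical):

```python
# Sketch of the allocation checks described above (illustrative; field names
# and the status-to-rate conversion are assumptions, not defined by the patent).

def can_allocate(unit, cpn, latency_budget_s):
    has_gpu = "GPU" in cpn["computing_power_types"]        # task type: rendering
    enough_compute = cpn["flops_available"] >= unit["flops_required"]
    # Assume the node's communication status has already been converted into an
    # achievable rate in bits per second.
    tx_time = unit["data_volume_bits"] / cpn["achievable_rate_bps"]
    in_time = tx_time <= latency_budget_s
    return has_gpu and enough_compute and in_time

unit = {"flops_required": 5e9, "data_volume_bits": 10e6}
cpn = {"computing_power_types": ["CPU", "GPU"],
       "flops_available": 50e9, "achievable_rate_bps": 2e9}
print(can_allocate(unit, cpn, latency_budget_s=0.010))  # True: 5 ms transfer
```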
  • the size of computing power resources that the above-mentioned computing power nodes can provide and the type of computing power that the computing power nodes can provide can be derived from a computing power management function module (such as CMF).
  • the computing power management function module can be a function module in the access network device, or a network element in the core network connected to the access network device;
  • the computing power management function module can be a function module in the access network device connected to the HMD, or a network element in the core network connected to the access network device.
  • the communication status information of the above-mentioned computing power node can be derived from the access network device or the computing power node corresponding to the information.
  • the third rendering unit may be the same rendering unit as the first rendering unit or the second rendering unit, or may be a different rendering unit, which is not limited in the present application.
  • obtaining the rendering task information may mean receiving the rendering task information from the head-mounted display device, or obtaining locally stored rendering task information.
  • the access network device can send the identification information of the rendering unit assigned to the computing power node and the identification information of the computing power node to the HMD, and the HMD determines the rendering subtask information corresponding to the computing power node and then forwards it to the computing power node.
  • the identification information of the rendering unit can be the number or identifier (ID) of the rendering unit, that is, the information used to identify a certain rendering unit;
  • the identification information of the computing power node can be the address or identifier of the computing power node, that is, the information used to identify a certain computing power node.
  • the HMD forwards the rendering subtask information to the computing power node, which can be achieved through the access network device.
  • multiple computing nodes may render subtasks of the rendering task.
  • the embodiment of the present application considers assigning rendering units with dependencies to the same computing node, it does not exclude the possibility that the computing nodes will exchange context data of rendering units with dependencies.
  • for example, the fourth rendering unit and the fifth rendering unit, which have a dependency, may still be assigned to the second computing node and the third computing node respectively.
  • even so, the method of the embodiment of the present application can minimize the interactions between computing nodes and reduce the interaction overhead between them, thereby reducing the rendering latency.
  • a rendering method is provided in an embodiment of the present application.
  • in the following, the access network device is a gNB as an example, which is responsible for the scheduling decisions of the rendering task; the HMD and the CMF respectively provide the gNB with the service (rendering task) information and the CPN computing power information to assist the gNB in allocating the rendering task.
  • the method includes the following steps:
  • S101 gNB requests the computing resource table from CMF, and CMF receives the request for the computing resource table from gNB.
  • the computing power resource table can be used for rendering task scheduling of users within the gNB service range.
  • the gNB may periodically request the computing power resource table from the CMF; or the gNB may request the computing power resource table from the CMF after receiving the rendering task information (i.e. after executing S103).
  • the gNB may send a request message for the computing power resource table to the CMF, which may include information indicating the requested CPN address, computing power type (optional, for example, when all CPNs support a certain computing power type, no request is required), and computing power resource size.
  • S102 The CMF sends the computing power resource table to the gNB, and the gNB receives the computing power resource table from the CMF.
  • the computing power resource table may include information such as CPN address, computing power type (optional, for example, when all CPNs support a certain computing power type, the computing power resource table does not need to include it), and computing power resource size.
  • the CPN address may be an IP address, or may be an identity identifier on the RAN side.
  • the identity identifier on the RAN side may be, for example, a radio network temporary identifier (RNTI), which is not limited here.
  • the computing power type includes a central processing unit (CPU) and/or a graphics processing unit (GPU).
  • the central processing unit is the processor that controls the operation of the computer and is mainly responsible for floating-point and integer operations.
  • the graphics processing unit is an auxiliary processor that handles graphics-related computation in the computer so that data is better presented on the display, and is mainly responsible for floating-point operations.
  • the computing power resource size may be expressed in units of operations per second (OPS) or floating-point operations per second (FLOPS), and the larger the value, the greater the computing power resource.
  • the computing power resource table sent by CMF is shown in Table 1 below.
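  • Table 1 itself is not reproduced here; as a hedged illustration, such a computing power resource table could be represented as below. Only the GPU capability of CPN2-CPN4 and CPN2's 50G FLOPS figure are suggested by the surrounding examples; the addresses and remaining values are invented:

```python
# Hedged sketch of a computing power resource table (cf. Table 1).
# CPN2's 50 GFLOPS GPU entry and the GPU capability of CPN2-CPN4 follow from
# the examples in the text; addresses and the other values are invented.

computing_resource_table = [
    {"cpn": "CPN1", "address": "10.0.0.1", "types": ["CPU"],        "flops": 10e9},
    {"cpn": "CPN2", "address": "10.0.0.2", "types": ["CPU", "GPU"], "flops": 50e9},
    {"cpn": "CPN3", "address": "10.0.0.3", "types": ["CPU", "GPU"], "flops": 20e9},
    {"cpn": "CPN4", "address": "10.0.0.4", "types": ["CPU", "GPU"], "flops": 15e9},
]

# e.g. filter nodes that can support rendering (task type -> GPU required):
gpu_cpns = [row["cpn"] for row in computing_resource_table if "GPU" in row["types"]]
print(gpu_cpns)  # ['CPN2', 'CPN3', 'CPN4'] -- matching the selection in the text
```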
  • CMF is responsible for creating and maintaining the information in the computing resource table, and can be a functional entity that provides the computing resource table.
  • CMF can be located at the gNB, for example, a functional module in the gNB, or can be located at the core network, for example, a network element in the core network.
  • if any CPN has computing power and wants to share it, it can submit a registration application to the CMF node and report its computing power resource information for recording in the computing power resource table. When the computing power resource information of a CPN changes, the CPN can send a new message to the CMF to update the computing power resource information saved by the CMF.
  • gNB can obtain the computing resource table from CMF in the following two ways: one is that CMF periodically sends the computing resource table, and the period can be configured by the core network or base station; the other is that gNB executes S101, that is, actively requests the computing resource table from CMF.
  • S101 is an optional step and may not be executed in the following method.
  • S103 The HMD sends rendering task information to the gNB, and the gNB receives the rendering task information from the HMD.
  • the rendering task information may include any one or more of the task type, latency requirement, number of divisions, number of rendering units, computing power of rendering units, data volume of rendering units, or dependencies between rendering units.
  • the task type indicates that this task is rendering;
  • the latency requirement indicates the latency requirement of this task;
  • the number of divisions is the number of rendering units after the rendering task is divided;
  • the computing power of the rendering unit is the computing power required to render a certain rendering unit;
  • the data volume of the rendering unit is the data volume corresponding to a certain rendering unit, which can be understood as the data volume corresponding to the transmission of a certain rendering unit.
  • the rendering task information can be sent from the HMD to the gNB via the media access control (MAC) layer, the radio resource control (RRC) layer, the packet data convergence protocol (PDCP) layer, or the uplink control information (UCI) message.
  • for example, the task type indicates that this task is rendering; the latency requirement indicates 10 ms, that is, this rendering task is required to be completed within 10 ms. If an object-based task segmentation method is adopted and there are 5 objects to be rendered in a frame, the number of divisions is indicated as 5; if the 5 objects are numbered, the rendering unit numbers can be 1-5. The computing power size indicates the computing power required to render each object; the data volume indicates the data volume corresponding to each object (that is, the data volume of the object transmitted between nodes). If object1 has a dependency relationship with object2 and object3, and object3 has a dependency relationship with object4, the dependency relationship can indicate that the rendering of object1 depends on the rendering of object2 and object3, and that the rendering of object3 depends on the rendering of object4. Part of the rendering task information in this example can be shown in Table 2 below.
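  • A hedged sketch of this rendering task information (cf. Table 2): the task type, the 10 ms latency requirement, the five objects, and the dependency pairs come from the example above, while the per-object computing power and data volumes are invented so that object1-object4 total 33G FLOPS and 66 Mbits, matching the later examples:

```python
# Hedged sketch of the rendering task information for the example above
# (cf. Table 2). Per-object FLOPS/data values are invented placeholders chosen
# so that object1-object4 sum to 33 GFLOPS and 66 Mbits as in the later text.

rendering_task_info = {
    "task_type": "rendering",
    "latency_requirement_ms": 10,
    "num_units": 5,
    "units": {
        1: {"flops": 10e9, "data_bits": 20e6},
        2: {"flops":  8e9, "data_bits": 16e6},
        3: {"flops":  9e9, "data_bits": 18e6},
        4: {"flops":  6e9, "data_bits": 12e6},
        5: {"flops":  5e9, "data_bits": 10e6},
    },
    # (a, b) means: rendering unit a depends on the rendering result of unit b
    "dependencies": [(1, 2), (1, 3), (3, 4)],
}
```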
  • S101 - S102 may be executed first and then S103 , or S103 may be executed first and then S101 - S102 .
  • S104 gNB selects a CPN based on the computing resource table, rendering task information, and communication status information of the CPN.
  • the communication state information of the CPN may include channel information of the CPN, and the channel information of the CPN represents the real-time transmission rate that the CPN can support.
  • the channel information of the CPN may be, for example, channel state information (CSI) between the CPN and other devices or nodes, such as other CPNs, gNBs, or HMD nodes. It can be understood that the CPN belongs to a communication service object of the gNB, and the gNB can obtain the communication state information of the CPN from the CPN.
  • the communication status information of the CPN obtained by the gNB may be as shown in Table 3 below.
  • the gNB selects a CPN based on the computing resource table, rendering task information, and CPN communication status information, which may include any one or more of the following:
  • (1) gNB selects a CPN with a computing power type of GPU based on the task type in the rendering task information.
  • the gNB selects CPN2-CPN4 in Table 1.
  • the gNB determines to allocate multiple rendering units with dependencies to the same CPN based on the dependencies in the rendering task information.
  • the gNB determines to allocate object1, object2, object3 and object4 to the same CPN; or to allocate object1, object2 and object3 to one CPN and object4 to another CPN; or to allocate object1 and object2 to one CPN and object3 and object4 to another CPN.
  • the specific division method may depend on the computing power required by each rendering unit and the computing power that each CPN can provide, or the data size transmitted to each rendering unit and the communication status of each CPN.
  • the method of allocating CPNs according to dependencies can reduce the number of interactions between CPNs or reduce the amount of data interacted between CPNs, thereby reducing latency and improving rendering speed.
  • for example, suppose the gNB allocates object1 to CPN2, object2 and object3 to CPN3, and object4 and object5 to CPN4 without considering dependencies. Then CPN4 completes the rendering of object4 and sends its rendering result to CPN3; CPN3 completes the rendering of object3 based on the rendering result of object4, completes the rendering of object2, and then sends the rendering results of object2 and object3 to CPN2. In this case, the rendering result of object4 is exchanged between CPN4 and CPN3, and the rendering results of object2 and object3 are exchanged between CPN3 and CPN2. If the gNB takes dependencies into consideration, for example assigning object1, object2, and object3 to CPN2 and object4 to CPN3, the rendering of object1-object4 can be completed with only a single exchange of the rendering result of object4 between CPN2 and CPN3.
  • the gNB determines the CPN that can meet the computing power required by a certain rendering unit (i.e., the CPN whose computing power resources are greater than or equal to the computing power required by a certain rendering unit) based on the computing power of the rendering unit in the rendering task information and the computing power resources that can be provided by the CPN in the computing power resource table.
  • the computing power required to render the multiple rendering units and the CPN that can provide this computing power can be considered.
  • the total computing power required for object1-object4 is 33G FLOPS
  • CPN2 with a resource size of 50G FLOPS can meet the computing power requirements of object1-object4.
  • the computing power required for object5 is 5G FLOPS
  • CPN2-4 can all meet the computing power requirements of object5.
  • the gNB determines the CPN that can support the transmission of the data volume of a certain rendering unit within the delay requirement range (i.e., the communication state of the CPN can support the transmission of the data volume of a certain rendering unit within the delay requirement range) based on the latency requirement in the rendering task information, the data volume corresponding to the rendering unit, and the communication status information of the CPN.
  • the data volume, latency requirement and communication status information of the multiple rendering units can be considered to determine the CPN that can support the transmission of the data of the multiple rendering units within the latency requirement. For example, according to the latency requirement of 10ms and the data volume of 66Mbits of object1-object4, it is determined that CPN2 with a CSI of 20dB can support 66Mbits of data transmission within 10ms, so object1-object4 is allocated to CPN2.
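  • As a rough feasibility calculation for this criterion (leaving abstract how a 20 dB CSI maps to an achievable rate), the required link rate is the data volume divided by the latency budget:

```python
# Rough check for criterion (4): moving 66 Mbits within the 10 ms budget
# requires the link to the chosen CPN to sustain about 6.6 Gbps.
data_bits = 66e6          # data volume of object1-object4
latency_s = 10e-3         # latency requirement of the rendering task
print(data_bits / latency_s / 1e9)  # ~6.6 (Gbps)
```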
  • the principle for gNB to select CPN is that the delay caused by rendering and transmission is less than the delay requirement of the rendering task, and the data interaction between nodes is reduced as much as possible.
  • if the gNB determines that the CPNs in the computing power resource table cannot support the requirements of the rendering task, the gNB can choose the cloud for rendering.
  • specifically, when the computing power provided by the CPNs in the computing power resource table can meet the rendering task requirements, the rendering task is assigned to the CPNs; if it cannot be met, the rendering units whose requirements exceed the computing power that each CPN can provide can be assigned to the cloud, or the entire rendering task can be handed directly to the cloud for rendering.
  • when allocating CPNs, the gNB can aggregate rendering units with dependencies into a whole, such as aggregating object1-object4 into one large object whose corresponding computing power requirement is 33G FLOPS. The large object and the objects without dependencies are then sorted by computing power requirement, for example: the large object -> object5. Then, according to the computing power that the CPNs in the computing power resource table can provide, the gNB determines whether there is a CPN that can meet the computing power requirement of each sorted object, such as CPN2 meeting the 33G FLOPS requirement of the large object and CPN3 meeting the 5G FLOPS requirement of object5.
  • if no CPN can meet the computing power requirement of the aggregated object, the aggregated object can be split (or the task can be handed directly to the cloud). For example, if there is no CPN that meets the 33G FLOPS computing power requirement of the large object (assuming that the computing power of CPN2 is 30G FLOPS), the computing power requirements of object1-object4 can be split, for example into object1-object3 and object4, and re-sorted together with object5, which has no dependency, that is: object1-object3 -> object4 -> object5; the CPNs are then determined again based on the computing power that the CPNs in the computing power resource table can provide. A simplified allocation sketch follows.
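  • The aggregate-then-sort allocation just described might be sketched as follows (a simplification: each CPN takes one subtask, and the splitting fallback is reduced to a cloud fallback; all names are illustrative):

```python
# Sketch of the allocation strategy above: aggregate dependency-connected
# units into one "large object", sort the (aggregated) objects by required
# compute, then greedily pick a CPN that can satisfy each one (illustrative).

def allocate(groups, cpns):
    """groups: list of (name, required_flops); cpns: dict name -> available flops."""
    available = dict(cpns)
    assignment = {}
    for name, need in sorted(groups, key=lambda g: g[1], reverse=True):
        # pick one CPN with enough computing power; each CPN takes one subtask here
        fit = next((c for c, f in available.items() if f >= need), None)
        if fit is None:
            assignment[name] = "cloud"  # no CPN fits -> fall back to cloud rendering
            # (a real scheduler might instead split the aggregate and retry,
            # as described above)
        else:
            assignment[name] = fit
            available.pop(fit)
    return assignment

groups = [("object1-object4", 33e9), ("object5", 5e9)]  # aggregated large object
cpns = {"CPN2": 50e9, "CPN3": 20e9, "CPN4": 15e9}
print(allocate(groups, cpns))  # {'object1-object4': 'CPN2', 'object5': 'CPN3'}
```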
  • gNB can also reduce rendering latency by allocating rendering units with greater computing power requirements to CPNs with greater computing power resources.
  • S105 The gNB sends the combination mode of the rendering units and the corresponding CPN information to the HMD, and the HMD receives the combination mode of the rendering units and the corresponding CPN information from the gNB.
  • the combination of rendering units may include the numbers of one or more rendering units assigned to a certain CPN, and the corresponding CPN information may be the address of the CPN.
  • object1-object4 is assigned to CPN2
  • object5 is assigned to CPN3
  • the combination of rendering units and the corresponding CPN information may be as shown in Table 4 below.
  • object1-object3 is assigned to CPN2
  • object4 is assigned to CPN3
  • object5 is assigned to CPN4
  • the combination of rendering units and the corresponding CPN information may be as shown in Table 5 below.
  • S106 The HMD determines rendering subtask information according to the combination mode of the rendering units and the corresponding CPN information.
  • the rendering subtask information includes the number of the rendering unit and the dependency of the rendering unit (optional, for example, if there is no dependency between the rendering units, the rendering subtask information may not include the dependency of the rendering units).
  • object1-object4 are assigned to CPN2
  • object5 is assigned to CPN3.
  • the rendering subtask information of CPN2 may include the information shown in Table 6 below
  • the rendering subtask information of CPN3 may include the information shown in Table 7 below, or the rendering subtask information of CPN3 includes information indicating that the object number to be rendered is 5, but does not include the dependency.
  • that is, according to the combination mode of the rendering units and the corresponding CPN information, the HMD determines the rendering units assigned to a certain CPN and the dependencies (if any) among those rendering units, packages them into a subtask, and thereby determines the subtask information.
  • if a rendering unit of a CPN depends on the rendering result of another CPN, the rendering subtask information of that CPN may also include the address of the CPN it depends on; if the rendering result of a CPN is relied upon by a rendering unit of another CPN, the rendering subtask information of that CPN may also include the address of the CPN that depends on it.
  • object1-object3 are assigned to CPN2
  • object4 is assigned to CPN3
  • object5 is assigned to CPN4.
  • the rendering subtask information of CPN2 may include the information shown in Table 8 below
  • the rendering subtask information of CPN3 may include the information shown in Table 9 below
  • the rendering subtask information of CPN4 may include the information shown in Table 10 below.
  • that is, the rendering subtask information of CPN3 in the above example may include information indicating that the object to be rendered is numbered 4 and that its result is depended upon by the CPN at address 2, without being limited to the form of a table; similarly, the rendering subtask information of CPN4 may include information indicating that the object to be rendered is numbered 5, without being limited to the form of a table.
  • the number of the rendering unit, the dependency of the rendering unit (optional), the address of the CPN that is depended on (optional), and the address of the CPN that depends on it (optional) may be included in the header of a packet data convergence protocol (PDCP) data packet.
  • the rendering subtask information may also include configuration information (or data) for rendering a certain rendering unit, and the configuration information may be in the PDCP data packet.
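  • A hedged sketch of what the rendering subtask messages for the second example (object1-object3 on CPN2, object4 on CPN3; cf. Tables 8 and 9) might carry; field names and address values are hypothetical, with the header-like fields in a PDCP packet header and the configuration data in the packet payload as described above:

```python
# Hedged sketch of rendering subtask messages for the second example above
# (object1-object3 on CPN2, object4 on CPN3). Field names and addresses are
# illustrative, not defined by the patent text.

subtask_for_cpn2 = {
    "unit_numbers": [1, 2, 3],
    "dependencies": [(1, 2), (1, 3), (3, 4)],  # (a, b): a depends on b
    "depended_on_cpn_address": "addr-CPN3",    # object3 needs object4's result
    "config": b"...rendering configuration/data for objects 1-3...",
}

subtask_for_cpn3 = {
    "unit_numbers": [4],
    "dependent_cpn_address": "addr-CPN2",      # send object4's result to CPN2
    "config": b"...rendering configuration/data for object 4...",
}
```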
  • S107 The HMD sends the rendering subtask information to the corresponding CPN, and the corresponding CPN receives the rendering subtask information from the HMD.
  • CPN2 receives rendering subtask information as shown in Table 6, and CPN3 receives rendering subtask information as shown in Table 7; or CPN2 receives rendering subtask information as shown in Table 8, CPN3 receives rendering subtask information as shown in Table 9, and CPN4 receives rendering subtask information as shown in Table 10.
  • the HMD and the CPN communicate through a base station, for example, the HMD and the CPN communicate through a Uu port.
  • the HMD sends each rendering subtask information to the gNB, and the gNB forwards the corresponding rendering subtask information to the corresponding CPN.
  • the HMD can also send the CPN information corresponding to the rendering subtask information to the gNB, and the gNB knows to forward the rendering subtask information to the corresponding CPN based on the CPN information.
  • the CPN information may be included in the packet header (such as the PDCP packet header) of the data packet where the rendering subtask information is located, and the gNB can check the packet header of the data packet where the rendering subtask information is located to know to forward it to the corresponding CPN.
  • the CPN information may be included in the rendering subtask information, or may not be included in the rendering subtask information, and this application does not limit this.
  • the HMD can send the number of the rendering unit, the dependency of the rendering unit (optional), the address of the CPN that is depended on (optional), and the address of the CPN that depends on it (optional) in the rendering subtask information to the gNB through the control plane, and send the configuration information in the rendering subtask information to the gNB through the data plane.
  • the gNB then sends the complete rendering subtask information including the number of the rendering unit, the dependency of the rendering unit, the dependent CPN address, the address of the dependent CPN and the configuration information to the corresponding CPN. In this way, the gNB cannot perceive the configuration information.
  • the HMD sends the rendering subtask information to the gNB through the data plane, and the gNB can view the header part of the data packet, but cannot view the configuration information in the data packet.
  • the HMD and the CPN may communicate directly without being transferred through a base station, for example, the HMD and the CPN may communicate through a sidelink.
  • the embodiment of the present application may include both of the above two implementation methods, that is, the HMD and part of the CPN may communicate directly, and at the same time, the HMD and other CPNs may communicate through a base station.
  • during rendering, a CPN can render its units in the order: depended-upon units -> units without dependencies -> dependent units. It is understandable that a depended-upon subtask may affect the rendering on other CPNs, so its rendering needs to be completed first and its rendering result then transmitted to the CPN that depends on it; correspondingly, a CPN that depends on the rendering results of other CPNs needs to wait until those results are available. For example, when object1-object3 are assigned to CPN2, object4 is assigned to CPN3, and object5 is assigned to CPN4, CPN3 needs to send the rendering result of object4 to CPN2, and CPN2 can render in the order object2 -> object3 -> object1. Even if a CPN does not depend on the rendering results of other CPNs, it can still render in the order: depended-upon units -> units without dependencies -> dependent units. A minimal ordering sketch follows.
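  • The ordering rule above might be sketched as a plain topological sort with a tie-break that renders depended-upon units first (illustrative only):

```python
# Sketch: order a CPN's local rendering units so that units whose results are
# depended upon come first and units that depend on others come last.

def local_render_order(units, dependencies):
    """dependencies: (a, b) pairs meaning 'a depends on b' (b must finish first)."""
    local = set(units)
    deps_of = {u: {b for a, b in dependencies if a == u and b in local} for u in units}
    depended_on = {b for _, b in dependencies}
    order = []
    while len(order) < len(units):
        ready = [u for u in units if u not in order and deps_of[u] <= set(order)]
        # tie-break: render depended-upon units before purely dependent ones
        ready.sort(key=lambda u: (u not in depended_on, u))
        order.append(ready[0])
    return order

# CPN2 from the example: objects 1-3, where 1 depends on 2 and 3, and 3
# depends on 4 (rendered externally on CPN3, so not a local constraint).
print(local_render_order([1, 2, 3], [(1, 2), (1, 3), (3, 4)]))  # [2, 3, 1]
```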
  • CPNs can communicate directly with each other, for example, using sidelink communication; or, CPNs may not be able to communicate directly with each other, for example, CPNs are connected via 5GS, in which case a CPN can forward the rendering result to the CPN that depends on the rendering result through the gNB.
  • the gNB can store the dependency relationship of the rendering units and the information of the CPN corresponding to each rendering unit, thereby achieving forwarding.
  • the rendering subtask information may omit the address of the CPN that is depended on, or omit the address of the CPN that depends on it (for example, the rendering subtask information of CPN2 in Table 8 does not include the address of the CPN it depends on, and the rendering subtask information of CPN3 in Table 9 does not include the address of the CPN that depends on it). That is, the gNB can forward the depended-upon rendering results to the CPN that depends on them based on the locally stored dependency relationships of the rendering units and the information of the CPN corresponding to each rendering unit, without these addresses being carried in the rendering subtask information.
  • the rendering result may include information such as position, lighting, shadow, or texture after the subtask rendering is completed.
  • the CPN communicates with the HMD through the base station, for example, the CPN and the HMD communicate through the Uu port; or the CPN and the HMD communicate directly without being transferred through the base station, for example, the CPN and the HMD communicate through the sidelink; or some CPNs communicate directly with the HMD, while other CPNs and the HMD communicate through the base station.
  • the gNB allocates rendering tasks to the corresponding CPN based on the communication status information of the CPN and the computing resource table of the CPN, and considers the dependencies between the rendering units included in the rendering tasks during the allocation process. Compared with the allocation method that does not consider the dependencies, the method of the embodiment of the present application can reduce the interactions between CPNs and reduce the interaction overhead between CPNs, thereby reducing the rendering delay.
  • the embodiment of the present application provides another rendering method as shown in FIG4 , in which the HMD is responsible for scheduling decisions for rendering tasks, and the gNB and CMF respectively provide the HMD with CPN communication status information and CPN computing power information to assist the HMD in allocating rendering tasks.
  • the method includes the following steps:
  • S201 The HMD requests the computing power resource table from the CMF, and the CMF receives the request for the computing power resource table from the HMD.
  • the computing power resource table can be used to determine the segmentation method of rendering tasks and the allocation of rendering tasks.
  • the HMD receives a rendering request from the user and then requests a computing resource table from the CMF.
  • the HMD and CMF can communicate through a base station, or through a base station and a core network.
  • the CMF can be a functional module in the gNB or a network element of the core network;
  • the CMF can be a third-party network element outside the 3rd Generation Partnership Project (3GPP) network system.
  • S202 The CMF sends the computing power resource table to the gNB, and the gNB receives the computing power resource table from the CMF.
  • the gNB adds the communication status information of the CPN to the received computing power resource table.
  • the gNB can add the communication status information of the CPN shown in Table 3 to form the information shown in Table 11 below.
  • the embodiment of the present application can be applied to the scenario where the base station consists of a CU and a DU (which can be called a CU-DU scenario).
  • in this scenario, the CMF first sends the computing power resource table to the CU, the DU then sends the communication status information of the CPN to the CU, and the CU finally adds the CPN communication status information to the computing power resource table and sends it to the HMD.
  • the gNB sends the CPN communication status information and the CPN computing resource information to the HMD, and the HMD receives the CPN communication status information and the CPN computing resource information from the gNB.
  • here, the computing power resource information of the CPN can be the computing power resource table without the added communication status information (such as Table 1), and the CPN communication status information together with the CPN computing power resource information can be the computing power resource table with the communication status information added (such as Table 11).
  • the CMF may send the computing power resource table to the HMD
  • the gNB may send the communication status information of the CPN to the HMD, that is, the gNB does not need to add the communication status information of the CPN to the computing power resource table
  • in that case, the HMD receives the computing power resource table from the CMF and the communication status information of the CPN from the gNB, respectively.
  • S203 and S204 are optional steps.
  • S205 The HMD selects a CPN according to the communication status information of the CPN, the computing power resource information of the CPN, and the rendering task information.
  • the HMD can save the rendering task information locally.
  • for the manner in which the HMD selects the CPN based on the communication status information of the CPN, the computing power resource information of the CPN, and the rendering task information, refer to the description of the gNB selecting the CPN in S104, which will not be repeated here.
  • the principle for HMD to select CPN can be that the delay generated by rendering and transmission is less than or equal to the delay requirement of the rendering task, and the interaction between CPNs is minimized as much as possible.
  • if the HMD determines that the CPNs in the computing power resource table cannot support the requirements of the rendering task, the HMD can choose the cloud for rendering.
  • for whether the HMD selects the cloud, and for how the rendering task is specifically allocated when CPNs are selected, refer to the description of the related gNB actions in S104, which is not repeated here.
  • after receiving the computing power resource information of the CPNs, the HMD can choose the method of splitting the rendering task and/or the number of rendering subtasks. For example, if the computing power is sufficient, the object-based segmentation model or the tile-based segmentation model is preferred. If the computing power of each available CPN is small but the number of available CPNs is large, so that a single computing power node is not enough to support the rendering of one object, the HMD can choose the tile-based splitting method. The number of split objects, or the size and number of split tiles, can be determined based on the computing power of the CPNs.
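  • one possible reading of this selection heuristic is the following Python sketch; the decision rule and the numbers are illustrative assumptions of this edit, not mandated by the patent:

```python
def choose_split(object_flops, cpn_flops):
    """Pick a segmentation model from per-object compute demands and the
    per-CPN computing power resources reported to the HMD."""
    if not cpn_flops:
        return "cloud"                     # no usable CPN at all
    if max(cpn_flops) >= max(object_flops):
        # Every object fits on some node: object-based splitting works.
        return "object-based"
    # CPNs too small for a whole object: fall back to tile-based splitting,
    # sizing tiles so a single tile's demand fits on one node.
    return "tile-based"

print(choose_split([10e9, 8e9, 9e9, 6e9, 5e9], [50e9, 10e9, 10e9]))  # object-based
print(choose_split([10e9, 8e9, 9e9, 6e9, 5e9], [4e9, 4e9, 4e9]))     # tile-based
```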
  • the HMD may determine one or more rendering units assigned to a certain CPN (which may correspond to the combination of rendering units in S105), for example, determining to assign object1-object4 to CPN2 and object5 to CPN3; or determining to assign object1-object3 to CPN2, object4 to CPN3, and object5 to CPN4.
  • S206: the HMD determines the rendering subtask information according to the selected CPNs and the rendering units corresponding to each CPN.
  • the rendering subtask information may include the number of the rendering unit, the dependency relationship of the rendering unit (optional), the address of the CPN that the subtask depends on (optional), and the address of the CPN that depends on the subtask (optional); for a detailed description of the rendering subtask information, refer to the description in S106.
  • for S207-S210, refer to the descriptions of S107-S110, which are not repeated here.
  • the HMD allocates rendering tasks to corresponding CPNs based on the communication status information of the CPNs and the computing resource information of the CPNs, and considers the dependencies between the rendering units included in the rendering tasks during the allocation process.
  • the method of the embodiment of the present application can reduce the interactions between CPNs and reduce the interaction overhead between CPNs, thereby reducing the rendering delay.
  • step numbers of the flowcharts described in FIG3 and FIG4 are only examples of the execution process and do not constitute a limitation on the order of execution of the steps. In the embodiment of the present application, there is no strict execution order between the steps that have no timing dependency relationship with each other. Not all the steps shown in the flowcharts are steps that must be executed. Some steps can be deleted based on each flowchart according to actual needs, or other possible steps can be added based on each flowchart according to actual needs.
  • the HMD (or terminal), access network equipment, CMF, or computing node may include hardware structures and/or software modules corresponding to the execution of each function.
  • the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.
  • the embodiment of the present application can divide the HMD, access network device, CMF, or computing node into functional units according to the above method example.
  • each functional unit can be divided according to each function, or two or more functions can be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of software functional units.
  • FIG5 shows a possible exemplary block diagram of the device involved in the embodiments of the present application.
  • the device 500 may include: a processing module 501 and a communication module 502.
  • the processing module 501 is used to control and manage the actions of the device 500.
  • the communication module 502 is used to support the communication between the device 500 and other devices.
  • the communication module 502 is also called a transceiver module, and may include a receiving module and/or a sending module, which are respectively used to perform receiving and sending operations.
  • the device 500 may also include a storage module 503 for storing program code and/or data of the device 500.
  • the device 500 may be the HMD or access network device in the above embodiment.
  • the processing module 501 may support the device 500 to execute the actions of the HMD or access network device in the above method examples.
  • the processing module 501 mainly executes the internal actions of the HMD or access network device in the method example, and the communication module 502 may support the communication between the device 500 and other devices.
  • the processing module 501 is used to: obtain information about a rendering task, where the rendering task is divided into multiple rendering units, and the information about the rendering task includes a dependency relationship between at least two of the multiple rendering units; and determine, based on the dependency relationship, to allocate one or more of the multiple rendering units to a first computing power node for rendering.
  • the dependency relationship indicates that a first rendering unit among the multiple rendering units depends on a second rendering unit among the multiple rendering units, and the processing module 501 is specifically used to determine to allocate both the first rendering unit and the second rendering unit to the first computing power node.
  • the information of the rendering task also includes the computing power required to render the third rendering unit
  • the communication module 502 is used to: receive the computing power resource size of one or more computing power nodes from the computing power management function module, and the one or more computing power nodes include the first computing power node; the processing module 501 is specifically used to: determine to allocate the third rendering unit to the first computing power node based on the computing power required to render the third rendering unit and the computing power resource size of the first computing power node.
  • the information of the rendering task also includes an indication of the rendering task type
  • the communication module 502 is used to: receive the computing power type of one or more computing power nodes from the computing power management function module, wherein the one or more computing power nodes include the first computing power node, and the computing power type of the first computing power node includes a graphics processing unit; the processing module 501 is specifically used to: determine to assign the third rendering unit to the first computing power node based on the task type and the computing power type of the first computing power node.
  • when the device 500 is an access network device, the computing power management function module is a functional module in the access network device, or a network element in a core network connected to the access network device; or, when the device 500 is an HMD, the computing power management function module is a functional module in an access network device connected to the HMD, or a network element in a core network connected to that access network device.
  • the information of the rendering task also includes the data volume of the third rendering unit
  • the processing module 501 is also used to: obtain the communication status information of the first computing power node; and specifically to: determine to allocate the third rendering unit to the first computing power node based on the data volume of the third rendering unit and the communication status information of the first computing power node.
  • the processing module 501 is specifically used to: obtain information about the rendering task from the HMD; the communication module 502 is used to: send identification information of the one or more rendering units and identification information of the first computing power node to the HMD.
  • the communication module 502 is further used to: receive rendering subtask information from the HMD, where the rendering subtask information is used to indicate rendering a subtask including the one or more rendering units; and forward the rendering subtask information to the first computing power node.
  • the communication module 502 is further used to: send rendering subtask information to the first computing power node, where the rendering subtask information is used to indicate rendering a subtask including the one or more rendering units.
  • when the device 500 is an access network device, the access network device may include one or more DUs and one or more CUs.
  • the DU is mainly used for receiving and transmitting radio frequency signals, for conversion between radio frequency signals and baseband signals, and for part of the baseband processing.
  • the CU is mainly used for baseband processing, for controlling the access network device, and so on.
  • for example, the CU can be used to control the access network device to execute the operation procedures of the access network device in the above method embodiments.
  • the DU and the CU can be physically co-located or physically separated, the latter being a distributed base station.
  • it should be understood that the division of modules (or units) in the above device is only a division of logical functions; in actual implementation, they can be fully or partially integrated into one physical entity, or they can be physically separated. The modules in the device can all be implemented in the form of software called by a processing element, or all in the form of hardware, or partly in the form of software called by a processing element and partly in the form of hardware.
  • each module can be a separately established processing element, or it can be integrated into a certain chip of the device; in addition, a module can also be stored in a memory in the form of a program, and a certain processing element of the device calls and executes the function of the module.
  • all or part of these modules can be integrated together or implemented independently. The processing element described here, also called a processor, can be an integrated circuit with signal processing capability. In the implementation process, each operation of the above method or each of the above modules can be implemented by an integrated logic circuit of hardware in the processor element, or in the form of software called by a processing element.
  • the modules in any of the above devices can be one or more integrated circuits configured to implement the above method, for example one or more application specific integrated circuits (ASIC), one or more digital signal processors (DSP), one or more field programmable gate arrays (FPGA), or a combination of at least two of these integrated circuit forms.
  • when a module in the device is implemented in the form of a processing element scheduling a program, the processing element can be a processor, such as a general-purpose central processing unit (CPU), or another processor that can call programs.
  • these modules can also be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • the above module for receiving is an interface circuit of the device, used to receive signals from other devices.
  • for example, when the device is implemented in the form of a chip, the receiving module is an interface circuit of the chip used to receive signals from other chips or devices.
  • the above module for sending is an interface circuit of the device, used to send signals to other devices.
  • for example, when the device is implemented in the form of a chip, the sending module is an interface circuit of the chip used to send signals to other chips or devices.
  • the terms "system" and "network" in the embodiments of the present application can be used interchangeably.
  • "at least one" refers to one or more, "at least two" refers to two or more, and "multiple" refers to two or more.
  • "and/or" describes the association relationship of the associated objects and indicates that three relationships can exist; for example, A and/or B can represent: A exists alone, A and B exist at the same time, and B exists alone, where A and B can be singular or plural.
  • the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
  • "at least one of the following (items)" or similar expressions refers to any combination of these items, including any combination of a single item or plural items.
  • for example, "at least one of A, B or C" includes A, B, C, AB, AC, BC or ABC, and "at least one of A, B and C" can also be understood to include A, B, C, AB, AC, BC or ABC.
  • unless otherwise specified, the ordinal words such as "first" and "second" mentioned in the embodiments of the present application are used to distinguish multiple objects, and are not used to limit the order, timing, priority, or importance of the multiple objects.
  • those skilled in the art should understand that the embodiments of the present application may be provided as a method, a device, a system, or a computer program product. Therefore, the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) containing computer-usable program code.
  • the present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the present application. It should be understood that each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured product including an instruction apparatus that implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.


Abstract

The present application relates to image rendering and discloses a rendering method, device and system. The method includes: obtaining information about a rendering task, where the rendering task is divided into multiple rendering units, and the information about the rendering task includes a dependency relationship between at least two of the multiple rendering units; and determining, based on the dependency relationship, to allocate one or more of the multiple rendering units to a first computing power node for rendering. That is, computing power nodes are allocated according to the dependency relationships between rendering units, thereby reducing the interaction between computing power nodes and reducing the rendering delay.

Description

Rendering method, device and system
This application claims priority to Chinese Patent Application No. 202211574911.6, entitled "Rendering method, device and system", filed with the China National Intellectual Property Administration on December 8, 2022, the entire contents of which are incorporated herein by reference.
SUMMARY
In a first aspect, the present application provides a rendering method, including: obtaining information about a rendering task, where the rendering task is divided into multiple rendering units, and the information about the rendering task includes a dependency relationship between at least two rendering units among the multiple rendering units; and determining, based on the dependency relationship, to allocate one or more rendering units among the multiple rendering units to a first computing power node for rendering.
With the above method, one or more rendering units are allocated to the first computing power node according to the dependency relationship; that is, the first computing power node is determined to be responsible for rendering the one or more rendering units. Since the dependency relationships between rendering units are considered in the allocation process, compared with a solution that does not consider dependencies, the above method can reduce the interaction between the computing power nodes responsible for rendering, thereby reducing the interaction overhead between computing power nodes and reducing the rendering delay.
In a possible design, the dependency relationship indicates that a first rendering unit among the multiple rendering units depends on a second rendering unit among the multiple rendering units, and it is determined to allocate both the first rendering unit and the second rendering unit to the first computing power node. In this design, multiple rendering units with dependency relationships are allocated to the same computing power node, avoiding the interaction between computing power nodes caused by allocating them to different nodes, thereby reducing the interaction between computing power nodes.
In a possible design, for a third rendering unit among the multiple rendering units, the information about the rendering task further includes the computing power required to render the third rendering unit, and the method further includes: receiving the computing power resource size of one or more computing power nodes from a computing power management function module, where the one or more computing power nodes include the first computing power node; and determining, based on the computing power required to render the third rendering unit and the computing power resource size of the first computing power node, to allocate the third rendering unit to the first computing power node. In this design, when allocating computing power nodes, the computing power required by a rendering unit and the computing power resources of the nodes are considered, and a node that can satisfy the computing power required by the rendering unit is selected, thereby meeting the computing power requirement of the rendering task.
In a possible design, for a third rendering unit among the multiple rendering units, the information about the rendering task further includes a task type indicating rendering, and the method further includes: receiving the computing power type of one or more computing power nodes from the computing power management function module, where the one or more computing power nodes include the first computing power node, and the computing power type of the first computing power node includes a graphics processing unit; and determining, based on the task type and the computing power type of the first computing power node, to allocate the third rendering unit to the first computing power node. In this design, when allocating computing power nodes, the task type being rendering and the computing power type a node can provide are considered, and a node that can support rendering (that is, whose computing power type includes a graphics processing unit) is selected, so that the rendering task can be carried out.
In a possible design, when the method is executed by an access network device, the computing power management function module is a functional module in the access network device, or a network element in the core network connected to the access network device; when the method is executed by a head-mounted display (HMD) device, the computing power management function module is a functional module in the access network device connected to the HMD, or a network element in the core network connected to that access network device. In this design, since the computing power management function module located on the access network side or the core network side manages the computing power nodes, the number of computing power nodes it can manage can exceed the number the HMD can manage, so that more computing power can be utilized.
In a possible design, for a third rendering unit among the multiple rendering units, the information about the rendering task further includes the data volume of the third rendering unit, and the method further includes: obtaining communication status information of the first computing power node; and determining, based on the data volume of the third rendering unit and the communication status information of the first computing power node, to allocate the third rendering unit to the first computing power node. In this design, when allocating computing power nodes, the data volume of a rendering unit and the communication status information of the nodes are considered, and a node that can satisfy the data transmission requirement of the rendering unit is selected, thereby meeting the transmission requirement during the processing of the rendering task.
In the above design, the information about the rendering task further includes the delay requirement of the rendering task, and it is determined, based on the data volume of the third rendering unit, the communication status information of the first computing power node, and the delay requirement of the rendering task, to allocate the third rendering unit to the first computing power node. That is, when allocating computing power nodes, the delay requirement of the rendering task is also considered, and a node that can support the transmission of the data volume of the rendering unit within the delay requirement is selected, thereby meeting the delay requirement of the rendering task.
In a possible design, when the method is executed by an access network device, the information about the rendering task is obtained from the HMD; and the method further includes: sending the identification information of the one or more rendering units and the identification information of the first computing power node to the HMD. In this design, the access network device is responsible for allocating computing power nodes and sends the allocation result to the HMD.
In the above design, rendering subtask information is received from the HMD, where the rendering subtask information is used to indicate rendering a subtask including the one or more rendering units, and the rendering subtask information is forwarded to the first computing power node. In this design, the access network device forwards the rendering subtask information from the HMD to the first computing power node, so that the first computing power node can render the one or more rendering units based on the rendering subtask information.
In a possible design, when the method is executed by the HMD, the method further includes: sending rendering subtask information to the first computing power node, where the rendering subtask information is used to indicate rendering a subtask including the one or more rendering units. In this design, the HMD is responsible for allocating computing power nodes and determining the rendering subtask information, and sends the rendering subtask information to the first computing power node, so that the first computing power node can render the one or more rendering units based on this information.
In a second aspect, the present application provides a device (in other words, a device for allocating rendering tasks). The device has the functions of implementing the above first aspect; for example, the device includes modules, units, or means corresponding to the operations involved in the first aspect, and the modules, units, or means can be implemented by software, by hardware, or by hardware executing corresponding software.
In a possible design, the device includes a processing module and a communication module. The communication module can be used to send and receive signals to implement communication between the device and other devices; the processing module can be used to perform some internal operations of the device. The functions performed by the processing module and the communication module can correspond to the operations involved in the first aspect.
In a possible design, the device includes a processor, and the processor can be coupled with a memory. The memory can store the computer programs or instructions necessary to implement the functions involved in the first aspect. The processor can execute the computer programs or instructions stored in the memory, and when these are executed, the device implements the method in any possible design or implementation of the first aspect.
In a possible design, the device includes a processor and a memory. The memory can store the computer programs or instructions necessary to implement the functions involved in the first aspect. The processor can execute the computer programs or instructions stored in the memory, and when these are executed, the device implements the method in any possible design or implementation of the first aspect.
In a possible design, the device includes a processor and an interface circuit, where the processor is used to communicate with other devices through the interface circuit and to execute the method in any possible design or implementation of the first aspect.
It can be understood that, in the above second aspect, the processor can be implemented by hardware or by software. When implemented by hardware, the processor can be a logic circuit, an integrated circuit, or the like; when implemented by software, the processor can be a general-purpose processor implemented by reading software code stored in a memory. In addition, there can be one or more processors and one or more memories. The memory can be integrated with the processor or provided separately from the processor. In a specific implementation process, the memory and the processor can be integrated on the same chip or provided separately on different chips; the embodiments of the present application do not limit the type of the memory or the way the memory and the processor are arranged.
In a third aspect, the present application provides a system (in other words, a rendering system), which can include the device in the second aspect and other devices that interact with the device (such as an access network device, a computing power node, a computing power management function module, or an HMD).
In a fourth aspect, the present application provides a computer-readable storage medium storing computer-readable instructions that, when read and executed by a computer, cause the computer to execute the method in any possible design of the first aspect.
In a fifth aspect, the present application provides a computer program product that, when read and executed by a computer, causes the computer to execute the method in any possible design of the first aspect.
In a sixth aspect, the present application provides a chip, where the chip includes a processor coupled with a memory, for reading and executing a software program stored in the memory to implement the method in any possible design of the first aspect.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of a distributed collaborative rendering architecture provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of dependency relationships between rendering units in the rendering process provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a rendering method provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of another rendering method provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a device provided by an embodiment of the present application.
DETAILED DESCRIPTION
The embodiments of the present application relate to a distributed collaborative rendering architecture for VR services. As shown in FIG. 1, an embodiment of the present application proposes a distributed collaborative rendering architecture; the devices included in this architecture are introduced below.
1. A head-mounted display (HMD) device 101 (hereinafter referred to as the HMD, or as VR glasses) is responsible for sensing the user's posture or the user's location, or for receiving the user's instructions, so as to determine the content that needs to be rendered. The HMD 101 can complete tasks such as reporting user information, foreground extraction, allocating rendering tasks for foreground content, and merging code streams.
2. Surrounding computing power nodes (CPN) (or edge computing power nodes) 102 can complete the feedback of channel state information (CSI) between the CPN 102 and other devices or nodes (for example, other CPNs, the HMD, or access network devices), the registration of computing power resources, and the feedback of computing power status, and can also assist the VR glasses in completing decoding and rendering tasks. A CPN can be a base-station computing board, a dedicated computing board, a personal computer (PC) in a home network, a mobile phone, a smart watch, customer premise equipment (CPE), an intelligent network gateway (ING), a sunk content delivery network (CDN) device, a mobile edge computing (MEC) device, or a terminal such as another VR device. It can be understood that a CPN is closer to the HMD than a cloud server, so its delay is also smaller.
CPNs can be connected through sidelink, in which case CPNs communicate directly without relaying through a base station; or CPNs can be connected through the 5G system (5GS), in which case communication between CPNs needs to be relayed by a base station. A CPN and the HMD can communicate through a wireless port between the terminal and the base station, such as the Uu port, in which case the communication between the CPN and the HMD needs to be relayed by the base station; or the CPN and the HMD can communicate through sidelink, in which case they can communicate directly without relaying through a base station. Sidelink is an interface for direct terminal-to-terminal communication in proximity-based services (ProSe).
Both the CPN and the HMD described above can be terminal devices (referred to as terminals for short). A terminal, also called user equipment (UE), a mobile station (MS), a mobile terminal (MT), and so on, is a device that provides voice and/or data connectivity to users, and also includes devices capable of sidelink communication, such as vehicle-mounted terminals or handheld terminals capable of vehicle-to-everything (V2X) communication.
3. The access network device 103 is a radio access network (RAN) node (or device) that connects terminals to the wireless network, and can also be called a base station, for example an evolved NodeB (eNodeB), a next generation NodeB (gNB) in the fifth generation (5G) mobile communication system, and other types of base stations. The base station can adopt an architecture in which the central unit (CU) and the distributed unit (DU) are separated, that is, an architecture in which the base station consists of a CU and a DU.
4. The computing management function (CMF) device 104 is a functional module that can be deployed on the base station side, or a network element deployed in the core network. Its functions are mainly to manage the registration information of computing power nodes, or to provide the registration information of computing power nodes to other devices upon their request.
In this distributed architecture, the HMD 101 first divides the rendering task into several rendering units. Then, according to the status information of the CPNs, the HMD 101 or the access network device 103 allocates subtasks, each including the rendering of one or more rendering units, to different CPNs to collaboratively complete the rendering. The status information of a CPN can include channel information between the CPN and other devices or nodes (for example, other CPNs, the HMD, or access network devices), such as the signal-to-noise ratio (SNR), as well as the computing power information of the CPN (for example, GPU computing power). After the CPNs participating in the rendering complete their rendering subtasks, they return the rendered content to the HMD 101. The HMD 101 merges the received results of the rendering subtasks for display.
Compared with a local rendering architecture, in which the rendering task is completed on a local all-in-one VR machine or on a server connected to the VR device by wire, or a cloud rendering architecture, in which video frames are rendered in real time in the cloud, the advantage of the distributed collaborative rendering proposed in the embodiments of the present application is that multiple CPNs are used to build computing power up from small to large, which both meets the high computing power demand of VR services and reduces the delay, thereby improving the experience and saving energy.
It should be noted that a rendering unit is the smallest unit into which a rendering task is divided, for example an object or a tile as described below.
It can be understood that, in this distributed architecture, the CMF is responsible for managing the CPNs. Compared with the HMD being responsible for CPN management, the CMF can manage CPNs over a larger range and can manage a larger number of CPNs.
In the distributed collaborative rendering architecture, the ways of dividing a rendering task fall into the following three types:
1. The frame-based segmentation model, which is simple to implement and involves a small amount of interaction data;
2. The tile-based segmentation model, which is easy to implement, involves a large amount of interaction data, and can increase the rendering speed of a single frame;
3. The object-based segmentation model, which is complex to implement, involves a medium amount of interaction data, and can significantly increase the rendering speed of a single frame.
It should be noted that the embodiments of the present application list the above three ways of dividing rendering tasks, without excluding that the embodiments of the present application may be applicable to other ways of dividing rendering tasks.
Based on the above segmentation models, a rendering task can be divided into multiple rendering units, and there may be dependency relationships between the rendering units. A dependency relationship between rendering units means that the rendering of one rendering unit depends on the rendering result of another rendering unit; that is, completing the rendering of one rendering unit requires the rendering result of another rendering unit as input (in other words, as a premise).
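As an illustration of this definition, the following Python sketch (an aid added in editing, not part of the patent text) derives a valid rendering order from such dependency relationships by topological sorting:

```python
from collections import defaultdict, deque

def render_order(units, depends_on):
    """Return an order in which every unit is rendered only after all
    units it depends on, or raise if the dependencies form a cycle.

    units:      iterable of unit ids, e.g. ["object1", ..., "object4"]
    depends_on: dict mapping a unit to the units whose rendering
                results it needs as input
    """
    indegree = {u: 0 for u in units}
    dependents = defaultdict(list)          # depended-upon -> dependents
    for unit, prereqs in depends_on.items():
        for p in prereqs:
            indegree[unit] += 1
            dependents[p].append(unit)

    queue = deque(u for u, d in indegree.items() if d == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in dependents[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(indegree):
        raise ValueError("circular dependency between rendering units")
    return order

# Dependencies of FIG. 2 / Table 2: object1 needs object2 and object3,
# and object3 needs object4.
print(render_order(
    ["object1", "object2", "object3", "object4"],
    {"object1": ["object2", "object3"], "object3": ["object4"]},
))  # ['object2', 'object4', 'object3', 'object1']
```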
The following takes the object-based segmentation model as an example to explain the problems arising from dependencies between objects during rendering. If the rendering effect of one object may affect another object, the two objects can be considered to have a dependency relationship. As shown in FIG. 2, suppose a video frame has a total of 4 objects to be rendered, numbered 1, 2, 3, and 4, where object1 overlaps with object2; for example, the rendering of object1 depends on the rendering result of object2, so object1 and object2 can be considered to have a dependency relationship. Therefore, during rendering, the CPNs rendering objects with dependency relationships need to exchange the rendering results and configuration information of the depended-upon objects; for example, the CPN rendering object2 needs to send the rendering result and configuration information of object2 to the CPN rendering object1 during the rendering process. When there are many objects and the dependency relationships are complex, the overhead generated by such interaction can be large and may reach the GB level.
It can be understood that the tile-based segmentation model also has problems arising from dependencies between tiles during rendering. For example, a video frame of 100*100 pixels can be divided into 100 small blocks of 10*10 each, and each small block is a tile. There may be dependencies in the rendering between tiles; specifically, a tile may need other tiles to provide corresponding information (such as the objects and positions contained in those other tiles) during rendering before its own rendering can be completed.
Distributed collaborative rendering can use multiple CPNs that are closer to the head-mounted display device than the cloud to assist in completing rendering tasks, thereby solving both the computing power limitation in local rendering caused by the limited computing resources of the rendering device and the delay problem in cloud rendering caused by the long distance between the cloud and the HMD. However, under the distributed collaborative rendering architecture, there may be dependencies between different rendering subtasks, so different CPNs may need to exchange the context data of rendering subtasks during rendering (such as the rendering results and configuration information of the above objects, or the corresponding information provided by the above other tiles). Such context data may reach the GB level, which has a significant impact on the air interface capacity and causes the rendering speed to decrease.
In view of the above problems, an embodiment of the present application provides a rendering method. In this method, a rendering task is divided into multiple rendering units, and the multiple rendering units include at least two rendering units with a dependency relationship. When allocating the multiple rendering units to computing power nodes (such as CPNs) for rendering, the dependency relationship of the at least two rendering units is considered, thereby reducing the exchange of rendering subtask context data between computing power nodes, speeding up rendering, and reducing the rendering delay.
In one implementation, rendering units with dependency relationships are allocated to the same computing power node. For example, if a first rendering unit among the multiple rendering units depends on a second rendering unit, the first rendering unit and the second rendering unit are allocated to a first computing power node for rendering. In this way, compared with a solution that allocates the first rendering unit to the first computing power node and the second rendering unit to a second computing power node, this implementation avoids the exchange of the context data of the second rendering unit between the first and second computing power nodes.
In one implementation, when allocating rendering units to computing power nodes, the computing power required to render a rendering unit and the computing power resources a node can provide are also considered, and a node whose computing power resources can satisfy the computing power required to render the rendering unit is selected. For example, for a third rendering unit among the multiple rendering units, when the computing power resources of the first computing power node can satisfy the computing power required to render the third rendering unit (for example, the computing power resources of the first computing power node are greater than or equal to the computing power required by the third rendering unit), the third rendering unit is allocated to the first computing power node.
In one implementation, when allocating rendering units to computing power nodes, it is also considered that the current task is rendering and that the computing power type a node can provide supports rendering. For example, for the third rendering unit among the multiple rendering units, when the computing power type of the first computing power node includes a graphics processing unit, the third rendering unit is allocated to the first computing power node.
In one implementation, when allocating rendering units to computing power nodes, the data volume corresponding to a rendering unit and the communication status of a node are also considered, and a node whose communication status supports the transmission of the data volume corresponding to the rendering unit is selected. For example, for the third rendering unit among the multiple rendering units, when the communication status of the first computing power node supports the transmission of the data volume of the third rendering unit, the third rendering unit is allocated to the first computing power node.
In the above implementation, when allocating rendering units to computing power nodes, the delay requirement range of the rendering task is also considered, and a node whose communication status supports the transmission of the data volume corresponding to the rendering unit within the delay requirement range is selected. For example, for the third rendering unit among the multiple rendering units, when the communication status of the first computing power node supports the transmission of the data volume of the third rendering unit within the delay requirement, the third rendering unit is allocated to the first computing power node.
The computing power resource size and computing power type that a computing power node can provide, as described above, can come from a computing power management function module (such as the CMF). When the entity making the scheduling decisions for the rendering task (that is, allocating rendering units) is an access network device, the computing power management function module can be a functional module in that access network device, or a network element in the core network connected to that access network device; when the entity making the scheduling decisions is the HMD, the computing power management function module can be a functional module in the access network device connected to the HMD, or a network element in the core network connected to that access network device. The above communication status information of a computing power node can come from the access network device or from the computing power node corresponding to that information.
It should be noted that the third rendering unit can be the same rendering unit as the first rendering unit or the second rendering unit, or a different rendering unit, which is not limited in this application.
To implement the above method, the information about the rendering task needs to be obtained. The information about the rendering task includes the dependency relationship of the at least two rendering units, and can optionally further include any one or more of the numbers of the multiple rendering units, the computing power required to render any rendering unit, a task type indicating that the current task is rendering, the data volume of any rendering unit, or the delay requirement of the rendering task. Obtaining the information about the rendering task can be receiving it from the head-mounted display device or obtaining local rendering task information.
To implement the rendering, the rendering subtask information including the rendering units allocated to a computing power node also needs to be sent to that computing power node. When the entity making the scheduling decisions for the rendering task is an access network device, the access network device can send the identification information of the rendering units allocated to the computing power node and the identification information of that node to the HMD, and the HMD determines the rendering subtask information corresponding to that node and then forwards it to the node. The identification information of a rendering unit can be the number or identifier (ID) of the rendering unit, that is, information used to identify a rendering unit; the identification information of a computing power node can be the address or identifier of the node, that is, information used to identify a computing power node.
Alternatively, when the entity making the scheduling decisions for the rendering task is the HMD, the HMD determines the rendering subtask information corresponding to the computing power node and then sends it to that node.
The HMD forwarding the rendering subtask information to the computing power node can be implemented through the access network device.
It can be understood that, in the embodiments of the present application, multiple computing power nodes can render the subtasks of the rendering task. Although the embodiments of the present application consider allocating rendering units with dependency relationships to the same computing power node, it is not excluded that computing power nodes will exchange the context data of rendering units with dependency relationships. For example, due to limits on computing power resources or communication status, a fourth rendering unit and a fifth rendering unit that have a dependency relationship are allocated to a second computing power node and a third computing power node respectively. However, compared with an allocation method that does not consider dependencies, the method of the embodiments of the present application can reduce the interaction between rendering nodes and the interaction overhead between computing power nodes as much as possible, thereby reducing the rendering delay.
As shown in FIG. 3, an embodiment of the present application provides a rendering method in which the access network device (the gNB is taken as an example below) is responsible for the scheduling decisions of the rendering task, and the HMD and the CMF respectively provide the gNB with service information and the computing power information of the CPNs to assist the gNB in allocating the rendering task. The method includes the following steps:
S101: The gNB requests the computing power resource table from the CMF, and the CMF receives the request for the computing power resource table from the gNB.
The computing power resource table can be used for scheduling the rendering tasks of users within the service range of the gNB.
In one implementation, the gNB can periodically request the computing power resource table from the CMF, or the gNB can request the computing power resource table from the CMF after receiving the rendering task information (that is, after executing S103). The gNB can send a request message for the computing power resource table to the CMF, and the message can include information indicating a request for the CPN addresses, the computing power types (optional; for example, not needed when all CPNs support a certain computing power type), and the computing power resource sizes.
S102: The CMF delivers the computing power resource table to the gNB, and the gNB receives the computing power resource table from the CMF.
The computing power resource table can include information such as CPN addresses, computing power types (optional; for example, when all CPNs support a certain computing power type, the table need not include it), and computing power resource sizes. A CPN address can be an IP address, or a RAN-side identity; the RAN-side identity can be, for example, a radio network temporary identifier (RNTI), which is not limited here. The computing power type includes a central processing unit (CPU) and/or a graphics processing unit (GPU). The central processing unit controls the operation of the computer and is mainly responsible for floating-point and integer operations; the graphics processing unit is an auxiliary processor that handles graphics-related computation in the computer and presents data better on the display, and is mainly responsible for floating-point operations. The computing power resource size can be measured in operations per second (OPS) or floating-point operations per second (FLOPS), and a larger value indicates larger computing power resources.
In one example, the computing power resource table delivered by the CMF is shown in Table 1 below.
Table 1
It can be understood that the CMF is responsible for creating and maintaining the information in the computing power resource table, and can be a functional entity that provides the computing power resource table. The CMF can be located at the gNB, for example as a functional module in the gNB, or located at the core network, for example as a network element in the core network. Specifically, when any CPN has computing power and wishes to share it, it can submit a registration application to the CMF node and report the computing power resource information in the computing power resource table. When the computing power resource information of a CPN changes, the CPN can send a new message to the CMF to update the computing power resource information saved by the CMF.
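As a rough picture of this registration and update flow, the following Python sketch keeps a computing power resource table keyed by CPN address; the class and field names are assumptions of this edit rather than anything the patent defines:

```python
from dataclasses import dataclass

@dataclass
class CpnRecord:
    address: str            # IP address or RAN-side identity such as an RNTI
    compute_type: set       # e.g. {"CPU", "GPU"}
    flops: float            # computing power resource size, in FLOPS

class Cmf:
    """Minimal stand-in for the computing power management function."""

    def __init__(self):
        self._table = {}    # CPN address -> CpnRecord

    def register(self, record: CpnRecord):
        # A CPN that wants to share computing power registers itself.
        self._table[record.address] = record

    def update(self, address, **changes):
        # A CPN reports changed resource information; the CMF refreshes it.
        record = self._table[address]
        for field_name, value in changes.items():
            setattr(record, field_name, value)

    def resource_table(self):
        # Delivered to the gNB (or HMD) on request or periodically.
        return list(self._table.values())

cmf = Cmf()
cmf.register(CpnRecord("addr2", {"GPU"}, 50e9))   # cf. CPN2, 50G FLOPS
cmf.register(CpnRecord("addr3", {"GPU"}, 10e9))
cmf.update("addr3", flops=8e9)                    # CPN3 reports a change
print(cmf.resource_table())
```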
It should be noted that the gNB can obtain the computing power resource table from the CMF in the following two ways: first, the CMF delivers the computing power resource table periodically, and the period can be configured by the core network or the base station; second, the gNB executes S101, that is, actively requests the computing power resource table from the CMF. In other words, S101 is an optional step and need not be executed under the first way.
S103: The HMD sends the rendering task information to the gNB, and the gNB receives the rendering task information from the HMD.
The rendering task information can include any one or more of the task type, the delay requirement, the number of divisions, the numbers of the rendering units, the computing power for rendering each rendering unit, the data volume of each rendering unit, or the dependency relationships between rendering units. The task type indicates that the current task is rendering; the delay requirement indicates the delay requirement of the current task; the number of divisions is the number of rendering units into which the current rendering task is divided; the computing power for rendering a rendering unit is the computing power required to render that rendering unit; the data volume of a rendering unit is the data volume corresponding to that rendering unit, which can be understood as the data volume involved in transmitting it. The rendering task information can be sent from the HMD to the gNB through the media access control (MAC) layer, the radio resource control (RRC) layer, the packet data convergence protocol (PDCP) layer, or an uplink control information (UCI) message. It should be noted that the embodiments of the present application take the object as the example of a rendering unit, without excluding applicability to tiles.
In one example, the task type indicates that the current task is rendering; the delay requirement indicates 10 ms, that is, the current rendering task is required to be completed within 10 ms. If the object-based task division is adopted and a certain frame has 5 objects to be rendered, the number of divisions is indicated as 5; if the above 5 objects are each numbered, the rendering unit numbers can correspond to 1-5. The computing power size indicates the computing power required to render each of the above objects; the data volume indicates the data volume corresponding to each of the above objects (that is, the data volume of transmitting that object between nodes). If object1 has dependency relationships with object2 and object3, and object3 and object4 have a dependency relationship, the dependency relationships can indicate that the rendering of object1 depends on the rendering of object2 and object3, and that the rendering of object3 depends on the rendering of object4. Part of the rendering task information in this example can be shown in Table 2 below.
Table 2
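The rendering task information of this example can be pictured with the following Python sketch; the field names and the concrete compute and data figures are illustrative assumptions of this edit, chosen to match the totals used later in S104 (33G FLOPS and 66 Mbits for object1-object4):

```python
from dataclasses import dataclass, field

@dataclass
class RenderingTaskInfo:
    task_type: str                  # "rendering"
    delay_requirement_ms: float     # e.g. 10 ms for the whole task
    num_units: int                  # number of divisions
    required_flops: dict            # unit number -> compute needed
    data_volume_bits: dict          # unit number -> data volume
    depends_on: dict = field(default_factory=dict)  # unit -> prerequisites

# The example of S103: 5 objects, object1 depends on object2 and object3,
# and object3 depends on object4.
task = RenderingTaskInfo(
    task_type="rendering",
    delay_requirement_ms=10.0,
    num_units=5,
    required_flops={1: 10e9, 2: 8e9, 3: 9e9, 4: 6e9, 5: 5e9},
    data_volume_bits={1: 20e6, 2: 16e6, 3: 18e6, 4: 12e6, 5: 10e6},
    depends_on={1: [2, 3], 3: [4]},
)
assert sum(task.required_flops[u] for u in (1, 2, 3, 4)) == 33e9  # cf. S104
```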
It should be noted that there is no timing restriction between S101-S102 and S103; that is, S101-S102 can be executed first and S103 later, or S103 first and S101-S102 later.
S104: The gNB selects CPNs according to the computing power resource table, the rendering task information, and the communication status information of the CPNs.
The communication status information of a CPN can include the channel information of the CPN, which characterizes the real-time transmission rate the CPN can support. The channel information of a CPN can be, for example, the channel state information (CSI) between the CPN and other devices or nodes, such as other CPNs, the gNB, or the HMD. It can be understood that a CPN is one of the communication service objects of the gNB, and the gNB can obtain the communication status information of a CPN from that CPN.
In one example, the communication status information of the CPNs obtained by the gNB can be shown in Table 3 below.
Table 3
The gNB selecting CPNs according to the computing power resource table, the rendering task information, and the communication status information of the CPNs can include any one or more of the following:
(1) The gNB selects CPNs whose computing power type is GPU according to the task type in the rendering task information.
For example, the gNB selects CPN2-CPN4 in Table 1.
(2) The gNB determines, according to the dependency relationships in the rendering task information, to allocate multiple rendering units with dependency relationships to the same CPN.
For example, based on the dependency relationships indicating that the rendering of object1 depends on the rendering of object2 and object3, and that the rendering of object3 depends on the rendering of object4, the gNB determines to allocate object1, object2, object3, and object4 to the same CPN; or to allocate object1, object2, and object3 to one CPN and object4 to another CPN; or to allocate object1 and object2 to one CPN and object3 and object4 to another CPN. The specific division can depend on the computing power required by each rendering unit and the computing power each CPN can provide, or on the data size of transmitting each rendering unit and the communication status of each CPN.
It can be understood that, compared with the gNB allocating CPNs without considering dependencies, allocating CPNs according to dependencies can reduce the number of interactions between CPNs or the amount of data exchanged between CPNs, thereby reducing the delay and increasing the rendering speed. For example, suppose the gNB allocates object1 to CPN2, object2 and object3 to CPN3, and object4 and object5 to CPN4. To complete the rendering of object1-object4, after CPN4 completes the rendering of object4 it needs to send the rendering result of object4 to CPN3; after CPN3 completes the rendering of object3 based on the rendering result of object4 and completes the rendering of object2, it sends the rendering results of object2 and object3 to CPN2. It can be seen that CPN4 and CPN3 exchanged the rendering result of object4, and CPN3 and CPN2 exchanged the rendering results of object2 and object3. If the gNB instead considers the dependencies, for example allocating object1, object2, and object3 to CPN2 and object4 to CPN3, the rendering of object1-object4 can be completed with only one exchange of the rendering result of object4 between CPN2 and CPN3.
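The saving described here can be checked mechanically. The following Python sketch (the helper name is an assumption of this edit) counts how many depended-upon rendering results must cross CPN boundaries under the two allocations compared above:

```python
def cross_cpn_exchanges(assignment, depends_on):
    """Count depended-upon rendering results that must cross CPN boundaries.

    assignment: unit -> CPN name
    depends_on: unit -> list of units whose results it needs
    """
    return sum(
        1
        for unit, prereqs in depends_on.items()
        for p in prereqs
        if assignment[unit] != assignment[p]
    )

deps = {"object1": ["object2", "object3"], "object3": ["object4"]}

# Dependency-blind allocation from the example above: 3 exchanges.
blind = {"object1": "CPN2", "object2": "CPN3", "object3": "CPN3",
         "object4": "CPN4", "object5": "CPN4"}
# Dependency-aware allocation: only object4's result crosses nodes.
aware = {"object1": "CPN2", "object2": "CPN2", "object3": "CPN2",
         "object4": "CPN3", "object5": "CPN4"}

print(cross_cpn_exchanges(blind, deps))  # 3
print(cross_cpn_exchanges(aware, deps))  # 1
```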
(3) The gNB determines, according to the computing power for rendering each rendering unit in the rendering task information and the computing power resource size each CPN can provide in the computing power resource table, the CPNs that can satisfy the computing power required by a rendering unit (that is, CPNs whose computing power resources are greater than or equal to the computing power required by that rendering unit).
If the gNB confirms, in the way described in (2), that multiple rendering units with dependency relationships are allocated to the same CPN, it can consider the computing power required to render those rendering units and the CPNs able to provide that computing power. For example, the total computing power required by object1-object4 is 33G FLOPS, and CPN2 with a resource size of 50G FLOPS can satisfy the computing power requirement of object1-object4. The computing power required by object5 is 5G FLOPS, and any of CPN2-CPN4 can satisfy it.
(4) The gNB determines, according to the delay requirement in the rendering task information, the data volume corresponding to a rendering unit, and the communication status information of the CPNs, the CPNs that can support the transmission of the data volume of a rendering unit within the delay requirement (that is, CPNs whose communication status can support transmitting the data volume of that rendering unit within the delay requirement).
If the gNB confirms, in the way described in (2), that multiple rendering units with dependency relationships are allocated to the same CPN, it can consider the data volume of transmitting those rendering units, the delay requirement, and the communication status information of the CPNs, to determine the CPNs that can support transmitting the data of those rendering units within the delay requirement. For example, based on the 10 ms delay requirement and the 66 Mbits data volume of transmitting object1-object4, it is determined that CPN2 with a CSI of 20 dB can support the 66 Mbits data transmission within 10 ms, so object1-object4 are allocated to CPN2.
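As a simplified model of this check, the sketch below tests whether a link can carry a unit's data within the delay budget; the SNR-to-rate mapping is a toy Shannon-capacity assumption of this edit, whereas a real system would use rates derived from the reported CSI:

```python
import math

def link_rate_bps(snr_db, bandwidth_hz):
    # Illustrative Shannon-capacity estimate of the sustainable link rate.
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

def tx_time_ms(data_bits, snr_db, bandwidth_hz=1e9):
    return data_bits / link_rate_bps(snr_db, bandwidth_hz) * 1e3

# 66 Mbits for object1-object4 over CPN2's 20 dB link:
t = tx_time_ms(66e6, snr_db=20.0)
print(f"{t:.1f} ms within a 10 ms budget: {t <= 10.0}")
```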
It can be understood that, in the embodiments of the present application, the principle for the gNB to select CPNs can be that the delay generated by rendering and transmission is less than the delay requirement of the rendering task, and that the data exchange between nodes is reduced as much as possible. However, if the gNB determines that the CPNs in the computing power resource table cannot support the requirements of the rendering task, the gNB can choose the cloud for rendering.
Specifically, when the computing power provided by the CPNs in the computing power resource table can meet the requirements of the rendering task, the rendering task is allocated to the CPNs; when it cannot, the rendering units whose demands exceed the computing power each CPN can provide can be allocated to the cloud, or the entire rendering task can be handed over directly to the cloud for rendering.
When allocating CPNs, the gNB can aggregate rendering units with dependency relationships and treat them as a whole; for example, object1-object4 are aggregated and regarded as one large object, and the computing power corresponding to this large object is 33G FLOPS. The objects without dependency relationships are then sorted together with it by the size of the computing power demand; for example, the order can be: the large object -> object5. Then, according to the computing power each CPN can provide in the computing power resource table, it is determined whether there are CPNs that can satisfy the computing power demands of the sorted objects; for example, CPN2 satisfying the 33G FLOPS demand of the large object and CPN3 satisfying the 5G FLOPS demand of object5 are determined.
If no CPN can satisfy the computing power demand of the aggregated object, the aggregated object can be split (or it can be handed over directly to the cloud for rendering). For example, if no CPN satisfies the 33G FLOPS demand of the large object (suppose the computing power of CPN2 is 30G FLOPS), the computing power demand of object1-object4 can be split, for example into object1-object3 and object4, and then sorted together with the independent object5, that is, the order is object1-object3 -> object4 -> object5; CPNs are then determined according to the computing power the CPNs in the computing power resource table can provide.
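Putting the aggregation, sorting, and splitting fallback together, the following greedy Python sketch is one possible reading of this allocation principle, not the patent's mandated algorithm; with the example figures it reproduces the object1-object3 / object4 / object5 split described above:

```python
def assign_groups(groups, cpn_flops):
    """Greedily map dependency groups to CPNs, splitting when needed.

    groups:    list of (group_units, flops_per_unit dict)
    cpn_flops: dict of CPN name -> available FLOPS
    Returns (assignment dict, leftover units for the cloud).
    """
    free = dict(cpn_flops)
    assignment, cloud = {}, []
    # Largest compute demand first, mirroring the sorting in the text.
    todo = sorted(groups, key=lambda g: sum(g[1].values()), reverse=True)
    while todo:
        units, flops = todo.pop(0)
        need = sum(flops.values())
        fit = next((c for c, f in free.items() if f >= need), None)
        if fit is not None:
            assignment[fit] = assignment.get(fit, []) + list(units)
            free[fit] -= need
        elif len(units) > 1:
            # No CPN fits the whole group: split off the last unit and retry.
            head, tail = units[:-1], units[-1:]
            todo.append((head, {u: flops[u] for u in head}))
            todo.append((tail, {u: flops[u] for u in tail}))
            todo.sort(key=lambda g: sum(g[1].values()), reverse=True)
        else:
            cloud.extend(units)   # a single unit no CPN can render
    return assignment, cloud

flops = {1: 10e9, 2: 8e9, 3: 9e9, 4: 6e9, 5: 5e9}
groups = [([1, 2, 3, 4], {u: flops[u] for u in (1, 2, 3, 4)}),
          ([5], {5: flops[5]})]
print(assign_groups(groups, {"CPN2": 30e9, "CPN3": 10e9, "CPN4": 10e9}))
# ({'CPN2': [1, 2, 3], 'CPN3': [4], 'CPN4': [5]}, [])
```

Note that splitting a dependency group reintroduces one cross-node exchange, which is exactly the trade-off the surrounding text acknowledges.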
It should be noted that, in addition to considering dependencies, the gNB can also reduce the rendering delay by allocating rendering units with larger computing power demands to CPNs with larger computing power resources.
S105: The gNB sends the combination of rendering units and the corresponding CPN information to the HMD, and the HMD receives the combination of rendering units and the corresponding CPN information from the gNB.
The combination of rendering units can include the numbers of the one or more rendering units allocated to a certain CPN, and the corresponding CPN information can be the address of that CPN. For example, if object1-object4 are allocated to CPN2 and object5 to CPN3, the combination of rendering units and the corresponding CPN information can be shown in Table 4 below. If object1-object3 are allocated to CPN2, object4 to CPN3, and object5 to CPN4, the combination of rendering units and the corresponding CPN information can be shown in Table 5 below.
Table 4
Table 5
S106: The HMD determines the rendering subtask information according to the combination of rendering units and the corresponding CPN information.
The rendering subtask information includes the numbers of the rendering units and the dependency relationships of the rendering units (optional; for example, if there is no dependency relationship between the rendering units, the rendering subtask information need not include dependency relationships). For example, if object1-object4 are allocated to CPN2 and object5 to CPN3, the rendering subtask information of CPN2 can include the information shown in Table 6 below, and the rendering subtask information of CPN3 can include the information shown in Table 7 below, or the rendering subtask information of CPN3 includes information indicating that the object to be rendered is numbered 5, without including dependency relationships.
Table 6
Table 7
That is, the HMD determines, according to the combination of rendering units and the corresponding CPN information, the rendering units allocated to a certain CPN and the dependency relationships between those rendering units (if any), packages the rendering units and the dependency relationships between them into one subtask, and thereby determines one piece of subtask information.
If a rendering unit allocated to a certain CPN depends on the rendering result of another CPN, the rendering subtask information of that CPN can further include the address of the CPN it depends on. If the rendering result of a certain CPN is depended on by the rendering units of other CPNs, the rendering subtask information of that CPN can further include the addresses of the CPNs that depend on it. For example, if object1-object3 are allocated to CPN2, object4 to CPN3, and object5 to CPN4, the rendering subtask information of CPN2 can include the information shown in Table 8 below, the rendering subtask information of CPN3 can include the information shown in Table 9 below, and the rendering subtask information of CPN4 can include the information shown in Table 10 below.
Table 8
Table 9
Table 10
It can be understood that the rendering subtask information of CPN3 in the above example can include information indicating that the object to be rendered is numbered 4 and that it is depended on by the CPN at address 2, without being limited to including dependency relationships or to the form of a table; similarly, the rendering subtask information of CPN4 can include information indicating that the object to be rendered is numbered 5, without being limited to including dependency relationships or to the form of a table.
In one implementation, the numbers of the rendering units, the dependency relationships of the rendering units (optional), the address of the depended-on CPN (optional), and the addresses of the depending CPNs (optional) can be contained in the header of a packet data convergence protocol (PDCP) data packet. It can be understood that the rendering subtask information can further include the configuration information (or, put differently, the data) used to render a certain rendering unit, and this configuration information can be carried in the PDCP data packet.
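To make the header idea concrete, here is a Python sketch that serializes these subtask fields into a compact binary header; the byte layout is entirely an assumption of this edit, since the patent does not define a concrete PDCP header format:

```python
import struct

def pack_subtask_header(unit_ids, depends_on_pairs,
                        upstream_cpn=0, downstream_cpn=0):
    """Pack subtask metadata into a toy binary header.

    unit_ids:         rendering unit numbers carried by this subtask
    depends_on_pairs: (unit, prerequisite unit) pairs within the subtask
    upstream_cpn:     address id of the CPN this subtask waits on (0 = none)
    downstream_cpn:   address id of the CPN waiting on this subtask (0 = none)
    """
    header = struct.pack("!BBHH", len(unit_ids), len(depends_on_pairs),
                         upstream_cpn, downstream_cpn)
    header += struct.pack(f"!{len(unit_ids)}B", *unit_ids)
    for unit, prereq in depends_on_pairs:
        header += struct.pack("!BB", unit, prereq)
    return header

# CPN2's subtask in the Table 8 example: render objects 1-3, where
# object1 depends on objects 2 and 3, and object3 waits on CPN3 (id 3).
hdr = pack_subtask_header([1, 2, 3], [(1, 2), (1, 3)], upstream_cpn=3)
print(hdr.hex())   # the rendering configuration data would follow this header
```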
S107: The HMD sends the rendering subtask information to the corresponding CPNs, and the corresponding CPNs receive the rendering subtask information from the HMD.
For example, CPN2 receives the rendering subtask information shown in Table 6 and CPN3 receives that shown in Table 7; or CPN2 receives the rendering subtask information shown in Table 8, CPN3 receives that shown in Table 9, and CPN4 receives that shown in Table 10.
In one implementation, the HMD and the CPNs communicate through the base station; for example, the HMD and the CPNs communicate through the Uu port. In this way, the HMD sends each piece of rendering subtask information to the gNB, and the gNB forwards the corresponding rendering subtask information to the corresponding CPN. It can be understood that, in this case, the HMD can also send the CPN information corresponding to the rendering subtask information to the gNB, and from this CPN information the gNB learns to forward the rendering subtask information to the corresponding CPN. The CPN information can be included in the header of the data packet carrying the rendering subtask information (for example, the PDCP header), and the gNB can inspect the header of that data packet to learn which CPN to forward it to. In addition, the CPN information may or may not be included in the rendering subtask information, which is not limited in this application.
In the above implementation, the HMD can send the numbers of the rendering units, the dependency relationships of the rendering units (optional), the address of the depended-on CPN (optional), and the addresses of the depending CPNs (optional) in the rendering subtask information to the gNB through the control plane, and send the configuration information in the rendering subtask information to the gNB through the data plane. The gNB then sends the complete rendering subtask information, including the numbers of the rendering units, the dependency relationships of the rendering units, the address of the depended-on CPN, the addresses of the depending CPNs, and the configuration information, to the corresponding CPN. In this way, the gNB cannot perceive the configuration information. Alternatively, the HMD sends the rendering subtask information to the gNB through the data plane, and the gNB can inspect the header part of the data packet but cannot inspect the configuration information in the data packet.
In another implementation, the HMD and the CPNs can communicate directly without relaying through the base station; for example, the HMD and the CPNs communicate through sidelink.
It should be noted that the embodiments of the present application can include both of the above implementations at the same time; that is, the HMD can communicate directly with some CPNs while communicating with other CPNs through the base station.
S108: After receiving the rendering subtask information, the CPNs render the subtasks.
If a CPN depends on the rendering results of other CPNs, that CPN can render in the order: depended-upon -> independent -> dependent. It can be understood that a depended-upon subtask may affect the rendering of other CPNs, so it needs to complete its rendering first, and the rendering result of the depended-upon subtask is then transmitted to the CPNs that depend on that rendering result. Correspondingly, a CPN that depends on the rendering results of other CPNs also needs to wait for those rendering results before it can complete its rendering. For example, when object1-object3 are allocated to CPN2, object4 to CPN3, and object5 to CPN4, CPN3 needs to send the rendering result of object4 to CPN2, and CPN2 can render in the order object2 -> object3 -> object1.
If a CPN does not depend on the rendering results of other CPNs, it is not limited to rendering in the order depended-upon -> independent -> dependent.
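The depended-upon -> independent -> dependent ordering can be sketched as follows in Python (an assumed scheduling helper, not mandated by the patent); with CPN2's subtask from the example it yields the object2 -> object3 -> object1 order:

```python
def subtask_render_order(units, depends_on, depended_by):
    """Sort one CPN's units: depended-upon -> independent -> dependent.

    depends_on:  units (anywhere) -> units they need
    depended_by: set of units whose results some other unit needs
    """
    def rank(u):
        if u in depended_by:
            return 0            # others wait on this: render first
        if depends_on.get(u):
            return 2            # waits on other results: render last
        return 1                # independent

    return sorted(units, key=rank)

deps = {"object1": ["object2", "object3"], "object3": ["object4"]}
depended = {p for prereqs in deps.values() for p in prereqs}

# CPN2's subtask when it holds object1-object3 and CPN3 renders object4:
print(subtask_render_order(["object1", "object2", "object3"],
                           deps, depended))
# ['object2', 'object3', 'object1']
```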
It can be understood that CPNs can communicate directly, for example through sidelink; or CPNs may be unable to communicate directly, for example when the CPNs are connected through the 5GS, in which case a CPN can forward its rendering result through the gNB to the CPNs that depend on that result. The gNB can store the dependency relationships of the rendering units and the information of the CPN corresponding to each rendering unit, so as to implement the forwarding.
If the gNB can implement the forwarding of rendering results (for example, sending the rendering result of object4 from CPN3 to CPN2), the rendering subtask information need not include the address of the depended-on CPN, or need not include the addresses of the depending CPNs (for example, the rendering subtask information of CPN2 in Table 8 does not include the address of the CPN it depends on, and the rendering subtask information of CPN3 in Table 9 does not include the address of the CPN that depends on it). That is, according to its local dependency relationships of the rendering units and the information of the CPN corresponding to each rendering unit, the gNB can forward the depended-upon rendering results to the CPNs that depend on them, without the rendering subtask information including the address of the depended-on CPN or the addresses of the depending CPNs.
S109: After completing the rendering subtasks, the CPNs send the rendering results to the HMD, and the HMD receives the rendering results from the CPNs.
The rendering result can include information such as the position, lighting, shadows, or textures after the subtask rendering is completed.
In one implementation, a CPN communicates with the HMD through the base station, for example through the Uu port; or a CPN communicates with the HMD directly without relaying through the base station, for example through sidelink; or some CPNs communicate with the HMD directly while the other CPNs communicate with the HMD through the base station.
S110: The HMD completes the merging and display of the rendering results.
In this embodiment of the present application, the gNB allocates the rendering task to the corresponding CPNs according to the communication status information of the CPNs and the computing power resource table of the CPNs, and considers the dependency relationships between the rendering units included in the rendering task during the allocation. Compared with an allocation method that does not consider dependencies, the method of this embodiment can reduce the interaction between CPNs and the interaction overhead between CPNs, thereby reducing the rendering delay.
With regard to the methods described in FIG. 3 and FIG. 4 above, it can be understood that the descriptions above focus on the differences between the methods; for content other than the differences, the methods can refer to each other, and within the same method, different implementations or different examples can also refer to each other.
Obviously, those skilled in the art can make various changes and variations to the present application without departing from the scope of the present application. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to encompass these changes and variations.

Claims (26)

  1. A rendering method, characterized in that the method comprises:
    obtaining information about a rendering task, wherein the rendering task is divided into multiple rendering units, and the information about the rendering task comprises a dependency relationship between at least two rendering units among the multiple rendering units;
    determining, according to the dependency relationship, to allocate one or more rendering units among the multiple rendering units to a first computing power node for rendering.
  2. The method according to claim 1, characterized in that the dependency relationship indicates that a first rendering unit among the multiple rendering units depends on a second rendering unit among the multiple rendering units, and determining, according to the dependency relationship, to allocate one or more rendering units among the multiple rendering units to the first computing power node comprises:
    determining to allocate both the first rendering unit and the second rendering unit to the first computing power node.
  3. The method according to claim 1 or 2, characterized in that, for a third rendering unit among the multiple rendering units, the information about the rendering task further comprises the computing power required to render the third rendering unit, and the method further comprises:
    receiving the computing power resource size of one or more computing power nodes from a computing power management function module, wherein the one or more computing power nodes comprise the first computing power node;
    wherein determining to allocate one or more rendering units among the multiple rendering units to the first computing power node comprises:
    determining, according to the computing power required to render the third rendering unit and the computing power resource size of the first computing power node, to allocate the third rendering unit to the first computing power node.
  4. The method according to any one of claims 1 to 3, characterized in that, for a third rendering unit among the multiple rendering units, the information about the rendering task further comprises a task type indicating rendering, and the method further comprises:
    receiving the computing power type of one or more computing power nodes from a computing power management function module, wherein the one or more computing power nodes comprise the first computing power node, and the computing power type of the first computing power node comprises a graphics processing unit;
    wherein determining to allocate one or more rendering units among the multiple rendering units to the first computing power node comprises:
    determining, according to the task type and the computing power type of the first computing power node, to allocate the third rendering unit to the first computing power node.
  5. The method according to claim 3 or 4, characterized in that:
    when the method is executed by an access network device, the computing power management function module is a functional module in the access network device, or a network element in a core network connected to the access network device; or
    when the method is executed by a head-mounted display (HMD) device, the computing power management function module is a functional module in an access network device connected to the HMD, or a network element in a core network connected to that access network device.
  6. The method according to any one of claims 1 to 5, characterized in that, for a third rendering unit among the multiple rendering units, the information about the rendering task further comprises the data volume of the third rendering unit, and the method further comprises:
    obtaining communication status information of the first computing power node;
    wherein determining to allocate one or more rendering units among the multiple rendering units to the first computing power node comprises:
    determining, according to the data volume of the third rendering unit and the communication status information of the first computing power node, to allocate the third rendering unit to the first computing power node.
  7. The method according to any one of claims 1 to 6, characterized in that, when the method is executed by an access network device:
    obtaining information about a rendering task comprises: obtaining the information about the rendering task from an HMD;
    and the method further comprises: sending identification information of the one or more rendering units and identification information of the first computing power node to the HMD.
  8. The method according to claim 7, characterized in that the method further comprises:
    receiving rendering subtask information from the HMD, wherein the rendering subtask information is used to indicate rendering a subtask comprising the one or more rendering units;
    forwarding the rendering subtask information to the first computing power node.
  9. The method according to any one of claims 1 to 6, characterized in that, when the method is executed by an HMD, the method further comprises:
    sending rendering subtask information to the first computing power node, wherein the rendering subtask information is used to indicate rendering a subtask comprising the one or more rendering units.
  10. A device, characterized in that it comprises a processing module and a communication module;
    wherein the processing module is configured to obtain information about a rendering task, the rendering task being divided into multiple rendering units, the information about the rendering task comprising a dependency relationship between at least two rendering units among the multiple rendering units; and to determine, according to the dependency relationship, to allocate one or more rendering units among the multiple rendering units to a first computing power node for rendering.
  11. The device according to claim 10, characterized in that the dependency relationship indicates that a first rendering unit among the multiple rendering units depends on a second rendering unit among the multiple rendering units, and the processing module is specifically configured to: determine to allocate both the first rendering unit and the second rendering unit to the first computing power node.
  12. The device according to claim 10 or 11, characterized in that, for a third rendering unit among the multiple rendering units, the information about the rendering task further comprises the computing power required to render the third rendering unit, and the communication module is configured to: receive the computing power resource size of one or more computing power nodes from a computing power management function module, the one or more computing power nodes comprising the first computing power node;
    the processing module is specifically configured to: determine, according to the computing power required to render the third rendering unit and the computing power resource size of the first computing power node, to allocate the third rendering unit to the first computing power node.
  13. The device according to any one of claims 10 to 12, characterized in that, for a third rendering unit among the multiple rendering units, the information about the rendering task further comprises a task type indicating rendering, and the communication module is configured to: receive the computing power type of one or more computing power nodes from a computing power management function module, wherein the one or more computing power nodes comprise the first computing power node, and the computing power type of the first computing power node comprises a graphics processing unit;
    the processing module is specifically configured to: determine, according to the task type and the computing power type of the first computing power node, to allocate the third rendering unit to the first computing power node.
  14. The device according to claim 12 or 13, characterized in that:
    when the device is an access network device, the computing power management function module is a functional module in the access network device, or a network element in a core network connected to the access network device; or
    when the device is a head-mounted display (HMD) device, the computing power management function module is a functional module in an access network device connected to the HMD, or a network element in a core network connected to that access network device.
  15. The device according to any one of claims 10 to 14, characterized in that, for a third rendering unit among the multiple rendering units, the information about the rendering task further comprises the data volume of the third rendering unit, and the processing module is further configured to: obtain communication status information of the first computing power node; and is specifically configured to: determine, according to the data volume of the third rendering unit and the communication status information of the first computing power node, to allocate the third rendering unit to the first computing power node.
  16. The device according to any one of claims 10 to 15, characterized in that, when the device is an access network device, the processing module is specifically configured to: obtain information about the rendering task from an HMD;
    the communication module is configured to: send identification information of the one or more rendering units and identification information of the first computing power node to the HMD.
  17. The device according to claim 16, characterized in that the communication module is further configured to: receive rendering subtask information from the HMD, the rendering subtask information being used to indicate rendering a subtask comprising the one or more rendering units; and forward the rendering subtask information to the first computing power node.
  18. The device according to any one of claims 10 to 15, characterized in that, when the device is an HMD, the communication module is configured to: send rendering subtask information to the first computing power node, the rendering subtask information being used to indicate rendering a subtask comprising the one or more rendering units.
  19. A device, characterized in that it comprises a processor and a memory, the processor being coupled with the memory, and the processor being configured to implement the method according to any one of claims 1 to 9.
  20. A device, characterized in that it comprises a processor and an interface circuit, the interface circuit being configured to receive signals from devices other than the device and transmit them to the processor, or to send signals from the processor to devices other than the device, and the processor being configured to implement the method according to any one of claims 1 to 9 through logic circuits or by executing code instructions.
  21. A system, characterized in that it comprises the device according to any one of claims 10 to 18 and a computing power management function module connected to the device.
  22. The system according to claim 21, characterized in that the device is an access network device.
  23. A system, characterized in that it comprises the device according to any one of claims 10 to 18 and the first computing power node.
  24. The system according to claim 23, characterized in that the device is a head-mounted display (HMD) device.
  25. A computer-readable storage medium, characterized in that the storage medium stores a computer program or instructions which, when executed by a device, implement the method according to any one of claims 1 to 9.
  26. A chip, characterized in that the chip is coupled with a memory and is configured to read and execute program instructions stored in the memory, so that the method according to any one of claims 1 to 9 is executed.
PCT/CN2023/135811 2022-12-08 2023-12-01 Rendering method, device and system WO2024120309A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211574911.6A 2022-12-08 2022-12-08 Rendering method, device and system
CN202211574911.6 2022-12-08

Publications (1)

Publication Number Publication Date
WO2024120309A1 (zh)

Family

ID=91378546

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/135811 WO2024120309A1 (zh) 2022-12-08 2023-12-01 Rendering method, device and system

Country Status (2)

Country Link
CN (1) CN118212333A (zh)
WO (1) WO2024120309A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6438576B1 * 1999-03-29 2002-08-20 International Business Machines Corporation Method and apparatus of a collaborative proxy system for distributed deployment of object rendering
CN111949394A * 2020-07-16 2020-11-17 广州玖的数码科技有限公司 Method, system and storage medium for sharing computing power resources
CN113632145A * 2019-04-01 2021-11-09 Apple Inc. Distributed processing in a computer-generated reality system
CN114470767A * 2022-02-15 2022-05-13 竞技世界(北京)网络技术有限公司 Task processing method and apparatus, and electronic device
CN114968521A * 2022-05-20 2022-08-30 每平每屋(上海)科技有限公司 Distributed rendering method and device
CN115409926A * 2021-05-11 2022-11-29 University of Electronic Science and Technology of China A distributed rendering method


Also Published As

Publication number Publication date
CN118212333A (zh) 2024-06-18

Similar Documents

Publication Publication Date Title
US20230062526A1 (en) Method, apparatus, computer readable medium, and electronic device for communication
CN111819872A (zh) 2020-10-23 Information transmission method and apparatus, communication device, and storage medium
CN109600246A (zh) 2019-04-09 Network slice management method and apparatus therefor
CN108476508B (zh) 2022-11-22 Downlink data packet configuration method and apparatus
You et al. Fog computing as an enabler for immersive media: Service scenarios and research opportunities
CN104955172A (zh) 2015-09-30 Method for implementing mobile network virtualization, control platform, virtualized base station, and system
WO2020024961A1 (zh) 2020-02-06 Data processing method, device and system
CN110268751A (zh) 2019-09-20 Method and system for selecting access and mobility management functions in an access network environment
CN112019363B (zh) 2021-12-14 Method, device and system for determining service transmission requirements
CN114205839A (zh) 2022-03-18 Method, apparatus and system for multi-stream associated transmission
WO2022143508A1 (zh) 2022-07-07 Method, device and system for transmitting data in the near field
WO2020232720A1 (zh) 2020-11-26 Communication method and apparatus, and network architecture
US20210234808A1 (en) 2021-07-29 Methods and systems for data transmission
WO2024120309A1 (zh) 2024-06-13 Rendering method, device and system
CN112714146B (zh) 2023-05-26 Resource scheduling method, apparatus, device, and computer-readable storage medium
WO2022062799A1 (zh) 2022-03-31 Data transmission method and related apparatus
CN114281608A (zh) 2022-04-05 Service packet processing method, apparatus, and storage medium
WO2021062826A1 (zh) 2021-04-08 Data transmission method, apparatus, system, and storage medium
CN116437399A (zh) 2023-07-14 Media packet transmission method, apparatus, and system
CN113938985A (zh) 2022-01-14 Communication method and apparatus
WO2024067062A1 (zh) 2024-04-04 Data transmission method and related products
WO2023045714A1 (zh) 2023-03-30 Scheduling method and communication apparatus
WO2023093461A1 (zh) 2023-06-01 Resource allocation method, device and storage medium in WiFi dual mode
WO2024140655A1 (zh) 2024-07-04 Configured grant processing method, apparatus, and computer-readable storage medium
WO2023016402A1 (zh) 2023-02-16 Data transmission method and apparatus, terminal, and network-side device