WO2022116764A1 - Data processing method and apparatus, communication node and storage medium - Google Patents

Data processing method and apparatus, communication node and storage medium

Info

Publication number
WO2022116764A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
information
fusion
frame
cloud data
Prior art date
Application number
PCT/CN2021/128213
Other languages
English (en)
French (fr)
Inventor
徐异凌
侯礼志
管云峰
孙军
吴钊
李秋婷
吴平
Original Assignee
中兴通讯股份有限公司
上海交通大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 and 上海交通大学
Priority to EP21899798.9A (published as EP4258621A1)
Priority to US18/255,992 (published as US20240054723A1)
Publication of WO2022116764A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60: Network streaming of media packets
    • H04L65/75: Media network packet handling
    • H04L65/70: Media network packetisation

Definitions

  • The present application relates to the field of communications, and for example, to a data processing method and apparatus, a communication node and a storage medium.
  • Point cloud data is the three-dimensional coordinate data obtained by three-dimensionally scanning an object.
  • The point cloud data may also record attribute information (Attribute) of the corresponding points, such as color and reflectivity.
  • The data volume of the point cloud data obtained by scanning will reach the order of millions of points or even more.
  • Such massive point cloud data places a heavy burden on computer storage, processing and transmission.
  • Point cloud compression algorithms have been systematically studied and can be divided into video-based point cloud coding (Video-based Point Cloud Coding, V-PCC) and geometry-based point cloud coding (Geometry-based Point Cloud Coding, G-PCC).
  • The compression method of G-PCC converts the point cloud data into components such as geometric information and attribute information, and then encodes the geometric information and the attribute information into a point cloud data stream, where the geometric information is the position information of the points, described and encoded in the form of an octree (Octree), and the attribute information covers several different types such as the color and reflectivity of the points.
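  • As a non-normative illustration (not part of the original disclosure), the separation of a point cloud into a geometry component and per-type attribute components can be sketched as follows in Python; all names are hypothetical.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class PointCloudFrame:
        # geometry information: the (x, y, z) position of every point
        positions: List[Tuple[float, float, float]]
        # attribute information keyed by type, e.g. "color", "reflectance"
        attributes: Dict[str, list] = field(default_factory=dict)

    def split_components(frame: PointCloudFrame):
        """Mirror the G-PCC idea of encoding the geometry (e.g. as an octree)
        and each attribute type as separate components of the code stream."""
        geometry_component = frame.positions
        attribute_components = dict(frame.attributes)
        return geometry_component, attribute_components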
  • Embodiments of the present application provide a data processing method, device, communication node, and storage medium, which improve the transmission efficiency of fused point cloud data, thereby facilitating flexible use by users.
  • An embodiment of the present application provides a data processing method, including: determining multi-frame fusion point cloud information; writing the multi-frame fusion point cloud information into a media stream of fused point cloud data; and
  • sending the media stream of the fused point cloud data to a receiving end according to the multi-frame fusion point cloud information.
  • An embodiment of the present application provides a data processing method, including: parsing a media stream of fused point cloud data to obtain multi-frame fusion point cloud information; and
  • processing the fused point cloud data according to the multi-frame fusion point cloud information.
  • An embodiment of the present application provides a data processing device, including: a determining module configured to determine multi-frame fusion point cloud information;
  • a writer configured to write the multi-frame fusion point cloud information into a media stream of fused point cloud data; and
  • a transmitter configured to send the media stream of the fused point cloud data to a receiving end according to the multi-frame fusion point cloud information.
  • An embodiment of the present application provides a data processing device, including:
  • a parser configured to parse the media stream of the fused point cloud data to obtain multi-frame fusion point cloud information;
  • the processor is configured to separately process the fused point cloud data according to the multi-frame fused point cloud information.
  • An embodiment of the present application provides a device, including: a communication module, a memory, and one or more processors;
  • the communication module is configured to perform communication interaction between the two communication nodes;
  • the memory is configured to store one or more programs; and
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any one of the above embodiments.
  • An embodiment of the present application provides a storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, the method described in any of the foregoing embodiments is implemented.
  • FIG. 1 is a flowchart of a data processing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of decoding and presenting fused point cloud data provided by an embodiment of the present application
  • FIG. 3 is another schematic diagram of decoding and presenting fused point cloud data provided by an embodiment of the present application.
  • FIG. 4 is another schematic diagram of decoding and presentation of fused point cloud data provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of yet another decoding and presentation of fused point cloud data provided by an embodiment of the present application.
  • FIG. 6 is a flowchart of another data processing method provided by an embodiment of the present application.
  • FIG. 7 is a structural block diagram of a data processing apparatus provided by an embodiment of the present application.
  • FIG. 8 is a structural block diagram of another data processing apparatus provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a communication node provided by an embodiment of the present application.
  • A more reliable autonomous driving solution may involve multi-party interconnection and cooperation: a moving vehicle continuously perceives the surrounding environment to form a point cloud and sends the point cloud data to the cloud. Based on the existing point cloud data, the cloud compares and analyzes the data sent by all vehicles at the current moment, updates the point cloud data, and then encodes the point cloud data required by different vehicles and sends it to the vehicles. In this process, if a vehicle uses the multi-frame point cloud fusion coding method, the point clouds within a certain time interval can be transmitted to the cloud at one time.
  • Likewise, the cloud can use the multi-frame point cloud fusion coding method to transmit the point clouds within a certain time interval to a vehicle at one time, which helps to improve encoding and transmission efficiency.
  • The cloud can preferentially decode the geometric information and the frame number information to obtain the split geometric information, analyze and process only the geometric information, and then send it back to the vehicle. In the same way, while the vehicle is driving, the geometric information can be quickly transmitted back to the cloud for marking processing, which improves processing efficiency.
  • FIG. 1 is a flowchart of a data processing method provided by an embodiment of the present application. This embodiment may be executed by the sending end.
  • the sender is used for sending messages and/or code streams.
  • the sender can be a client or a server.
  • the client may be a terminal side (eg, user equipment), or may be a network side (eg, a base station).
  • this embodiment includes: S110-S130.
  • The multi-frame fusion point cloud information is used to indicate whether the original point cloud data uses the multi-frame fusion coding method, and whether the fused point cloud data adopts different transmission modes and/or consumption modes after the multi-frame fusion coding method is used.
  • the fusion point cloud data refers to data obtained after fusion coding the original point cloud data corresponding to the multi-frame images.
  • Consumption refers to operations such as using, processing or decoding the point cloud code stream, or rendering and presenting it after decoding.
  • the components of the fused point cloud data include: geometric information after fusion and attribute information after fusion; the attribute information after fusion includes frame number information and non-frame number attribute information after fusion.
  • The components in the fused point cloud data may include: the fused geometric information and the frame number information, that is, excluding the fused non-frame-number attribute information and other information.
  • Geometric information is the information describing the spatial position of each point in the point cloud; attribute information is the incidental information describing each point in the point cloud, which can be divided into different types according to the information content represented, such as the color, reflectivity and frame index (Frame Index) of the point; other information can be auxiliary information, such as user-defined information.
  • The fused geometric information refers to the information obtained after fusion coding of the geometric information in the point cloud data;
  • the fused attribute information refers to the information obtained after fusion coding of the attribute information in the point cloud data.
  • the multi-frame fusion point cloud information includes one of the following parameters: a multi-frame fusion coding mode flag; a transmission method of the fusion point cloud data; a consumption method of the fusion point cloud data.
  • the multi-frame fusion coding mode flag is used to represent whether the multi-frame fusion coding mode is used. For example, when the multi-frame fusion coding mode flag is 0, it means that the original point cloud data does not use the multi-frame fusion coding mode; When the multi-frame fusion coding mode flag is 1, it means that the original point cloud data uses the multi-frame fusion coding mode.
  • the transmission mode of the fused point cloud data is used to characterize the transmission priority of each component in the fused point cloud data; the consumption mode of the fused point cloud data is used to characterize the processing priority of each component in the fused point cloud data.
  • the merged attribute information can be classified according to information content, and then different types of attribute information can be transmitted and/or consumed as a whole, or each type can be transmitted and/or consumed independently.
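  • For illustration only (the field names below are assumptions made for this sketch, not the patent's normative syntax), the components of the fused point cloud data and the parameters of the multi-frame fusion point cloud information described above can be modelled as:

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class FusedPointCloudData:
        fused_geometry: bytes                   # geometric information after fusion
        frame_index: bytes                      # frame number information (a fused attribute)
        fused_attributes: Dict[str, bytes] = field(default_factory=dict)  # non-frame-number attributes by type

    @dataclass
    class MultiFrameFusionInfo:
        combine_frame_coding_enabled_flag: int  # 1: multi-frame fusion coding used, 0: not used
        transmission_mode: Optional[int] = None # transmission mode of the fused point cloud data
        consumption_mode: Optional[int] = None  # consumption mode of the fused point cloud data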
  • Determining the multi-frame fusion point cloud information includes one of the following: determining the multi-frame fusion point cloud information according to preset configuration information; determining the multi-frame fusion point cloud information according to request information of the receiving end;
  • determining the multi-frame fusion point cloud information according to the current channel information and/or the attribute information of the receiving end.
  • The multi-frame fusion point cloud information may be acquired by the sending end from an external device; it may be set directly by the sending end according to its own preset configuration information; it may be set by the sending end based on the preset configuration information carried in the request information of the receiving end;
  • or it may be set by the sending end according to the acquired current channel information and/or the attribute information of the receiving end.
  • Determining the multi-frame fusion point cloud information according to the preset configuration information includes two cases: first, when the sending end establishes and initializes a connection with the current channel and/or the receiving end, the multi-frame fusion point cloud information is determined according to the preset configuration information;
  • second, when the sending end has not established and initialized a connection with the current channel and/or the receiving end, the multi-frame fusion point cloud information is determined according to the preset configuration information.
  • Here, the case where the sending end establishes and initializes a connection with the current channel and/or the receiving end refers to the situation in which the sending end and the receiving end perform media communication; the case where the sending end has not established and initialized a connection with the current channel and/or the receiving end refers to the situation in which the sending end does not perform media communication with the receiving end.
  • In this case, the multi-frame fusion point cloud information may indicate that the original point cloud data does not use the multi-frame fusion coding method, or that it uses the multi-frame fusion coding method.
  • In the case of using the multi-frame fusion coding method, the fused point cloud data may adopt the same transmission mode and/or consumption mode, or may adopt different transmission modes and/or consumption modes.
  • the preset configuration information may include parameters such as device conditions and network conditions of the sender itself, that is, the sender configures multi-frame fusion point cloud information according to its own device parameters.
  • The multi-frame fusion point cloud information may indicate not using the multi-frame fusion coding method, or may indicate using the multi-frame fusion coding method.
  • In the case of using the multi-frame fusion coding method, the fused point cloud data can adopt the same transmission mode and/or consumption mode, or can adopt different transmission modes and/or consumption modes.
  • The sending end may statically set the multi-frame fusion point cloud information when establishing and initializing the connection with the current channel and/or the receiving end; it may also dynamically set the multi-frame fusion point cloud information according to the request information of the receiving end after the connection is established;
  • or it may dynamically set the multi-frame fusion point cloud information according to the current channel information and/or the attribute information of the receiving end after the connection is established.
  • The current channel information and/or the attribute information of the receiving end include, but are not limited to, network conditions, user requirements, device performance, and the like.
  • the multi-frame fusion point cloud information can be written in different parts of the media code stream, for example, the point cloud information source data part in the media code stream, the system layer data part in the media code stream, and so on.
  • Different parameters in the multi-frame fusion point cloud information can be written into the same part of the media code stream as a whole, or each can be written into different parts of the media code stream independently, or several parameters can be combined and written into different parts separately.
  • a media stream containing multi-frame fusion point cloud information is sent to the receiving end.
  • The media stream of the fused point cloud data can be sent respectively according to the multi-frame fusion point cloud information; for example, high-priority information can be sent first, or a more reliable channel can be used to transmit high-priority information.
  • the data processing method applied to the sending end further includes: respectively storing the fused point cloud data as a media file according to the multi-frame fused point cloud information.
  • The fused point cloud data can be stored as a media file according to the multi-frame fusion point cloud information; for example, the geometric information, the frame number information and the non-frame-number attribute information are placed independently at different positions of the media file, so as to achieve independent consumption of the geometric information and the frame number information.
  • The transmission mode of the fused point cloud data includes one of the following: transmitting the fused geometric information first and then the fused attribute information; transmitting the fused geometric information and the fused attribute information simultaneously; transmitting the frame number information and the fused geometric information first and then the fused non-frame-number attribute information; transmitting the fused geometric information and the fused non-frame-number attribute information first and then the frame number information; transmitting, in sequence, the fused geometric information, the fused non-frame-number attribute information, and the frame number information.
  • The consumption mode of the fused point cloud data includes one of the following: preferentially consuming the fused geometric information; preferentially consuming the fused non-frame-number attribute information; preferentially consuming the fused geometric information and the fused non-frame-number attribute information; preferentially consuming the frame number information and the fused geometric information; preferentially consuming the frame number information.
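  • The transmission and consumption modes listed above can be enumerated as follows; this is an illustrative sketch and the numeric values are assumptions, not values fixed by the patent.

    from enum import Enum

    class TransmissionMode(Enum):
        GEOMETRY_THEN_ATTRIBUTES = 0              # fused geometry first, then fused attributes
        GEOMETRY_AND_ATTRIBUTES_TOGETHER = 1      # fused geometry and fused attributes simultaneously
        FRAME_INDEX_AND_GEOMETRY_FIRST = 2        # frame number + fused geometry, then non-frame-number attributes
        GEOMETRY_AND_NON_FRAME_ATTRS_FIRST = 3    # fused geometry + non-frame-number attributes, then frame number
        GEOMETRY_THEN_ATTRS_THEN_FRAME_INDEX = 4  # geometry, non-frame-number attributes, frame number in sequence

    class ConsumptionMode(Enum):
        GEOMETRY_FIRST = 0                        # preferentially consume the fused geometry
        NON_FRAME_ATTRIBUTES_FIRST = 1            # preferentially consume the fused non-frame-number attributes
        GEOMETRY_AND_NON_FRAME_ATTRS_FIRST = 2    # preferentially consume geometry and non-frame-number attributes
        FRAME_INDEX_AND_GEOMETRY_FIRST = 3        # preferentially consume frame number information and geometry
        FRAME_INDEX_FIRST = 4                     # preferentially consume the frame number information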
  • the combination of the multi-frame fusion coding method flag, the transmission method of the fusion point cloud data, and the consumption method is described. Exemplarily, the following combination methods may be included:
  • Mode 1 The sender adopts the multi-frame fusion coding method, and preferentially transmits the geometric information after fusion, and then transmits the attribute information after fusion, wherein the attribute information after fusion includes frame number information.
  • Mode 2: The sending end adopts the multi-frame fusion coding method, and sets a flag for preferential consumption of the fused geometric information.
  • In this case, the transmission of the fused geometric information includes: transmitting the fused geometric information and the fused attribute information at the same time; transmitting the fused geometric information first and then the fused attribute information; or transmitting the frame number information and the fused geometric information first and then the fused non-frame-number attribute information.
  • Mode 3: The receiving end sends a request to the sending end to consume the fused geometric information; for example, the receiving end needs to display the fused geometric information first.
  • The sending end uses the multi-frame fusion coding method according to the request of the receiving end, and sets a flag for preferential consumption of the fused geometric information.
  • In this case, the transmission of the fused geometric information includes: transmitting the fused geometric information and the fused attribute information at the same time; or transmitting the frame number information and the fused geometric information first and then the fused non-frame-number attribute information.
  • Mode 4: In the case of stable network conditions, such as a high transmission rate with no bit errors or packet loss, the sending end can directly send the original point cloud data to the receiving end without using the multi-frame fusion coding method; or, in the case of using the multi-frame fusion coding method, the fused geometric information and the fused attribute information are sent to the receiving end together.
  • The sending end can also use the multi-frame fusion coding method and transmit the fused geometric information first and then the fused attribute information; or use a channel with high reliability to transmit the fused geometric information and a channel with lower reliability to transmit the fused attribute information.
  • The frame number information is a kind of fused attribute information and is used to split the fused point cloud data into the original multi-frame point cloud data. Therefore, the frame number information is consumed after the fused geometric information and/or the fused non-frame-number attribute information; that is, the fused geometric information and/or the fused non-frame-number attribute information is transmitted first, and then the frame number information is transmitted.
  • It is also possible to transmit the fused geometric information first, then the fused non-frame-number attribute information, and finally the frame number information; or to transmit the frame number information and the fused geometric information first, and then the fused non-frame-number attribute information.
  • the sender may use different transmission priorities for different attribute information, or use different channels for transmission of different attribute information, according to preset configuration information, user requirements, channel conditions or receiver information.
  • the sender may choose to send or not to send other information according to preset configuration information, user requirements, channel conditions, or receiver information.
  • the correspondence between the multi-frame fusion point cloud information and fusion point cloud data includes one of the following:
  • Multi-frame fusion point cloud information corresponds to a fusion point cloud data
  • the multi-frame fusion point cloud information corresponds to each component in the fusion point cloud data
  • At least one parameter in the multi-frame fusion point cloud information corresponds to one fusion point cloud data, and at least one parameter in the multi-frame fusion point cloud information corresponds to each component in the fusion point cloud data respectively.
  • This embodiment describes an implementation manner of the multi-frame fusion point cloud information in the above-mentioned embodiments.
  • the multi-frame fusion point cloud information includes the following parameters: a multi-frame fusion coding mode flag; a transmission method for fusion point cloud data; and a consumption method for fusion point cloud data.
  • In this embodiment, the correspondence between the multi-frame fusion point cloud information and the fused point cloud data is that the multi-frame fusion point cloud information corresponds to one piece of fused point cloud data as a whole, that is, the multi-frame fusion point cloud information is set for one piece of fused point cloud data.
  • The implementation manner of the multi-frame fusion point cloud information is as follows:
  • combine_frame_coding_enabled_flag is used to indicate the multi-frame fusion coding mode flag. For example, 1 indicates that the multi-frame fusion coding method is used, and 0 indicates that the multi-frame fusion coding method is not used.
  • combine_frame_decoding_order is used to indicate the consumption method used for fused point cloud data (ie, the fused geometric information, attribute information and other information), such as the order of decoding and presentation.
  • combine_frame_decoding_order equal to 0 indicates that decoding and presentation should be performed immediately after the frame number information and the fused geometric information are received, and the split multi-frame geometric information is presented;
  • combine_frame_decoding_order equal to 1 indicates that decoding and presentation should be performed immediately after the fused geometric information is received, and the decoded fused geometric information is presented;
  • combine_frame_decoding_order equal to 2 indicates that decoding and presentation are performed after all the information is received, and the multi-frame point cloud data with attribute information is presented, that is, the decoded and split geometric information and non-frame-number attribute information are presented;
  • combine_frame_decoding_order equal to 3 indicates that decoding and presentation should be performed immediately after the fused geometric information and the fused non-frame-number attribute information are received, and the fused single-frame point cloud data with attribute information is presented.
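  • The four values of combine_frame_decoding_order can be summarized as in the following sketch; the mapping paraphrases the descriptions above and is illustrative rather than normative syntax.

    COMBINE_FRAME_DECODING_ORDER = {
        0: "decode immediately after the frame number information and the fused geometry are received; "
           "present the split multi-frame geometric information",
        1: "decode immediately after the fused geometry is received; present the fused geometric information",
        2: "decode only after all information is received; present the split multi-frame point cloud data "
           "with attribute information",
        3: "decode immediately after the fused geometry and the fused non-frame-number attributes are received; "
           "present the fused single-frame point cloud data with attribute information",
    }

    def describe_decoding_order(value: int) -> str:
        return COMBINE_FRAME_DECODING_ORDER[value]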
  • FIG. 2 is a schematic diagram of decoding and presentation of fused point cloud data provided by an embodiment of the present application.
  • In the case where combine_frame_decoding_order is equal to 0, the code stream carrying the fused point cloud data (that is, the multi-frame fusion coding code stream) is decoded: the code stream corresponding to the fused geometric information (that is, the geometry code stream) and the code stream corresponding to the frame number information (that is, the frame number code stream) are preferentially decoded, the fused geometric information is split into the geometric information of the multi-frame point cloud data according to the frame number information, and the geometric information obtained after decoding and splitting is presented.
  • FIG. 3 is another schematic diagram of decoding and presentation of fused point cloud data provided by an embodiment of the present application.
  • The case where combine_frame_decoding_order is equal to 1 is described: the code stream carrying the fused point cloud data is decoded, the code stream corresponding to the fused geometric information is preferentially decoded, and the decoded fused geometric information is presented.
  • FIG. 4 is another schematic diagram of decoding and presentation of fused point cloud data provided by an embodiment of the present application.
  • The case where combine_frame_decoding_order is equal to 2 is described: decoding is performed after all the information of the code stream carrying the fused point cloud data is received, and the decoded and split geometric information and non-frame-number attribute information are presented.
  • FIG. 5 is a schematic diagram of still another kind of decoding and presentation of fused point cloud data provided by an embodiment of the present application.
  • the case where combine_frame_decoding_order is equal to 3 is described.
  • The code stream carrying the fused point cloud data is decoded: decoding and presentation are performed immediately after the fused geometric information and the fused non-frame-number attribute information are received, and the fused single-frame point cloud data with attribute information is presented.
  • the non-frame number attribute information may be divided into independent information according to the type of the non-frame number attribute information, not as a whole, and different decoding and presentation modes are set for them. For example, color attribute information can be preferentially decoded and presented together with geometric information.
  • combine_frame_transfer_priority is used to indicate the transmission order of the fused point cloud data, with different values representing different transmission orders.
  • Table 1 is a schematic table of the transmission modes of the fused point cloud data provided by an embodiment of the present application, as shown below. When combine_frame_transfer_priority is equal to 0, the fused geometric information, the frame number information and the fused attribute information are transmitted in sequence; when combine_frame_transfer_priority is equal to 1, the fused geometric information, the fused attribute information and the frame number information are transmitted in sequence.
  • Table 1 Transmission modes of fused point cloud data
    combine_frame_transfer_priority = 0: fused geometric information, frame number information, fused attribute information (in sequence)
    combine_frame_transfer_priority = 1: fused geometric information, fused attribute information, frame number information (in sequence)
  • the non-frame number attribute information can be divided into independent information according to the attribute type instead of being a whole, and different transmission sequences can be set for them.
  • color attribute information can be preferentially transmitted separately.
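  • A minimal sender-side sketch of the scheduling implied by combine_frame_transfer_priority and Table 1; the send callback and the component names are placeholders introduced only for illustration.

    def transmission_order(combine_frame_transfer_priority: int):
        if combine_frame_transfer_priority == 0:
            # fused geometry, then frame number information, then fused attributes
            return ["fused_geometry", "frame_index", "fused_attributes"]
        if combine_frame_transfer_priority == 1:
            # fused geometry, then fused attributes, then frame number information
            return ["fused_geometry", "fused_attributes", "frame_index"]
        raise ValueError("unsupported combine_frame_transfer_priority")

    def send_fused_point_cloud(sub_streams: dict, combine_frame_transfer_priority: int, send):
        # send is any callable that transmits one named sub-stream, e.g. over a network channel
        for name in transmission_order(combine_frame_transfer_priority):
            send(name, sub_streams[name])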
  • This embodiment describes another implementation manner of the multi-frame fusion point cloud information in the above-mentioned embodiments.
  • the multi-frame fusion point cloud information includes the following parameters: a multi-frame fusion coding mode flag; a transmission method for fusion point cloud data; and a consumption method for fusion point cloud data.
  • In this embodiment, the correspondence between the multi-frame fusion point cloud information and the fused point cloud data is that the multi-frame fusion point cloud information corresponds to each component in the fused point cloud data respectively, that is, different components in the fused point cloud data have their own corresponding multi-frame fusion point cloud information.
  • The implementation manner of the multi-frame fusion point cloud information is as follows:
  • combine_frame_coding_enabled_flag indicates the multi-frame fusion coding mode flag, for example, 1 indicates that the multi-frame fusion coding method is used, and 0 indicates that the multi-frame fusion coding method is not used.
  • combine_frame_sample_decoding_order is the consumption method used to fuse point cloud data (ie, the fused geometric information, attribute information and other information), such as the order of decoding and presentation.
  • When combine_frame_sample_decoding_order is 0, it means that this component should be decoded and presented first, that is, decoding and presentation should be performed immediately after it is received; other values indicate the subsequent decoding order. For example, if the frame number information and the fused geometric information are to be decoded and presented first, the combine_frame_sample_decoding_order of the frame number information and of the fused geometric information are both set to 0, and the combine_frame_sample_decoding_order of the fused non-frame-number attribute information is set to 1, as shown in FIG. 2.
  • combine_frame_sample_transfer_priority is the transfer order of the information of the point cloud data after fusion, and different values are used to represent different transfer orders.
  • the combine_frame_sample_transfer_priority of the fused geometry information is set to 0
  • the combine_frame_sample_transfer_priority of the frame number information is set to 1
  • the combine_frame_sample_transfer_priority of the non-frame number attribute information after fusion is set to 2.
  • the non-frame number attribute information after fusion can also be set independently according to the type, for example, the combine_frame_sample_transfer_priority of the color information is set to 3.
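  • The per-component signalling in this embodiment can be illustrated as follows; the component names and the dictionary layout are assumptions made only for this sketch.

    component_signalling = {
        "fused_geometry":       {"combine_frame_sample_decoding_order": 0,
                                 "combine_frame_sample_transfer_priority": 0},
        "frame_index":          {"combine_frame_sample_decoding_order": 0,
                                 "combine_frame_sample_transfer_priority": 1},
        "non_frame_attributes": {"combine_frame_sample_decoding_order": 1,
                                 "combine_frame_sample_transfer_priority": 2},
        # a colour attribute may optionally be signalled independently of the other attributes
        "color":                {"combine_frame_sample_decoding_order": 1,
                                 "combine_frame_sample_transfer_priority": 3},
    }

    def transfer_schedule(signalling: dict):
        """Order the components by transfer priority (0 = highest)."""
        return sorted(signalling, key=lambda c: signalling[c]["combine_frame_sample_transfer_priority"])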
  • This embodiment describes yet another implementation manner of the multi-frame fusion point cloud information in the above-mentioned embodiments.
  • the multi-frame fusion point cloud information includes the following parameters: a multi-frame fusion coding mode flag; a transmission method for fusion point cloud data; and a consumption method for fusion point cloud data.
  • In this embodiment, the correspondence between the multi-frame fusion point cloud information and the fused point cloud data is that at least one parameter in the multi-frame fusion point cloud information corresponds to one piece of fused point cloud data, and at least one parameter in the multi-frame fusion point cloud information corresponds to each component in the fused point cloud data respectively; that is, part of the multi-frame fusion point cloud information is set for one piece of fused point cloud data as a whole, and part is set for different components in the fused point cloud data.
  • combine_frame_coding_enabled_flag is used to indicate the multi-frame fusion coding mode flag. For example, 1 indicates that the multi-frame fusion coding method is used, and 0 indicates that the multi-frame fusion coding method is not used.
  • combine_frame_decoding_order is used to indicate the consumption method used for fused point cloud data (ie, the fused geometric information, attribute information and other information), such as the order of decoding and presentation.
  • combine_frame_decoding_order equal to 0 means that decoding and presentation should be performed immediately after the frame number information and the fused geometric information are received, and the split multi-frame point cloud data is presented, which supports preferential consumption of the geometric information;
  • combine_frame_decoding_order equal to 1 means that decoding and presentation should be performed immediately after the fused geometric information is received, and the decoded fused geometric information is presented;
  • combine_frame_decoding_order equal to 2 means that decoding and presentation are performed after all the information is received, and the multi-frame point cloud data with attribute information is presented;
  • combine_frame_decoding_order equal to 3 means that decoding and presentation are performed immediately after the fused geometric information and the fused non-frame-number attribute information are received, and the fused single-frame point cloud data with attribute information is presented.
  • the non-frame number attribute information after fusion can be divided into different types of independent information instead of being a whole, and different decoding and presentation modes can be set for them respectively.
  • color information can be preferentially decoded and presented together with geometric information.
  • combine_frame_sample_transfer_priority is used to indicate the transmission order of the fused point cloud data, with different values representing different transmission orders.
  • A value of 0 indicates the highest transmission priority, and a higher value indicates a lower priority.
  • the combine_frame_sample_transfer_priority of the fused geometry information is set to 0
  • the combine_frame_sample_transfer_priority of the frame number information is set to 1
  • the combine_frame_sample_transfer_priority of the non-frame number attribute information after fusion is set to 2.
  • the non-frame number attribute information after fusion can also be set independently, for example, the combine_frame_sample_transfer_priority of the color information after fusion is set to 3.
  • Writing the multi-frame fusion point cloud information into the media stream includes one of the following: writing the multi-frame fusion point cloud information into supplemental enhancement information (SEI) of the fused point cloud data; writing the multi-frame fusion point cloud information into video usability information (VUI) of the fused point cloud data; writing the multi-frame fusion point cloud information into a media file of the fused point cloud data.
  • this embodiment writes multi-frame fused point cloud information into a media package file according to different requirements for consuming and/or transmitting fused point cloud data.
  • This embodiment adopts the International Organization for Standardization/International Electrotechnical Commission 14496-12 ISO Base Media File Format (ISO/IEC 14496-12 ISO BMFF) to encapsulate the fused point cloud data.
  • This encapsulation method is an implementation method of multi-frame point cloud fusion information, but it is not limited to this encapsulation method.
  • the multi-frame fusion point cloud information includes one of the following parameters: a multi-frame fusion coding mode flag, a transmission method of the fusion point cloud data, and a consumption method of the fusion point cloud data.
  • The fused geometric information and the fused attribute information in the fused point cloud data are stored in different media tracks respectively, and sample entries of different types (for example, identified by four-character codes) are defined to identify the data type of the fused point cloud data stored in each track, such as the fused geometric information and the frame number information.
  • indication information is given in the sample entry to represent the transmission mode and/or consumption mode adopted for the fusion point cloud data (ie, the geometric information, attribute information and other information after fusion coding).
  • the fusion point cloud data is stored in the sample of the media track.
  • the indication information in the media track is implemented as follows:
  • decoding_order_flag is used to indicate a flag of the decoding and presentation order of the samples in the media track; for example, 1 indicates that the samples are decoded and presented in a specified order, and 0 indicates that they are decoded and presented in the default manner.
  • sample_decoding_order is used to indicate the order of decoded presentation of samples in this media track.
  • A value of 0 represents the first to be decoded and presented, that is, decoding and presentation should be performed immediately after receiving; other values indicate the subsequent decoding order.
  • For example, if the frame number information and the fused geometric information are to be decoded and presented first, the sample_decoding_order of the frame number information and of the fused geometric information are both set to 0, and the sample_decoding_order of the fused non-frame-number attribute information is set to 1, as shown in FIG. 2.
  • transfer_priority_flag is a flag used to indicate the transmission order of the samples in this media track; for example, 1 indicates that the samples are transmitted in a specified order, and 0 indicates that the samples are transmitted in the default manner.
  • sample_transfer_priority is used to indicate the transfer order of samples in the media track, and different values are used to indicate different transfer orders. Exemplarily, 0 indicates the highest transmission priority, and a higher value indicates a lower priority.
  • the sample_transfer_priority of the geometric information is set to 0
  • the sample_transfer_priority of the frame number information is set to 1
  • the sample_transfer_priority of the non-frame number attribute information after fusion is set to 2.
  • It is also possible to independently set different types of fused non-frame-number attribute information; for example, the sample_transfer_priority of the color information is set to 3.
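  • A hedged sketch of packing the per-track indication fields above into a small box payload that could be carried in a sample entry; the 'dcti' four-character code and the one-byte field widths are assumptions for illustration, not the ISO BMFF syntax defined by the patent.

    import struct

    def decoding_and_transfer_info_box(decoding_order_flag: int,
                                       sample_decoding_order: int,
                                       transfer_priority_flag: int,
                                       sample_transfer_priority: int) -> bytes:
        body = struct.pack(">BBBB",
                           decoding_order_flag,       # 1: samples decoded/presented in the signalled order
                           sample_decoding_order,     # 0: first to be decoded and presented
                           transfer_priority_flag,    # 1: samples transferred in the signalled order
                           sample_transfer_priority)  # 0: highest transfer priority
        box_type = b"dcti"                            # hypothetical four-character code
        return struct.pack(">I4s", 8 + len(body), box_type) + body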
  • When the point cloud data adopts the multi-frame fusion coding method, the media file of the fused point cloud data needs to provide indication information to indicate that the point cloud data in the media file is encoded with the multi-frame fusion coding method.
  • the indication information of the fusion coding of the multi-frame point cloud data is specifically expressed as follows:
  • combine_frame_coding_enabled_flag indicates the multi-frame fusion coding mode flag. For example, 1 means that the multi-frame fusion coding method is used, and 0 means that the multi-frame fusion coding method is not used.
  • The indication information of the fusion coding of the multi-frame point cloud data may be indicated at the file level, for example, in the related media header data box (MediaHeaderBox) under the media information data box (MediaInformationBox), or in other data boxes (Box) at the file level.
  • the indication information of the fusion coding of the multi-frame point cloud data may also be indicated in the media track level, for example, indicated in the corresponding sample entry.
  • The geometric information and different types of attribute information in the fused point cloud data may be stored in one or more different media tracks. For example, if two different types of attribute information are stored in the same media track, the fused point cloud data in that media track uses the same transmission and/or consumption mode.
  • The consumption mode and/or transmission mode of the fused point cloud data may change according to different scenarios or requirements, and a dynamic timed metadata track can be used to dynamically set the consumption mode and/or transmission mode of the fused point cloud data in one or more media tracks. A dynamic timed metadata track describing the dynamic consumption mode and/or transmission mode of the fused point cloud data is identified by 'dydt', and the specific implementation is as follows:
  • the consumption mode and/or transmission mode of each sample in the media track that stores the fusion point cloud data referenced by the dynamic timing metadata track is indicated by DynamicDecodingAndTransferSample(), and the specific implementation is as follows:
  • dynamic_order_flag indicates that the consumption mode of the sample changes dynamically, 0 means that the consumption mode does not change, and 1 means that the consumption mode changes;
  • sample_transfer_priority indicates that the transfer mode of the sample changes dynamically, 0 means that the transfer mode does not change, and 1 means that the transfer mode changes.
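  • An illustrative parser for the per-sample dynamic signalling described above; the one-byte-per-field layout is an assumption made only for this sketch.

    def parse_dynamic_decoding_and_transfer_sample(data: bytes) -> dict:
        # DynamicDecodingAndTransferSample(): two flags carried in each metadata sample
        dynamic_order_flag = data[0]     # 1: the consumption mode of the referenced samples changes
        dynamic_transfer_flag = data[1]  # 1: the transmission mode changes
        return {"consumption_changed": bool(dynamic_order_flag),
                "transmission_changed": bool(dynamic_transfer_flag)}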
  • this embodiment describes the writing of multi-frame fusion point cloud information into a media stream.
  • writing the multi-frame fused point cloud information into the media stream includes: writing the multi-frame fused point cloud information into supplemental enhancement information (Supplemental Enhancement Information, SEI) of the fused point cloud data.
  • the implementation manner of writing multi-frame fused point cloud information into the SEI of the fused point cloud data includes:
  • combine_frame_coding_info() can be as follows:
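  • The syntax of combine_frame_coding_info() is not reproduced in this text; purely as a hedged sketch, an SEI payload reusing the fields of the earlier embodiments could be serialized as follows.

    def write_combine_frame_coding_info_sei(combine_frame_coding_enabled_flag: int,
                                            combine_frame_decoding_order: int,
                                            combine_frame_transfer_priority: int) -> bytes:
        # one byte per field is an assumption; the patent leaves the exact layout open
        return bytes([combine_frame_coding_enabled_flag,
                      combine_frame_decoding_order,
                      combine_frame_transfer_priority])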
  • the implementation manner of combine_frame_coding_info( ) may be any form or a combination form of any of the foregoing embodiments, and details are not described herein again.
  • this embodiment describes the writing of multi-frame fusion point cloud information into a media stream.
  • Writing the multi-frame fusion point cloud information into the media stream includes: writing the multi-frame fusion point cloud information into the video usability information (Video Usability Information, VUI) of the fused point cloud data.
  • the implementation manner of writing multi-frame fusion point cloud information into the VUI of fusion point cloud data includes:
  • When the value of combine_frame_coding_info_flag is equal to 1, it indicates that the multi-frame fusion point cloud information follows; combine_frame_coding_info() may be in any form or combination of forms of the above-mentioned embodiments.
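  • A sketch of the VUI-level gating described above; the bit widths and the three fields read after the flag are assumptions made for illustration.

    def parse_vui_combine_frame_info(bits):
        """bits: an iterator yielding 0/1 values taken from the VUI bitstream."""
        combine_frame_coding_info_flag = next(bits)
        if combine_frame_coding_info_flag != 1:
            return None
        # combine_frame_coding_info(): any of the layouts of the earlier embodiments;
        # three illustrative fixed-width fields are read here
        read_u2 = lambda: (next(bits) << 1) | next(bits)
        return {
            "combine_frame_coding_enabled_flag": next(bits),
            "combine_frame_decoding_order": read_u2(),
            "combine_frame_transfer_priority": read_u2(),
        }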
  • FIG. 6 is a flowchart of another data processing method provided by an embodiment of the present application. This embodiment may be executed by the receiving end.
  • the receiving end is used for receiving messages and/or code streams.
  • the receiving end can be a client or a server.
  • the client may be a terminal side (eg, user equipment), or may be a network side (eg, a base station).
  • this embodiment includes: S210-S220.
  • the receiving end parses the media code stream to acquire multi-frame fused point cloud information.
  • the multi-frame fusion point cloud information can be extracted from the media code stream.
  • the multi-frame fusion point cloud information includes one of the following parameters: a multi-frame fusion coding mode flag; a transmission method of the fusion point cloud data; a consumption method of the fusion point cloud data.
  • The multi-frame fusion coding mode flag is used to indicate whether the original point cloud data uses the multi-frame fusion coding method; if the multi-frame fusion coding method is used, it further indicates whether the fused point cloud data (that is, the fused geometric information, attribute information and other information) uses different transmission and/or consumption modes.
  • all parameters in the multi-frame fusion point cloud information may be stored in different locations of the media stream or media file, respectively, or may be stored in the same location of the media data or media file.
  • In S220, the fused point cloud data is processed according to the multi-frame fusion point cloud information.
  • the processing of the media stream is to first determine whether the multi-frame fusion coding method is used.
  • If the multi-frame fusion coding method is used, the media data is processed according to the indicated transmission and/or consumption mode of each kind of information. For example, if the multi-frame fusion point cloud information indicates that the geometric information is to be preferentially decoded and presented, the geometric information can be obtained and parsed from the media stream or media file first; if the frame number information is not indicated for preferential decoding and presentation, the geometric information of the fused point cloud data is decoded and presented directly; if the frame number information is also indicated for preferential decoding and presentation, the frame number information is acquired and parsed, the multi-frame fused geometric information is decoded and split according to the frame number information, and the split geometric information is presented.
  • the multi-frame fusion point cloud information can be ignored to perform conventional processing on the media stream, that is, all fusion point cloud data are obtained, and geometric information, frame number information, non-frame number attribute information and other information are parsed separately.
  • the geometric information, non-frame number attribute information and other information are split, and the split point cloud information is presented separately.
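  • The receiving-end flow described above can be sketched as follows; the decode, split and present callables are placeholders introduced only for illustration.

    def consume_fused_stream(info, stream, decode_geometry, decode_frame_index,
                             split_by_frame_index, present):
        if not info or info.get("combine_frame_coding_enabled_flag") != 1:
            # multi-frame fusion coding not used (or information absent): regular processing
            present(stream)
            return
        order = info.get("combine_frame_decoding_order")
        if order == 1:
            # geometry has priority and the frame number information is not indicated:
            # decode and present the fused geometric information directly
            present(decode_geometry(stream))
        elif order == 0:
            # geometry and frame number information have priority: split the fused
            # geometry into per-frame geometry according to the frame number information
            geometry = decode_geometry(stream)
            frame_index = decode_frame_index(stream)
            present(split_by_frame_index(geometry, frame_index))
        else:
            # otherwise wait for all components before decoding and presenting
            present(stream)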
  • the method before parsing the media stream of the fused point cloud data to obtain multi-frame fused point cloud information, the method further includes: receiving the media stream of the fused point cloud data sent by the sending end.
  • the method before parsing the media stream of the fused point cloud data to obtain multi-frame fused point cloud information, the method further includes: reading a locally pre-stored media file of the fused point cloud data.
  • In this case, parsing the media stream of the fused point cloud data refers to performing a parsing process on the media file that stores the fused point cloud data.
  • the components of the fused point cloud data include: geometric information after fusion and attribute information after fusion; the attribute information after fusion includes frame number information and non-frame number attribute information after fusion.
  • The transmission mode of the fused point cloud data includes one of the following: transmitting the fused geometric information first and then the fused attribute information; transmitting the fused geometric information and the fused attribute information simultaneously; transmitting the frame number information and the fused geometric information first and then the fused non-frame-number attribute information; transmitting the fused geometric information and the fused non-frame-number attribute information first and then the frame number information; transmitting, in sequence, the fused geometric information, the fused non-frame-number attribute information, and the frame number information.
  • The consumption mode of the fused point cloud data includes one of the following: preferentially consuming the fused geometric information; preferentially consuming the fused non-frame-number attribute information; preferentially consuming the fused geometric information and the fused non-frame-number attribute information; preferentially consuming the frame number information and the fused geometric information; preferentially consuming the frame number information.
  • the correspondence between the multi-frame fused point cloud information and the fused point cloud data includes one of the following:
  • Multi-frame fusion point cloud information corresponds to a fusion point cloud data
  • the multi-frame fusion point cloud information corresponds to each component in the fusion point cloud data
  • At least one parameter in the multi-frame fusion point cloud information corresponds to one fusion point cloud data, and at least one parameter in the multi-frame fusion point cloud information corresponds to each component in the fusion point cloud data respectively.
  • FIG. 7 is a structural block diagram of a data processing apparatus provided by an embodiment of the present application. This embodiment is executed by the sending end. As shown in FIG. 7 , the data processing apparatus in this embodiment includes: a determination module 310 , a writer 320 and a transmitter 330 .
  • the determining module 310 is configured to determine multi-frame fusion point cloud information
  • the writer 320 is configured to write the multi-frame fusion point cloud information into the media stream of the fusion point cloud data
  • the transmitter 330 is configured to respectively send the media stream of the fused point cloud data to the receiving end according to the multi-frame fused point cloud information.
  • the data processing apparatus applied to the sending end further includes: a memory configured to store the fused point cloud data as media files respectively according to the multi-frame fused point cloud information.
  • the multi-frame fusion point cloud information includes one of the following parameters: a multi-frame fusion coding mode flag; a transmission method of the fusion point cloud data; a consumption method of the fusion point cloud data.
  • The determining module 310 is configured to determine the multi-frame fusion point cloud information in one of the following manners: according to preset configuration information; according to request information of the receiving end;
  • according to the current channel information and/or the attribute information of the receiving end.
  • the components of the fused point cloud data include: geometric information after fusion and attribute information after fusion; the attribute information after fusion includes frame number information and non-frame number attribute information after fusion.
  • The transmission mode of the fused point cloud data includes one of the following: transmitting the fused geometric information first and then the fused attribute information; transmitting the fused geometric information and the fused attribute information simultaneously; transmitting the frame number information and the fused geometric information first and then the fused non-frame-number attribute information; transmitting the fused geometric information and the fused non-frame-number attribute information first and then the frame number information; transmitting, in sequence, the fused geometric information, the fused non-frame-number attribute information, and the frame number information.
  • The consumption mode of the fused point cloud data includes one of the following: preferentially consuming the fused geometric information; preferentially consuming the fused non-frame-number attribute information; preferentially consuming the fused geometric information and the fused non-frame-number attribute information; preferentially consuming the frame number information and the fused geometric information; preferentially consuming the frame number information.
  • the correspondence between the multi-frame fused point cloud information and the fused point cloud data includes one of the following:
  • Multi-frame fusion point cloud information corresponds to a fusion point cloud data
  • the multi-frame fusion point cloud information corresponds to each component in the fusion point cloud data
  • At least one parameter in the multi-frame fusion point cloud information corresponds to one fusion point cloud data, and at least one parameter in the multi-frame fusion point cloud information corresponds to each component in the fusion point cloud data respectively.
  • Writing the multi-frame fusion point cloud information into the media stream includes one of the following: writing the multi-frame fusion point cloud information into supplemental enhancement information (SEI) of the fused point cloud data; writing the multi-frame fusion point cloud information into video usability information (VUI) of the fused point cloud data; writing the multi-frame fusion point cloud information into a media file of the fused point cloud data.
  • the data processing apparatus provided in this embodiment is set to implement the data processing method of the embodiment shown in FIG. 1 , and the implementation principle and technical effect of the data processing apparatus provided in this embodiment are similar, and are not repeated here.
  • FIG. 8 is a structural block diagram of another data processing apparatus provided by an embodiment of the present application. This embodiment is executed by the receiving end. As shown in FIG. 8 , the data processing apparatus in this embodiment includes: a parser 410 and a processor 420 .
  • the parser 410 is configured to parse the media stream of the fused point cloud data to obtain multi-frame fused point cloud information
  • the processor 420 is configured to separately process the fused point cloud data according to the multi-frame fused point cloud information.
  • The data processing device applied to the receiving end further includes: a receiver configured to receive, before the media stream of the fused point cloud data is parsed to obtain the multi-frame fusion point cloud information, the media stream of the fused point cloud data sent by the sending end.
  • The data processing device applied to the receiving end further includes: a reader configured to read, before the media stream of the fused point cloud data is parsed to obtain the multi-frame fusion point cloud information, a locally pre-stored media file of the fused point cloud data.
  • the multi-frame fusion point cloud information includes one of the following parameters: a multi-frame fusion coding mode flag; a transmission method of the fusion point cloud data; a consumption method of the fusion point cloud data.
  • the components of the fused point cloud data include: geometric information after fusion and attribute information after fusion; the attribute information after fusion includes frame number information and non-frame number attribute information after fusion.
  • The transmission mode of the fused point cloud data includes one of the following: transmitting the fused geometric information first and then the fused attribute information; transmitting the fused geometric information and the fused attribute information simultaneously; transmitting the frame number information and the fused geometric information first and then the fused non-frame-number attribute information; transmitting the fused geometric information and the fused non-frame-number attribute information first and then the frame number information; transmitting, in sequence, the fused geometric information, the fused non-frame-number attribute information, and the frame number information.
  • The consumption mode of the fused point cloud data includes one of the following: preferentially consuming the fused geometric information; preferentially consuming the fused non-frame-number attribute information; preferentially consuming the fused geometric information and the fused non-frame-number attribute information; preferentially consuming the frame number information and the fused geometric information; preferentially consuming the frame number information.
  • The correspondence between the multi-frame fused point cloud information and the fused point cloud data includes one of the following:
  • the multi-frame fused point cloud information corresponds to one piece of fused point cloud data;
  • the multi-frame fused point cloud information corresponds to each component of the fused point cloud data respectively;
  • at least one parameter of the multi-frame fused point cloud information corresponds to one piece of fused point cloud data, and at least one parameter of the multi-frame fused point cloud information corresponds to each component of the fused point cloud data respectively.
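To illustrate the three correspondence options, a sketch in which the information is kept once for the whole fused point cloud, once per component, or as a mix with per-component overrides. The dictionary layout and the fallback rule are assumptions made for this example only.

```python
# Option 1: one record for the whole fused point cloud.
cloud_level = {"combine_frame_coding_enabled_flag": 1, "transfer_priority": 0}

# Option 2: one record per component.
component_level = {
    "geometry":     {"transfer_priority": 0},
    "frame_number": {"transfer_priority": 1},
    "color":        {"transfer_priority": 2},
}

# Option 3: some parameters cloud-wide, others per component.
mixed = {"cloud": {"combine_frame_coding_enabled_flag": 1, "transfer_priority": 0},
         "components": {"color": {"transfer_priority": 2}}}


def effective_transfer_priority(component: str, info: dict) -> int:
    """Per-component value wins; otherwise fall back to the cloud-wide default."""
    override = info["components"].get(component, {})
    return override.get("transfer_priority", info["cloud"]["transfer_priority"])


print(effective_transfer_priority("color", mixed))     # 2 (component-level override)
print(effective_transfer_priority("geometry", mixed))  # 0 (cloud-wide default)
```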
  • The data processing apparatus provided in this embodiment is configured to implement the data processing method of the embodiment shown in FIG. 6, and the implementation principles and technical effects of the data processing apparatus provided in this embodiment are similar, which will not be repeated here.
  • FIG. 9 is a schematic structural diagram of a communication node provided by an embodiment of the present application.
  • The communication node provided by this application includes: a processor 510, a memory 520 and a communication module 530.
  • The number of processors 510 in the communication node may be one or more, and one processor 510 is taken as an example in FIG. 9.
  • The number of memories 520 in the communication node may be one or more, and one memory 520 is taken as an example in FIG. 9.
  • The processor 510, the memory 520 and the communication module 530 of the communication node may be connected by a bus or in other ways, and the connection by a bus is taken as an example in FIG. 9.
  • the communication node is a sender, where the sender may be a client or a server.
  • the client may be a terminal side (eg, user equipment), or may be a network side (eg, a base station).
  • the communication node may also be a device in a video application, such as a mobile phone, a computer, a server, a set-top box, a portable mobile terminal, a digital video camera, a television broadcasting system device, and the like.
  • As a computer-readable storage medium, the memory 520 can be configured to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the communication node in any embodiment of the present application (for example, the determination module 310, the writer 320 and the transmitter 330 in the data processing apparatus).
  • the memory 520 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of a communication node, and the like.
  • memory 520 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some instances, memory 520 may further include memory located remotely from processor 510, which may be connected to the communication node through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the communication module 530 is configured to perform communication interaction between two communication nodes.
  • The communication node provided above can be configured to execute the data processing method applied to the sending end provided by any of the above embodiments, and has corresponding functions and effects.
  • In an embodiment, in a case where the communication node is a receiving end, the communication node provided above may be configured to execute the data processing method applied to the receiving end provided by any of the above embodiments, and has corresponding functions and effects.
  • Embodiments of the present application further provide a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to execute a data processing method applied to a sending end, the method including: determining multi-frame fused point cloud information; writing the multi-frame fused point cloud information into a media stream of fused point cloud data; and sending the media stream of the fused point cloud data to a receiving end according to the multi-frame fused point cloud information.
  • the storage medium may be a non-transitory storage medium.
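A compact sender-side sketch of the three recited steps (determine, write, send). The one-line header used to carry the information and the transport stub are assumptions; a real implementation would use the media-file or bitstream signalling described elsewhere in this application.

```python
# Illustrative sender flow: determine info -> write it into the media stream ->
# send component sub-streams in the signalled order. Framing and transport are assumptions.

def determine_info() -> dict:
    # In practice derived from preset configuration, channel state or receiver requests.
    return {"combine_frame_coding_enabled_flag": 1,
            "send_order": ["geometry", "frame_number", "other_attributes"]}


def write_media_stream(info: dict, components: dict) -> dict:
    """Prefix every component sub-stream with the multi-frame fusion info header."""
    header = f"enabled={info['combine_frame_coding_enabled_flag']}\n".encode()
    return {name: header + data for name, data in components.items()}


def send_to_receiver(sub_streams: dict, send_order: list) -> None:
    for name in send_order:  # higher-priority components are sent first
        print(f"sending {name}: {len(sub_streams[name])} bytes")


components = {"geometry": b"G" * 16, "frame_number": b"F" * 4, "other_attributes": b"A" * 32}
info = determine_info()
send_to_receiver(write_media_stream(info, components), info["send_order"])
```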
  • Embodiments of the present application further provide a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to execute a data processing method applied to a receiving end, the method including: parsing a media stream of fused point cloud data to obtain multi-frame fused point cloud information; and processing the fused point cloud data according to the multi-frame fused point cloud information.
  • the storage medium may be a non-transitory storage medium.
  • Those skilled in the art should understand that the term user equipment encompasses any suitable type of wireless user equipment, such as a mobile telephone, a portable data processing device, a portable web browser or a vehicle-mounted mobile station.
  • the various embodiments of the present application may be implemented in hardware or special purpose circuits, software, logic, or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software that may be executed by a controller, microprocessor or other computing device, although the application is not limited thereto.
  • Embodiments of the present application may be implemented by the execution of computer program instructions by a data processor of a mobile device, for example in a processor entity, or by hardware, or by a combination of software and hardware.
  • Computer program instructions may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages.
  • the block diagrams of any logic flow in the figures of the present application may represent program steps, or may represent interconnected logic circuits, modules and functions, or may represent a combination of program steps and logic circuits, modules and functions.
  • Computer programs can be stored on memory.
  • The memory may be of any type suitable for the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, Read-Only Memory (ROM), Random Access Memory (RAM), optical memory devices and systems (Digital Video Disc (DVD) or Compact Disk (CD)), etc.
  • Computer-readable media may include non-transitory storage media.
  • The data processor may be of any type suitable for the local technical environment, such as, but not limited to, a general purpose computer, a special purpose computer, a microprocessor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a programmable logic device (Field-Programmable Gate Array, FPGA) and a processor based on a multi-core processor architecture.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Computer And Data Communications (AREA)

Abstract

本申请提供一种数据处理方法、装置、通信节点和存储介质。该数据处理方法包括:确定多帧融合点云信息;将多帧融合点云信息写入融合点云数据的媒体码流;按照多帧融合点云信息分别将融合点云数据的媒体码流发送至接收端。

Description

数据处理方法、装置、通信节点和存储介质
本申请要求在2020年12月04日提交中国专利局、申请号为202011407418.6的中国专利申请的优先权,该申请的全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信,例如涉及一种数据处理方法、装置、通信节点和存储介质。
背景技术
随着三维扫描技术和系统日趋成熟,三维(3-dimension,3D)扫描仪仪器制造成本降低,精度提高,基于实际物体表面的三维坐标信息的点云(Point Cloud)数据可以快速并精确的获取并存储,这也使得点云数据逐渐在各种图像处理领域中得到广泛应用。
点云数据是物体通过三维扫描后获得的三维坐标数据信息,同时,点云数据还可能记录对应点上的颜色、反射率等属性信息(Attribute)。随着三维扫描系统精度和速度的提升,扫描得到的点云数据的数据量将达到几百万甚至更大的数量级,目前,海量点云数据为计算机存储、处理和传输带来了沉重的负担。
点云的压缩算法已经有了较为系统的研究,可以分为基于视频的点云编码(Video-based Point Cloud Coding,V-PCC)和基于几何的点云编码(Geometry-based Point Cloud Coding,G-PCC)。其中,G-PCC的压缩方法是将点云数据转化为几何信息和属性信息等组成部分,再分别将几何信息和属性信息编码为点云数据码流,其中几何信息是点的位置信息,使用八叉树(Octree)形式描述并编码,属性信息是点的颜色和反射率等多个不同的种类。因为在一定时间内时域相邻的点云图像的八叉树描述可能保持不变,也就是说,这几帧点云图像可以共用一个相同的八叉树描述,所以,为了提高压缩效率,可以将这一时间内的不同点云图像在八叉树中相同位置的叶子节点(Leaf Node)上的点进行合并,并且分别给这些点标记为不同的帧编号(Frame Index)以示区别,其中帧编号是一种属性信息,这样就可以将多帧点云数据进行融合编码(Combine Frame Coding),得到融合点云数据。
由于融合点云数据中包含的点云帧数和内容具有很大的不确定性,用户可能对拆分后的多帧点云数据和融合后的单帧点云数据都有消费需求。同时,由于网络条件、硬件设备的情况不同,其它非帧编号的属性信息(比如,颜色、反射率等)的传输和消费也可以进行灵活的动态调整。而目前的点云数据封装、 传输和消费方法未考虑融合点云数据所对应码流的不同消费需求。因此,如果所有融合后的点云数据按照相同的策略进行传输,未被优先使用的信息占用了大量网络或解码资源,不利于高效的传输,也不利于用户灵活使用。
发明内容
本申请实施例提供一种数据处理方法、装置、通信节点和存储介质,提高了融合点云数据的传输效率,从而便于用户灵活使用。
本申请实施例提供一种数据处理方法,包括:
确定多帧融合点云信息;
将所述多帧融合点云信息写入融合点云数据的媒体码流;
按照所述多帧融合点云信息分别将所述融合点云数据的媒体码流发送至接收端。
本申请实施例提供一种数据处理方法,包括:
解析融合点云数据的媒体码流,得到多帧融合点云信息;
根据所述多帧融合点云信息分别对所述融合点云数据进行处理。
本申请实施例提供一种数据处理装置,包括:
确定模块,设置为确定多帧融合点云信息;
写入器,设置为将所述多帧融合点云信息写入融合点云数据的媒体码流;
发送器,设置为按照所述多帧融合点云信息分别将所述融合点云数据的媒体码流发送至接收端。
本申请实施例提供一种数据处理装置,包括:
解析器,设置为解析融合点云数据的媒体码流,得到多帧融合点云信息;
处理器,设置为根据所述多帧融合点云信息分别对所述融合点云数据进行处理。
本申请实施例提供一种设备,包括:通信模块,存储器,以及一个或多个处理器;
所述通信模块,配置为在两个通信节点之间进行通信交互;
所述存储器,配置为存储一个或多个程序;
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现上述任一实施例所述的方法。
本申请实施例提供一种存储介质,所述存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述任一实施例所述的方法。
附图说明
图1是本申请实施例提供的一种数据处理方法的流程图;
图2是本申请实施例提供的一种对融合点云数据进行解码呈现的示意图;
图3是本申请实施例提供的另一种对融合点云数据进行解码呈现的示意图;
图4是本申请实施例提供的又一种对融合点云数据进行解码呈现的示意图;
图5是本申请实施例提供的再一种对融合点云数据进行解码呈现的示意图;
图6是本申请实施例提供的另一种数据处理方法的流程图;
图7是本申请实施例提供的一种数据处理装置的结构框图;
图8是本申请实施例提供的另一种数据处理装置的结构框图;
图9是本申请实施例提供的一种通信节点的结构示意图。
具体实施方式
下文中将结合附图对本申请的实施例进行说明。以下结合实施例附图对本申请进行描述,所举实例仅用于解释本申请,并非用于限定本申请的范围。
以自动驾驶场景为例,在第五代移动通信技术(5th Generation,5G)时代,随着数据传输速度不断加快,一种更可靠的自动驾驶方案可能会涉及到多方互联协作,动态行驶的车辆不断感知周围环境形成点云,点云数据发送到云端,云端根据已有的点云数据信息,对当前时刻所有车辆发送的数据进行比对分析后,更新点云数据,再将不同车辆需要的点云数据编码后发送给车辆。在此过程中,如果车辆使用多帧点云融合编码方式,可以将一定时间间隔内的点云一次性的传输给云端,同样的,云端也可以使用多帧点云融合编码方法将一定时间间隔内的点云一次性的传输给车辆,这样有利于提高编码和传输效率。云端在接收到点云数据后,可以优先解码几何信息和帧编号信息,以便获得拆分后的几何信息,再仅对几何信息进行分析处理后即可发送回给车辆。同理,车辆行驶过程中,可以优先将几何信息快速传回云端进行标记处理,提高处理效率。
在一实施例中,图1是本申请实施例提供的一种数据处理方法的流程图。 本实施例可以由发送端执行。其中,发送端用于发送消息和/或码流。同时,发送端可以为客户端,也可以为服务器端。其中,客户端可以为终端侧(比如,用户设备),也可以为网络侧(比如,基站)。如图1所示,本实施例包括:S110-S130。
S110、确定多帧融合点云信息。
在实施例中,多帧融合点云信息用于表征原始的点云数据是否使用了多帧融合编码方式,以及在使用了多帧融合编码方式之后,融合点云数据是否采用了不同的传输方式和/或消费方式。其中,融合点云数据指的是对多帧图像所对应的原始的点云数据进行融合编码之后得到的数据。在实施例中,消费指的是对点云码流的使用、处理、解码或解码后渲染呈现的操作。
在一实施例中,融合点云数据的组成部分包括:融合之后的几何信息和融合之后的属性信息;融合之后的属性信息包括:帧编号信息和融合之后的非帧编号属性信息。在一实施例中,融合点云数据中的组成部分可以包括:融合之后的几何信息和融合之后的帧编码信息,即不包括融合之后的非帧编号属性信息和其它信息。其中,几何信息是描述点云中每个点的空间位置的信息;属性信息是描述点云中每个点的附带信息,根据代表的信息内容可分成不同的种类,比如点的颜色、反射率、帧编号(Frame Index)等;其它信息可以是辅助性的信息,比如用户自定义的信息等。在实施例中,融合之后的几何信息指的是对点云数据中的几何信息进行融合编码之后得到的信息;融合之后的属性信息指的是对点云数据中的属性信息进行融合编码之后得到的信息。
在一实施例中,多帧融合点云信息包括下述参数之一:多帧融合编码方式标志;融合点云数据的传输方式;融合点云数据的消费方式。在实施例中,多帧融合编码方式标志用于表征是否使用了多帧融合编码方式,比如,在多帧融合编码方式标志为0时,表示原始的点云数据未使用多帧融合编码方式;在多帧融合编码方式标志为1时,表示原始的点云数据使用了多帧融合编码方式。融合点云数据的传输方式用于表征融合点云数据中每个组成部分的传输优先级;融合点云数据的消费方式用于表征融合点云数据中每个组成部分的处理优先级。在实施例中,融合之后的属性信息可以按照信息内容进行分类,然后不同种类的属性信息可以作为一个整体进行传输和/或消费,也可以每个种类独立进行传输和/或消费。
在一实施例中,在一实施例中,确定多帧融合点云信息,包括下述之一:
根据预设配置信息确定多帧融合点云信息;
在与当前信道和/或接收端建立连接之后,根据接收端请求信息确定多帧融 合点云信息;
在与当前信道和/或接收端建立连接之后,根据当前信道信息确定多帧融合点云信息;
在与当前信道和/或接收端建立连接之后,根据接收端属性信息确定多帧融合点云信息;
在与当前信道和/或接收端建立连接之后,根据当前信道信息和接收端属性信息确定多帧融合点云信息。
在实施例中,多帧融合点云信息可以是发送端从外部设备获取;也可以是发送端根据自身预先配置的预设配置信息直接进行设定;也可以是发送端根据接收端请求信息中携带的预设配置信息进行设定;还可以是发送端通过所获取的当前信道信息和/或接收端属性信息进行设定。在实施例中,根据预设配置信息确定多帧融合点云信息包括两种情况:其一,在发送端与当前信道和/或接收端建立连接初始化的情况下,根据预设配置信息确定多帧融合点云信息;在发送端未与当前信道和/或接收端建立连接初始化的情况下,根据预设配置信息确定多帧融合点云信息,其中,在发送端与当前信道和/或接收端建立连接初始化的情况,指的是发送端与接收端进行媒体通信的情况;在发送端未与当前信道和/或接收端建立连接初始化的情况,指的是发送端未与接收端进行媒体通信的情况。
在一实施例中,在发送端根据预设配置信息直接设定多帧融合点云信息的情况下,多帧融合点云信息可以是对原始的点云数据不使用多帧融合编码方式,也可以是使用多帧融合编码方式。同时,融合之后得到的融合点云数据可以采用相同的传输方式和/或消费方式;也可以是使用多帧融合编码方式之后得到的融合点云数据采用不同的传输方式和/或消费方式。在实施例中,预设配置信息可以包括发送端自身的设备条件、网络条件等参数,即发送端根据自身的设备参数配置多帧融合点云信息。
在一实施例中,在发送端根据接收端请求信息设定多帧融合点云信息的情况下,多帧融合点云信息可以是不使用多帧融合编码方式;也可以是使用多帧融合编码方式。在使用多帧融合编码方式的情况下,融合之后得到的融合点云数据可以采用相同的传输方式和/或消费方式;也可以是使用多帧融合编码方式的情况下,融合之后的融合点云数据可以采用不同的传输方式和/或消费方式。
在实施例中,发送端可以是在与当前信道和/或者接收端建立连接初始化时静态设定多帧融合点云信息;也可以是在建立连接后,根据接收端请求信息动态设定多帧融合点云信息;也可以是在建立连接后,根据当前信道信息和/或接 收端属性信息动态设定多帧融合点云信息。其中,当前信道信息和/或接收端属性信息包括但不限于网络条件、用户需求、设备性能等。
S120、将多帧融合点云信息写入融合点云数据的媒体码流。
在实施例中,多帧融合点云信息可以写在媒体码流的不同部分,比如,媒体码流中点云信源数据部分,媒体码流中系统层数据部分等。
多帧融合点云信息中的不同参数可以以整体的形式写在媒体码流的同一部分,也可以各自独立的写在媒体码流的不同部分,也可以将几个参数进行组合,分别写在不同部分。
S130、按照多帧融合点云信息分别将融合点云数据的媒体码流发送至接收端。
在实施例中,将含有多帧融合点云信息的媒体码流发送至接收端。在实际通信过程中,可以根据多帧融合点云信息分别对融合点云数据的媒体码流进行发送。比如,先发送高优先级的信息,也可以采用更可靠的信道传输高优先级的信息。
在一实施例中,应用于发送端的数据处理方法,还包括:按照多帧融合点云信息分别将融合点云数据存储为媒体文件。
在实施例中,媒体码流可以根据多帧融合点云信息将融合点云数据存储为媒体文件,比如,将几何信息、帧编号信息和非帧编号属性信息独立放置于媒体文件的不同位置,以达到可以独立消费几何信息和帧编号信息的目的。
在一实施例中,融合点云数据的传输方式包括下述之一:优先传输融合之后的几何信息,再传输融合之后的属性信息;同时传输融合之后的几何信息和融合之后的属性信息;优先传输帧编号信息和融合之后的几何信息,再传输融合之后的非帧编号属性信息;优先传输融合之后的几何信息和融合之后的非帧编号属性信息,再传输帧编号信息;依次传输融合之后的几何信息,融合之后的非帧编号属性信息,和帧编号信息。
在一实施例中,融合点云数据的消费方式包括下述之一:优先消费融合之后的几何信息;优先消费融合之后的非帧编号属性信息;优先消费融合之后的几何信息和融合之后的非帧编号属性信息;优先消费帧编号信息和融合之后的几何信息;优先消费帧编号信息。
在一实施例中,对多帧融合编码方式标志、融合点云数据的传输方式和消费方式之间的组合方式进行说明,示例性地,可以包括下述组合方式:
方式一:发送端采用多帧融合编码方式,并且优先传输融合之后的几何信 息,再传输融合之后的属性信息,其中,融合之后的属性信息包括帧编号信息。
方式二:发送端采用多帧融合编码方式,并设定优先消费融合之后的几何信息,则设定优先消费融合之后的几何信息的标记。融合之后的几何信息的传输方式包括:融合之后的几何信息和融合之后的属性信息可以同时传输;也可以优先传输融合之后的几何信息,再传输融合之后的属性信息;也可以优先传输帧编号信息和融合之后的几何信息,再传输融合之后的非帧编号属性信息。
方式三:接收端向发送端发送先消费融合之后的几何信息的请求,比如,接收端需要优先显示融合之后的几何信息,发送端根据接收端的请求,采用多帧融合编码方式,并且设定优先消费融合之后的几何信息的标记。融合之后的几何信息的传输方式包括:发送端可以同时传输融合之后的几何信息和融合之后的属性信息;也可以先传输帧编号信息和融合之后的几何信息,再传输融合之后的非帧编号属性信息。
方式四:在网络条件稳定,比如传输速率高,无误码和丢包的情况下,发送端可以不使用多帧融合编码方式,直接将原始的点云数据发送至接收端;也可以在采用多帧融合编码方式的情况下,将融合之后的几何信息和融合之后的属性信息一起发送至接收端。
在网络条件不稳定,比如传输速率不稳定,有误码和丢包的情况下,发送端可以使用多帧融合编码方式,并且采用优先传输融合之后的几何信息,再传输融合之后的属性信息的方式;或者将融合之后的几何信息使用可靠性高的信道传输,融合之后的属性信息采用可靠性低的信道传输。
方式五:采用多帧融合编码方式的多帧点云数据中,帧编号信息是融合之后的属性信息的一种,在将融合点云数据拆分为原始的多帧点云数据中使用。因此,帧编号信息是在融合之后的几何信息和/或融合之后的非帧编号属性信息之后消费,也就是说优先传输融合之后的几何信息和/或融合之后的非帧编号属性信息,再传输帧编号信息;也可以优先传输融合之后的几何信息,再传输融合之后的非帧编号属性信息,最后传输帧编号信息;也可以先传输帧编号信息和融合之后的几何信息,再传输融合之后的非帧编号属性信息。
方式六:发送端可以根据预设配置信息、用户需求、信道条件或者接收端信息,对不同的属性信息采用不同的传输优先级,或者对不同的属性信息采用不同的信道传输。
方式七:发送端可以根据预设配置信息、用户需求、信道条件或者接收端信息,选择发送或者不发送其它信息。
在一实施例中,多帧融合点云信息和融合点云数据之间的对应关系包括下 述之一:
多帧融合点云信息与一个融合点云数据对应;
多帧融合点云信息分别与融合点云数据中的每个组成部分对应;
多帧融合点云信息中的至少一个参数与一个融合点云数据对应,以及多帧融合点云信息中的至少一个参数分别与融合点云数据中的每个组成部分对应。
在一实施例中,本实施例是对上述实施例中多帧融合点云信息的实现方式进行说明。其中,多帧融合点云信息包括下述参数:多帧融合编码方式标志;融合点云数据的传输方式;融合点云数据的消费方式。在实施例中,对多帧融合点云信息和融合点云数据之间的对应关系为多帧融合点云信息与一个融合点云数据对应说明,即多帧融合点云信息针对一个融合点云数据进行设置。示例性地,多帧融合点云信息的实现方式如下:
Figure PCTCN2021128213-appb-000001
其中,combine_frame_coding_enabled_flag用于表示多帧融合编码方式标志,比如,1表示使用了多帧融合编码方式,0表示不使用多帧融合编码方式。
combine_frame_decoding_order用于表示融合点云数据(即融合后的几何信息、属性信息和其它信息)所采用的消费方式,比如解码呈现的顺序。示例性地,combine_frame_decoding_order等于0表示在接收帧编号信息和融合之后的几何信息之后,立即进行解码呈现,呈现的是拆分后的多帧点云数据,支持拆分后的几何信息优先消费,即呈现的是解码且拆分后的几何信息;combine_frame_decoding_order等于1表示接收到融合之后的几何信息后,应立即进行解码呈现,即呈现的是解码后的融合的几何信息; combine_frame_decoding_order等于2表示接收到所有信息后再进行解码呈现,呈现的是多帧的带有属性信息的点云数据,即呈现的是解码后且拆分的几何信息和非帧编号属性信息;combine_frame_decoding_order等于3表示接收到融合之后的几何信息和融合之后的非帧编号属性信息后立即进行解码呈现,即呈现的是解码后融合的几何信息和非帧编号属性信息的单帧点云数据。
在一实施例中,图2是本申请实施例提供的一种对融合点云数据进行解码呈现的示意图。本实施例中是对combine_frame_decoding_order等于0的情况进行描述。如图2所示,对携带融合点云数据的码流(即多帧融合编码码流)进行解码,优先对融合之后的几何信息所对应的码流(即几何码流)和帧编号信息所对应的码流(即帧编号码流)进行解码,然后根据帧编号信息将融合之后的几何信息拆分成多帧的点云数据的几何信息,并呈现解码拆分后得到的几何信息。
在一实施例中,图3是本申请实施例提供的另一种对融合点云数据进行解码呈现的示意图。本实施例中是对combine_frame_decoding_order等于1的情况进行描述。如图3所示,对携带融合点云数据的码流进行解码,优先对融合之后的几何信息所对应的码流进行解码,呈现出解码后的融合的几何信息。
在一实施例中,图4是本申请实施例提供的又一种对融合点云数据进行解码呈现的示意图。本实施例中是对combine_frame_decoding_order等于2的情况进行描述。如图4所示,对携带融合点云数据的码流进行解码,且在接收到所有信息后再进行解码,呈现出解码并拆分后的几何信息和非帧编号属性信息。
在一实施例中,图5是本申请实施例提供的再一种对融合点云数据进行解码呈现的示意图。本实施例中是对combine_frame_decoding_order等于3的情况进行描述。如图5所示,对携带融合点云数据的码流进行解码,且接收到融合之后的几何信息和融合之后的非帧编号属性信息后立即进行解码呈现,呈现的是解码后的融合的几何信息和非帧编号属性信息的单帧点云数据。
在实施例中,可以将非帧编号属性信息不作为一个整体,而是根据非帧编号属性信息的种类拆分成独立的信息,针对它们设置不同的解码呈现方式。比如,可以让颜色属性信息与几何信息一起优先解码呈现。
combine_frame_transfer_priority是融合点云数据所采用的传输顺序,即使用不同的取值表示不同的传输顺序。表1是本申请实施例提供的一种融合点云数据的传输方式的示意表。如表1所示。在combine_frame_transfer_priority等于0的情况下,依次传输融合之后的几何信息、帧编号信息和融合之后的属性信息;在combine_frame_transfer_priority等于1的情况下,依次传输融合之后的几何信息、融合之后的属性信息和帧编号信息。
表1一种融合点云数据的传输方式的示意表
Figure PCTCN2021128213-appb-000002
在实施例中,可以将非帧编号属性信息不作为一个整体,而是根据属性种类拆分成独立的信息,针对它们设置不同的传输顺序。比如,可以让颜色属性信息优先单独传输。
在一实施例中,本实施例是对上述实施例中多帧融合点云信息的实现方式进行说明。其中,多帧融合点云信息包括下述参数:多帧融合编码方式标志;融合点云数据的传输方式;融合点云数据的消费方式。在实施例中,对多帧融合点云信息和融合点云数据之间的对应关系为多帧融合点云信息分别与所述融合点云数据中的每个组成部分对应进行说明,即融合点云数据中的不同组成部分都有各自对应的多帧融合点云信息。示例性地,多帧融合点云信息的实现方式如下:
Figure PCTCN2021128213-appb-000003
其中,combine_frame_coding_enabled_flag表示多帧融合编码方式标志,比如,1表示使用了多帧融合编码方式,0表示不使用多帧融合编码方式。
combine_frame_sample_decoding_order是表示融合点云数据(即融合后的几何信息、属性信息和其它信息)所采用的消费方式,比如解码呈现的顺序。示 例性地,combine_frame_sample_decoding_order为0时,表示最先解码呈现,即接收后应立即进行解码呈现,数值越高代表解码呈现顺序越靠后,不同的信息可以采用不同的解码呈现顺序,也可以采用相同的解码顺序。比如,帧编号信息和融合之后的几何信息最先解码呈现,那么帧编号信息和融合之后的几何信息的combine_frame_sample_decoding_order均设置为0,融合之后的非帧编号属性信息的combine_frame_sample_decoding_order均设置为1,如图2所示。
combine_frame_sample_transfer_priority是融合后点云数据的信息采用的传输顺序,使用不同的取值表示不同的传输顺序。combine_frame_sample_transfer_priority为0时,表示传输优先级最高,数值越高表示优先级越低。比如,融合之后的几何信息的combine_frame_sample_transfer_priority设置为0,帧编号信息的combine_frame_sample_transfer_priority设置为1,融合之后的非帧编号属性信息的combine_frame_sample_transfer_priority均设置为2。也可以将融合之后的非帧编号属性信息根据种类类型独立设置,比如颜色信息的combine_frame_sample_transfer_priority设为3。
在一实施例中,本实施例是对上述实施例中多帧融合点云信息的实现方式进行说明。其中,多帧融合点云信息包括下述参数:多帧融合编码方式标志;融合点云数据的传输方式;融合点云数据的消费方式。在实施例中,对多帧融合点云信息和融合点云数据之间的对应关系为多帧融合点云信息中的至少一个参数与一个融合点云数据对应,以及多帧融合点云信息中的至少一个参数分别与融合点云数据中的每个组成部分对应进行说明,即多帧融合点云信息可以一部分针对一个多帧融合点云数据整体设置,一部分针对融合点云数据中的不同组成部分设定。示例性地,多帧融合点云信息的实现方式如下:
Figure PCTCN2021128213-appb-000004
Figure PCTCN2021128213-appb-000005
其中,combine_frame_coding_enabled_flag用于表示多帧融合编码方式标志,比如,1表示使用了多帧融合编码方式,0表示不使用多帧融合编码方式。
combine_frame_decoding_order用于表示融合点云数据(即融合后的几何信息、属性信息和其它信息)所采用的消费方式,比如解码呈现的顺序。示例性地,combine_frame_decoding_order等于0表示在接收到帧编号信息和融合之后的几何信息后应立即进行解码呈现,呈现的是拆分后的多帧点云数据,支持几何信息优先消费;combine_frame_decoding_order等于1表示接收到融合之后的几何信息后,应立即进行解码呈现,呈现的是解码后的融合的几何信息;combine_frame_decoding_order等于2表示接收到所有信息后再进行解码呈现,呈现的是多帧的带有属性信息的点云数据;combine_frame_decoding_order等于3表示接收到融合后的几何信息和融合之后的非帧编号属性信息后立即进行解码呈现,呈现的是解码后融合的具有属性信息的单帧点云数据。
其中,可以将融合之后的非帧编号属性信息不作为一个整体,而是拆分成不同种类的独立的信息,分别对它们设置不同的解码呈现方式。比如,可以将颜色信息与几何信息一起优先解码呈现。
其中,combine_frame_sample_transfer_priority用于表示融合点云数据所采用的传输顺序,使用不同的取值表示不同的传输顺序。示例性地,combine_frame_sample_transfer_priority为0时,表示传输优先级最高,数值越高表示优先级越低。比如,融合之后的几何信息的combine_frame_sample_transfer_priority设置为0,帧编号信息的combine_frame_sample_transfer_priority设置为1,融合之后的非帧编号属性信息的combine_frame_sample_transfer_priority均设置为2。当然,也可以将融合之后的非帧编号属性信息进行独立设置,示例性地,将融合之后的颜色信息的combine_frame_sample_transfer_priority设为3。
在一实施例中,将多帧融合点云信息写入媒体码流,包括下述之一:
将多帧融合点云信息写入媒体文件中的轨道信息;
将多帧融合点云信息写入融合点云数据的补充增强信息;
将多帧融合点云信息写入融合点云数据的视频应用信息。
在一实施例中,本实施例根据消费和/或传输融合点云数据的不同需求,将 多帧融合点云信息写入媒体封装文件中。示例性地,本实施例采用国际标准化组织/国际电工委员会14496-12国际标准化组织基础媒体文件格式(International Organization for Standardization/International Electrotechnical Commission 14496-12 International Organization for Standardization Base Media File Format,ISO/IEC 14496-12 ISO BMFF)对融合点云数据进行封装,该封装方式是多帧点云融合信息的一种实现方式,但不限于此种封装方式。
在实施例中,多帧融合点云信息包括下述参数之一:多帧融合编码方式标志,融合点云数据的传输方式,融合点云数据的消费方式。
在实施例中,融合点云数据中融合之后的几何信息和融合之后的属性信息分别存放在不同的媒体轨道中,通过定义不同类型(比如,采用四字代码标识)的样本入口(sample entry)识别轨道中存放融合点云数据的数据类型,比如,融合之后的几何信息、帧编号信息。并且,在样本入口中给出指示信息用于表征融合点云数据(即融合编码之后的几何信息、属性信息和其它信息)所采用的传输方式和/消费方式。具体的,融合点云数据存放在该媒体轨道的样本中。示例性地,媒体轨道中的指示信息实现方式如下:
Figure PCTCN2021128213-appb-000006
其中,decoding_order_flag用于指示该媒体轨道中样本的解码呈现顺序的标志,比如,1表示按顺序解码呈现,0表示按缺省方式进行解码呈现。
sample_decoding_order用于表示该媒体轨道中样本的解码呈现的顺序。示例性地,0表示最先解码呈现,即接收后应立即进行解码呈现,数值越高代表解码呈现顺序越靠后,不同类型的信息可以采用不同的解码呈现顺序,也可以采用相同的解码呈现顺序。比如,帧编号信息和融合之后的几何信息最先解码呈现,那么帧编号信息和融合之后的几何信息的sample_decoding_order均设置为0,融 合之后的非帧编号属性信息的sample_decoding_order均设置为1,如图2所示。
transfer_priority_flag用于指示该媒体轨道中样本的传输顺序的标志,比如,1表示按顺序传输样本,0表示按缺省方式传输样本。
sample_transfer_priority用于表示该媒体轨道中样本的传输顺序,使用不同的取值表示不同的传输顺序。示例性,0表示传输优先级最高,数值越高表示优先级越低。比如,几何信息的sample_transfer_priority设置为0,帧编号信息的sample_transfer_priority设置为1,融合之后的非帧编号属性信息的sample_transfer_priority均设置为2。当然,也可以将不同类型的融合之后的非帧编号属性信息独立设置,比如将颜色信息的sample_transfer_priority设为3。
在实施例中,点云数据采用多帧融合的编码方式,则融合点云数据的媒体文件需要给出指示信息,以指示该媒体文件中点云数据采用多帧融合编码方式进行编码。示例性地,该多帧点云数据的融合编码的指示信息具体表示如下:
combine_frame_coding_enabled_flag,表示多帧融合编码方式标志,比如,1表示使用了多帧融合编码方式,0表示不使用多帧融合编码方式。
在一实施例中,该多帧点云数据的融合编码的指示信息可以在文件层级中指示,比如,在媒体信息数据盒(MediaInformationBox)下相关的媒体头数据盒(MediaHeaderBox)中指示,或者在文件层级的其他数据盒(Box)中指示。
在一实施例中,该多帧点云数据的融合编码的指示信息也可以在媒体轨道层级中指示,比如,在相应的样本入口(sample entry)中指示。
在实施例中,融合点云数据中的几何信息和不同类型的属性信息可存放在一个或多个不同的媒体,比如,两种不同类型的属性信息存放在同一个媒体轨道中,则该媒体轨道中的融合点云数据使用相同的传输方式和/或消费方式。
融合点云数据的消费方式和/或传输方式可以根据不同的场景或需求发生变化,则可以采用动态定时元数据轨道对一个或多个媒体轨道中的融合点云数据的消费方式和/或传输方式进行动态的设置。通过‘dydt’识别描述动态的融合点云数据的消费方式和/或传输方式的动态定时元数据轨道,具体实施方式如下:
Figure PCTCN2021128213-appb-000007
在实施例中,通过DynamicDecodingAndTransferSample()指示该动态定时元数据轨道引用到的存放融合点云数据的媒体轨道中每一个sample的消费方式和/或传输方式,具体的实施方式如下:
Figure PCTCN2021128213-appb-000008
其中,dynamic_order_flag指示样本的消费方式动态变化,0表示消费方式不发生变化,1表示该消费方式发生变化;
sample_transfer_priority指示样本的传输方式动态变化,0表示传输方式不发生变化,1表示该传输方式发生变化。
在一实施例中,本实施例对将多帧融合点云信息写入媒体码流进行说明。在实施例中,将多帧融合点云信息写入媒体码流,包括:将多帧融合点云信息写入融合点云数据的补充增强信息(Supplemental Enhancement Information,SEI)。示例性地,将多帧融合点云信息写入融合点云数据的SEI中的实现方式包括:
Figure PCTCN2021128213-appb-000009
在实施例中,combine_frame_coding_info()的实现方式,可以如下所示:
Figure PCTCN2021128213-appb-000010
在实施例中,combine_frame_coding_info()的实现方式可以为上述任一实施例中的任一形式或者组合形式,在此不再赘述。
在一实施例中,本实施例对将多帧融合点云信息写入媒体码流进行说明。在实施例中,将多帧融合点云信息写入媒体码流,包括:将多帧融合点云信息写入融合点云数据的视频应用信息(Video Usability Information,VUI)。示例性地,将多帧融合点云信息写入融合点云数据的VUI中的实现方式包括:
Figure PCTCN2021128213-appb-000011
在实施例中,combine_frame_coding_info_flag取值等于1时,表示后续有多帧融合点云信息;combine_frame_coding_info()可以是上述实施例的任意形式或组合形式。
在一实施例中,图6是本申请实施例提供的另一种数据处理方法的流程图。本实施例可以由接收端执行。其中,接收端用于接收消息和/或码流。同时,接收端可以为客户端,也可以为服务器端。其中,客户端可以为终端侧(比如,用户设备),也可以为网络侧(比如,基站)。如图6所示,本实施例包括:S210-S220。
S210、解析融合点云数据的媒体码流,得到多帧融合点云信息。
在实施例中,对融合点云数据和多帧融合点云信息的解释,见上述实施例中的描述,在此不再赘述。
在实施例中,接收端在获取到融合点云数据的媒体码流之后,对媒体码流进行解析,以获取到多帧融合点云信息。在实施例中,若在媒体码流中存在多帧融合点云信息,则可以从媒体码流中提取出多帧融合点云信息。在一实施例中,多帧融合点云信息包括下述参数之一:多帧融合编码方式标志;融合点云数据的传输方式;融合点云数据的消费方式。其中,多帧融合编码方式标志用于表征原始的点云数据是否使用了多帧融合编码方式,若使用了多帧融合编码方式,融合点云数据(即融合编码之后的几何信息、属性信息和其它信息)是否采用了不同的传输方式和/或消费方式。
在实施例中,多帧融合点云信息中的所有参数可以分别存储在媒体码流或媒体文件的不同位置,也可以存储在媒体数据或媒体文件的相同位置。
在实施例中,多帧融合点云信息的实现方式,可参见上述实施例中的描述,在此不再赘述。
S220、根据多帧融合点云信息分别对融合点云数据进行处理。
在实施例中,对媒体码流的处理是先判断是否使用了多帧融合编码方式,如果使用了多帧融合编码方式,再根据多帧融合点云信息中融合之后的几何信息、属性信息和其它信息所采用的传输方式和/或消费方式对媒体数据进行处理。比如,多帧融合点云信息中指示几何信息优先解码呈现,可以先从媒体码流或媒体文件中获取并解析几何信息;如果帧编号信息没有指示优先解码呈现的话,那么融合点云数据中的几何信息解码后直接呈现;如果帧编号信息也被指示优先解码呈现,那么获取并解析帧编号信息,根据帧编号信息对多帧融合后的几何信息进行解码并拆分,呈现拆分后的几何信息。
其中,可以忽略多帧融合点云信息对媒体码流进行常规处理,即获取所有融合点云数据,分别解析出几何信息、帧编号信息、非帧编号属性信息和其它信息,根据帧编号信息对几何信息和非帧编号属性信息和其它信息进行拆分,将拆分后的点云信息分别呈现。
在一实施例中,在解析融合点云数据的媒体码流,得到多帧融合点云信息之前,还包括:接收发送端发送的融合点云数据的媒体码流。
在一实施例中,在解析融合点云数据的媒体码流,得到多帧融合点云信息之前,还包括:读取本地预先存储的融合点云数据的媒体文件。在实施例中,在接收端直接通过读取媒体文件来获取融合点云数据的情况下,解析融合点云数据的媒体码流的过程,指的是对存储有融合点云数据的媒体文件进行解析的过程。
在一实施例中,融合点云数据的组成部分包括:融合之后的几何信息和融合之后的属性信息;融合之后的属性信息包括:帧编号信息和融合之后的非帧编号属性信息。
在一实施例中,融合点云数据的传输方式包括下述之一:优先传输融合之后的几何信息,再传输融合之后的属性信息;同时传输融合之后的几何信息和融合之后的属性信息;优先传输帧编号信息和融合之后的几何信息,再传输融合之后的非帧编号属性信息;优先传输融合之后的几何信息和融合之后的非帧编号属性信息,再传输帧编号信息;依次传输融合之后的几何信息,融合之后的非帧编号属性信息,和帧编号信息。
在一实施例中,融合点云数据的消费方式包括下述之一:优先消费融合之后的几何信息;优先消费融合之后的非帧编号属性信息;优先消费融合之后的几何信息和融合之后的非帧编号属性信息;优先消费帧编号信息和融合之后的几何信息;优先消费帧编号信息。
在一实施例中,多帧融合点云信息和融合点云数据之间的对应关系包括下述之一:
多帧融合点云信息与一个融合点云数据对应;
多帧融合点云信息分别与融合点云数据中的每个组成部分对应;
多帧融合点云信息中的至少一个参数与一个融合点云数据对应,以及多帧融合点云信息中的至少一个参数分别与融合点云数据中的每个组成部分对应。
在一实施例中,图7是本申请实施例提供的一种数据处理装置的结构框图。本实施例由发送端执行。如图7所示,本实施例中的数据处理装置包括:确定模块310、写入器320和发送器330。
其中,确定模块310,设置为确定多帧融合点云信息;
写入器320,设置为将多帧融合点云信息写入融合点云数据的媒体码流;
发送器330,设置为按照多帧融合点云信息分别将融合点云数据的媒体码流 发送至接收端。
在一实施例中,应用于发送端的数据处理装置,还包括:存储器,设置为按照多帧融合点云信息分别将融合点云数据存储为媒体文件。
在一实施例中,多帧融合点云信息包括下述参数之一:多帧融合编码方式标志;融合点云数据的传输方式;融合点云数据的消费方式。
在一实施例中,确定模块310,包括下述之一:
根据预设配置信息确定多帧融合点云信息;
在与当前信道和/或接收端建立连接之后,根据接收端请求信息确定多帧融合点云信息;
在与当前信道和/或接收端建立连接之后,根据当前信道信息确定多帧融合点云信息;
在与当前信道和/或接收端建立连接之后,根据接收端属性信息确定多帧融合点云信息;
在与当前信道和/或接收端建立连接之后,根据当前信道信息和接收端属性信息确定多帧融合点云信息。
在一实施例中,融合点云数据的组成部分包括:融合之后的几何信息和融合之后的属性信息;融合之后的属性信息包括:帧编号信息和融合之后的非帧编号属性信息。
在一实施例中,融合点云数据的传输方式包括下述之一:优先传输融合之后的几何信息,再传输融合之后的属性信息;同时传输融合之后的几何信息和融合之后的属性信息;优先传输帧编号信息和融合之后的几何信息,再传输融合之后的非帧编号属性信息;优先传输融合之后的几何信息和融合之后的非帧编号属性信息,再传输帧编号信息;依次传输融合之后的几何信息,融合之后的非帧编号属性信息,和帧编号信息。
在一实施例中,融合点云数据的消费方式包括下述之一:优先消费融合之后的几何信息;优先消费融合之后的非帧编号属性信息;优先消费融合之后的几何信息和融合之后的非帧编号属性信息;优先消费帧编号信息和融合之后的几何信息;优先消费帧编号信息。
在一实施例中,多帧融合点云信息和融合点云数据之间的对应关系包括下述之一:
多帧融合点云信息与一个融合点云数据对应;
多帧融合点云信息分别与融合点云数据中的每个组成部分对应;
多帧融合点云信息中的至少一个参数与一个融合点云数据对应,以及多帧融合点云信息中的至少一个参数分别与融合点云数据中的每个组成部分对应。
在一实施例中,将多帧融合点云信息写入媒体码流,包括下述之一:
将多帧融合点云信息写入媒体文件中的轨道信息;
将多帧融合点云信息写入融合点云数据的补充增强信息;
将多帧融合点云信息写入融合点云数据的视频应用信息。
本实施例提供的数据处理装置设置为实现图1所示实施例的数据处理方法,本实施例提供的数据处理装置实现原理和技术效果类似,此处不再赘述。
在一实施例中,在一实施例中,图8是本申请实施例提供的另一种数据处理装置的结构框图。本实施例由接收端执行。如图8所示,本实施例中的数据处理装置包括:解析器410和处理器420。
其中,解析器410,设置为解析融合点云数据的媒体码流,得到多帧融合点云信息;
处理器420,设置为根据多帧融合点云信息分别对融合点云数据进行处理。
在一实施例中,应用于接收端的数据处理装置,还包括:接收器,设置为在解析融合点云数据的媒体码流,得到多帧融合点云信息之前,接收发送端发送的融合点云数据的媒体码流。
在一实施例中,应用于接收端的数据处理装置,还包括:读取器,设置为在解析融合点云数据的媒体码流,得到多帧融合点云信息之前,读取本地预先存储的融合点云数据的媒体文件。
在一实施例中,多帧融合点云信息包括下述参数之一:多帧融合编码方式标志;融合点云数据的传输方式;融合点云数据的消费方式。
在一实施例中,融合点云数据的组成部分包括:融合之后的几何信息和融合之后的属性信息;融合之后的属性信息包括:帧编号信息和融合之后的非帧编号属性信息。
在一实施例中,融合点云数据的传输方式包括下述之一:优先传输融合之后的几何信息,再传输融合之后的属性信息;同时传输融合之后的几何信息和融合之后的属性信息;优先传输帧编号信息和融合之后的几何信息,再传输融合之后的非帧编号属性信息;优先传输融合之后的几何信息和融合之后的非帧编号属性信息,再传输帧编号信息;依次传输融合之后的几何信息,融合之后的非帧编号属性信息,和帧编号信息。
在一实施例中,融合点云数据的消费方式包括下述之一:优先消费融合之 后的几何信息;优先消费融合之后的非帧编号属性信息;优先消费融合之后的几何信息和融合之后的非帧编号属性信息;优先消费帧编号信息和融合之后的几何信息;优先消费帧编号信息。
在一实施例中,多帧融合点云信息和融合点云数据之间的对应关系包括下述之一:
多帧融合点云信息与一个融合点云数据对应;
多帧融合点云信息分别与融合点云数据中的每个组成部分对应;
多帧融合点云信息中的至少一个参数与一个融合点云数据对应,以及多帧融合点云信息中的至少一个参数分别与融合点云数据中的每个组成部分对应。
本实施例提供的数据处理装置设置为实现图6所示实施例的数据处理方法,本实施例提供的数据处理装置实现原理和技术效果类似,此处不再赘述。
图9是本申请实施例提供的一种通信节点的结构示意图。如图9所示,本申请提供的通信节点,包括:处理器510、存储器520和通信模块530。该通信节点中处理器510的数量可以是一个或者多个,图9中以一个处理器510为例。该通信节点中存储器520的数量可以是一个或者多个,图9中以一个存储器520为例。该通信节点的处理器510、存储器520和通信模块530可以通过总线或者其他方式连接,图9中以通过总线连接为例。在该实施例中,该通信节点为发送端,其中,发送端可以为客户端,也可以为服务器端。其中,客户端可以为终端侧(比如,用户设备),也可以为网络侧(比如,基站)。在实施例中,通信节点也可以是视频应用中设备,例如,手机、计算机、服务器、机顶盒、便携式移动终端、数字摄像机,电视广播系统设备等。
存储器520作为一种计算机可读存储介质,可设置为存储软件程序、计算机可执行程序以及模块,如本申请任意实施例的通信节点对应的程序指令/模块(例如,数据处理装置中的确定模块310、写入器320和发送器330)。存储器520可包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据通信节点的使用所创建的数据等。此外,存储器520可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器520可进一步包括相对于处理器510远程设置的存储器,这些远程存储器可以通过网络连接至通信节点。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
通信模块530,配置为在两个通信节点之间进行通信交互。
上述提供的通信节点可设置为执行上述任意实施例提供的应用于发送端的 数据处理方法,具备相应的功能和效果。
在一实施例中,在通信节点为接收端的情况下,上述提供的通信节点可设置为执行上述任意实施例提供的应用于接收端的数据处理方法,具备相应的功能和效果。
本申请实施例还提供一种包含计算机可执行指令的存储介质,计算机可执行指令在由计算机处理器执行时用于执行应用于发送端的一种数据处理方法,该方法包括:确定多帧融合点云信息;将多帧融合点云信息写入融合点云数据的媒体码流;按照多帧融合点云信息分别将融合点云数据的媒体码流发送至接收端。存储介质可以是非暂态(non-transitory)存储介质。
本申请实施例还提供一种包含计算机可执行指令的存储介质,计算机可执行指令在由计算机处理器执行时用于执行应用于接收端的一种数据处理方法,该方法包括:解析融合点云数据的媒体码流,得到多帧融合点云信息;根据多帧融合点云信息分别对融合点云数据进行处理。存储介质可以是非暂态(non-transitory)存储介质。
本领域内的技术人员应明白,术语用户设备涵盖任何适合类型的无线用户设备,例如移动电话、便携数据处理装置、便携网络浏览器或车载移动台。
一般来说,本申请的多种实施例可以在硬件或专用电路、软件、逻辑或其任何组合中实现。例如,一些方面可以被实现在硬件中,而其它方面可以被实现在可以被控制器、微处理器或其它计算装置执行的固件或软件中,尽管本申请不限于此。
本申请的实施例可以通过移动装置的数据处理器执行计算机程序指令来实现,例如在处理器实体中,或者通过硬件,或者通过软件和硬件的组合。计算机程序指令可以是汇编指令、指令集架构(Instruction Set Architecture,ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码。
本申请附图中的任何逻辑流程的框图可以表示程序步骤,或者可以表示相互连接的逻辑电路、模块和功能,或者可以表示程序步骤与逻辑电路、模块和功能的组合。计算机程序可以存储在存储器上。存储器可以具有任何适合于本地技术环境的类型并且可以使用任何适合的数据存储技术实现,例如但不限于只读存储器(Read-Only Memory,ROM)、随机访问存储器(Random Access Memory,RAM)、光存储器装置和系统(数码多功能光碟(Digital Video Disc,DVD)或光盘(Compact Disk,CD))等。计算机可读介质可以包括非瞬时性存储介质。数据处理器可以是任何适合于本地技术环境的类型,例如但不限于 通用计算机、专用计算机、微处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑器件(Field-Programmable Gate Array,FPGA)以及基于多核处理器架构的处理器。

Claims (16)

  1. 一种数据处理方法,包括:
    确定多帧融合点云信息;
    将所述多帧融合点云信息写入融合点云数据的媒体码流;
    按照所述多帧融合点云信息分别将所述融合点云数据的媒体码流发送至接收端。
  2. 根据权利要求1所述的方法,还包括:
    按照所述多帧融合点云信息分别将所述融合点云数据存储为媒体文件。
  3. 根据权利要求1所述的方法,其中,所述多帧融合点云信息包括下述参数之一:多帧融合编码方式标志;融合点云数据的传输方式;融合点云数据的消费方式。
  4. 根据权利要求1所述的方法,其中,所述确定多帧融合点云信息,包括下述之一:
    根据预设配置信息确定所述多帧融合点云信息;
    在与当前信道、或所述接收端中的至少之一建立连接之后,根据接收端请求信息确定所述多帧融合点云信息;
    在与当前信道、或所述接收端中的至少之一建立连接之后,根据当前信道信息确定所述多帧融合点云信息;
    在与当前信道、或所述接收端中的至少之一建立连接之后,根据接收端属性信息确定所述多帧融合点云信息;
    在与当前信道、或所述接收端中的至少之一建立连接之后,根据当前信道信息和接收端属性信息确定所述多帧融合点云信息。
  5. 根据权利要求1所述的方法,其中,所述融合点云数据的组成部分包括:融合之后的几何信息和融合之后的属性信息;所述融合之后的属性信息包括:帧编号信息和融合之后的非帧编号属性信息。
  6. 根据权利要求3所述的方法,其中,所述融合点云数据的传输方式包括下述之一:优先传输融合之后的几何信息,再传输融合之后的属性信息;同时传输融合之后的几何信息和融合之后的属性信息;优先传输帧编号信息和融合之后的几何信息,再传输融合之后的非帧编号属性信息;优先传输融合之后的几何信息和融合之后的非帧编号属性信息,再传输帧编号信息;依次传输融合之后的几何信息,融合之后的非帧编号属性信息,和帧编号信息。
  7. 根据权利要求3所述的方法,其中,所述融合点云数据的消费方式包括 下述之一:优先消费融合之后的几何信息;优先消费融合之后的非帧编号属性信息;优先消费融合之后的几何信息和融合之后的非帧编号属性信息;优先消费帧编号信息和融合之后的几何信息;优先消费帧编号信息。
  8. 根据权利要求1所述的方法,其中,所述多帧融合点云信息和所述融合点云数据之间的对应关系包括下述之一:
    所述多帧融合点云信息与一个融合点云数据对应;
    所述多帧融合点云信息分别与所述融合点云数据中的每个组成部分对应;
    所述多帧融合点云信息中的至少一个参数与一个融合点云数据对应,以及所述多帧融合点云信息中的至少一个参数分别与所述融合点云数据中的每个组成部分对应。
  9. 根据权利要求1-8中任一项所述的方法,其中,所述将所述多帧融合点云信息写入融合点云数据的媒体码流,包括下述之一:
    将所述多帧融合点云信息写入媒体文件中的轨道信息;
    将所述多帧融合点云信息写入所述融合点云数据的补充增强信息;
    将所述多帧融合点云信息写入所述融合点云数据的视频应用信息。
  10. 一种数据处理方法,包括:
    解析融合点云数据的媒体码流,得到多帧融合点云信息;
    根据所述多帧融合点云信息分别对所述融合点云数据进行处理。
  11. 根据权利要求10所述的方法,其中,在所述解析融合点云数据的媒体码流,得到多帧融合点云信息之前,还包括:
    接收发送端发送的融合点云数据的媒体码流。
  12. 根据权利要求10所述的方法,其中,在所述解析融合点云数据的媒体码流,得到多帧融合点云信息之前,还包括:
    读取本地预先存储的融合点云数据的媒体文件。
  13. 一种数据处理装置,包括:
    确定模块,设置为确定多帧融合点云信息;
    写入器,设置为将所述多帧融合点云信息写入融合点云数据的媒体码流;
    发送器,设置为按照所述多帧融合点云信息分别将所述融合点云数据的媒体码流发送至接收端。
  14. 一种数据处理装置,包括:
    解析器,设置为解析融合点云数据的媒体码流,得到多帧融合点云信息;
    处理器,设置为根据所述多帧融合点云信息分别对所述融合点云数据进行处理。
  15. 一种通信节点,包括:通信模块,存储器,以及一个或多个处理器;
    所述通信模块,配置为在两个通信节点之间进行通信交互;
    所述存储器,配置为存储一个或多个程序;
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1-9或10-12中任一项所述的方法。
  16. 一种存储介质,所述存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1-9或10-12中任一项所述的方法。
PCT/CN2021/128213 2020-12-04 2021-11-02 数据处理方法、装置、通信节点和存储介质 WO2022116764A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21899798.9A EP4258621A1 (en) 2020-12-04 2021-11-02 Data processing method and apparatus, and communication node and storage medium
US18/255,992 US20240054723A1 (en) 2020-12-04 2021-11-02 Data processing method and apparatus, communication node and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011407418.6A CN112583803A (zh) 2020-12-04 2020-12-04 数据处理方法、装置、通信节点和存储介质
CN202011407418.6 2020-12-04

Publications (1)

Publication Number Publication Date
WO2022116764A1 true WO2022116764A1 (zh) 2022-06-09

Family

ID=75128180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/128213 WO2022116764A1 (zh) 2020-12-04 2021-11-02 数据处理方法、装置、通信节点和存储介质

Country Status (4)

Country Link
US (1) US20240054723A1 (zh)
EP (1) EP4258621A1 (zh)
CN (1) CN112583803A (zh)
WO (1) WO2022116764A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112583803A (zh) * 2020-12-04 2021-03-30 上海交通大学 数据处理方法、装置、通信节点和存储介质
CN117176715A (zh) * 2021-03-31 2023-12-05 腾讯科技(深圳)有限公司 点云编解码方法、装置、计算机可读介质及电子设备
CN113490178B (zh) * 2021-06-18 2022-07-19 天津大学 一种智能网联车辆多级协作感知系统
CN115866274A (zh) * 2022-09-19 2023-03-28 腾讯科技(深圳)有限公司 一种点云媒体的数据处理方法及相关设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886272A (zh) * 2019-02-25 2019-06-14 腾讯科技(深圳)有限公司 点云分割方法、装置、计算机可读存储介质和计算机设备
US20190318488A1 (en) * 2018-04-12 2019-10-17 Samsung Electronics Co., Ltd. 3d point cloud compression systems for delivery and access of a subset of a compressed 3d point cloud
CN111699697A (zh) * 2019-06-14 2020-09-22 深圳市大疆创新科技有限公司 一种用于点云处理、解码的方法、设备及存储介质
CN112583803A (zh) * 2020-12-04 2021-03-30 上海交通大学 数据处理方法、装置、通信节点和存储介质

Also Published As

Publication number Publication date
US20240054723A1 (en) 2024-02-15
EP4258621A1 (en) 2023-10-11
CN112583803A (zh) 2021-03-30

Similar Documents

Publication Publication Date Title
WO2022116764A1 (zh) 数据处理方法、装置、通信节点和存储介质
US10477280B2 (en) Apparatus and method for delivering and receiving multimedia data in hybrid network
CN112653700A (zh) 一种基于webrtc网页视频通信的方法
WO2017101487A1 (zh) 转码属性信息的提交方法和装置
CN114697668B (zh) 点云媒体的编解码方法及相关产品
JP2020115350A (ja) プラットフォーム及びイメージデバイスの間の通信プロトコル
CN110996160B (zh) 视频处理方法、装置、电子设备及计算机可读取存储介质
US20230372814A1 (en) Latency management with deep learning based prediction in gaming applications
CN114697631B (zh) 沉浸媒体的处理方法、装置、设备及存储介质
CN113727114A (zh) 一种可转码的视频解码方法
CN110611842B (zh) 基于虚拟机的视频传输管理方法及相关装置
CN113422669B (zh) 数据传输方法、装置和系统、电子设备以及存储介质
KR102346090B1 (ko) 체적 3d 비디오 데이터의 실시간 혼합 현실 서비스를 위해 증강 현실 원격 렌더링 방법
CN111726616B (zh) 点云编码方法、点云解码方法、装置及存储介质
CN111629228A (zh) 数据传输方法及服务器
CN113259339B (zh) 一种基于udp的数据传输方法、系统及电子设备
CN113099270B (zh) 文件存储方法及解码方法、装置、存储介质、电子装置
CN105744297A (zh) 码流传输方法和装置
US10659826B2 (en) Cloud streaming service system, image cloud streaming service method using application code, and device therefor
CN113709518A (zh) 一种基于rtsp协议的视频实时传输模式设计方法
US20230061573A1 (en) Point Cloud Encoding and Decoding Method and Apparatus, Computer-Readable Medium, and Electronic Device
WO2023169003A1 (zh) 点云媒体的解码方法、点云媒体的编码方法及装置
US20230334716A1 (en) Apparatus and method for providing 3-dimensional spatial data based on spatial random access
CN101652989B (zh) 引用在用于轻便应用场景表现服务的其他简单集合格式会话中包括的流的方法和设备、和提供轻便应用场景表现服务的方法和设备
EP4199522A1 (en) Method and apparatus of encapsulating/parsing point cloud data in/from encapsulating containers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21899798

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18255992

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021899798

Country of ref document: EP

Effective date: 20230704