WO2020063169A1 - Data processing method and device, electronic device and storage medium - Google Patents

Data processing method and device, electronic device and storage medium

Info

Publication number
WO2020063169A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
data
coding
video data
dimensional video
Prior art date
Application number
PCT/CN2019/100639
Other languages
English (en)
French (fr)
Inventor
XIA, Yang (夏炀)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to EP19866939.2A (granted as EP3849178B1)
Publication of WO2020063169A1
Priority to US17/207,111 (granted as US11368718B2)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Definitions

  • The present application relates to the field of information technology, but is not limited thereto, and in particular to a data processing method and device, an electronic device, and a storage medium.
  • An image generally uses pixel values to represent information such as the color, grayscale, and brightness of each pixel. Under normal circumstances, for the same amount of information, images and/or videos consume relatively large bandwidth. As such, in some image transmission scenarios, continuously transmitting images may consume a large amount of bandwidth and/or cause a large transmission delay.
  • the embodiments of the present application provide a data processing method and device, an electronic device, and a storage medium.
  • a data processing method applied to a terminal includes:
  • the amount of data before pixel encoding of the three-dimensional video data is a first data amount; the amount of data after pixel encoding of the three-dimensional video data is a second data amount; the first data amount is greater than the second data amount.
  • a data processing method applied to a mobile edge computing MEC server includes:
  • a data processing device applied to a terminal includes:
  • a determination module configured to dynamically determine a current encoding mapping relationship of a pixel encoding
  • a first sending module configured to send the current coding mapping relationship or the indication information of the current coding mapping relationship to a mobile edge computing MEC server;
  • An obtaining module configured to perform pixel encoding on pixel values of three-dimensional video data based on the current encoding mapping relationship to obtain pixel encoded data
  • a second sending module configured to send the pixel encoded data to a mobile edge computing MEC server, wherein the pixel encoded data is used by the MEC server to restore the three-dimensional video data;
  • the amount of data before pixel encoding of the three-dimensional video data is a first data amount; the amount of data after pixel encoding of the three-dimensional video data is a second data amount; the first data amount is greater than the second data amount.
  • a data processing device applied to a mobile edge computing MEC server includes:
  • a first receiving module configured to receive a current coding mapping relationship or indication information of the current coding mapping relationship sent by a terminal
  • a second receiving module configured to receive pixel-encoded data sent by a terminal
  • a restoration module configured to restore the pixel-encoded data to obtain pixel values of three-dimensional video data according to the current encoding mapping relationship; wherein the data amount before pixel encoding of the three-dimensional video data is a first data amount; The data amount after pixel coding of the three-dimensional video data is a second data amount; the first data amount is greater than the second data amount.
  • An electronic device includes a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein when the processor executes the instructions, the steps of any of the foregoing data processing methods applied to a terminal are implemented, or the steps of any of the foregoing data processing methods applied to the MEC server are implemented.
  • the terminal no longer directly transmits the pixel values of the three-dimensional video data; instead, it performs pixel coding on the pixel values and then transmits the pixel-encoded data. The amount of transmitted pixel-encoded data is smaller than the amount of directly transmitted pixel values, thereby reducing the bandwidth and delay required for transmission; the method thus features a small amount of transmitted data, small required bandwidth, and small transmission delay.
  • the terminal will dynamically determine the current encoding mapping relationship of the pixel encoding, so that it can select a suitable current encoding mapping relationship according to the current needs, ensuring the accuracy and/or delay requirements of the three-dimensional video data transmitted to the MEC server and improving the quality of service of the three-dimensional video data.
  • FIG. 1 is a schematic diagram of a system architecture for applying a data transmission method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of another data processing method according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of still another data processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of another data processing apparatus according to an embodiment of the present application.
  • the data processing method in the embodiment of the present application is applied to a service related to three-dimensional video data.
  • the service is, for example, a service for sharing three-dimensional video data, or a live broadcast service based on three-dimensional video data.
  • the separately transmitted depth data and two-dimensional video data require higher technical support in the data transmission process, so the mobile communication network needs a faster data transmission rate and a more stable data transmission environment.
  • FIG. 1 is a schematic diagram of a system architecture applied to a data transmission method according to an embodiment of the present application.
  • the system may include a terminal, a base station, a MEC server, a service processing MEC server, a core network, and the Internet.
  • a high-speed channel is established with the service processing MEC server through the core network to achieve data synchronization.
  • MEC server A is a MEC server deployed near terminal A (the sending end), and core network A is the core network in the area where terminal A is located; MEC server B is a MEC server deployed near terminal B (the receiving end), and core network B is the core network in the area where terminal B is located. MEC server A and MEC server B can each establish a high-speed channel with the service processing MEC server, through core network A and core network B respectively, to achieve data synchronization.
  • MEC server A synchronizes the data to the service processing MEC server through core network A; MEC server B obtains the three-dimensional video data sent by terminal A from the service processing MEC server and sends it to terminal B for presentation.
  • If terminal B and terminal A use the same MEC server to implement transmission, then terminal B and terminal A implement the three-dimensional video data transmission directly through that one MEC server, and no service processing MEC server needs to participate.
  • This method is called the local return method. Specifically, assuming that terminal B and terminal A realize the transmission of three-dimensional video data through MEC server A, after the three-dimensional video data sent by terminal A is transmitted to MEC server A, the three-dimensional video data is sent by MEC server A to terminal B for presentation.
  • the terminal may select an evolved base station (eNB) that accesses a 4G network or a next-generation evolved base station (gNB) that accesses a 5G network based on the network situation, or the configuration of the terminal itself, or an algorithm configured by itself.
  • eNB evolved base station
  • gNB next-generation evolved base station
  • the eNB is connected to the MEC server through the Long Term Evolution (LTE) access network, and the gNB is connected to the MEC server through the Next Generation Radio Access Network (NG-RAN).
  • LTE Long Term Evolution
  • NG-RAN Next Generation Radio Access Network
  • the MEC server is deployed on the edge of the network near the terminal or the source of the data.
  • being near the terminal or the source of the data means being close not only in logical location but also geographically.
  • multiple MEC servers can be deployed in one city. For example, in an office building with many users, a MEC server can be deployed near the office building.
  • As an edge computing gateway converging core capabilities of networks, computing, storage, and applications, the MEC server provides platform support for edge computing covering the device domain, network domain, data domain, and application domain. It connects various types of smart devices and sensors, provides smart connection and data processing services nearby, allows different types of applications and data to be processed in the MEC server, realizes key intelligent services such as real-time business, business intelligence, data aggregation and interoperation, and security and privacy protection, and effectively improves the intelligent decision-making efficiency of the business.
  • this embodiment provides a data processing method, which is applied to a terminal and includes:
  • Step 201 dynamically determine a current encoding mapping relationship of the pixel encoding
  • Step 202 Send the current coding mapping relationship or the indication information of the current coding mapping relationship to a mobile edge computing MEC server.
  • Step 203 pixel-encode the pixel values of the three-dimensional video data based on the current encoding mapping relationship to obtain pixel-encoded data;
  • Step 204 Send the pixel-encoded data to a mobile edge computing MEC server, where the pixel-encoded data is used by the MEC server to restore the three-dimensional video data;
  • the amount of data before pixel encoding of the three-dimensional video data is the first data amount; the amount of data after pixel encoding of the three-dimensional video data is the second data amount; the first data amount is greater than the second data the amount.
  • The data processing method provided in this embodiment is applied to a terminal, and the terminal may be any of various types of terminals, for example, a mobile phone, a tablet computer, a wearable device, or a fixed image monitor.
  • the terminal may be a fixed terminal and / or a mobile terminal.
  • In step 201, before the three-dimensional video data is transmitted, a current encoding mapping relationship needs to be determined.
  • The three-dimensional video data is transmitted based on the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). If the three-dimensional video data is transmitted based on the TCP protocol, the current encoding mapping relationship or the indication information of the current encoding mapping relationship is sent to the MEC server during the handshake phase of establishing the TCP connection, or sent to the MEC server over the TCP connection after the connection is formally established.
  • TCP Transmission Control Protocol
  • UDP User Datagram Protocol
  • a special UDP data packet may be used to send the current encoding mapping relationship or the indication information of the current encoding mapping relationship.
  • the indication information of the current encoding mapping relationship may be: number information or name information of the current encoding mapping relationship, and the like, which can be used by the MEC server to uniquely determine the current encoding mapping relationship.
  • Since the current encoding mapping relationship is dynamically determined, it can be determined according to the current requirements. Compared with using a static encoding mapping relationship, this better meets the transmission requirements and data transmission quality of three-dimensional video data in different application scenarios.
  • the three-dimensional video data includes a two-dimensional image and a depth image.
  • the two-dimensional image includes color pixels.
  • the pixel value of the color pixel is a color value.
  • the color value is a red/green/blue (RGB) value or a luminance/chrominance (YUV) value.
  • the depth image includes a depth pixel, and a pixel value of the depth pixel is a depth value; wherein the depth value represents a spatial distance between the acquisition target and the image acquisition module.
  • the two-dimensional image and the depth image can construct a three-dimensional image in the three-dimensional image space.
  • the image sizes of the two-dimensional image and the depth image are the same.
  • both the two-dimensional image and the depth image include W * H pixels, where W represents the number of pixels in the first direction and H represents the number of pixels in the second direction; W and H are both positive integers.
  • the two-dimensional image and the depth image may be two images acquired at the same time; in order to reduce the amount of data, the image sizes of the two-dimensional image and the depth image meet a preset relationship.
  • if the two-dimensional image contains W * H pixels, the depth image may contain (W / a) * (H / b) pixels.
  • one depth pixel corresponds to a * b color pixels.
  • the pixel values of one depth pixel can be applied to the pixel values of a * b adjacent color pixels.
  • For example, when a = b = 2, (W / a) * (H / b) is equal to (W / 2) * (H / 2).
  • one depth pixel corresponds to four color pixels.
  • In this way, the pixel values of one depth pixel can be applied to the pixel values of four adjacent color pixels, reducing the amount of image data of the depth image. Because the unevenness within a small adjacent area of an object is generally consistent, high accuracy can still be maintained when restoring and constructing the three-dimensional video even if the image size of the depth image is smaller than that of the two-dimensional image; at the same time, the amount of data the terminal and the MEC server need to exchange, and/or the amount of data the MEC server needs to process, is reduced.
  • When the image size of the depth image is smaller than that of the two-dimensional image, the depth image may be obtained in at least one of the following ways: directly acquiring the depth image at the image size of the depth image; or acquiring an original depth image at the image size of the two-dimensional image and generating the depth image from the pixel values of adjacent a * b pixels, for example, from the average or median of adjacent a * b pixel values.
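To make the correspondence between one depth pixel and a * b color pixels concrete, here is a minimal sketch (an illustration, not part of this application) of generating a reduced-size depth image by averaging adjacent a * b blocks, assuming NumPy arrays whose dimensions divide exactly:

```python
import numpy as np

def downsample_depth(depth: np.ndarray, a: int, b: int) -> np.ndarray:
    """Average adjacent blocks so one depth pixel covers a*b color pixels.

    Assumes depth has shape (H, W) with H divisible by b and W by a,
    and uses the block average (the median would work as well).
    """
    h, w = depth.shape
    blocks = depth.reshape(h // b, b, w // a, a)
    return blocks.mean(axis=(1, 3))

# Example with a = b = 2: a 4x4 depth map becomes 2x2, so each depth
# pixel now corresponds to four adjacent color pixels.
depth = np.arange(16, dtype=np.float32).reshape(4, 4)
print(downsample_depth(depth, 2, 2))
```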
  • When the three-dimensional video data is collected, a first encoding, which converts the sensed data into pixel values, is performed.
  • the pixel value is encoded a second time.
  • the second encoding here is the pixel encoding. After the pixel encoding is completed, the pixel encoded data is obtained.
  • The generated pixel-coded data may include a code of the pixel value instead of the pixel value itself. In this way, after receiving the pixel-encoded data, the receiving end cannot directly display or read the image based on it; the data needs to be restored to the pixel values themselves before the image can be displayed or read normally.
  • The first encoding may be an encoding natively provided by many image acquisition modules. In this way, the image acquisition module converts the collected light directly into the stored pixel values, that is, the first encoding is completed.
  • The pixel-encoded data obtained after the pixel encoding is completed is transmitted to the MEC server for the MEC server to generate the three-dimensional video.
  • The second data amount obtained after encoding is smaller than the first data amount before encoding. The data amount for transmitting the three-dimensional video data is therefore reduced, which reduces the bandwidth and transmission delay required; the method thus features a small amount of data to be transmitted, small bandwidth consumption, and small transmission delay. The MEC server accordingly receives the data with a small delay, and can quickly and accurately restore the three-dimensional video data and build the three-dimensional video.
  • Dynamically determining the current encoding mapping relationship in step 201 may include two ways: dynamic selection; and dynamic generation.
  • Dynamically selecting the current encoding mapping relationship may include at least one of the following:
  • the current encoding mapping relationship is selected from alternative encoding mapping relationships.
  • the target scene can be divided into a still scene and / or a motion scene according to the motion state of the collection target. For example, if the displacement of the acquisition target within a unit time is not greater than a specific displacement, the acquisition target can be considered to be stationary, otherwise the acquisition target can be considered to be in motion.
  • When the collection target is in motion, the positional relationship between the imaged parts of the collection target in the acquired images may change, so the pixel combinations formed by imaging the different parts also change; the combined encoding mapping method may therefore be unsuitable at this time, and the single encoding mapping method is more suitable. The current encoding mapping relationship selected in this case is thus also different.
  • the collection scene can be the environment where the collection target is located.
  • the collection scene can be reflected as the background in the collected 3D video data, and the collection scene may affect the imaging of the collection target in the 3D video data.
  • the lighting color and / or lighting angle of the collection scene will affect the color and / or depth value of the imaging of the collection target in the three-dimensional video data.
  • a suitable encoding mapping mode and encoding mapping relationship are also selected according to the switching rate of the collection scene.
  • the switching rate of the collection scenario may include:
  • the switching rate is determined by comparing the degree of background difference between images in different frames of three-dimensional video data; a greater difference indicates a greater switching rate.
  • the selecting the current encoding relationship from candidate encoding mapping relationships according to the required accuracy of the three-dimensional video data includes at least one of the following:
  • If a single encoding mapping relationship is used, each pixel value needs to be checked and pixel-encoded data obtained for each pixel value individually. If the combined coding mapping method is adopted and a bit error occurs during transmission, the pixel values of multiple pixels may change, which may cause abnormal display. Therefore, to ensure high transmission accuracy, in this embodiment, if the required accuracy is greater than or equal to the first accuracy threshold, an encoding mapping relationship of the single encoding mapping method is selected; otherwise, for the purpose of simplifying transmission, the combined encoding mapping mode can be used, and the encoding mapping relationship corresponding to the combined encoding mapping mode is selected.
  • The second precision threshold may be lower than the first precision threshold; when the required precision is lower than the second precision threshold, combined coding mapping is performed with more pixels per combination; otherwise, combined coding mapping with fewer pixels per combination is used. In this way, after the encoding mapping manner is determined, a corresponding encoding mapping relationship may be selected according to the determined encoding mapping manner.
  • An encoding mapping relationship is selected according to the selected encoding mapping manner. If the selected encoding mapping mode has only one encoding mapping relationship, that relationship may be selected directly. If the selected encoding mapping mode has multiple encoding mapping relationships, one may be selected at random as the current encoding mapping relationship; alternatively, one suitable for the current transmission may be further selected from the multiple encoding mapping relationships according to parameters such as the required accuracy and/or the target scene.
  • the dynamically determining the encoding mapping relationship of the pixel encoding may further include:
  • the step 201 may determine the currently suitable encoding mapping mode according to the currently required accuracy and / or the target scene, and then scan the sample three-dimensional video data to generate the encoding mapping relationship of the corresponding encoding mapping mode.
  • the generated coding mapping relationship is the current coding mapping relationship determined dynamically in step 201.
  • the generating the current mapping encoding relationship according to the required accuracy and / or the target scene of the sample three-dimensional video data includes:
  • if the current encoding mapping method is the single encoding mapping method, sorting the pixel values of the sample three-dimensional video data according to a preset sorting method to obtain pixel value serial numbers of the three-dimensional video data;
  • the pixel value serial number includes at least one of the following: a color value serial number formed by color value ordering; and a depth value serial number formed by ordering depth value.
  • An 8-bit color channel is used for illustration: 256 values from "0" to "255" represent different colors. The color values can be sorted from high to low or from low to high, the sorted position is used as the color value serial number, and a mapping relationship between the color value serial numbers and the corresponding color values is then established; the established mapping relationship is one of the aforementioned coding mapping relationships.
  • Taking the depth value for description, the depth value may represent the distance between the image acquisition module and the acquisition target. The distances can be sorted directly from large to small or from small to large, and the sorted position is then used as the depth value serial number to build the encoding mapping relationship.
  • Sorting the pixel values of the sample three-dimensional video data according to a preset sorting method to obtain the pixel value serial numbers of the three-dimensional video data includes at least one of the following: sorting the color values of the color pixels of the three-dimensional video data to obtain the color value serial numbers; and sorting the depth values of the depth pixels of the three-dimensional video data to obtain the depth value serial numbers.
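As a minimal sketch of the single encoding mapping method just described (the function name and the frequency-based ordering are illustrative assumptions, not prescribed by this application), the following builds a serial-number mapping from sample pixel values so that frequent values receive small serial numbers:

```python
from collections import Counter

def build_single_mapping(sample_pixels):
    """Map each pixel value observed in the sample data to a serial number.

    Values are sorted by descending occurrence frequency (ties broken by
    value), one possible preset sorting method.
    """
    counts = Counter(sample_pixels)
    ordered = sorted(counts, key=lambda v: (-counts[v], v))
    return {value: index for index, value in enumerate(ordered)}

sample = [200, 200, 198, 255, 200, 198]    # pixel values from sample frames
print(build_single_mapping(sample))        # {200: 0, 198: 1, 255: 2}
```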
  • generating the current mapping and encoding relationship according to the required accuracy and / or target scene of the sample three-dimensional video data includes:
  • the current encoding mapping mode is a combined encoding mapping mode, determining the value of N * M of the combined encoding mapping mode according to the required accuracy and / or the target scene; wherein the values of N and M are positive integers;
  • the pixel values of the sample three-dimensional video data are sorted in combination to obtain the pixel combination number of the three-dimensional video data;
  • N * M will be determined according to the required accuracy.
  • N can be the number of rows of a pixel combination and M the number of columns; or N can be the number of columns of a pixel combination and M the number of rows.
  • the pixel combination number includes at least one of the following:
  • the sorting based on the pixel values of the sample three-dimensional video data and the pixel values of N * M pixels to obtain the pixel combination number of the three-dimensional video data may include:
  • the color values of N * M pixels are sorted in combination to obtain the color value combination number of the three-dimensional video data.
  • The sorting may be performed according to the chronological order in which the color value combinations are scanned, or based on the occurrence frequency of the scanned color value combinations, to obtain the color value combination serial numbers.
  • Sorting the pixel values of the sample three-dimensional video data by combinations of N * M pixels to obtain the pixel combination numbers of the three-dimensional video data may further include:
  • the depth values of the depth pixels in the sample three-dimensional video data are sorted by combination to obtain the depth value combination numbers of the three-dimensional video data.
  • The sorting may be performed according to the average depth value of each depth value combination, or according to the maximum or minimum depth value of the combination. In short, there are many ways to sort, and the sorting is not limited to any one of the above.
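A minimal sketch of generating such a combined encoding mapping relationship, assuming a single-channel image stored as a NumPy array and exact N * M tiling (the function name and the frequency ordering are illustrative assumptions):

```python
import numpy as np
from collections import Counter

def build_combined_mapping(image: np.ndarray, n: int, m: int) -> dict:
    """Assign a combination serial number to each distinct N*M block.

    Combinations are ordered by how often they occur in the sample image,
    one of the sorting choices the text allows.
    """
    h, w = image.shape
    combos = Counter(
        tuple(image[r:r + n, c:c + m].ravel().tolist())
        for r in range(0, h - n + 1, n)
        for c in range(0, w - m + 1, m)
    )
    ordered = sorted(combos, key=lambda k: (-combos[k], k))
    return {combo: index for index, combo in enumerate(ordered)}
```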
  • step 201 may directly include: directly determining an appropriate encoding mapping manner and / or encoding mapping relationship according to the data characteristics of the sample three-dimensional video data.
  • For example, N * M can be directly determined based on the color value combinations appearing in the sample data, and the combinations can then be sorted to obtain the color value combination serial numbers, and so on.
  • sample three-dimensional video data in the embodiments of the present application may be three-dimensional video data collected before the three-dimensional video data is formally transmitted.
  • the method further includes:
  • sample three-dimensional video data includes pixel values that are not in the current encoding mapping relationship, updating the current encoding mapping relationship according to the sample three-dimensional video data;
  • the step 203 may include:
  • color coding is performed according to the color pixel values of the three-dimensional video data to obtain color-encoded data;
  • depth value encoding is performed according to the depth pixel values of the three-dimensional video data to obtain depth value encoded data.
  • The pixel value encoding may color-encode only the color pixel values of the color pixels in the three-dimensional video data to obtain the color-encoded data, or it may depth-value-encode only the depth pixel values in the three-dimensional video data to obtain the depth value encoded data.
  • the amount of data transmitted to the MEC server can be reduced after re-encoding.
  • the pixel coding in step 203 may be performing color coding and depth value coding simultaneously.
  • Step 203 may include:
  • The pixel-encoded data is determined by matching. For example, a pixel value A1 in one or more frames of three-dimensional video data is matched against all the pixel values in the pixel coding mapping relationship. If a matching pixel value A1 exists, the pixel-encoded data corresponding to the pixel value A1 in the pixel coding mapping relationship is used as the result of pixel-encoding the pixel whose value is A1.
  • the matching result includes the following types:
  • the matching result indicates that the match is successful; a successful match includes: the matching result satisfies an identical condition or a similarity condition;
  • the matching result indicates that the match is unsuccessful; that is, the matching result satisfies neither the identical condition nor the similarity condition.
  • If the similarity between the pixel values of the currently scanned N * M pixels and the pixel values of N * M pixels recorded in the pixel coding mapping relationship is greater than a preset similarity threshold, for example, 70%, 80%, 85%, or 90%, the currently scanned N * M pixels can be considered to satisfy the similarity condition of pixel coding with the N * M pixels in the mapping relationship, and the pixel-encoded data of those N * M pixels in the mapping relationship can be directly used as the coded data of the pixel values of the currently scanned N * M pixels. Further, the pixel values of the scanned N * M pixels and the pixel values of the N * M pixels in the pixel coding mapping relationship may be extracted, and the pixel value difference between them calculated. If the pixel value difference is within a preset difference range, the currently scanned N * M pixels can be considered to satisfy the similarity condition of pixel coding with the N * M pixels in the mapping relationship, and the pixel-encoded data of the N * M pixels in the mapping relationship can be directly used as the coded data of the pixel values of the currently scanned N * M pixels; otherwise, the similarity condition of pixel coding is considered not satisfied.
  • The preset difference range may include the following: the pixel value difference indicates that the two pixel values are approximate, for example, the colors are approximate. If the pixel value difference indicates that the two colors are opposite colors, they can be considered not to be within the preset difference range; if the depth difference of two depth pixels indicates that the difference between the two depth values is above a preset depth value or depth ratio, they are considered not to be within the preset difference range; otherwise they may be considered to be within the preset difference range.
  • the encoding mapping relationship is an encoding mapping function
  • inputting the pixel value into the encoding mapping function automatically outputs pixel encoded data.
  • For example, the encoding mapping function may be determined by fitting the color values in a sample image. In this way, each pixel value, or each group of pixel values, is input into the encoding mapping function to obtain the pixel-encoded data automatically, without determining the pixel-encoded data by matching.
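One way such an encoding mapping function could look, as a sketch only: a linear quantizer fitted to the range of the sample color values, so encoding becomes a function evaluation instead of a table lookup (the 8-bit code width is an assumed parameter, not a value from this application):

```python
import numpy as np

def fit_mapping_function(sample_values, code_bits=8):
    """Fit a linear quantizer to sample pixel values.

    The observed value range is mapped onto 2**code_bits codes; encode()
    and decode() then replace matching against a mapping table.
    """
    lo, hi = float(np.min(sample_values)), float(np.max(sample_values))
    levels = (1 << code_bits) - 1
    scale = levels / (hi - lo) if hi > lo else 0.0

    def encode(value):
        return int(round((value - lo) * scale))

    def decode(code):
        return lo + code / scale if scale else lo

    return encode, decode
```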
  • step 203 there are multiple ways to determine the pixel-encoded data in step 203, and the specific implementation is not limited to any one.
  • the step 203 includes:
  • both the terminal and the MEC server may know the pixel encoding mapping relationship in advance.
  • the MEC server and the terminal both store a pixel encoding mapping table in advance.
  • the pixel code mapping relationship is pre-negotiated between the terminal and the MEC server.
  • the pixel coding mapping relationship may include at least one of the following:
  • the method further includes:
  • the pixel encoding method is selected according to preset information, wherein the preset information includes at least one of network transmission status information, load status information of the terminal, and load status information of the MEC server;
  • the step 203 may include: performing pixel coding on the pixel value to obtain the pixel coding data according to the selected pixel coding mode.
  • For example, if the network transmission status information indicates that the currently available bandwidth is sufficient to transmit the pixel values directly, the pixel encoding may be skipped. If the network transmission status information indicates that the currently available bandwidth is less than the bandwidth required to transmit the pixel values directly, then, according to the currently available bandwidth, a pixel encoding whose data amount after encoding is less than or equal to the currently available bandwidth is selected.
  • Different pixel encoding methods require different amounts of computation for encoding at the terminal and for restoration at the MEC server.
  • an appropriate pixel encoding method is also selected according to the load status information of the terminal and / or the load status information of the MEC server.
  • the load status information may include at least one of the following: a current load rate, a current load amount, a maximum load rate, and a maximum load amount.
  • If the load is high, the pixel coding method with a small amount of encoding or decoding computation is preferred; otherwise, the method can be selected arbitrarily or according to other reference factors such as the network transmission status information.
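A sketch of such a selection policy; the thresholds and mode names are illustrative assumptions rather than values from this application:

```python
def choose_pixel_coding(available_bw, direct_bw, terminal_load, server_load):
    """Pick a pixel coding mode from the preset information listed above.

    available_bw and direct_bw are bandwidths (e.g. in Mbit/s); the loads
    are current load rates in [0, 1].
    """
    if available_bw >= direct_bw:
        return "no_encoding"      # bandwidth suffices for raw pixel values
    if terminal_load > 0.8 or server_load > 0.8:
        return "single_pixel"     # smallest encoding/decoding computation
    return "combined_pixel"      # otherwise favor stronger compression
```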
  • the pixel coding data obtained by performing pixel coding on the pixel value according to the selected pixel coding mode includes at least one of the following:
  • single-pixel coding is performed on the pixel value of each single pixel of the three-dimensional video data to obtain first-type encoded data, where the number of bits occupied by the first-type encoded data is less than the number of bits occupied by the pixel value;
  • the pixel values of N * M pixels of the three-dimensional video data are combined for pixel coding to obtain second-type pixel-encoded data, where N and M are positive integers.
  • one pixel value corresponds to one pixel encoding data.
  • If there are S pixel values, S pieces of first-type encoded data will be obtained. The number of bits occupied by one piece of first-type encoded data is less than the number of bits occupied by the pixel value itself.
  • For example, a pixel value occupies 32 bits or 16 bits, while the first-type encoded data occupies only 8 bits or 10 bits. Since the number of bits required to transmit each single pixel is reduced, the amount of data required is reduced as a whole.
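A minimal sketch of the data-amount reduction described above, assuming 32-bit pixel values replaced by 8-bit serial numbers from a prepared mapping (names and sizes are illustrative):

```python
def encode_frame(pixels, mapping):
    """Replace each 32-bit pixel value with its 8-bit serial number.

    Assumes every pixel value is already in the mapping and that all
    codes fit in one byte.
    """
    codes = bytes(mapping[p] for p in pixels)   # 8 bits per pixel
    first_amount = len(pixels) * 4              # 32 bits per pixel
    second_amount = len(codes)
    assert second_amount < first_amount         # second < first data amount
    return codes
```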
  • Pixel coding may also be combined in some embodiments.
  • Combined pixel coding is the pixel coding of multiple pixels at the same time.
  • For example, one matrix of adjacent pixels is encoded at the same time, or multiple pixels arranged in a matrix or non-matrix pattern are encoded at the same time.
  • a pixel matrix composed of 3 * 3 or 4 * 4 pixels is encoded.
  • The number of pixels included in one frame of the three-dimensional image data can be exactly divided by N * M.
  • The depth values and/or color information of these adjacent pixels are relatively fixed, and these color combinations or depth combinations may be combined to generate preset encoded values in the pixel encoding mapping relationship.
  • Whether a specific color combination and/or depth combination is included is determined by scanning the color pixel values or depth image pixel values in the corresponding three-dimensional video data frame, so as to convert them into the corresponding encoded values and obtain the pixel-encoded data.
  • the single pixel encoding and the combined pixel encoding may be mixed and used according to current requirements.
  • the selected encoding method may be notified in advance.
  • The selected encoding method can be the single-pixel encoding, the combined pixel encoding, or mixed pixel encoding that mixes single-pixel encoding and combined pixel encoding.
  • the N * M pixels are adjacently distributed
  • the N * M pixels are distributed at intervals according to a preset interval.
  • N * M pixels are distributed adjacently, an N * M pixel matrix is formed.
  • If the N * M pixels are distributed at intervals, two pixels belonging to the N * M pixels may be spaced apart by a predetermined number of pixels, for example, one or more pixels.
  • the N * M may be determined dynamically or may be set statically.
  • an image in a three-dimensional image data frame is divided into a first region and a second region.
  • the first region can be encoded using a single pixel, and the second region is subjected to combined pixel encoding.
  • the pixel values of the first region of the image in the three-dimensional image frame are directly transmitted to the MEC server, and the second region is subjected to single pixel coding and / or combined pixel coding.
  • the querying a pixel coding mapping relationship according to a pixel value of the three-dimensional video data to determine the pixel coding data includes:
  • the pixel coding data is determined according to a pixel coding value corresponding to the pixel value.
  • the pixel coding mapping relationship of the image data of a three-dimensional video data frame may have been determined in advance, but in other cases may not be determined, or may have changed over time.
  • For example, the terminal and/or the MEC server may store the encoding mapping relationship of the anchor's face. If the anchor's face is suddenly modified or the makeup is changed, at least the imaging of the face may change, and the new pixel values may not be in the pixel coding mapping relationship.
  • the method further includes:
  • the pixel value is not in the pixel encoding mapping relationship, updating the pixel encoding mapping relationship according to the pixel value, and sending the updated pixel encoding mapping relationship or an updated part of the pixel encoding mapping relationship to the MEC server.
  • For example, one or more frames of three-dimensional video data of the target object may be collected during the interactive handshake or debugging phase before the formal live broadcast, and by scanning the pixel values of these frames it is determined whether the pixel coding mapping relationship corresponding to the target object has been established, or whether the pixel coding mapping relationship needs to be updated. If it needs to be updated, it is updated; if not, the formal interaction of the three-dimensional video data can be entered directly.
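A sketch of such an update step, under the assumptions that the mapping assigns consecutive serial numbers and that only the changed part (the delta) is sent to the MEC server, as the text describes:

```python
def update_mapping(mapping, frame_pixels):
    """Add pixel values missing from the mapping; return only the delta.

    The delta, not the whole table, is what gets sent to the MEC server
    to keep both sides' mapping relationships synchronized.
    """
    updates = {}
    for value in frame_pixels:
        if value not in mapping and value not in updates:
            updates[value] = len(mapping) + len(updates)
    mapping.update(updates)
    return updates
```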
  • the step 203 may include:
  • a pixel value sequence number of the three-dimensional video data is obtained according to the sorting of the pixel values of the three-dimensional video data according to a preset sorting method.
  • For example, both the skin color and the height of a face have their own maximum and minimum values. The two-dimensional images and/or depth images collected by the image acquisition module are therefore concentrated in a specific range of color pixel values or depth pixel values, and in most cases do not cover the maximum and minimum pixel values the image collector can produce. For example, of the 512 possible pixel values corresponding to the 16-bit color channel, only about 200, or even just over 100, may be effectively used.
  • The pixel coding mapping relationship can therefore be generated, or updated, by sorting the pixel values counted in the statistics above. If sorted by occurrence frequency, pixel values with higher occurrence frequency are assigned earlier serial numbers in the encoding values.
  • In this way, subsequent encoding of three-dimensional video data with the same target scene as the sample three-dimensional video data can reduce the number of pixel value matches and improve the efficiency of pixel coding.
  • The pixel coding mapping relationships obtained for different target objects may be different. As long as the pixel coding mapping relationship itself is not leaked, the data therefore has high security: if someone else intercepts the pixel-encoded data during transmission, the three-dimensional video data cannot be decoded normally, so the transmission has high security.
  • this embodiment provides a data processing method applied to a mobile edge computing MEC server, including:
  • Step 301 Receive a current coding mapping relationship or indication information of the current coding mapping relationship sent by a terminal.
  • Step 302 Receive pixel-encoded data sent by the terminal.
  • Step 303 Restore the pixel-encoded data to obtain pixel values of the three-dimensional video data according to the current encoding mapping relationship, wherein a data amount before pixel encoding of the three-dimensional video data is a first data amount; the three-dimensional The amount of data after pixel coding of video data is a second amount of data; the first amount of data is greater than the second amount of data.
  • After receiving the pixel-encoded data, the MEC server needs to restore the pixel values of the three-dimensional video data.
  • the MEC server will also receive the current encoding mapping relationship or the indication information of the current encoding mapping relationship from the terminal, so as to facilitate the restoration of the pixel encoding data according to the current encoding mapping relationship in step 303.
  • the bandwidth consumed is smaller.
  • the step 303 may include at least one of the following:
  • the depth value pixel value of the three-dimensional video data is restored according to the depth value encoded data of the pixel encoded data.
  • the color pixel value is restored based on the color-encoded data, and the depth value pixel value is restored according to the depth-encoded data.
  • the step 303 may further include at least one of the following:
  • the pixel coding data of N * M pixels are decoded to restore the pixel values of the three-dimensional video data by using the current coding mapping relationship.
  • the method further includes:
  • the pixel encoding mode may include a single encoding mode and / or a combined encoding mode.
  • the step 303 may include:
  • pixel decoding is performed on the pixel encoded data to obtain pixel values of the three-dimensional video data.
  • There are multiple ways to implement step 302; several alternative ways are provided below:
  • Option two: interact with the terminal on pixel coding parameters, wherein the pixel coding parameters include at least a pixel coding mode.
  • the pixel encoding parameters include the pixel encoding mode. In other implementations, the pixel encoding parameters may further include:
  • the number of bits occupied by one piece of pixel-coded data in the single coding method and/or the combined coding method.
  • the pixel value of the three-dimensional video data is obtained by performing pixel decoding on the pixel-encoded data according to the pixel encoding mode, including at least one of the following:
  • the pixel-encoded data of N * M pixels is decoded to restore the pixel values of the three-dimensional video data.
  • The step 303 may include: querying the pixel coding mapping relationship according to the pixel-encoded data to obtain the pixel value corresponding to the pixel-encoded data.
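On the server side, the query amounts to an inverse lookup; a minimal sketch, assuming the terminal's value-to-code mapping has been synchronized beforehand:

```python
def decode_frame(codes, mapping):
    """Restore pixel values from pixel-encoded data on the MEC server.

    mapping is the terminal's value-to-code table; decoding inverts it
    and queries each received code.
    """
    inverse = {code: value for value, code in mapping.items()}
    return [inverse[c] for c in codes]
```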
  • the method further includes:
  • an updated pixel coded mapping relationship or an updated part of the pixel coded mapping relationship sent by the terminal is received.
  • the pixel coding mapping relationship is synchronized between the terminal and the MEC server.
  • this embodiment provides a data processing apparatus, which is applied to a terminal and includes:
  • a determining module 401 configured to dynamically determine a current encoding mapping relationship of a pixel encoding
  • the first sending module 402 is configured to send the current coding mapping relationship or the indication information of the current coding mapping relationship to a mobile edge computing MEC server;
  • the obtaining module 403 is configured to perform pixel encoding on the pixel values of the three-dimensional video data based on the current encoding mapping relationship to obtain pixel encoded data;
  • the second sending module 404 is configured to send the pixel-encoded data to a mobile edge computing MEC server, where the pixel-encoded data is used by the MEC server to restore the three-dimensional video data;
  • the amount of data before pixel encoding of the three-dimensional video data is the first data amount; the amount of data after pixel encoding of the three-dimensional video data is the second data amount; the first data amount is greater than the second data the amount.
  • The first sending module 402, the obtaining module 403, and the second sending module 404 may be program modules corresponding to computer-executable code; after the computer-executable code is executed, the pixel encoding of the data and the sending of the three-dimensional video data can be implemented.
  • the first sending module 402, the obtaining module 403, and the second sending module 404 may also be a combination of hardware modules and program modules, for example, a complex programmable array or a field programmable array.
  • The first sending module 402, the obtaining module 403, and the second sending module 404 may also correspond to hardware modules; for example, they may be application-specific integrated circuits.
  • the determining module 401 includes:
  • a first selection sub-module configured to select the current encoding mapping relationship from alternative encoding mapping relationships according to the target scene corresponding to the three-dimensional video data;
  • a second selection sub-module configured to select the current encoding mapping relationship from candidate encoding mapping relationships according to the required accuracy of the three-dimensional video data.
  • the first selection sub-module is configured to perform at least one of the following:
  • if the three-dimensional video data corresponds to a motion scene in which the collection target is moving, selecting a coding mapping relationship of the single coding mapping mode as the current coding mapping relationship;
  • if the three-dimensional video data corresponds to a still scene in which the collection target is still, selecting a coding mapping relationship of the combined coding mapping mode as the current coding mapping relationship;
  • a coding mapping relationship of the combined coding mapping manner is selected as the current coding mapping relationship.
  • the second selection sub-module is configured to execute at least one of the following:
  • the determining module 401 includes:
  • a generating sub-module is configured to generate the current mapping encoding relationship according to a required accuracy and / or a target scene of the three-dimensional video data.
  • The generation sub-module is configured to determine the current encoding mapping method according to the accuracy requirement and/or the target scene; if the current encoding mapping method is the single encoding mapping method, sort the pixel values of the sample three-dimensional video data according to a preset sorting method to obtain the pixel value serial numbers of the three-dimensional video data; and establish a mapping relationship between the pixel values and the pixel value serial numbers.
  • the pixel value sequence number includes at least one of the following:
  • the generating sub-module is configured to determine a value of N * M of the combined encoding mapping mode according to the required accuracy and / or target scenario if the current encoding mapping mode is a combined encoding mapping mode;
  • the values of N and M are positive integers; according to the pixel values of the sample three-dimensional video data, the pixel values of N * M pixels are sorted to obtain the pixel combination number of the sample three-dimensional video data; A mapping relationship between the pixel value and the pixel combination number.
  • the pixel combination number includes at least one of the following:
  • the apparatus further includes:
  • An update module configured to update the current encoding mapping relationship according to the sample three-dimensional video data if the sample three-dimensional video data includes pixel values that are not in the current encoding mapping relationship;
  • the third sending module is configured to send the updated current coding mapping relationship or an updated part of the current coding mapping relationship to the MEC server.
  • the obtaining module 403 is configured to execute at least one of the following:
  • single-pixel coding is performed on the pixel value of each single pixel of the three-dimensional video data to obtain first-type coded data, where the number of bits occupied by the first-type coded data is less than the number of bits occupied by the pixel value;
  • the pixel values of N * M pixels of the three-dimensional video data are combined for pixel coding to obtain second-type pixel-encoded data, where N and M are positive integers.
  • this embodiment provides a data processing apparatus, which is applied to a mobile edge computing MEC server, and includes:
  • a first receiving module 501 configured to receive a current coding mapping relationship or indication information of the current coding mapping relationship sent by a terminal;
  • a second receiving module 502 configured to receive pixel-coded data sent by a terminal
  • a restoration module 503 configured to restore the pixel-encoded data to obtain pixel values of three-dimensional video data according to the current encoding mapping relationship; wherein the data amount before pixel encoding of the three-dimensional video data is the first data amount; The data amount after pixel coding of the three-dimensional video data is a second data amount; the first data amount is greater than the second data amount.
  • The first receiving module 501, the second receiving module 502, and the restoration module 503 may be program modules corresponding to computer-executable code; after the computer-executable code is executed, the receiving of the pixel-encoded data and the restoration of the three-dimensional video data can be implemented.
  • the first receiving module 501, the second receiving module 502, and the restoration module 503 may also be a combination of hardware modules and program modules, for example, a complex programmable array or a field programmable array.
  • The first receiving module 501, the second receiving module 502, and the restoration module 503 may also correspond to hardware modules; for example, they may be application-specific integrated circuits.
  • the restoration module 503 is configured to execute at least one of the following:
  • the depth value pixel value of the three-dimensional video data is restored according to the depth value encoded data of the pixel encoded data.
  • the restoration module 503 is configured to execute at least one of the following:
  • the pixel coding data of N * M pixels are decoded to restore the pixel values of the three-dimensional video data by using the current coding mapping relationship.
  • This embodiment provides a computer storage medium on which computer instructions are stored.
  • When the instructions are executed by a processor, the steps of a data processing method applied to a terminal or a MEC server are implemented; for example, one or more of the methods shown in FIG. 2 and FIG. 3 may be executed.
  • this embodiment provides an electronic device including a memory, a processor, and computer instructions stored in the memory and executable on the processor.
  • When the processor executes the instructions, the steps of the data processing method applied to the terminal or to the MEC server are implemented; for example, one or more of the methods shown in FIG. 2 to FIG. 3 may be executed.
  • The electronic device further includes a communication interface, which can be used for information interaction with other devices.
  • the communication interface can at least perform information interaction with the MEC server.
  • the communication interface can at least perform information interaction with the terminal.
  • the mapping table is dynamically selected according to the actual situation of the current target scene, accuracy requirements, etc.
  • When the mobile phone collects RGB data, it scans the RGB value of each pixel of the image; if the RGB value is in the color sequence, the color serial number replaces the RGB data. Alternatively, the RGB values corresponding to all pixels of the entire image are counted first, each pixel's RGB value is then replaced with a serial number based on the color numbering prepared in advance, and the pixels with their corresponding color serial numbers are packaged and uploaded.
  • The common colors are numbered sequentially.
  • The RGB data of each pixel of the image is scanned; if the RGB data is in the color sequence, the color serial number is used instead of the RGB data.
  • Alternatively, the RGB data of all pixels of the entire picture is counted, the RGB values are numbered in sorted order, the RGB value of each pixel is replaced with its serial number, and the pixels and the statistical RGB data are packaged and uploaded. The MEC server and the mobile phone each save a mapping table; when there is RGB data to transmit, the pixels are scanned horizontally, and if a pixel is not in the mapping table, a new mapping entry is created (such as pixel RGB - flag A [16-bit] or [32-bit] or [8-bit]) and saved to the mapping table, and the RGB data is replaced with a 16-bit color serial number. After scanning, the changed items in the mapping table and the RGB data are uploaded. The coding of a single pixel can also be extended to N x N pixels coded together.
  • the disclosed method and smart device may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the unit is only a logical function division.
  • In actual implementation, there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The components displayed or discussed are coupled to each other, or directly coupled, or communicate with each other through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, which may be located in one place or distributed to multiple network units; Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may be used separately as one unit, or two or more units may be integrated into one unit;
  • the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer-readable storage medium.
  • the program is executed, the program is executed.
  • the method includes the steps of the foregoing method embodiment.
  • the foregoing storage medium includes: various types of media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
  • the above-mentioned integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions for A computer device (which may be a personal computer, a MEC server, or a network device) is caused to execute all or part of the methods described in the embodiments of the present application.
  • the foregoing storage medium includes: various types of media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

This embodiment discloses a data processing method and device, an electronic device, and a storage medium. The data processing method, applied to a terminal, includes: dynamically determining a current coding mapping relationship for pixel coding; sending the current coding mapping relationship, or indication information of the current coding mapping relationship, to a mobile edge computing (MEC) server; performing pixel coding on pixel values of three-dimensional video data based on the current coding mapping relationship to obtain pixel-coded data; and sending the pixel-coded data to the MEC server, where the pixel-coded data is used by the MEC server to restore the three-dimensional video data. The amount of data of the three-dimensional video data before pixel coding is a first data amount, the amount of data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.

Description

Data processing method and device, electronic device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority to Chinese patent application No. 201811163427.8, filed on September 30, 2018, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of information technology, but is not limited thereto, and in particular to a data processing method and device, an electronic device, and a storage medium.
BACKGROUND
An image generally needs pixel values to represent, pixel by pixel, information such as the colour, greyscale, and brightness of each pixel. Under normal circumstances, for the same amount of information, images and/or video consume comparatively large bandwidth. As a result, in some image transmission scenarios, continuously transmitting images may consume a large amount of bandwidth and/or incur a large transmission delay.
SUMMARY
The embodiments of the present application provide a data processing method and device, an electronic device, and a storage medium.
A data processing method, applied to a terminal, includes:
dynamically determining a current coding mapping relationship for pixel coding;
sending the current coding mapping relationship, or indication information of the current coding mapping relationship, to a mobile edge computing (MEC) server;
performing pixel coding on pixel values of three-dimensional video data based on the current coding mapping relationship to obtain pixel-coded data;
sending the pixel-coded data to the MEC server, where the pixel-coded data is used by the MEC server to restore the three-dimensional video data;
where the amount of data of the three-dimensional video data before pixel coding is a first data amount, the amount of data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
A data processing method, applied to a mobile edge computing (MEC) server, includes:
receiving a current coding mapping relationship, or indication information of the current coding mapping relationship, sent by a terminal;
receiving pixel-coded data sent by the terminal;
restoring the pixel-coded data according to the current coding mapping relationship to obtain pixel values of three-dimensional video data; where the amount of data of the three-dimensional video data before pixel coding is a first data amount, the amount of data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
A data processing device, applied to a terminal, includes:
a determining module configured to dynamically determine a current coding mapping relationship for pixel coding;
a first sending module configured to send the current coding mapping relationship, or indication information of the current coding mapping relationship, to a mobile edge computing (MEC) server;
an obtaining module configured to perform pixel coding on pixel values of three-dimensional video data based on the current coding mapping relationship to obtain pixel-coded data;
a second sending module configured to send the pixel-coded data to the MEC server, where the pixel-coded data is used by the MEC server to restore the three-dimensional video data;
where the amount of data of the three-dimensional video data before pixel coding is a first data amount, the amount of data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
A data processing device, applied to a mobile edge computing (MEC) server, includes:
a first receiving module configured to receive a current coding mapping relationship, or indication information of the current coding mapping relationship, sent by a terminal;
a second receiving module configured to receive pixel-coded data sent by the terminal;
a restoring module configured to restore the pixel-coded data according to the current coding mapping relationship to obtain pixel values of three-dimensional video data; where the amount of data of the three-dimensional video data before pixel coding is a first data amount, the amount of data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
A computer storage medium having computer instructions stored thereon, where, when executed by a processor, the instructions implement the steps of any of the foregoing data processing methods applied in a terminal, or implement the steps of any of the foregoing data processing methods applied in an MEC server.
An electronic device including a memory, a processor, and computer instructions stored in the memory and executable on the processor, where, when executing the instructions, the processor implements the steps of any of the foregoing data processing methods applied in a terminal, or implements the steps of any of the foregoing data processing methods applied in an MEC server.
With the data processing methods and devices, electronic devices, and storage media provided by the embodiments of the present application, on the one hand, the terminal no longer transmits the pixel values of the three-dimensional video data directly; instead, it pixel-codes the pixel values and transmits the pixel-coded data. The amount of pixel-coded data transmitted is smaller than that of transmitting the pixel values directly, which reduces the bandwidth and delay required for transmission; the solution features a small transmitted data amount, small required bandwidth, and small transmission delay. On the other hand, the terminal dynamically determines the current coding mapping relationship for pixel coding, so a suitable current coding mapping relationship can be selected according to the current demand, ensuring the precision and/or delay requirements of the three-dimensional video data transmitted to the MEC server and improving the quality of service of the three-dimensional video data.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of a system architecture to which a data transmission method provided by an embodiment of the present application is applied;
FIG. 2 is a schematic flowchart of a data processing method provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of another data processing method provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of yet another data processing method provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a data processing device provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another data processing device provided by an embodiment of the present application.
DETAILED DESCRIPTION
Before the technical solutions of the embodiments of the present application are described in detail, the system architecture to which the data processing method of the embodiments is applied is first briefly described. The data processing method of the embodiments of the present application is applied to services related to three-dimensional video data, for example a service for sharing three-dimensional video data, or a live-streaming service based on three-dimensional video data. In this case, because the amount of three-dimensional video data is large, the separately transmitted depth data and two-dimensional video data require strong technical support during data transmission, so the mobile communication network needs a fast data transmission rate and a stable data transmission environment.
FIG. 1 is a schematic diagram of the system architecture to which the data transmission method of an embodiment of the present application is applied. As shown in FIG. 1, the system may include a terminal, a base station, an MEC server, a service-processing MEC server, a core network, the Internet, and so on; a high-speed channel is established between the MEC server and the service-processing MEC server through the core network to achieve data synchronization.
Taking the application scenario of two interacting terminals shown in FIG. 1 as an example, MEC server A is an MEC server deployed close to terminal A (the sending end), and core network A is the core network of the region where terminal A is located; correspondingly, MEC server B is an MEC server deployed close to terminal B (the receiving end), and core network B is the core network of the region where terminal B is located; MEC server A and MEC server B may establish high-speed channels with the service-processing MEC server through core network A and core network B, respectively, to achieve data synchronization.
After the three-dimensional video data sent by terminal A is transmitted to MEC server A, MEC server A synchronizes the data to the service-processing MEC server through core network A; MEC server B then obtains the three-dimensional video data sent by terminal A from the service-processing MEC server and sends it to terminal B for presentation.
Here, if terminal B and terminal A realize transmission through the same MEC server, terminal B and terminal A realize the transmission of the three-dimensional video data directly through one MEC server without the participation of the service-processing MEC server; this manner is called the local loopback manner. Specifically, assuming that terminal B and terminal A realize the transmission of the three-dimensional video data through MEC server A, after the three-dimensional video data sent by terminal A is transmitted to MEC server A, MEC server A sends the three-dimensional video data to terminal B for presentation.
Here, the terminal may choose, based on the network situation, its own configuration, or an algorithm it is configured with, to access an evolved NodeB (eNB) of a 4G network or a next-generation evolved NodeB (gNB) of a 5G network, so that the eNB connects to the MEC server through the Long Term Evolution (LTE) access network and the gNB connects to the MEC server through the next-generation radio access network (NG-RAN).
Here, the MEC server is deployed at the network edge close to the terminal or the data source. Being close to the terminal or the data source means being close not only in logical position but also in geographical position. Unlike the existing mobile communication network, in which the main service-processing MEC servers are deployed in a few large cities, multiple MEC servers can be deployed within one city. For example, if there are many users in an office building, an MEC server may be deployed near that building.
The MEC server, as an edge computing gateway with converged core capabilities of networking, computing, storage, and applications, provides platform support covering the device domain, network domain, data domain, and application domain for edge computing. It connects various kinds of smart devices and sensors, provides smart connection and data processing services nearby, lets different types of applications and data be processed in the MEC server, realizes key smart services such as real-time services, service intelligence, data aggregation and interoperability, and security and privacy protection, and effectively improves the efficiency of intelligent decision-making for services.
As shown in FIG. 2, this embodiment provides a data processing method applied to a terminal, including:
Step 201: dynamically determining a current coding mapping relationship for pixel coding;
Step 202: sending the current coding mapping relationship, or indication information of the current coding mapping relationship, to a mobile edge computing (MEC) server;
Step 203: performing pixel coding on pixel values of three-dimensional video data based on the current coding mapping relationship to obtain pixel-coded data;
Step 204: sending the pixel-coded data to the MEC server, where the pixel-coded data is used by the MEC server to restore the three-dimensional video data;
where the amount of data of the three-dimensional video data before pixel coding is a first data amount, the amount of data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
The data processing method provided by this embodiment is applied to a terminal. The terminal may be any of various types of terminal, for example a mobile phone, a tablet computer, a wearable device, or a fixed image-monitoring device. The terminal may be a fixed terminal and/or a mobile terminal.
In step 201, the current coding mapping relationship needs to be determined before the three-dimensional video data is transmitted. For example, the three-dimensional video data is transmitted based on the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). If the three-dimensional video data is transmitted based on TCP, the current coding mapping relationship or its indication information may be sent to the MEC server during the handshake phase of establishing the TCP connection, or may be sent to the MEC server through the TCP connection after the TCP connection is formally established.
If the pixel-coded data is transmitted using UDP, a dedicated UDP packet may be used to send the current coding mapping relationship or the indication information of the current coding mapping relationship.
The indication information of the current coding mapping relationship may be number information or name information of the current coding mapping relationship, or other information that allows the MEC server to uniquely determine the current coding mapping relationship.
In this embodiment, because the current coding mapping relationship is determined dynamically, it can be determined according to the current demand. Compared with using a static coding mapping relationship, this can satisfy the transmission demands and data transmission quality of three-dimensional video data in different application scenarios.
In some embodiments, the three-dimensional video data includes a two-dimensional image and a depth image. The two-dimensional image contains colour pixels, whose pixel values are colour values; for example, a colour value is a red/green/blue (RGB) value or a luminance/chrominance/chroma (YUV) value.
The depth image contains depth pixels, whose pixel values are depth values; a depth value represents the spatial distance between the collection target and the image collection module. The two-dimensional image and the depth image can be used to build a three-dimensional image in three-dimensional image space.
In some embodiments, the image sizes of the two-dimensional image and the depth image are consistent; for example, both contain W*H pixels, where W denotes the number of pixels in a first direction, H the number of pixels in a second direction, and W and H are both positive integers.
In some embodiments, the two-dimensional image and the depth image may be two images collected at the same moment. To reduce the amount of data, the image sizes of the two-dimensional image and the depth image satisfy a preset relationship. For example, the two-dimensional image contains W*H pixels while the depth image contains (W/a)*(H/b) pixels, so one depth pixel corresponds to a*b colour pixels. When building the three-dimensional video, the pixel value of one depth pixel can be applied to the pixel values of a*b adjacent colour pixels. For instance, (W/a)*(H/b) equals (W/2)*(H/2), so one depth pixel corresponds to 4 colour pixels, and when building the three-dimensional video the pixel value of one depth pixel can be applied to the pixel values of 4 adjacent colour pixels; the amount of image data of the depth image is thus reduced. Since the relief within a very small neighbourhood of an object is usually essentially uniform, a depth image whose size is smaller than that of the two-dimensional image can still support high-precision restoration and construction of the three-dimensional video, while reducing the amount of data the terminal and the MEC server need to exchange and/or the amount of data the MEC server needs to process.
In some embodiments, when generating a depth image whose size is smaller than that of the two-dimensional image, at least one of the following manners may be used: directly collecting the depth image at the depth image's own size; or collecting an original depth image at the size of the two-dimensional image and then, according to the depth image's size, generating the depth image from the pixel values of each group of a*b adjacent pixels, for example from the mean or median of the a*b adjacent pixel values.
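A minimal sketch of the block-averaging variant just described, assuming a depth map stored as nested lists and the mean as the aggregation rule (the function name and edge handling are illustrative assumptions, not details from this disclosure):

```python
# Illustrative sketch: produce a (W/a) x (H/b) depth image by averaging
# each a x b neighbourhood of the full-size map.

def downsample_depth(depth, a, b):
    """depth: H rows x W cols of depth values; returns (H//b) x (W//a) map."""
    h, w = len(depth), len(depth[0])
    out = []
    for r in range(0, h - h % b, b):
        row = []
        for c in range(0, w - w % a, a):
            block = [depth[r + i][c + j] for i in range(b) for j in range(a)]
            row.append(sum(block) / len(block))   # mean; a median also works
        out.append(row)
    return out

# Usage: each 2x2 block of a 2x4 depth map collapses to one value.
dm = [[1.0, 1.0, 2.0, 2.0],
      [1.0, 1.0, 2.0, 2.0]]
assert downsample_depth(dm, a=2, b=2) == [[1.0, 2.0]]
```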
In this embodiment, a first coding, converting sensor data into pixel values, has already been completed. In this embodiment the pixel values are coded a second time; this second coding is the pixel coding, and after the pixel coding is completed the pixel-coded data is obtained.
In some embodiments, the pixel-coded data generated after pixel coding of the pixel values may include pixel-value codes rather than the pixel values themselves. Thus, after receiving the pixel-coded data, the receiving end cannot display or read out the image directly from it; the pixel values themselves must first be restored before the image can be displayed or read out normally.
In some embodiments, the first coding may be the built-in coding of many image collection modules; through light collection, the image collection module directly stores pixel values for which the sensor-data conversion has been completed, that is, data for which the first coding has been completed.
The pixel-coded data obtained after the pixel coding is completed is transmitted to the MEC server for the MEC server to generate the three-dimensional video. In this embodiment, because the second data amount obtained after the further pixel coding is smaller than the first data amount before coding, the amount of three-dimensional video data transmitted is reduced, which reduces the bandwidth consumed and the transmission delay caused by large amounts of data; the solution therefore features a small transmitted data amount, small bandwidth consumption, and small transmission delay. Accordingly, the MEC server receives the data with a small delay and can quickly and accurately restore the three-dimensional video data and build the three-dimensional video.
Dynamically determining the current coding mapping relationship in step 201 may include two manners: dynamic selection and dynamic generation.
Dynamically selecting the current coding mapping relationship may include at least one of the following:
selecting the current coding mapping relationship from candidate coding mapping relationships according to the target scene corresponding to the three-dimensional video data;
selecting the current coding mapping relationship from candidate coding mapping relationships according to the required precision of the three-dimensional video data.
Target scenes may be divided, according to the motion state of the collection target, into static scenes and/or motion scenes. For example, if the displacement of the collection target within a unit time is not greater than a specific displacement, the collection target may be considered static; otherwise, it may be considered moving.
If the collection object moves quickly, the positional relationships between the images of different parts of the collection object in the collected image may change, so that the pixel combinations of the images of the different parts change as well. In that case the combined coding mapping manner may not be suitable, and the single coding mapping manner is more appropriate; the current coding mapping relationship selected is therefore also different.
The collection scene may be the environment in which the collection target is located; in the collected three-dimensional video data it may appear as the background, and it may affect the imaging of the collection target in the three-dimensional video data. For example, the illumination colour and/or illumination angle of the collection scene affect the colour and/or depth values of the imaging of the collection target in the three-dimensional video data.
Hence, in some embodiments, the appropriate coding mapping manner and coding mapping relationship are also selected according to the switching rate of the collection scene.
The switching rate of the collection scene may include:
determining the switching rate by comparing the difference between the backgrounds within the images of different three-dimensional video data; the greater the difference, the greater the switching rate.
As another example, selecting the current coding mapping relationship from the candidate coding mapping relationships according to the required precision of the three-dimensional video data includes at least one of the following:
if the required precision of the three-dimensional video data is not lower than a first precision threshold, selecting a coding mapping relationship of the single coding mapping manner from the candidate coding mapping relationships;
if the required precision of the three-dimensional video data is lower than the first precision threshold, selecting a coding mapping relationship of the combined coding mapping manner from the candidate relationships;
if the required precision of the three-dimensional video data is lower than a second precision threshold, selecting, from the candidate relationships, a coding mapping relationship of the combined mapping manner that takes N1*M1 pixels as one combination;
if the required precision of the three-dimensional video data is not lower than the second precision threshold, selecting, from the candidate relationships, a coding mapping relationship of the combined mapping manner that takes N2*M2 pixels as one combination, where N2*M2 is greater than N1*M1.
If the single coding mapping relationship is used, every pixel value needs to be checked, and the pixel-coded data is obtained based on each individual pixel value. If the combined coding mapping manner is used, a single bit error during transmission may change the pixel values of multiple pixels and thereby cause display anomalies. Therefore, to ensure high transmission precision, in this embodiment a coding mapping relationship of the single coding mapping manner is selected when the required precision is greater than or equal to the first precision threshold; otherwise, to simplify transmission, the combined coding mapping manner may be adopted and the corresponding coding mapping relationship selected.
In some embodiments, the second precision threshold may be lower than the first precision threshold; then, when the required precision is lower than the first precision threshold, pixel coding is performed in the combined coding mapping manner with a larger pixel combination, and otherwise with a smaller pixel combination. Thus, after the coding mapping manner is determined, the corresponding coding mapping relationship can be selected according to the determined manner.
In the embodiments of the present application, in step 201, besides selecting a suitable coding mapping manner according to the target scene and/or the transmission precision demand, the coding mapping relationship is then selected according to the selected coding mapping manner. If the selected coding mapping manner has only one coding mapping relationship, that relationship is selected directly. If it has multiple coding mapping relationships, one is selected at random as the current coding mapping relationship; or one suitable for the current transmission is further selected from the multiple relationships according to parameters such as the required precision and/or the target scene.
Dynamically determining the coding mapping relationship for pixel coding may further include:
generating the current coding mapping relationship according to the required precision and/or the target scene of the three-dimensional video data.
In some embodiments, step 201 may determine the currently suitable coding mapping manner according to the current required precision and/or target scene, and then generate the coding mapping relationship of the corresponding coding mapping manner by scanning sample three-dimensional video data. The generated coding mapping relationship is the current coding mapping relationship dynamically determined in step 201.
Generating the current coding mapping relationship according to the required precision and/or target scene of the sample three-dimensional video data includes:
determining the current coding mapping manner according to the precision demand and/or the target scene;
if the current coding mapping manner is the single coding mapping manner, sorting the pixel values of the sample three-dimensional video data in a preset sorting manner to obtain pixel-value serial numbers of the three-dimensional video data;
establishing the mapping relationship between the pixel values and the pixel-value serial numbers.
For example, the pixel-value serial numbers include at least one of the following: colour-value serial numbers formed by sorting colour values; depth-value serial numbers formed by sorting depth values.
Taking an 8-bit colour channel as an illustration, 256 values from "0" to "255" represent different colours. The colour values can be sorted from high to low or from low to high, the sorted rank is taken as the colour-value serial number, and a mapping relationship between the colour-value serial numbers and the corresponding colour values is established; this mapping relationship is one kind of the aforementioned coding mapping relationship.
Taking depth values as an illustration, a depth value may be the collected distance between the image collection module and the collection target. In this embodiment the distances can be sorted directly from large to small or from small to large, and the sorted rank is taken as the depth-value serial number to construct the coding mapping relationship.
Hence, in some embodiments, sorting the pixel values of the sample three-dimensional video data in the preset sorting manner to obtain pixel-value serial numbers of the three-dimensional video data includes at least one of the following: obtaining colour-value serial numbers of the three-dimensional video data according to the sorting of the colour values of its colour pixels; obtaining depth-value serial numbers of the three-dimensional video data according to the sorting of the depth values of its depth pixels.
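A minimal sketch of the "sort the observed values, then number them" construction above, applicable to either colour or depth values; the function name, the choice of ascending order, and the two return values are illustrative assumptions only:

```python
# Illustrative sketch: build a pixel-value -> serial-number mapping by
# sorting the distinct values observed in the sample data.

def build_value_mapping(sample_values, descending=False):
    """Sort the distinct pixel values (colour or depth) and assign each a
    serial number equal to its rank; returns value->index and index->value."""
    ordered = sorted(set(sample_values), reverse=descending)
    value_to_index = {v: i for i, v in enumerate(ordered)}
    return value_to_index, ordered

# Usage: depth samples sorted small-to-large become serial numbers 0..2.
v2i, i2v = build_value_mapping([0.8, 0.5, 0.8, 1.2])
assert v2i == {0.5: 0, 0.8: 1, 1.2: 2} and i2v == [0.5, 0.8, 1.2]
```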
In other embodiments, generating the current coding mapping relationship according to the required precision and/or target scene of the sample three-dimensional video data includes:
if the current coding mapping manner is the combined coding mapping manner, determining the value of N*M of the combined coding mapping manner according to the required precision and/or the target scene, where N and M are positive integers;
sorting the pixel values of the sample three-dimensional video data in combinations of the pixel values of N*M pixels to obtain pixel-combination serial numbers of the three-dimensional video data;
establishing the mapping relationship between the pixel values and the pixel-combination serial numbers.
In this embodiment, N*M is determined according to the required precision. N may be the number of rows of one pixel combination and M the number of columns; or N may be the number of columns and M the number of rows.
In some embodiments, the pixel-combination serial numbers include at least one of the following:
colour-value-combination serial numbers formed by sorting colour-value combinations;
depth-value-combination serial numbers formed by sorting depth-value combinations.
Hence, in some embodiments, sorting the pixel values of the sample three-dimensional video data in combinations of the pixel values of N*M pixels to obtain pixel-combination serial numbers may include:
sorting the colour values of the colour pixels of the sample three-dimensional video data in combinations of the colour values of N*M pixels to obtain colour-value-combination serial numbers of the three-dimensional video data. For example, the sorting may follow the chronological order in which the colour-value combinations are scanned, or may be based on how frequently the scanned colour-value combinations occur, thereby obtaining the colour-value-combination serial numbers.
In still other embodiments, sorting the pixel values of the sample three-dimensional video data in combinations of the pixel values of N*M pixels to obtain pixel-combination serial numbers may further include:
sorting the depth values of the depth pixels of the sample three-dimensional video data in combinations of the depth values of N*M pixels to obtain depth-value-combination serial numbers of the three-dimensional video data. For example, the sorting may follow the average depth value of each depth-value combination, or the maximum or minimum depth value of each combination. In short, there are many sorting manners, and the sorting is not limited to any one of the above.
In still other embodiments, step 201 may directly include: determining the suitable coding mapping manner and/or coding mapping relationship directly according to the data characteristics of the sample three-dimensional video data.
For example, if scanning the sample three-dimensional video data reveals that multiple colour-value combinations appear frequently and repeatedly within one frame of three-dimensional video data, the combined coding mapping manner is applicable; N*M can be determined directly from those colour-value combinations, and sorting performed to obtain the colour-value serial numbers, and so on.
As another example, if scanning the sample three-dimensional video data reveals that multiple depth-value combinations, or depth-value combinations with fixed depth differences, appear frequently and repeatedly within one frame, the combined coding mapping manner is applicable; N*M can be determined directly from those depth-value combinations, and sorting performed to obtain the serial numbers, and so on.
In the embodiments of the present application, the sample three-dimensional video data may be three-dimensional video data collected before the three-dimensional video data is formally transmitted.
In still other embodiments, if the current coding mapping relationship is dynamically selected in step 201, some colour or depth values of the currently collected three-dimensional video data may not be in the current coding mapping relationship. In that case, the method further includes:
if the sample three-dimensional video data contains pixel values that are not in the current coding mapping relationship, updating the current coding mapping relationship according to the sample three-dimensional video data;
sending the updated current coding mapping relationship, or the updated part of the current coding mapping relationship, to the MEC server.
In this way, on the one hand, the resources and time consumed by completely regenerating the coding mapping relationship are reduced; on the other hand, partial updating can yield a current coding mapping relationship better suited to the current three-dimensional video data transmission. In some embodiments, step 203 may include:
performing colour coding according to the colour pixel values of the three-dimensional video data to obtain colour-coded data;
and/or,
performing depth-value coding according to the depth pixel values of the three-dimensional video data to obtain depth-value-coded data.
In some embodiments, the pixel-value coding may colour-code only the colour pixel values of the colour pixels in the three-dimensional video data to obtain colour-coded data.
In other embodiments, the pixel-value coding may depth-value-code only the depth pixel values in the three-dimensional video data to obtain re-coded depth-value-coded data.
Whether colour coding or depth-value coding is used, the re-coding reduces the amount of data transmitted to the MEC server.
In still other embodiments, the pixel coding in step 203 may perform colour coding and depth-value coding simultaneously.
Step 203 may include:
matching the pixel values in the three-dimensional video data against the pixel values in the pixel coding mapping relationship;
determining the pixel-coded data according to the matching result. For example, a pixel value A1 from one or more items of three-dimensional video data is matched against all pixel values in the pixel coding mapping relationship; if pixel value A1 is matched, the pixel-coded data corresponding to pixel value A1 in the pixel coding mapping relationship is taken as the pixel coding result for pixel value A1.
The matching result falls into the following three cases:
the matching result indicates a successful match satisfying an identical condition;
the matching result indicates a successful match satisfying a similar condition;
the matching result indicates an unsuccessful match, i.e., neither the identical condition nor the similar condition is satisfied.
If the identical condition is satisfied, the pixel value in the currently collected three-dimensional video data is present in the pixel coding mapping relationship.
If the similar condition is satisfied, the pixel value in the currently collected three-dimensional video data is similar to a pixel value in the pixel coding mapping relationship.
In some embodiments, whether a successful match requires the identical condition or the similar condition can be determined according to the current demand.
In some embodiments, if the similarity between the pixel values of N*M pixels scanned in the currently collected three-dimensional video data and the pixel values of some preset N*M pixels in the pixel coding mapping relationship is greater than a preset similarity threshold, for example 70%, 80%, 90%, or 85%, the currently scanned N*M pixels can be considered to satisfy the similar condition of pixel coding with the N*M pixels in the pixel coding mapping relationship, and the pixel-coded data of the N*M pixels in the pixel coding mapping relationship can be directly taken as the colour-coded data of the pixel values of the currently scanned N*M pixels.
In other embodiments, if the similarity between the scanned N*M pixel values and the pixel values of some preset N*M pixels in the pixel coding mapping relationship is greater than the preset similarity threshold (70%, 80%, 90%, or 85%), then, further, the pixel values of the one or more pixels in which the scanned N*M pixels differ from the N*M pixels in the mapping relationship are extracted, and the pixel-value difference between the extracted pixel values and the corresponding pixel values in the mapping relationship is computed. If the pixel-value difference is within a preset difference range, the scanned N*M pixels can be considered to satisfy the similar condition with the N*M pixels in the mapping relationship, and the pixel-coded data of the N*M pixels in the mapping relationship can be directly taken as the colour-coded data of the currently scanned N*M pixels; otherwise, the scanned N*M pixels can be considered not to satisfy the similar condition. For example, the pixel-value difference being within the preset difference range may include:
the pixel-value difference indicating that the two pixel values are approximate, for example approximate colours. If the pixel-value difference indicates that the two colours are opposite colours, they can be considered to be outside the preset difference range; if the depth difference of two depth pixels indicates that the two depth values differ by more than a preset depth value or depth ratio, they can be considered to be outside the preset difference range, and otherwise within it.
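A sketch of the two-stage similarity test just described, for blocks of scalar pixel values; the concrete thresholds (85% identical pixels, a tolerance of 8 on the differing ones) are illustrative assumptions, not values from this disclosure:

```python
# Illustrative sketch: reuse a table entry's code when enough pixels of an
# N*M block match exactly and the differing pixels stay within a tolerance.

def blocks_similar(block, candidate, min_ratio=0.85, max_value_diff=8):
    """block/candidate: equal-length tuples of scalar pixel values."""
    same = sum(1 for a, b in zip(block, candidate) if a == b)
    if same / len(block) < min_ratio:
        return False                          # not enough identical pixels
    return all(abs(a - b) <= max_value_diff   # differing pixels must be close
               for a, b in zip(block, candidate) if a != b)

# Usage: 15 of 16 pixels identical, the odd one out differs by 3.
ref = tuple([100] * 16)
probe = tuple([100] * 15 + [103])
assert blocks_similar(probe, ref)
```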
In other embodiments, if the coding mapping relationship is a coding mapping function, inputting a pixel value into the coding mapping function automatically outputs the pixel-coded data. For example, the coding mapping function is determined by fitting the colour values in sample images; each pixel value, or group of pixel values, input into the coding mapping function then automatically yields the pixel-coded data, so the pixel-coded data need not be determined by matching.
In short, there are many ways to determine the pixel-coded data in step 203, and the concrete implementation is not limited to any one of them.
In some embodiments, step 203 includes:
looking up the pixel coding mapping relationship according to the pixel values of the three-dimensional video data to determine the pixel-coded data.
In some embodiments, both the terminal and the MEC server may know the pixel coding mapping relationship in advance; for example, both the MEC server and the terminal store a pixel coding mapping table in advance.
In some embodiments, the pixel coding mapping relationship is negotiated between the terminal and the MEC server in advance.
The pixel coding mapping relationship may include at least one of the following:
the pixel coding mapping table;
multiple discrete pixel-coding mapping value pairs;
a functional expression from pixel values to pixel-coded data.
In short, the pixel coding mapping relationship can be expressed in many ways and is not limited to any one of the above.
In some embodiments, the method further includes:
selecting a pixel coding manner according to preset information, where the preset information includes at least one of network transmission status information, load status information of the terminal, and load status information of the MEC server;
step 203 may include: performing pixel coding on the pixel values according to the selected pixel coding manner to obtain the pixel-coded data.
For example, if the network transmission status information indicates that the currently available bandwidth is greater than the bandwidth required to transmit the pixel values directly, the pixel coding need not be performed.
As another example, if the network transmission status information indicates that the currently available bandwidth is smaller than the bandwidth required to transmit the pixel values directly, a pixel coding manner is selected, according to the currently available bandwidth, such that the amount of data after pixel coding is less than or equal to the currently available bandwidth.
As another example, different pixel coding manners require different amounts of computation for coding at the terminal and for restoration at the MEC server.
In this embodiment, a suitable pixel coding manner is also selected according to the load status information of the terminal and/or the load status information of the MEC server.
The load status information may include at least one of: the current load rate, the current load amount, the maximum load rate, and the maximum load amount.
If the current load rate is high or the current load amount is large, a pixel coding manner with a small amount of coding or decoding computation is preferred; otherwise, the manner may be chosen arbitrarily or according to other reference factors such as the network transmission status information.
In some embodiments, performing pixel coding on the pixel values according to the selected pixel coding manner to obtain the pixel-coded data includes at least one of the following:
performing, according to the single pixel coding manner, single pixel coding on the pixel value of each individual pixel of the three-dimensional video data to obtain first-type coded data, where the number of bits occupied by the first-type coded data is smaller than the number of bits occupied by the pixel value;
performing, according to the combined pixel coding manner, combined pixel coding on the pixel values of N*M pixels of the three-dimensional video data to obtain second-type pixel codes, where N and M are both positive integers.
In this embodiment, with single pixel coding, one pixel value corresponds to one item of pixel-coded data. For example, if an image of the three-dimensional video data includes S pixels, S items of first-type coded data are obtained after single pixel coding. To reduce the amount of data, one item of first-type coded data occupies fewer bits than the pixel value itself; for example, a pixel value occupies 32 or 16 bits while the first-type coded data occupies only 8 or 10 bits. Since the number of bits needed to transmit each individual pixel is reduced, the required amount of data is reduced overall.
In some embodiments, combined pixel coding may also be used.
Combined pixel coding performs pixel coding on multiple pixels at the same time.
For example, an adjacent pixel matrix is coded at the same time, or multiple pixels arranged in a matrix or non-matrix arrangement are pixel-coded at the same time.
In some embodiments, a pixel matrix of 3*3 or 4*4 pixels is coded. In some embodiments, the number of pixels contained in one frame of the three-dimensional image data is exactly divisible by N*M.
In some cases, during image collection the depth values and/or colour information of these adjacent pixels are relatively fixed; these colour combinations or depth combinations can be given preset code values in the pixel coding mapping relationship. Subsequently, during the pixel coding, the colour pixel values or depth pixel values of the corresponding three-dimensional video data frame are scanned to determine whether a specific colour combination and/or depth combination is present, which is then converted to the corresponding code value, thereby obtaining the pixel-coded data.
In some embodiments, single pixel coding and combined pixel coding can be mixed according to the current demand.
While transmitting the pixel-coded data, or before transmitting it, the selected coding manner may be announced in advance. The selected coding manner may be the aforementioned single pixel coding, combined pixel coding, or mixed pixel coding that mixes single and combined pixel coding.
The N*M pixels are distributed adjacently;
or the N*M pixels are distributed at intervals in a preset interval manner.
If the N*M pixels are distributed adjacently, they form an N*M pixel matrix.
If the N*M pixels are distributed at intervals in a preset interval manner, then, for example, two pixels belonging to the N*M pixels may be separated by a preset number of pixels, for example one or more.
In some embodiments, N*M may be determined dynamically or set statically.
For example, the image in a three-dimensional image data frame is divided into a first region and a second region; the first region may use single pixel coding while the second region uses combined pixel coding.
As another example, the pixel values of the first region of the image in the three-dimensional image frame are transmitted directly to the MEC server, while single pixel coding and/or combined pixel coding is performed on the second region.
In this way, the trade-off between the amount of transmitted data and image quality can be well balanced.
In some embodiments, looking up the pixel coding mapping relationship according to the pixel values of the three-dimensional video data to determine the pixel-coded data includes:
looking up the pixel coding mapping relationship according to the pixel values of the three-dimensional video data;
if a pixel value is in the pixel coding mapping relationship, determining the pixel-coded data according to the pixel code value corresponding to that pixel value.
The pixel coding mapping relationship for the image data of a three-dimensional video data frame may have been determined in advance; in other cases it may be undetermined, or it may have changed over time.
For example, take the three-dimensional live video of a streamer. If the streamer has participated in three-dimensional live streaming before, the coding mapping relationship for the streamer's face may already be stored in the terminal held by the streamer or in the MEC server. If decoration is suddenly added to the streamer's face, or the makeup changes, at least the colour image of the face may have changed, and the above pixel mapping may then no longer be in the pixel coding mapping relationship.
In other embodiments, the method further includes:
if a pixel value is not in the pixel coding mapping relationship, updating the pixel coding mapping relationship according to that pixel value, and sending the updated pixel coding mapping relationship, or the updated part of the pixel coding mapping relationship, to the MEC server.
In this embodiment, to facilitate determining the coding mapping relationship, one or more items of three-dimensional video data of the target object may be collected during the interactive handshake or debugging phase before the formal live streaming. By scanning the pixel values of these items of three-dimensional video data, it is determined whether the pixel mapping relationship corresponding to the target object has already been established, or whether the pixel mapping relationship needs to be updated. If the mapping relationship needs to be updated, it is updated; if not, the formal interaction of three-dimensional video data can begin directly.
In some embodiments, step 203 may include:
sorting the pixel values of the three-dimensional video data in a preset sorting manner to obtain pixel-value serial numbers of the three-dimensional video data.
For example, take a human face. The skin colour and the facial relief of a face both have maximum and minimum values, so the two-dimensional images and/or depth images collected by the image collection module are concentrated within specific colour or depth pixel-value intervals; in the vast majority of cases they do not cover the full range between the image collector's maximum and minimum pixel values. Of the 512 possible pixel values corresponding to a 16-bit colour channel, perhaps only around 200, or even only 100-odd, are effectively used.
Through the sorting of the pixel values, the number of pixel values currently produced can be obtained. If, for example, P values are produced, then the rounded-up value of log2(P) bits suffices to pixel-code all pixels, yielding pixel-coded data that occupies only the rounded-up value of log2(P) bits; the required amount of data can thus be greatly reduced.
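A worked instance of the bit-count claim above; the helper name is illustrative, and the arithmetic follows directly from the ceiling of log2(P):

```python
# Worked example: P distinct pixel values need only ceil(log2(P)) bits per
# pixel once they are replaced by serial numbers.
import math

def bits_needed(p):
    return max(1, math.ceil(math.log2(p)))

assert bits_needed(200) == 8    # ~200 observed values fit in 8 bits
assert bits_needed(512) == 9    # far fewer than the raw channel width
```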
If a target object (for example, streamers of various types, or scenes of a specific type) frequently appears in video, the pixel coding mapping relationship can be generated, or updated, through the above sorting of pixel values by statistical count, thereby completing the determination and generation of the coding mapping relationship.
If sorting by statistical count is used, the serial numbers of pixel values that occur frequently appear first. Subsequently, when coding three-dimensional video data of the same target scene and collection target as the sample three-dimensional video data, the number of pixel-value matching operations can be reduced and the efficiency of pixel coding improved.
In some embodiments, because the pixel coding mapping relationships obtained for different target objects may differ, the data has the characteristic of high security as long as the pixel coding mapping relationship is not leaked. Even if others intercept the pixel-coded data during transmission, they cannot normally decode the three-dimensional video data; the transmission therefore has high security.
As shown in FIG. 3, this embodiment provides a data processing method applied to a mobile edge computing (MEC) server, including:
Step 301: receiving a current coding mapping relationship, or indication information of the current coding mapping relationship, sent by a terminal;
Step 302: receiving pixel-coded data sent by the terminal;
Step 303: restoring the pixel-coded data according to the current coding mapping relationship to obtain pixel values of three-dimensional video data, where the amount of data of the three-dimensional video data before pixel coding is a first data amount, the amount of data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
In this embodiment, what is received directly is not pixel values but pixel-coded data. After receiving the pixel-coded data, the MEC server needs to restore the pixel values of the three-dimensional video data.
Moreover, since the current coding mapping relationship is dynamically determined, the MEC server also receives the current coding mapping relationship, or its indication information, from the terminal, to facilitate restoring the pixel-coded data according to the current coding mapping relationship in step 303.
Because the pixel-coded data received by the MEC server involves a smaller amount of data than receiving the pixel values directly, the bandwidth consumed is smaller.
Step 303 may include at least one of the following:
restoring, according to the current coding mapping relationship, the colour pixel values of the three-dimensional video data from the colour-coded data of the pixel-coded data;
restoring, according to the current coding mapping relationship, the depth pixel values of the three-dimensional video data from the depth-value-coded data of the pixel-coded data.
In this embodiment, colour pixel values are restored from the colour-coded data, and depth pixel values are restored from the depth-value-coded data.
Step 303 may further include at least one of the following:
decoding, according to the single pixel coding manner, the pixel-coded data of individual pixels using the current coding mapping relationship to restore the pixel values of the three-dimensional video data;
decoding, according to the combined pixel coding manner, the pixel-coded data of N*M pixels using the current coding mapping relationship to restore the pixel values of the three-dimensional video data.
In some embodiments, the method further includes:
determining the pixel coding manner of the pixel-coded data; for example, the pixel coding manner may include the single coding manner and/or the combined coding manner.
Step 303 may include:
performing pixel decoding on the pixel-coded data according to the pixel coding manner to obtain the pixel values of the three-dimensional video data.
In some embodiments, there are multiple ways of performing step 302; several optional ways are provided below:
Optional way one: determining the number of pixels contained in the three-dimensional video data and the number of items of the pixel-coded data, and determining the pixel coding manner according to the pixel count and the data count (a sketch of this inference follows below);
Optional way two: exchanging pixel coding parameters with the terminal, where the pixel coding parameters include at least the pixel coding manner.
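A minimal sketch of optional way one, assuming non-overlapping N*M combinations so that the counts relate by a simple ratio; the function name and the error policy are illustrative assumptions:

```python
# Illustrative sketch: infer single vs. combined coding by comparing the
# frame's pixel count with the number of code words received.

def infer_mode(pixel_count, code_count, n, m):
    if code_count == pixel_count:
        return 'single'                 # one code per pixel
    if code_count * n * m == pixel_count:
        return 'combined'               # one code per n x m combination
    raise ValueError('counts do not match either mode')

assert infer_mode(16, 16, 2, 2) == 'single'
assert infer_mode(16, 4, 2, 2) == 'combined'
```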
In some embodiments, the pixel coding parameters include the pixel coding manner; in other implementations, the pixel coding parameters may further include:
the value of N*M of the combined coding manner;
the number of bits occupied by one item of pixel-coded data in the single coding manner and/or the combined coding manner.
In some embodiments, performing pixel decoding on the pixel-coded data according to the pixel coding manner to obtain the pixel values of the three-dimensional video data includes at least one of the following:
decoding, according to the single pixel coding manner, the pixel-coded data of individual pixels to restore the pixel values of the three-dimensional video data;
decoding, according to the combined pixel coding manner, the pixel-coded data of N*M pixels to restore the pixel values of the three-dimensional video data.
In some embodiments, step 302 may include: looking up the pixel coding mapping relationship according to the pixel-coded data to obtain the pixel values corresponding to the pixel-coded data.
In some embodiments, the method further includes:
before restoring the pixel values of the three-dimensional video data from the pixel-coded data, receiving the updated pixel coding mapping relationship, or the updated part of the pixel coding mapping relationship, sent by the terminal.
Through this exchange of the pixel coding mapping relationship, the pixel coding mapping relationship is synchronized between the terminal and the MEC server.
As shown in FIG. 4, this embodiment provides a data processing device applied to a terminal, including:
a determining module 401 configured to dynamically determine a current coding mapping relationship for pixel coding;
a first sending module 402 configured to send the current coding mapping relationship, or indication information of the current coding mapping relationship, to a mobile edge computing (MEC) server;
an obtaining module 403 configured to perform pixel coding on pixel values of three-dimensional video data based on the current coding mapping relationship to obtain pixel-coded data;
a second sending module 404 configured to send the pixel-coded data to the MEC server, where the pixel-coded data is used by the MEC server to restore the three-dimensional video data;
where the amount of data of the three-dimensional video data before pixel coding is a first data amount, the amount of data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
In some embodiments, the first sending module 402, the obtaining module 403, and the second sending module 404 may be program modules corresponding to computer-executable code which, when executed, can realize the aforementioned sending of the pixel-coded data and the three-dimensional video data.
In other embodiments, the first sending module 402, the obtaining module 403, and the second sending module 404 may also be combinations of hardware modules and program modules, for example a complex programmable array or a field-programmable array.
In still other embodiments, the first sending module 402, the obtaining module 403, and the second sending module 404 may correspond to hardware modules; for example, they may be application-specific integrated circuits.
In some embodiments, the determining module 401 includes:
a first selection submodule configured to select the current coding mapping relationship from candidate coding mapping relationships according to the target scene corresponding to the three-dimensional video data;
a second selection submodule configured to select the current coding mapping relationship from candidate coding mapping relationships according to the required precision of the three-dimensional video data.
In some embodiments, the first selection submodule is configured to perform at least one of the following:
if the three-dimensional video data corresponds to a motion scene in which the collection target moves, selecting a coding mapping relationship of the single coding mapping manner as the current coding mapping relationship;
if the three-dimensional video data corresponds to a static scene in which the collection target is still, selecting a coding mapping relationship of the combined coding mapping manner as the current coding mapping relationship;
if the switching rate of the collection scene corresponding to the three-dimensional video data is greater than a first preset rate, selecting a coding mapping relationship of the single coding mapping manner as the current coding mapping relationship;
if the switching rate of the collection scene corresponding to the three-dimensional video data is lower than the first preset rate, selecting a coding mapping relationship of the combined coding mapping manner as the current coding mapping relationship.
In some embodiments, the second selection submodule is configured to perform at least one of the following:
if the required precision of the three-dimensional video data is not lower than a first precision threshold, selecting a coding mapping relationship of the single coding mapping manner from the candidate coding mapping relationships;
if the required precision of the three-dimensional video data is lower than the first precision threshold, selecting a coding mapping relationship of the combined coding mapping manner from the candidate relationships;
if the required precision of the three-dimensional video data is lower than a second precision threshold, selecting, from the candidate relationships, a coding mapping relationship of the combined mapping manner that takes N1*M1 pixels as one combination;
if the required precision of the three-dimensional video data is not lower than the second precision threshold, selecting, from the candidate relationships, a coding mapping relationship of the combined mapping manner that takes N2*M2 pixels as one combination, where N2*M2 is greater than N1*M1.
In some embodiments, the determining module 401 includes:
a generation submodule configured to generate the current coding mapping relationship according to the required precision and/or target scene of the three-dimensional video data.
In some embodiments, the generation submodule is configured to determine the current coding mapping manner according to the precision demand and/or the target scene; if the current coding mapping manner is the single coding mapping manner, sort the pixel values of sample three-dimensional video data in a preset sorting manner to obtain pixel-value serial numbers of the three-dimensional video data; and establish the mapping relationship between the pixel values and the pixel-value serial numbers.
In some embodiments, the pixel-value serial numbers include at least one of the following:
colour-value serial numbers formed by sorting colour values;
depth-value serial numbers formed by sorting depth values.
In some embodiments, the generation submodule is configured to: if the current coding mapping manner is the combined coding mapping manner, determine the value of N*M of the combined coding mapping manner according to the required precision and/or the target scene, where N and M are positive integers; sort the pixel values of the sample three-dimensional video data in combinations of the pixel values of N*M pixels to obtain pixel-combination serial numbers of the sample three-dimensional video data; and establish the mapping relationship between the pixel values and the pixel-combination serial numbers.
In some embodiments, the pixel-combination serial numbers include at least one of the following:
colour-value-combination serial numbers formed by sorting colour-value combinations;
depth-value-combination serial numbers formed by sorting depth-value combinations.
In some embodiments, the device further includes:
an update module configured to update the current coding mapping relationship according to sample three-dimensional video data if the sample three-dimensional video data contains pixel values that are not in the current coding mapping relationship;
a third sending module configured to send the updated current coding mapping relationship, or the updated part of the current coding mapping relationship, to the MEC server.
In some embodiments, the obtaining module 403 is configured to perform at least one of the following:
performing, according to a coding mapping relationship of the single pixel coding manner, single pixel coding on the pixel value of each individual pixel of the three-dimensional video data to obtain first-type coded data, where the number of bits occupied by the first-type coded data is smaller than the number of bits occupied by the pixel value;
performing, according to a coding mapping relationship of the combined pixel coding manner, combined pixel coding on the pixel values of N*M pixels of the three-dimensional video data to obtain second-type pixel codes, where N and M are both positive integers.
As shown in FIG. 5, this embodiment provides a data processing device applied to a mobile edge computing (MEC) server, including:
a first receiving module 501 configured to receive a current coding mapping relationship, or indication information of the current coding mapping relationship, sent by a terminal;
a second receiving module 502 configured to receive pixel-coded data sent by the terminal;
a restoring module 503 configured to restore the pixel-coded data according to the current coding mapping relationship to obtain pixel values of three-dimensional video data, where the amount of data of the three-dimensional video data before pixel coding is a first data amount, the amount of data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
In some embodiments, the first receiving module 501, the second receiving module 502, and the restoring module 503 may be program modules corresponding to computer-executable code which, when executed, can realize the aforementioned receiving of the pixel-coded data and restoration of the three-dimensional video data.
In other embodiments, the first receiving module 501, the second receiving module 502, and the restoring module 503 may also be combinations of hardware modules and program modules, for example a complex programmable array or a field-programmable array.
In still other embodiments, the first receiving module 501, the second receiving module 502, and the restoring module 503 may correspond to hardware modules; for example, they may be application-specific integrated circuits.
In some embodiments, the restoring module 503 is configured to perform at least one of the following:
restoring, according to the current coding mapping relationship, the colour pixel values of the three-dimensional video data from the colour-coded data of the pixel-coded data;
restoring, according to the current coding mapping relationship, the depth pixel values of the three-dimensional video data from the depth-value-coded data of the pixel-coded data.
In other embodiments, the restoring module 503 is configured to perform at least one of the following:
decoding, according to the single pixel coding manner, the pixel-coded data of individual pixels using the current coding mapping relationship to restore the pixel values of the three-dimensional video data;
decoding, according to the combined pixel coding manner, the pixel-coded data of N*M pixels using the current coding mapping relationship to restore the pixel values of the three-dimensional video data.
This embodiment provides a computer storage medium storing computer instructions which, when executed by a processor, implement the steps of the data processing method applied in the terminal or in the MEC server, for example one or more of the methods shown in FIG. 2 and FIG. 3.
As shown in FIG. 6, this embodiment provides an electronic device including a memory, a processor, and computer instructions stored in the memory and executable on the processor; when the processor executes the instructions, the steps of the data processing method applied in the terminal or in the MEC server are implemented; for example, one or more of the methods shown in FIG. 2 to FIG. 3 may be executed.
In some embodiments, the electronic device further includes a communication interface, which can be used for information exchange with other devices. For example, if the electronic device is a terminal, the communication interface can at least exchange information with the MEC server; if the electronic device is an MEC server, the communication interface can at least exchange information with the terminal.
A specific example is provided below in connection with any of the above embodiments:
The mapping table is dynamically selected according to the actual situation of the current target scene, the precision demand, and so on. After the mobile phone finishes collecting RGB data, it scans the RGB of each pixel of the image; if the RGB value is in the colour sequence, the colour serial number replaces the RGB data. Specifically, the RGB values corresponding to all pixels of the whole image are obtained; then, based on the numbering of colours performed in advance, the RGB corresponding to each pixel is replaced with its serial number, and the pixels and corresponding colour serial numbers are packaged and uploaded.
Common colours are numbered in sequence. After the mobile phone finishes collecting red-green-blue (RGB) data, it scans the RGB data of each pixel of the image; if the RGB data is in the colour sequence, the colour serial number replaces the RGB data. Specifically, the RGB data of each pixel of the image is scanned, the RGB data present across the whole image is tallied, the RGB values are then sorted and numbered, the RGB of each pixel is replaced with its serial number, and the pixels and the tallied RGB data are packaged and uploaded. The MEC server and the mobile phone each keep a mapping table; when RGB data is transmitted, pixels are scanned horizontally, and if a pixel is not in the mapping table a new mapping is created (for example pixel RGB → flag A [16-bit] or [32-bit] or [8-bit]) and saved to the mapping table, while the RGB data is replaced with the 16-bit colour serial number. After scanning, the changed entries of the mapping table and the RGB data are uploaded. Alternatively, the coding of a single pixel can be extended so that N×N pixels are coded together. A sketch of this incremental variant follows below.
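A minimal sketch of the incremental mapping-table variant just described, assuming illustrative names and policies: the flag is modelled as the next free 16-bit serial number, and only the table delta is uploaded together with the replaced data.

```python
# Illustrative sketch: scan pixels, create a new entry (pixel RGB -> 16-bit
# flag) for unseen values, and report only the changed entries for upload.

def encode_with_table(pixels, table):
    """table: dict RGB -> 16-bit serial; mutated in place for new colours."""
    codes, new_entries = [], {}
    for rgb in pixels:                        # horizontal scan of the frame
        if rgb not in table:
            flag = len(table)                 # next free 16-bit serial number
            assert flag < 1 << 16, 'table full'
            table[rgb] = flag
            new_entries[rgb] = flag           # remember the delta to upload
        codes.append(table[rgb])
    return codes, new_entries                 # upload both to the MEC server

# Usage: the second frame only uploads the one colour it adds.
shared = {}
encode_with_table([(0, 0, 0), (9, 9, 9)], shared)
codes, delta = encode_with_table([(0, 0, 0), (1, 1, 1)], shared)
assert codes == [0, 2] and delta == {(1, 1, 1): 2}
```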
In the several embodiments provided by the present application, it should be understood that the disclosed method and smart device may be implemented in other ways. The device embodiments described above are only schematic; for example, the division of the units is only a logical function division, and in actual implementation there may be other division manners, such as combining multiple units or components, integrating them into another system, or ignoring or not executing some features. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into a second processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art can understand that all or some of the steps for implementing the above method embodiments can be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when the program is executed, it performs the steps comprising the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, an MEC server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
It should be noted that the technical solutions described in the embodiments of the present application may be combined arbitrarily as long as there is no conflict.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present application shall be covered by the protection scope of the present application.

Claims (30)

  1. A data processing method, applied to a terminal, comprising:
    dynamically determining a current coding mapping relationship for pixel coding;
    sending the current coding mapping relationship, or indication information of the current coding mapping relationship, to a mobile edge computing (MEC) server;
    performing pixel coding on pixel values of three-dimensional video data based on the current coding mapping relationship to obtain pixel-coded data;
    sending the pixel-coded data to the MEC server, wherein the pixel-coded data is used by the MEC server to restore the three-dimensional video data;
    wherein an amount of data of the three-dimensional video data before pixel coding is a first data amount, an amount of data of the three-dimensional video data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
  2. The method according to claim 1, wherein
    dynamically determining the coding mapping relationship for pixel coding comprises at least one of:
    selecting the current coding mapping relationship from candidate coding mapping relationships according to a target scene corresponding to the three-dimensional video data;
    selecting the current coding mapping relationship from candidate coding mapping relationships according to a required precision of the three-dimensional video data.
  3. The method according to claim 2, wherein
    selecting the current coding mapping relationship from the candidate coding mapping relationships according to the target scene corresponding to the three-dimensional video data comprises at least one of:
    if the three-dimensional video data corresponds to a motion scene in which the collection target moves, selecting a coding mapping relationship of a single coding mapping manner as the current coding mapping relationship;
    if the three-dimensional video data corresponds to a static scene in which the collection target is still, selecting a coding mapping relationship of a combined coding mapping manner as the current coding mapping relationship;
    if a switching rate of the collection scene corresponding to the three-dimensional video data is greater than a first preset rate, selecting a coding mapping relationship of the single coding mapping manner as the current coding mapping relationship;
    if the switching rate of the collection scene corresponding to the three-dimensional video data is lower than the first preset rate, selecting a coding mapping relationship of the combined coding mapping manner as the current coding mapping relationship.
  4. The method according to claim 2, wherein
    selecting the current coding mapping relationship from the candidate coding mapping relationships according to the required precision of the three-dimensional video data comprises at least one of:
    if the required precision of the three-dimensional video data is not lower than a first precision threshold, selecting a coding mapping relationship of a single coding mapping manner from the candidate coding mapping relationships;
    if the required precision of the three-dimensional video data is lower than the first precision threshold, selecting a coding mapping relationship of a combined coding mapping manner from the candidate relationships;
    if the required precision of the three-dimensional video data is lower than a second precision threshold, selecting, from the candidate relationships, a coding mapping relationship of a combined mapping manner that takes N1*M1 pixels as one combination;
    if the required precision of the three-dimensional video data is not lower than the second precision threshold, selecting, from the candidate relationships, a coding mapping relationship of a combined mapping manner that takes N2*M2 pixels as one combination, wherein N2*M2 is greater than N1*M1.
  5. The method according to claim 2, wherein
    dynamically determining the coding mapping relationship for pixel coding comprises:
    generating the current coding mapping relationship according to the required precision and/or the target scene of the three-dimensional video data.
  6. The method according to claim 5, wherein
    generating the current coding mapping relationship according to the required precision and/or the target scene of the three-dimensional video data comprises:
    determining a current coding mapping manner according to the precision demand and/or the target scene;
    if the current coding mapping manner is a single coding mapping manner, sorting pixel values of sample three-dimensional video data in a preset sorting manner to obtain pixel-value serial numbers of the three-dimensional video data;
    establishing a mapping relationship between the pixel values and the pixel-value serial numbers.
  7. The method according to claim 6, wherein
    the pixel-value serial numbers comprise at least one of:
    colour-value serial numbers formed by sorting colour values;
    depth-value serial numbers formed by sorting depth values.
  8. The method according to claim 6, wherein
    generating the current coding mapping relationship according to the required precision and/or the target scene of the sample three-dimensional video data comprises:
    if the current coding mapping manner is a combined coding mapping manner, determining a value of N*M of the combined coding mapping manner according to the required precision and/or the target scene, wherein N and M are positive integers;
    sorting the pixel values of the sample three-dimensional video data in combinations of the pixel values of N*M pixels to obtain pixel-combination serial numbers of the sample three-dimensional video data;
    establishing a mapping relationship between the pixel values and the pixel-combination serial numbers.
  9. The method according to claim 8, wherein
    the pixel-combination serial numbers comprise at least one of:
    colour-value-combination serial numbers formed by sorting colour-value combinations;
    depth-value-combination serial numbers formed by sorting depth-value combinations.
  10. The method according to claim 2, wherein
    the method further comprises:
    if sample three-dimensional video data contains pixel values that are not in the current coding mapping relationship, updating the current coding mapping relationship according to the sample three-dimensional video data;
    sending the updated current coding mapping relationship, or an updated part of the current coding mapping relationship, to the MEC server.
  11. The method according to claim 1, wherein
    performing pixel coding on the pixel values of the three-dimensional video data based on the current coding mapping relationship to obtain the pixel-coded data comprises at least one of:
    performing, according to a coding mapping relationship of a single pixel coding manner, single pixel coding on the pixel value of an individual pixel of the three-dimensional video data to obtain first-type coded data, wherein a number of bits occupied by the first-type coded data is smaller than a number of bits occupied by the pixel value;
    performing, according to a coding mapping relationship of a combined pixel coding manner, combined pixel coding on the pixel values of N*M pixels of the three-dimensional video data to obtain second-type pixel codes, wherein N and M are both positive integers.
  12. A data processing method, applied to a mobile edge computing (MEC) server, comprising:
    receiving a current coding mapping relationship, or indication information of the current coding mapping relationship, sent by a terminal;
    receiving pixel-coded data sent by the terminal;
    restoring the pixel-coded data according to the current coding mapping relationship to obtain pixel values of three-dimensional video data; wherein an amount of data of the three-dimensional video data before pixel coding is a first data amount, an amount of data of the three-dimensional video data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
  13. The method according to claim 12, wherein
    restoring the pixel-coded data according to the current coding mapping relationship to obtain the pixel values of the three-dimensional video data comprises at least one of:
    restoring, according to the current coding mapping relationship, colour pixel values of the three-dimensional video data from colour-coded data of the pixel-coded data;
    restoring, according to the current coding mapping relationship, depth pixel values of the three-dimensional video data from depth-value-coded data of the pixel-coded data.
  14. The method according to claim 12, wherein
    restoring the pixel-coded data according to the current coding mapping relationship to obtain the pixel values of the three-dimensional video data comprises at least one of:
    decoding, according to a single pixel coding manner, the pixel-coded data of an individual pixel using the current coding mapping relationship to restore the pixel values of the three-dimensional video data;
    decoding, according to a combined pixel coding manner, the pixel-coded data of N*M pixels using the current coding mapping relationship to restore the pixel values of the three-dimensional video data.
  15. A data processing device, applied to a terminal, comprising:
    a determining module configured to dynamically determine a current coding mapping relationship for pixel coding;
    a first sending module configured to send the current coding mapping relationship, or indication information of the current coding mapping relationship, to a mobile edge computing (MEC) server;
    an obtaining module configured to perform pixel coding on pixel values of three-dimensional video data based on the current coding mapping relationship to obtain pixel-coded data;
    a second sending module configured to send the pixel-coded data to the MEC server, wherein the pixel-coded data is used by the MEC server to restore the three-dimensional video data;
    wherein an amount of data of the three-dimensional video data before pixel coding is a first data amount, an amount of data of the three-dimensional video data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
  16. The device according to claim 15, wherein
    the determining module comprises:
    a first selection submodule configured to select the current coding mapping relationship from candidate coding mapping relationships according to a target scene corresponding to the three-dimensional video data;
    a second selection submodule configured to select the current coding mapping relationship from candidate coding mapping relationships according to a required precision of the three-dimensional video data.
  17. The device according to claim 16, wherein
    the first selection submodule is configured to perform at least one of:
    if the three-dimensional video data corresponds to a motion scene in which the collection target moves, selecting a coding mapping relationship of a single coding mapping manner as the current coding mapping relationship;
    if the three-dimensional video data corresponds to a static scene in which the collection target is still, selecting a coding mapping relationship of a combined coding mapping manner as the current coding mapping relationship;
    if a switching rate of the collection scene corresponding to the three-dimensional video data is greater than a first preset rate, selecting a coding mapping relationship of the single coding mapping manner as the current coding mapping relationship;
    if the switching rate of the collection scene corresponding to the three-dimensional video data is lower than the first preset rate, selecting a coding mapping relationship of the combined coding mapping manner as the current coding mapping relationship.
  18. The device according to claim 16, wherein
    the second selection submodule is configured to perform at least one of:
    if the required precision of the three-dimensional video data is not lower than a first precision threshold, selecting a coding mapping relationship of a single coding mapping manner from the candidate coding mapping relationships;
    if the required precision of the three-dimensional video data is lower than the first precision threshold, selecting a coding mapping relationship of a combined coding mapping manner from the candidate relationships;
    if the required precision of the three-dimensional video data is lower than a second precision threshold, selecting, from the candidate relationships, a coding mapping relationship of a combined mapping manner that takes N1*M1 pixels as one combination;
    if the required precision of the three-dimensional video data is not lower than the second precision threshold, selecting, from the candidate relationships, a coding mapping relationship of a combined mapping manner that takes N2*M2 pixels as one combination, wherein N2*M2 is greater than N1*M1.
  19. The device according to claim 15, wherein
    the determining module comprises:
    a generation submodule configured to generate the current coding mapping relationship according to the required precision and/or target scene of the three-dimensional video data.
  20. The device according to claim 19, wherein
    the generation submodule is configured to determine a current coding mapping manner according to the precision demand and/or the target scene; if the current coding mapping manner is a single coding mapping manner, sort pixel values of sample three-dimensional video data in a preset sorting manner to obtain pixel-value serial numbers of the three-dimensional video data; and establish a mapping relationship between the pixel values and the pixel-value serial numbers.
  21. The device according to claim 20, wherein
    the pixel-value serial numbers comprise at least one of:
    colour-value serial numbers formed by sorting colour values;
    depth-value serial numbers formed by sorting depth values.
  22. The device according to claim 19, wherein
    the generation submodule is configured to: if the current coding mapping manner is a combined coding mapping manner, determine a value of N*M of the combined coding mapping manner according to the required precision and/or the target scene, wherein N and M are positive integers; sort the pixel values of the sample three-dimensional video data in combinations of the pixel values of N*M pixels to obtain pixel-combination serial numbers of the sample three-dimensional video data; and establish a mapping relationship between the pixel values and the pixel-combination serial numbers.
  23. The device according to claim 22, wherein
    the pixel-combination serial numbers comprise at least one of:
    colour-value-combination serial numbers formed by sorting colour-value combinations;
    depth-value-combination serial numbers formed by sorting depth-value combinations.
  24. The device according to claim 15, wherein
    the device further comprises:
    an update module configured to update the current coding mapping relationship according to sample three-dimensional video data if the sample three-dimensional video data contains pixel values that are not in the current coding mapping relationship;
    a third sending module configured to send the updated current coding mapping relationship, or an updated part of the current coding mapping relationship, to the MEC server.
  25. The device according to claim 24, wherein
    the obtaining module is configured to perform at least one of:
    performing, according to a coding mapping relationship of a single pixel coding manner, single pixel coding on the pixel value of an individual pixel of the three-dimensional video data to obtain first-type coded data, wherein a number of bits occupied by the first-type coded data is smaller than a number of bits occupied by the pixel value;
    performing, according to a coding mapping relationship of a combined pixel coding manner, combined pixel coding on the pixel values of N*M pixels of the three-dimensional video data to obtain second-type pixel codes, wherein N and M are both positive integers.
  26. A data processing device, applied to a mobile edge computing (MEC) server, comprising:
    a first receiving module configured to receive a current coding mapping relationship, or indication information of the current coding mapping relationship, sent by a terminal;
    a second receiving module configured to receive pixel-coded data sent by the terminal;
    a restoring module configured to restore the pixel-coded data according to the current coding mapping relationship to obtain pixel values of three-dimensional video data; wherein an amount of data of the three-dimensional video data before pixel coding is a first data amount, an amount of data of the three-dimensional video data after pixel coding is a second data amount, and the first data amount is greater than the second data amount.
  27. The device according to claim 26, wherein
    the restoring module is specifically configured to perform at least one of:
    restoring, according to the current coding mapping relationship, colour pixel values of the three-dimensional video data from colour-coded data of the pixel-coded data;
    restoring, according to the current coding mapping relationship, depth pixel values of the three-dimensional video data from depth-value-coded data of the pixel-coded data.
  28. The device according to claim 26, wherein
    the restoring module is specifically configured to perform at least one of:
    decoding, according to a single pixel coding manner, the pixel-coded data of an individual pixel using the current coding mapping relationship to restore the pixel values of the three-dimensional video data;
    decoding, according to a combined pixel coding manner, the pixel-coded data of N*M pixels using the current coding mapping relationship to restore the pixel values of the three-dimensional video data.
  29. A computer storage medium having computer instructions stored thereon, wherein, when executed by a processor, the instructions implement the steps of the data processing method according to any one of claims 1 to 11; or, when executed by a processor, the instructions implement the steps of the data processing method according to any one of claims 12 to 14.
  30. An electronic device comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein, when executing the instructions, the processor implements the steps of the data processing method according to any one of claims 1 to 11, or implements the steps of the data processing method according to any one of claims 12 to 14.
PCT/CN2019/100639 2018-09-30 2019-08-14 Data processing method and device, electronic device, and storage medium WO2020063169A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19866939.2A EP3849178B1 (en) 2018-09-30 2019-08-14 Data processing method and storage medium
US17/207,111 US11368718B2 (en) 2018-09-30 2021-03-19 Data processing method and non-transitory computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811163427.8A CN109151436B (zh) 2018-09-30 2018-09-30 数据处理方法及装置、电子设备及存储介质
CN201811163427.8 2018-09-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/207,111 Continuation US11368718B2 (en) 2018-09-30 2021-03-19 Data processing method and non-transitory computer storage medium

Publications (1)

Publication Number Publication Date
WO2020063169A1 true WO2020063169A1 (zh) 2020-04-02

Family

ID=64810639

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/100639 WO2020063169A1 (zh) 2018-09-30 2019-08-14 数据处理方法及装置、电子设备及存储介质

Country Status (4)

Country Link
US (1) US11368718B2 (zh)
EP (1) EP3849178B1 (zh)
CN (2) CN109151436B (zh)
WO (1) WO2020063169A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866763A (zh) * 2020-12-28 2021-05-28 网宿科技股份有限公司 Method for generating sequence numbers of HLS multi-bit-rate stream slices, server, and storage medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151436B (zh) * 2018-09-30 2021-02-02 Oppo广东移动通信有限公司 Data processing method and device, electronic device, and storage medium
CN109257609B (zh) * 2018-09-30 2021-04-23 Oppo广东移动通信有限公司 Data processing method and device, electronic device, and storage medium
CN111385579B (zh) * 2020-04-09 2022-07-26 广州市百果园信息技术有限公司 Video compression method, apparatus, device, and storage medium
CN112597334B (zh) * 2021-01-15 2021-09-28 天津帕克耐科技有限公司 Data processing method for a communication data center
CN113505707A (zh) * 2021-07-14 2021-10-15 腾讯音乐娱乐科技(深圳)有限公司 Smoking behaviour detection method, electronic device, and readable storage medium
WO2024011370A1 (zh) * 2022-07-11 2024-01-18 Oppo广东移动通信有限公司 Video image processing method and apparatus, codec, code stream, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611274A (zh) * 2016-01-08 2016-05-25 湖南拓视觉信息技术有限公司 Three-dimensional image data transmission method and device, and three-dimensional imaging system
US20160150237A1 (en) * 2014-11-25 2016-05-26 Electronics And Telecommunications Research Institute Apparatus and method for transmitting and receiving 3dtv broadcasting
CN108495112A (zh) * 2018-05-10 2018-09-04 Oppo广东移动通信有限公司 Data transmission method, terminal, and computer storage medium
CN109151436A (zh) * 2018-09-30 2019-01-04 Oppo广东移动通信有限公司 Data processing method and device, electronic device, and storage medium
CN109257609A (zh) * 2018-09-30 2019-01-22 Oppo广东移动通信有限公司 Data processing method and device, electronic device, and storage medium
CN109274976A (zh) * 2018-09-30 2019-01-25 Oppo广东移动通信有限公司 Data processing method and device, electronic device, and storage medium

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101822063A (zh) * 2007-08-16 2010-09-01 诺基亚公司 Method and apparatus for encoding and decoding images
CN101472190B (zh) * 2007-12-28 2013-01-23 华为终端有限公司 Multi-view imaging and image processing device and system
US9185426B2 (en) * 2008-08-19 2015-11-10 Broadcom Corporation Method and system for motion-compensated frame-rate up-conversion for both compressed and decompressed video bitstreams
WO2011005511A2 (en) * 2009-06-22 2011-01-13 Sony Corporation A method of compression of graphics images and videos
CN101742349B (zh) * 2010-01-05 2011-07-20 浙江大学 Method for representing a three-dimensional scene and television system therefor
CN102387359A (zh) * 2010-08-31 2012-03-21 中国电信股份有限公司 Three-dimensional video transmission method, system, and codec device
CN102055982B (zh) * 2011-01-13 2012-06-27 浙江大学 Three-dimensional video coding and decoding method and device
CN102170579B (zh) * 2011-03-23 2013-10-09 深圳超多维光电子有限公司 Graphics image processing system, method, and chip
WO2013039363A2 (ko) * 2011-09-16 2013-03-21 한국전자통신연구원 Image encoding/decoding method and apparatus therefor
CN103096049A (zh) * 2011-11-02 2013-05-08 华为技术有限公司 Video processing method and system, and related device
US11259020B2 (en) * 2013-04-05 2022-02-22 Qualcomm Incorporated Determining palettes in palette-based video coding
US9571809B2 (en) * 2013-04-12 2017-02-14 Intel Corporation Simplified depth coding with modified intra-coding for 3D video coding
US10045048B2 (en) * 2013-10-18 2018-08-07 Lg Electronics Inc. Method and apparatus for decoding multi-view video
CN105100814B (zh) * 2014-05-06 2020-07-14 同济大学 Image encoding and decoding method and device
US10341632B2 (en) * 2015-04-15 2019-07-02 Google Llc. Spatial random access enabled video system with a three-dimensional viewing volume
US9734009B2 (en) * 2015-10-08 2017-08-15 Sandisk Technologies Llc Data encoding techniques for a device
CN106651972B (zh) * 2015-11-03 2020-03-27 杭州海康威视数字技术股份有限公司 Binary image encoding and decoding method and device
CN108111833A (zh) * 2016-11-24 2018-06-01 阿里巴巴集团控股有限公司 Method, device, and system for stereoscopic video coding and decoding
CN108123777A (zh) * 2016-11-30 2018-06-05 华为技术有限公司 Coding mode determination method and device
CN108235007B (zh) * 2016-12-12 2023-06-27 上海天荷电子信息有限公司 Data compression method and device in which each mode uses the same kind of coding parameters with different precision
CN106961612B (zh) * 2017-03-16 2021-02-02 Oppo广东移动通信有限公司 Image processing method and device
US10643301B2 (en) * 2017-03-20 2020-05-05 Qualcomm Incorporated Adaptive perturbed cube map projection
WO2018215046A1 (en) * 2017-05-22 2018-11-29 Telefonaktiebolaget Lm Ericsson (Publ) Edge cloud broker and method therein for allocating edge cloud resources

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160150237A1 (en) * 2014-11-25 2016-05-26 Electronics And Telecommunications Research Institute Apparatus and method for transmitting and receiving 3dtv broadcasting
CN105611274A (zh) * 2016-01-08 2016-05-25 湖南拓视觉信息技术有限公司 Three-dimensional image data transmission method and device, and three-dimensional imaging system
CN108495112A (zh) * 2018-05-10 2018-09-04 Oppo广东移动通信有限公司 Data transmission method, terminal, and computer storage medium
CN109151436A (zh) * 2018-09-30 2019-01-04 Oppo广东移动通信有限公司 Data processing method and device, electronic device, and storage medium
CN109257609A (zh) * 2018-09-30 2019-01-22 Oppo广东移动通信有限公司 Data processing method and device, electronic device, and storage medium
CN109274976A (zh) * 2018-09-30 2019-01-25 Oppo广东移动通信有限公司 Data processing method and device, electronic device, and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866763A (zh) * 2020-12-28 2021-05-28 网宿科技股份有限公司 Method for generating sequence numbers of HLS multi-bit-rate stream slices, server, and storage medium

Also Published As

Publication number Publication date
CN109151436B (zh) 2021-02-02
CN112672132A (zh) 2021-04-16
US11368718B2 (en) 2022-06-21
US20210211725A1 (en) 2021-07-08
EP3849178A1 (en) 2021-07-14
EP3849178A4 (en) 2021-08-18
CN112672132B (zh) 2023-12-26
CN109151436A (zh) 2019-01-04
EP3849178B1 (en) 2023-07-12

Similar Documents

Publication Publication Date Title
WO2020063169A1 (zh) Data processing method and device, electronic device, and storage medium
AU2019345715B2 (en) Methods and devices for data processing, electronic device
EP3429207A1 (en) A method and apparatus for encoding/decoding a colored point cloud representing the geometry and colors of a 3d object
US11631217B2 (en) Data processing method and electronic device
CN109274976B (zh) Data processing method and device, electronic device, and storage medium
US11394978B2 (en) Video fidelity measure
US20210377542A1 (en) Video encoding and decoding method, device, and system, and storage medium
US20150365685A1 (en) Method and system for encoding and decoding, encoder and decoder
WO2021147463A1 (zh) Video processing method and apparatus, and electronic device
CN110720223A (zh) Dual deblocking filter thresholds
CN109257588A (zh) Data transmission method, terminal, server, and storage medium
RU2799771C2 (ru) Methods and devices for data processing, electronic device
CN109309839B (zh) Data processing method and device, electronic device, and storage medium
CN109389674B (zh) Data processing method and device, MEC server, and storage medium
WO2023202177A1 (zh) Image encoding method and apparatus
WO2020063172A1 (zh) Data processing method, terminal, server, and storage medium
CN115988258B (zh) IoT-device-based video communication method, storage medium, and system
WO2022164358A1 (en) Managing handover execution

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19866939

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019866939

Country of ref document: EP

Effective date: 20210409