WO2021087800A1 - Information processing method, encoding device, decoding device, system, and storage medium - Google Patents

Information processing method, encoding device, decoding device, system, and storage medium

Info

Publication number: WO2021087800A1 (application PCT/CN2019/115935)
Authority: WIPO (PCT)
Prior art keywords: information, depth, depth information, video, image
Application number: PCT/CN2019/115935
Other languages: English (en), French (fr)
Inventor: 贾玉虎
Original Assignee: OPPO广东移动通信有限公司 (Guangdong OPPO Mobile Telecommunications Corp., Ltd.)
Application filed by OPPO广东移动通信有限公司
Priority to CN201980098950.3A (CN114175626B)
Priority to PCT/CN2019/115935 (WO2021087800A1)
Publication of WO2021087800A1
Priority to US17/691,095 (US20220230361A1)


Classifications

    • G06T 9/00: Image coding
    • H04N 19/597: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • G06T 3/40: Scaling the whole image or part thereof
    • G06T 7/50: Depth or shape recovery
    • G06T 7/70: Determining position or orientation of objects or cameras
    • H04N 13/271: Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N 19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • G06T 2207/10016: Video; Image sequence

Definitions

  • the embodiments of the present application relate to image processing technology, and in particular, to an information processing method, encoding device, decoding device, system, and storage medium.
  • when transmitting video signals in the related technology, in order to increase the transmission speed, an encoder first video-encodes the two-dimensional image collected by the image sensor and the depth image collected by the depth camera to form video encoding information, and sends the video encoding information to a decoder; the decoder then decodes the two-dimensional image and the depth image from the video encoding information. It can be seen that the related technology only obtains the depth image at the encoding end, encodes and transmits it, and then uses the depth image at the decoder end to render the two-dimensional image stereoscopically. However, the amount of information actually obtained by the depth camera is far greater than the amount of information presented by the depth image, so encoding and transmitting only the depth image reduces the information utilization rate.
  • This application provides an information processing method, encoding device, decoding device, system, and storage medium, which can improve the utilization rate of information.
  • the embodiment of the present application provides an information processing method, which is applied to an encoding device, and the method includes: collecting depth information and video frames; performing joint coding or independent coding on the depth information and the video frames to obtain coded information; and writing the coded information into a code stream and sending the code stream to a decoding device, so that the decoding device performs image processing based on the coded information.
  • the collection of depth information and video frames includes: collecting the video frames within a preset time period, and collecting initial depth information through a time-of-flight module or a binocular vision module within the preset time period; and using the initial depth information as the depth information.
  • the method further includes: after the video frames are collected and the initial depth information is collected, performing phase calibration on the initial depth information to obtain phase information, and using the phase information as the depth information.
  • the method further includes: after the video frames are collected and the initial depth information is collected, performing depth image generation on the initial depth information to obtain redundant information; the redundant information is information other than the depth image that is generated in the process of generating the depth image; and using the redundant information as the depth information.
  • the coding information is hybrid coding information; the joint coding of the depth information and the video frame to obtain the coding information includes:
  • using the correlation between the depth information and the video frame to jointly encode the depth information and the video frame to obtain the hybrid coding information; or, encoding the video frame to obtain video coding information, encoding the depth information to obtain depth coding information, and merging the depth coding information into a preset position in the video coding information to obtain the hybrid coding information.
  • the coding information is depth coding information and video coding information; the independent coding of the depth information and the video frame to obtain the coding information includes:
  • encoding the depth information to obtain the depth coding information; and encoding the video frame to obtain the video coding information.
  • the encoding of the depth information to obtain the depth coding information includes: performing reduction processing on the depth information to obtain reduced depth information, where the data amount of the reduced depth information is less than the data amount of the depth information; and encoding the reduced depth information to obtain the depth coding information.
  • the performing reduction processing on the depth information to obtain the reduced depth information includes: determining a partial video frame from the video frames, and determining partial depth information corresponding to the partial video frame from the depth information; and using the partial depth information as the reduced depth information.
  • the performing reduction processing on the depth information to obtain the reduced depth information includes: performing redundancy elimination on the depth information by using the phase correlation of the depth information, the spatial correlation of the depth information, the temporal correlation of the depth information, a preset depth range, or the frequency domain correlation of the depth information, to obtain eliminated depth information; and using the eliminated depth information as the reduced depth information.
  • the performing redundancy elimination on the depth information to obtain the eliminated depth information includes:
  • when the depth information is at least two pieces of phase information, using the phase correlation between the at least two pieces of phase information to perform redundancy elimination on the at least two pieces of phase information to obtain the eliminated depth information;
  • when the depth information is not the at least two pieces of phase information, using the spatial correlation of the depth information to perform redundancy elimination on the depth information to obtain the eliminated depth information; or
  • performing frequency domain conversion on the depth information to obtain frequency domain information, and using the frequency domain correlation to perform redundancy elimination on the frequency domain information to obtain the eliminated depth information.
  • the method further includes: after the coded information is obtained, using the correlation between the coded binary data to perform bit redundancy elimination on the coded information to obtain eliminated coded information; and writing the eliminated coded information into a code stream and sending the code stream to a decoding device, so that the decoding device performs image processing based on the eliminated coded information.
  • the embodiment of the present application provides an information processing method, which is applied to a decoding device, and the method includes: when a code stream carrying coded information is received, performing joint decoding or independent decoding on the code stream to obtain depth information and video frames; and using the depth information to perform image processing on the video frames to obtain target image frames, and synthesizing the target image frames into a video.
  • the using the depth information to perform image processing on the video frame to obtain a target image frame includes: using the depth information to adjust the depth of field of the video frame to obtain a depth-of-field image frame, and using the depth-of-field image frame as the target image frame.
  • the using the depth information to perform image processing on the video frame to obtain a target image frame includes: when the depth information is phase information, using the phase information to deblur the video frame to obtain a deblurred image frame, and using the deblurred image frame as the target image frame.
  • the method further includes: after the depth information and the video frames are obtained, restoring the depth information to generate a depth image frame.
  • An embodiment of the application provides an encoding device, the encoding device includes: a depth information module, an image sensor, and an encoder;
  • the depth information module is used to collect depth information
  • the image sensor is used to collect video frames
  • the encoder is configured to jointly or independently encode the depth information and the video frame to obtain encoded information; and to write the encoded information into a code stream and send the code stream to a decoding device, so that the decoding device performs image processing based on the encoded information.
  • the depth information module includes a depth information sensor
  • the image sensor is also used to collect the video frame within a preset time period
  • the depth information sensor is configured to collect initial depth information through a time-of-flight module or a binocular vision module within the preset time period; and use the initial depth information as the depth information.
  • the depth information module is further configured to, after the video frames are collected and the initial depth information is collected by the time-of-flight module or the binocular vision module, perform phase calibration on the initial depth information to obtain phase information, and use the phase information as the depth information.
  • the depth information module is further configured to, after the video frames are collected and the initial depth information is collected by the time-of-flight module or the binocular vision module, perform depth image generation on the initial depth information to obtain redundant information; the redundant information is information other than the depth image that is generated in the process of generating the depth image; and the redundant information is used as the depth information.
  • the encoding information is hybrid encoding information
  • the encoder includes a video encoder
  • the video encoder is configured to use the correlation between the depth information and the video frame to jointly encode the depth information and the video frame to obtain the hybrid encoding information; or,
  • the video frame is encoded to obtain video encoding information; the depth information is encoded to obtain depth encoding information; and the depth encoding information is merged into a preset position in the video encoding information to obtain the hybrid encoding information.
  • the video encoder is further configured to perform reduction processing on the depth information to obtain reduced depth information, where the data amount of the reduced depth information is less than the data amount of the depth information; and to encode the reduced depth information to obtain the depth encoding information.
  • the video encoder is further configured to determine a partial video frame from the video frame, determine the partial depth information corresponding to the partial video frame from the depth information, and use the partial depth information as the reduced depth information;
  • the video encoder is further configured to perform redundancy elimination on the depth information by using the phase correlation of the depth information, the spatial correlation of the depth information, the temporal correlation of the depth information, a preset depth range, or the frequency domain correlation of the depth information, to obtain the eliminated depth information; and to use the eliminated depth information as the reduced depth information.
  • the video encoder is further configured to, when the depth information is at least two pieces of phase information, use the phase correlation between the at least two pieces of phase information to perform redundancy elimination on the at least two pieces of phase information to obtain the eliminated depth information;
  • when the depth information is not the at least two pieces of phase information, use the spatial correlation of the depth information to perform redundancy elimination on the depth information to obtain the eliminated depth information; or
  • perform frequency domain conversion on the depth information to obtain frequency domain information, and use the frequency domain correlation to perform redundancy elimination on the frequency domain information to obtain the eliminated depth information.
  • the encoding information is depth encoding information and video encoding information
  • the encoder includes a depth information encoder and a video encoder
  • the depth information encoder is configured to encode the depth information to obtain the depth coding information
  • the video encoder is used to encode the video frame to obtain the video encoding information.
  • the depth information encoder is further configured to perform reduction processing on the depth information to obtain reduced depth information, where the data amount of the reduced depth information is less than the data amount of the depth information; and to encode the reduced depth information to obtain the depth encoding information.
  • the depth information encoder is further configured to determine a partial video frame from the video frame, determine the partial depth information corresponding to the partial video frame from the depth information, and use the partial depth information as the reduced depth information;
  • the depth information encoder is further configured to perform redundancy elimination on the depth information by using the phase correlation of the depth information, the spatial correlation of the depth information, the temporal correlation of the depth information, a preset depth range, or the frequency domain correlation of the depth information, to obtain the eliminated depth information; and to use the eliminated depth information as the reduced depth information.
  • the depth information encoder is further configured to, when the depth information is at least two pieces of phase information, use the phase correlation between the at least two pieces of phase information to perform redundancy elimination on the at least two pieces of phase information to obtain the eliminated depth information;
  • when the depth information is not the at least two pieces of phase information, use the spatial correlation of the depth information to perform redundancy elimination on the depth information to obtain the eliminated depth information; or
  • perform frequency domain conversion on the depth information to obtain frequency domain information, and use the frequency domain correlation to perform redundancy elimination on the frequency domain information to obtain the eliminated depth information.
  • the encoder is further configured to, after jointly encoding or independently encoding the depth information and the video frame to obtain the encoded information, use the correlation between the encoded binary data to perform bit redundancy elimination on the encoded information to obtain eliminated encoded information; and to write the eliminated encoded information into a code stream and send the code stream to a decoding device, so that the decoding device performs image processing based on the eliminated encoded information.
  • An embodiment of the present application provides a decoding device, where the decoding device includes: an image processor and a decoder;
  • the decoder is configured to perform joint decoding or independent decoding on the code stream when a code stream carrying coding information is received to obtain the depth information and the video frame;
  • the image processor is configured to use the depth information to perform image processing on the video frame to obtain a target image frame, and synthesize the target image frame into a video.
  • the image processor is further configured to use the depth information to adjust the depth of field of the video frame to obtain a depth-of-field image frame; and to use the depth-of-field image frame as the target image frame.
  • the image processor is further configured to, when the depth information is phase information, use the phase information to deblur the video frame to obtain a deblurred image frame; and to use the deblurred image frame as the target image frame.
  • the decoding device further includes a depth image generator
  • the depth image generator is configured to restore the depth information to generate a depth image frame after the depth information and the video frame are obtained by jointly or independently decoding the bitstream.
  • the decoder includes a video decoder, and the decoding device further includes a depth image generator;
  • the depth image generator and the image processor are independent of the video decoder, and the video decoder connects the depth image generator and the image processor; or, the depth image generator and the image processor are integrated in the video decoder; or, the depth image generator is integrated in the video decoder, the image processor is independent of the video decoder, and the video decoder is connected to the image processor; or, the image processor is integrated in the video decoder, the depth image generator is independent of the video decoder, and the video decoder is connected to the depth image generator.
  • the decoder includes a depth information decoder and a video decoder, and the decoding device further includes a depth image generator;
  • the depth image generator is independent of the depth information decoder, the image processor is independent of the video decoder, and the depth information decoder connects the depth image generator and the image processor, and
  • the video decoder is connected to the image processor; or, the depth image generator is integrated in the depth information decoder, the image processor is independent of the video decoder, the depth information decoder and the The video decoder is connected to the image processor; or, the depth image generator is independent of the depth information decoder, the image processor is integrated in the video decoder, and the depth information decoder is connected to the A depth image generator and the video decoder; or, the depth image generator is integrated in the video decoder, the image processor is integrated in the depth information decoder, and the depth information decoder is connected The video decoder.
  • An embodiment of the application provides an information processing system, the system includes: an encoding device and a decoding device, the encoding device includes a depth information module, an image sensor, and an encoder, and the decoding device includes an image processor and a decoder;
  • the depth information module is used to collect depth information
  • the image sensor is used to collect video frames
  • the encoder is configured to jointly or independently encode the depth information and the video frame to obtain encoded information; and to write the encoded information into a code stream and send the code stream to the decoding device;
  • the decoder is configured to jointly decode or independently decode the code stream when the code stream is received to obtain the depth information and the video frame;
  • the image processor is configured to use the depth information to perform image processing on the video frame to obtain a target image frame, and synthesize the target image frame into a video.
  • the embodiment of the present application provides a computer-readable storage medium, which stores one or more programs, and the one or more programs can be executed by one or more first processors to implement any one of the above-mentioned information processing methods applied to an encoding device.
  • the embodiments of the present application provide a computer-readable storage medium, which stores one or more programs, and the one or more programs can be executed by one or more second processors to implement any one of the above-mentioned information processing methods applied to a decoding device.
  • FIG. 1 is a schematic flowchart of an information processing method applied to an encoding device according to an embodiment of the application
  • FIG. 2 is a schematic flowchart of another information processing method applied to an encoding device according to an embodiment of the application
  • FIG. 3 is a schematic flowchart of an information processing method applied to a decoding device according to an embodiment of the application
  • FIG. 4 is a schematic flowchart of another information processing method applied to a decoding device according to an embodiment of the application
  • FIG. 5 is a schematic flowchart of an information processing method applied to an encoding device and a decoding device according to an embodiment of the application;
  • FIG. 6 is a first schematic structural diagram of an encoding device provided by an embodiment of this application;
  • FIG. 7 is a second schematic structural diagram of an encoding device provided by an embodiment of this application;
  • FIG. 8(a) is a first schematic structural diagram of a decoding device provided by an embodiment of this application;
  • FIG. 8(b) is a second schematic structural diagram of a decoding device provided by an embodiment of this application;
  • FIG. 8(c) is a third schematic structural diagram of a decoding device provided by an embodiment of this application;
  • FIG. 8(d) is a fourth schematic structural diagram of a decoding device provided by an embodiment of this application;
  • FIG. 9(a) is a fifth schematic structural diagram of a decoding device provided by an embodiment of this application;
  • FIG. 9(b) is a sixth schematic structural diagram of a decoding device provided by an embodiment of this application;
  • FIG. 9(c) is a seventh schematic structural diagram of a decoding device provided by an embodiment of this application;
  • FIG. 9(d) is an eighth schematic structural diagram of a decoding device provided by an embodiment of this application;
  • FIG. 10(a) is a first schematic structural diagram of an information processing system provided by an embodiment of this application;
  • FIG. 10(b) is a second schematic structural diagram of an information processing system provided by an embodiment of this application;
  • FIG. 10(c) is a third schematic structural diagram of an information processing system provided by an embodiment of this application;
  • FIG. 10(d) is a fourth schematic structural diagram of an information processing system provided by an embodiment of this application;
  • FIG. 11(a) is a fifth schematic structural diagram of an information processing system provided by an embodiment of this application;
  • FIG. 11(b) is a sixth schematic structural diagram of an information processing system provided by an embodiment of this application;
  • FIG. 11(c) is a seventh schematic structural diagram of an information processing system provided by an embodiment of this application;
  • FIG. 11(d) is an eighth schematic structural diagram of an information processing system provided by an embodiment of this application.
  • the embodiment of the present application provides an information processing method, which is applied to an encoding device. As shown in FIG. 1, the information processing method includes:
  • the encoding device collects depth information and video frames simultaneously within a preset time period; here, the video frames refer to the multiple frames of images collected within the preset time period, and these frames constitute a video of the preset duration.
  • each frame of depth information corresponds to one frame of image in the video frame.
  • the encoding device collects video frames within a preset time period, and collects initial depth information through a time-of-flight module, a binocular vision module, or another depth information acquisition module; the acquired initial depth information is used as the depth information.
  • the encoding device uses the image sensor to collect the video frames and, at the same time, uses the depth information module to collect the initial depth information, and uses the collected initial depth information as the depth information; the depth information module includes a time-of-flight (TOF) module or a binocular vision module.
  • the TOF module is a TOF camera.
  • the depth information module determines the original charge image and/or sensor attribute parameters (such as temperature) as the initial depth information.
  • the acquisition process of the charge images can be as follows: under two different transmit signal frequencies, by controlling the integration time, the depth information module samples multiple sets of signals with different phases, and after photoelectric conversion these multiple sets of signals are bit-quantized to generate multiple original charge images.
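
As an illustrative, non-limiting sketch, the snippet below shows how four phase-shifted charge images of the kind described above could be converted into a depth map, assuming the common four-bucket TOF convention; the function name, array shapes, and 20 MHz modulation frequency are assumptions for illustration and are not taken from the patent.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s


def tof_depth_from_charges(q0, q1, q2, q3, f_mod):
    """Estimate a depth map from four phase-shifted charge images (0/90/180/270 deg).

    q0..q3 : np.ndarray  raw charge images sampled at one modulation frequency
    f_mod  : float       modulation frequency in Hz
    """
    # Phase of the reflected signal relative to the emitted signal (four-bucket rule).
    phase = np.arctan2(q3 - q1, q0 - q2)
    phase = np.mod(phase, 2.0 * np.pi)          # wrap into [0, 2*pi)
    # One full phase cycle corresponds to half the modulation wavelength.
    return C * phase / (4.0 * np.pi * f_mod)


# Illustrative usage with random charge images (not real sensor data).
charges = [np.random.rand(480, 640) for _ in range(4)]
depth_map = tof_depth_from_charges(*charges, f_mod=20e6)  # ~7.5 m unambiguous range
```
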
  • the binocular vision module is a binocular camera.
  • the depth information module uses the binocular camera to capture two images and, according to the relative pose of the two cameras, calculates the parallax (disparity) and related information; the depth information module then uses the disparity information, the camera parameters, and so on as the initial depth information.
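
As a minimal sketch of how such disparity information and camera parameters relate to depth for a rectified stereo pair (the standard Z = f * B / d relationship, which the patent does not spell out), assuming illustrative calibration values:

```python
import numpy as np


def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Convert a disparity map from a rectified stereo pair into metric depth (Z = f * B / d)."""
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)      # zero disparity -> infinitely far
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth


# Illustrative usage with made-up calibration values.
disparity = np.random.randint(1, 64, size=(480, 640)).astype(np.float64)
depth_map = depth_from_disparity(disparity, focal_px=700.0, baseline_m=0.12)
```
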
  • the encoding device collects video frames and collects initial depth information through a time-of-flight module or a binocular vision module; the initial depth information is phase-calibrated to obtain phase information, and the calibrated phase information is used as the depth information.
  • the depth information module in the encoding device performs phase calibration on the initial depth information to obtain phase information; or performs other processing on the initial depth information to generate other information, and use other information as depth information.
  • the phase information may be speckles, laser fringes, Gray codes, sine fringes, etc. obtained by the depth information module.
  • the specific phase information may be determined according to actual conditions, which is not limited in the embodiment of the present application.
  • the encoding device collects video frames and collects initial depth information through a time-of-flight module or a binocular vision module; depth image generation is performed on the initial depth information to obtain redundant information, where the redundant information is information other than the depth image that is generated in the process of generating the depth image; and the redundant information is used as the depth information.
  • the depth information module in the encoding device uses the initial depth information to generate a depth image, and obtains information other than the depth image generated in the process of generating the depth image, that is, redundant information.
  • for example, when the TOF camera is used to obtain the original charge images, the depth information module generates two pieces of process depth data and one piece of background data from the original charge images, and uses these two pieces of process depth data and one piece of background data as the depth information of the target object.
  • S102 Perform joint coding or independent coding on the depth information and the video frame to obtain coding information
  • the encoder in the encoding device performs joint encoding on the depth information and the video frame to obtain information jointly corresponding to the depth information and the video frame, that is, mixed encoding information; or independently encodes the depth information and the video frame to obtain information respectively representing the depth information and the video frame, namely depth coding information and video coding information.
  • the video encoder in the encoding device uses the correlation between the video frame and the depth information to jointly encode each piece of depth information and the corresponding video frame, obtaining one piece of hybrid encoding information at a time, and then obtains the hybrid encoding information composed of all such pieces.
  • the encoding information is hybrid encoding information; the encoding device uses the correlation between the depth information and the video frame to jointly encode the depth information and the video frame to obtain the hybrid encoding information;
  • the video frame is encoded to obtain video encoding information, the depth information is encoded to obtain the depth encoding information, and the depth encoding information is merged into a preset position of the video encoding information to obtain mixed encoding information.
  • the encoder in the encoding device includes a video encoder.
  • the video encoder uses the spatial correlation or temporal correlation of the depth information to encode the depth information to obtain the depth encoding information; to encode the video frame to obtain the video frame encoding information; Then, the depth coding information and the video frame coding information are combined to obtain mixed coding information.
  • the preset position may be an image information header, a sequence information header, an additional parameter set, or any other position.
  • the video encoder in the encoding device encodes each piece of depth information to obtain one piece of depth coding information, and encodes each corresponding video frame to obtain one piece of video frame coding information; the piece of depth coding information is then merged into the image information header of that video frame's coding information to obtain one piece of mixed coding information; finally, the mixed coding information composed of all such pieces is obtained, where the video coding information consists of all the video frame coding information.
  • the video encoder in the encoding device encodes the depth information to obtain the depth coding information; encodes the video frame to obtain the video coding information; merges the depth coding information into the sequence information header of the video coding information to obtain the mixed Encoding information.
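
As an illustrative, non-limiting sketch of the "merge the depth coding information into a preset position" idea, the snippet below inserts an encoded depth payload immediately after a per-frame header in a hypothetical byte-level container; the marker bytes, length field, and layout are assumptions for illustration and do not correspond to any particular codec syntax (a real system would use, for example, an image information header, sequence information header, or additional parameter set of the chosen video codec).

```python
import struct

# Hypothetical markers; a real system would reuse the codec's own syntax elements.
FRAME_HEADER_MARKER = b"\x00\x00\x01\xF0"
DEPTH_PAYLOAD_MARKER = b"\x00\x00\x01\xF1"


def merge_depth_into_frame(video_frame_bytes: bytes, depth_bytes: bytes) -> bytes:
    """Insert an encoded depth payload right after the frame's header marker."""
    depth_unit = DEPTH_PAYLOAD_MARKER + struct.pack(">I", len(depth_bytes)) + depth_bytes
    header_end = video_frame_bytes.index(FRAME_HEADER_MARKER) + len(FRAME_HEADER_MARKER)
    return video_frame_bytes[:header_end] + depth_unit + video_frame_bytes[header_end:]


def split_depth_from_frame(mixed_bytes: bytes):
    """Recover the depth payload and the original video frame bytes."""
    start = mixed_bytes.index(DEPTH_PAYLOAD_MARKER)
    size = struct.unpack(">I", mixed_bytes[start + 4:start + 8])[0]
    depth = mixed_bytes[start + 8:start + 8 + size]
    video = mixed_bytes[:start] + mixed_bytes[start + 8 + size:]
    return depth, video


# Illustrative round trip.
frame = FRAME_HEADER_MARKER + b"encoded-video-frame-payload"
mixed = merge_depth_into_frame(frame, b"encoded-depth-payload")
depth, video = split_depth_from_frame(mixed)
assert depth == b"encoded-depth-payload" and video == frame
```

A decoder that ignores the hypothetical depth marker would simply skip the inserted unit, which mirrors the behaviour described next for standard decoders extracting only the video frame.
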
  • a decoding device that uses only the standard video codec protocol may extract only the video frame from the mixed coding information, without extracting the depth information; it is also possible to extract only the depth information from the mixed coding information, without extracting the video frame; the embodiment of the present application does not limit this.
  • the encoding information is depth encoding information and video encoding information; the encoding device encodes the depth information to obtain the depth encoding information; and the video frame is encoded to obtain the video encoding information.
  • the encoder in the encoding device includes a depth information encoder and a video encoder.
  • the depth information encoder uses the spatial correlation or temporal correlation of the depth information to encode the depth information to obtain the depth encoding information; the video encoder encodes the video frame Perform encoding to obtain video encoding information.
  • the video encoder uses a video encoding and decoding protocol to encode video frames to obtain video encoding information;
  • the video encoding and decoding protocol can be H.264, H.265, H.266, VP9, or AV1, etc.
  • the depth information encoder adopts an industry standard or a specific standard of a specific organization to encode the depth information to obtain the depth coding information.
  • the encoding device performs reduction processing on the depth information to obtain the reduced depth information; the data amount of the reduced depth information is less than the data amount of the depth information; and the reduced depth information is encoded to obtain the depth encoding information .
  • the encoder in the encoding device performs reduction processing on the depth information, so that the data amount of the reduced depth information is smaller than the data amount of the depth information, which reduces the encoding workload of the depth information.
  • the encoding device determines part of the video frame from the video frame, and determines part of the depth information corresponding to the part of the video frame from the depth information; or, determines the position of the part of the image from the video frame, and determines from the depth information Part of the depth information corresponding to the part of the image position; use the part of the depth information as the depth coding information.
  • the encoding device may encode all of the depth information; or encode only the depth information corresponding to part of the video frames and not encode the depth information corresponding to the remaining video frames; or encode only the depth information corresponding to a partial image position of each video frame and not encode the depth information corresponding to the rest of each video frame; the embodiment of the present application does not limit this.
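
As a minimal, non-limiting sketch of keeping only "partial depth information", assuming the partial video frames are chosen by a simple every-n-th-frame rule and the partial image position is given as a rectangle; both rules are illustrative assumptions rather than anything specified in the patent.

```python
import numpy as np


def select_partial_depth(depth_frames, keep_every_n=2, roi=None):
    """Keep depth only for selected frames and, optionally, a region of interest.

    depth_frames : list of np.ndarray  one depth/phase map per video frame
    keep_every_n : int                 keep depth for every n-th video frame only
    roi          : (y0, y1, x0, x1)    optional image region whose depth is kept
    """
    partial = {}
    for idx, depth in enumerate(depth_frames):
        if idx % keep_every_n != 0:
            continue                      # depth for this frame is not encoded
        if roi is not None:
            y0, y1, x0, x1 = roi
            depth = depth[y0:y1, x0:x1]   # keep only the partial image position
        partial[idx] = depth
    return partial                        # frame index -> reduced depth data


# Illustrative usage.
frames = [np.random.rand(120, 160) for _ in range(8)]
reduced = select_partial_depth(frames, keep_every_n=2, roi=(30, 90, 40, 120))
```
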
  • the encoding device uses the phase correlation of the depth information, the spatial correlation of the depth information, the temporal correlation of the depth information, a preset depth range, or the frequency domain correlation of the depth information to eliminate the redundancy of the depth information, obtains the eliminated depth information, and uses the eliminated depth information as the reduced depth information.
  • in order to compress the size of the encoded information, the encoding device performs a redundancy elimination operation during the process of encoding the depth information, and then encodes the eliminated depth information to obtain the depth coding information.
  • when the depth information module in the encoding device determines that the depth information is at least two pieces of phase information, it uses the phase correlation between the at least two pieces of phase information to eliminate the redundancy of the at least two pieces of phase information and obtain the eliminated depth information;
  • when it is determined that the depth information is not at least two pieces of phase information, the spatial correlation of the depth information is used to eliminate the redundancy of the depth information to obtain the eliminated depth information; or frequency domain conversion is performed on the depth information to obtain frequency domain information, and the frequency domain correlation is used to eliminate redundancy in the frequency domain information to obtain the eliminated depth information;
  • the preset depth range is the range within which the depth information sensor can collect depth information.
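
As an illustrative, non-limiting sketch of two of the reduction strategies mentioned above, assuming the frequency domain conversion is a DCT and that values outside the preset depth range can simply be clipped; the keep ratio and range limits are assumptions for illustration, and SciPy's dctn/idctn are used only as a convenient stand-in for whatever transform an implementation actually adopts.

```python
import numpy as np
from scipy.fft import dctn, idctn


def clip_to_preset_range(depth, d_min=0.2, d_max=8.0):
    """Discard values outside the range the depth sensor can measure reliably."""
    return np.clip(depth, d_min, d_max)


def frequency_domain_reduce(depth, keep_ratio=0.25):
    """Exploit frequency-domain correlation: keep only the low-frequency DCT block."""
    coeffs = dctn(depth, norm="ortho")
    h = int(depth.shape[0] * keep_ratio)
    w = int(depth.shape[1] * keep_ratio)
    return coeffs[:h, :w].copy()          # this smaller block is what gets encoded


def frequency_domain_restore(reduced, full_shape):
    """Decoder-side counterpart: zero-pad the kept coefficients and invert the DCT."""
    coeffs = np.zeros(full_shape)
    coeffs[:reduced.shape[0], :reduced.shape[1]] = reduced
    return idctn(coeffs, norm="ortho")


# Illustrative round trip on a synthetic depth map.
depth = clip_to_preset_range(np.random.rand(120, 160) * 10.0)
reduced = frequency_domain_reduce(depth)
approx = frequency_domain_restore(reduced, depth.shape)
```
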
  • the encoding device collects depth information and video frames from at least one viewpoint; determines interval viewpoints from the at least one viewpoint, and uses the depth information corresponding to the interval viewpoints as the interval depth information; performs joint coding or independent coding on the interval depth information and the video frames to obtain interval coding information, and sends the interval coding information to the decoding device, so that the decoding device performs image processing based on the interval coding information; here, a viewpoint represents a shooting angle.
  • for multiple viewpoints, considering that multiple pieces of depth information (such as phase information or charge images) collected at the same time from multiple viewpoints of the same scene are strongly correlated, the encoding device can encode and send only the depth information of the interval viewpoints among the multiple viewpoints; after the decoding device obtains the depth information of the interval viewpoints, it can use it to generate the depth information of the other viewpoints among the multiple viewpoints, apart from the interval viewpoints.
  • the encoding device often collects the depth information and video frames of multiple viewpoints, and can encode the depth information of the interval viewpoints and the video frames of the multiple viewpoints independently or jointly to obtain the interval coding information; the interval coding information is either the joint information corresponding to the depth information of the interval viewpoints together with the video frames of the multiple viewpoints, or the information respectively corresponding to the depth information of the interval viewpoints and to the video frames of the multiple viewpoints.
  • for example, when there are 3 viewpoints, the interval viewpoints among the 3 viewpoints are the left and right viewpoints, and the other viewpoint among the 3 viewpoints is the intermediate viewpoint.
  • the encoding device writes the encoded information into the code stream, and sends the code stream to the decoding device.
  • the video encoder in the coding device writes the mixed coding information into the mixed code stream, and sends the mixed code stream to the decoding device.
  • the video encoder in the encoding device writes the video encoding information into the video encoding bitstream, and sends the video encoding bitstream to the decoding device;
  • the depth information encoder in the encoding device writes the depth coding information into the depth coding information code stream, and sends the depth coding information code stream to the decoding device.
  • the information processing method further includes:
  • the encoding device performs a bit redundancy elimination operation after obtaining the encoded information to obtain the eliminated encoded information.
  • the depth information encoder in the encoding device removes bit redundancy from the depth coding information to obtain the eliminated depth coding information; after the video encoder in the encoding device obtains the video coding information, the bit redundancy of the video coding information is eliminated to obtain the eliminated video coding information; the eliminated depth coding information and the eliminated video coding information together form the eliminated coding information.
  • the video encoder in the coding device removes bit redundancy from the mixed coding information to obtain the eliminated coding information.
  • the video encoder in the encoding device writes the eliminated encoding information into the mixed code stream and sends the mixed code stream to the decoding device; or the video encoder in the encoding device writes the eliminated video encoding information into the video encoding code stream , And send the video coding stream to the decoding device; the depth information encoder in the coding device writes the deleted depth coding information into the depth coding information bit stream, and sends the depth coding information bit stream to the decoding device.
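
the patent does not name a specific entropy coder for eliminating bit redundancy; as a hedged, non-limiting stand-in, the sketch below uses zlib's lossless DEFLATE compression to illustrate removing bit-level redundancy from already-encoded binary data, with the decoder recovering the original bytes exactly.

```python
import zlib


def eliminate_bit_redundancy(encoded_info: bytes) -> bytes:
    """Losslessly compress already-encoded binary data to remove bit redundancy."""
    return zlib.compress(encoded_info, level=9)


def restore_encoded_info(eliminated_info: bytes) -> bytes:
    """Decoder-side inverse: recover the original encoded information exactly."""
    return zlib.decompress(eliminated_info)


# Illustrative round trip on a highly redundant byte string.
encoded = b"\x00\x01" * 4096
eliminated = eliminate_bit_redundancy(encoded)
assert restore_encoded_info(eliminated) == encoded
assert len(eliminated) < len(encoded)
```
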
  • in this way, the encoding device directly encodes the depth information to obtain encoding information representing the depth information, and sends the encoding information to the decoding device; the decoding device can decode the depth information and video frames from the encoded information, and furthermore the decoding device can not only use the depth information to recover the depth image, but also use the depth information to perform image processing on the video frames, which improves the information utilization rate.
  • the embodiment of the present application also provides an information processing method, which is applied to a decoding device. As shown in FIG. 3, the information processing method includes:
  • after receiving the code stream, the decoder in the decoding device performs joint decoding or independent decoding on the code stream to obtain the depth information and the video frames.
  • the decoding device may also receive a code stream carrying the coded information after elimination, and perform joint decoding or independent decoding on the code stream carrying the coded information after elimination to obtain depth information and video frames.
  • the code stream is a mixed coded information code stream; the decoding device decodes the mixed coded information code stream to obtain video frames and depth information.
  • the decoder in the decoding device includes a video decoder, and the video decoder decodes the mixed coding information to obtain depth information and video frames.
  • the code stream is a video coding information code stream and a depth coding information code stream; the decoding device decodes the video coding information code stream to obtain a video frame; and decodes the depth coding information code stream to obtain depth information.
  • the decoder in the decoding device includes a video decoder and a depth information decoder.
  • the video decoder decodes the video encoding information to obtain a video frame; the depth information decoder decodes the depth encoding information to obtain the depth information.
  • S302 Perform image processing on the video frame by using the depth information to obtain a target image frame, and synthesize the target image frame into a video.
  • the decoding device can use each depth information in the depth information to perform image processing on each video frame corresponding to it in the video frame to obtain a target image frame, and then obtain all target image frames, and The video is synthesized from all target image frames, and the video is displayed.
  • the decoding device uses the depth information to process the video frame correspondingly according to the default decoding requirements; or, receives a decoding instruction, and in response to the decoding instruction, uses the depth information to process the video frame accordingly; wherein,
  • the decoding instruction may be a depth setting instruction, an image enhancement instruction, or a background blur instruction, etc.
  • the decoding device uses the depth information to adjust the depth of field of the video frame to obtain a depth-of-field image frame, and the depth-of-field image frame is used as the target image frame.
  • each piece of depth information is used to adjust the depth of field of the video frame corresponding to it, to obtain a depth-of-field image frame.
  • the depth information can be directly used to act on the video frame to generate an image with depth of field, and it is not necessary to superimpose the depth image generated by using the depth information with the video frame to generate an image with depth of field.
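
as an illustrative, non-limiting sketch of applying the decoded depth information directly to a video frame to produce a depth-of-field effect (rather than first superimposing a generated depth image), the snippet below blends the sharp frame with a blurred copy according to each pixel's distance from a chosen focus plane; the blending rule and the focus_depth, tolerance, and max_sigma parameters are assumptions for illustration, not the patent's algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def adjust_depth_of_field(frame, depth, focus_depth, tolerance=0.5, max_sigma=6.0):
    """Blend a sharp frame with a blurred copy so pixels near the focus plane stay sharp.

    frame       : np.ndarray (H, W) or (H, W, C)  decoded video frame, float in [0, 1]
    depth       : np.ndarray (H, W)               per-pixel depth in metres
    focus_depth : float                           depth the viewer wants in focus
    """
    # Blur weight grows with distance from the chosen focus plane.
    distance = np.clip(np.abs(depth - focus_depth) - tolerance, 0.0, None)
    weight = distance / distance.max() if distance.max() > 0 else distance
    blurred = gaussian_filter(frame, sigma=(max_sigma, max_sigma) + (0,) * (frame.ndim - 2))
    if frame.ndim == 3:
        weight = weight[..., None]
    return (1.0 - weight) * frame + weight * blurred


# Illustrative usage with synthetic data.
frame = np.random.rand(120, 160, 3)
depth = np.random.rand(120, 160) * 5.0
dof_frame = adjust_depth_of_field(frame, depth, focus_depth=1.5)
```
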
  • the decoding device uses the phase information to deblur the video frame to obtain a deblurred image; the deblurred image frame is used as the target image frame.
  • the image processor in the decoding device receives the image enhancement instruction, in response to the image enhancement instruction, it analyzes each phase information to obtain the analysis result, and uses the analysis result to deblur each corresponding video frame to obtain the deblurring image.
  • the decoding device uses the phase information to perform foreground or background blurring on the video frame to obtain a blurred image frame, and uses the blurred image frame as the target image frame.
  • each piece of depth information is used to blur the foreground or background of the video frame corresponding to it, to obtain a blurred image frame.
  • the decoding device uses the charge information to determine the noise and external visible light in the shooting scene, thereby helping to denoise the video frame and adjust its white balance, and generates a higher-quality video to show to users, improving the user's image and video experience.
  • the decoding device decodes the interval coding information independently or jointly to obtain the depth information of the interval viewpoints and the video frames of the at least one viewpoint; interpolation is then performed on the depth information of the interval viewpoints to obtain the depth information of the viewpoints among the at least one viewpoint other than the interval viewpoints.
  • for example, the at least one viewpoint is 3 viewpoints of the same scene, and the interval viewpoints among the 3 viewpoints are the left and right viewpoints; the depth information of the left and right viewpoints can be interpolated to obtain the depth information of the middle viewpoint.
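
as a minimal, non-limiting sketch of recovering the middle viewpoint's depth from the two transmitted interval viewpoints, assuming rectified, equally spaced viewpoints so that a simple linear interpolation is meaningful; a real system would also warp pixels according to the view geometry, which the patent does not detail.

```python
import numpy as np


def interpolate_middle_depth(depth_left, depth_right, alpha=0.5):
    """Estimate the middle viewpoint's depth from the left/right interval viewpoints.

    alpha is the relative position of the middle viewpoint between the left (0.0)
    and right (1.0) viewpoints; rectified, equally spaced viewpoints are assumed.
    """
    return (1.0 - alpha) * depth_left + alpha * depth_right


# Illustrative usage: only the left/right depth maps were transmitted.
left = np.random.rand(120, 160) * 5.0
right = np.random.rand(120, 160) * 5.0
middle = interpolate_middle_depth(left, right)
```
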
  • the information processing method further includes:
  • the depth image generator in the decoding device processes each depth information in the depth information to obtain a depth image frame.
  • when the depth information is phase information, multiple pieces of phase information collected at multiple time points within the preset time period are used to perform motion estimation to recover a depth image, so that the recovered depth image is clearer; here, one depth image is the depth image corresponding to one time point among the multiple time points, and the multiple time points may be consecutive time points.
  • the embodiment of the present application encodes and sends the phase information within a preset time period, instead of encoding the depth image.
  • the decoding device can decode from the code stream to obtain the phase information within the preset time length, and then use the phase information within the preset time length to obtain multiple phase information corresponding to multiple time points to realize the restoration of a depth image .
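
the patent does not spell out the motion estimation step; as a hedged, non-limiting sketch, the snippet below aligns consecutive phase frames with a simple global shift estimated by FFT cross-correlation and averages them, which illustrates why combining phase information from several time points can yield a cleaner depth image; the alignment model (pure integer translation) is an assumption for illustration.

```python
import numpy as np


def estimate_shift(ref, moving):
    """Integer shift that aligns `moving` to `ref`, estimated by FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map shifts larger than half the image size to negative offsets.
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx


def fuse_phase_frames(phase_frames):
    """Motion-compensate consecutive phase frames to the first one and average them."""
    ref = phase_frames[0]
    aligned = [ref]
    for frame in phase_frames[1:]:
        dy, dx = estimate_shift(ref, frame)
        aligned.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)       # lower-noise phase -> cleaner depth image


# Illustrative usage: several noisy phase frames from consecutive time points.
frames = [np.random.rand(120, 160) for _ in range(4)]
fused_phase = fuse_phase_frames(frames)
```
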
  • the information processing system includes an encoding device and a decoding device; FIG. 5 shows a schematic flowchart of an information processing method applied to the information processing system.
  • the information processing method includes:
  • the encoding device collects depth information and video frames
  • the encoding device performs joint encoding or independent encoding on the depth information and the video frame to obtain encoding information;
  • the encoding information represents joint information corresponding to the depth information and the video frame, or represents information respectively corresponding to the depth information and to the video frame;
  • the encoding device writes the encoding information into the code stream, and sends the code stream to the decoding device;
  • when receiving the code stream carrying the coded information, the decoding device performs joint decoding or independent decoding on the code stream to obtain the depth information and the video frames;
  • the decoding device uses the depth information to perform image processing on the video frame to obtain a target image frame, and synthesize the target image frame into a video.
  • since the decoding device receives the coding information that characterizes the depth information, it can decode the depth information and the video frames from the coding information; furthermore, the decoding device can not only use the depth information to recover the depth image, but can also use the depth information to perform optimization processing on the video frames, such as depth-of-field adjustment and deblurring, which improves the information utilization rate; and because the target image frame obtained after the optimization processing has a better image effect than the original video frame, the image quality is also improved.
  • the embodiment of the present application also provides an encoding device.
  • the encoding device 6 includes: a depth information module 61, an image sensor 62, and an encoder 60;
  • the depth information module 61 is used to collect depth information
  • the image sensor 62 is used to collect video frames
  • the encoder 60 is used to jointly or independently encode the depth information and the video frames to obtain encoded information; and to write the encoded information into the code stream and send the code stream to the decoding device, so that the decoding device performs image processing based on the encoded information.
  • the depth information module 61 includes a depth information sensor 611;
  • the image sensor 62 is also used to collect video frames within a preset time period
  • the depth information sensor 611 is configured to collect initial depth information through a time-of-flight module or a binocular vision module within the preset time period; and use the initial depth information as the depth information.
  • the depth information module 61 is further configured to, after the video frames are collected and the initial depth information is collected by the time-of-flight module or the binocular vision module, perform phase calibration on the initial depth information to obtain phase information, and use the phase information as the depth information.
  • the depth information module 61 is further configured to, after the video frames are collected and the initial depth information is collected by the time-of-flight module or the binocular vision module, perform depth image generation on the initial depth information to obtain redundant information; the redundant information is information other than the depth image that is generated in the process of generating the depth image; and the redundant information is used as the depth information.
  • the encoding information is depth encoding information and video encoding information;
  • the encoder 60 includes a depth information encoder 63 and a video encoder 64; among them,
  • the depth information encoder 63 is configured to encode depth information to obtain depth coding information
  • the video encoder 64 is used to encode video frames to obtain video encoding information.
  • the depth information encoder 63 is also used to perform reduction processing on the depth information to obtain reduced depth information, where the data amount of the reduced depth information is less than the data amount of the depth information; and the reduced depth information is encoded to obtain the depth coding information.
  • the depth information encoder 63 is further configured to determine a partial video frame from the video frame, determine the partial depth information corresponding to the partial video frame from the depth information, and use the partial depth information as the reduced depth information;
  • the depth information encoder 63 is further configured to use the phase correlation of the depth information, the spatial correlation of the depth information, the time correlation of the depth information, the preset depth range or the frequency domain correlation of the depth information, Perform redundancy elimination on the depth information to obtain the eliminated depth information; and use the eliminated depth information as the reduced depth information.
  • the depth information encoder 63 is further configured to, when the depth information is at least two pieces of phase information, use the phase correlation between the at least two pieces of phase information to eliminate the redundancy of the at least two pieces of phase information and obtain the eliminated depth information;
  • when the depth information is not at least two pieces of phase information, use the spatial correlation of the depth information to eliminate the redundancy of the depth information and obtain the eliminated depth information;
  • the encoding information is mixed encoding information; as shown in FIG. 7 for a schematic structural diagram of another encoding device, the encoder 60 includes a video encoder 71;
  • the video encoder 71 is configured to use the correlation between the depth information and the video frame to jointly encode the depth information and the video frame to obtain mixed coding information;
  • the video frame is encoded to obtain video encoding information; the depth information is encoded to obtain depth encoding information; the depth encoding information is merged into a preset position in the video encoding information to obtain mixed encoding information.
  • the video encoder 71 is also used to perform reduction processing on the depth information to obtain reduced depth information, where the data amount of the reduced depth information is less than the data amount of the depth information; and the reduced depth information is encoded to obtain the depth coding information.
  • the video encoder 71 is further configured to determine a partial video frame from the video frame, determine the partial depth information corresponding to the partial video frame from the depth information, and use the partial depth information as the reduced depth information;
  • the video encoder 71 is also used to perform redundancy elimination on the depth information by utilizing the phase correlation of the depth information, the spatial correlation of the depth information, the temporal correlation of the depth information, a preset depth range, or the frequency domain correlation of the depth information, to obtain the eliminated depth information; and the eliminated depth information is used as the reduced depth information.
  • the video encoder 71 is further configured to, when the depth information is at least two pieces of phase information, use the phase correlation between the at least two pieces of phase information to eliminate the redundancy of the at least two pieces of phase information and obtain the eliminated depth information;
  • when the depth information is not at least two pieces of phase information, use the spatial correlation of the depth information to eliminate the redundancy of the depth information and obtain the eliminated depth information;
  • the encoder 60 is also used to jointly or independently encode the depth information and the video frame to obtain the encoded information, and then use the correlation between the encoded binary data to eliminate bit redundancy in the encoded information, Obtain the eliminated encoded information; and write the eliminated encoded information into the code stream, and send the code stream to the decoding device, so that the decoding device performs image processing based on the eliminated encoded information.
  • the embodiments of the present application provide a computer-readable storage medium, which is applied to an encoding device.
  • the computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more first processors. When executed by the first processor, the information processing method as applied to the encoding device is realized.
  • the embodiment of the present application also provides a decoding device, which includes: an image processor and a decoder;
  • the decoder is used to jointly or independently decode the code stream when receiving the code stream carrying the coded information to obtain depth information and video frames;
  • the image processor is used to use the depth information to perform image processing on the video frame to obtain the target image frame, and synthesize the target image frame into the video.
  • the bitstream is a video coding information bitstream and a depth coding information bitstream
  • the decoder includes a video decoder and a depth information decoder
  • the video decoder is used to decode the video encoding information stream to obtain the video frame;
  • the depth information decoder is used to decode the depth encoding information bitstream to obtain depth information.
  • the code stream is a mixed coded information code stream;
  • the decoder includes a video decoder;
  • the video decoder is used to decode the mixed coded information stream to obtain video frames and depth information.
  • the image processor is further configured to use the depth information to adjust the depth of field of the video frame to obtain the depth-of-field image frame; and use the depth-of-field image frame as the target image frame.
  • the image processor is further configured to use the phase information to deblur the video frame to obtain a deblurred image frame when the depth information is phase information; and use the deblurred image frame as the target image frame.
  • the decoding device further includes a depth image generator
  • the depth image generator is used to jointly or independently decode the code stream to obtain the depth information and the video frame, and then restore the depth information to generate the depth image frame.
  • the decoder includes a video decoder, and the decoding device further includes a depth image generator;
  • the depth image generator and the image processor are independent of the video decoder, and the video decoder connects the depth image generator and the image processor; or, the depth image generator and the image processor are integrated in the video decoder; or, the depth image generator is integrated in the video decoder, the image processor is independent of the video decoder, and the video decoder is connected to the image processor; or, the image processor is integrated in the video decoder, the depth image generator is independent of the video decoder, and the video decoder is connected to the depth image generator.
  • the decoding device 18 includes an image processor 181, and also includes a video decoder 182 and a depth image generator 183; the depth image generator 183 and the image processor 181 are independent of the video decoder 182, and the video decoder 182 is connected to the depth image generator 183 and the image processor 181; the video decoder 182 processes the mixed coding information and outputs the depth information and the video frames.
  • the video decoder 182 transmits the depth information to the depth image generator 183, and the depth image generator 183 restores the depth information and outputs the depth image frame; the video decoder 182 sends the video frame and depth information to the image processor 181, and the image processor 181 uses The depth information performs image processing on the video frame and outputs the target image frame.
  • the decoding device 28 includes an image processor 281, a video decoder 282, and a depth image generator 283; the depth image generator 283 and the image processor 281 are integrated in the video decoder 282; the video decoder 282 processes the mixed encoding information and directly outputs the depth image frame and/or the target image frame.
  • the decoding device 38 includes an image processor 381, a video decoder 382, and a depth image generator 383; the depth image generator 383 is integrated in the video decoder 382, the image processor 381 is independent of the video decoder 382, and the video decoder 382 is connected to the image processor 381; the video decoder 382 processes the mixed encoding information and outputs the depth image frame, the depth information, and the video frame, and then sends the video frame and the depth information to the image processor 381; the image processor 381 uses the depth information to perform image processing on the video frame and outputs the target image frame.
  • the decoding device 48 includes an image processor 481, a video decoder 482, and a depth image generator 483; the image processor 481 is integrated in the video decoder 482, the depth image generator 483 is independent of the video decoder 482, and the video decoder 482 is connected to the depth image generator 483.
  • the video decoder 482 processes the mixed encoding information and outputs the depth information and the target image frame.
  • the video decoder 482 then sends the depth information to the depth image generator 483; the depth image generator 483 restores the depth information and outputs a depth image frame.
  • the decoder includes a depth information decoder and a video decoder, and the decoding device further includes a depth image generator;
  • the depth image generator is independent of the depth information decoder, the image processor is independent of the video decoder, the depth information decoder is connected to the depth image generator and the image processor, and the video decoder is connected to the image processor; or, the depth image generator is integrated in the depth information decoder, the image processor is independent of the video decoder, and the depth information decoder and the video decoder are connected to the image processor; or, the depth image generator is independent of the depth information decoder, the image processor is integrated in the video decoder, and the depth information decoder is connected to the depth image generator and the video decoder; or, the depth image generator is integrated in the video decoder, the image processor is integrated in the depth information decoder, and the depth information decoder is connected to the video decoder.
  • the decoding device 19 includes an image processor 191, a depth information decoder 192, a video decoder 193, and a depth image generator 194; the depth image generator 194 is independent of the depth information decoder 192, the image processor 191 is independent of the video decoder 193, the depth information decoder 192 is connected to the depth image generator 194 and the image processor 191, and the video decoder 193 is connected to the image processor 191.
  • the video decoder 193 processes the video encoding information and outputs video frames, and the depth information decoder 192 processes the depth encoding information and outputs depth information; the video decoder 193 transmits the video frames to the image processor 191, and the depth information decoder 192 transmits the depth information to the depth image generator 194 and the image processor 191; the depth image generator 194 outputs the depth image frame, and the image processor 191 outputs the target image frame.
  • the decoding device 29 includes an image processor 291, a depth information decoder 292, a video decoder 293, and a depth image generator 294; the depth image generator 294 is integrated in the depth information decoder 292, the image processor 291 is independent of the video decoder 293, and the depth information decoder 292 and the video decoder 293 are connected to the image processor 291; the video decoder 293 processes the video encoding information and outputs video frames, and the depth information decoder 292 processes the depth encoding information and outputs depth information and depth image frames; the video decoder 293 transmits the video frames to the image processor 291, the depth information decoder 292 transmits the depth information to the image processor 291, and the image processor 291 outputs the target image frame.
  • the decoding device 39 includes an image processor 391, a depth information decoder 392, a video decoder 393, and a depth image generator 394; the depth image generator 394 is independent of the depth information decoder 392, the image processor 391 is integrated in the video decoder 393, and the depth information decoder 392 is connected to the depth image generator 394 and the video decoder 393; the depth information decoder 392 processes the depth encoding information and outputs the depth information; the depth information decoder 392 transmits the depth information to the depth image generator 394 and the video decoder 393, the depth image generator 394 outputs the depth image frame, and the video decoder 393 outputs the target image frame based on the video encoding information and the depth information.
  • the decoding device 49 includes an image processor 491, a depth information decoder 492, a video decoder 493, and a depth image generator 494; the depth image generator 494 is integrated in the depth information decoder 492, the image processor 491 is integrated in the video decoder 493, and the depth information decoder 492 is connected to the video decoder 493; the depth information decoder 492 processes the depth encoding information and outputs depth information and depth image frames; the depth information decoder 492 transmits the depth information to the video decoder 493, and the video decoder 493 outputs the target image frame based on the video encoding information and the depth information.
  • the embodiments of the present application provide a computer-readable storage medium, which is applied to a decoding device.
  • the computer-readable storage medium stores one or more programs that can be executed by one or more second processors; when executed by a second processor, the programs implement the information processing method applied to the decoding device.
  • the embodiments of the present application also provide an information processing system.
  • the information processing system includes an encoding device and a decoding device.
  • the encoding device includes a depth information module, an image sensor, and an encoder
  • the decoding device includes an image processor and a decoder;
  • the depth information module is used to collect the depth information;
  • the image sensor is used to collect the video frames;
  • the encoder is used to jointly or independently encode the depth information and the video frame to obtain the encoded information; and write the encoded information into the code stream, and send the code stream to the decoding device;
  • the decoder is used to jointly decode or independently decode the code stream to obtain depth information and video frames when the code stream is received;
  • the image processor is used to perform image processing on the video frame by using the depth information to obtain the target image frame, and to synthesize the target image frame into the video.
  • FIG. 10(a) is a schematic structural diagram of an information processing system.
  • the information processing system includes an encoding device 7 and a decoding device 18; the encoding device 7 includes a video encoder 71, and the decoding device 18 includes an image processor 181, a video decoder 182, and a depth image generator 183; the video encoder 71 sends the mixed encoding information to the video decoder 182; the video decoder 182 processes the mixed encoding information and outputs the depth information and the video frames.
  • the depth image generator 183 restores the depth information and outputs a depth image frame; the image processor 181 uses the depth information to perform image processing on the video frames to output the target image frame.
  • FIG. 10(b) is a schematic structural diagram of an information processing system.
  • the information processing system includes an encoding device 7 and a decoding device 28; the encoding device 7 includes a video encoder 71, and the decoding device 28 includes a video decoder 282.
  • the video decoder 282 includes a depth image generator 283 and an image processor 281; the video encoder 71 sends the mixed encoding information to the video decoder 282; the video decoder 282 processes the mixed encoding information and directly outputs the depth image frame and the target image frame.
  • FIG. 10(c) is a schematic structural diagram of an information processing system.
  • the information processing system includes an encoding device 7 and a decoding device 38; the encoding device 7 includes a video encoder 71, and the decoding device 38 includes an image processor 381 and a video decoder 382.
  • the video decoder 382 includes a depth image generator 383; the video encoder 71 sends the mixed encoding information to the video decoder 382; the video decoder 382 processes the mixed encoding information and outputs the depth image frame, the depth information, and the video frame; the image processor 381 uses the depth information to perform image processing on the video frame to output the target image frame.
  • FIG. 10(d) is a schematic structural diagram of an information processing system.
  • the information processing system includes an encoding device 7 and a decoding device 48; the encoding device 7 includes a video encoder 71, and the decoding device 48 includes a video decoder 482 and a depth image generator 483.
  • the video decoder 482 includes an image processor 481; the video encoder 71 sends the mixed encoding information to the video decoder 482; the video decoder 482 processes the mixed encoding information and outputs the depth information and the target image frame, and the depth image generator 483 restores the depth information and outputs a depth image frame.
  • FIG. 11(a) is a schematic structural diagram of an information processing system.
  • the information processing system includes an encoding device 6 and a decoding device 19; the encoding device 6 includes a depth information encoder 63 and a video encoder 64, and the decoding device 19 includes an image processor 191, a depth information decoder 192, a video decoder 193, and a depth image generator 194; the depth information encoder 63 sends the depth encoding information to the depth information decoder 192, and the video encoder 64 sends the video encoding information to the video decoder 193; the depth information decoder 192 decodes the depth encoding information and outputs the depth information; the video decoder 193 decodes the video encoding information and outputs the video frame; the depth image generator 194 processes the depth information to output a depth image frame; the image processor 191 uses the depth information to perform image processing on the video frame to output the target image frame.
  • FIG. 11(b) is a schematic structural diagram of an information processing system.
  • the information processing system includes an encoding device 6 and a decoding device 29; the encoding device 6 includes a depth information encoder 63 and a video encoder 64, and the decoding device 29 includes an image processor 291, a depth information decoder 292, and a video decoder 293.
  • the depth information decoder 292 includes a depth image generator 294; the depth information encoder 63 sends the depth encoding information to the depth information decoder 292, and the video encoder 64 sends the video encoding information to the video decoder 293; the depth information decoder 292 processes the depth encoding information and outputs depth image frames and depth information; the video decoder 293 decodes the video encoding information and outputs the video frame; the image processor 291 uses the depth information to perform image processing on the video frame and outputs the target image frame.
  • FIG. 11(c) is a schematic structural diagram of an information processing system.
  • the information processing system includes an encoding device 6 and a decoding device 39; the encoding device 6 includes a depth information encoder 63 and a video encoder 64, and the decoding device 39 includes a depth information decoder 392, a video decoder 393, and a depth image generator 394.
  • the video decoder 393 includes an image processor 391; the depth information encoder 63 sends the depth encoding information to the depth information decoder 392, and the video encoder 64 sends the video encoding information to the video decoder 393; the depth information decoder 392 decodes the depth encoding information and outputs the depth information; the video decoder 393 processes the video encoding information and the depth information and outputs the target image frame; the depth image generator 394 restores the depth information and outputs a depth image frame.
  • FIG. 11(d) is a schematic structural diagram of an information processing system.
  • the information processing system includes an encoding device 6 and a decoding device 49; the encoding device 6 includes a depth information encoder 63 and a video encoder 64, and the decoding device 49 includes a depth information decoder 492 and a video decoder 493.
  • the depth information decoder 492 includes a depth image generator 494, and the video decoder 493 includes an image processor 491.
  • the depth information encoder 63 sends the depth encoding information to the depth information decoder 492, and the video encoder 64 sends the video encoding information to the video decoder 493; the depth information decoder 492 processes the depth encoding information and outputs depth image frames and depth information; the video decoder 493 processes the video encoding information and the depth information and outputs the target image frame.
  • when the depth information encoder in the information processing system encodes the depth information into multiple pieces of depth encoding information, one depth information encoder may encode the multiple pieces of depth information to generate the multiple pieces of depth encoding information and write them into multiple code streams; or, multiple depth information encoders may encode the multiple pieces of depth information to generate the multiple pieces of depth encoding information and write them into multiple code streams or one code stream; or, when a depth image and redundant information are generated from the depth information, the same depth information encoder or multiple depth information encoders encode the depth image to obtain depth image encoding information and write it into one code stream, and then encode the redundant information to obtain redundant information encoding information and write it into another code stream.
  • correspondingly, one depth information decoder may parse multiple code streams, multiple depth information decoders may parse one code stream, or multiple depth information decoders may parse multiple code streams, which can be determined according to the actual situation and is not limited in this application.
  • the embodiments of this application can be provided as a method, a device, a system, or a computer program product. Therefore, this application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, and the like) that contain computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the embodiment of the application adopts the above-mentioned technical implementation scheme, directly uses the depth information for encoding, obtains the encoding information representing the depth information, and sends the encoded information to the decoding device.
  • the decoding device can decode the depth information and the video frame from the encoded information.
  • the decoding device can not only use the depth information to recover the depth image, but also use the depth information to perform image processing on the video frame, which improves the information utilization rate.

Abstract

The embodiments of the present application disclose an information processing method, an encoding device, a decoding device, a system, and a storage medium. The information processing method applied to the encoding device includes: collecting depth information and video frames; jointly or independently encoding the depth information and the video frames to obtain encoded information; and writing the encoded information into a code stream and sending the code stream to the decoding device, so that the decoding device performs image processing based on the encoded information.

Description

信息处理方法、编码装置、解码装置、系统及存储介质 技术领域
本申请实施例涉及图像处理技术,尤其涉及一种信息处理方法、编码装置、解码装置、系统及存储介质。
背景技术
目前,在传输视频信号时,为了提高传输速度,先利用编码器,对图像传感器采集到的二维图像和深度相机采集到的深度图像进行视频编码,形成视频编码信息,将视频编码信息发送至解码器,解码器从视频编码信息中解码得到二维图像和深度图像;可以知道,相关技术在编码端仅仅获取深度图像,对其进行编码和传输,进而在解码端利用深度图像对二维图像进行立体化,但是深度相机实际获得的信息量是远远大于深度图像所呈现的信息量,相关技术只编码传输了深度图像,降低了信息利用率。
发明内容
本申请提供一种信息处理方法、编码装置、解码装置、系统及存储介质,能够提高信息的利用率。
本申请实施例的技术方案可以如下实现:
本申请实施例提供一种信息处理方法,应用于编码装置,所述方法包括:
采集深度信息和视频帧;
对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息;
将所述编码信息写入码流,并将所述码流发送至解码装置,以使得所述解码装置基于所述编码信息进行图像处理。
上述方案中,所述采集深度信息和视频帧,包括:
在预设时长内,采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息;
将所述初始深度信息,作为所述深度信息。
上述方案中,所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,所述方法还包括:
对所述初始深度信息进行相位校准,得到相位信息;
将所述相位信息,作为所述深度信息。
上述方案中,所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,所述方法还包括:
对所述初始深度信息进行深度图像生成,得到冗余信息;所述冗余信息为生成深度图像的过程中产生的除了深度图像之外的其他信息;
将所述冗余信息,作为所述深度信息。
上述方案中,所述编码信息为混合编码信息;所述对所述深度信息和所述视频帧进行联合编码,得到编码信息,包括:
利用所述深度信息和所述视频帧的相关性,对所述深度信息和所述视频帧进行联合编码,得到所述混合编码信息;
或者,对所述视频帧进行编码,得到视频编码信息;对所述深度信息进行编码,得到深度编码信息;将所述深度编码信息合并至所述视频编码信息中的预设位置处,得到所述混合编码信息。
上述方案中,所述编码信息为深度编码信息和视频编码信息;所述对所述深度信息和所述视频帧进行联合编码,得到编码信息,包括:
对所述深度信息进行编码,得到所述深度编码信息;
对所述视频帧进行编码,得到所述视频编码信息。
上述方案中,所述对所述深度信息进行编码,得到深度编码信息;或者,所述对所述深度信息进行 编码,得到所述深度编码信息,包括:
对所述深度信息进行缩减处理,得到缩减后的深度信息;所述缩减后的深度信息的数据量小于所述深度信息的数据量;
对所述缩减后的深度信息进行编码,得到所述深度编码信息。
上述方案中,所述对所述深度信息进行缩减处理,得到缩减后的深度信息,包括:
从所述视频帧中确定部分视频帧,并从所述深度信息中确定与所述部分视频帧对应的部分深度信息;
或者,从所述视频帧中确定部分图像位置,并从所述深度信息中确定与所述部分图像位置对应的部分深度信息;
将所述部分深度信息,作为所述缩减后的深度信息。
上述方案中,所述对所述深度信息进行缩减处理,得到缩减后的深度信息,包括:
利用所述深度信息的相位相关性、所述深度信息的空间相关性、所述深度信息的时间相关性、预设深度范围或者所述深度信息的频域相关性,对所述深度信息进行冗余消除,得到消除后的深度信息;
将所述消除后的深度信息,作为所述缩减后的深度信息。
上述方案中,所述利用所述深度信息的相位相关性、所述深度信息的空间相关性、所述深度信息的时间相关性、预设深度范围或者所述深度信息的频域相关性,对所述深度信息进行冗余消除,得到消除后的深度信息,包括:
当所述深度信息为至少两个相位信息时,利用所述至少两个相位信息之间的相位相关性,对所述至少两个相位信息进行冗余消除,得到所述消除后的深度信息;
或者,当所述深度信息不是所述至少两个相位信息时,利用所述深度信息的空间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
或者,利用所述深度信息的时间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
或者,利用所述预设深度范围,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
或者,对所述深度信息进行频域转换,得到频域信息;利用所述频域相关性,对所述频域信息进行冗余消除,得到所述消除后的深度信息。
上述方案中,在所述对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息之后,所述方法还包括:
利用编码二进制数据之间的相关性,对所述编码信息消除比特冗余,得到消除后的编码信息;
将所述消除后的编码信息写入码流,并将所述码流发送至解码装置,以使得所述解码装置基于所述消除后的编码信息进行图像处理。
本申请实施例提供一种信息处理方法,应用于解码装置,所述方法包括:
当接收到携带有编码信息的码流时,对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧;
利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,并将所述目标图像帧合成视频。
上述方案中,所述利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,包括:
利用所述深度信息调整所述视频帧的景深,得到所述景深图像帧;
将所述景深图像帧,作为所述目标图像帧。
上述方案中,所述利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,包括:
当所述深度信息为相位信息时,利用所述相位信息,对所述视频帧进行去模糊,得到去模糊图像帧;
将所述去模糊图像帧,作为所述目标图像帧。
上述方案中,在所述对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧之后,所述方法还包括:
对所述深度信息进行恢复,生成深度图像帧。
本申请实施例提供一种编码装置,所述编码装置包括:深度信息模组、图像传感器和编码器;
所述深度信息模组,用于采集深度信息;
所述图像传感器,用于采集视频帧;
所述编码器,用于对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息;以及将所述编码信息写入码流,并将所述码流发送至解码装置,以使得所述解码装置基于所述编码信息进行图像处理。
上述方案中,所述深度信息模组包括深度信息传感器;
所述图像传感器,还用于在预设时长内,采集所述视频帧;
所述深度信息传感器,用于在所述预设时长内,通过飞行时间模组或双目视觉模组采集初始深度信息;以及将所述初始深度信息,作为所述深度信息。
上述方案中,所述深度信息模组,还用于在所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,对所述初始深度信息进行相位校准,得到相位信息;以及将所述相位信息,作为所述深度信息。
上述方案中,所述深度信息模组,还用于在所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,对所述初始深度信息进行深度图像生成,得到冗余信息;所述冗余信息为生成深度图像的过程中产生的除了深度图像之外的其他信息;以及将所述冗余信息,作为所述深度信息。
上述方案中,所述编码信息为混合编码信息;所述编码器包括视频编码器;
所述视频编码器,用于利用所述深度信息和所述视频帧的相关性,对所述深度信息和所述视频帧进行联合编码,得到所述混合编码信息;
或者,对所述视频帧进行编码,得到视频编码信息;对所述深度信息进行编码,得到深度编码信息;将所述深度编码信息合并至所述视频编码信息中的预设位置处,得到所述混合编码信息。
上述方案中,所述视频编码器,还用于对所述深度信息进行缩减处理,得到缩减后的深度信息;所述缩减后的深度信息的数据量小于所述深度信息的数据量;以及对所述缩减后的深度信息进行编码,得到所述深度编码信息。
上述方案中,所述视频编码器,还用于从所述视频帧中确定部分视频帧,并从所述深度信息中确定与所述部分视频帧对应的部分深度信息;
或者,从所述视频帧中确定部分图像位置,并从所述深度信息中确定与所述部分图像位置对应的部分深度信息;
以及将所述部分深度信息,作为所述缩减后的深度信息。
上述方案中,所述视频编码器,还用于利用所述深度信息的相位相关性、所述深度信息的空间相关性、所述深度信息的时间相关性、预设深度范围或者所述深度信息的频域相关性,对所述深度信息进行冗余消除,得到消除后的深度信息;以及将所述消除后的深度信息,作为所述缩减后的深度信息。
上述方案中,所述视频编码器,还用于当所述深度信息为至少两个相位信息时,利用所述至少两个相位信息之间的相位相关性,对所述至少两个相位信息进行冗余消除,得到所述消除后的深度信息;
或者,当所述深度信息不是所述至少两个相位信息时,利用所述深度信息的空间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
或者,利用所述深度信息的时间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
或者,利用所述预设深度范围,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
或者,对所述深度信息进行频域转换,得到频域信息;利用所述频域相关性,对所述频域信息进行冗余消除,得到所述消除后的深度信息。
上述方案中,所述编码信息为深度编码信息和视频编码信息;所述编码器包括深度信息编码器和视频编码器;其中,
所述深度信息编码器,用于对所述深度信息进行编码,得到所述深度编码信息;
所述视频编码器,用于对所述视频帧进行编码,得到所述视频编码信息。
上述方案中,所述深度信息编码器,还用于对所述深度信息进行缩减处理,得到缩减后的深度信息;所述缩减后的深度信息的数据量小于所述深度信息的数据量;以及对所述缩减后的深度信息进行编码,得到所述深度编码信息。
上述方案中,所述深度信息编码器,还用于从所述视频帧中确定部分视频帧,并从所述深度信息中确定与所述部分视频帧对应的部分深度信息;
或者,从所述视频帧中确定部分图像位置,并从所述深度信息中确定与所述部分图像位置对应的部分深度信息;
以及将所述部分深度信息,作为所述缩减后的深度信息。
上述方案中,所述深度信息编码器,还用于利用所述深度信息的相位相关性、所述深度信息的空间相关性、所述深度信息的时间相关性、预设深度范围或者所述深度信息的频域相关性,对所述深度信息进行冗余消除,得到消除后的深度信息;以及将所述消除后的深度信息,作为所述缩减后的深度信息。
上述方案中,所述深度信息编码器,还用于当所述深度信息为至少两个相位信息时,利用所述至少两个相位信息之间的相位相关性,对所述至少两个相位信息进行冗余消除,得到所述消除后的深度信息;
或者,当所述深度信息不是所述至少两个相位信息时,利用所述深度信息的空间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
或者,利用所述深度信息的时间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
或者,利用所述预设深度范围,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
或者,对所述深度信息进行频域转换,得到频域信息;利用所述频域相关性,对所述频域信息进行冗余消除,得到所述消除后的深度信息。
上述方案中,所述编码器,还用于在所述对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息之后,利用编码二进制数据之间的相关性,对所述编码信息消除比特冗余,得到消除后的编码信息;以及将所述消除后的编码信息写入码流,并将所述码流发送至解码装置,以使得所述解码装置基于所述消除后的编码信息进行图像处理。
本申请实施例提供一种解码装置,所述解码装置包括:图像处理器和解码器;
所述解码器,用于当接收到携带有编码信息的码流时,对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧;
所述图像处理器,用于利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,并将所述目标图像帧合成视频。
上述方案中,所述图像处理器,还用于利用所述深度信息调整所述视频帧的景深,得到所述景深图像帧;以及将所述景深图像帧,作为所述目标图像帧。
上述方案中,所述图像处理器,还用于当所述深度信息为相位信息时,利用所述相位信息,对所述视频帧进行去模糊,得到去模糊图像帧;以及将所述去模糊图像帧,作为所述目标图像帧。
上述方案中,所述解码装置还包括深度图像生成器;
所述深度图像生成器,用于在所述对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧之后,对所述深度信息进行恢复,生成深度图像帧。
上述方案中,所述解码器包括视频解码器,所述解码装置还包括深度图像生成器;
所述深度图像生成器和所述图像处理器独立于所述视频解码器,所述视频解码器连接所述深度图像生成器和所述图像处理器;或者,所述深度图像生成器和所述图像处理器集成于所述视频解码器中;或者,所述深度图像生成器集成于所述视频解码器中,所述图像处理器独立于所述视频解码器,所述视频解码器连接所述图像处理器;或者,所述图像处理器集成于所述视频解码器中,所述深度图像生成器独立于所述视频解码器,所述视频解码器连接所述深度图像生成器。
上述方案中,所述解码器包括深度信息解码器和视频解码器,所述解码装置还包括深度图像生成器;
所述深度图像生成器独立于所述深度信息解码器,所述图像处理器独立于所述视频解码器,所述深度信息解码器连接所述深度图像生成器和所述图像处理器,所述视频解码器连接所述图像处理器;或者,所述深度图像生成器集成于所述深度信息解码器中,所述图像处理器独立于所述视频解码器,所述深度信息解码器和所述视频解码器连接所述图像处理器;或者,所述深度图像生成器独立于所述深度信息解码器,所述图像处理器集成于所述视频解码器中,所述深度信息解码器连接所述深度图像生成器和所述视频解码器;或者,所述深度图像生成器集成于所述视频解码器中,所述图像处理器集成于所述深度信息解码器中,所述深度信息解码器连接所述视频解码器。
本申请实施例提供一种信息处理系统,所述系统包括:编码装置和解码装置,所述编码装置包括深度信息模组、图像传感器和编码器,所述解码装置包括图像处理器和解码器;
所述深度信息模组,用于采集深度信息;
所述图像传感器,用于采集视频帧;
所述编码器,用于对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息;以及将所述编码信息写入码流,并将所述码流发送至所述解码装置;
所述解码器,用于当接收到所述码流时,对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧;
所述图像处理器,用于利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,并将所述目标图像帧合成视频。
本申请实施例提供一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个第一处理器执行,以实现如上述任意一种应用于编码装置的信息处理方法。
本申请实施例提供一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个第二处理器执行,以实现如上述任意一种应用于解码装置的信息处理方法。
附图说明
图1为本申请实施例提供的一种应用于编码装置的信息处理方法的流程示意图;
图2为本申请实施例提供的另一种应用于编码装置的信息处理方法的流程示意图;
图3为本申请实施例提供的一种应用于解码装置的信息处理方法的流程示意图;
图4为本申请实施例提供的另一种应用于解码装置的信息处理方法的流程示意图;
图5为本申请实施例提供的一种应用于编码装置和解码装置的信息处理方法的流程示意图;
图6为本申请实施例提供的一种编码装置的结构示意图一;
图7为本申请实施例提供的一种编码装置的结构示意图二;
图8(a)为本申请实施例提供的一种解码装置的结构示意图一;
图8(b)为本申请实施例提供的一种解码装置的结构示意图二;
图8(c)为本申请实施例提供的一种解码装置的结构示意图三;
图8(d)为本申请实施例提供的一种解码装置的结构示意图四;
图9(a)为本申请实施例提供的一种解码装置的结构示意图五;
图9(b)为本申请实施例提供的一种解码装置的结构示意图六;
图9(c)为本申请实施例提供的一种解码装置的结构示意图七;
图9(d)为本申请实施例提供的一种解码装置的结构示意图八;
图10(a)为本申请实施例提供的一种信息处理系统的结构示意图一;
图10(b)为本申请实施例提供的一种信息处理系统的结构示意图二;
图10(c)为本申请实施例提供的一种信息处理系统的结构示意图三;
图10(d)为本申请实施例提供的一种信息处理系统的结构示意图四;
图11(a)为本申请实施例提供的一种信息处理系统的结构示意图五;
图11(b)为本申请实施例提供的一种信息处理系统的结构示意图六;
图11(c)为本申请实施例提供的一种信息处理系统的结构示意图七;
图11(d)为本申请实施例提供的一种信息处理系统的结构示意图八。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
可以理解的是,此处所描述的具体实施例仅仅用于解释相关申请,而非对该申请的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与有关申请相关的部分。
本申请实施例提供了一种信息处理方法,应用于编码装置中,如图1所示,信息处理方法包括:
S101、采集深度信息和视频帧;
编码装置在预设时长内,同时采集深度信息和视频帧;其中,视频帧是指在预设时长内采集的多帧图像,由多帧图像构成预设时长的视频。
需要说明的是,每一帧深度信息都对应视频帧中的一帧图像。
在一些实施例中,编码装置在预设时长内,采集视频帧,并通过飞行时间模组、双目视觉模组或者其他深度信息采集模块来采集初始深度信息;将采集到的初始深度信息作为深度信息。
编码装置利用图像传感器采集视频帧,与此同时,利用深度信息模组采集初始深度信息;将采集到的初始深度信息作为深度信息;其中,深度信息模组包括飞行时间(TOF,Time of Flight)模组或双目视觉模组。
示例性地,TOF模组为TOF摄像头,当利用TOF摄像头采集初始深度信息时,深度信息模组将原始电荷图像和/或传感器属性参数(如温度等),确定为初始深度信息,其中,原始电荷图像的获取过程可以为:在两种不同的发射信号频率下,通过控制积分时间,深度信息模组采样得到不同相位的共多组信号,进行光电转换后,再将这多组信号进行比特量化,以生成多张原始电荷图像。
示例性地,双目视觉模组为双目摄像头,当利用双目摄像头获取目标对象对应的初始深度信息时,深度信息模组利用双目摄像头拍摄得到的两幅图像,根据两幅图像的位姿计算得到视差等信息,深度信息模组将视差信息、摄像头参数等作为初始深度信息。
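As background for the binocular (stereo) case just described, depth can be recovered from the transmitted disparity and camera parameters via the standard pinhole-stereo relation Z = f·B/d; this is textbook geometry used here for illustration, not a formula defined by the patent.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m, eps=1e-6):
    """Standard pinhole stereo relation: Z = f * B / d.

    disparity_px:    HxW array of disparities in pixels (part of the depth info)
    focal_length_px: focal length in pixels (camera parameter)
    baseline_m:      distance between the two cameras in meters
    """
    # Guard against zero disparity before dividing.
    return focal_length_px * baseline_m / np.maximum(disparity_px, eps)
```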
在一些实施例中,编码装置在采集视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,对初始深度信息进行相位校准,得到相位信息;并将校准后的相位信息作为深度信息。
编码装置中的深度信息模组对初始深度信息进行相位校准,得到相位信息;或对初始深度信息进行 其他处理,生成其他信息,将其他信息作为深度信息。
示例性地,相位信息可以为深度信息模组获取到的散斑、激光条纹、格雷码、正弦条纹等,具体的相位信息可根据实际情况确定,本申请实施例对此不做限定。
在一些实施例中,编码装置在采集视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,对初始深度信息进行深度图像生成,得到冗余信息;冗余信息为生成深度图像的过程中产生的除了深度图像之外的其他信息;将冗余信息,作为深度信息。
编码装置中的深度信息模组利用初始深度信息生成深度图像,获取生成深度图像的过程中产生的除了深度图像的其他信息,即冗余信息。
示例性地,当利用TOF摄像头获取到原始电荷图像之后,深度信息模组将原始电荷图像生成2幅过程深度数据和1幅背景数据,并将这2幅过程深度数据和1幅背景数据作为目标对象的深度信息。
S102、对深度信息和视频帧进行联合编码或独立编码,得到编码信息;
编码装置中的编码器对深度信息和视频帧进行联合编码,得到表征与深度信息和视频帧对应的信息,即混合编码信息;或者,对深度信息和视频帧进行独立编码,得到表征深度信息和视频帧各自对应的信息,即深度编码信息和视频编码信息。
在一些实施例中,编码装置中的视频编码器,利用视频帧和深度信息的相关性,对深度信息中每个深度信息和视频帧中与之对应的一个视频帧进行联合编码,得到一个混合编码信息,进而得到由所有混合编码信息组成的混合编码信息。
在一些实施例中,编码信息为混合编码信息;编码装置利用所述深度信息和所述视频帧的相关性,对所述深度信息和所述视频帧进行联合编码,得到所述混合编码信息;或者,对视频帧进行编码,得到视频编码信息,对深度信息进行编码,得到深度编码信息,将深度编码信息合并至视频编码信息的预设位置处,得到混合编码信息。
编码装置中的编码器包括视频编码器,视频编码器利用深度信息的空间相关性或时间相关性等,对深度信息进行编码,得到深度编码信息;对视频帧进行编码,得到视频帧编码信息;再将深度编码信息和视频帧编码信息进行合并,得到混合编码信息。
在一些实施例中,预设位置可以为图像信息头、序列信息头、附加参数集或其他任意位置。
示例性地,编码装置中的视频编码器,对每一个深度信息进行编码,得到一个深度编码信息;再对与之对应的每一个视频帧进行编码,得到一个视频帧编码信息,然后,将这一个深度编码信息合并至这一个视频帧编码信息的图像信息头,得到一个混合编码信息;进而得到由所有混合编码信息组成的混合编码信息;其中,视频编码信息由所有视频帧编码信息组成。
示例性地,编码装置中的视频编码器,对深度信息进行编码,得到深度编码信息;对视频帧进行编码,得到视频编码信息;将深度编码信息合并至视频编码信息的序列信息头,得到混合编码信息。
需要说明的是,由于包含深度编码信息的混合编码信息具有可解耦性或独立性,采用视频图像的标准编解码协议的解码装置接收到该混合编码信息后,可以从该混合编码信息中仅提取出视频帧,不提取深度信息;也可以从该混合编码信息中仅提取出深度信息,不提取视频帧;本申请实施例不做限制。
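To make the "merge at a preset position" idea concrete, here is a minimal byte-level sketch; the tagged, length-prefixed container below is an assumption made for illustration (a real implementation would typically use the codec's own syntax, such as an SEI-like user-data field), not a format defined by the patent.

```python
import struct

DEPTH_PAYLOAD_TAG = 0xD17A  # hypothetical tag marking an embedded depth payload

def merge_depth_into_video_info(video_coded: bytes, depth_coded: bytes) -> bytes:
    """Prepend a tagged, length-prefixed depth payload to the coded video data.

    A decoder that does not know the tag can skip the depth payload and still
    recover the video; a depth-aware decoder extracts both parts.
    """
    header = struct.pack(">HI", DEPTH_PAYLOAD_TAG, len(depth_coded))
    return header + depth_coded + video_coded

def split_mixed_info(mixed: bytes):
    tag, depth_len = struct.unpack(">HI", mixed[:6])
    assert tag == DEPTH_PAYLOAD_TAG
    depth_coded = mixed[6:6 + depth_len]
    video_coded = mixed[6 + depth_len:]
    return depth_coded, video_coded
```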
在一些实施例中,编码信息为深度编码信息和视频编码信息;编码装置对深度信息进行编码,得到深度编码信息;对视频帧进行编码,得到视频编码信息。
编码装置中的编码器包括深度信息编码器和视频编码器,深度信息编码器利用深度信息的空间相关性或时间相关性等,对深度信息进行编码,得到深度编码信息;视频编码器对视频帧进行编码,得到视频编码信息。
具体地,视频编码器采用视频编解码协议,对视频帧进行编码,得到视频编码信息;视频编解码协议可以为H.264、H.265、H.266、VP9或AV1等。
具体地,深度信息编码器采用行业标准或特定组织的特定标准,对深度信息进行编码,得到深度编码信息。
在一些实施例中,编码装置对深度信息进行缩减处理,得到缩减后的深度信息;缩减后的深度信息的数据量小于深度信息的数据量;对缩减后的深度信息进行编码,得到深度编码信息。
编码装置中的编码器对深度信息进行缩减处理,使得缩减后的深度信息的数据量小于深度信息的数据量,减小了深度信息的编码工作量。
在一些实施例中,编码装置从视频帧中确定部分视频帧,并从深度信息中确定与部分视频帧对应的部分深度信息;或者,从视频帧中确定部分图像位置,并从深度信息中确定与部分图像位置对应的部分深度信息;将部分深度信息,作为深度编码信息。
编码装置可以对所有深度信息都进行编码;或者,仅对视频帧中的部分视频帧对应的深度信息进行编码,不对视频帧中的非部分视频帧对应的深度信息进行编码;或者,仅对视频帧中每个视频帧的部分 图像位置对应的深度信息进行编码,不对视频帧中每个视频帧的非部分图像位置对应的深度信息进行编码;本申请实施例不做限制。
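A small illustrative sketch of the two reduction options just described — keeping depth only for every N-th frame, or only for part of the image positions; frame_stride and roi are assumed parameters, not values from the patent.

```python
def reduce_by_frame_subset(depth_maps, frame_stride=2):
    """Keep depth only for a subset of video frames (every `frame_stride`-th one)."""
    return {i: d for i, d in enumerate(depth_maps) if i % frame_stride == 0}

def reduce_by_image_region(depth_map, roi):
    """Keep depth only for part of the image positions.

    depth_map: HxW array-like (e.g. a numpy array)
    roi:       (top, left, height, width), e.g. a detected foreground region
    """
    top, left, h, w = roi
    return depth_map[top:top + h, left:left + w]
```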
在一些实施例中,编码装置利用深度信息的相位相关性、深度信息的空间相关性、深度信息的时间相关性、预设深度范围或者深度信息的频域相关性,对深度信息进行冗余消除,得到消除后的深度信息;将消除后的深度信息,作为缩减后的深度信息。
编码装置为了压缩编码信息的大小,在对深度信息进行编码的过程中,执行消除冗余的操作,再对消除后的深度信息进行编码,得到深度编码信息。
示例性地,编码装置中的深度信息模组确定深度信息为至少两个相位信息时,利用至少两个相位信息之间的相位相关性,对至少两个相位信息进行冗余消除,得到消除后的深度信息;
或者,确定深度信息不是至少两个相位信息时,利用深度信息的空间相关性,对深度信息进行冗余消除,得到消除后的深度信息;
或者,利用深度信息的时间相关性,对深度信息进行冗余消除,得到消除后的深度信息;
或者,利用预设深度范围,对深度信息进行冗余消除,得到消除后的深度信息;
或者,对深度信息进行频域转换,得到频域信息;利用频域相关性,对频域信息进行冗余消除,得到消除后的深度信息。
需要说明的是,预设深度范围为深度信息传感器能够采集到深度信息的范围。
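One possible (purely illustrative) combination of two of the listed redundancy-elimination tools — clipping to the preset depth range and temporal delta coding against the previous depth frame — is sketched below; the quantization step size is an assumption.

```python
import numpy as np

def eliminate_redundancy(depth_frames, depth_min, depth_max, step=0.01):
    """Clip to the sensor's preset depth range, quantize, then delta-code in time."""
    reduced = []
    prev = None
    for d in depth_frames:
        d = np.clip(d, depth_min, depth_max)                    # preset depth range
        q = np.round((d - depth_min) / step).astype(np.int32)   # coarse quantization
        if prev is None:
            reduced.append(("intra", q))                        # first frame sent as-is
        else:
            reduced.append(("delta", q - prev))                 # exploit temporal correlation
        prev = q
    return reduced
```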
在一些实施例中,编码装置从至少一个视点,采集深度信息和视频帧;从至少一个视点中确定间隔视点,将与间隔视点对应的深度信息,作为间隔深度信息;对间隔深度信息和视频帧进行联合编码或独立编码,得到间隔编码信息,并将间隔编码信息发送至解码装置,以使得解码装置基于间隔编码信息进行图像处理;其中,视点表征拍摄角度。
编码装置针对多个视点,考虑到在同一时刻,从同一个场景的多个视点采集到的多个深度信息,如相位信息或电荷图像,存在很强的相关性,为了减少传输的编码信息,可以只对多个视点中的间隔视点对应的深度信息进行编码并发送;解码装置得到间隔视点的深度信息后,可以用多个视点中的间隔视点的深度信息,生成多个视点中除了间隔视点的其他视点的深度信息。
示例性地,对于3维高性能视频编码(3D HEVC,3Dimension High Efficiency Video Coding),编码装置往往采集多个视点的深度信息和视频帧,可以对多个视点中间隔视点的深度信息和多个视点的视频帧进行独立编码或联合编码,得到间隔编码信息,间隔编码信息为与间隔视点的深度信息和多个视点的视频帧对应的信息,或者为间隔视点的深度信息对应的信息和多个视点的视频帧对应的信息。
示例性地,针对同一个场景的3个视点,3个视点中的间隔视点为左右两个视点,3个视点中的其他视点为中间视点。
S103、将编码信息写入码流,并将码流发送至解码装置,以使得解码装置基于编码信息进行图像处理。
编码装置将编码信息写入码流,将该码流发送至解码装置。
示例性地,编码装置中的视频编码器将混合编码信息写入混合码流,将混合码流发送至解码装置。
示例性地,编码装置中的视频编码器将视频编码信息写入视频编码码流,并将视频编码码流发送至解码装置;编码装置中的深度信息编码器将深度编码信息写入深度编码信息码流,并将深度编码信息码流发送至解码装置。
在一些实施例中,如图2所示的一种信息处理方法的流程图,在步骤S102之后,信息处理方法还包括:
S201、利用编码二进制数据之间的相关性,对编码信息消除比特冗余,得到消除后的编码信息;
编码装置为了压缩编码信息的大小,在得到编码信息后,再执行比特冗余的消除操作,得到消除后的编码信息。
示例性地,编码装置中的深度信息编码器在得到深度编码信息后,对深度编码信息消除比特冗余,得到消除后的深度编码信息;编码装置中的视频编码器在得到视频编码信息后,对视频编码信息消除比特冗余,得到消除后的视频编码信息;消除后的深度编码信息和消除后的视频编码信息就是消除后的编码信息。
示例性地,编码装置中的视频编码器在得到混合编码信息后,对混合编码信息消除比特冗余,得到消除后的编码信息。
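Bit-redundancy removal over the already-encoded bytes amounts to an extra entropy-coding pass; the sketch below uses zlib purely as a stand-in for whichever entropy coder an implementation would actually choose — the patent does not mandate any particular coder.

```python
import zlib

def remove_bit_redundancy(coded_info: bytes) -> bytes:
    """Exploit statistical correlation between coded bytes (entropy-coding stand-in)."""
    return zlib.compress(coded_info, 9)

def restore_coded_info(compressed: bytes) -> bytes:
    """Inverse operation performed before (or as part of) decoding."""
    return zlib.decompress(compressed)
```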
S202、将消除后的编码信息写入码流,并将码流发送至解码装置,以使得解码装置基于消除后的编码信息进行图像处理。
编码装置中的视频编码器将消除后的编码信息写入混合码流,将混合码流发送至解码装置;或者,编码装置中的视频编码器将消除后的视频编码信息写入视频编码码流,并将视频编码码流发送至解码装 置;编码装置中的深度信息编码器将消除后的深度编码信息写入深度编码信息码流,并将深度编码信息码流发送至解码装置。
可以理解的是,编码装置直接采用深度信息进行编码,得到表征深度信息的编码信息,将该编码信息发送给解码装置,如此,解码装置可以从编码信息中解码出深度信息和视频帧,进而,解码装置不仅可以利用深度信息,恢复得到深度图像,还可以利用深度信息,对视频帧进行图像处理,提高了信息利用率。
本申请实施例还提供了一种信息处理方法,应用于解码装置中,如图3所示,信息处理方法包括:
S301、当接收到携带有编码信息的码流时,对码流进行联合解码或独立解码,得到深度信息和视频帧;
解码装置中的解码器接收到码流后,对码流执行联合解码或独立解码,得到深度信息和视频帧。
在一些实施例中,解码装置还可能接收到携带有消除后的编码信息的码流,对携带有消除后的编码信息的码流进行联合解码或独立解码,得到深度信息和视频帧。
在一些实施例中,码流为混合编码信息码流;解码装置对混合编码信息码流进行解码,得到视频帧和深度信息。
解码装置中的解码器包括视频解码器,视频解码器对混合编码信息进行解码,得到深度信息和视频帧。
在一些实施例中,码流为视频编码信息码流和深度编码信息码流;解码装置对视频编码信息码流进行解码,得到视频帧;对深度编码信息码流进行解码,得到深度信息。
解码装置中的解码器包括视频解码器和深度信息解码器,视频解码器对视频编码信息进行解码,得到视频帧;深度信息解码器对深度编码信息进行解码,得到深度信息。
S302、利用深度信息,对视频帧进行图像处理,得到目标图像帧,并将目标图像帧合成视频。
解码装置可以在深度辅助功能开启时,利用深度信息中每个深度信息,对视频帧中与之对应的每个视频帧进行图像处理,得到一个目标图像帧,进而得到所有的目标图像帧,并由所有目标图像帧合成视频,显示视频。
在一些实施例中,解码装置按照默认解码要求,利用深度信息,对视频帧进行相应地处理;或者,接收解码指令,响应于解码指令,利用深度信息,对视频帧进行相应地处理;其中,解码指令可以为景深设置指令、图像增强指令或背景虚化指令等。
在一些实施例中,解码装置利用深度信息调整视频帧的景深,得到景深图像;将景深图像帧,作为目标图像帧。
解码装置中的图像处理器接收到景深设置指令时,响应于景深设置指令,利用深度信息中每个深度信息,对视频帧中与之对应的每个视频帧进行景深调整,得到景深图像。
需要说明的是,这里可以直接利用深度信息作用于视频帧,生成具有景深的图像,而不需要将利用深度信息生成的深度图像和视频帧进行叠加,生成具有景深的图像。
在一些实施例中,解码装置在深度信息为相位信息时,利用相位信息,对视频帧进行去模糊,得到去模糊图像;将去模糊图像帧,作为目标图像帧。
解码装置中的图像处理器收到图像增强指令时,响应于图像增强指令,解析每个相位信息,得到解析结果,利用解析结果,对与之对应的每个视频帧进行去模糊,得到去模糊图像。
在一些实施例中,解码装置在深度信息为相位信息时,利用相位信息,对视频帧进行虚化前景或后景处理,得到虚化的图像帧;将虚化的图像帧,作为目标图像帧。
解码装置中的图像处理器接收到背景虚化指令、且确定深度信息为相位信息时,响应于背景虚化指令,利用深度信息中每个深度信息,对视频帧中与之对应的每个视频帧进行虚化前景或后景处理,得到虚化的图像。
在一些实施例中,解码装置在深度信息为电荷信息时,利用电荷信息判断拍摄场景中的噪声和外部可见光,从而有助于进行视频帧的去噪和白平衡调节,生成质量更高的视频展示给用户,提高用户的图像视频体验。
在一些实施例中,解码装置对间隔编码信息进行独立解码或联合解码,得到间隔视点的深度信息和至少一个视点的视频帧;对间隔视点的深度信息进行插值,得到至少一个视点中除了间隔视点的其他视点的深度信息;利用间隔视点的深度信息、其他视点的深度信息,对至少一个视点的视频帧进行图像处理,得到目标图像帧。
示例性地,至少一个视点为针对同一个场景的3个视点,3个视点中的间隔视点为左右两个视点,可以对左右两个视点的深度信息进行插值,得到中间视点的深度信息。
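For the three-viewpoint example above, a minimal decoder-side sketch interpolates the middle view's depth from the two transmitted interval (left/right) views; real multi-view systems would normally warp by scene geometry, so the uniform per-pixel weighting here is only an assumption for illustration.

```python
import numpy as np

def interpolate_middle_depth(depth_left, depth_right, weight=0.5):
    """Estimate the middle viewpoint's depth from the two transmitted interval views.

    weight=0.5 gives a plain average; unequal weights could account for the
    middle camera sitting closer to one side.
    """
    depth_left = np.asarray(depth_left, dtype=np.float64)
    depth_right = np.asarray(depth_right, dtype=np.float64)
    return weight * depth_left + (1.0 - weight) * depth_right
```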
在一些实施例中,如图4所示的一种信息处理方法的流程图,在步骤S301之后,信息处理方法还 包括:
S303、对深度信息进行恢复,生成深度图像帧。
解码装置中的深度图像生成器对深度信息中每个深度信息进行处理,得到深度图像帧。
在一些实施例中,当深度信息为相位信息时,对于因运动造成的图像模糊,利用预设时长内的多个时间点下采集的多个相位信息,进行运动估计,来恢复得到一幅深度图像,该幅深度图像更加清晰;其中,一幅深度图像是多个时间点中的一个时间点对应的深度图像;多个时间点可以为连续的时间点。
可以理解的是,对于需要不同时间点下采集的多个相位信息来恢复一幅深度图像的情况,本申请实施例由于是对预设时长内的相位信息进行编码发送,不是对深度图像进行编码发送,解码装置可以从码流中解码得到预设时长内的相位信息,再利用从预设时长内的相位信息中获取多个时间点对应的多个相位信息,实现对一幅深度图像的恢复。
在一些实施例中,信息处理系统包括编码装置和解码装置,应用于信息处理系统的信息处理方法,如图5所示的一种信息处理方法的流程图,信息处理方法包括:
S401、编码装置采集深度信息和视频帧;
S402、编码装置对深度信息和视频帧进行联合编码或独立编码,得到编码信息;编码信息表征与深度信息和视频帧对应的信息、或者表征深度信息和视频帧各自对应的信息;
S403、编码装置将编码信息写入码流,并将码流发送至解码装置;
S404、解码装置当接收到携带有编码信息的码流时,对码流进行联合解码或独立解码,得到深度信息和视频帧;
S405、解码装置利用深度信息,对视频帧进行图像处理,得到目标图像帧,并将目标图像帧合成视频。
可以理解的是,解码装置接收表征深度信息的编码信息,如此,解码装置可以从编码信息中解码出深度信息和视频帧,进而,解码装置不仅可以利用深度信息,恢复得到深度图像,还可以利用深度信息,对视频帧进行景深调整和去模糊等优化处理,提高了信息利用率,并且优化处理后得到的目标图像帧相较于视频帧,图像效果更佳,也就是说,还提高了图像质量。
本申请实施例还提供了一种编码装置,如图6所示,编码装置6包括:深度信息模组61、图像传感器62和编码器60;
深度信息模组61,用于采集深度信息;
图像传感器62,用于采集视频帧;
编码器60,用于对深度信息和视频帧进行联合编码或独立编码,得到编码信息;以及将编码信息写入码流,并将码流发送至解码装置,以使得解码装置基于编码信息进行图像处理。
在一些实施例中,深度信息模组61包括深度信息传感器611;
图像传感器62,还用于在预设时长内,采集视频帧;
深度信息传感器611,用于在所述预设时长内,通过飞行时间模组或双目视觉模组采集初始深度信息;以及将所述初始深度信息,作为所述深度信息。
在一些实施例中,深度信息模组61,还用于在所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,对所述初始深度信息进行相位校准,得到相位信息;以及将所述相位信息,作为所述深度信息。
在一些实施例中,深度信息模组61,还用于在所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,对所述初始深度信息进行深度图像生成,得到冗余信息;所述冗余信息为生成深度图像的过程中产生的除了深度图像之外的其他信息;以及将所述冗余信息,作为所述深度信息。
在一些实施例中,编码信息为深度编码信息和视频编码信息;编码器60包括深度信息编码器63和视频编码器64;其中,
深度信息编码器63,用于对深度信息进行编码,得到深度编码信息;
视频编码器64,用于对视频帧进行编码,得到视频编码信息。
在一些实施例中,深度信息编码器63,还用于对深度信息进行缩减处理,得到缩减后的深度信息;缩减后的深度信息的数据量小于深度信息的数据量;以及对缩减后的深度信息进行编码,得到深度编码信息。
在一些实施例中,深度信息编码器63,还用于从视频帧中确定部分视频帧,并从深度信息中确定与部分视频帧对应的部分深度信息;
或者,从视频帧中确定部分图像位置,并从深度信息中确定与部分图像位置对应的部分深度信息;
以及将部分深度信息,作为缩减后的深度信息。
在一些实施例中,深度信息编码器63,还用于利用深度信息的相位相关性、深度信息的空间相关性、深度信息的时间相关性、预设深度范围或者深度信息的频域相关性,对深度信息进行冗余消除,得到消除后的深度信息;以及将消除后的深度信息,作为缩减后的深度信息。
在一些实施例中,深度信息编码器63,还用于当深度信息为至少两个相位信息时,利用至少两个相位信息之间的相位相关性,对至少两个相位信息进行冗余消除,得到消除后的深度信息;
或者,当深度信息不是至少两个相位信息时,利用深度信息的空间相关性,对深度信息进行冗余消除,得到消除后的深度信息;
或者,利用深度信息的时间相关性,对深度信息进行冗余消除,得到消除后的深度信息;
或者,利用预设深度范围,对深度信息进行冗余消除,得到消除后的深度信息;
或者,对深度信息进行频域转换,得到频域信息;利用频域相关性,对频域信息进行冗余消除,得到消除后的深度信息。
在一些实施例中,编码信息为混合编码信息;如图7所示的另一种编码装置的结构示意图,编码器60包括视频编码器71;
视频编码器71,用于利用深度信息和视频帧的相关性,对深度信息和视频帧进行联合编码,得到混合编码信息;
或者,对视频帧进行编码,得到视频编码信息;对深度信息进行编码,得到深度编码信息;将深度编码信息合并至视频编码信息中的预设位置处,得到混合编码信息。
在一些实施例中,视频编码器71,还用于对深度信息进行缩减处理,得到缩减后的深度信息;缩减后的深度信息的数据量小于深度信息的数据量;以及对缩减后的深度信息进行编码,得到深度编码信息。
在一些实施例中,视频编码器71,还用于从视频帧中确定部分视频帧,并从深度信息中确定与部分视频帧对应的部分深度信息;
或者,从视频帧中确定部分图像位置,并从深度信息中确定与部分图像位置对应的部分深度信息;
以及将部分深度信息,作为缩减后的深度信息。
在一些实施例中,视频编码器71,还用于利用深度信息的相位相关性、深度信息的空间相关性、深度信息的时间相关性、预设深度范围或者深度信息的频域相关性,对深度信息进行冗余消除,得到消除后的深度信息;以及将消除后的深度信息,作为缩减后的深度信息。
在一些实施例中,视频编码器71,还用于当深度信息为至少两个相位信息时,利用至少两个相位信息之间的相位相关性,对至少两个相位信息进行冗余消除,得到消除后的深度信息;
或者,当深度信息不是至少两个相位信息时,利用深度信息的空间相关性,对深度信息进行冗余消除,得到消除后的深度信息;
或者,利用深度信息的时间相关性,对深度信息进行冗余消除,得到消除后的深度信息;
或者,利用预设深度范围,对深度信息进行冗余消除,得到消除后的深度信息;
或者,对深度信息进行频域转换,得到频域信息;利用频域相关性,对频域信息进行冗余消除,得到消除后的深度信息。
在一些实施例中,编码器60,还用于在对深度信息和视频帧进行联合编码或独立编码,得到编码信息之后,利用编码二进制数据之间的相关性,对编码信息消除比特冗余,得到消除后的编码信息;以及将消除后的编码信息写入码流,并将码流发送至解码装置,以使得解码装置基于消除后的编码信息进行图像处理。
本申请实施例提供了一种计算机可读存储介质,应用于编码装置,计算机可读存储介质存储有一个或者多个程序,一个或者多个程序可被一个或者多个第一处理器执行,程序被第一处理器执行时实现如应用于编码装置的信息处理方法。
本申请实施例还提供了一种解码装置,解码装置包括:图像处理器和解码器;
解码器,用于当接收到携带有编码信息的码流时,对码流进行联合解码或独立解码,得到深度信息和视频帧;
图像处理器,用于利用深度信息,对视频帧进行图像处理,得到目标图像帧,并将目标图像帧合成视频。
在一些实施例中,码流为视频编码信息码流和深度编码信息码流;解码器包括视频解码器和深度信息解码器;
视频解码器,用于对视频编码信息码流进行解码,得到视频帧;
深度信息解码器,用于对深度编码信息码流进行解码,得到深度信息。
在一些实施例中,码流为混合编码信息码流;解码器包括视频解码器;
视频解码器,用于对混合编码信息码流进行解码,得到视频帧和深度信息。
在一些实施例中,图像处理器,还用于利用深度信息调整视频帧的景深,得到景深图像帧;以及将景深图像帧,作为目标图像帧。
在一些实施例中,图像处理器,还用于当深度信息为相位信息时,利用相位信息,对视频帧进行去模糊,得到去模糊图像帧;以及将去模糊图像帧,作为目标图像帧。
在一些实施例中,解码装置还包括深度图像生成器;
深度图像生成器,用于在对码流进行联合解码或独立解码,得到深度信息和视频帧之后,对深度信息进行恢复,生成深度图像帧。
在一些实施例中,解码器包括视频解码器,解码装置还包括深度图像生成器;
深度图像生成器和图像处理器独立于视频解码器,视频解码器连接深度图像生成器和图像处理器;或者,深度图像生成器和图像处理器集成于视频解码器中;或者,深度图像生成器集成于视频解码器中,图像处理器独立于视频解码器,视频解码器连接图像处理器;或者,图像处理器集成于视频解码器中,深度图像生成器独立于视频解码器,视频解码器连接深度图像生成器。
示例性地,如图8(a)所示的一种解码装置的结构示意图,解码装置18包括图像处理器181,还包括视频解码器182和深度图像生成器183;深度图像生成器183和图像处理器181都独立于视频解码器182,视频解码器182连接深度图像生成器183和图像处理器181;其中,视频解码器182对混合编码信息进行处理,输出深度信息和视频帧,视频解码器182将深度信息传输至深度图像生成器183,深度图像生成器183对深度信息进行恢复,输出深度图像帧;视频解码器182将视频帧和深度信息送入图像处理器181,图像处理器181利用深度信息对视频帧进行图像处理,输出目标图像帧。
示例性地,如图8(b)所示的一种解码装置的结构示意图,解码装置28包括图像处理器281,还包括视频解码器282和深度图像生成器283;深度图像生成器283和图像处理器281都集成于视频解码器282中;其中,视频解码器282对混合编码信息进行处理,直接输出深度图像帧和/或目标图像帧。
示例性地,如图8(c)所示的一种解码装置的结构示意图,解码装置38包括图像处理器381,还包括视频解码器382和深度图像生成器383;深度图像生成器383集成于视频解码器382中,图像处理器381独立于视频解码器382,视频解码器382连接图像处理器381;其中,视频解码器382对混合编码信息进行处理,输出深度图像帧、深度信息和视频帧,视频解码器382再将视频帧和深度信息送入图像处理器381;图像处理器381利用深度信息对视频帧进行图像处理,输出目标图像帧。
示例性地,如图8(d)所示的一种解码装置的结构示意图,解码装置48包括图像处理器481,还包括视频解码器482和深度图像生成器483;图像处理器481集成于视频解码器482中,深度图像生成器483独立于视频解码器482,视频解码器482连接深度图像生成器483;其中,视频解码器482对混合编码信息进行处理,输出深度信息和目标图像帧,视频解码器482再将深度信息送入深度图像生成器483;深度图像生成器483对深度信息进行恢复,输出深度图像帧。
在一些实施例中,解码器包括深度信息解码器和视频解码器,解码装置还包括深度图像生成器;
深度图像生成器独立于深度信息解码器,图像处理器独立于视频解码器,深度信息解码器连接深度图像生成器和图像处理器,视频解码器连接图像处理器;或者,深度图像生成器集成于深度信息解码器中,图像处理器独立于视频解码器,深度信息解码器和视频解码器连接图像处理器;或者,深度图像生成器独立于深度信息解码器,图像处理器集成于视频解码器中,深度信息解码器连接深度图像生成器和视频解码器;或者,深度图像生成器集成于视频解码器中,图像处理器集成于深度信息解码器中,深度信息解码器连接视频解码器。
示例性地,如图9(a)所示的一种解码装置的结构示意图,解码装置19包括图像处理器191,还包括深度信息解码器192、视频解码器193和深度图像生成器194;深度图像生成器194独立于深度信息解码器192,图像处理器191独立于视频解码器193,深度信息解码器192连接深度图像生成器194和图像处理器191,视频解码器193连接图像处理器191;其中,视频解码器193对视频编码信息进行处理,输出视频帧,深度信息解码器192对深度编码信息进行处理,输出深度信息;视频解码器193将视频帧传输至图像处理器191,深度信息解码器192将深度信息传输至深度图像生成器194和图像处理器191,深度图像生成器194输出深度图像帧,图像处理器191输出目标图像帧。
示例性地,如图9(b)所示的一种解码装置的结构示意图,解码装置29包括图像处理器291,还包括深度信息解码器292、视频解码器293和深度图像生成器294;深度图像生成器294集成于深度信息解码器292中,图像处理器291独立于视频解码器293,深度信息解码器292和视频解码器293连接图像处理器291;其中,视频解码器293对视频编码信息进行处理,输出视频帧,深度信息解码器292对深度编码信息进行处理,输出深度信息和深度图像帧;视频解码器293将视频帧传输至图像处理器291,深度信息解码器292将深度信息传输至图像处理器291,图像处理器291输出目标图像帧。
示例性地,如图9(c)所示的一种解码装置的结构示意图,解码装置39包括图像处理器391,还包括深度信息解码器392、视频解码器393和深度图像生成器394;深度图像生成器394独立于深度信息解码器392,图像处理器391集成于视频解码器393中,深度信息解码器392连接深度图像生成器394和视频解码器393;其中,深度信息解码器392对深度编码信息进行处理,输出深度信息;深度信息解码器392将深度信息传输至深度图像生成器394和视频解码器393,深度图像生成器394输出深度图像帧,视频解码器393基于视频编码信息和深度信息,输出目标图像帧。
示例性地,如图9(d)所示的一种解码装置的结构示意图,解码装置49包括图像处理器491,还包括深度信息解码器492、视频解码器493和深度图像生成器494;深度图像生成器494集成于深度信息解码器492中,图像处理器491集成于视频解码器493中,深度信息解码器492连接视频解码器493;其中,深度信息解码器492对深度编码信息进行处理,输出深度信息和深度图像帧;深度信息解码器492将深度信息传输至视频解码器493,视频解码器493基于视频编码信息和深度信息,输出目标图像帧。
本申请实施例提供了一种计算机可读存储介质,应用于解码装置,计算机可读存储介质存储有一个或者多个程序,一个或者多个程序可被一个或者多个第二处理器执行,程序被第二处理器执行时实现如应用于解码装置的信息处理方法。
本申请实施例还提供了一种信息处理系统,信息处理系统包括:编码装置和解码装置,编码装置包括深度信息模组、图像传感器和编码器,解码装置包括图像处理器和解码器;
深度信息模组,用于采集深度信息;
图像传感器,用于采集视频帧;
编码器,用于对深度信息和视频帧进行联合编码或独立编码,得到编码信息;以及将编码信息写入码流,并将码流发送至解码装置;
解码器,用于当接收到码流时,对码流进行联合解码或独立解码,得到深度信息和视频帧;
图像处理器,用于利用深度信息,对视频帧进行图像处理,得到目标图像帧,并将目标图像帧合成视频。
示例性地,如图10(a)所示的一种信息处理系统的结构示意图,信息处理系统包括编码装置7和解码装置18;编码装置7包括视频编码器71,解码装置18包括图像处理器181、视频解码器182和深度图像生成器183;其中,视频编码器71将混合编码信息发送至视频解码器182;视频解码器182对该混合编码信息进行处理,输出深度信息和视频帧;深度图像生成器183对该深度信息进行恢复,输出深度图像帧;图像处理器181利用深度信息对该视频帧进行图像处理,输出目标图像帧。
示例性地,如图10(b)所示的一种信息处理系统的结构示意图,信息处理系统包括编码装置7和解码装置28;编码装置7包括视频编码器71,解码装置28包括视频解码器282,视频解码器282包括深度图像生成器283和图像处理器281;其中,视频编码器71将混合编码信息发送至视频解码器282;视频解码器282对该混合编码信息进行处理,直接输出深度图像帧和目标图像帧。
示例性地,如图10(c)所示的一种信息处理系统的结构示意图,信息处理系统包括编码装置7和解码装置38;编码装置7包括视频编码器71,解码装置38包括图像处理器381和视频解码器382,视频解码器382包括深度图像生成器383;其中,视频编码器71将混合编码信息发送至视频解码器382;视频解码器382对该混合编码信息进行处理,输出深度图像帧、深度信息和视频帧;图像处理器381利用该深度信息对该视频帧进行图像处理,输出目标图像帧。
示例性地,如图10(d)所示的一种信息处理系统的结构示意图,信息处理系统包括编码装置7和解码装置48;编码装置7包括视频编码器71,解码装置48包括视频解码器482和深度图像生成器483,视频解码器482包括图像处理器481;其中,视频编码器71将混合编码信息发送至视频解码器482;视频解码器482对该混合编码信息进行处理,输出深度信息和目标图像帧,深度图像生成器483对该深度信息进行恢复,输出深度图像帧。
示例性地,如图11(a)所示的一种信息处理系统的结构示意图,信息处理系统包括编码装置6和解码装置19;编码装置6包括深度信息编码器63和视频编码器64;解码装置19包括图像处理器191、深度信息解码器192、视频解码器193和深度图像生成器194;其中,深度信息编码器63将深度编码信息发送至深度信息解码器192,视频编码器64将视频编码信息发送至视频解码器193;深度信息解码器192对该深度编码信息进行解码,输出深度信息;视频解码器193对该视频编码信息进行解码,输出视频帧;深度图像生成器194对该深度信息进行处理,输出深度图像帧;图像处理器191利用该深度信息对该视频帧进行图像处理,输出目标图像帧。
示例性地,如图11(b)所示的一种信息处理系统的结构示意图,信息处理系统包括编码装置6和解码装置29,编码装置6包括深度信息编码器63和视频编码器64;解码装置29包括图像处理器291、 深度信息解码器292和视频解码器293,深度信息解码器292包括深度图像生成器294;其中,深度信息编码器63将深度编码信息发送至深度信息解码器292,视频编码器64将视频编码信息发送至视频解码器293;深度信息解码器292对该深度编码信息进行处理,输出深度图像帧和深度信息;视频解码器293对该视频编码信息进行解码,输出视频帧;图像处理器291利用深度信息对视频帧进行图像处理,输出目标图像帧。
示例性地,如图11(c)所示的一种信息处理系统的结构示意图,信息处理系统包括编码装置6和解码装置39,编码装置6包括深度信息编码器63和视频编码器64;解码装置39包括深度信息解码器392、视频解码器393和深度图像生成器394,视频解码器393包括图像处理器391;其中,深度信息编码器63将深度编码信息发送至深度信息解码器392,视频编码器64将视频编码信息发送至视频解码器393;深度信息解码器392对该深度编码信息进行解码,输出深度信息;视频解码器393对该视频编码信息和该深度信息进行处理,输出目标图像帧;深度图像生成器394对该深度信息进行恢复,输出深度图像帧。
示例性地,如图11(d)所示的一种信息处理系统的结构示意图,信息处理系统包括编码装置6和解码装置49,编码装置6包括深度信息编码器63和视频编码器64;解码装置49包括深度信息解码器492和视频解码器493,深度信息解码器492包括深度图像生成器494,视频解码器493包括图像处理器491;其中,深度信息编码器63将深度编码信息发送至深度信息解码器492,视频编码器64将视频编码信息发送至视频解码器493;深度信息解码器492对该深度编码信息进行处理,输出深度图像帧和深度信息;视频解码器493对该视频编码信息和该深度信息进行处理,输出目标图像帧。
需要说明的是,信息处理系统中的深度信息编码器对深度信息进行编码,得到多个深度编码信息时,可以利用一个深度信息编码器对多个深度信息进行编码,生成多个深度编码信息,将多个深度编码信息写入多路码流;或者,多个深度信息编码器对多个深度信息进行编码,生成多个深度编码信息,将多个深度编码信息写入多路码流或一路码流;或者,当由深度信息生成深度图像和冗余信息时,由同一个深度信息编码器或多个深度信息编码器,对深度图像进行编码,得到深度图像编码信息,并将深度图像编码信息写入一路码流,再对冗余信息进行编码,得到冗余信息编码信息,并将冗余信息编码信息写入另一路码流;相应地,可以由一个深度信息解码器解析多路的码流,或多个深度信息解码器解析一路码流,或多个深度信息解码器解析多路码流,具体的可根据实际情况确定,本申请实施例对此不做限定。
本领域内的技术人员应明白,本申请的实施例可提供为方法、装置、系统或计算机程序产品。因此,本申请可采用硬件实施例、软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、装置、系统和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述,仅为本申请的较佳实施例而已,并非用于限定本申请的保护范围。
工业实用性
本申请实施例采用上述技术实现方案,直接采用深度信息进行编码,得到表征深度信息的编码信息,将该编码信息发送给解码装置,如此,解码装置可以从编码信息中解码出深度信息和视频帧,进而,解码装置不仅可以利用深度信息,恢复得到深度图像,还可以利用深度信息,对视频帧进行图像处理,提高了信息利用率。

Claims (35)

  1. 一种信息处理方法,应用于编码装置,其中,所述方法包括:
    采集深度信息和视频帧;
    对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息;
    将所述编码信息写入码流,并将所述码流发送至解码装置,以使得所述解码装置基于所述编码信息进行图像处理。
  2. 根据权利要求1所述的方法,其中,所述采集深度信息和视频帧,包括:
    在预设时长内,采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息;
    将所述初始深度信息,作为所述深度信息。
  3. 根据权利要求2所述的方法,其中,所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,所述方法还包括:
    对所述初始深度信息进行相位校准,得到相位信息;
    将所述相位信息,作为所述深度信息。
  4. 根据权利要求2所述的方法,其中,所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,所述方法还包括:
    对所述初始深度信息进行深度图像生成,得到冗余信息;所述冗余信息为生成深度图像的过程中产生的除了深度图像之外的其他信息;
    将所述冗余信息,作为所述深度信息。
  5. 根据权利要求1所述的方法,其中,所述编码信息为混合编码信息;所述对所述深度信息和所述视频帧进行联合编码,得到编码信息,包括:
    利用所述深度信息和所述视频帧的相关性,对所述深度信息和所述视频帧进行联合编码,得到所述混合编码信息;
    或者,对所述视频帧进行编码,得到视频编码信息;对所述深度信息进行编码,得到深度编码信息;将所述深度编码信息合并至所述视频编码信息中的预设位置处,得到所述混合编码信息。
  6. 根据权利要求1所述的方法,其中,所述编码信息为深度编码信息和视频编码信息;所述对所述深度信息和所述视频帧进行联合编码,得到编码信息,包括:
    对所述深度信息进行编码,得到所述深度编码信息;
    对所述视频帧进行编码,得到所述视频编码信息。
  7. 根据权利要求5或6所述的方法,其中,所述对所述深度信息进行编码,得到深度编码信息;或者,所述对所述深度信息进行编码,得到所述深度编码信息,包括:
    对所述深度信息进行缩减处理,得到缩减后的深度信息;所述缩减后的深度信息的数据量小于所述深度信息的数据量;
    对所述缩减后的深度信息进行编码,得到所述深度编码信息。
  8. 根据权利要求7所述的方法,其中,所述对所述深度信息进行缩减处理,得到缩减后的深度信息,包括:
    从所述视频帧中确定部分视频帧,并从所述深度信息中确定与所述部分视频帧对应的部分深度信息;
    或者,从所述视频帧中确定部分图像位置,并从所述深度信息中确定与所述部分图像位置对应的部分深度信息;
    将所述部分深度信息,作为所述缩减后的深度信息。
  9. 根据权利要求7所述的方法,其中,所述对所述深度信息进行缩减处理,得到缩减后的深度信息,包括:
    利用所述深度信息的相位相关性、所述深度信息的空间相关性、所述深度信息的时间相关性、预设深度范围或者所述深度信息的频域相关性,对所述深度信息进行冗余消除,得到消除后的深度信息;
    将所述消除后的深度信息,作为所述缩减后的深度信息。
  10. 根据权利要求9所述的方法,其中,所述利用所述深度信息的相位相关性、所述深度信息的空间相关性、所述深度信息的时间相关性、预设深度范围或者所述深度信息的频域相关性,对所述深度信息进行冗余消除,得到消除后的深度信息,包括:
    当所述深度信息为至少两个相位信息时,利用所述至少两个相位信息之间的相位相关性,对所述至 少两个相位信息进行冗余消除,得到所述消除后的深度信息;
    或者,当所述深度信息不是所述至少两个相位信息时,利用所述深度信息的空间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
    或者,利用所述深度信息的时间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
    或者,利用所述预设深度范围,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
    或者,对所述深度信息进行频域转换,得到频域信息;利用所述频域相关性,对所述频域信息进行冗余消除,得到所述消除后的深度信息。
  11. 根据权利要求1所述的方法,其中,在所述对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息之后,所述方法还包括:
    利用编码二进制数据之间的相关性,对所述编码信息消除比特冗余,得到消除后的编码信息;
    将所述消除后的编码信息写入码流,并将所述码流发送至解码装置,以使得所述解码装置基于所述消除后的编码信息进行图像处理。
  12. 一种信息处理方法,应用于解码装置,其中,所述方法包括:
    当接收到携带有编码信息的码流时,对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧;
    利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,并将所述目标图像帧合成视频。
  13. 根据权利要求12所述的方法,其中,所述利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,包括:
    利用所述深度信息调整所述视频帧的景深,得到所述景深图像帧;
    将所述景深图像帧,作为所述目标图像帧。
  14. 根据权利要求12所述的方法,其中,所述利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,包括:
    当所述深度信息为相位信息时,利用所述相位信息,对所述视频帧进行去模糊,得到去模糊图像帧;
    将所述去模糊图像帧,作为所述目标图像帧。
  15. 根据权利要求12所述的方法,其中,在所述对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧之后,所述方法还包括:
    对所述深度信息进行恢复,生成深度图像帧。
  16. 一种编码装置,其中,所述编码装置包括:深度信息模组、图像传感器和编码器;
    所述深度信息模组,用于采集深度信息;
    所述图像传感器,用于采集视频帧;
    所述编码器,用于对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息;以及将所述编码信息写入码流,并将所述码流发送至解码装置,以使得所述解码装置基于所述编码信息进行图像处理。
  17. 根据权利要求16所述的装置,其中,所述深度信息模组包括深度信息传感器;
    所述图像传感器,还用于在预设时长内,采集所述视频帧;
    所述深度信息传感器,用于在所述预设时长内,通过飞行时间模组或双目视觉模组采集初始深度信息;以及将所述初始深度信息,作为所述深度信息。
  18. 根据权利要求17所述的装置,其中,
    所述深度信息模组,还用于在所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,对所述初始深度信息进行相位校准,得到相位信息;以及将所述相位信息,作为所述深度信息。
  19. 根据权利要求17所述的装置,其中,
    所述深度信息模组,还用于在所述采集所述视频帧,并通过飞行时间模组或双目视觉模组采集初始深度信息之后,对所述初始深度信息进行深度图像生成,得到冗余信息;所述冗余信息为生成深度图像的过程中产生的除了深度图像之外的其他信息;以及将所述冗余信息,作为所述深度信息。
  20. 根据权利要求16所述的装置,其中,所述编码信息为混合编码信息;所述编码器包括视频编码器;
    所述视频编码器,用于利用所述深度信息和所述视频帧的相关性,对所述深度信息和所述视频帧进行联合编码,得到所述混合编码信息;
    或者,对所述视频帧进行编码,得到视频编码信息;对所述深度信息进行编码,得到深度编码信息;将所述深度编码信息合并至所述视频编码信息中的预设位置处,得到所述混合编码信息。
  21. 根据权利要求20所述的装置,其中,
    所述视频编码器,还用于对所述深度信息进行缩减处理,得到缩减后的深度信息;所述缩减后的深度信息的数据量小于所述深度信息的数据量;以及对所述缩减后的深度信息进行编码,得到所述深度编码信息。
  22. 根据权利要求21所述的装置,其中,
    所述视频编码器,还用于从所述视频帧中确定部分视频帧,并从所述深度信息中确定与所述部分视频帧对应的部分深度信息;
    或者,从所述视频帧中确定部分图像位置,并从所述深度信息中确定与所述部分图像位置对应的部分深度信息;
    以及将所述部分深度信息,作为所述缩减后的深度信息。
  23. 根据权利要求21所述的装置,其中,
    所述视频编码器,还用于利用所述深度信息的相位相关性、所述深度信息的空间相关性、所述深度信息的时间相关性、预设深度范围或者所述深度信息的频域相关性,对所述深度信息进行冗余消除,得到消除后的深度信息;以及将所述消除后的深度信息,作为所述缩减后的深度信息。
  24. 根据权利要求23所述的装置,其中,
    所述视频编码器,还用于当所述深度信息为至少两个相位信息时,利用所述至少两个相位信息之间的相位相关性,对所述至少两个相位信息进行冗余消除,得到所述消除后的深度信息;
    或者,当所述深度信息不是所述至少两个相位信息时,利用所述深度信息的空间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
    或者,利用所述深度信息的时间相关性,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
    或者,利用所述预设深度范围,对所述深度信息进行冗余消除,得到所述消除后的深度信息;
    或者,对所述深度信息进行频域转换,得到频域信息;利用所述频域相关性,对所述频域信息进行冗余消除,得到所述消除后的深度信息。
  25. 根据权利要求16所述的装置,其中,所述编码信息为深度编码信息和视频编码信息;所述编码器包括深度信息编码器和视频编码器;其中,
    所述深度信息编码器,用于对所述深度信息进行编码,得到所述深度编码信息;
    所述视频编码器,用于对所述视频帧进行编码,得到所述视频编码信息。
  26. 根据权利要求16所述的装置,其中,
    所述编码器,还用于在所述对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息之后,利用编码二进制数据之间的相关性,对所述编码信息消除比特冗余,得到消除后的编码信息;以及将所述消除后的编码信息写入码流,并将所述码流发送至解码装置,以使得所述解码装置基于所述消除后的编码信息进行图像处理。
  27. 一种解码装置,其中,所述解码装置包括:图像处理器和解码器;
    所述解码器,用于当接收到携带有编码信息的码流时,对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧;
    所述图像处理器,用于利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,并将所述目标图像帧合成视频。
  28. 根据权利要求27所述的装置,其中,
    所述图像处理器,还用于利用所述深度信息调整所述视频帧的景深,得到所述景深图像帧;以及将所述景深图像帧,作为所述目标图像帧。
  29. 根据权利要求27所述的装置,其中,
    所述图像处理器,还用于当所述深度信息为相位信息时,利用所述相位信息,对所述视频帧进行去模糊,得到去模糊图像帧;以及将所述去模糊图像帧,作为所述目标图像帧。
  30. 根据权利要求27所述的装置,其中,所述解码装置还包括深度图像生成器;
    所述深度图像生成器,用于在所述对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧之后,对所述深度信息进行恢复,生成深度图像帧。
  31. 根据权利要求27所述的装置,其中,所述解码器包括视频解码器,所述解码装置还包括深度图像生成器;
    所述深度图像生成器和所述图像处理器独立于所述视频解码器,所述视频解码器连接所述深度图像生成器和所述图像处理器;或者,所述深度图像生成器和所述图像处理器集成于所述视频解码器中;或者,所述深度图像生成器集成于所述视频解码器中,所述图像处理器独立于所述视频解码器,所述视频 解码器连接所述图像处理器;或者,所述图像处理器集成于所述视频解码器中,所述深度图像生成器独立于所述视频解码器,所述视频解码器连接所述深度图像生成器。
  32. 根据权利要求27所述的装置,其中,所述解码器包括深度信息解码器和视频解码器,所述解码装置还包括深度图像生成器;
    所述深度图像生成器独立于所述深度信息解码器,所述图像处理器独立于所述视频解码器,所述深度信息解码器连接所述深度图像生成器和所述图像处理器,所述视频解码器连接所述图像处理器;或者,所述深度图像生成器集成于所述深度信息解码器中,所述图像处理器独立于所述视频解码器,所述深度信息解码器和所述视频解码器连接所述图像处理器;或者,所述深度图像生成器独立于所述深度信息解码器,所述图像处理器集成于所述视频解码器中,所述深度信息解码器连接所述深度图像生成器和所述视频解码器;或者,所述深度图像生成器集成于所述视频解码器中,所述图像处理器集成于所述深度信息解码器中,所述深度信息解码器连接所述视频解码器。
  33. 一种信息处理系统,其中,所述系统包括:编码装置和解码装置,所述编码装置包括深度信息模组、图像传感器和编码器,所述解码装置包括图像处理器和解码器;
    所述深度信息模组,用于采集深度信息;
    所述图像传感器,用于采集视频帧;
    所述编码器,用于对所述深度信息和所述视频帧进行联合编码或独立编码,得到编码信息;以及将所述编码信息写入码流,并将所述码流发送至所述解码装置;
    所述解码器,用于当接收到所述码流时,对所述码流进行联合解码或独立解码,得到所述深度信息和所述视频帧;
    所述图像处理器,用于利用所述深度信息,对所述视频帧进行图像处理,得到目标图像帧,并将所述目标图像帧合成视频。
  34. 一种计算机可读存储介质,应用于编码装置,其中,所述计算机可读存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个第一处理器执行,以实现如权利要求1-11任一项所述的方法。
  35. 一种计算机可读存储介质,应用于解码装置,其中,所述计算机可读存储介质存储有一个或者多个程序,所述一个或者多个程序可被一个或者多个第二处理器执行,以实现如权利要求12-15任一项所述的方法。
PCT/CN2019/115935 2019-11-06 2019-11-06 信息处理方法、编码装置、解码装置、系统及存储介质 WO2021087800A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201980098950.3A CN114175626B (zh) 2019-11-06 2019-11-06 信息处理方法、编码装置、解码装置、系统及存储介质
PCT/CN2019/115935 WO2021087800A1 (zh) 2019-11-06 2019-11-06 信息处理方法、编码装置、解码装置、系统及存储介质
US17/691,095 US20220230361A1 (en) 2019-11-06 2022-03-09 Information processing method, and encoding device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/115935 WO2021087800A1 (zh) 2019-11-06 2019-11-06 信息处理方法、编码装置、解码装置、系统及存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/691,095 Continuation US20220230361A1 (en) 2019-11-06 2022-03-09 Information processing method, and encoding device

Publications (1)

Publication Number Publication Date
WO2021087800A1 true WO2021087800A1 (zh) 2021-05-14

Family

ID=75849193

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/115935 WO2021087800A1 (zh) 2019-11-06 2019-11-06 信息处理方法、编码装置、解码装置、系统及存储介质

Country Status (3)

Country Link
US (1) US20220230361A1 (zh)
CN (1) CN114175626B (zh)
WO (1) WO2021087800A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102265617A (zh) * 2008-12-26 2011-11-30 日本胜利株式会社 图像编码装置、图像编码方法及其程序、以及图像解码装置、图像解码方法及其程序
CN102792699A (zh) * 2009-11-23 2012-11-21 通用仪表公司 作为到视频序列的附加通道的深度代码化
CN108053435A (zh) * 2017-11-29 2018-05-18 深圳奥比中光科技有限公司 基于手持移动设备的动态实时三维重建方法和系统
EP3457688A1 (en) * 2017-09-15 2019-03-20 Thomson Licensing Methods and devices for encoding and decoding three degrees of freedom and volumetric compatible video stream
CN109889809A (zh) * 2019-04-12 2019-06-14 深圳市光微科技有限公司 深度相机模组、深度相机、深度图获取方法以及深度相机模组形成方法
CN110268450A (zh) * 2017-02-13 2019-09-20 索尼公司 图像处理装置和图像处理方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616322A (zh) * 2008-06-24 2009-12-30 深圳华为通信技术有限公司 立体视频编解码方法、装置及系统
WO2010043773A1 (en) * 2008-10-17 2010-04-22 Nokia Corporation Sharing of motion vector in 3d video coding
EP2868092A4 (en) * 2012-07-02 2016-05-04 Nokia Technologies Oy METHOD AND DEVICE FOR VIDEO CODING
CN105847777B (zh) * 2016-03-24 2018-04-17 湖南拓视觉信息技术有限公司 一种传输三维深度图像的方法及装置


Also Published As

Publication number Publication date
CN114175626B (zh) 2024-04-02
CN114175626A (zh) 2022-03-11
US20220230361A1 (en) 2022-07-21

Similar Documents

Publication Publication Date Title
JP5763184B2 (ja) 3次元画像に対する視差の算出
JP2019534606A (ja) ライトフィールドデータを使用して場面を表す点群を再構築するための方法および装置
WO1997004404A1 (en) Multi-viewpoint digital video encoding
RU2011122274A (ru) Устройство и способ обработки изображений
JP2017512420A (ja) モデルを使用して動画配信における遅延を低減するためのシステムおよび方法
JP7171169B2 (ja) ライトフィールド・コンテンツを表す信号を符号化する方法および装置
JP2015019326A (ja) 符号化装置および符号化方法、並びに、復号装置および復号方法
WO2021087819A1 (zh) 信息处理方法、终端设备及存储介质
WO2015115946A1 (en) Methods for encoding and decoding three-dimensional video content
WO2021087800A1 (zh) 信息处理方法、编码装置、解码装置、系统及存储介质
CN102333230A (zh) 一种提高三维视频系统中合成虚拟视图质量的方法
US20130120530A1 (en) Image processing apparatus and method and program
CN110784706B (zh) 信息处理方法、编码装置、解码装置、系统及存储介质
CN110809152A (zh) 信息处理方法、编码装置、解码装置、系统及存储介质
KR20130084227A (ko) 화상 처리 장치 및 화상 처리 방법
EP2839437B1 (en) View synthesis using low resolution depth maps
CN112788325B (zh) 一种图像处理方法、编码装置、解码装置及存储介质
WO2021087810A1 (zh) 信息处理方法和系统、编码装置、解码装置及存储介质
KR101502144B1 (ko) 계수 정보를 변환하는 방법 및 장치
JP4764516B1 (ja) 多視点画像符号化装置
TWI526044B (zh) 處理信號之方法及其系統
JP7382186B2 (ja) 符号化装置、復号装置、及びプログラム
JP2019083405A (ja) 復号装置、伝送装置、復号方法、伝送装置の制御方法、およびプログラム
JP7417388B2 (ja) 符号化装置、復号装置、及びプログラム
KR102094848B1 (ko) (초)다시점 미디어의 라이브 스트리밍 방법 및 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19951662

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19951662

Country of ref document: EP

Kind code of ref document: A1