WO2021087810A1 - Information processing method and system, encoding device, decoding device, and storage medium - Google Patents

Information processing method and system, encoding device, decoding device, and storage medium

Info

Publication number
WO2021087810A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth information
scene
target
code stream
depth
Prior art date
Application number
PCT/CN2019/116011
Other languages
English (en)
French (fr)
Inventor
Jia Yuhu (贾玉虎)
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2019/116011 (WO2021087810A1)
Priority to CN201980100411.9A (CN114402590A)
Publication of WO2021087810A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 - Image coding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Definitions

  • the embodiments of the present application relate to the field of image coding and decoding technologies, and in particular, to an information processing method and system, an encoding device, a decoding device, and a storage medium.
  • Depth cameras have gradually been widely used in terminals.
  • the terminal can obtain not only color information from the image, but also depth information; that is, the terminal can obtain a three-dimensional model of the target scene through an image. Since the amount of depth information obtained by the depth information sensor is far greater than the amount of information presented by the depth image, the prior-art processing method of combining the depth image and the normal image will not only produce decoding redundancy, but also reduce the utilization rate of the depth information, so that the depth information cannot be fully utilized.
  • the embodiments of the present application provide an information processing method and system, an encoding device, a decoding device, and a storage medium, which can maximize the presentation of depth information, greatly improve the utilization rate of depth information, and effectively solve the problem of decoding redundancy.
  • an embodiment of the present application provides an information processing method, which is applied to an encoding device, and the method includes:
  • the depth information code stream is sent to a decoding device, so that the decoding device obtains an image corresponding to the target scene.
  • an embodiment of the present application provides an information processing method, which is applied to a decoding device, and the method includes:
  • the image corresponding to the target scene is obtained by using the scene depth information.
  • an embodiment of the present application provides an encoding device, the encoding device includes: a depth information module and a depth information encoder,
  • the depth information module is configured to collect the scene depth information of the target scene
  • the depth information encoder is configured to independently encode the scene depth information to obtain a depth information code stream; send the depth information code stream to a decoding device, so that the decoding device obtains an image corresponding to the target scene.
  • an embodiment of the present application provides a decoding device.
  • the decoding device includes a depth information decoder and a processor,
  • the depth information decoder is configured to receive the depth information code stream of the target scene; independently decode the depth information code stream to obtain scene depth information;
  • the processor is configured to obtain an image corresponding to the target scene by using the scene depth information.
  • an embodiment of the present application provides an information processing system.
  • the information processing system includes an encoding device and a decoding device.
  • the encoding device includes a depth information module and a depth information encoder.
  • the decoding device includes a depth information decoder and a processor.
  • the depth information module is configured to collect scene depth information of the target scene
  • the depth information encoder is configured to independently encode the scene depth information to obtain a depth information code stream; send the depth information code stream to a decoding device;
  • the depth information decoder is configured to receive the depth information code stream of the target scene; independently decode the depth information code stream to obtain scene depth information;
  • the processor is configured to obtain an image corresponding to the target scene by using the scene depth information.
  • the embodiments of the present application provide a computer-readable storage medium applied to an encoding device, and a computer program is stored thereon.
  • the computer program is executed by a processor, the foregoing information processing method applied to the encoding device is implemented.
  • an embodiment of the present application provides a computer-readable storage medium applied to a decoding device, and a computer program is stored thereon.
  • the computer program is executed by a processor, the above-mentioned information processing method applied to the decoding device is implemented.
  • the embodiments of the present application provide an information processing method and system, an encoding device, a decoding device, and a storage medium.
  • the information processing method applied to the encoding device includes: obtaining scene depth information of a target scene collected by a depth information module; independently encoding the scene depth information to obtain a depth information code stream; and sending the depth information code stream to the decoding device, so that the decoding device obtains the image corresponding to the target scene. The information processing method applied to the decoding device includes: receiving the depth information code stream of the target scene; independently decoding the depth information code stream to obtain scene depth information; and using the scene depth information to obtain an image corresponding to the target scene.
  • the depth information encoder in the encoding device independently encodes the scene depth information collected by the depth sensor and corresponding to the target scene, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain an image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding processing on scene depth information, thereby maximizing the presentation of depth information, greatly improving the utilization rate of depth information, and effectively solving the problem of decoding redundancy.
  • FIG. 1 is a first flowchart of an information processing method provided by an embodiment of this application
  • FIG. 2 is a schematic diagram of encoding of an exemplary depth information encoder provided by an embodiment of the application
  • FIG. 3 is a schematic diagram of an exemplary information sampling provided by an embodiment of the application.
  • FIG. 4 is a second schematic flowchart of an information processing method provided by an embodiment of this application.
  • Fig. 5 is a schematic diagram of decoding of an exemplary depth information decoder provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of an exemplary information interpolation provided by an embodiment of this application.
  • FIG. 7 is a schematic diagram of an exemplary three-dimensional image imaging provided by an embodiment of the application.
  • FIG. 8 is a third schematic flowchart of an information processing method provided by an embodiment of this application.
  • FIG. 9 is a schematic structural diagram of an encoding device provided by an embodiment of this application.
  • FIG. 10 is a schematic structural diagram of a decoding device provided by an embodiment of this application.
  • FIG. 11 is a first structural diagram of an information processing system provided by an embodiment of this application.
  • FIG. 12 is a second structural diagram of an information processing system provided by an embodiment of this application.
  • the 3D HEVC encoder performs joint video encoding of normal images from multiple viewpoints and corresponding depth images to form a video stream.
  • the 3D HEVC decoder parses the code stream, it obtains normal images of multiple viewpoints and corresponding depth images.
  • the display terminal performs corresponding processing on the normal image and the corresponding depth image in each viewpoint to form a stereoscopic image perceivable by the human eye.
  • the depth image at the encoder end is captured by a depth camera.
  • the decoder still needs to follow the codec specification and restore certain additional redundant information, resulting in decoding redundancy.
  • if the depth image captured by the depth camera is directly encoded by the encoder, then at the decoder side only the depth image can be obtained; but the amount of information actually obtained by the depth camera is much greater than the amount of information presented by the depth image, leading to the defect of low information utilization.
  • the depth information encoder in the encoding device independently encodes the scene depth information collected by the depth sensor and corresponding to the target scene, and obtains and sends the depth information code stream.
  • the depth information decoder receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain the image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding processing on scene depth information, thereby maximizing the presentation of depth information, greatly improving the utilization rate of depth information, and effectively solving the problem of decoding redundancy.
  • FIG. 1 is a first schematic flowchart of an information processing method provided by an embodiment of this application. As shown in Fig. 1, the method for information processing by the encoding device may include the following steps:
  • Step 101 Obtain scene depth information of the target scene collected by the depth information module.
  • the encoding device may first use the depth information module to collect the scene depth information of the target scene.
  • the target scene may be an actual scene that the user needs to photograph, and the specific target scene is not limited in the embodiment of the present application.
  • the encoding device may be configured with a depth information module, where the depth information module is used to collect depth information.
  • the encoding device may be configured with an image sensor, and the image sensor is used to collect two-dimensional image data, so as to generate a normal image corresponding to the target scene. That is to say, the encoding device may be equipped with a photographing device to obtain a normal image of the target scene. Further, the encoding device may also communicate with other imaging devices to receive normal images of the target scene generated by other imaging devices.
  • the source of the specific normal images of the target scene is not limited in this embodiment of the application.
  • when the encoding device obtains the scene depth information of the target scene collected by the depth information module, it may first use the depth information module to collect the original depth information of the target scene, and then directly determine the original depth information as the scene depth information.
  • alternatively, when the encoding device obtains the scene depth information of the target scene collected by the depth information module, it may use the depth information module to collect the original depth information of the target scene and then preprocess the original depth information, so that the scene depth information can be obtained.
  • the preprocessing method may be phase calibration or other methods, which is not limited in the embodiment of the present application.
  • the scene depth information may be the original depth information obtained by the depth information module, or data information obtained after the original depth information undergoes certain processing. If the scene depth information is original depth information, it can be charge information or other information, such as electrical signals after photoelectric conversion; if the scene depth information is data information obtained after processing, it can be the intermediate image data for generating a depth image, or the finally generated depth image together with other redundant information.
  • the encoding device may include a depth information module and a depth information encoder, and the depth information module may be provided with a depth information sensor.
  • the depth information output by the depth information module may actually include scene depth information and auxiliary depth information, where the scene depth information is the depth information directly output by the depth information sensor configured in the depth information module; it can be either the original depth information obtained by the depth information module, or the depth information obtained after the original depth information is preprocessed.
  • the scene depth information may be charge information or other information, such as electrical signals after photoelectric conversion.
  • the scene depth information may be the intermediate image data for generating the depth image, or the finally generated depth image and
  • the specific processing method can be phase calibration or other methods.
  • the depth information output by it not only includes scene depth information, but may also include auxiliary depth information. Since the auxiliary depth information is actually the mapping information between the defined distance and the phase encoding in the depth information module, it remains unchanged for a long time and its amount of data is small; therefore, in the process of sending information, it can be sent directly without encoding.
  • the auxiliary depth information can also be merged into the scene depth information for encoding and sending, which is not limited in the embodiment of the present application.
  • Step 102 Perform independent encoding on the scene depth information to obtain a depth information code stream.
  • the encoding device can continue to independently encode the scene depth information, so as to obtain the depth information code stream.
  • the depth information encoder set in the encoding device can independently encode the scene depth information to obtain the depth information code stream; that is, in the embodiment of the present application, the encoding device directly encodes the scene depth information corresponding to the target scene.
  • when the encoding device independently encodes the scene depth information through the depth information encoder, it may first perform de-redundancy processing on the scene depth information to obtain the corresponding target depth information, and then entropy-encode the target depth information to finally obtain the depth information code stream.
  • when the depth information encoder in the encoding device performs de-redundancy processing on the scene depth information, it may do so according to a preset coding strategy, thereby obtaining the target depth information.
  • the preset coding strategy is used to perform at least one of frame prediction, frequency domain transformation, quantization, and sampling.
  • the depth information encoder can perform at least any one of frame prediction, frequency domain transformation, quantization, and sampling on the scene depth information, so as to obtain target depth information.
  • the depth information encoder may also perform scene depth information de-redundancy processing in other ways, which is not limited in the embodiment of the present application.
  • FIG. 2 is a schematic diagram of encoding of an exemplary depth information encoder provided by an embodiment of the application. As shown in FIG. 2, the depth information encoder can select at least one mode from intra prediction, inter prediction, and other prediction modes for prediction according to the scene depth information.
  • multiple encoding methods can be used to eliminate correlation.
  • the following encoding methods can be used, without being limited thereto, to eliminate correlation: if the scene depth information consists of multiple phase images, the correlation between the phases can be used to eliminate phase data redundancy; if the scene depth information is other data, the spatial correlation between these data can be used to eliminate data redundancy, for example, through intra-frame prediction, and the temporal correlation of the scene depth information can also be used to eliminate data redundancy, for example, through inter-frame prediction.
  • Frequency domain transform processing can transform scene depth information into frequency domain, and use frequency domain correlation to eliminate frequency domain data redundancy, such as performing discrete Fourier transform.
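As an illustrative sketch of the frequency-domain idea (not the patent's specified implementation), the block below transforms a patch of depth samples with a 2-D discrete Fourier transform, discards the smallest-magnitude coefficients, and reconstructs the patch; the `keep_ratio` parameter and function names are hypothetical:

```python
import numpy as np

def dft_compress(block, keep_ratio=0.25):
    """Move a 2-D block of depth samples to the frequency domain and keep
    only the largest-magnitude coefficients, zeroing the rest.
    keep_ratio is a hypothetical tuning parameter."""
    coeffs = np.fft.fft2(block)
    flat = np.abs(coeffs).ravel()
    k = max(1, int(flat.size * keep_ratio))
    thresh = np.sort(flat)[-k]          # magnitude of the k-th largest coefficient
    coeffs[np.abs(coeffs) < thresh] = 0
    return coeffs

def dft_reconstruct(coeffs):
    """Inverse transform back to the sample domain (real part)."""
    return np.real(np.fft.ifft2(coeffs))

# A smooth depth patch: most of its energy concentrates in few coefficients.
depth_block = np.fromfunction(lambda y, x: 100 + y + x, (8, 8))
compressed = dft_compress(depth_block, keep_ratio=0.25)
restored = dft_reconstruct(compressed)
```

Because the patch is smooth, the reconstruction error stays small even after most coefficients are zeroed, which is exactly the frequency-domain redundancy being exploited.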
  • the quantization process can exploit sensitivity to scene depth to eliminate scene-based data redundancy.
  • the depth information encoder can use the quantization result as the target depth information to perform entropy encoding on the target depth information. Entropy encoding can actually use the correlation between encoded binary data to eliminate encoding bit redundancy.
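Entropy coding itself is codec-specific; as a hedged stand-in, a general-purpose DEFLATE compressor (LZ77 plus Huffman coding) illustrates how the correlation between binary symbols reduces the number of coded bits:

```python
import zlib

# Hypothetical quantized target depth information with strong repetition:
# highly correlated data compresses to far fewer bytes than its raw size.
target_depth = bytes([100, 100, 101, 100, 100, 101, 100, 100] * 64)  # 512 bytes

compressed = zlib.compress(target_depth, level=9)
restored = zlib.decompress(compressed)  # lossless round trip
```

The real depth information encoder would use its own entropy coder; DEFLATE is only borrowed here because it is a readily available example of the same bit-redundancy principle.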
  • FIG. 3 is a schematic diagram of an exemplary information sampling provided by an embodiment of the application.
  • the depth information encoder can sample the scene depth information with a fixed step size and encode the sampled information. Each small box in Figure 3 is actually one piece of the scene depth information; specifically, with a sampling step of 3, the corresponding information is selected and used as the target depth information for encoding to obtain the depth information code stream.
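The fixed-step sampling described above can be sketched as follows; the depth values (in millimetres) and the step size of 3 are purely illustrative:

```python
# Scene depth information as a flat list of samples (illustrative values).
scene_depth = [1500, 1502, 1504, 1506, 1508, 1510, 1512, 1514, 1516]

step = 3
# Keep every third sample (positions 0, 3, 6, ...); only these are encoded.
sampled = scene_depth[::step]
print(sampled)  # [1500, 1506, 1512]
```

The skipped samples are not lost outright: the decoder restores them later by interpolating between the retained neighbours.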
  • as a possible implementation, the depth information encoder may also select only a part of the scene depth information for encoding, for example, in an augmented reality (Augmented Reality, AR) scene.
  • the depth information encoder can encode all scene depth information, or can only encode depth information in a specified time or space, which is not limited in the embodiment of the present application.
  • the viewpoints can also be encoded at intervals. Because the scene depth information of different viewpoints at the same time, for example a phase-encoded image or a charge image, carries strongly correlated information, this correlation can be used to reduce the amount of data sent in the code stream. For example, for three-view video coding, only the scene depth information of the left and right viewpoints needs to be stored in the code stream; correspondingly, the depth information decoder can subsequently interpolate the scene depth information of the left and right viewpoints to obtain the scene depth information of the middle viewpoint.
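For the three-view example, the middle viewpoint can be estimated from the left and right viewpoints. The patent does not fix the interpolation kernel, so the per-sample averaging below is only an assumed, minimal choice:

```python
def interpolate_middle_view(left, right):
    """Estimate the middle viewpoint's scene depth information by
    per-sample averaging of the left and right viewpoints (an assumed
    kernel; a real codec might use disparity-compensated interpolation)."""
    return [(l + r) / 2 for l, r in zip(left, right)]

left_depth = [1000, 1010, 1020]    # illustrative left-view depth samples
right_depth = [1004, 1014, 1024]   # illustrative right-view depth samples
middle = interpolate_middle_view(left_depth, right_depth)
```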
  • Step 103 Send the depth information code stream to the decoding device, so that the decoding device obtains an image corresponding to the target scene.
  • after the encoding device obtains the depth information code stream through the depth information encoder, it can send the depth information code stream to the decoding device, and the decoding device can thus obtain the image corresponding to the target scene based on the depth information code stream.
  • the embodiment of the application provides an information processing method, which is applied to an encoding device.
  • the encoding device obtains scene depth information of a target scene collected by a depth information module; independently encodes the scene depth information to obtain a depth information code stream; and sends the depth information code stream to the decoding device, so that the decoding device obtains an image corresponding to the target scene.
  • the depth information encoder in the encoding device independently encodes the scene depth information collected by the depth sensor and corresponding to the target scene, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain an image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding processing on scene depth information, thereby maximizing the presentation of depth information, greatly improving the utilization rate of depth information, and effectively solving the problem of decoding redundancy.
  • FIG. 4 is a second schematic flowchart of an information processing method provided by an embodiment of this application. As shown in FIG. 4, the method for information processing by the decoding device may include the following steps:
  • Step 401 Receive the depth information code stream of the target scene.
  • the decoding device may receive the depth information code stream corresponding to the target scene output by the encoding device.
  • the decoding device includes a depth information decoder and a processor, where the depth information decoder can receive a depth information code stream, so that the processor performs subsequent decoding processing.
  • the target scene may be an actual scene that the user needs to photograph, and the specific target scene is not limited in the embodiment of the present application.
  • Step 402 Perform independent decoding on the depth information code stream to obtain scene depth information.
  • the decoding device may independently decode the depth information code stream through the depth information decoder to obtain the scene depth information.
  • the decoding device may independently decode the depth information code stream through the configured depth information decoder, so as to obtain the scene depth information corresponding to the target scene.
  • when the decoding device independently decodes the depth information code stream, it can first perform entropy decoding on the depth information code stream to obtain the target depth information, and then perform reconstruction processing on the target depth information to finally obtain the scene depth information.
  • the process of independent decoding of the depth information code stream by the depth information decoder is opposite to the process of independent encoding of the scene depth information performed by the depth information encoder in step 102 above.
  • when the depth information decoder in the decoding device performs reconstruction processing on the target depth information, it may do so according to a preset decoding strategy, so as to obtain the scene depth information.
  • the preset decoding strategy is used to perform at least one of frame prediction and reconstruction, frequency domain inverse transform, inverse quantization, and interpolation.
  • the depth information decoder can perform at least any one of frame prediction and reconstruction, frequency domain inverse transformation, inverse quantization, and interpolation on the target depth information, so as to obtain scene depth information.
  • the depth information decoder may also perform the reconstruction processing of the target depth information in other ways, which is not limited in the embodiment of the present application.
  • Fig. 5 is a schematic diagram of decoding of an exemplary depth information decoder provided by an embodiment of the application.
  • the depth information decoder first performs entropy decoding on the depth information code stream to obtain the target depth information, then performs inverse quantization on the target depth information and further an inverse frequency-domain transformation, and finally, by performing the prediction reconstruction corresponding to the encoder's prediction, obtains the scene depth information.
  • the specific decoding method is not limited in this embodiment of the application.
  • the prediction process of the depth information encoder and the corresponding prediction and reconstruction process of the depth information decoder can select one or more of them.
  • the prediction process and the corresponding prediction reconstruction process can also be skipped.
  • the frequency domain transform of the depth information encoder and the frequency domain inverse transform of the corresponding depth information decoder can be adopted or skipped.
  • the quantization of the depth information encoder and the corresponding inverse quantization of the depth information decoder can be used or skipped, which is not limited in the embodiment of the present application.
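The decoder stages described above mirror the encoder in reverse order, and each stage is optional. A minimal structural sketch, with identity placeholders standing in for the real codec tools:

```python
# Identity placeholders for whichever tools the codec actually enables.
def entropy_decode(stream): return stream
def dequantize(data): return data
def inverse_frequency_transform(data): return data
def prediction_reconstruct(data): return data

def decode_depth_stream(bitstream):
    """Entropy decoding, inverse quantization, inverse frequency-domain
    transform, then prediction reconstruction; any stage may be skipped
    if the encoder skipped its counterpart."""
    target = entropy_decode(bitstream)           # code stream -> target depth info
    target = dequantize(target)
    target = inverse_frequency_transform(target)
    return prediction_reconstruct(target)        # -> scene depth information

scene_depth = decode_depth_stream([7, 8, 9])
```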
  • Fig. 6 is a schematic diagram of an exemplary information interpolation provided by an embodiment of the application.
  • the depth information decoder can use a fixed step to interpolate the target depth information obtained after entropy decoding of the depth information code stream, so as to restore the unsampled information, where each small box in FIG. 6 is actually a piece of information included in the target depth information.
  • the adjacent information can be used to perform interpolation recovery to obtain the scene depth information; for example, information 2 can be interpolated and restored based on information 1 and information 4.
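The decoder-side recovery can be sketched with linear interpolation over the fixed sampling step; the sample values are illustrative, and the linear kernel is an assumption (the text only requires that adjacent information be used):

```python
def restore_by_interpolation(sampled, step):
    """Linearly interpolate the samples dropped by fixed-step sampling,
    restoring each gap from its two retained neighbours."""
    restored = []
    for i in range(len(sampled) - 1):
        a, b = sampled[i], sampled[i + 1]
        for k in range(step):
            restored.append(a + (b - a) * k / step)
    restored.append(sampled[-1])
    return restored

sampled = [1500, 1506, 1512]  # values kept with a sampling step of 3
recovered = restore_by_interpolation(sampled, 3)
# The in-between values (e.g. 1502, 1504) are restored from their neighbours.
```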
  • Step 403 Obtain an image corresponding to the target scene by using the scene depth information.
  • the processor can use the scene depth information to obtain the image corresponding to the target scene.
  • the processor in the decoding device may be a depth image generator and/or an information processor. Specifically, if the processor is a depth image generator, the decoding device can generate the corresponding depth image according to the scene depth information; if the processor is an information processor, the decoding device can generate other corresponding derived data according to the scene depth information.
  • when the decoding device uses the scene depth information to obtain the image corresponding to the target scene through the processor, it may first obtain the phase information from the scene depth information, and then use the phase information to optimize the normal image corresponding to the target scene, so as to obtain an optimized image of the target scene.
  • the decoding device may be configured with an image sensor, and the image sensor is used to collect two-dimensional image data, so as to generate a normal image corresponding to the target scene.
  • the decoding device may be configured with a photographing device to obtain a normal image of the target scene.
  • the decoding device may also communicate with other imaging devices to receive the normal image of the target scene generated by other imaging devices; the specific source of the normal image of the target scene is not limited in this embodiment of the application.
  • usually, a long-exposure image and a short-exposure image need to be fused to obtain the final image.
  • the phase information, that is, the phase image, is a kind of scene depth information.
  • by obtaining the phase information from the scene depth information, the processor can assist in deblurring the normal image, thereby obtaining an optimized image.
  • the phase information is also subjected to matching encoding.
  • the processor may denoise the normal image of the target scene according to the phase information, where the normal image is a frame of the target scene video, so as to obtain an optimized image.
  • when the decoding device obtains the image corresponding to the target scene through the processor based on the scene depth information, it may also use the scene depth information to generate a depth image of the target scene.
  • the scene depth information includes related information that characterizes the depth of the target scene. Therefore, the processor can generate a depth image of the target scene by using this information. Compared with the prior art, where the depth image and the normal image are merged, and then coded and decoded and sent, the technical solution of the present application can generate the depth image more flexibly.
  • the decoding device may also fuse the depth image with the acquired normal image of the target scene, thereby generating a three-dimensional image of the target scene.
  • FIG. 7 is a schematic diagram of an exemplary three-dimensional image imaging provided by an embodiment of the application.
  • the depth information module 901 collects the scene depth information of the target scene
  • the image sensor 701 collects the normal image of the target scene
  • the video image encoder 702 encodes the normal image of the target scene.
  • the depth information encoder 902 in the encoding device encodes the scene depth information of the target scene; two code streams are thus formed, namely the depth information code stream and the image code stream. The depth information code stream is sent to the depth information decoder 1001, and the image code stream is sent to the video image decoder 703.
  • the depth information decoder 1001 and the video image decoder 703 of the decoding device respectively decode the corresponding code streams.
  • the processor 1002 uses the scene depth information to generate a depth image, and merges the normal image and the depth image to obtain a three-dimensional image of the target scene.
  • the embodiment of the application provides an information processing method applied to a decoding device: the decoding device receives a depth information code stream of a target scene; independently decodes the depth information code stream to obtain scene depth information; and uses the scene depth information to obtain the image corresponding to the target scene.
  • the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain an image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.
  • FIG. 8 is a third schematic flowchart of an information processing method provided by an embodiment of this application. As shown in FIG. 8, the depth information sensor 9011 provided in the depth information module 901 of the encoding device can collect the scene depth information of the target scene; the depth information encoder 902 in the encoding device then independently encodes the scene depth information to obtain the depth information code stream and sends it to the decoding device. After the depth information decoder 1001 in the decoding device receives the depth information code stream, it performs independent decoding to obtain the scene depth information, and the depth image generator 1201 provided in the decoding device can use the scene depth information to generate a depth image.
  • the scene depth information parsed by the decoding device can not only be used to generate a depth image, but can also undergo further processing.
  • the information processor 1202 provided in the decoding device can process the scene depth information to obtain other corresponding derived data.
  • while the video image is encoded, the phase image, as a kind of original depth information, is encoded in a manner matched to the normal image.
  • each normal image corresponds to multiple phase images sampled at different time points;
  • during decoding, when the normal image is blurred due to motion, the multiple phase images obtained by parsing carry more information from different time points, so the blurred image can be restored through motion estimation to obtain a clearer image.
  • the depth information code stream can not only be used for the generation of depth images; the noise and external visible light of the shooting scene can also be judged from the charge information. This information helps with denoising and white-balance adjustment of the corresponding image, so that better image quality can be obtained, giving users a more beautiful and realistic image and video experience.
  • when the encoding device acquires scene depth information, it may also use, but is not limited to, the following methods:
  • the encoding device can adopt the continuously modulated TOF (Time Of Flight) method.
  • under two different emission signal frequencies, by controlling the integration time, the sensor samples a total of 8 groups of signals with different phases. After photoelectric conversion, these 8 groups of signals are quantized to 10 bits to generate 8 original charge images. These 8 original charge images, together with sensor attribute parameters such as temperature, are encoded as the scene depth information; alternatively, the 8 original charge images are used to generate 2 frames of intermediate depth data and 1 frame of background data, which are encoded as the scene depth information.
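For context, in single-frequency continuous-wave ToF the phase shift of the modulated return signal is recovered from four phase samples and converted to distance; the eight groups at two frequencies described above extend this idea to resolve range ambiguity. A simplified sketch of the four-phase case follows. The exact formulas used by the application are not given, and the sampling convention s(theta) = A*cos(phi + theta) + B assumed here is an illustration only:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(s0, s90, s180, s270, f_mod):
    """Single-frequency, four-phase CW-ToF depth estimate.

    s0..s270: correlation samples taken at 0/90/180/270 degree offsets.
    f_mod: modulation frequency in Hz.
    """
    # Differences cancel the constant ambient offset; atan2 recovers phase.
    phi = math.atan2(s270 - s90, s0 - s180)
    if phi < 0:
        phi += 2 * math.pi
    # Round trip covers the distance twice, hence the factor 4*pi.
    return C * phi / (4 * math.pi * f_mod)
```

The unambiguous range of such a sketch is C / (2 * f_mod); combining two modulation frequencies, as the text describes, is the usual way to extend it.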
  • the encoding device can adopt the principle of binocular imaging: using the two images captured by a binocular camera, it calculates disparity and other information according to the poses of the two images, and encodes the disparity information, camera parameters, and so on as the scene depth information.
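The binocular route rests on the classic pinhole relation between disparity and depth: depth = focal length x baseline / disparity. A minimal sketch (the symbols and units are the conventional ones, not taken from the application):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation Z = f * B / d for rectified cameras.

    disparity_px: horizontal pixel shift of a point between the two views.
    focal_px: focal length expressed in pixels.
    baseline_m: distance between the two camera centres, in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px
```

This also shows why disparity plus camera parameters suffice as scene depth information: the decoder can reproduce depth from them without ever receiving a depth image.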
  • the embodiments of the present application provide an information processing method. The method applied to an encoding device includes: acquiring scene depth information of a target scene collected by a depth information module; independently encoding the scene depth information to obtain a depth information code stream; and sending the depth information code stream to the decoding device so that the decoding device obtains the image corresponding to the target scene. The method applied to a decoding device includes: receiving the depth information code stream of the target scene; independently decoding the depth information code stream to obtain the scene depth information; and using the scene depth information to obtain the image corresponding to the target scene.
  • the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain an image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.
  • FIG. 9 is a schematic structural diagram of an encoding device provided in an embodiment of this application.
  • the encoding device 90 includes: a depth information module 901 and a depth information encoder 902.
  • the depth information module 901 is configured to collect scene depth information of the target scene
  • the depth information encoder 902 is configured to independently encode the scene depth information to obtain a depth information code stream; send the depth information code stream to a decoding device, so that the decoding device obtains an image corresponding to the target scene .
  • the depth information module 901 includes a depth information sensor 9011,
  • the depth information sensor 9011 is specifically configured to collect original depth information of the target scene; and determine the original depth information as the scene depth information.
  • the depth information sensor 9011 is further specifically configured to, after the original depth information of the target scene is collected by the depth information sensor, preprocess the original depth information to obtain the scene depth information.
  • the depth information encoder 902 is specifically configured to perform de-redundancy processing on the scene depth information to obtain target depth information, and to perform entropy encoding on the target depth information to obtain the depth information code stream.
  • the depth information encoder 902 is specifically configured to perform de-redundancy processing on the scene depth information according to a preset coding strategy to obtain the target depth information, where the preset coding strategy is used to perform at least one of frame prediction, frequency-domain transformation, quantization, and sampling.
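As one concrete instance of the sampling option in this strategy, the fixed-step sampling illustrated in FIG. 3 can be sketched as follows. This is a toy model with a step of 3; in the real encoder the kept samples would go on to entropy coding, which is omitted here:

```python
def encode_by_sampling(depth_values, step=3):
    """Fixed-step de-redundancy as in FIG. 3: keep one value out of
    every `step`, record the step so the decoder can interpolate back."""
    kept = depth_values[::step]
    return {"step": step, "samples": kept}
```

With a step of 3, two thirds of the values are dropped before entropy coding; the decoder-side interpolation described later recovers approximations of them.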
  • the embodiment of the application provides an encoding device that obtains the scene depth information of the target scene collected by the depth information module, independently encodes the scene depth information to obtain the depth information code stream, and sends the depth information code stream to the decoding device so that the decoding device obtains the image corresponding to the target scene.
  • the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain an image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.
  • FIG. 10 is a schematic structural diagram of a decoding apparatus provided in an embodiment of the present application.
  • the decoding device 100 includes a depth information decoder 1001 and a processor 1002,
  • the depth information decoder 1001 is configured to receive the depth information code stream of the target scene; independently decode the depth information code stream to obtain the scene depth information;
  • the processor 1002 is configured to obtain an image corresponding to the target scene by using the scene depth information.
  • the depth information decoder 1001 is specifically configured to perform entropy decoding on the depth information code stream to obtain target depth information, and to perform reconstruction processing on the target depth information to obtain the scene depth information.
  • the depth information decoder 1001 is specifically configured to perform reconstruction processing on the target depth information according to a preset decoding strategy to obtain the scene depth information, where the preset decoding strategy is used to perform at least one of frame prediction reconstruction, inverse frequency-domain transformation, inverse quantization, and interpolation.
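The interpolation option is the decoder-side inverse of fixed-step sampling (FIG. 6: information 2 is recovered from information 1 and information 4). A minimal sketch assuming linear interpolation between neighbouring kept samples, which is one plausible choice; the application does not fix the interpolation formula:

```python
def decode_by_interpolation(samples, step, total_len):
    """Invert fixed-step sampling: put the kept samples back on their grid
    and fill the gaps by linear interpolation between neighbouring samples
    (positions past the last kept sample simply repeat it)."""
    out = [None] * total_len
    for i, v in enumerate(samples):
        out[i * step] = v
    last = (len(samples) - 1) * step
    for idx in range(total_len):
        if out[idx] is not None:
            continue
        lo = min((idx // step) * step, last)   # nearest kept sample below
        hi = min(lo + step, last)              # nearest kept sample above
        if hi == lo:
            out[idx] = out[lo]
        else:
            t = (idx - lo) / (hi - lo)
            out[idx] = out[lo] + t * (out[hi] - out[lo])
    return out
```

Run against the output of the sampling sketch above, this recovers a smooth approximation of the dropped values; for a linear ramp it recovers them exactly.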
  • the processor 1002 is specifically configured to obtain phase information from the scene depth information, and to use the phase information to optimize the normal image corresponding to the target scene to obtain the optimized image of the target scene.
  • the processor 1002 is specifically configured to use the scene depth information to generate a depth image of the target scene.
  • the processor 1002 is specifically configured to fuse the depth image with the acquired normal image of the target scene to generate a three-dimensional image of the target scene.
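One common way to realize such a fusion is to back-project each depth pixel into camera space with pinhole intrinsics and attach the colour from the normal image, yielding a coloured point cloud. A minimal sketch; the intrinsics fx, fy, cx, cy are assumed here, and the application does not prescribe this particular fusion method:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its depth into a 3-D camera-space
    point using a pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def fuse_rgbd(color_img, depth_img, fx, fy, cx, cy):
    """Pair every pixel's colour with its back-projected 3-D position,
    producing a list of ((x, y, z), rgb) points."""
    points = []
    for v, (crow, drow) in enumerate(zip(color_img, depth_img)):
        for u, (rgb, d) in enumerate(zip(crow, drow)):
            points.append((backproject(u, v, d, fx, fy, cx, cy), rgb))
    return points
```

A display end could then render these points stereoscopically; this is only one of several ways a depth image and a normal image can be combined into a three-dimensional result.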
  • the embodiment of the application provides a decoding device that receives a depth information code stream of a target scene; independently decodes the depth information code stream to obtain scene depth information; and uses the scene depth information to obtain an image corresponding to the target scene.
  • the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain an image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.
  • FIG. 11 is a first structural diagram of an information processing system provided by an embodiment of the present application.
  • the information processing system 110 includes an encoding device 90 and a decoding device 100.
  • the encoding device 90 includes a depth information module 901 and a depth information encoder 902.
  • the decoding device 100 includes a depth information decoder. 1001 and processor 1002;
  • the depth information module 901 is configured to collect scene depth information of the target scene
  • the depth information encoder 902 is configured to independently encode the scene depth information to obtain a depth information code stream; send the depth information code stream to a decoding device;
  • the depth information decoder 1001 is configured to receive the depth information code stream of the target scene; independently decode the depth information code stream to obtain scene depth information;
  • the processor 1002 is configured to obtain an image corresponding to the target scene by using the scene depth information.
  • FIG. 12 is a second structural diagram of an information processing system provided by an embodiment of this application.
  • the information processing system 110 includes not only the above-mentioned depth information module 901 (which includes the depth information sensor 9011), the depth information encoder 902, and the depth information decoder 1001, but also a depth image generator 1201 and N information processors 1202, where N is a natural number greater than or equal to 1.
  • the N information processors 1202 may specifically include information processor 1, information processor 2, ..., information processor N, where the depth image generator 1201 and each of the N information processors can generate different information or perform different imaging processing. The depth image generator 1201 and the N information processors 1202 are actually integrated into one processor, namely the aforementioned processor 1002; that is, the processor 1002 shown in FIG. 10 and FIG. 11 differs from the depth image generator 1201 and the N information processors 1202 shown in FIG. 12 only in the division of entities, which is not limited in the embodiments of the present application.
  • the embodiment of the present application also provides a computer-readable storage medium, which is applied to an encoding device, and a computer program is stored thereon, and when the computer program is executed by a processor, the foregoing information processing method applied to the encoding device is implemented.
  • the embodiment of the present application also provides a computer-readable storage medium, which is applied to a decoding device, and a computer program is stored thereon, and when the computer program is executed by a processor, the foregoing information processing method applied to the decoding device is implemented.
  • the computer-readable storage medium may be a volatile memory, such as a Random-Access Memory (RAM); or a non-volatile memory, such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); it may also be a device including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant.
  • this application can be provided as a method, a system, or a computer program product. Therefore, this application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
  • These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a system for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction system that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the embodiments of the present application provide an information processing method and system, an encoding device, a decoding device, and a storage medium.
  • the information processing method applied to the encoding device includes: obtaining scene depth information of a target scene collected by a depth information module; independently encoding the scene depth information to obtain a depth information code stream; and sending the depth information code stream to the decoding device so that the decoding device obtains the image corresponding to the target scene. The information processing method applied to the decoding device includes: receiving the depth information code stream of the target scene; independently decoding the depth information code stream to obtain the scene depth information; and using the scene depth information to obtain the image corresponding to the target scene.
  • the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain an image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.


Abstract

The embodiments of the present application disclose an information processing method and system, an encoding device, a decoding device, and a storage medium. The information processing method applied to the encoding device includes: acquiring scene depth information of a target scene collected by a depth information module; independently encoding the scene depth information to obtain a depth information code stream; and sending the depth information code stream to a decoding device so that the decoding device obtains an image corresponding to the target scene. The information processing method applied to the decoding device includes: receiving the depth information code stream of the target scene; independently decoding the depth information code stream to obtain the scene depth information; and using the scene depth information to obtain the image corresponding to the target scene.

Description

Information Processing Method and System, Encoding Device, Decoding Device, and Storage Medium

Technical Field
The embodiments of the present application relate to the technical field of image encoding and decoding, and in particular to an information processing method and system, an encoding device, a decoding device, and a storage medium.
Background
With the gradual development of technology, terminals with multiple functions have become indispensable tools in life and work; in particular, the shooting function of terminals has made their applications even more widespread.
Depth cameras have gradually come into wide use in terminals. From an image, a terminal can obtain not only color information but also depth information; that is, the terminal can obtain a three-dimensional model of the target scene from a single image. Since the amount of information in the depth information obtained by a depth information sensor is far greater than the amount of information presented by a depth image, the prior-art approach of merging the depth image with the normal image for encoding not only produces decoding redundancy but also reduces the utilization of the depth information, failing to make full use of it.
Summary
The embodiments of the present application provide an information processing method and system, an encoding device, a decoding device, and a storage medium, which can present depth information to the greatest possible extent, greatly improve the utilization of depth information, and effectively solve the problem of decoding redundancy.
The technical solutions of the embodiments of the present application are implemented as follows:
In a first aspect, an embodiment of the present application provides an information processing method applied to an encoding device, the method including:
acquiring scene depth information of a target scene collected by a depth information module;
independently encoding the scene depth information to obtain a depth information code stream;
sending the depth information code stream to a decoding device, so that the decoding device obtains an image corresponding to the target scene.
In a second aspect, an embodiment of the present application provides an information processing method applied to a decoding device, the method including:
receiving a depth information code stream of a target scene;
independently decoding the depth information code stream to obtain scene depth information;
using the scene depth information to obtain an image corresponding to the target scene.
In a third aspect, an embodiment of the present application provides an encoding device, the encoding device including a depth information module and a depth information encoder,
the depth information module being configured to collect scene depth information of a target scene;
the depth information encoder being configured to independently encode the scene depth information to obtain a depth information code stream, and to send the depth information code stream to a decoding device so that the decoding device obtains an image corresponding to the target scene.
In a fourth aspect, an embodiment of the present application provides a decoding device, the decoding device including a depth information decoder and a processor,
the depth information decoder being configured to receive a depth information code stream of the target scene, and to independently decode the depth information code stream to obtain scene depth information;
the processor being configured to use the scene depth information to obtain an image corresponding to the target scene.
In a fifth aspect, an embodiment of the present application provides an information processing system, the information processing system including an encoding device and a decoding device, the encoding device including a depth information module and a depth information encoder, and the decoding device including a depth information decoder and a processor;
the depth information module being configured to collect scene depth information of a target scene;
the depth information encoder being configured to independently encode the scene depth information to obtain a depth information code stream, and to send the depth information code stream to the decoding device;
the depth information decoder being configured to receive the depth information code stream of the target scene, and to independently decode the depth information code stream to obtain the scene depth information;
the processor being configured to use the scene depth information to obtain an image corresponding to the target scene.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium applied to an encoding device, on which a computer program is stored; when the computer program is executed by a processor, the above information processing method applied to the encoding device is implemented.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium applied to a decoding device, on which a computer program is stored; when the computer program is executed by a processor, the above information processing method applied to the decoding device is implemented.
The embodiments of the present application provide an information processing method and system, an encoding device, a decoding device, and a storage medium. The information processing method applied to the encoding device includes: acquiring scene depth information of a target scene collected by a depth information module; independently encoding the scene depth information to obtain a depth information code stream; and sending the depth information code stream to a decoding device so that the decoding device obtains an image corresponding to the target scene. The information processing method applied to the decoding device includes: receiving the depth information code stream of the target scene; independently decoding the depth information code stream to obtain the scene depth information; and using the scene depth information to obtain the image corresponding to the target scene. It can thus be seen that, in this application, the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain the image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.
Brief Description of the Drawings
FIG. 1 is a first schematic flowchart of an information processing method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of encoding by an exemplary depth information encoder provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of exemplary information sampling provided by an embodiment of the present application;
FIG. 4 is a second schematic flowchart of an information processing method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of decoding by an exemplary depth information decoder provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of exemplary information interpolation provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of exemplary three-dimensional imaging provided by an embodiment of the present application;
FIG. 8 is a third schematic flowchart of an information processing method provided by an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an encoding device provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a decoding device provided by an embodiment of the present application;
FIG. 11 is a first schematic structural diagram of an information processing system provided by an embodiment of the present application;
FIG. 12 is a second schematic structural diagram of an information processing system provided by an embodiment of the present application.
Detailed Description
It should be understood that the specific embodiments described here are only intended to explain the relevant application, not to limit it. It should also be noted that, for ease of description, only the parts related to the relevant application are shown in the drawings.
In 3-Dimension High Efficiency Video Coding (3D HEVC), at the encoding end, the 3D HEVC encoder jointly video-encodes the normal images of multiple viewpoints and their corresponding depth images to form a video code stream. At the decoding end, after the 3D HEVC decoder parses the code stream, the normal images of the multiple viewpoints and their corresponding depth images are obtained. The display end processes the normal image and the corresponding depth image of each viewpoint accordingly to form a stereoscopic image perceivable by the human eye. The depth images at the encoder end are captured by a depth camera.
At present, if the depth image captured by the depth camera and the normal image are merged by the encoder, encoded, and transmitted to the decoder, then even if only the depth image is expected to be recovered, the decoder must still follow the codec specification and recover certain other redundant information, causing decoding redundancy. In addition, if the depth image captured by the depth camera is encoded directly by the encoder, only the depth image can be obtained at the decoder end; yet the amount of information actually obtained by the depth camera is far greater than that presented by the depth image, leading to low information utilization.
It can thus be seen that jointly encoding the depth image collected by the depth information module with the color information collected by the image sensor, as in the prior art, on the one hand produces a large amount of redundant information during decoding, causing decoding redundancy, and on the other hand wastes a large amount of depth information, failing to make full use of it.
To overcome the defects of the prior art, in this application the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain the image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application.
An embodiment of the present application provides an information processing method applied to an encoding device. FIG. 1 is a first schematic flowchart of an information processing method provided by an embodiment of the present application. As shown in FIG. 1, the method by which the encoding device processes information may include the following steps:
Step 101: acquire scene depth information of a target scene collected by a depth information module.
In the embodiments of the present application, the encoding device may first use the depth information module to collect the scene depth information of the target scene.
It should be noted that, in the embodiments of the present application, the target scene may be the actual scene the user wants to shoot; the specific target scene is not limited in the embodiments of the present application.
Further, in the embodiments of the present application, the encoding device may be provided with a depth information module, where the depth information module is used to collect depth information.
It can be understood that, in the embodiments of the present application, the encoding device may be provided with an image sensor used to collect two-dimensional image data so as to generate a normal image corresponding to the target scene. That is, the encoding device may be provided with a shooting component to obtain the normal image of the target scene. Further, the encoding device may also communicate with another imaging device to receive the normal image of the target scene generated by that device; the specific source of the normal image of the target scene is not limited in the embodiments of the present application.
Specifically, in the embodiments of the present application, when acquiring the scene depth information of the target scene collected by the depth information module, the encoding device may first use the depth information module to collect the original depth information of the target scene and then directly determine the original depth information as the scene depth information.
Further, in the embodiments of the present application, when acquiring the scene depth information of the target scene collected by the depth information module, the encoding device may also, after using the depth information module to collect the original depth information of the target scene, preprocess the original depth information to obtain the scene depth information.
It can be understood that, in the embodiments of the present application, the preprocessing may be phase calibration or another method, which is not limited in the embodiments of the present application.
That is to say, in the embodiments of the present application, the scene depth information may be the original depth information obtained by the depth information module, or data information obtained after certain processing of the original depth information. If the scene depth information is original depth information, it may be charge information or other information, such as an electrical signal after photoelectric conversion; if the scene depth information is data information obtained after processing, it may be intermediate image data for generating a depth image, or the finally generated depth image together with other redundant information.
It should be noted that, in the embodiments of the present application, the encoding device may include a depth information module and a depth information encoder, and a depth information sensor may be provided in the depth information module.
Further, in the embodiments of the present application, the depth information output by the depth information module may actually include scene depth information and auxiliary depth information, where the scene depth information is the depth information directly output by the depth information sensor configured in the depth information module; it may be the original depth information obtained by the depth information module, or depth information obtained after preprocessing the original depth information.
Exemplarily, in the embodiments of the present application, if the original depth information is directly determined as the scene depth information, the scene depth information may be charge information or other information, such as an electrical signal after photoelectric conversion.
Exemplarily, in the embodiments of the present application, if the scene depth information is obtained by preprocessing the original depth information, the scene depth information may be intermediate image data for generating a depth image, or the finally generated depth image together with other redundant information, where the specific processing may be phase calibration or another method.
It should be noted that, in the embodiments of the present application, the depth information output by the depth information module includes not only the scene depth information but may also include auxiliary depth information. Since the auxiliary depth information is in fact the mapping between distance and phase encoding already defined in the depth information module, remains unchanged over long periods, and has a small data volume, it may be sent directly without encoding; of course, the auxiliary depth information may also be merged into the scene depth information and sent after encoding, which is not limited in the embodiments of the present application.
Step 102: independently encode the scene depth information to obtain a depth information code stream.
In the embodiments of the present application, after the encoding device collects the scene depth information of the target scene through the depth information module, the encoding device may further independently encode the scene depth information, thereby obtaining the depth information code stream.
Specifically, in the embodiments of the present application, the depth information encoder provided in the encoding device may independently encode the scene depth information to obtain the depth information code stream; that is, in the embodiments of the present application, the encoding device directly encodes the scene depth information corresponding to the target scene.
It should be noted that, in the embodiments of the present application, when independently encoding the scene depth information through the depth information encoder, the encoding device may first perform de-redundancy processing on the scene depth information to obtain corresponding target depth information, and then perform entropy encoding on the target depth information to finally obtain the depth information code stream.
Further, in the embodiments of the present application, when performing de-redundancy processing on the scene depth information, the depth information encoder in the encoding device may do so according to a preset encoding strategy, thereby obtaining the target depth information. Specifically, in this application, the preset encoding strategy is used to perform at least one of frame prediction, frequency-domain transformation, quantization, and sampling. That is, the depth information encoder may perform at least any one of frame prediction, frequency-domain transformation, quantization, and sampling on the scene depth information to obtain the target depth information. In addition, the depth information encoder may also perform de-redundancy processing of the scene depth information in other ways, which is not limited in the embodiments of the present application.
FIG. 2 is a schematic diagram of encoding by an exemplary depth information encoder provided by an embodiment of the present application. As shown in FIG. 2, the depth information encoder may select at least one mode from intra prediction, inter prediction, and other prediction modes according to the scene depth information.
For the depth information encoder, in order to compress the data volume, multiple encoding methods may be used to eliminate correlation when encoding the scene depth information. Specifically, the following encoding methods may be used, without limitation: if the scene depth information consists of multiple phase images, the correlation between phases can be used to eliminate phase data redundancy; if the scene depth information is other data, the spatial correlation between these data can be used to eliminate data redundancy, for example by intra prediction, and the temporal correlation of the scene depth information can also be used, for example by inter prediction. Frequency-domain transformation can convert the scene depth information to the frequency domain and use frequency-domain correlation to eliminate frequency-domain data redundancy, for example by a discrete Fourier transform. Quantization can use scene depth sensitivity to eliminate scene-based data redundancy. Finally, the depth information encoder may take the quantization result as the target depth information and perform entropy encoding on it; entropy encoding in fact uses the correlation between the encoded binary data to eliminate bit redundancy in the encoding.
FIG. 3 is a schematic diagram of exemplary information sampling provided by an embodiment of the present application. As shown in FIG. 3, the depth information encoder may sample the scene depth information at a fixed step and encode the sampled information, where each small box in FIG. 3 is actually one piece of information included in the scene depth information. Specifically, with a sampling step of 3, the selected information serves as the target depth information to be encoded, yielding the depth information code stream.
It should be noted that, in the embodiments of the present application, the specific forms of the above frame prediction, frequency-domain transformation, quantization, and sampling, as well as the parameters involved, such as the sampling step, may be preset according to actual needs and are not limited in the embodiments of the present application.
It should be noted that, in the embodiments of the present application, the depth information encoder may also select only part of the scene depth information for encoding. For example, in an Augmented Reality (AR) scene, as one possible implementation, it is not necessary to encode all the scene depth information of the target scene; it suffices to select, from the scene depth information, the depth information of the real part of the picture in the target scene as the target depth information, and then encode and subsequently send that target depth information. That is, the depth information encoder may encode all of the scene depth information, or only the depth information of a specified time or space, which is not limited in the embodiments of the present application.
It should be noted that, in the embodiments of the present application, for multi-viewpoint three-dimensional video, as another possible implementation, viewpoints may also be encoded at intervals; that is, since the scene depth information of different viewpoints at the same moment, such as phase-encoded images or charge images, is strongly correlated, that correlation can be used to reduce the data volume of the transmitted code stream. For example, for video encoding of three viewpoints, only the scene depth information of the left and right viewpoints needs to be stored in the code stream; correspondingly, the depth information decoder can later obtain the scene depth information of the middle viewpoint by interpolating from the scene depth information of the left and right viewpoints.
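The description notes that, for three-viewpoint video, only the left and right viewpoints' scene depth information need be stored, with the middle viewpoint interpolated at the decoder. A toy sketch of that idea, assuming simple per-pixel blending; real inter-view prediction would account for scene geometry rather than blend values directly:

```python
def interpolate_middle_view(left, right, weight=0.5):
    """Estimate the middle viewpoint's depth data as a weighted blend of
    the left and right viewpoints' data (lists of rows of floats)."""
    return [
        [weight * l + (1 - weight) * r for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]
```

Shifting `weight` toward 0 or 1 would bias the estimate toward the right or left view, which is one crude way to account for the middle camera not sitting exactly halfway.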
Step 103: send the depth information code stream to the decoding device so that the decoding device obtains an image corresponding to the target scene.
In the embodiments of the present application, after obtaining the depth information code stream through the depth information encoder, the encoding device can send the depth information code stream to the decoding device, and the decoding device can then obtain the image corresponding to the target scene based on the depth information code stream.
It can be understood that, in the prior art, merging the depth image with the normal image for encoding, or directly encoding the depth image and sending it to the decoding device for subsequent imaging processing, not only causes decoding redundancy but also yields low information utilization. In contrast, this application can directly form the scene depth information used to generate the depth image into a code stream and send it to the decoding device, so the decoding device can not only use the scene depth information decoded from the depth information code stream to generate a depth image but can also perform other imaging processing, which both solves the problem of decoding redundancy and improves the utilization of depth information.
The embodiments of the present application provide an information processing method applied to an encoding device: the encoding device acquires scene depth information of a target scene collected by a depth information module, independently encodes the scene depth information to obtain a depth information code stream, and sends the depth information code stream to a decoding device so that the decoding device obtains an image corresponding to the target scene. It can thus be seen that, in this application, the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain the image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.
Another embodiment of the present application provides an information processing method applied to a decoding device. FIG. 4 is a second schematic flowchart of an information processing method provided by an embodiment of the present application. As shown in FIG. 4, the method by which the decoding device processes information may include the following steps:
Step 401: receive a depth information code stream of a target scene.
In the embodiments of the present application, the decoding device may receive the depth information code stream, corresponding to the target scene, output by the encoding device.
It should be noted that, in the embodiments of the present application, the decoding device includes a depth information decoder and a processor, where the depth information decoder may receive the depth information code stream so that subsequent decoding processing can be performed.
Further, in the embodiments of the present application, the target scene may be the actual scene the user wants to shoot; the specific target scene is not limited in the embodiments of the present application.
Step 402: independently decode the depth information code stream to obtain scene depth information.
In the embodiments of the present application, after receiving the depth information code stream of the target scene, the decoding device may independently decode the depth information code stream through the depth information decoder to obtain the scene depth information.
Specifically, in the embodiments of the present application, the decoding device may independently decode the depth information code stream through its depth information decoder, thereby obtaining the scene depth information corresponding to the target scene.
It should be noted that, in the embodiments of the present application, when independently decoding the depth information code stream, the decoding device may first perform entropy decoding on the depth information code stream to obtain target depth information, and then perform reconstruction processing on the target depth information to finally obtain the scene depth information.
It can be understood that, in the embodiments of the present application, the process by which the depth information decoder independently decodes the depth information code stream is the inverse of the process by which the depth information encoder independently encodes the scene depth information in step 102 above.
Further, in the embodiments of the present application, when reconstructing the target depth information, the depth information decoder in the decoding device may do so according to a preset decoding strategy, thereby obtaining the scene depth information. Specifically, in this application, the preset decoding strategy is used to perform at least one of frame prediction reconstruction, inverse frequency-domain transformation, inverse quantization, and interpolation. That is, the depth information decoder may perform at least any one of frame prediction reconstruction, inverse frequency-domain transformation, inverse quantization, and interpolation on the target depth information to obtain the scene depth information. In addition, the depth information decoder may also reconstruct the target depth information in other ways, which is not limited in the embodiments of the present application.
FIG. 5 is a schematic diagram of decoding by an exemplary depth information decoder provided by an embodiment of the present application. As shown in FIG. 5, in the reverse of the encoding process shown in FIG. 2, the depth information decoder first performs entropy decoding on the depth information code stream to obtain the target depth information, then performs inverse quantization on the target depth information and further performs inverse frequency-domain transformation, and finally, corresponding to the encoding prediction, performs the corresponding prediction reconstruction to obtain the scene depth information; the specific decoding method is not limited in the embodiments of the present application.
It should be noted that, in the embodiments of the present application, as shown in FIG. 2 and FIG. 5, one or more of the prediction processes of the depth information encoder and the corresponding prediction reconstruction processes of the depth information decoder may be selected, or the prediction process and the corresponding prediction reconstruction may be skipped; the frequency-domain transformation of the depth information encoder and the corresponding inverse frequency-domain transformation of the depth information decoder may be applied or skipped; and the quantization of the depth information encoder and the corresponding inverse quantization of the depth information decoder may be applied or skipped, which is not limited in the embodiments of the present application.
FIG. 6 is a schematic diagram of exemplary information interpolation provided by an embodiment of the present application. As shown in FIG. 6, corresponding to the sampling shown in FIG. 3, the depth information decoder may interpolate, at a fixed step, the target depth information obtained after entropy decoding of the depth information code stream, to recover the information that was not sampled, where each small box in FIG. 6 is actually one piece of information included in the target depth information; adjacent pieces of information can be used to recover information by interpolation and thereby obtain the scene depth information. For example, information 2 can be recovered by interpolating from information 1 and information 4.
Step 403: use the scene depth information to obtain an image corresponding to the target scene.
In the embodiments of the present application, after the decoding device independently decodes the depth information code stream through the depth information decoder to obtain the scene depth information, it can use the scene depth information through the processor to obtain the image corresponding to the target scene.
It should be noted that, in the embodiments of the present application, the processor in the decoding device may be a depth image generator and/or an information processor. Specifically, if the processor is a depth image generator, the decoding device can generate a corresponding depth image from the scene depth information; if the processor is an information processor, the decoding device can generate other corresponding derived data from the scene depth information.
Specifically, in the embodiments of the present application, when the decoding device uses the scene depth information through the processor to obtain the image corresponding to the target scene, it may first obtain phase information from the scene depth information, and then use the phase information to optimize the normal image corresponding to the target scene, thereby obtaining an optimized image of the target scene.
It should be noted that, in the embodiments of the present application, the decoding device may be provided with an image sensor used to collect two-dimensional image data so as to generate the normal image corresponding to the target scene. That is, the decoding device may be provided with a shooting component to obtain the normal image of the target scene. Further, the decoding device may also communicate with another imaging device to receive the normal image of the target scene generated by that device; the specific source of the normal image of the target scene is not limited in the embodiments of the present application.
Exemplarily, in the embodiments of the present application, a High-Dynamic Range (HDR) image usually needs to be obtained by fusing one long-exposure image and one short-exposure image. If an HDR photo is taken with a camera equipped with a depth information sensor and the depth information assistance function is enabled, then the phase information, i.e., the phase image, as a kind of scene depth information, can be encoded in a manner matched to the two normal images; thus, during decoding, when the HDR image is blurred due to synthesis, the processor can obtain the phase information from the scene depth information to assist in deblurring the normal image, thereby obtaining an optimized image.
Exemplarily, in the embodiments of the present application, in the depth information video image path, while the video image is encoded, the phase information, as a kind of scene depth information, is also encoded in a matched manner; during decoding, the processor can denoise the normal image of the target scene according to the phase information, where the normal image here is a frame of the target scene video, thereby obtaining an optimized image.
Specifically, in the embodiments of the present application, when the decoding device obtains the image corresponding to the target scene through the processor based on the scene depth information, it may also use the scene depth information to generate a depth image of the target scene.
It can be understood that, in the embodiments of the present application, the scene depth information includes related information that characterizes the depth of the target scene; therefore, the processor can use this information to generate the depth image of the target scene. Compared with the prior art, in which the depth image and the normal image are merged and then encoded, transmitted, and decoded, the technical solution of the present application can generate the depth image more flexibly.
Specifically, in the embodiments of the present application, after the decoding device generates the depth image of the target scene from the scene depth information through the processor, the decoding device may also fuse the depth image with the acquired normal image of the target scene, thereby generating a three-dimensional image of the target scene.
FIG. 7 is a schematic diagram of exemplary three-dimensional imaging provided by an embodiment of the present application. As shown in FIG. 7, based on the above encoding device and decoding device, the depth information module 901 collects the scene depth information of the target scene, the image sensor 701 collects the normal image of the target scene, and the video image encoder 702 encodes the normal image of the target scene; at the same time, the depth information encoder 902 in the encoding device encodes the scene depth information of the target scene, forming two code streams, namely the depth information code stream and the image code stream, and the depth information code stream is sent to the depth information decoder 1001 while the image code stream is sent to the video image decoder 703. The depth information decoder 1001 and the video image decoder 703 of the decoding device each decode the corresponding code stream, and the processor 1002 uses the scene depth information to generate a depth image and merges the normal image with the depth image, thereby obtaining a three-dimensional image of the target scene.
The embodiments of the present application provide an information processing method applied to a decoding device: the decoding device receives a depth information code stream of a target scene, independently decodes the depth information code stream to obtain scene depth information, and uses the scene depth information to obtain the image corresponding to the target scene. It can thus be seen that, in this application, the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain the image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.
Based on the above embodiments, in yet another embodiment of the present application, combining the information processing methods of the above embodiments yields a complete information processing method spanning both the encoding end and the decoding end. FIG. 8 is a third schematic flowchart of an information processing method provided by an embodiment of the present application. As shown in FIG. 8, the depth information sensor 9011 provided in the depth information module 901 of the encoding device can collect the scene depth information of the target scene; the depth information encoder 902 in the encoding device then independently encodes the scene depth information to obtain the depth information code stream and sends it to the decoding device. After the depth information decoder 1001 in the decoding device receives the depth information code stream, it performs independent decoding to obtain the scene depth information, and the depth image generator 1201 provided in the decoding device can use the scene depth information to generate a depth image. Specifically, the scene depth information parsed by the decoding device can not only be used to generate a depth image but can also undergo further processing; for example, the information processor 1202 provided in the decoding device can process the scene depth information to obtain other corresponding derived data.
Further, in the embodiments of the present application, for three-dimensional imaging, while the video image is encoded, the phase image, as a kind of original depth information, is encoded in a manner matched to the normal image. In encoding, each normal image corresponds to multiple phase images sampled at different time points; in decoding, when a normal image is blurred due to motion, since the multiple phase images obtained by parsing can carry more information from different time points, the blurred image can be restored through motion estimation to obtain a clearer image.
Exemplarily, in the embodiments of the present application, in a Time Of Flight (TOF) architecture or module, while the video image is encoded, the charge information, as a kind of original depth information, is also encoded in a matched manner. During decoding, the depth information code stream can not only be used to generate the depth image; the noise and external visible light of the shooting scene can also be judged from the charge information. This information helps with denoising and white-balance adjustment of the corresponding image, so that better image quality can be obtained, giving users a more beautiful and realistic image and video experience.
In the embodiments of the present application, further, when acquiring scene depth information, the encoding device may also use, but is not limited to, the following methods:
Exemplarily, the encoding device may adopt the continuously modulated TOF method. Under two different emission signal frequencies, by controlling the integration time, the sensor samples a total of 8 groups of signals with different phases; after photoelectric conversion, these 8 groups of signals are quantized to 10 bits to generate 8 original charge images. These 8 original charge images, together with sensor attribute parameters such as temperature, are encoded as the scene depth information; alternatively, the 8 original charge images are used to generate 2 frames of intermediate depth data and 1 frame of background data, which are encoded as the scene depth information.
Exemplarily, the encoding device may adopt the principle of binocular imaging: using the two images captured by a binocular camera, it calculates disparity and other information according to the poses of the two images, and encodes the disparity information, camera parameters, and so on as the scene depth information.
The embodiments of the present application provide an information processing method. The method applied to an encoding device includes: acquiring scene depth information of a target scene collected by a depth information module; independently encoding the scene depth information to obtain a depth information code stream; and sending the depth information code stream to a decoding device so that the decoding device obtains an image corresponding to the target scene. The method applied to a decoding device includes: receiving the depth information code stream of the target scene; independently decoding the depth information code stream to obtain the scene depth information; and using the scene depth information to obtain the image corresponding to the target scene. It can thus be seen that, in this application, the depth information encoder in the encoding device independently encodes the scene depth information of the target scene collected by the depth sensor, and obtains and sends the depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream, and the processor in the decoding device then uses the scene depth information obtained after independent decoding to obtain the image corresponding to the target scene. That is to say, the information processing method proposed in this application can perform independent encoding and decoding of scene depth information, thereby presenting the depth information to the greatest possible extent, greatly improving the utilization of depth information, and effectively solving the problem of decoding redundancy.
Another embodiment of the present application provides an encoding device. FIG. 9 is a schematic structural diagram of an encoding device provided by an embodiment of the present application. As shown in FIG. 9, the encoding device 90 includes a depth information module 901 and a depth information encoder 902.
The depth information module 901 is configured to collect the scene depth information of a target scene.
The depth information encoder 902 is configured to independently encode the scene depth information to obtain a depth information code stream, and to send the depth information code stream to a decoding device, so that the decoding device obtains an image corresponding to the target scene.
Further, in an embodiment of the present application, the depth information module 901 includes a depth information sensor 9011.
The depth information sensor 9011 is specifically configured to collect the original depth information of the target scene, and to determine the original depth information as the scene depth information.
Further, in an embodiment of the present application, the depth information sensor 9011 is further specifically configured to, after the depth information sensor collects the original depth information of the target scene, pre-process the original depth information to obtain the scene depth information.
Further, in an embodiment of the present application, the depth information encoder 902 is specifically configured to perform de-redundancy processing on the scene depth information to obtain target depth information, and to perform entropy encoding on the target depth information to obtain the depth information code stream.
Further, in an embodiment of the present application, the depth information encoder 902 is specifically configured to perform de-redundancy processing on the scene depth information according to a preset encoding strategy to obtain the target depth information, wherein the preset encoding strategy is used to perform at least one of frame prediction, frequency-domain transform, quantization, and sampling.
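As a hedged illustration of the encoder path just described (de-redundancy followed by entropy encoding), the sketch below uses simple predecessor prediction as the de-redundancy step and zlib as a stand-in for a real entropy coder; the 8-bit residual packing is an assumption of the sketch, not the application's format.

```python
# Encoder path sketch: de-redundancy (here, inter-sample prediction,
# one of the strategy options named in the text) produces target depth
# information, which is then entropy-encoded into the code stream.
import zlib

def de_redundancy(depth):
    """Predict each sample from its predecessor; keep only residuals."""
    residuals = [depth[0]] + [depth[i] - depth[i - 1] for i in range(1, len(depth))]
    return bytes(r & 0xFF for r in residuals)  # toy 8-bit packing

def entropy_encode(target: bytes) -> bytes:
    """zlib's Huffman back end stands in for an entropy coder."""
    return zlib.compress(target)

scene_depth = [100, 101, 101, 102, 103, 103, 103, 104]
target_depth = de_redundancy(scene_depth)
depth_stream = entropy_encode(target_depth)
```

Because neighbouring depth samples are highly correlated, the residuals cluster near zero and compress far better than the raw values would.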
An embodiment of the present application provides an encoding device. The encoding device obtains the scene depth information of a target scene collected by a depth information module, independently encodes the scene depth information to obtain a depth information code stream, and sends the depth information code stream to a decoding device, so that the decoding device obtains an image corresponding to the target scene. It can thus be seen that, in the present application, the depth information encoder in the encoding device independently encodes the scene depth information, corresponding to the target scene, collected by the depth sensor, and obtains and sends a depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream; and the processor in the decoding device then uses the scene depth information obtained through independent decoding to obtain the image corresponding to the target scene. In other words, the information processing method proposed in the present application can encode and decode scene depth information independently, which makes it possible to present the depth information to the greatest extent, greatly improves the utilization of the depth information, and effectively solves the problem of decoding redundancy.
Another embodiment of the present application provides a decoding device. FIG. 10 is a schematic structural diagram of a decoding device provided by an embodiment of the present application. As shown in FIG. 10, the decoding device 100 includes a depth information decoder 1001 and a processor 1002.
The depth information decoder 1001 is configured to receive the depth information code stream of the target scene, and to independently decode the depth information code stream to obtain the scene depth information.
The processor 1002 is configured to use the scene depth information to obtain the image corresponding to the target scene.
Further, in an embodiment of the present application, the depth information decoder 1001 is specifically configured to perform entropy decoding on the depth information code stream to obtain target depth information, and to perform reconstruction processing on the target depth information to obtain the scene depth information.
Further, in an embodiment of the present application, the depth information decoder 1001 is specifically configured to perform reconstruction processing on the target depth information according to a preset decoding strategy to obtain the scene depth information, wherein the preset decoding strategy is used to perform at least one of frame prediction reconstruction, inverse frequency-domain transform, dequantization, and interpolation.
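The decoder path of this embodiment (entropy decoding followed by reconstruction) can be sketched in the same spirit: zlib again stands in for the entropy stage, and undoing a predecessor prediction plus linear interpolation are illustrative choices among the reconstruction strategies the text names. The packed stream format is assumed for the sketch.

```python
# Decoder path sketch: entropy decoding recovers the target depth
# information; reconstruction (prediction undo) recovers the scene
# depth information; interpolation upsamples it, as one of the preset
# decoding strategy options named in the text.
import zlib

def entropy_decode(stream: bytes) -> bytes:
    return zlib.decompress(stream)

def reconstruct(residual_bytes: bytes):
    """Undo predecessor prediction: cumulative sum of signed residuals."""
    signed = [b if b < 128 else b - 256 for b in residual_bytes]
    depth = [signed[0]]
    for r in signed[1:]:
        depth.append(depth[-1] + r)
    return depth

def interpolate(depth):
    """Linear interpolation: insert the midpoint between neighbours."""
    out = []
    for a, b in zip(depth, depth[1:]):
        out.extend([float(a), (a + b) / 2])
    out.append(float(depth[-1]))
    return out

stream = zlib.compress(bytes([100, 1, 0, 2]))      # toy depth info stream
scene_depth = reconstruct(entropy_decode(stream))  # -> [100, 101, 101, 103]
upsampled = interpolate(scene_depth)
```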
Further, in an embodiment of the present application, the processor 1002 is specifically configured to obtain phase information from the scene depth information, and to optimize a normal image corresponding to the target scene by using the phase information, to obtain an optimized image of the target scene.
Further, in an embodiment of the present application, the processor 1002 is specifically configured to generate a depth image of the target scene by using the scene depth information.
Further, in an embodiment of the present application, the processor 1002 is specifically configured to fuse the depth image with an obtained normal image of the target scene, to generate a three-dimensional image of the target scene.
An embodiment of the present application provides a decoding device. The decoding device receives a depth information code stream of a target scene, independently decodes the depth information code stream to obtain scene depth information, and uses the scene depth information to obtain an image corresponding to the target scene. It can thus be seen that, in the present application, the depth information encoder in the encoding device independently encodes the scene depth information, corresponding to the target scene, collected by the depth sensor, and obtains and sends a depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream; and the processor in the decoding device then uses the scene depth information obtained through independent decoding to obtain the image corresponding to the target scene. In other words, the information processing method proposed in the present application can encode and decode scene depth information independently, which makes it possible to present the depth information to the greatest extent, greatly improves the utilization of the depth information, and effectively solves the problem of decoding redundancy.
A further embodiment of the present application provides an information processing system. FIG. 11 is a first schematic structural diagram of an information processing system provided by an embodiment of the present application. As shown in FIG. 11, the information processing system 110 includes an encoding device 90 and a decoding device 100; the encoding device 90 includes a depth information module 901 and a depth information encoder 902, and the decoding device 100 includes a depth information decoder 1001 and a processor 1002.
The depth information module 901 is configured to collect the scene depth information of a target scene.
The depth information encoder 902 is configured to independently encode the scene depth information to obtain a depth information code stream, and to send the depth information code stream to the decoding device.
The depth information decoder 1001 is configured to receive the depth information code stream of the target scene, and to independently decode the depth information code stream to obtain the scene depth information.
The processor 1002 is configured to use the scene depth information to obtain the image corresponding to the target scene.
FIG. 12 is a second schematic structural diagram of an information processing system provided by an embodiment of the present application. As shown in FIG. 12, in an embodiment of the present application, the information processing system 110 includes not only the above depth information module 901 (which includes the depth information sensor 9011), depth information encoder 902, and depth information decoder 1001, but also a depth image generator 1201 and N information processors 1202, where N is a natural number greater than or equal to 1; the N information processors 1202 may specifically include information processor 1, information processor 2, ..., information processor N. The depth image generator 1201 and each of the N information processors may generate different information, or perform different imaging processing. When the depth image generator 1201 and the N information processors 1202 are actually integrated into a single processor, that processor is the above processor 1002; that is, the processor 1002 shown in FIG. 10 and FIG. 11 can be completely identical in function to the depth image generator 1201 and the N information processors 1202 shown in FIG. 12, the difference lying only in the physical division, which is not limited in the embodiments of the present application.
An embodiment of the present application further provides a computer-readable storage medium applied to an encoding device, on which a computer program is stored; when the computer program is executed by a processor, the above information processing method applied to the encoding device is implemented.
An embodiment of the present application further provides a computer-readable storage medium applied to a decoding device, on which a computer program is stored; when the computer program is executed by a processor, the above information processing method applied to the decoding device is implemented.
The computer-readable storage medium may be a volatile memory, such as a Random-Access Memory (RAM); or a non-volatile memory, such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); it may also be a device including one of, or any combination of, the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present application is described with reference to schematic flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the schematic flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a system for implementing the functions specified in one or more flows of the schematic flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction system that implements the functions specified in one or more flows of the schematic flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the schematic flowcharts and/or one or more blocks of the block diagrams.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Industrial Applicability
Embodiments of the present application provide an information processing method and system, an encoding device, a decoding device, and a storage medium. The information processing method applied to the encoding device includes: obtaining the scene depth information of a target scene collected by a depth information module; independently encoding the scene depth information to obtain a depth information code stream; and sending the depth information code stream to a decoding device, so that the decoding device obtains an image corresponding to the target scene. The information processing method applied to the decoding device includes: receiving the depth information code stream of the target scene; independently decoding the depth information code stream to obtain the scene depth information; and using the scene depth information to obtain the image corresponding to the target scene. It can thus be seen that, in the present application, the depth information encoder in the encoding device independently encodes the scene depth information, corresponding to the target scene, collected by the depth sensor, and obtains and sends a depth information code stream; the depth information decoder in the decoding device receives and independently decodes the depth information code stream; and the processor in the decoding device then uses the scene depth information obtained through independent decoding to obtain the image corresponding to the target scene. In other words, the information processing method proposed in the present application can encode and decode scene depth information independently, which makes it possible to present the depth information to the greatest extent, greatly improves the utilization of the depth information, and effectively solves the problem of decoding redundancy.

Claims (25)

  1. An information processing method, applied to an encoding device, the method comprising:
    obtaining scene depth information of a target scene collected by a depth information module;
    independently encoding the scene depth information to obtain a depth information code stream; and
    sending the depth information code stream to a decoding device, so that the decoding device obtains an image corresponding to the target scene.
  2. The method according to claim 1, wherein the obtaining scene depth information of a target scene collected by a depth information module comprises:
    collecting original depth information of the target scene by using the depth information module; and
    determining the original depth information as the scene depth information.
  3. The method according to claim 1, wherein the obtaining scene depth information of a target scene collected by a depth information module comprises:
    collecting original depth information of the target scene by using the depth information module; and
    pre-processing the original depth information to obtain the scene depth information.
  4. The method according to claim 1, wherein the independently encoding the scene depth information to obtain a depth information code stream comprises:
    performing de-redundancy processing on the scene depth information to obtain target depth information; and
    performing entropy encoding on the target depth information to obtain the depth information code stream.
  5. The method according to claim 4, wherein the performing de-redundancy processing on the scene depth information to obtain target depth information comprises:
    performing de-redundancy processing on the scene depth information according to a preset encoding strategy to obtain the target depth information, wherein the preset encoding strategy is used to perform at least one of frame prediction, frequency-domain transform, quantization, and sampling.
  6. An information processing method, applied to a decoding device, the method comprising:
    receiving a depth information code stream of a target scene;
    independently decoding the depth information code stream to obtain scene depth information; and
    obtaining an image corresponding to the target scene by using the scene depth information.
  7. The method according to claim 6, wherein the independently decoding the depth information code stream to obtain scene depth information comprises:
    performing entropy decoding on the depth information code stream to obtain target depth information; and
    performing reconstruction processing on the target depth information to obtain the scene depth information.
  8. The method according to claim 7, wherein the performing reconstruction processing on the target depth information to obtain the scene depth information comprises:
    performing reconstruction processing on the target depth information according to a preset decoding strategy to obtain the scene depth information, wherein the preset decoding strategy is used to perform at least one of frame prediction reconstruction, inverse frequency-domain transform, dequantization, and interpolation.
  9. The method according to claim 6, wherein the obtaining an image corresponding to the target scene by using the scene depth information comprises:
    obtaining phase information from the scene depth information; and
    optimizing a normal image corresponding to the target scene by using the phase information, to obtain an optimized image of the target scene.
  10. The method according to claim 6, wherein the obtaining an image corresponding to the target scene by using the scene depth information comprises:
    generating a depth image of the target scene by using the scene depth information.
  11. The method according to claim 10, wherein after the generating a depth image of the target scene by using the scene depth information, the method further comprises:
    fusing the depth image with an obtained normal image of the target scene, to generate a three-dimensional image of the target scene.
  12. An encoding device, comprising a depth information module and a depth information encoder, wherein
    the depth information module is configured to collect scene depth information of a target scene; and
    the depth information encoder is configured to independently encode the scene depth information to obtain a depth information code stream, and to send the depth information code stream to a decoding device, so that the decoding device obtains an image corresponding to the target scene.
  13. The encoding device according to claim 12, wherein the depth information module comprises a depth information sensor, and
    the depth information module is specifically configured to collect original depth information of the target scene by using the depth information sensor, and to determine the original depth information as the scene depth information.
  14. The encoding device according to claim 13, wherein
    the depth information module is further specifically configured to, after the depth information sensor collects the original depth information of the target scene, pre-process the original depth information to obtain the scene depth information.
  15. The encoding device according to claim 12, wherein
    the depth information encoder is specifically configured to perform de-redundancy processing on the scene depth information to obtain target depth information, and to perform entropy encoding on the target depth information to obtain the depth information code stream.
  16. The encoding device according to claim 15, wherein
    the depth information encoder is specifically configured to perform de-redundancy processing on the scene depth information according to a preset encoding strategy to obtain the target depth information, wherein the preset encoding strategy is used to perform at least one of frame prediction, frequency-domain transform, quantization, and sampling.
  17. A decoding device, comprising a depth information decoder and a processor, wherein
    the depth information decoder is configured to receive a depth information code stream of a target scene, and to independently decode the depth information code stream to obtain scene depth information; and
    the processor is configured to obtain an image corresponding to the target scene by using the scene depth information.
  18. The decoding device according to claim 17, wherein
    the depth information decoder is specifically configured to perform entropy decoding on the depth information code stream to obtain target depth information, and to perform reconstruction processing on the target depth information to obtain the scene depth information.
  19. The decoding device according to claim 18, wherein
    the depth information decoder is specifically configured to perform reconstruction processing on the target depth information according to a preset decoding strategy to obtain the scene depth information, wherein the preset decoding strategy is used to perform at least one of frame prediction reconstruction, inverse frequency-domain transform, dequantization, and interpolation.
  20. The decoding device according to claim 17, wherein
    the processor is specifically configured to obtain phase information from the scene depth information, and to optimize a normal image corresponding to the target scene by using the phase information, to obtain an optimized image of the target scene.
  21. The decoding device according to claim 17, wherein
    the processor is specifically configured to generate a depth image of the target scene by using the scene depth information.
  22. The decoding device according to claim 21, wherein
    the processor is specifically configured to fuse the depth image with an obtained normal image of the target scene, to generate a three-dimensional image of the target scene.
  23. An information processing system, comprising an encoding device and a decoding device, wherein the encoding device comprises a depth information module and a depth information encoder, and the decoding device comprises a depth information decoder and a processor;
    the depth information module is configured to collect scene depth information of a target scene;
    the depth information encoder is configured to independently encode the scene depth information to obtain a depth information code stream, and to send the depth information code stream to the decoding device;
    the depth information decoder is configured to receive the depth information code stream of the target scene, and to independently decode the depth information code stream to obtain the scene depth information; and
    the processor is configured to obtain an image corresponding to the target scene by using the scene depth information.
  24. A computer-readable storage medium, applied to an encoding device, having a computer program stored thereon, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 5 is implemented.
  25. A computer-readable storage medium, applied to a decoding device, having a computer program stored thereon, wherein when the computer program is executed by a processor, the method according to any one of claims 6 to 11 is implemented.
PCT/CN2019/116011 2019-11-06 2019-11-06 Information processing method and system, encoding device, decoding device, and storage medium WO2021087810A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/116011 WO2021087810A1 (zh) 2019-11-06 2019-11-06 Information processing method and system, encoding device, decoding device, and storage medium
CN201980100411.9A CN114402590A (zh) 2019-11-06 2019-11-06 Information processing method and system, encoding device, decoding device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/116011 WO2021087810A1 (zh) 2019-11-06 2019-11-06 Information processing method and system, encoding device, decoding device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021087810A1 (zh) 2021-05-14

Family

ID=75849412

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/116011 WO2021087810A1 (zh) 2019-11-06 2019-11-06 Information processing method and system, encoding device, decoding device, and storage medium

Country Status (2)

Country Link
CN (1) CN114402590A (zh)
WO (1) WO2021087810A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102265617A (zh) * 2008-12-26 2011-11-30 日本胜利株式会社 Image encoding device, image encoding method and program, and image decoding device, image decoding method and program
CN102792699A (zh) * 2009-11-23 2012-11-21 通用仪表公司 Depth coding as an additional channel to a video sequence
CN108053435A (zh) * 2017-11-29 2018-05-18 深圳奥比中光科技有限公司 Dynamic real-time three-dimensional reconstruction method and system based on a handheld mobile device
EP3457688A1 (en) * 2017-09-15 2019-03-20 Thomson Licensing Methods and devices for encoding and decoding three degrees of freedom and volumetric compatible video stream
CN109889809A (zh) * 2019-04-12 2019-06-14 深圳市光微科技有限公司 Depth camera module, depth camera, depth map acquisition method, and depth camera module forming method
CN110268450A (zh) * 2017-02-13 2019-09-20 索尼公司 Image processing apparatus and image processing method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616322A (zh) * 2008-06-24 2009-12-30 深圳华为通信技术有限公司 Stereoscopic video encoding/decoding method, apparatus, and system
US20140218473A1 (en) * 2013-01-07 2014-08-07 Nokia Corporation Method and apparatus for video coding and decoding


Also Published As

Publication number Publication date
CN114402590A (zh) 2022-04-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19952001; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19952001; Country of ref document: EP; Kind code of ref document: A1)