WO2023093768A1 - Image processing method and device (图像处理方法和装置) - Google Patents

Image processing method and device (图像处理方法和装置)

Info

Publication number
WO2023093768A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, processed, sensory experience, visual sensory, parameters
Prior art date
Application number
PCT/CN2022/133761
Other languages
English (en)
French (fr)
Inventor
胡翔宇
徐巍炜
文锦松
贾彦冰
周建同
翟其彦
余全合
曾毅华
王梓仲
王弋川
陈虎
艾金钦
胡宇彤
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023093768A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 5/10 Image enhancement or restoration using non-spatial domain filtering
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/73 Deblurring; Sharpening
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 9/00 Image coding
    • G06T 2207/20052 Discrete cosine transform [DCT] (indexing scheme for image analysis or image enhancement; transform domain processing)

Definitions

  • the present application relates to image processing technologies, and in particular to an image processing method and device.
  • In many scenarios, a user may wish to enlarge a local area of a picture or video, for example, the main body or region of interest (ROI) in the picture or video.
  • Related technologies use traditional algorithms to enlarge a local area, for example, an image enlargement algorithm (Lanczos), an edge-preserving enlargement algorithm, and the like.
  • However, the above algorithms are usually implemented based on the principle of a low-pass filter, so the enlarged image suffers from problems such as lack of clarity and blurring.
  • The present application provides an image processing method and device. The picture presented by the processed image in the enlarged area simulates the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged, as if the viewer had actually walked into that area.
  • On the one hand, this solves the problem of unclear and blurred images after zooming in; on the other hand, it improves the user's visual zooming experience.
  • In a first aspect, the present application provides an image processing method.
  • On the encoding side, the method includes: acquiring the image to be processed; acquiring multiple sets of visual sensory experience parameters; and encoding the image to be processed and the multiple sets of visual sensory experience parameters.
  • On the decoding side, the method includes: obtaining an image to be processed; obtaining an enlargement operation instruction, where the enlargement operation instruction is used to indicate an area to be enlarged in the image to be processed; obtaining one or more sets of visual sensory experience parameters corresponding to one or more partial images; and processing the corresponding partial images respectively according to the one or more sets of visual sensory experience parameters to obtain processed partial images.
  • In this solution, after the encoding side acquires the image to be processed, it obtains multiple sets of visual sensory experience parameters for the image, and encodes the image to be processed and the multiple sets of visual sensory experience parameters for the decoding side. After the decoding side determines the area to be enlarged according to the user operation, it performs local image processing based on the visual sensory experience parameters corresponding to the area to be enlarged to obtain a processed partial image.
  • The picture presented by the processed partial image simulates the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged, like the picture a person would see after actually walking into the real scene corresponding to that area. On the one hand, this solves the problem of unclear and blurred images after zooming in; on the other hand, it improves the user's visual zooming experience.
  • In a possible implementation, the local image processing performed on the decoding side based on the visual sensory experience parameters corresponding to the area to be enlarged may include constructing three adaptation-field kernels: a brightness perception model, a contrast perception model, and a color perception model. The brightness perception model ensures that the perception of brightness and darkness under various display capabilities is consistent with the human eye's perception of the real scene; the contrast perception model ensures that the number of just noticeable differences (JND) that can be distinguished under various display capabilities is consistent with the human eye's perception of the real scene; and the color perception model ensures that the colors presented under various display capabilities are consistent with the human eye's perception of the real scene.
  • The adaptation field can solve the problem of mapping natural scenes to the best display D1, mapping the best display D1 to various displays D2, and mapping various displays D2 to various viewing environments. It should be noted that the above process is only an exemplary way of determining visual sensory experience parameters; the embodiments of the present application may also determine them in other ways, which are not specifically limited.
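  • The toy sketch below (Python) only illustrates the kind of display-adaptation mapping described above; the power-law curve, function name and parameter values are assumptions for illustration and are not the patent's perception models.

      import numpy as np

      def map_to_display(scene_nits: np.ndarray, scene_peak: float,
                         display_peak: float, gamma: float = 0.8) -> np.ndarray:
          """Map scene luminance (nits) to a display with a different peak so that
          the relative perception of bright and dark regions is roughly preserved."""
          normalized = np.clip(scene_nits / scene_peak, 0.0, 1.0)
          # Power-law tone curve: compresses highlights when display_peak < scene_peak.
          return display_peak * normalized ** gamma

      # Example: a 4000-nit HDR scene shown on a 1000-nit display.
      scene = np.array([0.5, 100.0, 1000.0, 4000.0])
      print(map_to_display(scene, scene_peak=4000.0, display_peak=1000.0))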
  • the image to be processed can also be referred to as a global image.
  • the camera device can capture an image including the target area by facing the target area, and the complete image is the global image.
  • The encoding side divides the image to be processed to obtain multiple candidate partial images. Since the candidate partial images are obtained by dividing the image to be processed, each candidate partial image corresponds to a local area in the image to be processed. For example, if the image to be processed is divided with a quadtree, the candidate partial image in the upper left corner corresponds to the local region located in the upper-left quarter of the image to be processed.
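  • A minimal sketch of such a quadtree-style division follows; the data structure and function names are illustrative only, and each tile records the region of the global image it covers.

      from dataclasses import dataclass
      import numpy as np

      @dataclass
      class CandidatePartial:
          x: int          # left edge in the global image
          y: int          # top edge in the global image
          w: int
          h: int
          pixels: np.ndarray

      def quadtree_divide(image: np.ndarray, depth: int = 1) -> list:
          """Recursively split the image into 4**depth candidate partial images."""
          h, w = image.shape[:2]
          if depth == 0:
              return [CandidatePartial(0, 0, w, h, image)]
          parts = []
          for (y0, y1) in ((0, h // 2), (h // 2, h)):
              for (x0, x1) in ((0, w // 2), (w // 2, w)):
                  for sub in quadtree_divide(image[y0:y1, x0:x1], depth - 1):
                      parts.append(CandidatePartial(x0 + sub.x, y0 + sub.y,
                                                    sub.w, sub.h, sub.pixels))
          return parts

      # With depth=1, the first tile covers the upper-left quarter of the image.
      tiles = quadtree_divide(np.zeros((480, 640, 3), dtype=np.uint8), depth=1)
      print(tiles[0].x, tiles[0].y, tiles[0].w, tiles[0].h)   # 0 0 320 240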
  • the visual sensory experience parameters can include brightness, contrast, color and details.
  • The picture presented after processing simulates (approximates or enhances) the human eye's real perception of the area to be enlarged.
  • Accordingly, the visual sensory experience parameters corresponding to each candidate partial image may be determined. It can be seen that in the embodiments of the present application there is a correspondence between visual sensory experience parameters and candidate partial images: based on the pixel characteristics of any candidate partial image, a set of visual sensory experience parameters can be determined for it, so that at least one of brightness, contrast, color and detail of that candidate partial image can be adjusted.
  • For example, when a candidate partial image corresponds to a dark area of the image, brightness and contrast can be increased, the color can be adapted to the local dark area, and underexposed details can be recovered; the visual sensory experience parameters determined for that candidate partial image then include four types of parameters (brightness, contrast, color and detail), whose specific values correspond to these adjustment requirements.
  • When a candidate partial image corresponds to a bright area of the image, brightness and contrast can be reduced, the color can be adapted to the local bright area, and overexposed details can be recovered; the parameters again include brightness, contrast, color and detail, with specific values corresponding to these adjustment requirements.
  • When a candidate partial image corresponds to the subject (main body) of the image, brightness and contrast can be fine-tuned and the color can be adapted to the subject; in this case the visual sensory experience parameters include three types of parameters (brightness, contrast and color), with specific values corresponding to these adjustment requirements. An illustrative sketch of this case analysis follows.
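  • The sketch below derives a parameter set from the pixel statistics of a candidate partial image along the lines of the dark-area / bright-area / subject cases above; the thresholds and parameter values are invented for illustration and are not specified by the patent.

      import numpy as np

      def derive_parameters(partial: np.ndarray) -> dict:
          """partial: uint8 luma tile of the image to be processed."""
          mean_luma = float(partial.mean())
          if mean_luma < 60:        # local dark area
              return {"brightness": +0.3, "contrast": +0.2,
                      "color": "adapt_dark", "detail": "recover_underexposed"}
          if mean_luma > 190:       # local bright area
              return {"brightness": -0.3, "contrast": -0.2,
                      "color": "adapt_bright", "detail": "recover_overexposed"}
          # subject / main-body area: fine-tuning only, no detail parameter
          return {"brightness": +0.05, "contrast": +0.05, "color": "adapt_subject"}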
  • After obtaining the multiple sets of visual sensory experience parameters, the encoding side can encode the image to be processed and the multiple sets of visual sensory experience parameters. The encoding of the image to be processed may follow the joint photographic experts group (JPEG) standard, a hybrid video coding standard, or a scalable video coding standard, or an end-to-end coding method may be adopted, which is not repeated here; the multiple sets of visual sensory experience parameters can be encoded as metadata, for example with reference to the CUVA 1.0 standard.
  • The encoding side can also write into the code stream the division method of the image to be processed, as well as the correspondence between the candidate partial images and the visual sensory experience parameters, so that the decoding side can obtain the multiple candidate partial images and their corresponding sets of visual sensory experience parameters.
  • The way this information is written into the code stream can follow related technologies; as long as the decoding side can understand the division method of the image to be processed and the correspondence between each candidate partial image and the multiple sets of visual sensory experience parameters, this embodiment of the present application does not specifically limit it.
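  • As a purely illustrative container for this side information, the division method and the per-tile parameter sets could be serialized as metadata carried next to the coded image; the JSON layout below is an assumption for illustration, whereas the patent itself points to metadata encoding such as the CUVA 1.0 standard.

      import json

      metadata = {
          "division": {"method": "quadtree", "depth": 1},
          "parameters": [
              {"tile_index": 0, "brightness": +0.3, "contrast": +0.2,
               "color": "adapt_dark", "detail": "recover_underexposed"},
              {"tile_index": 3, "brightness": -0.3, "contrast": -0.2,
               "color": "adapt_bright", "detail": "recover_overexposed"},
          ],
      }
      side_info = json.dumps(metadata).encode("utf-8")   # muxed with the code stream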
  • the decoding side decodes the code stream by adopting a decoding method corresponding to the encoding side to obtain the image to be processed.
  • The enlargement operation instruction is generated by an operation on the image to be processed. For example, when a user viewing an image on a mobile phone wants to zoom in on a local area to see its details, the user can perform a two-finger zoom-in gesture with thumb and forefinger on the part of the screen where that local area is displayed, so that the picture of the local area is shown on the phone screen; this gesture generates the above-mentioned enlargement operation instruction. For another example, when the user casts a video from the mobile phone to a large screen for playback and wants to zoom in on a certain area of the video, the user can perform a two-finger zoom-in gesture with thumb and forefinger on the part of the phone screen where that local area is displayed, so that the video of the local area is shown on the large screen; this gesture likewise generates the above-mentioned enlargement operation instruction.
  • the zoom-in operation instruction may also be generated in other ways, which is not specifically limited in this embodiment of the present application.
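  • The sketch below is one illustrative (not patent-specified) way to turn a two-finger gesture into the area to be enlarged: the rectangle spanned by the two touch points, mapped from screen coordinates to image coordinates.

      def area_to_enlarge(p1, p2, screen_size, image_size):
          """p1, p2: (x, y) touch points in screen pixels; sizes are (width, height)."""
          sx = image_size[0] / screen_size[0]
          sy = image_size[1] / screen_size[1]
          x0, x1 = sorted((p1[0], p2[0]))
          y0, y1 = sorted((p1[1], p2[1]))
          # Return (x, y, w, h) of the region to be enlarged, in image coordinates.
          return (int(x0 * sx), int(y0 * sy),
                  int((x1 - x0) * sx), int((y1 - y0) * sy))

      # Example: a pinch on a 1080x2400 screen over a 4000x3000 photo.
      print(area_to_enlarge((300, 900), (700, 1500), (1080, 2400), (4000, 3000)))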
  • The decoding side can obtain, from the information carried in the code stream, the division method of the image to be processed and the correspondence between the divided candidate partial images and the multiple sets of visual sensory experience parameters. Based on this, the decoding side can first divide the image to be processed using that division method to obtain multiple candidate partial images, then obtain the one or more partial images corresponding to the area to be enlarged, and decode one or more sets of visual sensory experience parameters, where these sets of parameters correspond to the one or more partial images. Finally, the corresponding partial images are processed respectively according to the one or more sets of visual sensory experience parameters to obtain processed partial images.
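  • A minimal sketch of the decode-side lookup follows: given the zoom rectangle and the candidate partial images reconstructed with the signalled division method, select the tiles that intersect the rectangle and pair them with their parameter sets. The names are illustrative; CandidatePartial is the structure from the earlier division sketch.

      def tiles_for_region(tiles, params_by_index, rx, ry, rw, rh):
          """Return (tile, parameter-set) pairs whose tile overlaps the zoom rectangle."""
          selected = []
          for idx, t in enumerate(tiles):
              overlaps = not (t.x + t.w <= rx or rx + rw <= t.x or
                              t.y + t.h <= ry or ry + rh <= t.y)
              if overlaps:
                  selected.append((t, params_by_index.get(idx)))
          return selected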
  • The decoding side then processes the corresponding partial images according to the obtained one or more sets of visual sensory experience parameters to obtain the processed partial images. The picture presented by these processed partial images simulates the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged.
  • The processing that the decoding side can perform on the corresponding partial image includes at least one of the following (an illustrative sketch follows this list):
  • when the visual sensory experience parameters include a brightness parameter, brightness adjustment is performed on the corresponding partial image;
  • when the visual sensory experience parameters include a contrast parameter, contrast adjustment is performed on the corresponding partial image;
  • when the visual sensory experience parameters include a color parameter, color adjustment is performed on the corresponding partial image;
  • when the visual sensory experience parameters include a detail parameter, detail adjustment is performed on the corresponding partial image.
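  • The sketch below applies such a per-tile parameter set; brightness and contrast are applied with simple linear operations, while the color and detail branches are left as placeholders because the text does not fix a concrete formula for them here. Function and key names are assumptions.

      import numpy as np

      def process_partial(tile: np.ndarray, params: dict) -> np.ndarray:
          out = tile.astype(np.float32) / 255.0
          if "brightness" in params:
              out = out + params["brightness"]                       # brightness adjustment
          if "contrast" in params:
              out = (out - 0.5) * (1.0 + params["contrast"]) + 0.5   # contrast adjustment
          if "color" in params:
              pass   # color adaptation (dark area / bright area / subject) would go here
          if "detail" in params:
              pass   # detail recovery (see the detail-adjustment sketch below)
          return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)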
  • For example, when a partial image corresponds to a dark area of the image, brightness and contrast can be increased, the color can be adapted to the local dark area, and underexposed details can be recovered; the visual sensory experience parameters of the partial image then include four types of parameters (brightness, contrast, color and detail), whose specific values correspond to these adjustment requirements.
  • When a partial image corresponds to a bright area of the image, brightness and contrast can be reduced, the color can be adapted to the local bright area, and overexposed details can be recovered; the parameters again include brightness, contrast, color and detail, with specific values corresponding to these adjustment requirements.
  • When a partial image corresponds to the subject (main body) of the image, brightness and contrast can be fine-tuned and the color can be adapted to the subject; in this case the visual sensory experience parameters include three types of parameters (brightness, contrast and color), with specific values corresponding to these adjustment requirements.
  • The decoding side can adjust the details of one or more partial images, for example, by using multiple reference images captured by multiple cameras of the same scene, or by using historical images whose similarity with the image to be processed exceeds a preset threshold; a sketch of the idea follows.
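  • The sketch below is a simplified, assumed stand-in for such detail adjustment: high-frequency detail from a reference image (or a sufficiently similar historical image) is blended into the enlarged partial image. The Gaussian-blur split into base and detail layers is illustrative, not the patent's method.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def transfer_detail(partial: np.ndarray, reference: np.ndarray,
                          strength: float = 0.5) -> np.ndarray:
          """partial, reference: single-channel float32 images of the same shape, range [0, 1]."""
          ref_detail = reference - gaussian_filter(reference, sigma=2.0)   # high-frequency layer
          return np.clip(partial + strength * ref_detail, 0.0, 1.0)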
  • The decoding side may store the processed partial image locally, so that when the user subsequently zooms in on the same area again, the processed partial image can be read from memory and displayed directly.
  • the decoding side may transmit the processed partial image to a display device (such as a display) for display.
  • In a second aspect, the embodiment of the present application provides an image processing method.
  • On the encoding side, the method includes: acquiring the image to be processed and dividing it to obtain multiple candidate partial images; acquiring multiple sets of visual sensory experience parameters; processing the corresponding candidate partial images respectively according to the multiple sets of visual sensory experience parameters to obtain multiple processed candidate partial images; and encoding the image to be processed and the multiple processed candidate partial images.
  • On the decoding side, the method includes: acquiring the image to be processed and the multiple processed candidate partial images; acquiring an enlargement operation instruction, where the enlargement operation instruction is used to indicate the area to be enlarged in the image to be processed; and acquiring the processed partial image according to the area to be enlarged.
  • In this solution, after the encoding side acquires the image to be processed, it obtains visual sensory experience parameters for each of the multiple candidate partial images in the image, processes the corresponding candidate partial images according to the multiple sets of visual sensory experience parameters, and then encodes the image to be processed and the multiple processed candidate partial images for the decoding side. The decoding side can thus obtain the processed partial image directly by decoding after determining the area to be enlarged according to the user operation. The picture presented by the processed partial image simulates (approximates or enhances) the visual sensory experience of the human eye perceiving the real scene of the region to be enlarged, like the picture a person would see after actually walking into the corresponding real scene. On the one hand, this solves the problem of unclear and blurred images after zooming in; on the other hand, it improves the user's visual zooming experience.
  • The difference from the embodiment of the first aspect is that, after obtaining the multiple sets of visual sensory experience parameters, the encoding side processes the multiple candidate partial images itself to obtain multiple processed candidate partial images, instead of transmitting the multiple sets of visual sensory experience parameters to the decoding side and leaving the image processing steps to the decoding side.
  • In a possible implementation, before encoding the multiple processed candidate partial images, the encoding side can apply TM to each local area of the image to be processed, so as to increase the similarity between each local area of the image to be processed and the corresponding processed local image and thus reduce the amount of residual data for the candidate partial images.
  • the decoding side can decode to obtain multiple candidate partial images, then acquire one or more partial images corresponding to the region to be enlarged, and obtain a processed partial image according to the one or more partial images.
  • Since the candidate partial images decoded by the decoding side have already been processed by the encoding side, after obtaining the region to be enlarged, the decoding side determines, from the multiple candidate partial images, the one or more partial images corresponding to the region to be enlarged; these one or more partial images directly constitute the processed partial image.
  • In a third aspect, the embodiment of the present application provides an image processing method.
  • the method includes: the encoding side acquires the image to be processed; and encodes the image to be processed.
  • On the decoding side, the method includes: acquiring the image to be processed; acquiring an enlargement operation instruction, where the enlargement operation instruction is used to indicate an area to be enlarged in the image to be processed; acquiring visual sensory experience parameters corresponding to the area to be enlarged according to preset rules; and processing the area to be enlarged according to the visual sensory experience parameters to obtain a processed partial image.
  • In this solution, the encoding side directly encodes the image to be processed without obtaining multiple candidate partial images and their respective visual sensory experience parameters, which reduces the size of the code stream.
  • After the decoding side determines the area to be enlarged according to the user operation, it can obtain the corresponding one or more partial images based on that area, obtain one or more sets of visual sensory experience parameters based on the preset rules, and then process the aforementioned partial images based on these parameters to obtain a processed partial image. The picture presented by the processed partial image simulates (approximates or enhances) the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged, like the picture a person would see after actually walking into the corresponding real scene. On the one hand, this solves the problem of unclear and blurred images after zooming in; on the other hand, it improves the user's visual zooming experience.
  • In this case, the encoding side carries only the image to be processed (that is, the global image) in the code stream: there are no multiple candidate partial images and no multiple sets of visual sensory experience parameters, let alone image processing performed on the divided image to be processed. Therefore, the decoder can only obtain the global reconstructed image after parsing the code stream. If the decoding side is to process the area to be enlarged, it needs to obtain the visual sensory experience parameters corresponding to that area according to historical data or empirical information.
  • the decoding side may first divide the image to be processed according to the first preset rule to obtain multiple candidate partial images, and obtain one or more partial images corresponding to the area to be enlarged, where the multiple candidate partial images include one or more partial images. Then one or more sets of visual sensory experience parameters corresponding to one or more partial images are acquired according to a second preset rule.
  • the aforementioned preset rules include a first preset rule and a second preset rule.
  • the decoding side may first divide the reconstructed global image by using the description about the division method in the embodiment of the first aspect to obtain multiple candidate partial images. Then, according to the position of the region to be enlarged, it is determined which candidate partial images it contains, and these candidate partial images are partial images corresponding to the region to be enlarged.
  • In a possible implementation, the encoding side can first process the multiple candidate partial images, and then encode the image to be processed, the multiple processed candidate partial images, and the multiple sets of visual sensory experience parameters into the code stream for transmission to the decoding side.
  • The visual sensory experience parameters can be divided into two parts. One part is used by the encoding side to process the multiple candidate partial images, so that at least one of brightness, contrast, color and detail of the processed candidate partial images has already been adjusted to achieve a better effect. The other part is encoded into the code stream and transmitted to the decoding side, where it is also used to process the candidate partial images, so that after processing on the decoding side the processed partial images better match the needs of the display side and achieve the best display effect.
  • the visual sensory experience parameters transmitted to the decoding side can include brightness, contrast, etc.
  • The decoding side parses the code stream, reconstructs the image to be processed, the multiple processed candidate partial images, and the multiple sets of visual sensory experience parameters, and uses the multiple sets of visual sensory experience parameters to process the already-processed candidate partial images again to obtain the final processed partial image.
  • the embodiment of the present application provides a decoding device, including: an acquisition module and a processing module.
  • an acquisition module, configured to acquire an image to be processed; acquire an enlargement operation instruction, where the enlargement operation instruction is used to indicate an area to be enlarged in the image to be processed, and the area to be enlarged corresponds to one or more partial images; and acquire one or more sets of visual sensory experience parameters corresponding to the one or more partial images; and a processing module, configured to process the corresponding partial images respectively according to the one or more sets of visual sensory experience parameters to obtain processed partial images.
  • the visual sensory experience parameters include at least one of brightness parameters, contrast parameters, color parameters, and detail parameters;
  • the processing module is specifically configured to perform at least one of the following operations: when the visual sensory experience parameters include a brightness parameter, perform brightness adjustment on the corresponding partial image; when the visual sensory experience parameters include a contrast parameter, perform contrast adjustment on the corresponding partial image; when the visual sensory experience parameters include a color parameter, perform color adjustment on the corresponding partial image; when the visual sensory experience parameters include a detail parameter, perform detail adjustment on the corresponding partial image; where the corresponding partial image is one of the one or more partial images.
  • the processing module is specifically configured to: when the corresponding partial image corresponds to a dark area of the image, perform at least one of brightness enhancement, contrast enhancement, color adaptation to the dark area, and underexposed-detail recovery on the corresponding partial image; or, when the corresponding partial image corresponds to a bright area of the image, perform at least one of brightness reduction, contrast reduction, color adaptation to the bright area, and overexposed-detail recovery on the corresponding partial image; or, when the corresponding partial image corresponds to the subject area of the image, perform color adaptation to the subject on the corresponding partial image; where the corresponding partial image is one of the one or more partial images.
  • the processing module is specifically configured to acquire multiple reference images, where the multiple reference images and the image to be processed are captured by multiple cameras of the same scene, and to adjust the details of the corresponding partial image according to the multiple reference images.
  • the processing module is specifically configured to acquire multiple historical images whose similarity with the image to be processed exceeds a preset threshold, and to adjust the details of the corresponding partial image according to the multiple historical images.
  • the picture presented by the processed partial image simulates the visual sensory experience of the real scene of the region to be enlarged perceived by human eyes.
  • the zoom-in operation instruction is generated by an outward sliding operation performed by the user's two fingers on the area to be zoomed in; or, the zoom-in operation instruction is generated by a click operation performed by the user's two fingers on the area to be zoomed in.
  • the acquiring module is specifically configured to decode the acquired code stream to obtain the one or more sets of visual sensory experience parameters.
  • the acquiring module is specifically configured to perform scalable video decoding on the acquired code stream to obtain the image to be processed; or, perform image decompression on the acquired image file to obtain the Image to be processed.
  • the processing module is further configured to display the processed partial image; or store the processed partial image.
  • the acquiring module is further configured to acquire a zoom-in termination instruction, where the zoom-in termination instruction is generated by an inward sliding operation performed by the user's two fingers on the processed partial image, or the zoom-in termination instruction is generated by a click operation performed by the user with a single finger on the processed partial image; and the processing module is further configured to display the image to be processed according to the zoom-in termination instruction.
  • In another embodiment, the acquiring module is configured to acquire an image to be processed; acquire an enlargement operation instruction, where the enlargement operation instruction is used to indicate an area to be enlarged in the image to be processed; and acquire the visual sensory experience parameters corresponding to the area to be enlarged according to a preset rule; and the processing module is configured to process the area to be enlarged according to the visual sensory experience parameters to obtain a processed partial image.
  • the visual sensory experience parameters include at least one of brightness parameters, contrast parameters, color parameters, and detail parameters;
  • the processing module is specifically configured to perform at least one of the following operations: when the visual sensory experience parameters include a brightness parameter, perform brightness adjustment on the corresponding partial image; when the visual sensory experience parameters include a contrast parameter, perform contrast adjustment on the corresponding partial image; when the visual sensory experience parameters include a color parameter, perform color adjustment on the corresponding partial image; when the visual sensory experience parameters include a detail parameter, perform detail adjustment on the corresponding partial image; where the corresponding partial image is one of the one or more partial images.
  • the processing module is specifically configured to: when the corresponding partial image corresponds to a dark area of the image, perform at least one of brightness enhancement, contrast enhancement, color adaptation to the dark area, and underexposed-detail recovery on the corresponding partial image; or, when the corresponding partial image corresponds to a bright area of the image, perform at least one of brightness reduction, contrast reduction, color adaptation to the bright area, and overexposed-detail recovery on the corresponding partial image; or, when the corresponding partial image corresponds to the subject area of the image, perform color adaptation to the subject on the corresponding partial image; where the corresponding partial image is one of the one or more partial images.
  • the processing module is specifically configured to acquire multiple reference images, where the multiple reference images and the image to be processed are captured by multiple cameras of the same scene, and to adjust the details of the corresponding partial image according to the multiple reference images.
  • the processing module is specifically configured to acquire multiple historical images whose similarity with the image to be processed exceeds a preset threshold, and to adjust the details of the corresponding partial image according to the multiple historical images.
  • the picture presented by the processed partial image simulates the visual sensory experience of the real scene of the region to be enlarged perceived by human eyes.
  • the zoom-in operation instruction is generated by an outward sliding operation performed by the user's two fingers on the area to be zoomed in; or, the zoom-in operation instruction is generated by a click operation performed by the user's two fingers on the area to be zoomed in.
  • the acquiring module is specifically configured to divide the image to be processed according to a first preset rule to obtain multiple candidate partial images; and acquire one or more partial images corresponding to the area to be enlarged.
  • the preset rule includes the first preset rule and the second preset rule.
  • the acquiring module is specifically configured to perform scalable video decoding on the acquired code stream to obtain the image to be processed; or, perform image decompression on the acquired image file to obtain the Image to be processed.
  • the processing module is further configured to display the processed partial image; or store the processed partial image.
  • the acquiring module is further configured to acquire a zoom-in termination instruction, where the zoom-in termination instruction is generated by an inward sliding operation performed by the user's two fingers on the processed partial image, or the zoom-in termination instruction is generated by a click operation performed by the user with a single finger on the processed partial image; and the processing module is further configured to display the image to be processed according to the zoom-in termination instruction.
  • the embodiment of the present application provides an encoding device, including: an acquisition module, an encoding module, and a processing module.
  • the acquiring module is configured to acquire the image to be processed; acquire multiple sets of visual sensory experience parameters; and the encoding module is configured to encode the image to be processed and the multiple sets of visual sensory experience parameters.
  • In another embodiment, the acquisition module is configured to acquire an image to be processed; divide the image to be processed to obtain multiple candidate partial images; and acquire multiple sets of visual sensory experience parameters, where the multiple sets of visual sensory experience parameters correspond to the multiple candidate partial images; the processing module is configured to process the corresponding candidate partial images respectively according to the multiple sets of visual sensory experience parameters to obtain multiple processed candidate partial images; and the encoding module is configured to encode the image to be processed and the multiple processed candidate partial images.
  • the visual sensory experience parameters include at least one of brightness parameters, contrast parameters, color parameters, and detail parameters;
  • the processing module is specifically configured to perform at least one of the following operations: when the visual sensory experience parameters include a brightness parameter, perform brightness adjustment on the corresponding partial image; when the visual sensory experience parameters include a contrast parameter, perform contrast adjustment on the corresponding partial image; when the visual sensory experience parameters include a color parameter, perform color adjustment on the corresponding partial image; when the visual sensory experience parameters include a detail parameter, perform detail adjustment on the corresponding partial image; where the corresponding partial image is one of the multiple candidate partial images.
  • the processing module is specifically configured to: when the corresponding partial image corresponds to a dark area of the image, perform at least one of brightness enhancement, contrast enhancement, color adaptation to the dark area, and underexposed-detail recovery on the corresponding partial image; or, when the corresponding partial image corresponds to a bright area of the image, perform at least one of brightness reduction, contrast reduction, color adaptation to the bright area, and overexposed-detail recovery on the corresponding partial image; or, when the corresponding partial image corresponds to the subject area of the image, perform color adaptation to the subject on the corresponding partial image; where the corresponding partial image is one of the multiple candidate partial images.
  • the processing module is specifically configured to acquire multiple reference images, where the multiple reference images and the image to be processed are captured by multiple cameras of the same scene, and to adjust the details of the corresponding partial image according to the multiple reference images.
  • the processing module is specifically configured to acquire multiple historical images whose similarity with the image to be processed exceeds a preset threshold, and to adjust the details of the corresponding partial image according to the multiple historical images.
  • the acquiring module is specifically configured to acquire the multiple sets of visual sensory experience parameters according to a third preset rule.
  • the coding module is specifically configured to perform scalable video coding on the image to be processed and the multiple processed candidate partial images to obtain a code stream; or to perform image compression on the image to be processed and the multiple processed candidate partial images to obtain an image file.
  • the present application provides an encoder, including: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, where, when the program is executed by the processors, the encoder performs the encoding-side method according to any one of the first to third aspects.
  • the present application provides a decoder, including: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, where, when the program is executed by the processors, the decoder performs the decoding-side method according to any one of the first to third aspects.
  • the present application provides a non-transitory computer-readable storage medium, including program code, which, when executed by a computer device, is used to execute the method according to any one of the first to third aspects.
  • the present application provides a non-transitory storage medium, including a bit stream generated by the method according to any one of the first to third aspects.
  • the present application provides a computer program product containing instructions, which, when run on a computer, cause the computer to execute the method described in any one of the first to third aspects.
  • FIG. 1A is an exemplary block diagram of a decoding system 10 according to an embodiment of the present application.
  • FIG. 1B is an exemplary block diagram of a video decoding system 40 according to an embodiment of the present application.
  • FIG. 2 is an exemplary block diagram of a video encoder 20 according to an embodiment of the present application.
  • FIG. 3 is an exemplary block diagram of a video decoder 30 according to an embodiment of the present application.
  • FIG. 4 is an exemplary block diagram of a video decoding device 400 according to an embodiment of the present application.
  • FIG. 5 is an exemplary hierarchical schematic diagram of scalable video coding in the present application.
  • Fig. 6 is an exemplary flow chart of the encoding method of the enhancement layer of the present application.
  • Fig. 7 is an exemplary flowchart of the image processing method of the present application.
  • Fig. 8 is a schematic diagram of a pyramidal division method
  • Fig. 9 is a schematic diagram of determining visual sensory experience parameters
  • FIG. 10 is an example diagram of an image encoding process
  • FIG. 11 is an example diagram of an image decoding process
  • Fig. 12a is an example diagram of the processing effect on a partial image
  • Fig. 12b is an example diagram of the processing effect on a partial image
  • Fig. 13 is an exemplary flow chart of the image processing method of the present application.
  • Fig. 14 is an exemplary flow chart of the image processing method of the present application.
  • FIG. 15 is a schematic structural diagram of an exemplary decoding device 1500 according to an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of an exemplary encoding device 1600 according to an embodiment of the present application.
  • "At least one (item)" means one or more, and "multiple" means two or more.
  • "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" can mean: only A exists, only B exists, or both A and B exist, where A and B can be singular or plural.
  • The character "/" generally indicates that the associated objects are in an "or" relationship.
  • "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single items or plural items.
  • For example, "at least one item (piece) of a, b or c" can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c can be single or multiple.
  • Video coding generally refers to the processing of sequences of images that form a video or video sequence.
  • the terms "picture”, “frame” or “image” may be used as synonyms.
  • Video coding (or coding in general) includes two parts: video encoding and video decoding.
  • Video encoding is performed on the source side and typically involves processing (eg, compressing) raw video images to reduce the amount of data needed to represent the video images (and thus more efficient storage and/or transmission).
  • Video decoding is performed at the destination and typically involves inverse processing relative to the encoder to reconstruct the video image.
  • the "encoding" of video images (or generally referred to as images) involved in the embodiments should be understood as “encoding” or “decoding” of video images or video sequences.
  • the encoding part and the decoding part are also collectively referred to as codec (encoding and decoding, CODEC).
  • In the case of lossless video coding, the original video image can be reconstructed, that is, the reconstructed video image has the same quality as the original video image (assuming no transmission loss or other data loss during storage or transmission).
  • In the case of lossy video coding, further compression is performed by quantization and the like to reduce the amount of data required to represent the video image, and the decoder side cannot completely reconstruct the video image, that is, the quality of the reconstructed video image is lower or worse than that of the original video image.
  • Several video coding standards belong to the group of "lossy hybrid video codecs", that is, they combine spatial and temporal prediction in the pixel domain with 2-D transform coding in the transform domain for applying quantization.
  • Each image in a video sequence is usually partitioned into a non-overlapping set of blocks, usually encoded at the block level.
  • Encoders usually process, i.e. encode, video at the block (video block) level: a predicted block is produced, for example, through spatial (intra) prediction and temporal (inter) prediction; the predicted block is subtracted from the current block (the block currently being processed or to be processed) to obtain a residual block; the residual block is transformed in the transform domain and quantized to reduce the amount of data to be transmitted (compression); and the decoder side applies the inverse processing, relative to the encoder, to the encoded or compressed block to reconstruct the current block for representation. In addition, the encoder repeats the decoder's processing steps so that the encoder and the decoder generate the same prediction (e.g., intra and inter prediction) and/or reconstructed pixels for processing, i.e. encoding, subsequent blocks.
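  • The toy example below walks one 8x8 block through this lossy hybrid loop (predict, residual, transform, quantize, then invert the steps to reconstruct); it only illustrates the principle described above, not any particular standard, and a real codec adds entropy coding and much more.

      import numpy as np
      from scipy.fft import dctn, idctn

      block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float32)
      prediction = np.full((8, 8), block.mean(), dtype=np.float32)   # e.g. a flat intra prediction

      residual = block - prediction
      coeffs = dctn(residual, norm="ortho")      # 2-D transform of the residual
      qstep = 8.0
      quantized = np.round(coeffs / qstep)       # quantization: this is the lossy step

      # Decoder side (also repeated inside the encoder to stay in sync):
      recon_residual = idctn(quantized * qstep, norm="ortho")
      reconstructed = prediction + recon_residual
      print(float(np.abs(reconstructed - block).mean()))   # small but non-zero error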
  • the encoder 20 and the decoder 30 are described with reference to FIGS. 1A to 3 .
  • FIG. 1A is an exemplary block diagram of a decoding system 10 according to an embodiment of the present application, such as a video decoding system 10 (or simply referred to as the decoding system 10 ) that can utilize the technology of the present application.
  • Video encoder 20 (or simply encoder 20) and video decoder 30 (or simply decoder 30) in video coding system 10 represent devices, etc. that may be used to perform techniques according to various examples described in this application. .
  • a decoding system 10 includes a source device 12 for providing encoded image data 21 such as encoded images to a destination device 14 for decoding the encoded image data 21 .
  • the source device 12 includes an encoder 20 , and optionally, an image source 16 , a preprocessor (or a preprocessing unit) 18 such as an image preprocessor, and a communication interface (or a communication unit) 22 .
  • Image source 16 may include or be any type of image capture device for capturing real-world images and the like, and/or any type of image generation device, such as a computer graphics processor for generating computer-animated images, or any type of device for acquiring and/or providing real-world images or computer-generated images (e.g., screen content, virtual reality (VR) images), and/or any combination thereof (e.g., augmented reality (AR) images). The image source may also be any type of memory or storage that stores any of the above images.
  • the image (or image data) 17 may also be referred to as an original image (or original image data) 17 .
  • the preprocessor 18 is used to receive the original image data 17 and perform preprocessing on the original image data 17 to obtain a preprocessed image (or preprocessed image data) 19 .
  • preprocessing performed by preprocessor 18 may include cropping, color format conversion (eg, from RGB to YCbCr), color grading, or denoising. It can be understood that the preprocessing unit 18 can be an optional component.
  • a video encoder (or encoder) 20 is used to receive preprocessed image data 19 and provide encoded image data 21 (further described below with reference to FIG. 2 etc.).
  • the communication interface 22 in the source device 12 may be used to receive the encoded image data 21 and send the encoded image data 21 (or any other processed version thereof) via the communication channel 13 to another device, such as the destination device 14 or any other device, for storage or direct reconstruction.
  • the destination device 14 includes a decoder 30 , and may also optionally include a communication interface (or communication unit) 28 , a post-processor (or post-processing unit) 32 and a display device 34 .
  • the communication interface 28 in the destination device 14 is used to receive the encoded image data 21 (or any other processed version thereof) directly from the source device 12 or from any other source device such as a storage device (for example, a storage device for encoded image data), and to supply the encoded image data 21 to the decoder 30.
  • the communication interface 22 and the communication interface 28 can be used to send or receive the encoded image data (or encoded data) 21 through a direct communication link between the source device 12 and the destination device 14, such as a direct wired or wireless connection, or through any type of network, such as a wired network, a wireless network, any combination thereof, any type of private or public network, or any combination thereof.
  • the communication interface 22 can be used to encapsulate the encoded image data 21 into a suitable format such as a message, and/or to process the encoded image data using any type of transmission encoding or processing, so that it can be transmitted over a communication link or communication network.
  • the communication interface 28 corresponds to the communication interface 22, eg, can be used to receive the transmission data and process the transmission data using any type of corresponding transmission decoding or processing and/or decapsulation to obtain the encoded image data 21 .
  • Both the communication interface 22 and the communication interface 28 can be configured as unidirectional communication interfaces, as indicated by the arrow in FIG. 1A pointing from the source device 12 to the destination device 14 over the communication channel 13, or as bidirectional communication interfaces, and can be used, for example, to send and receive messages to establish a connection and to acknowledge and exchange any other information related to the communication link and/or the data transmission, such as the transmission of encoded image data.
  • the video decoder (or decoder) 30 is used to receive encoded image data 21 and provide decoded image data (or decoded image data) 31 (which will be further described below with reference to FIG. 3 , etc.).
  • the post-processor 32 is used to perform post-processing on decoded image data 31 (also referred to as reconstructed image data) such as a decoded image to obtain post-processed image data 33 such as a post-processed image.
  • Post-processing performed by the post-processing unit 32 may include, for example, color format conversion (e.g., from YCbCr to RGB), color grading, cropping, or resampling, or any other processing for producing the decoded image data 31 for display by the display device 34 or the like.
  • the display device 34 is used to receive the post-processed image data 33 to display the image to a user or viewer or the like.
  • Display device 34 may be or include any type of display for representing the reconstructed image, eg, an integrated or external display screen or display.
  • the display screen may include a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a plasma display, a projector, a micro-LED display, a liquid crystal on silicon (LCoS) display, a digital light processor (DLP), or any other type of display.
  • Although FIG. 1A shows the source device 12 and the destination device 14 as independent devices, device embodiments may also include both the source device 12 and the destination device 14, or the functionality of both, that is, the source device 12 or its corresponding functionality and the destination device 14 or its corresponding functionality. In these embodiments, the source device 12 or the corresponding functionality and the destination device 14 or the corresponding functionality may be implemented using the same hardware and/or software, by separate hardware and/or software, or by any combination thereof.
  • The encoder 20 (e.g., the video encoder 20) and/or the decoder 30 (e.g., the video decoder 30) may be implemented by processing circuitry, such as one or more microprocessors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), discrete logic, hardware, dedicated video coding processors, or any combination thereof.
  • Encoder 20 may be implemented by processing circuitry 46 to include the various modules discussed with reference to encoder 20 of FIG. 2 and/or any other encoder system or subsystem described herein.
  • Decoder 30 may be implemented by processing circuitry 46 to include the various modules discussed with reference to decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein.
  • The processing circuitry 46 may be used to perform the various operations discussed below. If part of the technology is implemented in software, a device can store the software instructions in a suitable computer-readable storage medium and execute the instructions in hardware using one or more processors, thereby implementing the technology of the present invention.
  • Either of the video encoder 20 and the video decoder 30 may be integrated into a single device as part of a combined codec (encoder/decoder, CODEC), as shown in FIG. 1B.
  • Source device 12 and destination device 14 may comprise any of a variety of devices, including any type of handheld or stationary device, such as a notebook or laptop computer, a mobile phone, a smartphone, a tablet computer, a camera, a desktop computer, a set-top box, a television, a display device, a digital media player, a video game console, a video streaming device (such as a content service server or a content distribution server), a broadcast receiving device, or a broadcast transmitting device, and may use no operating system or any type of operating system.
  • source device 12 and destination device 14 may be equipped with components for wireless communication. Accordingly, source device 12 and destination device 14 may be wireless communication devices.
  • the video coding system 10 shown in FIG. 1A is merely exemplary, and the techniques provided herein may be applicable to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between the encoding device and the decoding device.
  • data is retrieved from local storage, sent over a network, and so on.
  • a video encoding device may encode and store data into memory, and/or a video decoding device may retrieve and decode data from memory.
  • encoding and decoding are performed by devices that do not communicate with each other but simply encode data to memory and/or retrieve and decode data from memory.
  • FIG. 1B is an exemplary block diagram of a video decoding system 40 according to an embodiment of the present application.
  • the video decoding system 40 may include an imaging device 41, a video encoder 20, a video decoder 30 (and/or a video codec implemented by the processing circuit 46), an antenna 42, one or more processors 43, one or more memory stores 44, and/or a display device 45.
  • imaging device 41 , antenna 42 , processing circuit 46 , video encoder 20 , video decoder 30 , processor 43 , memory storage 44 and/or display device 45 are capable of communicating with each other.
  • the video coding system 40 may include only the video encoder 20 or only the video decoder 30 .
  • antenna 42 may be used to transmit or receive an encoded bitstream of video data.
  • display device 45 may be used to present video data.
  • the processing circuit 46 may include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
  • the video decoding system 40 may also include an optional processor 43, and the optional processor 43 may similarly include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
  • the memory storage 44 can be any type of memory, such as volatile memory (for example, static random access memory (SRAM), dynamic random access memory (DRAM), etc.) or non-volatile memory (for example, flash memory, etc.), and the like.
  • memory storage 44 may be implemented by cache memory.
  • processing circuitry 46 may include memory (eg, cache, etc.) for implementing an image buffer or the like.
  • video encoder 20 implemented by logic circuitry may include an image buffer (eg, implemented by processing circuitry 46 or memory storage 44 ) and a graphics processing unit (eg, implemented by processing circuitry 46 ).
  • a graphics processing unit may be communicatively coupled to the image buffer.
  • Graphics processing unit may include video encoder 20 implemented by processing circuitry 46 to implement the various modules discussed with reference to FIG. 2 and/or any other encoder system or subsystem described herein.
  • Logic circuits may be used to perform the various operations discussed herein.
  • video decoder 30 may be implemented by processing circuitry 46 in a similar manner to implement the various modules discussed with reference to video decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein.
  • logic circuit implemented video decoder 30 may include an image buffer (implemented by processing circuit 46 or memory storage 44 ) and a graphics processing unit (eg, implemented by processing circuit 46 ).
  • a graphics processing unit may be communicatively coupled to the image buffer.
  • Graphics processing unit may include video decoder 30 implemented by processing circuitry 46 to implement the various modules discussed with reference to FIG. 3 and/or any other decoder system or subsystem described herein.
  • antenna 42 may be used to receive an encoded bitstream of video data.
  • an encoded bitstream may contain data related to encoded video frames, indicators, index values, mode selection data, etc., as discussed herein, such as data related to encoding partitions (e.g., transform coefficients or quantized transform coefficients, an optional indicator (as discussed), and/or data defining an encoding split).
  • Video coding system 40 may also include video decoder 30 coupled to antenna 42 and used to decode the encoded bitstream.
  • a display device 45 is used to present video frames.
  • the video decoder 30 may be used to perform a reverse process.
  • the video decoder 30 may be configured to receive and parse such syntax elements and decode the associated video data accordingly.
  • video encoder 20 may entropy encode the syntax elements into an encoded video bitstream.
  • video decoder 30 may parse such syntax elements and decode the related video data accordingly.
  • VVC: versatile video coding
  • VCEG: video coding experts group
  • MPEG: motion picture experts group
  • HEVC: high-efficiency video coding
  • JCT-VC: joint collaborative team on video coding
  • FIG. 2 is an exemplary block diagram of a video encoder 20 according to an embodiment of the present application.
  • the video encoder 20 includes an input terminal (or input interface) 201, a residual calculation unit 204, a transformation processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transformation processing unit 212, a reconstruction unit 214, Loop filter 220 , decoded picture buffer (decoded picture buffer, DPB) 230 , mode selection unit 260 , entropy coding unit 270 and output terminal (or output interface) 272 .
  • Mode selection unit 260 may include inter prediction unit 244 , intra prediction unit 254 , and partition unit 262 .
  • Inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown).
  • the video encoder 20 shown in FIG. 2 may also be called a hybrid video encoder or a video encoder based on a hybrid video codec.
  • The residual calculation unit 204, the transform processing unit 206, the quantization unit 208 and the mode selection unit 260 constitute the forward signal path of the encoder 20, while the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, the inter prediction unit 244 and the intra prediction unit 254 form the backward signal path of the encoder, wherein the backward signal path of the encoder 20 corresponds to the decoding signal path of the decoder (see decoder 30 in FIG. 3).
  • Inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, loop filter 220, decoded picture buffer 230, inter prediction unit 244, and intra prediction unit 254 also make up the "built-in decoder" of video encoder 20 .
  • the encoder 20 is operable to receive, via an input 201 or the like, an image (or image data) 17, eg an image in a sequence of images forming a video or a video sequence.
  • the received image or image data may also be a preprocessed image (or preprocessed image data) 19.
  • Image 17 may also be referred to as a current image or an image to be encoded (especially when, in video coding, the current image is distinguished from other images, for example previously encoded and/or decoded images of the same video sequence, i.e. the video sequence that also includes the current image).
  • A (digital) image is or can be viewed as a two-dimensional array or matrix of pixel points with intensity values. Pixels in the array may also be referred to as pixels (pixel or pel, short for picture element). The number of pixels in the array or image in the horizontal and vertical directions (or axes) determines the size and/or resolution of the image. In order to represent a color, three color components are usually used, that is, an image can be represented as or include three pixel arrays. In the RGB format or color space, an image includes corresponding red, green and blue pixel arrays.
  • each pixel is usually expressed in a luminance/chroma format or color space, such as YCbCr, including a luminance component indicated by Y (sometimes also indicated by L) and two chrominance components indicated by Cb and Cr.
  • the luminance (luma) component Y represents brightness or grayscale level intensity (e.g., both are the same in a grayscale image), while the two chrominance (chroma) components Cb and Cr represent chrominance or color information components .
  • an image in the YCbCr format includes a luminance pixel point array of luminance pixel point values (Y) and two chrominance pixel point arrays of chrominance values (Cb and Cr).
  • Images in RGB format can be converted or transformed to YCbCr format and vice versa, a process also known as color transformation or conversion. If the image is black and white, the image may only include an array of luminance pixels. Correspondingly, the image can be, for example, an array of luma pixels in monochrome format or an array of luma pixels and two corresponding arrays of chrominance pixels in 4:2:0, 4:2:2 and 4:4:4 color formats .
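  • To make the color conversion described above concrete, the following is a minimal Python sketch of an RGB-to-YCbCr conversion using BT.601-style full-range coefficients; the exact matrix and value range depend on the color space actually used by a codec, so the constants here are illustrative assumptions rather than part of the method of this application.

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image to YCbCr (BT.601 full-range approximation)."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b              # luminance component Y
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0  # chrominance component Cb
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0  # chrominance component Cr
    return np.clip(np.stack([y, cb, cr], axis=-1), 0, 255).astype(np.uint8)
```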
  • an embodiment of the video encoder 20 may include an image partitioning unit (not shown in FIG. 2 ) for partitioning the image 17 into a plurality of (typically non-overlapping) image blocks (coding tree units) 203 .
  • These blocks may also be called root blocks, macroblocks (H.264/AVC), coding tree blocks (CTBs), or coding tree units (CTUs) in the H.265/HEVC and VVC standards.
  • The segmentation unit can be used to use the same block size for all images of a video sequence together with a corresponding grid that defines the block size, or to vary the block size between images or between subsets or groups of images and to segment each image into corresponding blocks.
  • the video encoder may be adapted to directly receive the blocks 203 of the image 17 , for example one, several or all blocks making up said image 17 .
  • the image block 203 may also be referred to as a current image block or an image block to be encoded.
  • the image block 203 is also or can be regarded as a two-dimensional array or matrix composed of pixels with intensity values (pixel values), but the image block 203 is smaller than that of the image 17 .
  • block 203 may comprise one pixel point array (e.g., a luminance array in the case of a monochrome image 17 or a luminance array or a chrominance array in the case of a color image) or three pixel point arrays (e.g., in the case of a color image 17 one luma array and two chrominance arrays) or any other number and/or type of arrays depending on the color format employed.
  • A block may be an array of M×N (M columns × N rows) pixel points, or an array of M×N transform coefficients, and the like.
  • the video encoder 20 shown in FIG. 2 is used to encode the image 17 block by block, eg, performing encoding and prediction on each block 203 .
  • the video encoder 20 shown in FIG. 2 can also be used to segment and/or encode an image using slices (also called video slices), where an image can use one or more slices (typically non-overlapping ) for segmentation or encoding.
  • slices also called video slices
  • Each slice may include one or more blocks (for example, coding tree units (CTUs)) or one or more block groups (for example, tiles in the H.265/HEVC/VVC standard and bricks in the VVC standard).
  • The video encoder 20 shown in FIG. 2 can also be configured to use slices/coding block groups (also called video coding block groups) and/or coding blocks (also called video coding blocks) to segment and/or encode an image, where an image may be segmented or encoded using one or more (usually non-overlapping) slices/coding block groups; each slice/coding block group may include one or more blocks (such as CTUs) or one or more coding blocks, etc., wherein each coding block may be rectangular or the like and may include one or more complete or partial blocks (such as CTUs).
  • The residual calculation unit 204 is used to calculate the residual block 205 from the image block 203 and the prediction block 265 (the prediction block 265 is described in detail later), for example by subtracting the pixel values of the prediction block 265 from the pixel values of the image block 203 pixel by pixel, to obtain the residual block 205 in the pixel domain.
  • the transform processing unit 206 is configured to perform discrete cosine transform (discrete cosine transform, DCT) or discrete sine transform (discrete sine transform, DST) etc. on the pixel point values of the residual block 205 to obtain transform coefficients 207 in the transform domain.
  • the transform coefficients 207 may also be referred to as transform residual coefficients, representing the residual block 205 in the transform domain.
  • Transform processing unit 206 may be configured to apply an integer approximation of DCT/DST, such as the transform specified for H.265/HEVC. This integer approximation is usually scaled by some factor compared to the orthogonal DCT transform. To maintain the norm of the forward and inverse transformed residual blocks, other scaling factors are used as part of the transformation process. The scaling factor is usually chosen according to certain constraints, such as the scaling factor being a power of 2 for the shift operation, the bit depth of the transform coefficients, the trade-off between accuracy and implementation cost, etc.
  • For example, a specific scaling factor may be specified for the inverse transform at the encoder 20 side by the inverse transform processing unit 212 (and for the corresponding inverse transform at the decoder 30 side by, for example, the inverse transform processing unit 312), and correspondingly a corresponding scaling factor may be specified for the forward transform at the encoder 20 side by the transform processing unit 206.
  • the video encoder 20 (correspondingly, the transform processing unit 206) can be used to output transform parameters such as one or more transform types, for example, directly output or output after encoding or compression by the entropy encoding unit 270 , for example, so that the video decoder 30 can receive and use the transformation parameters for decoding.
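  • As an illustration of the residual calculation and transform steps described above, the following Python sketch subtracts a prediction block from an image block and applies a floating-point 2-D DCT; real encoders use integer approximations of the DCT with additional scaling, so this is a simplified model rather than the transform actually specified by any standard.

```python
import numpy as np
from scipy.fft import dctn, idctn

def forward_transform(block: np.ndarray, prediction: np.ndarray) -> np.ndarray:
    """Residual calculation followed by a 2-D DCT (floating-point sketch)."""
    residual = block.astype(np.float32) - prediction.astype(np.float32)  # residual block 205
    return dctn(residual, norm="ortho")                                  # transform coefficients 207

def inverse_transform(coeffs: np.ndarray, prediction: np.ndarray) -> np.ndarray:
    """Inverse DCT plus reconstruction (adding the residual back to the prediction)."""
    residual = idctn(coeffs, norm="ortho")                               # reconstructed residual block
    return residual + prediction.astype(np.float32)                      # reconstructed block
```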
  • the quantization unit 208 is configured to quantize the transform coefficient 207 by, for example, scalar quantization or vector quantization, to obtain a quantized transform coefficient 209 .
  • Quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209 .
  • the quantization process may reduce the bit depth associated with some or all of the transform coefficients 207 .
  • n-bit transform coefficients may be rounded down to m-bit transform coefficients during quantization, where n is greater than m.
  • the degree of quantization can be modified by adjusting a quantization parameter (quantization parameter, QP).
  • a suitable quantization step size can be indicated by a quantization parameter (quantization parameter, QP).
  • a quantization parameter may be an index to a predefined set of suitable quantization step sizes.
  • Quantization may include dividing by a quantization step size, while corresponding or inverse dequantization performed by the inverse quantization unit 210 or the like may include multiplying by a quantization step size.
  • Embodiments according to some standards such as HEVC may be used to determine the quantization step size using quantization parameters.
  • the quantization step size can be calculated from the quantization parameter using a fixed-point approximation of an equation involving division.
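  • The relationship between QP and quantization step size can be sketched as follows; the mapping shown (the step size roughly doubling every 6 QP values, as in HEVC-style codecs) is written in floating point purely for illustration, whereas real implementations use fixed-point tables to avoid the division and exponentiation.

```python
import numpy as np

def qp_to_step(qp: int) -> float:
    """HEVC-style mapping: the quantization step size roughly doubles every 6 QP values."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs: np.ndarray, qp: int) -> np.ndarray:
    """Scalar quantization: divide by the step size and round (quantized coefficients 209)."""
    return np.round(coeffs / qp_to_step(qp)).astype(np.int32)

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    """Inverse quantization: multiply by the step size (dequantized coefficients 211)."""
    return levels.astype(np.float32) * qp_to_step(qp)
```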
  • the video encoder 20 (correspondingly, the quantization unit 208) can be used to output a quantization parameter (quantization parameter, QP), for example, directly output or output after being encoded or compressed by the entropy encoding unit 270, for example, making the video Decoder 30 may receive and use the quantization parameters for decoding.
  • The inverse quantization unit 210 is used to apply the inverse quantization of the quantization unit 208 to the quantized coefficients to obtain the dequantized coefficients 211, for example by applying the inverse of the quantization scheme performed by the quantization unit 208 according to, or using, the same quantization step size as the quantization unit 208.
  • the dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211 , corresponding to the transform coefficients 207 , but due to loss caused by quantization, the dequantized coefficients 211 are usually not exactly the same as the transform coefficients.
  • The inverse transform processing unit 212 is configured to apply the inverse of the transform performed by the transform processing unit 206, for example an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to obtain a reconstructed residual block 213 (or corresponding dequantized coefficients 213) in the pixel domain.
  • the reconstructed residual block 213 may also be referred to as a transform block 213 .
  • The reconstruction unit 214 (e.g., summer 214) is used to add the transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain the reconstructed block 215 in the pixel domain, for example by adding the pixel values of the reconstructed residual block 213 to the pixel values of the prediction block 265.
  • the loop filter unit 220 (or “loop filter” 220 for short) is used to filter the reconstructed block 215 to obtain the filtered block 221, or generally used to filter the reconstructed pixels to obtain filtered pixel values.
  • a loop filter unit is used to smooth pixel transitions or improve video quality.
  • The loop filter unit 220 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters, such as an adaptive loop filter (ALF), a noise suppression filter (NSF), or any combination thereof.
  • the loop filter unit 220 may include a deblocking filter, an SAO filter, and an ALF filter.
  • the order of the filtering process may be deblocking filter, SAO filter and ALF filter.
  • In addition, a process called luma mapping with chroma scaling (LMCS), i.e. an adaptive in-loop reshaper, may be added. This process is performed before deblocking.
  • The deblocking filtering process can also be applied to internal sub-block edges, such as affine sub-block edges, ATMVP sub-block edges, sub-block transform (SBT) edges and intra sub-partition (ISP) edges.
  • loop filter unit 220 is shown in FIG. 2 as a loop filter, in other configurations, loop filter unit 220 may be implemented as a post-loop filter.
  • the filtering block 221 may also be referred to as a filtering reconstruction block 221 .
  • The video encoder 20 (correspondingly, the loop filter unit 220) can be used to output loop filter parameters (such as SAO filter parameters, ALF filter parameters or LMCS parameters), for example directly or after entropy encoding by the entropy encoding unit 270, for example so that the decoder 30 can receive and use the same or different loop filter parameters for decoding.
  • a decoded picture buffer (DPB) 230 may be a reference picture memory that stores reference picture data for use by the video encoder 20 when encoding video data.
  • the DPB 230 may be formed by any of a variety of memory devices, such as dynamic random access memory (dynamic random access memory, DRAM), including synchronous DRAM (synchronous DRAM, SDRAM), magnetoresistive RAM (magnetoresistive RAM, MRAM), Resistive RAM (resistive RAM, RRAM) or other types of storage devices.
  • the decoded picture buffer 230 may also be used to store other previously filtered blocks, such as the previously reconstructed and filtered block 221, of the same current picture or a different picture such as a previous reconstructed block, and may provide the complete previously reconstructed, i.e. decoded, picture (and corresponding reference blocks and pixels) and/or a partially reconstructed current image (and corresponding reference blocks and pixels), for example for inter-frame prediction.
  • The decoded image buffer 230 can also be used to store one or more unfiltered reconstructed blocks 215, or generally unfiltered reconstructed pixels, for example reconstructed blocks 215 that have not been filtered by the loop filter unit 220, or reconstructed blocks or pixels that have not undergone any other processing.
  • The mode selection unit 260 includes a segmentation unit 262, an inter prediction unit 244 and an intra prediction unit 254, and is used to receive or obtain, from the decoded image buffer 230 or other buffers (e.g., line buffers, not shown), original image data such as the original block 203 (the current block 203 of the current image 17) and reconstructed image data, e.g. filtered and/or unfiltered reconstructed pixels or reconstructed blocks of the same (current) image and/or of one or more previously decoded images.
  • the reconstructed block data is used as reference image data required for prediction such as inter-frame prediction or intra-frame prediction to obtain a prediction block 265 or a prediction value 265 .
  • The mode selection unit 260 can be used to determine or select a partitioning for the current block (including no partitioning) and a prediction mode (such as an intra or inter prediction mode), and to generate a corresponding prediction block 265 that is used to calculate the residual block 205 and to reconstruct the reconstructed block 215.
  • mode selection unit 260 is operable to select a partitioning and prediction mode (e.g., from among the prediction modes supported or available by mode selection unit 260) that provides the best match or the smallest residual (minimum Residual refers to better compression in transmission or storage), or provides minimal signaling overhead (minimum signaling overhead refers to better compression in transmission or storage), or considers or balances both of the above.
  • the mode selection unit 260 may be configured to determine the partition and prediction mode according to rate distortion optimization (RDO), that is, to select the prediction mode that provides the minimum rate distortion optimization.
  • The segmentation unit 262 may be used to segment images of a video sequence into a sequence of coding tree units (CTUs), and a CTU 203 may be further segmented into smaller block portions or sub-blocks (which again form blocks), for example by iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning or triple-tree (TT) partitioning, or any combination thereof, and to perform prediction, for example, on each of the block portions or sub-blocks, wherein the mode selection includes selecting the tree structure of the partitioned block 203 and selecting the prediction mode applied to each of the block portions or sub-blocks.
  • The partitioning (e.g., performed by the partition unit 262) and the prediction processing (e.g., performed by the inter prediction unit 244 and the intra prediction unit 254) are explained in more detail below.
  • the division unit 262 may divide (or divide) one coding tree unit 203 into smaller parts, such as small blocks in the shape of a square or a rectangle.
  • For an image with three pixel arrays, a CTU consists of an N×N block of luma pixels and two corresponding blocks of chroma pixels.
  • The H.265/HEVC video coding standard divides a frame of image into non-overlapping CTUs, and the size of a CTU can be set to 64×64 (the CTU size can also be set to other values, for example the CTU size in the JVET reference software JEM is increased to 128×128 or 256×256).
  • A 64×64 CTU includes a rectangular pixel matrix of 64 columns with 64 pixels in each column, and each pixel includes a luminance component and/or a chrominance component.
  • H.265 uses a QT-based CTU division method, uses the CTU as the root node (root) of the QT, and recursively divides the CTU into several leaf nodes (leaf nodes) according to the QT division method.
  • A node corresponds to an image area. If the node is not divided, the node is called a leaf node, and its corresponding image area forms a CU; if the node is divided further, the image area corresponding to the node is divided into four areas of the same size (each with half the width and height of the divided area), each area corresponds to a node, and whether each of these nodes is divided further needs to be determined separately.
  • Whether a node is split is indicated by the split_cu_flag corresponding to the node in the code stream.
  • The QT level (qtDepth) of the root node is 0, and the QT level of a node is the QT level of its parent node plus 1.
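  • The recursive quad-tree splitting just described can be sketched as follows; the split_decision callback stands in for the split_cu_flag parsed from the code stream (or decided by rate-distortion optimization on the encoder side), and the 64×64/16×16 sizes in the usage example are illustrative assumptions.

```python
def quadtree_split(x, y, size, split_decision, qt_depth=0, leaves=None):
    """Recursively split a CTU into CUs; returns leaf CUs as (x, y, size, qtDepth) tuples."""
    if leaves is None:
        leaves = []
    if not split_decision(x, y, size, qt_depth):
        leaves.append((x, y, size, qt_depth))           # leaf node -> one CU
        return leaves
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):                            # child QT level = parent level + 1
            quadtree_split(x + dx, y + dy, half, split_decision, qt_depth + 1, leaves)
    return leaves

# Example: split a 64x64 CTU until every block is 16x16.
cus = quadtree_split(0, 0, 64, lambda x, y, s, d: s > 16)
```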
  • a CTU contains a luma block and two chroma blocks, and the luma block and the chroma block can be divided in the same way, called luma-chroma union coding tree.
  • In VVC, if the current frame is an intra-coded frame (I frame), when a CTU is a node of a preset size (such as 64×64), the luma block contained in the node is divided through the luma coding tree into a group of coding units containing only luma blocks, and the chroma blocks contained in the node are divided through the chroma coding tree into a group of coding units containing only chroma blocks; the divisions of the luma coding tree and the chroma coding tree are independent of each other.
  • Such luma blocks and chrominance blocks use separate coding trees, called separate trees.
  • A CU includes luma pixels and chroma pixels; in H.266, AVS3 and other standards, in addition to CUs that contain both luma and chroma pixels, there are also luma CUs containing only luma pixels and chroma CUs containing only chroma pixels.
  • the video encoder 20 is configured to determine or select the best or optimal prediction mode from a set of (predetermined) prediction modes.
  • the set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.
  • the set of intra prediction modes can include 35 different intra prediction modes, e.g. non-directional modes like DC (or mean) mode and planar mode, or directional modes as defined by HEVC, or can include 67 different Intra prediction modes, eg non-directional modes like DC (or mean) mode and planar mode, or directional modes as defined in VVC.
  • several traditional angle intra prediction modes are adaptively replaced with wide angle intra prediction modes for non-square blocks defined in VVC.
  • To avoid division operations in DC prediction, only the longer side is used to calculate the average value for non-square blocks.
  • the intra prediction result of the planar mode can also be modified by using a position dependent intra prediction combination (PDPC) method.
  • the intra prediction unit 254 is configured to generate an intra prediction block 265 by using reconstructed pixels of adjacent blocks of the same current image according to an intra prediction mode in the intra prediction mode set.
  • Intra prediction unit 254 (or generally mode selection unit 260) is also configured to output intra prediction parameters (or generally information indicating the selected intra prediction mode for a block) in the form of syntax elements 266 to entropy encoding unit 270 , to be included in the encoded image data 21, so that the video decoder 30 can perform operations such as receiving and using prediction parameters for decoding.
  • The set of inter prediction modes depends on the available reference images (i.e., for example, at least some previously decoded images stored in the DPB 230) and other inter prediction parameters, for example on whether the entire reference image is used or only a part of the reference image (e.g., a search window area around the area of the current block) is used to search for the best matching reference block, and/or for example on whether pixel interpolation such as half-pel, quarter-pel and/or sixteenth-pel interpolation is performed.
  • skip mode and/or direct mode may also be employed.
  • The merge candidate list of this mode consists of the following five candidate types in order: spatial MVPs from spatially adjacent CUs, temporal MVPs from collocated CUs, history-based MVPs from a FIFO table, pairwise average MVPs and zero MVs.
  • Decoder side motion vector refinement (DMVR) based on bilateral matching can be used to increase the accuracy of MV in merge mode.
  • Merge mode with motion vector difference (MMVD) is derived from the merge mode. The MMVD flag is sent immediately after the skip flag and the merge flag to specify whether the CU uses MMVD mode.
  • a CU-level adaptive motion vector resolution (AMVR) scheme may be used. AMVR supports CU's MVD encoding at different precisions.
  • The MVD precision of the current CU is adaptively selected.
  • a combined inter/intra prediction (CIIP) mode can be applied to the current CU.
  • a weighted average is performed on the inter-frame and intra-frame prediction signals to obtain CIIP prediction.
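  • The CIIP blending step can be illustrated with the short Python sketch below; the equal weights and the 8-bit clipping range are placeholder assumptions (VVC derives the weights from the coding modes of neighbouring blocks), so this only shows the weighted-averaging principle.

```python
import numpy as np

def ciip_blend(inter_pred: np.ndarray, intra_pred: np.ndarray,
               w_intra: int = 2, w_inter: int = 2, shift: int = 2) -> np.ndarray:
    """Weighted average of inter and intra prediction signals (w_intra + w_inter == 1 << shift)."""
    blended = (w_intra * intra_pred.astype(np.int32) +
               w_inter * inter_pred.astype(np.int32) +
               (1 << (shift - 1))) >> shift             # rounding offset, then downshift
    return np.clip(blended, 0, 255).astype(np.uint8)
```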
  • The affine motion field of a block is described by the motion information of two control-point motion vectors (4 parameters) or three control-point motion vectors (6 parameters).
  • Subblock-based temporal motion vector prediction (SbTMVP) is similar to the temporal motion vector prediction (TMVP) in HEVC.
  • Bi-directional optical flow (BDOF), formerly known as BIO, is a simplified version that reduces computation, especially in terms of the number of multiplications and the size of the multiplier.
  • In the triangular partition mode, the CU is evenly divided into two triangular parts in one of two ways: diagonal division or anti-diagonal division.
  • the bidirectional prediction mode extends simple averaging to support weighted averaging of two prediction signals.
  • the inter prediction unit 244 may include a motion estimation (motion estimation, ME) unit and a motion compensation (motion compensation, MC) unit (both are not shown in FIG. 2 ).
  • The motion estimation unit is operable to receive or acquire the image block 203 (the current image block 203 of the current image 17) and a decoded image 231, or at least one or more previously reconstructed blocks, e.g. reconstructed blocks of one or more other/different previously decoded images 231, for motion estimation.
  • a video sequence may comprise a current picture and a previous decoded picture 231, or in other words, the current picture and a previous decoded picture 231 may be part of or form a sequence of pictures forming the video sequence.
  • The encoder 20 may be configured to select a reference block from a plurality of reference blocks of the same image or of different images among a plurality of other images, and to provide the reference image (or reference image index) and/or the offset (spatial offset) between the position (x, y coordinates) of the reference block and the position of the current block to the motion estimation unit as inter prediction parameters.
  • This offset is also called a motion vector (MV).
  • the motion compensation unit is configured to obtain, for example, receive, inter-frame prediction parameters, and perform inter-frame prediction according to or using the inter-frame prediction parameters to obtain an inter-frame prediction block 246 .
  • Motion compensation performed by the motion compensation unit may include extracting or generating a prediction block from the motion/block vector determined by motion estimation, and may include performing interpolation to sub-pixel precision. Interpolation filtering can generate additional pixel values from known pixel values, thereby potentially increasing the number of candidate prediction blocks that can be used to encode an image block.
  • the motion compensation unit may locate the prediction block pointed to by the motion vector in one of the reference image lists.
  • the motion compensation unit may also generate block- and video-slice-related syntax elements for use by video decoder 30 when decoding image blocks of video slices. Additionally, or instead of slices and corresponding syntax elements, coding block groups and/or coding blocks and corresponding syntax elements may be generated or used.
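  • As a simple illustration of sub-pixel interpolation for motion compensation, the sketch below generates horizontal half-pel samples with a 2-tap average; HEVC/VVC actually use longer 7/8-tap filters, so the filter here is an assumption made only to show how samples "between" integer positions are produced.

```python
import numpy as np

def half_pel_horizontal(ref: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Return a w x h block at horizontal half-pel position (x + 0.5, y) of reference image `ref`.

    Assumes x + w + 1 <= ref width; a bilinear (2-tap) filter is used for illustration only.
    """
    left = ref[y:y + h, x:x + w].astype(np.int32)
    right = ref[y:y + h, x + 1:x + w + 1].astype(np.int32)
    return ((left + right + 1) >> 1).astype(np.uint8)   # rounded average of neighbouring pixels
```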
  • The entropy coding unit 270 is used to apply an entropy coding algorithm or scheme (for example a variable length coding (VLC) scheme, a context adaptive VLC scheme (CAVLC), an arithmetic coding scheme, a binarization algorithm, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or other entropy encoding methods or techniques) to the quantized residual coefficients 209, inter prediction parameters, intra prediction parameters, loop filter parameters and/or other syntax elements to obtain encoded image data 21 that can be output by the output terminal 272 in the form of an encoded bitstream 21 or the like, so that the video decoder 30 etc. can receive and use the parameters for decoding.
  • Encoded bitstream 21 may be transmitted to video decoder 30 or stored in memory for later transmission or retrieval by video decoder 30 .
  • a non-transform based encoder 20 may directly quantize the residual signal without a transform processing unit 206 for certain blocks or frames.
  • encoder 20 may have quantization unit 208 and inverse quantization unit 210 combined into a single unit.
  • FIG. 3 is an exemplary block diagram of a video decoder 30 according to an embodiment of the present application.
  • the video decoder 30 is used to receive the encoded image data 21 (eg, the encoded bit stream 21 ) encoded by the encoder 20 to obtain a decoded image 331 .
  • the coded image data or bitstream comprises information for decoding said coded image data, eg data representing image blocks of a coded video slice (and/or coded block group or coded block) and associated syntax elements.
  • The decoder 30 includes an entropy decoding unit 304, an inverse quantization unit 310, an inverse transform processing unit 312, a reconstruction unit 314 (such as a summer 314), a loop filter 320, a decoded picture buffer (DPB) 330, a mode application unit 360, an inter prediction unit 344 and an intra prediction unit 354.
  • Inter prediction unit 344 may be or include a motion compensation unit.
  • The video decoder 30 may perform a decoding process that is substantially the inverse of the encoding process described with reference to the video encoder 20 of FIG. 2.
  • As explained with regard to the encoder 20, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the loop filter 220, the decoded picture buffer DPB 230, the inter prediction unit 244 and the intra prediction unit 254 also constitute the "built-in decoder" of the video encoder 20.
  • The inverse quantization unit 310 may be functionally identical to the inverse quantization unit 210, the inverse transform processing unit 312 may be functionally identical to the inverse transform processing unit 212, the reconstruction unit 314 may be functionally identical to the reconstruction unit 214, the loop filter 320 may be functionally identical to the loop filter 220, and the decoded picture buffer 330 may be functionally identical to the decoded picture buffer 230. Therefore, the explanations of the corresponding units and functions of the video encoder 20 apply accordingly to the corresponding units and functions of the video decoder 30.
  • The entropy decoding unit 304 is used to parse the bitstream 21 (or, generally, the encoded image data 21) and perform entropy decoding on the encoded image data 21 to obtain quantized coefficients 309 and/or decoded coding parameters (not shown in FIG. 3), for example any or all of inter prediction parameters (such as reference image index and motion vector), intra prediction parameters (such as intra prediction mode or index), transform parameters, quantization parameters, loop filter parameters and/or other syntax elements.
  • the entropy decoding unit 304 may be configured to apply a decoding algorithm or scheme corresponding to the encoding scheme of the entropy encoding unit 270 of the encoder 20 .
  • Entropy decoding unit 304 may also be configured to provide inter-prediction parameters, intra-prediction parameters, and/or other syntax elements to mode application unit 360, as well as other parameters to other units of decoder 30.
  • Video decoder 30 may receive video slice and/or video block level syntax elements. Additionally, or instead of slices and corresponding syntax elements, coding block groups and/or coding blocks and corresponding syntax elements may be received or used.
  • the inverse quantization unit 310 may be configured to receive a quantization parameter (quantization parameter, QP) (or generally information related to inverse quantization) and quantization coefficients from the encoded image data 21 (for example, parsed and/or decoded by the entropy decoding unit 304), and based on The quantization parameter performs inverse quantization on the decoded quantization coefficient 309 to obtain an inverse quantization coefficient 311 , and the inverse quantization coefficient 311 may also be called a transform coefficient 311 .
  • the inverse quantization process may include using quantization parameters calculated by video encoder 20 for each video block in the video slice to determine the degree of quantization, as well as the degree of inverse quantization that needs to be performed.
  • The inverse transform processing unit 312 is operable to receive the dequantized coefficients 311, also referred to as transform coefficients 311, and to apply a transform to the dequantized coefficients 311 to obtain a reconstructed residual block 313 in the pixel domain.
  • The reconstructed residual block 313 may also be referred to as a transform block 313.
  • the transform may be an inverse transform, such as an inverse DCT, an inverse DST, an inverse integer transform, or a conceptually similar inverse transform process.
  • the inverse transform processing unit 312 may also be configured to receive transform parameters or corresponding information from the encoded image data 21 (eg, parsed and/or decoded by the entropy decoding unit 304 ) to determine the transform to apply to the dequantized coefficients 311 .
  • the reconstruction unit 314 (for example, the summer 314) is used to add the reconstruction residual block 313 to the prediction block 365 to obtain the reconstruction block 315 in the pixel domain, for example, the pixel value of the reconstruction residual block 313 and the prediction block 365 pixel values are added.
  • The loop filter unit 320 is used (in the coding loop or after it) to filter the reconstructed block 315 to obtain the filtered block 321, for example to smooth pixel transitions or improve video quality.
  • The loop filter unit 320 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters, such as an adaptive loop filter (ALF), a noise suppression filter (NSF), or any combination thereof.
  • The loop filter unit 320 may include a deblocking filter, an SAO filter and an ALF filter. The order of the filtering process may be deblocking filter, SAO filter and ALF filter.
  • In addition, a process called luma mapping with chroma scaling (LMCS), i.e. an adaptive in-loop reshaper, may be added. This process is performed before deblocking.
  • The deblocking filtering process can also be applied to internal sub-block edges, such as affine sub-block edges, ATMVP sub-block edges, sub-block transform (SBT) edges and intra sub-partition (ISP) edges.
  • loop filter unit 320 is shown in FIG. 3 as a loop filter, in other configurations, loop filter unit 320 may be implemented as a post-loop filter.
  • the decoded video block 321 in one picture is then stored in a decoded picture buffer 330 which stores the decoded picture 331 as a reference picture for subsequent motion compensation in other pictures and/or for respective output display.
  • The decoder 30 is used to output the decoded image 331, for example through an output terminal, for display or viewing by the user.
  • The inter prediction unit 344 may be functionally identical to the inter prediction unit 244 (in particular the motion compensation unit), and the intra prediction unit 354 may be functionally identical to the intra prediction unit 254; partitioning or partitioning and prediction are determined and performed based on the partitioning and/or prediction parameters or corresponding information received (e.g., parsed and/or decoded by the entropy decoding unit 304) from the encoded image data 21.
  • the mode application unit 360 can be used to perform prediction (intra-frame or inter-frame prediction) of each block according to the reconstructed block, block or corresponding pixels (filtered or unfiltered), to obtain the predicted block 365 .
  • When a video slice is encoded as an intra-coded (I) slice, the intra prediction unit 354 in the mode application unit 360 is used to generate a prediction block 365 for an image block of the current video slice based on the indicated intra prediction mode and data from previously decoded blocks of the current picture.
  • When a video picture is encoded as an inter-coded (i.e., B or P) slice, the inter prediction unit 344 (e.g., motion compensation unit) in the mode application unit 360 is used to generate a prediction block 365 for a video block of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 304.
  • the predicted blocks may be generated from one of the reference pictures in one of the reference picture lists.
  • Video decoder 30 may construct reference frame list 0 and list 1 from the reference pictures stored in DPB 330 using a default construction technique.
  • In addition to or instead of slices (e.g., video slices), the same or similar process can be applied to embodiments using coding block groups (e.g., video coding block groups) and/or coding blocks (e.g., video coding blocks); for example, video may be encoded using I, P or B coding block groups and/or coding blocks.
  • The mode application unit 360 is configured to determine prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and to use the prediction information to generate a prediction block for the current video block being decoded. For example, the mode application unit 360 uses some of the received syntax elements to determine the prediction mode (such as intra prediction or inter prediction) used to encode the video blocks of the video slice, the inter prediction slice type (such as B slice, P slice or GPB slice), construction information for one or more reference picture lists of the slice, the motion vector of each inter-coded video block of the slice, the inter prediction status of each inter-coded video block of the slice, and other information, in order to decode the video blocks in the current video slice.
  • In addition to or instead of slices (e.g., video slices), the same or similar process can be applied to embodiments using coding block groups (e.g., video coding block groups) and/or coding blocks (e.g., video coding blocks); for example, video may be encoded using I, P or B coding block groups and/or coding blocks.
  • The video decoder 30 shown in FIG. 3 can also be used to segment and/or decode an image using slices (also called video slices), where an image can be segmented or decoded using one or more (typically non-overlapping) slices.
  • Each slice may include one or more blocks (e.g., CTUs) or one or more block groups (e.g., coding blocks in the H.265/HEVC/VVC standard and tiles in the VVC standard).
  • The video decoder 30 shown in FIG. 3 can also be configured to use slices/coding block groups (also called video coding block groups) and/or coding blocks (also called video coding blocks) to segment and/or decode an image, where an image may be segmented or decoded using one or more (usually non-overlapping) slices/coding block groups; each slice/coding block group may include one or more blocks (such as CTUs) or one or more coding blocks, etc., wherein each coding block may be rectangular or the like and may include one or more complete or partial blocks (such as CTUs).
  • video decoder 30 may be used to decode encoded image data 21 .
  • decoder 30 may generate an output video stream without loop filter unit 320 .
  • For certain blocks or frames, the non-transform-based decoder 30 can directly inverse-quantize the residual signal without the inverse transform processing unit 312.
  • video decoder 30 may have inverse quantization unit 310 and inverse transform processing unit 312 combined into a single unit.
  • the processing result of the current step can be further processed, and then output to the next step.
  • further operations such as clipping or shifting operations, may be performed on the processing results of interpolation filtering, motion vector derivation or loop filtering.
  • The value of the motion vector is limited to a predefined range according to the number of bits used to represent the motion vector. If the motion vector is represented with bitDepth bits, the range is -2^(bitDepth-1) to 2^(bitDepth-1)-1, where "^" denotes exponentiation. For example, if bitDepth is set to 16, the range is -32768 to 32767; if bitDepth is set to 18, the range is -131072 to 131071.
  • The values of derived motion vectors (e.g. the MVs of the four 4x4 sub-blocks in an 8x8 block) are constrained such that the maximum difference between the integer parts of the four 4x4 sub-block MVs is no more than N pixels, for example no more than 1 pixel.
  • Two methods of restricting the motion vector according to bitDepth are provided.
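  • The two bitDepth-based restrictions can be sketched as a clamping variant and a wrapping variant; the function names below are illustrative, and 16 bits is used only as the default example from the text above.

```python
def clip_mv(mv: int, bit_depth: int = 16) -> int:
    """Clamp a motion vector component to the range representable with bit_depth bits."""
    lo = -(1 << (bit_depth - 1))          # e.g. -32768 for 16 bits
    hi = (1 << (bit_depth - 1)) - 1       # e.g.  32767 for 16 bits
    return max(lo, min(mv, hi))

def wrap_mv(mv: int, bit_depth: int = 16) -> int:
    """Wrap a motion vector component around the bit_depth range instead of clamping it."""
    span = 1 << bit_depth
    return (mv + (1 << (bit_depth - 1))) % span - (1 << (bit_depth - 1))
```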
  • The embodiments of the decoding system 10, the encoder 20 and the decoder 30, as well as the other embodiments described herein, may also be used for still image processing or coding, that is, the processing or coding of a single image independently of any preceding or succeeding image in video coding.
  • If image processing is limited to a single image 17, the inter prediction unit 244 (encoder) and the inter prediction unit 344 (decoder) may not be available.
  • All other functions (also referred to as tools or techniques) of video encoder 20 and video decoder 30 are equally applicable to still image processing, such as residual calculation 204/304, transform 206, quantization 208, inverse quantization 210/310, (inverse ) transformation 212/312, segmentation 262/362, intra prediction 254/354 and/or loop filtering 220/320, entropy encoding 270 and entropy decoding 304.
  • FIG. 4 is an exemplary block diagram of a video decoding device 400 according to an embodiment of the present application.
  • the video coding apparatus 400 is suitable for implementing the disclosed embodiments described herein.
  • the video decoding device 400 may be a decoder, such as the video decoder 30 in FIG. 1A , or an encoder, such as the video encoder 20 in FIG. 1A .
  • The video decoding device 400 includes: an ingress port 410 (or input port 410) and a receiving unit (Rx) 420 for receiving data; a processor, logic unit or central processing unit (CPU) 430 for processing data; a transmitter unit (Tx) 440 and an egress port 450 (or output port 450) for transmitting data; and a memory 460 for storing data.
  • The video decoding device 400 may also include optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the ingress port 410, the receiving unit 420, the transmitting unit 440 and the egress port 450 to serve as the egress or ingress of optical or electrical signals.
  • the processor 430 is realized by hardware and software.
  • Processor 430 may be implemented as one or more processor chips, cores (eg, multi-core processors), FPGAs, ASICs, and DSPs.
  • the processor 430 is in communication with the ingress port 410 , the receiving unit 420 , the transmitting unit 440 , the egress port 450 and the memory 460 .
  • the processor 430 includes a decoding module 470 .
  • the decoding module 470 implements the embodiments disclosed above. For example, the decode module 470 performs, processes, prepares, or provides for various encoding operations. Thus, a substantial improvement is provided to the functionality of the video coding device 400 by the decoding module 470 and the switching of the video coding device 400 to different states is effected.
  • decode module 470 is implemented as instructions stored in memory 460 and executed by processor 430 .
  • The memory 460, which may include one or more magnetic disks, tape drives and solid-state drives, may be used as an overflow data storage device to store programs when such programs are selected for execution and to store instructions and data that are read during program execution.
  • The memory 460 may be volatile and/or non-volatile, and may be a read-only memory (ROM), a random access memory (RAM), a ternary content-addressable memory (TCAM) and/or a static random-access memory (SRAM).
  • Scalable video coding (also known as layered video coding) is an extension of a current video coding standard, generally the scalable video coding (SVC) extension of advanced video coding (AVC, H.264) or the scalable high efficiency video coding (SHVC) extension of high efficiency video coding (HEVC, H.265).
  • The basic structural unit in scalable video coding may be referred to as a layer.
  • Scalable video coding technology can obtain code streams of different resolution levels by performing spatial grading (resolution grading) on the original image blocks.
  • the resolution can refer to the size of the image block in pixels.
  • The resolution of a lower layer is lower, and the resolution of a higher layer is not lower than that of a lower layer; alternatively, by performing temporal grading (frame rate grading) on the original image blocks, code streams of different frame rates can be obtained.
  • the frame rate can refer to the number of image frames contained in the video per unit time.
  • The frame rate of a lower layer is lower, and the frame rate of a higher layer is not lower than that of a lower layer; alternatively, by performing quality grading on the original image blocks, code streams of different encoding quality levels can be obtained.
  • The encoding quality may refer to the quality of the video; the degree of image distortion at a lower layer is greater, while the degree of image distortion at a higher layer is not higher than that at a lower layer.
  • a layer called the base layer is the lowest layer in scalable video coding.
  • In spatial grading, the base layer image blocks are encoded at the lowest resolution; in temporal grading, the base layer image blocks are encoded at the lowest frame rate; in quality grading, the base layer image blocks are encoded with the highest QP or the lowest bit rate. That is, the base layer is the lowest-quality layer in scalable video coding.
  • a layer called an enhancement layer is a layer above the base layer in scalable video coding, and can be divided into multiple enhancement layers from low to high.
  • Based on the coding information obtained from the base layer, the lowest enhancement layer encodes a code stream to be merged, whose coding resolution, frame rate or bit rate is higher than that of the base layer.
  • the higher-level enhancement layer can encode higher-quality image blocks according to the coding information of the lower-level enhancement layer.
  • FIG. 5 is an exemplary hierarchical diagram of scalable video coding in the present application.
  • After the original image block is sent to the scalable encoder, it can be layered into a base layer image block B and enhancement layer image blocks (E1 to En, n ≥ 1) according to different encoding configurations, and these are then encoded separately to obtain a code stream including a base layer code stream and enhancement layer code streams.
  • the base layer code stream is generally a code stream obtained by using the lowest resolution, the lowest frame rate, or the lowest encoding quality parameters for image blocks.
  • The enhancement layer code stream is a code stream superimposed on the base layer that is obtained by encoding image blocks with higher resolution, higher frame rate or higher coding quality parameters.
  • As more layers are superimposed, the spatial, temporal or quality level of the coding also becomes higher and higher.
  • When the encoder transmits the code stream to the decoder, transmission of the base layer code stream is given priority, and when the network has spare capacity, the higher-layer code streams are gradually transmitted.
  • The decoder first receives the base layer code stream and decodes it, and then, according to the received enhancement layer code streams in order from lower to higher layers, decodes code streams of increasingly higher spatial, temporal or quality levels layer by layer, superimposing the higher-layer decoding information on the lower-layer reconstructed blocks to obtain reconstructed blocks of higher resolution, higher frame rate or higher quality.
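  • A minimal Python sketch of spatial (resolution) grading is shown below; the 2x2-average downsampling and the number of layers are illustrative assumptions, and a real scalable encoder would of course encode each layer rather than merely downsample it.

```python
import numpy as np

def spatial_layers(image: np.ndarray, num_enhancement_layers: int = 2) -> list:
    """Build a resolution pyramid: layers[0] is the base layer, later entries are higher layers."""
    layers = [image]
    for _ in range(num_enhancement_layers):
        img = layers[0].astype(np.float32)
        h = img.shape[0] - img.shape[0] % 2
        w = img.shape[1] - img.shape[1] % 2
        img = img[:h, :w]
        down = (img[0::2, 0::2] + img[1::2, 0::2] +
                img[0::2, 1::2] + img[1::2, 1::2]) / 4.0   # simple 2x2 averaging
        layers.insert(0, down.astype(image.dtype))          # lower layer = lower resolution
    return layers
```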
  • each image in a video sequence is usually partitioned into a non-overlapping set of blocks, usually encoded at the block level.
  • Encoders usually process or encode video at the block (image block) level, for example generating prediction blocks through spatial (intra) prediction and temporal (inter) prediction, subtracting the prediction block from the image block (the block currently being processed/to be processed) to obtain a residual block, and transforming the residual block in the transform domain and quantizing it to reduce the amount of data to be transmitted (compressed).
  • the encoder also needs to undergo inverse quantization and inverse transformation to obtain the reconstructed residual block, and then add the pixel value of the reconstructed residual block and the pixel value of the predicted block to obtain the reconstructed block.
  • the reconstruction block of the base layer refers to the reconstruction block obtained by performing the above operations on the base layer image block obtained by layering the original image block.
  • FIG. 6 is an exemplary flow chart of the encoding method of the enhancement layer of the present application.
  • The encoder obtains the base layer prediction block from the original image block (for example, an LCU), then calculates the difference between corresponding pixels of the original image block and the base layer prediction block to obtain the base layer residual block, and then divides, transforms and quantizes the base layer residual block, which, together with the base layer encoding control information, prediction information, motion information, etc., is entropy encoded to obtain the base layer code stream.
  • The encoder performs inverse quantization and inverse transform on the quantized coefficients to obtain the base layer reconstructed residual block, and then sums the corresponding pixels of the base layer prediction block and the base layer reconstructed residual block to obtain the base layer reconstructed block.
  • The image processing method can process a local area to solve problems such as lack of sharpness and blurring in the enlarged image.
  • the application scenario of the image processing method may be services involving image/video collection, storage, and display in electronic equipment, such as image gallery, Huawei video, etc.
  • An electronic device can be, for example, a smart terminal, a tablet, a wearable device, etc.
  • the electronic device has the functions of image compression and image decompression at the same time, that is, the electronic device collects the original image/video, and compresses the original image/video to obtain a compressed image /video, and then store the compressed image/video in the memory of the electronic device; when the user wants to watch the picture/video, the electronic device decompresses the compressed image/video to obtain the reconstructed image/video, and displays it on the screen superior.
  • For the electronic device, reference may be made to the embodiment shown in FIG. 1B, but the electronic device is not limited thereto.
  • The application scenario of the image processing method can also be services involving image/video acquisition, storage or transmission in terminal-cloud sharing, video surveillance, screen projection, etc., for example Huawei Cloud, video surveillance, live broadcast, photo album/video screen projection, and the like.
  • the application scenario includes the source device and the destination device of the image/video.
  • the source device has the function of image compression.
  • The source device collects the original image/video and compresses it to obtain a code stream, and then stores the code stream in its memory; the destination device has the function of image decompression. When the user wants to watch the picture/video, the destination device requests the code stream from the source device, decompresses the code stream to obtain the reconstructed image/video, and displays it on the screen.
  • For the source device and the destination device, reference may be made to the embodiment shown in FIG. 1A, but the source device and the destination device are not limited thereto.
  • the following embodiments are described in terms of encoding side and decoding side.
  • the encoding side and the decoding side can be set on the same electronic device, such as a smart phone; the encoding side and the decoding side can also be set on different electronic devices,
  • for example, the encoding side is on the cloud and the decoding side is on a smartphone, or the encoding side is on a surveillance camera and the decoding side is on the monitoring center platform, or the encoding side is on a smartphone and the decoding side is on a large screen.
  • FIG. 7 is an exemplary flow chart of the image processing method of the present application.
  • Process 700 may be performed by an encoding side and a decoding side.
  • the process 700 describes the encoding side and the decoding side together.
  • the encoding side and the decoding side may be performed independently.
  • the process 700 is described as a series of steps or operations. It should be understood that the process 700 may be performed in various orders and/or concurrently, and is not limited to the order of execution shown in FIG. 7 .
  • the process 700 includes the following steps:
  • Step 701 the coding side acquires the image to be processed.
  • the image to be processed can also be referred to as a global image.
  • typically, a camera device facing a target area can capture an image containing the target area, and this complete image is the global image.
  • the encoding side can directly acquire the image to be processed through its own camera device, can also extract the image to be processed from the gallery, and can also obtain the image to be processed from elsewhere through a network or a storage medium, which is not specifically limited.
  • Step 702 the encoding side acquires multiple sets of visual sensory experience parameters.
  • the encoding side may first divide the image to be processed to obtain multiple candidate partial images, and then acquire multiple sets of visual sensory experience parameters corresponding to the multiple candidate partial images.
  • each candidate partial image corresponds to a local area in the image to be processed.
  • the division method can include dividing according to the pixel characteristics of the image to be processed, for example, dividing according to pixel brightness to obtain dark areas of the image, bright areas of the image, etc., or dividing according to pixel color to obtain the ROI, the subject area of the image, etc.; the division may also be performed in a pyramid manner.
  • Figure 8 is a schematic diagram of the pyramid division method. As shown in Figure 8, 1× in the figure represents the global image, 2× represents a partial image reduced by 2 times horizontally and vertically, 4× represents a partial image reduced by 4 times horizontally and vertically, and 64× represents a partial image reduced by 64 times horizontally and vertically; this embodiment can support reduction by at most 64 times horizontally and vertically. All partial images obtained by the division may be referred to as candidate partial images. It should be noted that in this embodiment of the present application, other methods may also be used to divide the image to be processed, for example, quadtree division or binary tree division, which is not specifically limited. A sketch of such a pyramid-style division is given below.
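  • The following is a minimal sketch of the pyramid-style division, under the assumption that factor f corresponds to an f × f tiling of the global image (each tile covering 1/f of the width and height); the exact geometry in FIG. 8 may differ, and the function name is illustrative only.

```python
import numpy as np

def pyramid_partition(global_image, max_factor=64):
    """Divide the global image into candidate partial images in a pyramid fashion:
    at factor f each tile covers 1/f of the width and height (a 1x1, 2x2, ..., 64x64 grid).
    The tiling interpretation is an assumption based on FIG. 8."""
    h, w = global_image.shape[:2]
    candidates = []  # (factor, row, col, tile)
    f = 1
    while f <= max_factor and h // f > 0 and w // f > 0:
        th, tw = h // f, w // f
        for r in range(f):
            for c in range(f):
                tile = global_image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
                candidates.append((f, r, c, tile))
        f *= 2
    return candidates
```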
  • the visual sensory experience parameters can include brightness, contrast, color and details.
  • by analyzing the pixel characteristics of a candidate partial image, with the goal that the picture presented after processing simulates (approximates or enhances) the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged, the visual sensory experience parameters corresponding to that candidate partial image can be determined.
  • it can be seen that in this embodiment of the present application there is a correspondence between the visual sensory experience parameters and the candidate partial images: based on the pixel characteristics of any candidate partial image, a set of visual sensory experience parameters can be determined for it, so as to adjust at least one of the brightness, contrast, color and detail of that candidate partial image.
  • for example, for a candidate partial image in a dark area of the image, the brightness and contrast can be increased, the color can be adapted to the local dark area, and the underexposed details can be increased; the visual sensory experience parameters of that candidate partial image can thus be determined to include the four types of parameters of brightness, contrast, color and detail, with specific values corresponding to the aforementioned adjustment requirements.
  • for another example, for a candidate partial image in a bright area of the image, the brightness and contrast can be reduced, the color can be adapted to the local bright area, and the overexposed details can be increased; the visual sensory experience parameters again include the four types of parameters of brightness, contrast, color and detail, with specific values corresponding to the aforementioned adjustment requirements.
  • for yet another example, for a candidate partial image in the subject area of the image, the brightness and contrast can be fine-tuned and the color adapted to the image subject; the visual sensory experience parameters of that candidate partial image can then be determined to include the three types of parameters of brightness, contrast and color, with specific values corresponding to the aforementioned adjustment requirements.
  • Figure 9 is a schematic diagram of determining visual sensory experience parameters.
  • three adaptation-field kernels are constructed: a brightness perception model, a contrast perception model and a color perception model, where the brightness perception model is used to ensure that the perception of brightness and darkness under various display capabilities is consistent with the human eye's perception of the real scene; the contrast perception model is used to ensure that the number of just noticeable differences (JNDs) that can be distinguished under various display capabilities is consistent with the human eye's perception of the real scene; and the color perception model ensures that the color under various display capabilities is consistent with the human eye's perception of the real scene.
  • the adaptation field can solve the problem of mapping natural scenes to the best display D1, the best display D1 to various displays D2, and various displays D2 in various viewing environments.
  • L_S.adapt denotes the adaptation of the human eye to the natural scene, L_D1.adapt denotes the adaptation of the human eye to the best display D1, and L_D2.adapt denotes the adaptation of the human eye to the various displays D2.
  • f() is a function that can be derived by fitting experimental data, and hist() denotes taking a histogram.
  • X, Y and Z denote the stimulus values in the X, Y and Z directions that the real world presents to the human eye; hist(X, Y, Z) denotes taking histograms in the X, Y and Z directions.
  • R, G and B denote the image pixel values; hist(R, G, B) denotes taking histograms of the three RGB color channels.
  • max() denotes taking the maximum value.
  • D1_peak_lum denotes the display peak luminance of the best display D1, D2_peak_lum denotes the display peak luminance of the various displays D2, and D2_lux_reflect denotes the reflectance of ambient light. A sketch of such a histogram-based adaptation computation is given below.
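  • The fitted function f() is not disclosed, so the sketch below only illustrates the kind of histogram-based adaptation computation described above, using a histogram-weighted log-average as a placeholder for f(); the function name and the scaling choices are assumptions.

```python
import numpy as np

def adaptation_level(samples, peak_lum=None, ambient_reflect=0.0):
    """Placeholder for the fitted adaptation function f(): the adaptation level is taken as
    the histogram-weighted log-average of the samples, optionally rescaled to the display
    peak luminance and offset by reflected ambient light. The real f() is fitted from
    experimental data and is not reproduced here."""
    hist, edges = np.histogram(samples, bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    weights = hist / max(hist.sum(), 1)
    log_avg = np.exp(np.sum(weights * np.log(np.maximum(centers, 1e-6))))
    if peak_lum is not None:
        log_avg = log_avg / max(centers.max(), 1e-6) * peak_lum
    return log_avg + ambient_reflect

# Illustrative use (variable names follow the definitions above):
# l_scene = adaptation_level(Y.ravel())                                   # L_S.adapt from scene stimuli
# l_d2 = adaptation_level(lum_from_rgb.ravel(), peak_lum=D2_peak_lum,
#                         ambient_reflect=D2_lux_reflect)                 # L_D2.adapt for a display D2
```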
  • the brightness model defines multiple perception intervals, for example "white", "bright gray", "gray", "dark gray" and "black"; L.adapt describes, for the adaptation field formed by a specific field of view, how the human eye perceives the brightness of each object in that field of view.
  • the brightness model does not allow the contrast adjustment (tone mapping, TM) to cross the perceptual interval. For example, the pixel value belonging to the white interval is not allowed to fall into the bright gray interval after TM.
  • the contrast perception model describes how the human eye perceives contrast in an adaptive field formed by a specific field of view.
  • the contrast perception model does not allow the number of JNDs to decrease significantly after TM, for example by more than a threshold of 10%.
  • the color perception model describes the adaptability of the human eye to the chromaticity of different light sources in the adaptive field formed by a specific field of view, so that the perceived color of the object tends to be the color of the memory itself (for example, white paper is still perceived as white), Among them, the pixel value adjustment for the chromaticity adaptation change of the light source can be realized by using the chromatic adaptation transformation algorithm (Chromatic Adaptation Transform).
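  • The chromatic adaptation transform itself is a standard technique; the sketch below shows a von Kries-style adaptation using the Bradford cone-response matrix, which is one common choice and not necessarily the exact variant used by the method.

```python
import numpy as np

# Bradford cone-response matrix used by the classic chromatic adaptation transform.
M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])

def chromatic_adaptation(xyz, src_white, dst_white):
    """Von Kries-style chromatic adaptation (Bradford primaries): adapts XYZ colours seen
    under the source illuminant so that they are perceived as under the destination
    illuminant, which keeps memory colours (e.g. white paper) looking white."""
    m, m_inv = M_BRADFORD, np.linalg.inv(M_BRADFORD)
    gain = (m @ np.asarray(dst_white)) / (m @ np.asarray(src_white))  # per-cone scaling
    adapt = m_inv @ np.diag(gain) @ m
    return np.asarray(xyz) @ adapt.T

# Example: adapt from a warm illuminant (approx. CIE A) to D65.
# xyz_d65 = chromatic_adaptation(xyz_a, src_white=[1.0985, 1.0, 0.3558],
#                                dst_white=[0.9504, 1.0, 1.0888])
```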
  • Step 703 the encoding side encodes the image to be processed and multiple sets of visual sensory experience parameters.
  • after obtaining the multiple sets of visual sensory experience parameters, the encoding side can encode the image to be processed and the multiple sets of visual sensory experience parameters, where the encoding of the image to be processed may follow the joint photographic experts group (JPEG) encoding standard, a hybrid video coding standard or a scalable video coding standard, or may use an end-to-end encoding method, which is not repeated here; the multiple sets of visual sensory experience parameters can be encoded as metadata with reference to the CUVA 1.0 standard.
  • the encoding side can also write in the code stream the division method of the image to be processed, as well as the corresponding relationship between the candidate partial images and the visual sensory experience parameters, so that the decoding side can obtain multiple candidate partial images and Its corresponding sets of visual sensory experience parameters.
  • the method of writing the aforementioned information into the code stream may follow related technologies; as long as the decoding side can learn the division method of the image to be processed and the correspondence between each candidate partial image and the multiple sets of visual sensory experience parameters, this embodiment of the present application makes no specific limitation here.
  • the compressed data or code stream obtained by encoding the image to be processed on the encoding side is stored in an APP field of the joint photographic experts group (JPEG) file or in the enhancement layer of H.264/H.265, and the corresponding metadata is also stored, to ensure a consistent display effect on screens with different display capabilities.
  • FIG. 10 is an example diagram of an image encoding process. As shown in FIG. 10, JPEG encoding is performed on the image to be encoded (the global image) to obtain the code stream of the global image. Correspondingly, the encoding side performs the corresponding JPEG decoding to obtain the reconstructed image of the global image.
  • the residual of the candidate partial image is encoded to obtain the code stream of the candidate partial image.
  • the code stream corresponding to the residual can be placed in certain fields of the general-purpose encoder. For example, for pictures, if the general-purpose encoder is JPEG, the code stream corresponding to the residual can be placed in a JPEG extension field such as APP9 or APP10; for video, if the general-purpose encoder is H.265, the code stream corresponding to the residual is placed in the SEI layer. A sketch of inserting such an APP segment is given below.
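  • As an illustration of placing the residual code stream in a JPEG extension field, the sketch below inserts an APP9 segment immediately after the SOI marker; the segment layout follows the JPEG file format, while the marker choice, the insertion position and the payload framing are application conventions assumed here.

```python
def insert_jpeg_app_segment(jpeg_bytes, payload, marker=0xE9):
    """Insert an APPn segment (APP9 by default, marker bytes 0xFF 0xE9) right after the
    SOI marker. The 2-byte big-endian length field counts itself plus the payload, which
    caps a single segment's payload at 65533 bytes; how the residual code stream is split
    across segments is an application convention, not shown here."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG file (missing SOI)"
    assert len(payload) <= 0xFFFF - 2, "payload too large for a single APPn segment"
    seg = bytes([0xFF, marker]) + (len(payload) + 2).to_bytes(2, "big") + payload
    return jpeg_bytes[:2] + seg + jpeg_bytes[2:]
```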
  • the encoding side generates metadata based on multiple sets of visual sensory experience parameters to ensure consistent screen effects with different display capabilities.
  • the generation method can refer to the CUVA1.0 standard.
  • the metadata can be placed before, after or in the middle of the code stream corresponding to the residual, which is not specifically limited.
  • the method of generating metadata is extended to each candidate partial image, which can ensure that the display effect of each candidate partial image on screens with different display capabilities is consistent.
  • the code stream obtained at the encoding side can be transmitted to the decoding end through a wired or wireless communication link, or can be transmitted to the decoding side through the internal bus of the electronic device.
  • on the decoding side, the process 700 further includes the following steps:
  • Step 704 the decoding side acquires the image to be processed.
  • the decoding side can obtain the image to be processed by decoding the code stream by adopting a decoding method corresponding to the encoding side.
  • FIG. 11 is an example diagram of an image decoding process.
  • the decoding side performs JPEG decoding on the code stream of the global image to obtain a reconstructed image of the global image (image to be processed).
  • the decoding side performs residual decoding on the code stream to obtain the residual of the candidate partial image, and then sums the residual of the candidate partial image with the corresponding pixels of the reconstructed image of the global image to obtain the candidate partial image.
  • Step 705 the decoding side acquires an enlargement operation instruction, which is used to indicate a region to be enlarged in the image to be processed.
  • the enlargement operation instruction is generated by an operation performed on the image to be processed. For example, when a user looks at an image on a mobile phone and wants to zoom in on a partial area of the image to see its details, he or she can use the thumb and forefinger to make a two-finger zoom-in gesture, or a double-tap, on the place on the mobile phone screen where that partial area is displayed, so that the picture of the partial area is displayed on the screen of the mobile phone; this gesture generates the above-mentioned enlargement operation instruction. For another example, the user casts a video from the mobile phone to a large screen for playback;
  • when the user wants to enlarge a partial area of the video, he or she can likewise make a two-finger zoom-in gesture or a double-tap with the thumb and forefinger on the place on the mobile phone screen where that partial area is displayed, so that the video of the partial area is displayed on the large screen, and this gesture also generates the above-mentioned enlargement operation instruction.
  • the enlargement operation instruction may also be generated in other ways, which is not specifically limited in this embodiment of the present application.
  • the enlargement operation instruction also indicates the area to be enlarged, which is associated with the position of the operation that generated the instruction, for example, a rectangular area centered on the starting position of the two-finger zoom-in gesture with a set length as its side length, or a circular area centered on the starting position of the two-finger zoom-in gesture with a set length as its radius.
  • the region to be enlarged may also be determined in other ways, which is not specifically limited in this embodiment of the present application.
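  • A minimal sketch of deriving the area to be enlarged from the starting position of the gesture, assuming the rectangular variant with a fixed side length; the side length, the clamping policy and the function name are illustrative only.

```python
def region_from_gesture(center_x, center_y, image_w, image_h, side=256):
    """One possible mapping from the starting position of a two-finger zoom gesture to a
    rectangular region to be enlarged: a square of the set side length centred on the
    gesture, clamped to the image bounds."""
    half = side // 2
    x0 = min(max(center_x - half, 0), max(image_w - side, 0))
    y0 = min(max(center_y - half, 0), max(image_h - side, 0))
    return x0, y0, min(side, image_w), min(side, image_h)
```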
  • Step 706 the decoding side acquires one or more sets of visual sensory experience parameters corresponding to one or more partial images.
  • the decoding side can obtain, from the information carried in the code stream, the division method of the image to be processed and the correspondence between the divided candidate partial images and the multiple sets of visual sensory experience parameters. Based on this, the decoding side can first divide the image to be processed according to the aforementioned division method to obtain multiple candidate partial images, and then determine, from the multiple candidate partial images, one or more partial images corresponding to the area to be enlarged; for example, according to the position of the area to be enlarged, it determines which candidate partial images are contained in the area to be enlarged, and these candidate partial images are the partial images corresponding to the area to be enlarged.
  • since each of the multiple candidate partial images corresponds to a set of visual sensory experience parameters, the decoding side can obtain the one or more sets of visual sensory experience parameters corresponding to the above one or more partial images by decoding the code stream (for example, by parsing the metadata), as sketched below.
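  • The following sketch illustrates how the decoding side might map the area to be enlarged to the candidate partial images and their parameter sets; the tile and metadata representations are assumptions, since the code-stream syntax is not reproduced here.

```python
def partial_images_for_region(region, candidates, params_by_tile):
    """Select the candidate partial images that overlap the region to be enlarged and
    fetch their visual sensory experience parameters. `candidates` is assumed to be a
    list of (tile_id, x, y, w, h) tuples describing the division carried in the code
    stream, and `params_by_tile` the tile_id -> parameter-set mapping parsed from the
    metadata."""
    rx, ry, rw, rh = region
    selected = []
    for tile_id, x, y, w, h in candidates:
        overlaps = not (x + w <= rx or rx + rw <= x or y + h <= ry or ry + rh <= y)
        if overlaps:
            selected.append((tile_id, params_by_tile.get(tile_id)))
    return selected
```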
  • Step 707 the decoding side processes the corresponding partial images according to one or more sets of visual sensory experience parameters to obtain processed partial images.
  • the decoding side processes the corresponding partial images according to the obtained one or more sets of visual sensory experience parameters to obtain the processed partial images; the pictures presented by these processed partial images can simulate (approximate or enhance) the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged.
  • the processing that the decoding side can perform on the corresponding partial image includes at least one of the following:
  • when the visual sensory experience parameters include a brightness parameter, brightness adjustment is performed on the corresponding partial image;
  • when the visual sensory experience parameters include a contrast parameter, contrast adjustment is performed on the corresponding partial image;
  • when the visual sensory experience parameters include a color parameter, color adjustment is performed on the corresponding partial image;
  • when the visual sensory experience parameters include a detail parameter, detail adjustment is performed on the corresponding partial image.
  • for example, for a partial image in a dark area of the image, the brightness and contrast can be increased, the color can be adapted to the local dark area, and the underexposed details can be increased; the visual sensory experience parameters of that partial image thus include the four types of parameters of brightness, contrast, color and detail, with specific values corresponding to the aforementioned adjustment requirements.
  • for another example, for a partial image in a bright area of the image, the brightness and contrast can be reduced, the color can be adapted to the local bright area, and the overexposed details can be increased; the visual sensory experience parameters again include the four types of parameters, with specific values corresponding to these adjustments.
  • for yet another example, for a partial image in the subject area of the image, the brightness and contrast can be fine-tuned and the color adapted to the image subject; the visual sensory experience parameters of that partial image then include the three types of parameters of brightness, contrast and color, with specific values corresponding to the aforementioned adjustment requirements. A sketch of applying one such parameter set is given below.
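  • A minimal sketch of applying one set of visual sensory experience parameters to a partial image (assumed to be an H × W × 3 RGB array); the concrete operators used for each parameter type, and the parameter names, are simple placeholders rather than the standardized ones.

```python
import numpy as np

def apply_visual_experience_params(partial, params):
    """Apply whichever of the four parameter types are present in one parameter set.
    Placeholder operators: additive gain for brightness, gain about the mean for contrast,
    per-channel gains for colour, and an unsharp-mask style boost for detail."""
    img = partial.astype(np.float32)
    if "brightness" in params:                       # e.g. > 0 brightens a dark area
        img += params["brightness"]
    if "contrast" in params:                         # stretch/compress around the mean
        img = (img - img.mean()) * params["contrast"] + img.mean()
    if "color" in params:                            # per-channel gains (R, G, B)
        img *= np.asarray(params["color"], dtype=np.float32)
    if "detail" in params:                           # simple unsharp-mask style boost
        blur = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                    + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
        img += params["detail"] * (img - blur)
    return np.clip(img, 0, 255).astype(np.uint8)

# A dark-area tile might use, for instance:
# {"brightness": 20, "contrast": 1.2, "color": (1.0, 1.0, 1.05), "detail": 0.5}
```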
  • the decoding side can adjust the details of one or more partial images through the following methods:
  • (1) multiple cameras can be turned on at the same time to shoot the same scene, so that multiple reference images are collected with different focal lengths, different angles, and the like. Because the multiple reference images capture the same scene, details that are not captured in the image to be processed may be captured by another camera; the encoding side can therefore obtain detail parameters of the image to be processed based on the multiple reference images, so that the decoding side can fine-tune the area to be enlarged based on these detail parameters.
  • (2) the user may have taken multiple images of the same scene. When the similarity between a historical image and the image to be processed is high, the historical image can be considered to provide a detail reference for the image to be processed, so that the encoding side can obtain detail parameters based on the multiple historical images and the details of the corresponding partial images can be adjusted accordingly. A sketch of such detail transfer is given below.
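  • A minimal sketch of the detail adjustment using a reference or historical image, assuming the reference has already been registered to and resampled at the partial image's geometry (registration itself is outside this sketch); the high-pass transfer used here is only one possible realization.

```python
import numpy as np

def transfer_detail(partial, reference, strength=1.0):
    """Borrow high-frequency detail from a reference image of the same scene
    (another camera, or a similar historical shot) into the partial image."""
    def lowpass(x):
        return (x + np.roll(x, 1, 0) + np.roll(x, -1, 0)
                  + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 5.0
    p = partial.astype(np.float32)
    r = reference.astype(np.float32)
    detail = r - lowpass(r)                 # high-frequency component of the reference
    return np.clip(p + strength * detail, 0, 255).astype(np.uint8)
```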
  • the decoding side may store the processed partial image locally, so that when the subsequent user zooms in on the same area again, the processed partial image is extracted from the memory and directly displayed.
  • the decoding side may transmit the processed partial image to a display device (such as a display) for display.
  • Figure 12a is an example diagram of the processing effect on a partial image.
  • as shown in Figure 12a, the area to be enlarged is the area containing the shirt in the image to be processed; under the full-image viewing angle, the adapted light source of the adaptation field tends to a high color temperature, and the color conforms to perception.
  • when the area to be enlarged is directly enlarged, the color adaptation field of the light source no longer matches the global one, and the result appears warmer than the real perception.
  • Fig. 12b is an example diagram of the processing effect on a partial image.
  • as shown in Figure 12b, the area to be enlarged is the area containing the dome light in the image to be processed; the contrast, brightness and details are all appropriate under the full-image viewing angle. If the area to be enlarged is directly zoomed in, the details are lost, the image remains overexposed, and the contrast and brightness do not improve the visual perception.
  • after processing with the visual sensory experience parameters, the overexposed details are restored, the contrast is improved, the color saturation is improved, and the visual perception is significantly improved.
  • after displaying the processed partial image, the decoding side can obtain a zoom-in termination instruction, which is generated by the user sliding two fingers inward on the processed partial image, or by a click operation performed by the user with a single finger on the processed partial image; the decoding side then displays the image to be processed according to the zoom-in termination instruction.
  • the user may perform operations on the aforementioned partial image to restore the image before the enlargement (image to be processed).
  • the foregoing operation may be to use the thumb and forefinger to perform a two-finger zoom-out gesture on the enlarged image displayed on the screen of the mobile phone, or to click on the enlarged image displayed on the screen of the mobile phone with one finger. This is not specifically limited.
  • in this embodiment of the present application, after capturing the image to be processed, the encoding side obtains multiple sets of visual sensory experience parameters for the image to be processed and encodes the image to be processed and the multiple sets of visual sensory experience parameters for the decoding side, so that the decoding side, after determining the area to be enlarged according to the user operation, can perform partial image processing based on the visual sensory experience parameters corresponding to the area to be enlarged to obtain a processed partial image. The picture presented by the processed partial image simulates (approximates or enhances) the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged, like the picture a person would see when really walking into the real scene corresponding to the area to be enlarged; on the one hand this solves the problems of unclear and blurred images after enlargement, and on the other hand it can improve the user's experience of visual zooming.
  • FIG. 13 is an exemplary flow chart of the image processing method of the present application.
  • Process 1300 may be performed by an encoding side and a decoding side.
  • the process 1300 describes the encoding side and the decoding side together.
  • the encoding side and the decoding side may be performed independently.
  • the process 1300 is described as a series of steps or operations. It should be understood that the process 1300 may be performed in various orders and/or concurrently, and is not limited to the order of execution shown in FIG. 13 .
  • the process 1300 includes the following steps:
  • Step 1301 the encoding side obtains the image to be processed, and divides the image to be processed to obtain multiple candidate partial images.
  • step 1301 reference may be made to step 701 in the embodiment shown in FIG. 7 , which will not be repeated here.
  • Step 1302 the encoding side acquires multiple sets of visual sensory experience parameters.
  • step 1302 reference may be made to step 702 in the embodiment shown in FIG. 7 , which will not be repeated here.
  • Step 1303 the encoding side respectively processes the corresponding candidate partial images according to multiple sets of visual sensory experience parameters to obtain multiple processed candidate partial images.
  • the difference from the embodiment shown in FIG. 7 is that, upon obtaining the multiple sets of visual sensory experience parameters, the encoding side itself processes the multiple candidate partial images to obtain the multiple processed candidate partial images, instead of transmitting the multiple sets of visual sensory experience parameters to the decoding side and having the decoding side perform the image processing step.
  • for the image processing performed by the encoding side, reference may be made to the description of step 707 in the embodiment shown in FIG. 7; the difference is that there the decoding side only processes the partial images corresponding to the area to be enlarged, whereas in step 1303 the encoding side processes all candidate partial regions respectively.
  • Step 1304 the encoding side encodes the image to be processed and multiple processed candidate partial images.
  • Step 1304 can refer to step 703 of the embodiment shown in FIG. 7 , the difference is that the encoding side only needs to encode the image to be processed and multiple processed candidate partial images, and does not need to generate corresponding metadata based on visual sensory experience parameters.
  • in a possible implementation, before encoding the multiple processed candidate partial images, the encoding side can apply TM to each local area of the image to be processed, so as to improve the similarity between each local area of the image to be processed and its corresponding processed candidate partial image, thereby reducing the amount of residual data for the candidate partial images; a sketch of this step is given below.
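  • A minimal sketch of the per-region TM step, using a least-squares linear tone curve as a stand-in for a real tone-mapping curve; the fitting choice is an assumption.

```python
import numpy as np

def tm_toward_processed(local_region, processed_candidate):
    """Fit a global gain/offset that maps the original local region toward its processed
    candidate, so that the residual (processed candidate minus tone-mapped region) that
    still has to be coded becomes smaller."""
    x = local_region.astype(np.float32).ravel()
    y = processed_candidate.astype(np.float32).ravel()
    gain, offset = np.polyfit(x, y, 1)              # least-squares linear tone curve
    mapped = np.clip(gain * local_region + offset, 0, 255)
    residual = processed_candidate.astype(np.float32) - mapped
    return mapped.astype(np.uint8), residual
```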
  • on the decoding side, the process 1300 further includes the following steps:
  • Step 1305 the decoding side acquires the image to be processed and multiple processed candidate partial images.
  • step 1305 reference may be made to step 704 in the embodiment shown in FIG. 7 , which will not be repeated here.
  • Step 1306 the decoding side obtains an enlargement operation instruction, which is used to indicate the area to be enlarged in the image to be processed.
  • step 1306 reference may be made to step 705 in the embodiment shown in FIG. 7 , which will not be repeated here.
  • Step 1307 the decoding side obtains the processed partial image according to the region to be enlarged.
  • the decoding side can decode the code stream to obtain the multiple processed candidate partial images, and then obtain one or more processed partial images corresponding to the area to be enlarged; that is, it determines which processed candidate partial images are contained in the area to be enlarged, and these processed candidate partial images are the one or more processed partial images corresponding to the area to be enlarged.
  • unlike the embodiment shown in FIG. 7, the candidate partial images obtained by decoding on the decoding side are images that have already been processed by the encoding side; therefore, after obtaining the area to be enlarged, the decoding side determines, from the multiple candidate partial images, the one or more processed candidate partial images corresponding to the area to be enlarged, and these processed candidate partial images directly constitute the processed partial image.
  • in this embodiment of the present application, after capturing the image to be processed, the encoding side obtains the respective visual sensory experience parameters for the multiple candidate partial images in the image to be processed, processes the corresponding candidate partial images respectively according to the multiple sets of visual sensory experience parameters, and encodes the image to be processed and the multiple processed candidate partial images for the decoding side, so that the decoding side can directly obtain the processed partial image by decoding after determining the area to be enlarged according to the user operation. The picture presented by the processed partial image simulates (approximates or enhances) the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged, like the picture a person would see when really walking into the real scene corresponding to the area to be enlarged; on the one hand this solves the problems of unclear and blurred images after enlargement, and on the other hand it can improve the user's experience of visual zooming.
  • FIG. 14 is an exemplary flow chart of the image processing method of the present application.
  • Process 1400 may be performed by an encoding side and a decoding side.
  • the process 1400 describes the encoding side and the decoding side together.
  • the encoding side and the decoding side may be performed independently.
  • Process 1400 is described as a series of steps or operations. It should be understood that process 1400 may be performed in various orders and/or concurrently, and is not limited to the order of execution shown in FIG. 14 .
  • the process 1400 includes the following steps:
  • Step 1401 the coding side acquires the image to be processed.
  • for step 1401, reference may be made to step 701 of the embodiment shown in FIG. 7; the difference is that after the encoding side obtains the image to be processed (the global image), it does not need to divide it.
  • Step 1402 the encoding side encodes the image to be processed.
  • the encoding method of the image to be processed on the encoding side may refer to the JPEG encoding standard, the hybrid video encoding standard or the scalable video encoding standard, and may also adopt an end-to-end encoding manner, which will not be repeated here.
  • on the decoding side, the process 1400 further includes the following steps:
  • Step 1403 the decoding side obtains the image to be processed.
  • step 1403 reference may be made to step 704 in the embodiment shown in FIG. 7 , which will not be repeated here.
  • Step 1404 the decoding side acquires an enlargement operation instruction, which is used to indicate the area to be enlarged in the image to be processed.
  • step 1404 reference may be made to step 705 in the embodiment shown in FIG. 7 , which will not be repeated here.
  • Step 1405 the decoding side acquires visual sensory experience parameters corresponding to the area to be enlarged according to preset rules.
  • since the encoding side only carries the image to be processed (that is, the global image) in the code stream, there are no multiple candidate partial images and no multiple sets of visual sensory experience parameters, let alone candidate partial images that have been divided from the image to be processed and already processed; therefore, the decoder can only obtain the global reconstructed image after parsing the code stream. In this case, if the decoding side is to process the area to be enlarged, it needs to obtain the visual sensory experience parameters corresponding to the area to be enlarged according to preset rules based on historical data or empirical information.
  • the decoding side may first divide the image to be processed according to the first preset rule to obtain multiple candidate partial images, and obtain one or more partial images corresponding to the area to be enlarged, where the multiple candidate partial images include one or more partial images. Then one or more sets of visual sensory experience parameters corresponding to one or more partial images are acquired according to a second preset rule.
  • the aforementioned preset rules include a first preset rule and a second preset rule.
  • the decoding side may first divide the reconstructed global image by using the description about the division method in step 701 of the embodiment shown in FIG. 7 to obtain multiple candidate partial images. Then, according to the position of the region to be enlarged, it is determined which candidate partial images it contains, and these candidate partial images are partial images corresponding to the region to be enlarged.
  • the decoding side can obtain one or more sets of visual sensory experience parameters in the following ways:
  • the brightness adjustment follows the principle that the average pixel value of the area to be enlarged (its cumulative pixel-value sum divided by its pixel count) is brought into agreement with the average pixel value of the global image (its cumulative pixel-value sum divided by its pixel count), where the whole pixel number denotes the number of pixels contained in the global image and the local pixel number denotes the number of pixels contained in the area to be enlarged; f() denotes an iterative rule that makes the equality hold, for example, if the left side of the equation is greater than the right side, the values of some pixels on the right side are reduced until the equality holds.
  • contrast is handled with an operator f1 for computing contrast, such as the Laplacian operator, evaluated over an [M, N] window centered at pixel coordinates (i, j).
  • detail is handled with a detail-extraction operator f2, such as the Sobel operator.
  • color is handled through the ratios of the individual RGB components of each pixel value (for example R/B and B/G), and f3 is an adjustment rule that makes the corresponding equation hold, for example by iteratively multiplying the RGB components by a corrected gain value. A sketch of estimating such parameters on the decoding side is given below.
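  • The sketch below estimates one set of visual sensory experience parameters for the area to be enlarged from the reconstructed global image alone, following the spirit of the preset rules above; the specific statistics, their combination and the parameter names are assumptions, since the exact fitted rules are not disclosed.

```python
import numpy as np

def estimate_experience_params(global_img, region):
    """Derive illustrative parameters for the region to be enlarged (x, y, w, h):
    brightness from the global-vs-local mean difference, contrast from a Laplacian
    response ratio, detail from a Sobel response ratio, and colour from global-vs-local
    channel-mean ratios. `global_img` is assumed to be an H x W x 3 RGB array."""
    g = global_img.astype(np.float32)
    x, y, w, h = region
    loc = g[y:y + h, x:x + w]

    def lum(img):
        return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

    def conv2(img, k):
        out = np.zeros_like(img)
        for dy in range(3):
            for dx in range(3):
                out += k[dy, dx] * np.roll(np.roll(img, dy - 1, 0), dx - 1, 1)
        return out

    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float32)
    sob = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    gl, ll = lum(g), lum(loc)
    return {
        "brightness": float(gl.mean() - ll.mean()),   # push the local mean toward the global mean
        "contrast":   float(np.abs(conv2(gl, lap)).mean()
                            / max(np.abs(conv2(ll, lap)).mean(), 1e-6)),
        "detail":     float(np.abs(conv2(gl, sob)).mean()
                            / max(np.abs(conv2(ll, sob)).mean(), 1e-6)),
        "color":      (float(g[..., 0].mean() / max(loc[..., 0].mean(), 1e-6)),
                       1.0,
                       float(g[..., 2].mean() / max(loc[..., 2].mean(), 1e-6))),
    }
```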
  • Step 1406 the decoding side processes the region to be enlarged according to the visual sensory experience parameters to obtain a processed partial image.
  • step 1406 reference may be made to step 707 in the embodiment shown in FIG. 7 , which will not be repeated here.
  • in this embodiment of the present application, the encoding side directly encodes the image to be processed without obtaining multiple candidate partial images and their respective visual sensory experience parameters, which reduces the code stream overhead.
  • after determining the area to be enlarged according to the user operation, the decoding side can obtain the corresponding one or more partial images based on the area to be enlarged, then obtain one or more sets of visual sensory experience parameters based on the preset rules, and process the aforementioned partial images based on these parameters to obtain a processed partial image. The picture presented by the processed partial image simulates (approximates or enhances) the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged, like the picture a person would see when really walking into the real scene corresponding to the area to be enlarged; on the one hand this solves the problems of unclear and blurred images after enlargement, and on the other hand it can improve the user's experience of visual zooming.
  • in a possible implementation, the encoding side can first process the multiple candidate partial images, and then encode the image to be processed, the multiple processed candidate partial images and multiple sets of visual sensory experience parameters into the code stream for transmission to the decoding side.
  • in this case, the visual sensory experience parameters can be divided into two parts: one part is used by the encoding side to process the multiple candidate partial images, so that the multiple processed candidate partial images have already been adjusted in at least one of brightness, contrast, color and detail;
  • the other part is encoded into the code stream and transmitted to the decoding side, where it is also used to process the multiple processed candidate partial images, so that after the processing on the decoding side the processed partial images better match the needs of the display side and achieve the best display effect.
  • the visual sensory experience parameters transmitted to the decoding side can include brightness, contrast, etc.
  • the decoding side parses the code stream, reconstructs the image to be processed, the multiple processed candidate partial images and the multiple sets of visual sensory experience parameters, and uses the multiple sets of visual sensory experience parameters to process the multiple processed candidate partial images again to obtain the final processed partial images.
  • Fig. 15 is a schematic structural diagram of an exemplary decoding device 1500 according to an embodiment of the present application.
  • the device 1500 of this embodiment can be applied to a device on the decoding side, or can be applied to an electronic device with a function on the decoding side.
  • the electronic device also has an encoding side function.
  • the apparatus 1500 may include: an acquisition module 1501 and a processing module 1502 .
  • the acquiring module 1501 is configured to acquire an image to be processed; acquire an enlargement operation instruction, the enlargement operation instruction is used to indicate an area to be enlarged in the image to be processed, and the area to be enlarged corresponds to one or more Partial images; acquire one or more sets of visual sensory experience parameters corresponding to the one or more partial images; processing module 1502, used to respectively process corresponding partial images according to the one or more sets of visual sensory experience parameters Processing is performed to obtain a processed partial image.
  • the visual sensory experience parameters include at least one of brightness parameters, contrast parameters, color parameters, and detail parameters;
  • the processing module 1502 is specifically configured to perform at least one of the following operations: when the visual sensory experience parameters include a brightness parameter, perform brightness adjustment on the corresponding partial image; when the visual sensory experience parameters include a contrast parameter, perform contrast adjustment on the corresponding partial image; when the visual sensory experience parameters include a color parameter, perform color adjustment on the corresponding partial image; when the visual sensory experience parameters include a detail parameter, perform detail adjustment on the corresponding partial image; wherein the corresponding partial image is one of the one or more partial images.
  • the processing module 1502 is specifically configured to: when the corresponding partial image corresponds to a dark area of the image, perform at least one of brightness increase, contrast increase, color adaptation to the dark area and underexposure-detail increase on the corresponding partial image; or, when the corresponding partial image corresponds to a bright area of the image, perform at least one of brightness reduction, contrast reduction, color adaptation to the bright area and overexposure-detail increase on the corresponding partial image; or, when the corresponding partial image corresponds to the subject area of the image, perform color adaptation to the subject on the corresponding partial image; wherein the corresponding partial image is one of the one or more partial images.
  • the processing module 1502 is specifically configured to acquire multiple reference images, where the multiple reference images and the image to be processed are captured by multiple cameras for the same scene, and to adjust the details of the corresponding partial images according to the multiple reference images.
  • the processing module 1502 is specifically configured to acquire multiple historical images whose similarity with the image to be processed exceeds a preset threshold, and to adjust the details of the corresponding partial images according to the multiple historical images.
  • the picture presented by the processed partial image simulates the visual sensory experience of the real scene of the region to be enlarged perceived by human eyes.
  • the zoom-in operation instruction is generated by an outward sliding operation performed by the user's two fingers on the area to be enlarged; or, the zoom-in operation instruction is generated by a click operation performed by the user's two fingers on the area to be enlarged.
  • the acquiring module 1501 is specifically configured to decode the acquired code stream to obtain the one or more sets of visual sensory experience parameters.
  • the acquiring module 1501 is specifically configured to perform scalable video decoding on the acquired code stream to obtain the image to be processed; or, to perform image decompression on the acquired image file to obtain the image to be processed.
  • the processing module 1502 is further configured to display the processed partial image; or store the processed partial image.
  • the acquiring module 1501 is further configured to acquire a zoom-in termination instruction, where the zoom-in termination instruction is generated by an inward sliding operation performed by the user's two fingers on the processed partial image, or the zoom-in termination instruction is generated by a click operation performed by the user with a single finger on the processed partial image; the processing module 1502 is further configured to display the image to be processed according to the zoom-in termination instruction.
  • the obtaining module 1501 is configured to obtain an image to be processed; obtain a zoom-in operation instruction, the zoom-in operation instruction is used to indicate the region to be enlarged in the image to be processed; obtain the region to be enlarged according to a preset rule Corresponding visual sensory experience parameters; processing module 1502, configured to process the region to be enlarged according to the visual sensory experience parameters to obtain a processed partial image.
  • the visual sensory experience parameters include at least one of brightness parameters, contrast parameters, color parameters, and detail parameters;
  • the processing module 1502 is specifically configured to perform at least one of the following operations: when the visual sensory experience parameters include a brightness parameter, perform brightness adjustment on the corresponding partial image; when the visual sensory experience parameters include a contrast parameter, perform contrast adjustment on the corresponding partial image; when the visual sensory experience parameters include a color parameter, perform color adjustment on the corresponding partial image; when the visual sensory experience parameters include a detail parameter, perform detail adjustment on the corresponding partial image; wherein the corresponding partial image is one of the one or more partial images.
  • the processing module 1502 is specifically configured to: when the corresponding partial image corresponds to a dark area of the image, perform at least one of brightness increase, contrast increase, color adaptation to the dark area and underexposure-detail increase on the corresponding partial image; or, when the corresponding partial image corresponds to a bright area of the image, perform at least one of brightness reduction, contrast reduction, color adaptation to the bright area and overexposure-detail increase on the corresponding partial image; or, when the corresponding partial image corresponds to the subject area of the image, perform color adaptation to the subject on the corresponding partial image; wherein the corresponding partial image is one of the one or more partial images.
  • the processing module 1502 is specifically configured to acquire multiple reference images, where the multiple reference images and the image to be processed are captured by multiple cameras for the same scene, and to adjust the details of the corresponding partial images according to the multiple reference images.
  • the processing module 1502 is specifically configured to acquire multiple historical images whose similarity with the image to be processed exceeds a preset threshold, and to adjust the details of the corresponding partial images according to the multiple historical images.
  • the picture presented by the processed partial image simulates the visual sensory experience of the real scene of the region to be enlarged perceived by human eyes.
  • the zoom-in operation instruction is generated by an outward sliding operation performed by the user's two fingers on the area to be enlarged; or, the zoom-in operation instruction is generated by a click operation performed by the user's two fingers on the area to be enlarged.
  • the acquisition module 1501 is specifically configured to divide the image to be processed according to a first preset rule to obtain multiple candidate partial images; acquire one corresponding to the area to be enlarged or a plurality of partial images, the plurality of candidate partial images including the one or more partial images; obtaining one or more sets of visual sensory experience corresponding to the one or more partial images according to a second preset rule parameter; wherein, the preset rule includes the first preset rule and the second preset rule.
  • the acquiring module 1501 is specifically configured to perform scalable video decoding on the acquired code stream to obtain the image to be processed; or, to perform image decompression on the acquired image file to obtain the image to be processed.
  • the processing module 1502 is further configured to display the processed partial image; or store the processed partial image.
  • the acquiring module 1501 is further configured to acquire a zoom-in termination instruction, where the zoom-in termination instruction is generated by an inward sliding operation performed by the user's two fingers on the processed partial image, or the zoom-in termination instruction is generated by a click operation performed by the user with a single finger on the processed partial image; the processing module 1502 is further configured to display the image to be processed according to the zoom-in termination instruction.
  • the device in this embodiment can be used to execute the technical solutions on the decoding side in the method embodiments shown in FIG. 7 , FIG. 13 or FIG. 14 , and its implementation principles and technical effects are similar, and will not be repeated here.
  • Fig. 16 is a schematic structural diagram of an exemplary encoding device 1600 according to the embodiment of the present application.
  • the device 1600 of this embodiment can be applied to encoding side equipment, or can be applied to electronic equipment with encoding side functions.
  • the electronic device also has a decoding side function.
  • the apparatus 1600 may include: an acquisition module 1601, an encoding module 1602 and a processing module 1603. Specifically:
  • the acquiring module 1601 is configured to acquire images to be processed; acquire multiple sets of visual sensory experience parameters; the encoding module 1602 is configured to encode the image to be processed and the multiple sets of visual sensory experience parameters.
  • the acquiring module 1601 is configured to acquire an image to be processed and divide the image to be processed to obtain multiple candidate partial images, and to acquire multiple sets of visual sensory experience parameters, where the multiple sets of visual sensory experience parameters correspond to the multiple candidate partial images; the processing module 1603 is configured to process the corresponding candidate partial images according to the multiple sets of visual sensory experience parameters to obtain multiple processed candidate partial images; and the encoding module 1602 is configured to encode the image to be processed and the multiple processed candidate partial images.
  • the visual sensory experience parameters include at least one of brightness parameters, contrast parameters, color parameters, and detail parameters;
  • the processing module 1603 is specifically configured to perform at least one of the following operations: when the visual sensory experience parameters include a brightness parameter, perform brightness adjustment on the corresponding partial image; when the visual sensory experience parameters include a contrast parameter, perform contrast adjustment on the corresponding partial image; when the visual sensory experience parameters include a color parameter, perform color adjustment on the corresponding partial image; when the visual sensory experience parameters include a detail parameter, perform detail adjustment on the corresponding partial image; wherein the corresponding partial image is one of the plurality of candidate partial images.
  • the processing module 1603 is specifically configured to: when the corresponding partial image corresponds to a dark area of the image, perform at least one of brightness increase, contrast increase, color adaptation to the dark area and underexposure-detail increase on the corresponding partial image; or, when the corresponding partial image corresponds to a bright area of the image, perform at least one of brightness reduction, contrast reduction, color adaptation to the bright area and overexposure-detail increase on the corresponding partial image; or, when the corresponding partial image corresponds to the subject area of the image, perform color adaptation to the subject on the corresponding partial image; wherein the corresponding partial image is one of the plurality of candidate partial images.
  • the processing module 1603 is specifically configured to acquire multiple reference images, where the multiple reference images and the image to be processed are captured by multiple cameras for the same scene, and to adjust the details of the corresponding partial images according to the multiple reference images.
  • the processing module 1603 is specifically configured to acquire multiple historical images whose similarity with the image to be processed exceeds a preset threshold, and to adjust the details of the corresponding partial images according to the multiple historical images.
  • the acquiring module 1601 is specifically configured to acquire the multiple sets of visual sensory experience parameters according to a third preset rule.
  • the encoding module 1602 is specifically configured to perform scalable video encoding on the image to be processed and the plurality of processed candidate partial images to obtain a code stream; or, performing image compression on the image to be processed and the plurality of processed candidate partial images to obtain an image file.
  • the device in this embodiment can be used to implement the technical solutions on the encoding side in the method embodiments shown in FIG. 7 , FIG. 13 or FIG. 14 , and its implementation principles and technical effects are similar, and will not be repeated here.
  • each step of the above-mentioned method embodiments may be completed by an integrated logic circuit of hardware in a processor or instructions in the form of software.
  • the processor can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the methods disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, register.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memories mentioned in the above embodiments may be volatile memories or nonvolatile memories, or may include both volatile and nonvolatile memories.
  • the non-volatile memory can be read-only memory (read-only memory, ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically programmable Erases programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • Volatile memory can be random access memory (RAM), which acts as external cache memory.
  • by way of example rather than limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM) and direct rambus random access memory (DR RAM).
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions described above are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, or the part that contributes to the prior art, or part of the technical solution, can essentially be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or another medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application provides an image processing method and apparatus. On the encoding side, the method includes: the encoding side acquires an image to be processed; acquires multiple sets of visual sensory experience parameters; and encodes the image to be processed and the multiple sets of visual sensory experience parameters. On the decoding side, the method includes: the decoding side acquires the image to be processed; acquires an enlargement operation instruction; acquires one or more sets of visual sensory experience parameters corresponding to one or more partial images; and processes the corresponding partial images respectively according to the one or more sets of visual sensory experience parameters to obtain processed partial images. The present application enables the picture presented in the enlarged area to simulate the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged, which on the one hand solves the problems of unclear and blurred images after enlargement, and on the other hand improves the user's experience of visual zooming.

Description

Image processing method and apparatus
This application claims priority to Chinese Patent Application No. 202111449229.X, filed with the Chinese Patent Office on November 27, 2021 and entitled "Image processing method and apparatus", which is incorporated herein by reference in its entirety.
技术领域
本申请涉及图像处理技术,尤其涉及一种图像处理方法和装置。
背景技术
用户通过终端设备(例如,手机、平板、大屏等)看图片或视频时,有时会手动放大图片或视频的局部区域(例如,图片或视频中的主体或感兴趣区域(region of interest,ROI))以查看该局部区域的细节。相关技术采用传统算法对局部区域进行放大,例如,图像放大算法(Lanczos)、保边放大算法等。
但是,上述算法通常是基于低通滤波器的原理实现的,放大后的图像存在不清晰、图像模糊等问题。
发明内容
本申请提供一种图像处理方法和装置,以对放大区域所呈现的画面以处理后呈现的画面仿真人眼感知待放大区域的真实场景的视觉感官体验,仿佛人真的走进待放大区域对应的真实场景时所看到的画面,一方面解决放大后的图像存在不清晰、图像模糊的问题,另一方面可以提升用户进行视觉放大的体验。
In a first aspect, the present application provides an image processing method. On the encoding side, the method includes: the encoding side acquires an image to be processed; acquires multiple sets of visual sensory experience parameters; and encodes the image to be processed and the multiple sets of visual sensory experience parameters.
On the decoding side, the method includes: the decoding side acquires the image to be processed; acquires an enlargement operation instruction, where the enlargement operation instruction is used to indicate an area to be enlarged in the image to be processed; acquires one or more sets of visual sensory experience parameters corresponding to one or more partial images; and processes the corresponding partial images respectively according to the one or more sets of visual sensory experience parameters to obtain processed partial images.
In this embodiment of the present application, after capturing the image to be processed, the encoding side obtains multiple sets of visual sensory experience parameters for the image to be processed, and encodes the image to be processed and the multiple sets of visual sensory experience parameters for transmission to the decoding side. In this way, after determining the area to be enlarged according to the user operation, the decoding side can perform partial image processing based on the visual sensory experience parameters corresponding to the area to be enlarged to obtain a processed partial image. The picture presented by the processed partial image simulates the visual sensory experience of the human eye perceiving the real scene of the area to be enlarged, like the picture a person would see when really walking into the real scene corresponding to the area to be enlarged; on the one hand this solves the problems of unclear and blurred images after enlargement, and on the other hand it can improve the user's experience of visual zooming.
The partial image processing performed by the decoding side based on the visual sensory experience parameters corresponding to the area to be enlarged may include constructing three adaptation-field kernels: a brightness perception model, a contrast perception model and a color perception model. The brightness perception model is used to ensure that, under various display capabilities, the perception of brightness and darkness is consistent with the human eye's perception of the real scene; the contrast perception model is used to ensure that, under various display capabilities, the number of just noticeable differences (JNDs) is consistent with the human eye's perception of the real scene; and the color perception model ensures that, under various display capabilities, the color is consistent with the human eye's perception of the real scene. The adaptation field can solve the problems of mapping the natural scene to the best display D1, the best display D1 to various displays D2, and the various displays D2 to various viewing environments. It should be noted that the above process only describes one way of determining the visual sensory experience parameters by way of example; other ways of determining the visual sensory experience parameters may also be used in this embodiment of the present application, which is not specifically limited.
待处理图像也可以称作全局图像,通常摄像装置对着目标区域可以采集到包含目标区域的一张图像,该完整的图像即为全局图像。编码侧对待处理图像进行划分可以得到多张备选局部图像,由于备选局部图像是由待处理图像划分的到的,因此每个备选局部图像对应待处理图像中的局部区域,例如,对待处理图像进行四叉树划分,左上角的备选局部图像对应位于待处理图像左上角的四分之一的局部区域。
视觉感官体验参数可以包括亮度、对比度、颜色和细节这四类参数,通过对备选局部图像的像素特征进行分析,以处理后呈现的画面仿真(逼近或者增强)人眼感知待放大区域的真实场景的视觉感官体验为目的,可以确定该备选局部图像对应的视觉感官体验参数。由此可见,本申请实施例中视觉感官体验参数和备选局部图像之间具备对应关系,基于任意一个备选局部图像的像素特征,可以为其确定一组视觉感官体验参数,以实现对该备选局部图像进行亮度、对比度、颜色和细节中至少其一的调整。
例如,对图像暗区的备选局部图像,可以对亮度和对比度进行提升,将颜色适应局部暗区,对欠曝细节进行增加,这样可以确定该备选局部图像的视觉感官体验参数包括亮度、对比度、颜色和细节四类参数,而参数的具体取值与前述调节需求相对应。又例如,对图像亮区的备选局部图像,可以对亮度和对比度进行降低,将颜色适应局部亮区,对过曝细节进行增加,这样可以确定该备选局部图像的视觉感官体验参数包括亮度、对比度、颜色和细节四类参数,而参数的具体取值与前述调节需求相对应。又例如对图像主体区域的备选局部图像,可以对亮度和对比度微调,将颜色适应图像主体,这样可以确定该备选局部图像的视觉感官体验参数包括亮度、对比度和颜色三类参数,而参数的具体取值与前述调节需求相对应。
编码侧可以在获取多组视觉感官体验参数后,对待处理图像和多组视觉感官体验参数进行编码,其中,对待处理图像的编码方式可以参照联合图像专家组(joint photographic experts group,JPEG)编码标准、混合视频编码标准或者可分级视频编码标准,还可以采用端到端编码方式,此处不再赘述;对多组视觉感官体验参数可以作为元数据(metadata)进行编码,可以参照CUVA1.0标准。此外,编码侧还可以在码流中写入对待处理图像的划分方式,以及备选局部图像和视觉感官体验参数之间的对应关系,以使得解码侧可以通过解析码流获取多张备选局部图像及其对应的多组视觉感官体验参数。前述信息写入码流的方式可以参照相关技术,只要能满足使解码侧了解待处理图像的划分方式,以及各个备选局部图像与多组视觉感官体验参数的对应关系的目的,本申请实施例此处不做具体限定。
解码侧通过采用与编码侧相对应的解码方式对码流进行解码得到待处理图像。
放大操作指令是作用于待处理图像上的操作产生的。例如,用户在手机上看一张图像,当想要将该图片的局部区域放大看细节时,可以用大拇指和食指在手机的屏幕上显示该局部区域的地方做两指放大的手势,从而在手机的屏幕上显示该局部区域的图片,该手势即 可产生上述放大操作指令。又例如,用户将手机上的视频投屏到大屏上播放,当想要将某部分区域的视频放大播放时,可以用大拇指和食指在手机的屏幕上显示该局部区域的地方做两指放大的手势,从而在大屏上显示该局部区域的视频,该手势即可产生上述放大操作指令。需要说明的是,放大操作指令还可以通过其它方式产生,本申请实施例对此不做具体限定。
解码侧可以根据码流中携带的信息可以获取待处理图像的划分方式,以及划分的到的多张备选局部图像和多组视觉感官体验参数之间的对应关系,基于此,解码侧可以先根据前述划分方式对待处理图像进行划分得到多张备选局部图像,然后从中获取与待放大区域对应的一张或多张局部图像,解码得到一组或多组视觉感官体验参数,该一组或多组视觉感官体验参数和该一张或多张局部图像对应,最后根据一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像。
解码侧根据得到的一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像,这些经处理的局部图像可以呈现的画面仿真人眼感知待放大区域的真实场景的视觉感官体验。
根据一组视觉感官体验参数包含的参数内容,解码侧可以对与其对应的局部图像进行的处理包括以下至少一种:
当视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;
当视觉感官体验参数包括对比度参数时,对对应局部图像进行对比度调节;
当视觉感官体验参数包括颜色参数时,对对应局部图像进行颜色调节;
当视觉感官体验参数包括细节参数时,对对应局部图像进行细节调节。
例如,对图像暗区的局部图像,可以对亮度和对比度进行提升,将颜色适应局部暗区,对欠曝细节进行增加,这样可以确定该局部图像的视觉感官体验参数包括亮度、对比度、颜色和细节四类参数,而参数的具体取值与前述调节需求相对应。又例如,对图像亮区的局部图像,可以对亮度和对比度进行降低,将颜色适应局部亮区,对过曝细节进行增加,这样可以确定该局部图像的视觉感官体验参数包括亮度、对比度、颜色和细节四类参数,而参数的具体取值与前述调节需求相对应。又例如对图像主体区域的局部图像,可以对亮度和对比度微调,将颜色适应图像主体,这样可以确定该局部图像的视觉感官体验参数包括亮度、对比度和颜色三类参数,而参数的具体取值与前述调节需求相对应。
在一种可能的实现方式中,解码侧可以通过以下方法实现对一张或多张局部图像的细节调节:
(1)获取多张参考图像,该多张参考图像和待处理图像是多个摄像头针对同一场景拍摄得到的,根据多张参考图像对对应局部图像进行细节调节。
(2)获取与待处理图像的相似度超过预设阈值的多张历史图像,根据多张历史图像对对应局部图像进行细节调节。
在一种可能的实现方式中,解码侧可以将经处理的局部图像存储在本地,以便于后续用户再次放大同样区域时,将经处理的局部图像从存储器中提取出来直接显示。
在一种可能的实现方式中,解码侧可以将经处理的局部图像传输给显示装置(例如显示器)进行显示。
第二方面,本申请实施例提供一种图像处理方法。在编码侧,方法包括:编码侧获取待处理图像,并对待处理图像进行划分得到多张备选局部图像;获取多组视觉感官体验参数;根据多组视觉感官体验参数分别对对应的备选局部图像进行处理以得到多张经处理的备选局部图像;对待处理图像和多张经处理的备选局部图像进行编码。
在解码侧,方法包括:解码侧获取待处理图像和多张经处理的备选局部图像;获取放大操作指令,放大操作指令用于指示待处理图像中的待放大区域;根据待放大区域获取经处理的局部图像。
本申请实施例,编码侧在采集到待处理图像后,针对待处理图像中的多张备选局部图像获取各自的视觉感官体验参数,然后根据多组视觉感官体验参数分别对对应的备选局部图像进行处理得到多张经处理的备选局部图像,编码侧将待处理图像和多张经处理的备选局部图像编码发送给解码侧,这样解码侧可以在根据用户操作确定待放大区域后,直接解码得到经处理的局部图像,该经处理的局部图像呈现的画面仿真(逼近或者增强)人眼感知待放大区域的真实场景的视觉感官体验,仿佛人真的走进待放大区域对应的真实场景时所看到的画面,一方面解决放大后的图像存在不清晰、图像模糊的问题,另一方面可以提升用户进行视觉放大的体验。
与第一方面实施例的区别在于:编码侧在得到多组视觉感官体验参数即对多张备选局部图像分别进行处理得到多张经处理的备选局部图像,而无需将多组视觉感官体验参数传输给解码侧,由解码侧执行图像处理的步骤。
在一种可能的实现方式中，编码侧可以在编码多张经处理的备选局部图像之前，对待处理图像的各个局部区域实施TM，从而提升待处理图像的局部区域和与其对应的经处理的局部图像之间的相似度，进而减少备选局部图像的残差数据量。
解码侧可以解码得到多张备选局部图像,然后获取与待放大区域对应的一张或多张局部图像,根据一张或多张局部图像得到经处理的局部图像。
与第一方面实施例的区别在于:解码侧解码得到的备选局部图像是编码侧已经处理过的图像,因此解码侧在获取待放大区域后,从多张备选局部图像中确定与待放大区域对应的一张或多张局部图像,该一张或多张局部图像直接构成经处理的局部图像。
第三方面,本申请实施例提供一种图像处理方法。在编码侧,方法包括:编码侧获取待处理图像;对待处理图像进行编码。
在解码侧,方法包括:解码侧获取待处理图像;获取放大操作指令,放大操作指令用于指示待处理图像中的待放大区域;根据预设规则获取与待放大区域对应的视觉感官体验参数;根据视觉感官体验参数对待放大区域进行处理以得到经处理的局部图像。
本申请实施例，编码侧在采集到待处理图像后，直接编码待处理图像，无需获取其多张备选局部图像，以及各自的视觉感官体验参数，这样可以减少码流的占用。而解码侧可以在根据用户操作确定待放大区域后，基于待放大区域得到对应的一张或多张局部图像，再基于预设规则得到一组或多组视觉感官体验参数，从而基于这些参数对前述局部图像进行处理从而得到经处理的局部图像，该经处理的局部图像呈现的画面仿真（逼近或者增强）人眼感知待放大区域的真实场景的视觉感官体验，仿佛人真的走进待放大区域对应的真实场景时所看到的画面，一方面解决放大后的图像存在不清晰、图像模糊的问题，另一方面可以提升用户进行视觉放大的体验。
本申请实施例中,编码侧只在码流中携带了待处理图像(即全局图像),没有多张备选局部图像,也没有多组视觉感官体验参数,更不会对待处理图像分割后再进行图像处理的操作,因此解码端解析码流后只能得到全局的重建图像。这样解码侧要对待放大区域进行处理就需要根据历史数据或经验信息来获取待放大区域对应的视觉感官体验参数。
解码侧可以先根据第一预设规则对待处理图像进行划分以得到多张备选局部图像,获取与待放大区域对应的一张或多张局部图像,多张备选局部图像包括一张或多张局部图像。然后根据第二预设规则获取与一张或多张局部图像对应的一组或多组视觉感官体验参数。上述预设规则包括第一预设规则和第二预设规则。
解码侧可以先采用第一方面实施例中的关于划分方式的描述对重建得到的全局图像进行划分得到多张备选局部图像。再根据待放大区域的位置,确定其包含了哪些备选局部图像,这些备选局部图像即为与待放大区域对应的局部图像。
此外,结合第一或二方面实施例,本申请实施例可以由编码侧先对多张备选局部图像进行处理,然后将待处理图像、多张经处理的备选局部图像以及多组视觉感官体验参数编入码流,传输至解码侧。此时,视觉感官体验参数可以分成两部分,一部分给编码侧使用,用于对多张备选局部图像进行处理,这样多张经处理的备选局部图像是已经从亮度、对比度、颜色和细节中的至少其一进行了调整,得到较好的效果;另一部分编入码流传输至解码侧使用,也是用于对多张备选局部图像进行处理,这样再经过解码侧的处理,可以使得经处理的局部图像更符合显示端的需求,达到最好的显示效果,相应的,传输至解码侧的视觉感官体验参数可以包括对亮度、对比度等。解码侧解析码流,重建得到待处理的图像、多张经处理的备选局部图像以及多组视觉感官体验参数,采用多组视觉感官体验参数再次对多张经处理的备选局部图像进行处理得到最终的经处理的局部图像。
第四方面,本申请实施例提供一种解码装置,包括:获取模块和处理模块。
可选的,获取模块,用于获取待处理图像;获取放大操作指令,所述放大操作指令用于指示所述待处理图像中的待放大区域,所述待放大区域对应一张或多张局部图像;获取与所述一张或多张局部图像对应的一组或多组视觉感官体验参数;处理模块,用于根据所述一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像。
在一种可能的实现方式中,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;所述处理模块,具体用于执行以下至少一种操作:当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
在一种可能的实现方式中,所述处理模块,具体用于当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;其中,所述 对应局部图像是所述一张或多张局部图像的其中之一。
在一种可能的实现方式中,所述处理模块,具体用于获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;根据所述多张参考图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述处理模块,具体用于获取与所述待处理图像的相似度超过预设阈值的多张历史图像;根据所述多张历史图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述经处理的局部图像呈现的画面仿真人眼感知所述待放大区域的真实场景的视觉感官体验。
在一种可能的实现方式中,所述放大操作指令是由用户的两指在所述待放大区域进行的向外滑动操作产生的;或者,所述放大操作指令是由用户的两指在所述待放大区域进行的点击操作产生的。
在一种可能的实现方式中,所述获取模块,具体用于对获取的码流进行解码以得到所述一组或多组视觉感官体验参数。
在一种可能的实现方式中,所述获取模块,具体用于对获取的码流进行可分级视频解码以得到所述待处理图像;或者,对获取的图像文件进行图像解压缩以得到所述待处理图像。
在一种可能的实现方式中,所述处理模块,还用于显示所述经处理的局部图像;或者,存储所述经处理的局部图像。
在一种可能的实现方式中,所述获取模块,还用于获取放大终止指令,所述放大终止指令是由用户的两指在所述经处理的局部图像上进行的向内滑动操作产生的,或者,所述放大终止指令是由用户的单指在所述经处理的局部图像上进行的点击操作产生的;所述处理模块,还用于根据所述放大终止指令显示所述待处理图像。
可选的,获取模块,用于获取待处理图像;获取放大操作指令,所述放大操作指令用于指示所述待处理图像中的待放大区域;根据预设规则获取与所述待放大区域对应的视觉感官体验参数;处理模块,用于根据所述视觉感官体验参数对所述待放大区域进行处理以得到经处理的局部图像。
在一种可能的实现方式中,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;所述处理模块,具体用于执行以下至少一种操作:当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
在一种可能的实现方式中,所述处理模块,具体用于当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
在一种可能的实现方式中,所述处理模块,具体用于获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;根据所述多张参考图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述处理模块,具体用于获取与所述待处理图像的相似度超过预设阈值的多张历史图像;根据所述多张历史图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述经处理的局部图像呈现的画面仿真人眼感知所述待放大区域的真实场景的视觉感官体验。
在一种可能的实现方式中,所述放大操作指令是由用户的两指在所述待放大区域进行的向外滑动操作产生的;或者,所述放大操作指令是由用户的两指在所述待放大区域进行的点击操作产生的。
在一种可能的实现方式中,所述获取模块,具体用于根据第一预设规则对所述待处理图像进行划分以得到多张备选局部图像;获取与所述待放大区域对应的一张或多张局部图像,所述多张备选局部图像包括所述一张或多张局部图像;根据第二预设规则获取与所述一张或多张局部图像对应的一组或多组视觉感官体验参数;其中,所述预设规则包括所述第一预设规则和所述第二预设规则。
在一种可能的实现方式中,所述获取模块,具体用于对获取的码流进行可分级视频解码以得到所述待处理图像;或者,对获取的图像文件进行图像解压缩以得到所述待处理图像。
在一种可能的实现方式中,所述处理模块,还用于显示所述经处理的局部图像;或者,存储所述经处理的局部图像。
在一种可能的实现方式中,所述获取模块,还用于获取放大终止指令,所述放大终止指令是由用户的两指在所述经处理的局部图像上进行的向内滑动操作产生的,或者,所述放大终止指令是由用户的单指在所述经处理的局部图像上进行的点击操作产生的;所述处理模块,还用于根据所述放大终止指令显示所述待处理图像。
第五方面,本申请实施例提供一种编码装置,包括:获取模块、编码模块和处理模块。
可选的,获取模块,获取待处理图像;获取多组视觉感官体验参数;编码模块,用于对所述待处理图像和所述多组视觉感官体验参数进行编码。
可选的,获取模块,用于获取待处理图像;对所述待处理图像进行划分得到多张备选局部图像;获取多组视觉感官体验参数,所述多组视觉感官体验参数和所述多张备选局部图像对应;处理模块,用于根据所述多组视觉感官体验参数分别对对应的备选局部图像进行处理以得到多张经处理的备选局部图像;编码模块,用于对所述待处理图像和所述多张经处理的备选局部图像进行编码。
在一种可能的实现方式中,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;所述处理模块,具体用于执行以下至少一种操作:当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;其中,所述对应局部图像是所述多张备选局部图像的其中之一。
在一种可能的实现方式中,所述处理模块,具体用于当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;其中,所述对应局部图像是所述多张备选局部图像的其中之一。
在一种可能的实现方式中,所述处理模块,具体用于获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;根据所述多张参考图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述处理模块,具体用于获取与所述待处理图像的相似度超过预设阈值的多张历史图像;根据所述多张历史图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述获取模块,具体用于根据第三预设规则获取所述多组视觉感官体验参数。
在一种可能的实现方式中,所述编码模块,具体用于对所述待处理图像和所述多张经处理的备选局部图像进行可分级视频编码以得到码流;或者,对所述待处理图像和所述多张经处理的备选局部图像进行图像压缩以得到图像文件。
第六方面,本申请提供一种编码器,包括:一个或多个处理器;非瞬时性计算机可读存储介质,耦合到所述处理器并存储由所述处理器执行的程序,其中所述程序在由所述处理器执行时,使得所述编码器执行根据第一至三方面中由编码侧执行的任一项所述的方法。
第七方面,本申请提供一种解码器,包括:一个或多个处理器;非瞬时性计算机可读存储介质,耦合到所述处理器并存储由所述处理器执行的程序,其中所述程序在由所述处理器执行时,使得所述解码器执行根据第一至三方面中由解码侧执行的任一项所述的方法。
第八方面,本申请提供一种非瞬时性计算机可读存储介质,包括程序代码,当其由计算机设备执行时,用于执行根据第一至三方面任一项所述的方法。
第九方面,本申请提供一种非瞬时性存储介质,包括根据第一至三方面任一项所述的方法中的比特流。
第十方面,本申请提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行如第一至三方面任一项所述的方法。
附图说明
图1A为本申请实施例的译码系统10的示例性框图;
图1B为本申请实施例的视频译码系统40的示例性框图;
图2为本申请实施例的视频编码器20的示例性框图;
图3为本申请实施例的视频解码器30的示例性框图;
图4为本申请实施例的视频译码设备400的示例性框图;
图5为本申请可分级视频编码的一个示例性的层级示意图;
图6为本申请增强层的编码方法的一个示例性的流程图;
图7为本申请图像处理方法的一个示例性的流程图;
图8为金字塔式划分方式的示意图;
图9为确定视觉感官体验参数的示意图;
图10为图像编码流程的示例图;
图11为图像解码流程的示例图;
图12a为对局部图像的处理效果的示例图;
图12b为对局部图像的处理效果的示例图;
图13为本申请图像处理方法的一个示例性的流程图;
图14为本申请图像处理方法的一个示例性的流程图;
图15为本申请实施例解码装置1500的一个示例性的结构示意图;
图16为本申请实施例编码装置1600的一个示例性的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合本申请中的附图,对本申请中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书实施例和权利要求书及附图中的术语“第一”、“第二”等仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元。方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
应当理解,在本申请中,“至少一个(项)”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系,例如,“A和/或B”可以表示:只存在A,只存在B以及同时存在A和B三种情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b或c中的至少一项(个),可以表示:a,b,c,“a和b”,“a和c”,“b和c”,或“a和b和c”,其中a,b,c可以是单个,也可以是多个。
视频编码通常是指处理形成视频或视频序列的图像序列。在视频编码领域,术语“图像(picture)”、“帧(frame)”或“图片(image)”可以用作同义词。视频编码(或通常称为编码)包括视频编码和视频解码两部分。视频编码在源侧执行,通常包括处理(例如,压缩)原始视频图像以减少表示该视频图像所需的数据量(从而更高效存储和/或传输)。视频解码在目的地侧执行,通常包括相对于编码器作逆处理,以重建视频图像。实施例涉及的视频图像(或通常称为图像)的“编码”应理解为视频图像或视频序列的“编码”或“解码”。编码部分和解码部分也合称为编解码(编码和解码,CODEC)。
在无损视频编码情况下,可以重建原始视频图像,即重建的视频图像与原始视频图像具有相同的质量(假设存储或传输期间没有传输损耗或其它数据丢失)。在有损视频编码情况下,通过量化等执行进一步压缩,来减少表示视频图像所需的数据量,而解码器侧无法完全重建视频图像,即重建的视频图像的质量比原始视频图像的质量较低或较差。
几个视频编码标准属于“有损混合型视频编解码”(即,将像素域中的空间和时间预测 与变换域中用于应用量化的2D变换编码结合)。视频序列中的每个图像通常分割成不重叠的块集合,通常在块级上进行编码。换句话说,编码器通常在块(视频块)级处理即编码视频,例如,通过空间(帧内)预测和时间(帧间)预测来产生预测块;从当前块(当前处理/待处理的块)中减去预测块,得到残差块;在变换域中变换残差块并量化残差块,以减少待传输(压缩)的数据量,而解码器侧将相对于编码器的逆处理部分应用于编码或压缩的块,以重建用于表示的当前块。另外,编码器需要重复解码器的处理步骤,使得编码器和解码器生成相同的预测(例如,帧内预测和帧间预测)和/或重建像素,用于处理,即编码后续块。
在以下译码系统10的实施例中,编码器20和解码器30根据图1A至图3进行描述。
图1A为本申请实施例的译码系统10的示例性框图,例如可以利用本申请技术的视频译码系统10(或简称为译码系统10)。视频译码系统10中的视频编码器20(或简称为编码器20)和视频解码器30(或简称为解码器30)代表可用于根据本申请中描述的各种示例执行各技术的设备等。
如图1A所示,译码系统10包括源设备12,源设备12用于将编码图像等编码图像数据21提供给用于对编码图像数据21进行解码的目的设备14。
源设备12包括编码器20,另外即可选地,可包括图像源16、图像预处理器等预处理器(或预处理单元)18、通信接口(或通信单元)22。
图像源16可包括或可以为任意类型的用于捕获现实世界图像等的图像捕获设备,和/或任意类型的图像生成设备,例如用于生成计算机动画图像的计算机图形处理器或任意类型的用于获取和/或提供现实世界图像、计算机生成图像(例如,屏幕内容、虚拟现实(virtual reality,VR)图像和/或其任意组合(例如增强现实(augmented reality,AR)图像)的设备。所述图像源可以为存储上述图像中的任意图像的任意类型的内存或存储器。
为了区分预处理器(或预处理单元)18执行的处理,图像(或图像数据)17也可称为原始图像(或原始图像数据)17。
预处理器18用于接收原始图像数据17,并对原始图像数据17进行预处理,得到预处理图像(或预处理图像数据)19。例如,预处理器18执行的预处理可包括修剪、颜色格式转换(例如从RGB转换为YCbCr)、调色或去噪。可以理解的是,预处理单元18可以为可选组件。
视频编码器(或编码器)20用于接收预处理图像数据19并提供编码图像数据21(下面将根据图2等进一步描述)。
源设备12中的通信接口22可用于:接收编码图像数据21并通过通信信道13向目的设备14等另一设备或任何其它设备发送编码图像数据21(或其它任意处理后的版本),以便存储或直接重建。
目的设备14包括解码器30,另外即可选地,可包括通信接口(或通信单元)28、后处理器(或后处理单元)32和显示设备34。
目的设备14中的通信接口28用于直接从源设备12或从存储设备等任意其它源设备接收编码图像数据21(或其它任意处理后的版本),例如,存储设备为编码图像数据存储设备,并将编码图像数据21提供给解码器30。
通信接口22和通信接口28可用于通过源设备12与目的设备14之间的直连通信链 路,例如直接有线或无线连接等,或者通过任意类型的网络,例如有线网络、无线网络或其任意组合、任意类型的私网和公网或其任意类型的组合,发送或接收编码图像数据(或编码数据)21。
例如,通信接口22可用于将编码图像数据21封装为报文等合适的格式,和/或使用任意类型的传输编码或处理来处理所述编码后的图像数据,以便在通信链路或通信网络上进行传输。
通信接口28与通信接口22对应,例如,可用于接收传输数据,并使用任意类型的对应传输解码或处理和/或解封装对传输数据进行处理,得到编码图像数据21。
通信接口22和通信接口28均可配置为如图1A中从源设备12指向目的设备14的对应通信信道13的箭头所指示的单向通信接口,或双向通信接口,并且可用于发送和接收消息等,以建立连接,确认并交换与通信链路和/或例如编码后的图像数据传输等数据传输相关的任何其它信息,等等。
视频解码器(或解码器)30用于接收编码图像数据21并提供解码图像数据(或解码图像数据)31(下面将根据图3等进一步描述)。
后处理器32用于对解码后的图像等解码图像数据31(也称为重建后的图像数据)进行后处理,得到后处理后的图像等后处理图像数据33。后处理单元32执行的后处理可以包括例如颜色格式转换(例如从YCbCr转换为RGB)、调色、修剪或重采样,或者用于产生供显示设备34等显示的解码图像数据31等任何其它处理。
显示设备34用于接收后处理图像数据33,以向用户或观看者等显示图像。显示设备34可以为或包括任意类型的用于表示重建后图像的显示器,例如,集成或外部显示屏或显示器。例如,显示屏可包括液晶显示器(liquid crystal display,LCD)、有机发光二极管(organic light emitting diode,OLED)显示器、等离子显示器、投影仪、微型LED显示器、硅基液晶显示器(liquid crystal on silicon,LCoS)、数字光处理器(digital light processor,DLP)或任意类型的其它显示屏。
尽管图1A示出了源设备12和目的设备14作为独立的设备,但设备实施例也可以同时包括源设备12和目的设备14或同时包括源设备12和目的设备14的功能,即同时包括源设备12或对应功能和目的设备14或对应功能。在这些实施例中,源设备12或对应功能和目的设备14或对应功能可以使用相同硬件和/或软件或通过单独的硬件和/或软件或其任意组合来实现。
根据描述,图1A所示的源设备12和/或目的设备14中的不同单元或功能的存在和(准确)划分可能根据实际设备和应用而有所不同,这对技术人员来说是显而易见的。
编码器20(例如视频编码器20)或解码器30(例如视频解码器30)或两者都可通过如图1B所示的处理电路实现,例如一个或多个微处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application-specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)、离散逻辑、硬件、视频编码专用处理器或其任意组合。编码器20可以通过处理电路46实现,以包含参照图2编码器20论述的各种模块和/或本文描述的任何其它编码器系统或子系统。解码器30可以通过处理电路46实现,以包含参照图3解码器30论述的各种模块和/或本文描述的任何其它解码器系统或子系统。所述处理电路46可用于执行下文论述的各种操作。如图5所示,如果部 分技术在软件中实施,则设备可以将软件的指令存储在合适的计算机可读存储介质中,并且使用一个或多个处理器在硬件中执行指令,从而执行本发明技术。视频编码器20和视频解码器30中的其中一个可作为组合编解码器(encoder/decoder,CODEC)的一部分集成在单个设备中,如图1B所示。
源设备12和目的设备14可包括各种设备中的任一种,包括任意类型的手持设备或固定设备,例如,笔记本电脑或膝上型电脑、手机、智能手机、平板或平板电脑、相机、台式计算机、机顶盒、电视机、显示设备、数字媒体播放器、视频游戏控制台、视频流设备(例如,内容业务服务器或内容分发服务器)、广播接收设备、广播发射设备,等等,并可以不使用或使用任意类型的操作系统。在一些情况下,源设备12和目的设备14可配备用于无线通信的组件。因此,源设备12和目的设备14可以是无线通信设备。
在一些情况下,图1A所示的视频译码系统10仅仅是示例性的,本申请提供的技术可适用于视频编码设置(例如,视频编码或视频解码),这些设置不一定包括编码设备与解码设备之间的任何数据通信。在其它示例中,数据从本地存储器中检索,通过网络发送,等等。视频编码设备可以对数据进行编码并将数据存储到存储器中,和/或视频解码设备可以从存储器中检索数据并对数据进行解码。在一些示例中,编码和解码由相互不通信而只是编码数据到存储器和/或从存储器中检索并解码数据的设备来执行。
图1B为本申请实施例的视频译码系统40的示例性框图,如图1B所示,视频译码系统40可以包含成像设备41、视频编码器20、视频解码器30(和/或藉由处理电路46实施的视频编/解码器)、天线42、一个或多个处理器43、一个或多个内存存储器44和/或显示设备45。
如图1B所示,成像设备41、天线42、处理电路46、视频编码器20、视频解码器30、处理器43、内存存储器44和/或显示设备45能够互相通信。在不同实例中,视频译码系统40可以只包含视频编码器20或只包含视频解码器30。
在一些实例中,天线42可以用于传输或接收视频数据的经编码比特流。另外,在一些实例中,显示设备45可以用于呈现视频数据。处理电路46可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。视频译码系统40也可以包含可选的处理器43,该可选处理器43类似地可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。另外,内存存储器44可以是任何类型的存储器,例如易失性存储器(例如,静态随机存取存储器(static random access memory,SRAM)、动态随机存储器(dynamic random access memory,DRAM)等)或非易失性存储器(例如,闪存等)等。在非限制性实例中,内存存储器44可以由超速缓存内存实施。在其它实例中,处理电路46可以包含存储器(例如,缓存等)用于实施图像缓冲器等。
在一些实例中,通过逻辑电路实施的视频编码器20可以包含(例如,通过处理电路46或内存存储器44实施的)图像缓冲器和(例如,通过处理电路46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理电路46实施的视频编码器20,以实施参照图2和/或本文中所描述的任何其它编码器系统或子系统所论述的各种模块。逻辑电路可以用于执行本文所论述的各种操作。
在一些实例中,视频解码器30可以以类似方式通过处理电路46实施,以实施参照图 3的视频解码器30和/或本文中所描述的任何其它解码器系统或子系统所论述的各种模块。在一些实例中,逻辑电路实施的视频解码器30可以包含(通过处理电路46或内存存储器44实施的)图像缓冲器和(例如,通过处理电路46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理电路46实施的视频解码器30,以实施参照图3和/或本文中所描述的任何其它解码器系统或子系统所论述的各种模块。
在一些实例中,天线42可以用于接收视频数据的经编码比特流。如所论述,经编码比特流可以包含本文所论述的与编码视频帧相关的数据、指示符、索引值、模式选择数据等,例如与编码分割相关的数据(例如,变换系数或经量化变换系数,(如所论述的)可选指示符,和/或定义编码分割的数据)。视频译码系统40还可包含耦合至天线42并用于解码经编码比特流的视频解码器30。显示设备45用于呈现视频帧。
应理解,本申请实施例中对于参考视频编码器20所描述的实例,视频解码器30可以用于执行相反过程。关于信令语法元素,视频解码器30可以用于接收并解析这种语法元素,相应地解码相关视频数据。在一些例子中,视频编码器20可以将语法元素熵编码成经编码视频比特流。在此类实例中,视频解码器30可以解析这种语法元素,并相应地解码相关视频数据。
为便于描述,参考通用视频编码(versatile video coding,VVC)参考软件或由ITU-T视频编码专家组(video coding experts group,VCEG)和ISO/IEC运动图像专家组(motion picture experts group,MPEG)的视频编码联合工作组(joint collaboration team on video coding,JCT-VC)开发的高性能视频编码(high-efficiency video coding,HEVC)描述本发明实施例。本领域普通技术人员理解本发明实施例不限于HEVC或VVC。
编码器和编码方法
图2为本申请实施例的视频编码器20的示例性框图。如图2所示,视频编码器20包括输入端(或输入接口)201、残差计算单元204、变换处理单元206、量化单元208、反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器(decoded picture buffer,DPB)230、模式选择单元260、熵编码单元270和输出端(或输出接口)272。模式选择单元260可包括帧间预测单元244、帧内预测单元254和分割单元262。帧间预测单元244可包括运动估计单元和运动补偿单元(未示出)。图2所示的视频编码器20也可称为混合型视频编码器或基于混合型视频编解码器的视频编码器。
残差计算单元204、变换处理单元206、量化单元208和模式选择单元260组成编码器20的前向信号路径,而反量化单元210、逆变换处理单元212、重建单元214、缓冲器216、环路滤波器220、解码图像缓冲器(decoded picture buffer,DPB)230、帧间预测单元244和帧内预测单元254组成编码器的后向信号路径,其中编码器20的后向信号路径对应于解码器的信号路径(参见图3中的解码器30)。反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器230、帧间预测单元244和帧内预测单元254还组成视频编码器20的“内置解码器”。
图像和图像分割(图像和块)
编码器20可用于通过输入端201等接收图像(或图像数据)17,例如,形成视频或视频序列的图像序列中的图像。接收的图像或图像数据也可以是预处理后的图像(或预处 理后的图像数据)19。为简单起见,以下描述使用图像17。图像17也可称为当前图像或待编码的图像(尤其是在视频编码中将当前图像与其它图像区分开时,其它图像例如同一视频序列,即也包括当前图像的视频序列,中的之前编码后图像和/或解码后图像)。
(数字)图像为或可以视为具有强度值的像素点组成的二维阵列或矩阵。阵列中的像素点也可以称为像素(pixel或pel)(图像元素的简称)。阵列或图像在水平方向和垂直方向(或轴线)上的像素点数量决定了图像的大小和/或分辨率。为了表示颜色,通常采用三个颜色分量,即图像可以表示为或包括三个像素点阵列。在RBG格式或颜色空间中,图像包括对应的红色、绿色和蓝色像素点阵列。但是,在视频编码中,每个像素通常以亮度/色度格式或颜色空间表示,例如YCbCr,包括Y指示的亮度分量(有时也用L表示)以及Cb、Cr表示的两个色度分量。亮度(luma)分量Y表示亮度或灰度水平强度(例如,在灰度等级图像中两者相同),而两个色度(chrominance,简写为chroma)分量Cb和Cr表示色度或颜色信息分量。相应地,YCbCr格式的图像包括亮度像素点值(Y)的亮度像素点阵列和色度值(Cb和Cr)的两个色度像素点阵列。RGB格式的图像可以转换或变换为YCbCr格式,反之亦然,该过程也称为颜色变换或转换。如果图像是黑白的,则该图像可以只包括亮度像素点阵列。相应地,图像可以为例如单色格式的亮度像素点阵列或4:2:0、4:2:2和4:4:4彩色格式的亮度像素点阵列和两个相应的色度像素点阵列。
在一个实施例中,视频编码器20的实施例可包括图像分割单元(图2中未示出),用于将图像17分割成多个(通常不重叠)图像块(编码树单元)203。这些块在H.265/HEVC和VVC标准中也可以称为根块、宏块(H.264/AVC)或编码树块(Coding Tree Block,CTB),或编码树单元(Coding Tree Unit,CTU)。分割单元可用于对视频序列中的所有图像使用相同的块大小和使用限定块大小的对应网格,或在图像或图像子集或图像组之间改变块大小,并将每个图像分割成对应块。
在其它实施例中,视频编码器可用于直接接收图像17的块203,例如,组成所述图像17的一个、几个或所有块。图像块203也可以称为当前图像块或待编码图像块。
与图像17一样,图像块203同样是或可认为是具有强度值(像素点值)的像素点组成的二维阵列或矩阵,但是图像块203的比图像17的小。换句话说,块203可包括一个像素点阵列(例如,单色图像17情况下的亮度阵列或彩色图像情况下的亮度阵列或色度阵列)或三个像素点阵列(例如,彩色图像17情况下的一个亮度阵列和两个色度阵列)或根据所采用的颜色格式的任何其它数量和/或类型的阵列。块203的水平方向和垂直方向(或轴线)上的像素点数量限定了块203的大小。相应地,块可以为M×N(M列×N行)个像素点阵列,或M×N个变换系数阵列等。
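上文提到图像在视频编码中通常以亮度/色度格式（例如YCbCr）而非RGB表示。下面给出一个示意性的Python片段（仅为便于理解的示例，系数采用常见的BT.601全范围近似，并非本申请方案的组成部分），演示RGB图像到YCbCr的颜色变换：

```python
import numpy as np

def rgb_to_ycbcr_bt601(rgb: np.ndarray) -> np.ndarray:
    """把8比特RGB图像(H, W, 3)转换为YCbCr，系数为BT.601全范围近似。"""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b    # 亮度分量Y
    cb = 0.564 * (b - y) + 128.0             # 色度分量Cb
    cr = 0.713 * (r - y) + 128.0             # 色度分量Cr
    return np.clip(np.stack([y, cb, cr], axis=-1), 0, 255).astype(np.uint8)
```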
在一个实施例中,图2所示的视频编码器20用于逐块对图像17进行编码,例如,对每个块203执行编码和预测。
在一个实施例中,图2所示的视频编码器20还可以用于使用片(也称为视频片)分割和/或编码图像,其中图像可以使用一个或多个片(通常为不重叠的)进行分割或编码。每个片可包括一个或多个块(例如,编码树单元CTU)或一个或多个块组(例如H.265/HEVC/VVC标准中的编码区块(tile)和VVC标准中的砖(brick)。
在一个实施例中,图2所示的视频编码器20还可以用于使用片/编码区块组(也称为视频编码区块组)和/或编码区块(也称为视频编码区块)对图像进行分割和/或编码,其 中图像可以使用一个或多个片/编码区块组(通常为不重叠的)进行分割或编码,每个片/编码区块组可包括一个或多个块(例如CTU)或一个或多个编码区块等,其中每个编码区块可以为矩形等形状,可包括一个或多个完整或部分块(例如CTU)。
残差计算
残差计算单元204用于通过如下方式根据图像块203和预测块265来计算残差块205(后续详细介绍了预测块265):例如,逐个像素点(逐个像素)从图像块203的像素点值中减去预测块265的像素点值,得到像素域中的残差块205。
变换
变换处理单元206用于对残差块205的像素点值执行离散余弦变换(discrete cosine transform,DCT)或离散正弦变换(discrete sine transform,DST)等,得到变换域中的变换系数207。变换系数207也可称为变换残差系数,表示变换域中的残差块205。
变换处理单元206可用于应用DCT/DST的整数化近似,例如为H.265/HEVC指定的变换。与正交DCT变换相比,这种整数化近似通常由某一因子按比例缩放。为了维持经过正变换和逆变换处理的残差块的范数,使用其它比例缩放因子作为变换过程的一部分。比例缩放因子通常是根据某些约束条件来选择的,例如比例缩放因子是用于移位运算的2的幂、变换系数的位深度、准确性与实施成本之间的权衡等。例如,在编码器20侧通过逆变换处理单元212为逆变换(以及在解码器30侧通过例如逆变换处理单元312为对应逆变换)指定具体的比例缩放因子,以及相应地,可以在编码器20侧通过变换处理单元206为正变换指定对应比例缩放因子。
在一个实施例中,视频编码器20(对应地,变换处理单元206)可用于输出一种或多种变换的类型等变换参数,例如,直接输出或由熵编码单元270进行编码或压缩后输出,例如使得视频解码器30可接收并使用变换参数进行解码。
量化
量化单元208用于通过例如标量量化或矢量量化对变换系数207进行量化,得到量化变换系数209。量化变换系数209也可称为量化残差系数209。
量化过程可减少与部分或全部变换系数207有关的位深度。例如,可在量化期间将n位变换系数向下舍入到m位变换系数,其中n大于m。可通过调整量化参数(quantization parameter,QP)修改量化程度。例如,对于标量量化,可以应用不同程度的比例来实现较细或较粗的量化。较小量化步长对应较细量化,而较大量化步长对应较粗量化。可通过量化参数(quantization parameter,QP)指示合适的量化步长。例如,量化参数可以为合适的量化步长的预定义集合的索引。例如,较小的量化参数可对应精细量化(较小量化步长),较大的量化参数可对应粗糙量化(较大量化步长),反之亦然。量化可包括除以量化步长,而反量化单元210等执行的对应或逆解量化可包括乘以量化步长。根据例如HEVC一些标准的实施例可用于使用量化参数来确定量化步长。一般而言,可以根据量化参数使用包含除法的等式的定点近似来计算量化步长。可以引入其它比例缩放因子来进行量化和解量化,以恢复可能由于在用于量化步长和量化参数的等式的定点近似中使用的比例而修改的残差块的范数。在一种示例性实现方式中,可以合并逆变换和解量化的比例。或者,可以使用自定义量化表并在比特流中等将其从编码器向解码器指示。量化是有损操作,其中量化步长越大,损耗越大。
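为直观说明量化参数QP与量化步长之间“QP每增加6、步长大约加倍”的对应关系，下面给出一个示意性的Python片段（这是HEVC/H.264中常用的近似关系，省略了标准中实际使用的定点缩放与移位实现）：

```python
import numpy as np

def qp_to_step(qp: int) -> float:
    """近似关系：量化步长随QP指数增长，QP每增加6步长加倍。"""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs: np.ndarray, qp: int) -> np.ndarray:
    """标量量化：除以量化步长后取整（有损，QP越大损耗越大）。"""
    return np.round(coeffs / qp_to_step(qp)).astype(np.int32)

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    """反量化：乘以量化步长，得到解量化系数。"""
    return levels * qp_to_step(qp)
```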
在一个实施例中,视频编码器20(对应地,量化单元208)可用于输出量化参数(quantization parameter,QP),例如,直接输出或由熵编码单元270进行编码或压缩后输出,例如使得视频解码器30可接收并使用量化参数进行解码。
反量化
反量化单元210用于对量化系数执行量化单元208的反量化,得到解量化系数211,例如,根据或使用与量化单元208相同的量化步长执行与量化单元208所执行的量化方案的反量化方案。解量化系数211也可称为解量化残差系数211,对应于变换系数207,但是由于量化造成损耗,反量化系数211通常与变换系数不完全相同。
逆变换
逆变换处理单元212用于执行变换处理单元206执行的变换的逆变换,例如,逆离散余弦变换(discrete cosine transform,DCT)或逆离散正弦变换(discrete sine transform,DST),以在像素域中得到重建残差块213(或对应的解量化系数213)。重建残差块213也可称为变换块213。
重建
重建单元214(例如,求和器214)用于将变换块213(即重建残差块213)添加到预测块265,以在像素域中得到重建块215,例如,将重建残差块213的像素点值和预测块265的像素点值相加。
滤波
环路滤波器单元220(或简称“环路滤波器”220)用于对重建块215进行滤波,得到滤波块221,或通常用于对重建像素点进行滤波以得到滤波像素点值。例如,环路滤波器单元用于顺利进行像素转变或提高视频质量。环路滤波器单元220可包括一个或多个环路滤波器,例如去块滤波器、像素点自适应偏移(sample-adaptive offset,SAO)滤波器或一个或多个其它滤波器,例如自适应环路滤波器(adaptive loop filter,ALF)、噪声抑制滤波器(noise suppression filter,NSF)或任意组合。例如,环路滤波器单元220可以包括去块滤波器、SAO滤波器和ALF滤波器。滤波过程的顺序可以是去块滤波器、SAO滤波器和ALF滤波器。再例如,增加一个称为具有色度缩放的亮度映射(luma mapping with chroma scaling,LMCS)(即自适应环内整形器)的过程。该过程在去块之前执行。再例如,去块滤波过程也可以应用于内部子块边缘,例如仿射子块边缘、ATMVP子块边缘、子块变换(sub-block transform,SBT)边缘和内子部分(intra sub-partition,ISP)边缘。尽管环路滤波器单元220在图2中示为环路滤波器,但在其它配置中,环路滤波器单元220可以实现为环后滤波器。滤波块221也可称为滤波重建块221。
在一个实施例中,视频编码器20(对应地,环路滤波器单元220)可用于输出环路滤波器参数(例如SAO滤波参数、ALF滤波参数或LMCS参数),例如,直接输出或由熵编码单元270进行熵编码后输出,例如使得解码器30可接收并使用相同或不同的环路滤波器参数进行解码。
解码图像缓冲器
解码图像缓冲器(decoded picture buffer,DPB)230可以是存储参考图像数据以供视频编码器20在编码视频数据时使用的参考图像存储器。DPB 230可以由多种存储器设备中的任一种形成,例如动态随机存取存储器(dynamic random access memory,DRAM), 包括同步DRAM(synchronous DRAM,SDRAM)、磁阻RAM(magnetoresistive RAM,MRAM)、电阻RAM(resistive RAM,RRAM)或其它类型的存储设备。解码图像缓冲器230可用于存储一个或多个滤波块221。解码图像缓冲器230还可用于存储同一当前图像或例如之前的重建块等不同图像的其它之前的滤波块,例如之前重建和滤波的块221,并可提供完整的之前重建即解码图像(和对应参考块和像素点)和/或部分重建的当前图像(和对应参考块和像素点),例如用于帧间预测。解码图像缓冲器230还可用于存储一个或多个未经滤波的重建块215,或一般存储未经滤波的重建像素点,例如,未被环路滤波单元220滤波的重建块215,或未进行任何其它处理的重建块或重建像素点。
模式选择(分割和预测)
模式选择单元260包括分割单元262、帧间预测单元244和帧内预测单元254,用于从解码图像缓冲器230或其它缓冲器(例如,列缓冲器,图中未显示)接收或获得原始块203(当前图像17的当前块203)和重建块数据等原始图像数据,例如,同一(当前)图像和/或一个或多个之前解码图像的滤波和/或未经滤波的重建像素点或重建块。重建块数据用作帧间预测或帧内预测等预测所需的参考图像数据,以得到预测块265或预测值265。
模式选择单元260可用于为当前块(包括不分割)的预测模式(例如帧内或帧间预测模式)确定或选择一种分割,生成对应的预测块265,以对残差块205进行计算和对重建块215进行重建。
在一个实施例中,模式选择单元260可用于选择分割和预测模式(例如,从模式选择单元260支持的或可用的预测模式中),所述预测模式提供最佳匹配或者说最小残差(最小残差是指传输或存储中更好的压缩),或者提供最小信令开销(最小信令开销是指传输或存储中更好的压缩),或者同时考虑或平衡以上两者。模式选择单元260可用于根据码率失真优化(rate distortion Optimization,RDO)确定分割和预测模式,即选择提供最小码率失真优化的预测模式。本文“最佳”、“最低”、“最优”等术语不一定指总体上“最佳”、“最低”、“最优”的,但也可以指满足终止或选择标准的情况,例如,超过或低于阈值的值或其他限制可能导致“次优选择”,但会降低复杂度和处理时间。
换言之,分割单元262可用于将视频序列中的图像分割为编码树单元(coding tree unit,CTU)序列,CTU 203可进一步被分割成较小的块部分或子块(再次形成块),例如,通过迭代使用四叉树(quad-tree partitioning,QT)分割、二叉树(binary-tree partitioning,BT)分割或三叉树(triple-tree partitioning,TT)分割或其任意组合,并且用于例如对块部分或子块中的每一个执行预测,其中模式选择包括选择分割块203的树结构和选择应用于块部分或子块中的每一个的预测模式。
下文将详细地描述由视频编码器20执行的分割(例如,由分割单元262执行)和预测处理(例如,由帧间预测单元244和帧内预测单元254执行)。
分割
分割单元262可将一个编码树单元203分割(或划分)为较小的部分,例如正方形或矩形形状的小块。对于具有三个像素点阵列的图像,一个CTU由N×N个亮度像素点块和两个对应的色度像素点块组成。
H.265/HEVC视频编码标准把一帧图像分割成互不重叠的CTU,CTU的大小可设置为64×64(CTU的大小也可设置为其它值,如JVET参考软件JEM中CTU大小增大为 128×128或256×256)。64×64的CTU包含由64列、每列64个像素的矩形像素点阵,每个像素包含亮度分量或/和色度分量。
H.265使用基于QT的CTU划分方法,将CTU作为QT的根节点(root),按照QT的划分方式,将CTU递归划分成若干个叶节点(leaf node)。一个节点对应于一个图像区域,节点如果不划分,则该节点称为叶节点,其对应的图像区域即为一个CU;如果节点继续划分,则节点对应的图像区域可以被划分成四个相同大小的区域(其长和宽各为被划分区域的一半),每个区域对应一个节点,需要分别确定这些节点是否还会划分。一个节点是否划分由码流中该节点对应的划分标志位split_cu_flag指示。一个节点A划分一次得到4个节点Bi,i=0~3,Bi称为A的子节点,A称为Bi的父节点。根节点的QT层级(qtDepth)为0,节点的QT层级是该节点的父节点的四QT层级加1。
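为帮助理解上述QT递归划分过程，下面给出一个示意性的Python实现（split_cu_flag在此用一个回调函数should_split代替，划分准则仅为假设，并非任何标准的规范性实现）：

```python
def quadtree_partition(x, y, size, qt_depth, should_split, min_size=8):
    """递归四叉树划分：返回叶节点(CU)列表，每个元素为(x, y, size, qt_depth)。"""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        leaves = []
        for dy in (0, half):          # 划分为四个相同大小的子区域
            for dx in (0, half):
                leaves += quadtree_partition(x + dx, y + dy, half,
                                             qt_depth + 1, should_split, min_size)
        return leaves
    return [(x, y, size, qt_depth)]   # 不再划分，作为叶节点(CU)

# 用法示例：把一个64×64的CTU按“节点边长大于16就继续划分”的假设规则划分
cus = quadtree_partition(0, 0, 64, 0, lambda x, y, s: s > 16)
```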
H.265/HEVC标准中,对于YUV4:2:0格式的图像,一个CTU包含一个亮度块和两个色度块,亮度块和色度块可以使用相同的方式划分,称作亮度色度联合编码树。VVC中,如果当前帧为I帧,则当一个CTU为帧内编码帧(I帧)中的预设大小(如64×64)的节点时,该节点包含的亮度块通过亮度编码树被划分成一组只包含亮度块的编码单元,该节点包含的色度块通过色度编码树被划分成一组只包含色度块的编码单元;亮度编码树和色度编码树的划分相互独立。这种亮度块和色度块使用独立的编码树,称为分离树(separate trees)。在H.265中,CU包含亮度像素和色度像素;在H.266、AVS3等标准中,除了具有同时包含亮度像素和色度像素的CU之外,还存在只包含亮度像素的亮度CU和只包含色度像素的色度CU。
如上所述,视频编码器20用于从(预定的)预测模式集合中确定或选择最好或最优的预测模式。预测模式集合可包括例如帧内预测模式和/或帧间预测模式。
帧内预测
帧内预测模式集合可包括35种不同的帧内预测模式,例如,像DC(或均值)模式和平面模式的非方向性模式,或如HEVC定义的方向性模式,或者可包括67种不同的帧内预测模式,例如,像DC(或均值)模式和平面模式的非方向性模式,或如VVC中定义的方向性模式。例如,若干传统角度帧内预测模式自适应地替换为VVC中定义的非正方形块的广角帧内预测模式。又例如,为了避免DC预测的除法运算,仅使用较长边来计算非正方形块的平均值。并且,平面模式的帧内预测结果还可以使用位置决定的帧内预测组合(position dependent intra prediction combination,PDPC)方法修改。
帧内预测单元254用于根据帧内预测模式集合中的帧内预测模式使用同一当前图像的相邻块的重建像素点来生成帧内预测块265。
帧内预测单元254(或通常为模式选择单元260)还用于输出帧内预测参数(或通常为指示块的选定帧内预测模式的信息)以语法元素266的形式发送到熵编码单元270,以包含到编码图像数据21中,从而视频解码器30可执行操作,例如接收并使用用于解码的预测参数。
帧间预测
在可能的实现中,帧间预测模式集合取决于可用参考图像(即,例如前述存储在DBP230中的至少部分之前解码的图像)和其它帧间预测参数,例如取决于是否使用整个参考图像或只使用参考图像的一部分,例如当前块的区域附近的搜索窗口区域,来搜索最佳匹 配参考块,和/或例如取决于是否执行半像素、四分之一像素和/或16分之一内插的像素内插。
除上述预测模式外,还可以采用跳过模式和/或直接模式。
例如,扩展合并预测,这种模式的合并候选列表由以下五种候选类型按顺序组成:来自空间相邻CU的空间MVP、来自并置CU的时间MVP、来自FIFO表的基于历史的MVP、成对平均MVP和零MV。可以使用基于双边匹配的解码器侧运动矢量修正(decoder side motion vector refinement,DMVR)来增加合并模式的MV的准确度。带有MVD的合并模式(merge mode with MVD,MMVD)来自有运动矢量差异的合并模式。在发送跳过标志和合并标志之后立即发送MMVD标志,以指定CU是否使用MMVD模式。可以使用CU级自适应运动矢量分辨率(adaptive motion vector resolution,AMVR)方案。AMVR支持CU的MVD以不同的精度进行编码。根据当前CU的预测模式,自适应地选择当前CU的MVD。当CU以合并模式进行编码时,可以将合并的帧间/帧内预测(combined inter/intra prediction,CIIP)模式应用于当前CU。对帧间和帧内预测信号进行加权平均,得到CIIP预测。对于仿射运动补偿预测,通过2个控制点(4参数)或3个控制点(6参数)运动矢量的运动信息来描述块的仿射运动场。基于子块的时间运动矢量预测(subblock-based temporal motion vector prediction,SbTMVP),与HEVC中的时间运动矢量预测(temporal motion vector prediction,TMVP)类似,但预测的是当前CU内的子CU的运动矢量。双向光流(bi-directional optical flow,BDOF)以前称为BIO,是一种减少计算的简化版本,特别是在乘法次数和乘数大小方面的计算。在三角形分割模式中,CU以对角线划分和反对角线划分两种划分方式被均匀划分为两个三角形部分。此外,双向预测模式在简单平均的基础上进行了扩展,以支持两个预测信号的加权平均。
帧间预测单元244可包括运动估计(motion estimation,ME)单元和运动补偿(motion compensation,MC)单元(两者在图2中未示出)。运动估计单元可用于接收或获取图像块203(当前图像17的当前图像块203)和解码图像231,或至少一个或多个之前重建块,例如,一个或多个其它/不同之前解码图像231的重建块,来进行运动估计。例如,视频序列可包括当前图像和之前的解码图像231,或换句话说,当前图像和之前的解码图像231可以为形成视频序列的图像序列的一部分或形成该图像序列。
例如,编码器20可用于从多个其它图像中的同一或不同图像的多个参考块中选择参考块,并将参考图像(或参考图像索引)和/或参考块的位置(x、y坐标)与当前块的位置之间的偏移(空间偏移)作为帧间预测参数提供给运动估计单元。该偏移也称为运动矢量(motion vector,MV)。
运动补偿单元用于获取,例如接收,帧间预测参数,并根据或使用该帧间预测参数执行帧间预测,得到帧间预测块246。由运动补偿单元执行的运动补偿可能包含根据通过运动估计确定的运动/块矢量来提取或生成预测块,还可能包括对子像素精度执行内插。内插滤波可从已知像素的像素点中产生其它像素的像素点,从而潜在地增加可用于对图像块进行编码的候选预测块的数量。一旦接收到当前图像块的PU对应的运动矢量时,运动补偿单元可在其中一个参考图像列表中定位运动矢量指向的预测块。
运动补偿单元还可以生成与块和视频片相关的语法元素,以供视频解码器30在解码视频片的图像块时使用。此外,或者作为片和相应语法元素的替代,可以生成或使用编码 区块组和/或编码区块以及相应语法元素。
熵编码
熵编码单元270用于将熵编码算法或方案(例如,可变长度编码(variable length coding,VLC)方案、上下文自适应VLC方案(context adaptive VLC,CALVC)、算术编码方案、二值化算法、上下文自适应二进制算术编码(context adaptive binary arithmetic coding,CABAC)、基于语法的上下文自适应二进制算术编码(syntax-based context-adaptive binary arithmetic coding,SBAC)、概率区间分割熵(probability interval partitioning entropy,PIPE)编码或其它熵编码方法或技术)应用于量化残差系数209、帧间预测参数、帧内预测参数、环路滤波器参数和/或其它语法元素,得到可以通过输出端272以编码比特流21等形式输出的编码图像数据21,使得视频解码器30等可以接收并使用用于解码的参数。可将编码比特流21传输到视频解码器30,或将其保存在存储器中稍后由视频解码器30传输或检索。
视频编码器20的其它结构变体可用于对视频流进行编码。例如,基于非变换的编码器20可以在某些块或帧没有变换处理单元206的情况下直接量化残差信号。在另一种实现方式中,编码器20可以具有组合成单个单元的量化单元208和反量化单元210。
解码器和解码方法
图3为本申请实施例的视频解码器30的示例性框图。视频解码器30用于接收例如由编码器20编码的编码图像数据21(例如编码比特流21),得到解码图像331。编码图像数据或比特流包括用于解码所述编码图像数据的信息,例如表示编码视频片(和/或编码区块组或编码区块)的图像块的数据和相关的语法元素。
在图3的示例中，解码器30包括熵解码单元304、反量化单元310、逆变换处理单元312、重建单元314(例如求和器314)、环路滤波器320、解码图像缓冲器(DPB)330、模式应用单元360、帧间预测单元344和帧内预测单元354。帧间预测单元344可以为或包括运动补偿单元。在一些示例中，视频解码器30可执行大体上与参照图2的视频编码器20描述的编码过程相反的解码过程。
如编码器20所述，反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器DPB230、帧间预测单元244和帧内预测单元254还组成视频编码器20的“内置解码器”。相应地，反量化单元310在功能上可与反量化单元210相同，逆变换处理单元312在功能上可与逆变换处理单元212相同，重建单元314在功能上可与重建单元214相同，环路滤波器320在功能上可与环路滤波器220相同，解码图像缓冲器330在功能上可与解码图像缓冲器230相同。因此，视频编码器20的相应单元和功能的解释相应地适用于视频解码器30的相应单元和功能。
熵解码
熵解码单元304用于解析比特流21(或一般为编码图像数据21)并对编码图像数据21执行熵解码,得到量化系数309和/或解码后的编码参数(图3中未示出)等,例如帧间预测参数(例如参考图像索引和运动矢量)、帧内预测参数(例如帧内预测模式或索引)、变换参数、量化参数、环路滤波器参数和/或其它语法元素等中的任一个或全部。熵解码单元304可用于应用编码器20的熵编码单元270的编码方案对应的解码算法或方案。熵解码单元304还可用于向模式应用单元360提供帧间预测参数、帧内预测参数和/或其它语 法元素,以及向解码器30的其它单元提供其它参数。视频解码器30可以接收视频片和/或视频块级的语法元素。此外,或者作为片和相应语法元素的替代,可以接收或使用编码区块组和/或编码区块以及相应语法元素。
反量化
反量化单元310可用于从编码图像数据21(例如通过熵解码单元304解析和/或解码)接收量化参数(quantization parameter,QP)(或一般为与反量化相关的信息)和量化系数,并基于所述量化参数对所述解码的量化系数309进行反量化以获得反量化系数311,所述反量化系数311也可以称为变换系数311。反量化过程可包括使用视频编码器20为视频片中的每个视频块计算的量化参数来确定量化程度,同样也确定需要执行的反量化的程度。
逆变换
逆变换处理单元312可用于接收解量化系数311,也称为变换系数311,并对解量化系数311应用变换以得到像素域中的重建残差块213。重建残差块213也可称为变换块313。变换可以为逆变换,例如逆DCT、逆DST、逆整数变换或概念上类似的逆变换过程。逆变换处理单元312还可以用于从编码图像数据21(例如通过熵解码单元304解析和/或解码)接收变换参数或相应信息,以确定应用于解量化系数311的变换。
重建
重建单元314(例如,求和器314)用于将重建残差块313添加到预测块365,以在像素域中得到重建块315,例如,将重建残差块313的像素点值和预测块365的像素点值相加。
滤波
环路滤波器单元320(在编码环路中或之后)用于对重建块315进行滤波,得到滤波块321,从而顺利进行像素转变或提高视频质量等。环路滤波器单元320可包括一个或多个环路滤波器,例如去块滤波器、像素点自适应偏移(sample-adaptive offset,SAO)滤波器或一个或多个其它滤波器,例如自适应环路滤波器(adaptive loop filter,ALF)、噪声抑制滤波器(noise suppression filter,NSF)或任意组合。例如,环路滤波器单元220可以包括去块滤波器、SAO滤波器和ALF滤波器。滤波过程的顺序可以是去块滤波器、SAO滤波器和ALF滤波器。再例如,增加一个称为具有色度缩放的亮度映射(luma mapping with chroma scaling,LMCS)(即自适应环内整形器)的过程。该过程在去块之前执行。再例如,去块滤波过程也可以应用于内部子块边缘,例如仿射子块边缘、ATMVP子块边缘、子块变换(sub-block transform,SBT)边缘和内子部分(intra sub-partition,ISP)边缘。尽管环路滤波器单元320在图3中示为环路滤波器,但在其它配置中,环路滤波器单元320可以实现为环后滤波器。
解码图像缓冲器
随后将一个图像中的解码视频块321存储在解码图像缓冲器330中,解码图像缓冲器330存储作为参考图像的解码图像331,参考图像用于其它图像和/或分别输出显示的后续运动补偿。
解码器30用于通过输出端312等输出解码图像311,向用户显示或供用户查看。
预测
帧间预测单元344在功能上可与帧间预测单元244(特别是运动补偿单元)相同,帧内预测单元354在功能上可与帧间预测单元254相同,并基于从编码图像数据21(例如通过熵解码单元304解析和/或解码)接收的分割和/或预测参数或相应信息决定划分或分割和执行预测。模式应用单元360可用于根据重建块、块或相应的像素点(已滤波或未滤波)执行每个块的预测(帧内或帧间预测),得到预测块365。
当将视频片编码为帧内编码(intra coded,I)片时,模式应用单元360中的帧内预测单元354用于根据指示的帧内预测模式和来自当前图像的之前解码块的数据生成用于当前视频片的图像块的预测块365。当视频图像编码为帧间编码(即,B或P)片时,模式应用单元360中的帧间预测单元344(例如运动补偿单元)用于根据运动矢量和从熵解码单元304接收的其它语法元素生成用于当前视频片的视频块的预测块365。对于帧间预测,可从其中一个参考图像列表中的其中一个参考图像产生这些预测块。视频解码器30可以根据存储在DPB 330中的参考图像,使用默认构建技术来构建参考帧列表0和列表1。除了片(例如视频片)或作为片的替代,相同或类似的过程可应用于编码区块组(例如视频编码区块组)和/或编码区块(例如视频编码区块)的实施例,例如视频可以使用I、P或B编码区块组和/或编码区块进行编码。
模式应用单元360用于通过解析运动矢量和其它语法元素,确定用于当前视频片的视频块的预测信息,并使用预测信息产生用于正在解码的当前视频块的预测块。例如,模式应用单元360使用接收到的一些语法元素确定用于编码视频片的视频块的预测模式(例如帧内预测或帧间预测)、帧间预测片类型(例如B片、P片或GPB片)、用于片的一个或多个参考图像列表的构建信息、用于片的每个帧间编码视频块的运动矢量、用于片的每个帧间编码视频块的帧间预测状态、其它信息,以解码当前视频片内的视频块。除了片(例如视频片)或作为片的替代,相同或类似的过程可应用于编码区块组(例如视频编码区块组)和/或编码区块(例如视频编码区块)的实施例,例如视频可以使用I、P或B编码区块组和/或编码区块进行编码。
在一个实施例中，图3所示的视频解码器30还可以用于使用片(也称为视频片)分割和/或解码图像，其中图像可以使用一个或多个片(通常为不重叠的)进行分割或解码。每个片可包括一个或多个块(例如CTU)或一个或多个块组(例如H.265/HEVC/VVC标准中的编码区块和VVC标准中的砖)。
在一个实施例中,图3所示的视频解码器30还可以用于使用片/编码区块组(也称为视频编码区块组)和/或编码区块(也称为视频编码区块)对图像进行分割和/或解码,其中图像可以使用一个或多个片/编码区块组(通常为不重叠的)进行分割或解码,每个片/编码区块组可包括一个或多个块(例如CTU)或一个或多个编码区块等,其中每个编码区块可以为矩形等形状,可包括一个或多个完整或部分块(例如CTU)。
视频解码器30的其它变型可用于对编码图像数据21进行解码。例如,解码器30可以在没有环路滤波器单元320的情况下产生输出视频流。例如,基于非变换的解码器30可以在某些块或帧没有逆变换处理单元312的情况下直接反量化残差信号。在另一种实现方式中,视频解码器30可以具有组合成单个单元的反量化单元310和逆变换处理单元312。
应理解,在编码器20和解码器30中,可以对当前步骤的处理结果进一步处理,然后输出到下一步骤。例如,在插值滤波、运动矢量推导或环路滤波之后,可以对插值滤波、 运动矢量推导或环路滤波的处理结果进行进一步的运算,例如裁剪(clip)或移位(shift)运算。
应该注意的是,可以对当前块的推导运动矢量(包括但不限于仿射模式的控制点运动矢量、仿射、平面、ATMVP模式的子块运动矢量、时间运动矢量等)进行进一步运算。例如,根据运动矢量的表示位将运动矢量的值限制在预定义范围。如果运动矢量的表示位为bitDepth,则范围为-2^(bitDepth-1)至2^(bitDepth-1)-1,其中“^”表示幂次方。例如,如果bitDepth设置为16,则范围为-32768~32767;如果bitDepth设置为18,则范围为-131072~131071。例如,推导运动矢量的值(例如一个8×8块中的4个4×4子块的MV)被限制,使得所述4个4×4子块MV的整数部分之间的最大差值不超过N个像素,例如不超过1个像素。这里提供了两种根据bitDepth限制运动矢量的方法。
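下面给出按bitDepth限制运动矢量取值范围的两种示意性实现（Python，仅作说明）：

```python
def clip_mv(mv: int, bit_depth: int = 16) -> int:
    """方法一：钳位，把运动矢量限制在[-2^(bitDepth-1), 2^(bitDepth-1)-1]内。"""
    low, high = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
    return max(low, min(high, mv))

def wrap_mv(mv: int, bit_depth: int = 16) -> int:
    """方法二：按表示位宽取模回绕，使结果落在同一范围内。"""
    mod = 1 << bit_depth
    mv = mv % mod
    return mv - mod if mv >= (1 << (bit_depth - 1)) else mv
```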
尽管上述实施例主要描述了视频编解码,但应注意的是,译码系统10、编码器20和解码器30的实施例以及本文描述的其它实施例也可以用于静止图像处理或编解码,即视频编解码中独立于任何先前或连续图像的单个图像的处理或编解码。一般情况下,如果图像处理仅限于单个图像17,帧间预测单元244(编码器)和帧间预测单元344(解码器)可能不可用。视频编码器20和视频解码器30的所有其它功能(也称为工具或技术)同样可用于静态图像处理,例如残差计算204/304、变换206、量化208、反量化210/310、(逆)变换212/312、分割262/362、帧内预测254/354和/或环路滤波220/320、熵编码270和熵解码304。
图4为本申请实施例的视频译码设备400的示例性框图。视频译码设备400适用于实现本文描述的公开实施例。在一个实施例中,视频译码设备400可以是解码器,例如图1A中的视频解码器30,也可以是编码器,例如图1A中的视频编码器20。
视频译码设备400包括:用于接收数据的入端口410(或输入端口410)和接收单元(receiver unit,Rx)420;用于处理数据的处理器、逻辑单元或中央处理器(central processing unit,CPU)430;用于传输数据的发送单元(transmitter unit,Tx)440和出端口450(或输出端口450);用于存储数据的存储器460。视频译码设备400还可包括耦合到入端口410、接收单元420、发送单元440和出端口450的光电(optical-to-electrical,OE)组件和电光(electrical-to-optical,EO)组件,用于光信号或电信号的出口或入口。
处理器430通过硬件和软件实现。处理器430可实现为一个或多个处理器芯片、核(例如,多核处理器)、FPGA、ASIC和DSP。处理器430与入端口410、接收单元420、发送单元440、出端口450和存储器460通信。处理器430包括译码模块470。译码模块470实施上文所公开的实施例。例如,译码模块470执行、处理、准备或提供各种编码操作。因此,通过译码模块470为视频译码设备400的功能提供了实质性的改进,并且影响了视频译码设备400到不同状态的切换。或者,以存储在存储器460中并由处理器430执行的指令来实现译码模块470。
存储器460包括一个或多个磁盘、磁带机和固态硬盘,可以用作溢出数据存储设备,用于在选择执行程序时存储此类程序,并且存储在程序执行过程中读取的指令和数据。存储器460可以是易失性和/或非易失性的,可以是只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、三态内容寻址存储器(ternary content-addressable memory,TCAM)和/或静态随机存取存储器(static random-access memory, SRAM)。
可分级视频编码,又称可伸缩视频编码,是当前视频编码标准的扩展编码标准(一般为高级视频编码(advanced video coding,AVC)(H.264)的扩展标准可伸缩视频编码(scalable video coding,SVC),或高效率视频编码(high efficiency video coding,HEVC)(H.265)的扩展标准可伸缩高效视频编码(scalable high efficiency video coding,SHVC))。可分级视频编码的出现主要是为了解决实时视频传输中出现的由于网络带宽实时变化带来的丢包和时延抖动问题。
可分级视频编码中的基本结构可称作层级,可分级视频编码技术通过对原始的图像块进行空域分级(分辨率分级),可以得到不同分辨率的层级的码流。分辨率可以是指图像块的以像素为单位的尺寸大小,低层级的分辨率较低,而高层级的分辨率不低于低层级的分辨率;或者,通过对原始的图像块进行时域分级(帧率分级),可以得到不同帧率的层级的码流。帧率可以是指单位时间内视频包含的图像帧数,低层级的帧率较低,而高层级的帧率不低于低层级的帧率;或者,通过对原始的图像块进行质量域分级,可以得到不同编码质量的层级的码流。编码质量可以是指视频的品质,低层级的图像失真程度较大,而高层级的图像失真程度不高于低层级的图像失真程度。
通常,被称作基本层的层级是可分级视频编码中的最底层。在空域分级中,基本层图像块使用最低分辨率进行编码;在时域分级中,基本层图像块使用最低帧率进行编码;在质量域分级中,基本层图像块使用最高QP或是最低码率进行编码。即基本层是可分级视频编码中品质最低的一层。被称作增强层的层级是可分级视频编码中在基本层之上的层级,由低到高可以分为多个增强层。最低层增强层依据基本层获得的编码信息,编码得到的合并码流,其编码分辨率比基本层高,或是帧率比基本层高,或是码率比基本层大。较高层增强层可以依据较低层增强层的编码信息,来编码更高品质的图像块。
例如,图5为本申请可分级视频编码的一个示例性的层级示意图,如图5所示,原始图像块送入可分级编码器后,根据不同的编码配置可分层为基本层图像块B和增强层图像块(E1~En,n≥1),再分别编码得到包含基本层码流和增强层码流的码流。基本层码流一般是对图像块采用最低分辨率、最低帧率或者最低编码质量参数得到的码流。增强层码流是以基本层作为基础,叠加采用高分辨率、高帧率或者高编码质量参数对图像块进行编码得到的码流。随着增强层层数增加,编码的空域层级、时域层级或者质量层级也会越来越高。编码器向解码器传输码流时,优先保证基本层码流的传输,当网络有余量时,逐步传输越来越高层级的码流。解码器先接收基本层码流并解码,然后根据收到的增强层码流,按照从低层级到高层级的顺序,逐层解码空域、时域或者质量的层级越来越高的码流,然后将较高层级的解码信息叠加在较低层级的重建块上,获得较高分辨率、较高帧率或者较高质量的重建块。
如上,视频序列中的每个图像通常分割成不重叠的块集合,通常在块级上进行编码。换句话说,编码器通常在块(图像块)级处理即编码视频,例如,通过空间(帧内)预测和时间(帧间)预测来产生预测块;从图像块(当前处理/待处理的块)中减去预测块,得到残差块;在变换域中变换残差块并量化残差块,可以减少待传输(压缩)的数据量。编码器还需要经过反量化和反变换获得重建残差块,然后将重建残差块的像素点值和预测块的像素点值相加以获得重建块。基本层的重建块是指针对原始的图像块分层得到的基本层 图像块执行上述操作所得到的重建块。例如,图6为本申请增强层的编码方法的一个示例性的流程图,如图6所示,编码器根据原始图像块(例如LCU)获得基本层的预测块,然后对原始图像块和基本层的预测块中的对应像素点求差以获得基本层的残差块,再对基本层的残差块进行划分后,进行变换、量化,连同基本层编码控制信息、预测信息、运动信息等共同进行熵编码得到基本层的码流。编码器对量化后的量化系数进行反量化、反变换得到基本层的重建残差块,然后对基本层的预测块和基本层的重建残差块中的对应像素点求和以获得基本层的重建块。
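下面用一个极简的Python片段示意图6中“求残差、量化、反量化、重建”的基本关系（省略了划分、变换与熵编码环节，量化步长qstep为假设值，直接在像素域量化仅为演示）：

```python
import numpy as np

def encode_base_layer_block(original: np.ndarray, prediction: np.ndarray, qstep: float = 8.0):
    """返回量化后的残差(levels)和基本层的重建块。"""
    residual = original.astype(np.float64) - prediction        # 残差块 = 原始块 - 预测块
    levels = np.round(residual / qstep)                        # 量化
    recon_residual = levels * qstep                            # 反量化得到重建残差块
    recon = np.clip(prediction + recon_residual, 0, 255)       # 重建块 = 预测块 + 重建残差块
    return levels.astype(np.int32), recon.astype(np.uint8)
```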
当用户通过终端设备(例如,手机、平板、大屏等)看图片或视频时,有时会手动放大图片或视频的局部区域(例如,图片或视频中的主体或感兴趣区域(region of interest,ROI))以查看该局部区域的细节,此时本申请实施例提供的图像处理方法可以对局部区域进行处理,以解决放大后的图像存在不清晰、图像模糊等问题。
可选的,图像处理方法的应用场景可以是电子设备中涉及图像/视频采集、存储、显示的业务,例如,图库、华为视频等。电子设备例如可以是智能终端、平板、可穿戴设备等,该电子设备同时具备图像压缩和图像解压缩的功能,即电子设备采集原始图像/视频,对该原始图像/视频进行压缩处理得到压缩图像/视频,然后将压缩图像/视频存储在电子设备的内存中;当用户想要看图片/视频时,电子设备对压缩图像/视频进行解压缩得到重建的图像/视频,并将其显示于屏幕上。电子设备可以参照图1B所示实施例,但并不对电子设备构成限定。
可选的,图像处理方法的应用场景也可以是端云共享、视频监控、投屏等中涉及图像/视频采集、存储或传输的业务,例如,华为云、视频监控、直播、相册/视频投屏等。通常该应用场景下包括图像/视频的源设备和目的设备,源设备具备图像压缩的功能,源设备采集原始图像/视频,对该原始图像/视频进行压缩处理得到码流,然后将码流存储在电子设备的内存中,目的设备具备图像解压缩的功能,当用户想要看图片/视频时,目的设备向源设备请求加载码流,目的设备对码流进行解压缩得到重建的图像/视频,并将其显示于屏幕上。源设备和目的设备可以参照图1A所示实施例,但并不对源设备和目的设备构成限定。
为了便于描述,下文的实施例按照编码侧和解码侧的方式进行描述,应理解,编码侧和解码侧可以设置于同一电子设备上,例如,智能手机;编码侧和解码侧也可以设置于不同的设备上,例如,编码侧在云端,解码侧在智能手机,或者,编码侧在监控摄像头,解码侧在监控中心平台,或者,编码侧在智能手机,解码侧在大屏。
图7为本申请图像处理方法的一个示例性的流程图。过程700可由编码侧和解码侧执行。为了方便,过程700把编码侧和解码侧放在了一起描述。在具体实现的过程中,编码侧和解码侧可以单独执行。过程700描述为一系列的步骤或操作,应当理解的是,过程700可以以各种顺序执行和/或同时发生,不限于图7所示的执行顺序。
针对编码侧,过程700包括如下步骤:
步骤701、编码侧获取待处理图像。
待处理图像也可以称作全局图像,通常摄像装置对着目标区域可以采集到包含目标区域的一张图像,该完整的图像即为全局图像。
编码侧可以直接通过自身的摄像装置采集得到待处理图像,也可以从图库中提取待处理图像,还可以通过网络或存储介质等从其它地方获取待处理图像,本申请实施例对此不 做具体限定。
步骤702、编码侧获取多组视觉感官体验参数。
编码侧可以先对待处理图像进行划分得到多张备选局部图像,然后获取与多张备选局部图像对应的多组视觉感官体验参数。
由于备选局部图像是由待处理图像划分得到的，因此每个备选局部图像对应待处理图像中的局部区域，例如，对待处理图像进行四叉树划分，左上角的备选局部图像对应位于待处理图像左上角的四分之一的局部区域。
划分方法可以包括根据待处理图像的像素特征划分,例如,按照像素亮度划分得到图像暗区、图像亮区等;按照像素颜色划分得到ROI、图像主体区等;划分方法也可以包括根据待处理图像的尺寸划分,例如,图8为金字塔式划分方式的示意图,如图8所示,图中1×表示全局图像,2×表示水平垂直缩小2倍的局部图像,4×表示水平垂直缩小4倍的局部图像,64×表示水平垂直缩小64倍的局部图像,本实施例可以支持最大水平垂直缩小到64倍。划分得到的所有局部图像均可以称作备选局部图像。需要说明的是,本申请实施例还可以采用其它方式对待处理图像进行划分,例如,四叉树划分方式、二叉树划分方式等,对此不做具体限定。
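下面给出图8所示金字塔式划分的一种示意性实现（Python）。把“缩小s倍的局部图像”理解为“把全局图像划分成s×s个不重叠的局部图像”只是一种可能的划分方式，实际实现不限于此：

```python
import numpy as np

def pyramid_candidates(image: np.ndarray, factors=(2, 4, 8, 16, 32, 64)):
    """对每个缩小倍数s，把全局图像划分成s×s个不重叠的备选局部图像。
    返回 {s: [(y0, x0, 局部图像), ...]}。"""
    h, w = image.shape[:2]
    out = {}
    for s in factors:
        th, tw = h // s, w // s
        out[s] = [(i * th, j * tw, image[i * th:(i + 1) * th, j * tw:(j + 1) * tw])
                  for i in range(s) for j in range(s)]
    return out
```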
视觉感官体验参数可以包括亮度、对比度、颜色和细节这四类参数,通过对备选局部图像的像素特征进行分析,以处理后呈现的画面仿真(逼近或者增强)人眼感知待放大区域的真实场景的视觉感官体验为目的,可以确定该备选局部图像对应的视觉感官体验参数。由此可见,本申请实施例中视觉感官体验参数和备选局部图像之间具备对应关系,基于任意一个备选局部图像的像素特征,可以为其确定一组视觉感官体验参数,以实现对该备选局部图像进行亮度、对比度、颜色和细节中至少其一的调整。
例如,对图像暗区的备选局部图像,可以对亮度和对比度进行提升,将颜色适应局部暗区,对欠曝细节进行增加,这样可以确定该备选局部图像的视觉感官体验参数包括亮度、对比度、颜色和细节四类参数,而参数的具体取值与前述调节需求相对应。又例如,对图像亮区的备选局部图像,可以对亮度和对比度进行降低,将颜色适应局部亮区,对过曝细节进行增加,这样可以确定该备选局部图像的视觉感官体验参数包括亮度、对比度、颜色和细节四类参数,而参数的具体取值与前述调节需求相对应。又例如对图像主体区域的备选局部图像,可以对亮度和对比度微调,将颜色适应图像主体,这样可以确定该备选局部图像的视觉感官体验参数包括亮度、对比度和颜色三类参数,而参数的具体取值与前述调节需求相对应。
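作为补充说明，下面给出一个根据像素特征为备选局部图像粗略确定一组视觉感官体验参数的Python示意片段。其中的亮度阈值与参数取值均为假设的启发式数值，实际的确定方法可结合下文图9所述的感知模型：

```python
import numpy as np

def decide_visual_params(local_luma: np.ndarray) -> dict:
    """local_luma为备选局部图像的8比特亮度通道；返回一组示意性的视觉感官体验参数。"""
    mean_luma = float(np.mean(local_luma))
    if mean_luma < 64:     # 图像暗区：提升亮度/对比度，颜色适应暗区，增加欠曝细节
        return {"brightness": +0.20, "contrast": +0.15,
                "color": "adapt_dark", "detail": "recover_underexposed"}
    if mean_luma > 192:    # 图像亮区：降低亮度/对比度，颜色适应亮区，增加过曝细节
        return {"brightness": -0.20, "contrast": -0.15,
                "color": "adapt_bright", "detail": "recover_overexposed"}
    # 图像主体区域：亮度/对比度微调，颜色适应主体
    return {"brightness": +0.05, "contrast": +0.05, "color": "adapt_subject"}
```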
示例性的,以下介绍一种视觉感官体验参数的确定方法:
图9为确定视觉感官体验参数的示意图,如图9所示,构建三个适应场内核:视明度感知模型(brightness perception model)、对比度感知模型(contrast perception model),颜色感知模型(color perception model),其中,视明度感知模型用于确保在各种显示器能力下亮暗感知跟人眼对真实场景感知一致;对比度感知模型用于确保在各种显示器能力下可区分变化量(just noticeable difference,JND)数量跟人眼对真实场景感知一致;颜色感知模型确保在各种显示器能力下颜色跟人眼对真实场景感知一致。适应场可以解决自然场景到最佳显示器D1,最佳显示器D1到各种显示器D2,各种显示D2在各种观看环境的映射问题。
（公式以图像形式给出：PCTCN2022133761-appb-000001，此处不再复现。）
其中，L_S.adapt表达人眼对自然场景的映射；L_D1.adapt表达人眼对最佳显示器D1的适应；L_D2.adapt表达人眼对各种显示器D2的适应；f()是一个函数，可以因实验数据拟合而出；hist()表示直方图；X,Y,Z表示真实世界中进入人眼的光在X,Y,Z三个方向上的刺激值；RGB表示图像的像素值；hist(X,Y,Z)表示取X,Y,Z三个方向的直方图；hist(R,G,B)表示取RGB三个颜色通道的直方图；max()表示取直方图的每个区间的最大值；D1_peak_lum表示最佳显示器D1的显示峰值亮度；D2_peak_lum表示各种显示器D2的显示峰值亮度；D2_lux_reflect表示环境光打在各种显示器D2的反射率。
视明度模型定义了多个感知区间,例如,“white”,“bright grey”,“grey”,“dark grey”,“black”,L.adapt在grey区间描述的是特定视场大小形成的适应场中,人眼对视场中每个物体的明亮度感知。视明度模型不允许对比度调节(tone mapping,TM)后跨感知区间,例如,属于white区间的像素值在TM之后不允许落到bright grey的区间。
对比度感知模型描述的是特定视场大小形成的适应场中,人眼对于对比度的感知程度,感知模型不允许TM后JND数量有明显的下降,例如,门限低于10%。
颜色感知模型描述的是特定视场大小形成的适应场中,人眼对于不同光源色品的适应能力,使得感知的物体颜色趋向于记忆中本身的颜色(例如,白纸仍然感知为白色),其中光源色品适应变化对于的像素值调整可以使用色适应转换算法(Chromatic Adaptation Transform)来实现。
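关于上文提到的色适应转换，下面给出一个简化的von Kries型色适应的Python示意片段：按“目标光源白点与源光源白点的比值”缩放各颜色通道。实际的色适应转换（如Bradford、CAT02）在LMS锥响应空间中进行，此处直接在RGB空间近似，白点取值亦为假设：

```python
import numpy as np

def von_kries_adaptation(rgb: np.ndarray, src_white, dst_white) -> np.ndarray:
    """rgb为8比特图像(H, W, 3)；src_white/dst_white为源、目标光源白点的RGB值。"""
    gain = np.asarray(dst_white, dtype=np.float64) / np.asarray(src_white, dtype=np.float64)
    return np.clip(rgb.astype(np.float64) * gain, 0, 255).astype(np.uint8)

# 用法示例：把高色温光源下的画面向较低色温的局部光源白点适应（白点数值仅为假设）
# adapted = von_kries_adaptation(local_img, src_white=(255, 250, 245), dst_white=(255, 240, 225))
```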
需要说明的是,上述过程只是示例性的介绍了一种视觉感官体验参数的确定方法,本申请实施例还可以采用其它方式确定视觉感官体验参数,对此不做具体限定。
步骤703、编码侧对待处理图像和多组视觉感官体验参数进行编码。
编码侧可以在获取多组视觉感官体验参数后,对待处理图像和多组视觉感官体验参数进行编码,其中,对待处理图像的编码方式可以参照联合图像专家组(joint photographic experts group,JPEG)编码标准、混合视频编码标准或者可分级视频编码标准,还可以采用端到端编码方式,此处不再赘述;对多组视觉感官体验参数可以作为元数据(metadata)进行编码,可以参照CUVA1.0标准。此外,编码侧还可以在码流中写入对待处理图像的划分方式,以及备选局部图像和视觉感官体验参数之间的对应关系,以使得解码侧可以通过解析码流获取多张备选局部图像及其对应的多组视觉感官体验参数。前述信息写入码流的方式可以参照相关技术,只要能满足使解码侧了解待处理图像的划分方式,以及各个备选局部图像与多组视觉感官体验参数的对应关系的目的,本申请实施例此处不做具体限定。
示例性的,编码侧编码一张待处理图像得到的压缩数据或码流,存储在联合图像专家组(joint photographic experts group,JPEG)文件的APP字段或者H264/265的增强层中,也存储对应的metadata,以保证不同显示能力的屏幕的显示效果一致。
图10为图像编码流程的示例图,如图10所示,对待编码图像(全局图像)进行JPEG编码得到全局图像的码流。相应的,编码侧采用反向JPEG解码得到全局图像的重建图像。
针对每个备选局部图像,先将其与全局图像对齐,即确定备选局部图像包含的像素在 全局图像中的对应像素,然后对重建图像和备选局部图像中相对应的像素求差得到备选局部图像的残差,对备选局部图像的残差进行编码得到该备选局部图像的码流。残差对应的码流可以放在通用编码器的某些字段中,例如,针对图片,通用编码器是JPEG,那么残差对应的码流放到JPEG的扩展字段、APP9或者APP10的位置。例如,针对视频,通用编码器是H.265,那么残差对应的码流放到SEI层中。此外,编码侧根据多组视觉感官体验参数产生metadata,以保证不同显示能力屏幕效果一致,产生方法可以参考CUVA1.0标准。元数据可以放在残差对应的码流之前、之后或者中间,对此不做具体限定。
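下面用一个示意性的Python片段说明“对齐后逐像素求差得到备选局部图像残差”这一步。(y0, x0)为该局部图像在全局图像中的左上角位置，残差用int16保存以容纳负值；实际实现还需考虑缩放与亚像素对齐，此处从略：

```python
import numpy as np

def local_residual(recon_global: np.ndarray, local_img: np.ndarray, y0: int, x0: int) -> np.ndarray:
    """残差 = 备选局部图像 - 全局重建图像中对应位置的像素。"""
    h, w = local_img.shape[:2]
    co_located = recon_global[y0:y0 + h, x0:x0 + w]
    return local_img.astype(np.int16) - co_located.astype(np.int16)
```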
本申请实施例将metadata的产生方法推广到每个备选局部图像,可以保证每个备选局部图像在不同显示能力的屏幕的显示效果一致。
编码侧得到的码流可以通过有线或无线的通信链路传输至解码端,也可以通过电子设备的内部总线传输至解码侧。
针对解码侧,过程700包括如下步骤:
步骤704、解码侧获取待处理图像。
解码侧可以通过采用与编码侧相对应的解码方式对码流进行解码得到待处理图像。
图11为图像解码流程的示例图,如图11所示,解码侧对全局图像的码流进行JPEG解码得到全局图像(待处理图像)的重建图像。针对每个备选局部图像的码流,解码侧对码流进行残差解码得到该备选局部图像的残差,然后将该备选局部图像的残差与全局图像的重建图像的对应像素求和得到该备选局部图像。
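与上面的编码侧对应，下面给出解码侧恢复备选局部图像的示意性Python片段（把解码得到的残差与全局重建图像的对应像素求和）：

```python
import numpy as np

def reconstruct_local(recon_global: np.ndarray, residual: np.ndarray, y0: int, x0: int) -> np.ndarray:
    """备选局部图像 = 全局重建图像对应位置像素 + 解码得到的残差。"""
    h, w = residual.shape[:2]
    co_located = recon_global[y0:y0 + h, x0:x0 + w].astype(np.int16)
    return np.clip(co_located + residual, 0, 255).astype(np.uint8)
```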
步骤705、解码侧获取放大操作指令,放大操作指令用于指示待处理图像中的待放大区域。
放大操作指令是作用于待处理图像上的操作产生的。例如,用户在手机上看一张图像,当想要将该图片的局部区域放大看细节时,可以用大拇指和食指在手机的屏幕上显示该局部区域的地方做两指放大的手势,或者用大拇指和食指在手机的屏幕上显示该局部区域的地方双击,从而在手机的屏幕上显示该局部区域的图片,该手势即可产生上述放大操作指令。又例如,用户将手机上的视频投屏到大屏上播放,当想要将某部分区域的视频放大播放时,可以用大拇指和食指在手机的屏幕上显示该局部区域的地方做两指放大的手势,或者用大拇指和食指在手机的屏幕上显示该局部区域的地方双击,从而在大屏上显示该局部区域的视频,该手势即可产生上述放大操作指令。需要说明的是,放大操作指令还可以通过其它方式产生,本申请实施例对此不做具体限定。
如上所述,放大操作指令除了指示放大图像的指令,还指示了待放大区域,这与产生放大操作指令的操作所对应的位置相关联,例如,以两指放大的手势的起始位置为中心,以设定长度为边长的矩形区域;或者,以两指放大的手势的起始位置为中心,以设定长度为半径的圆形区域。需要说明的是,待放大区域还可以采用其它方式确定,本申请实施例对此不做具体限定。
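下面给出按“以手势起始位置为中心、以设定长度为边长的矩形区域”确定待放大区域的示意性Python片段（边长side的取值仅为假设，并对矩形做了越界裁剪）：

```python
def zoom_region_from_gesture(cx: int, cy: int, img_w: int, img_h: int, side: int = 512):
    """返回待放大区域的左上角坐标和宽高(x0, y0, w, h)。"""
    x0 = max(0, min(img_w - side, cx - side // 2))
    y0 = max(0, min(img_h - side, cy - side // 2))
    return x0, y0, min(side, img_w), min(side, img_h)
```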
步骤706、解码侧获取与一张或多张局部图像对应的一组或多组视觉感官体验参数。
本申请实施例中,解码侧可以在确定待放大区域后,根据码流中携带的信息可以获取待处理图像的划分方式,以及划分的到的多张备选局部图像和多组视觉感官体验参数之间的对应关系,基于此,解码侧可以先根据前述划分方式对待处理图像进行划分得到多张备选局部图像,然后从多张备选局部图像中确定与待放大区域对应的一张或多张局部图像, 例如,根据待放大区域的位置,确定待放大区域包含了哪些备选局部图像,这些备选局部图像即为与待放大区域对应的局部图像。
如编码侧的描述,多张备选局部图像分别对应一组视觉感官体验参数,解码侧可以通过解码码流(例如解析元数据)得到与上述一张或多张局部图像对应的一组或多组视觉感官体验参数。
步骤707、解码侧根据一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像。
解码侧根据得到的一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像,这些经处理的局部图像可以呈现的画面仿真(逼近或者增强)人眼感知待放大区域的真实场景的视觉感官体验。
根据一组视觉感官体验参数包含的参数内容,解码侧可以对与其对应的局部图像进行的处理包括以下至少一种:
当视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;
当视觉感官体验参数包括对比度参数时,对对应局部图像进行对比度调节;
当视觉感官体验参数包括颜色参数时,对对应局部图像进行颜色调节;
当视觉感官体验参数包括细节参数时,对对应局部图像进行细节调节。
例如,对图像暗区的局部图像,可以对亮度和对比度进行提升,将颜色适应局部暗区,对欠曝细节进行增加,这样可以确定该局部图像的视觉感官体验参数包括亮度、对比度、颜色和细节四类参数,而参数的具体取值与前述调节需求相对应。又例如,对图像亮区的局部图像,可以对亮度和对比度进行降低,将颜色适应局部亮区,对过曝细节进行增加,这样可以确定该局部图像的视觉感官体验参数包括亮度、对比度、颜色和细节四类参数,而参数的具体取值与前述调节需求相对应。又例如对图像主体区域的局部图像,可以对亮度和对比度微调,将颜色适应图像主体,这样可以确定该局部图像的视觉感官体验参数包括亮度、对比度和颜色三类参数,而参数的具体取值与前述调节需求相对应。
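下面用一个示意性的Python片段说明解码侧如何按照一组视觉感官体验参数中包含的参数内容对对应局部图像做亮度、对比度和颜色调节。参数的名称与含义均为本示例的假设：brightness/contrast为相对增益，color_gain为RGB三通道增益；细节调节见后文示例：

```python
import numpy as np

def apply_visual_params(local_img: np.ndarray, params: dict) -> np.ndarray:
    """按参数内容对局部图像逐项调节，未给出的参数不调节。"""
    img = local_img.astype(np.float64)
    if "brightness" in params:                       # 亮度调节：整体增益
        img = img * (1.0 + params["brightness"])
    if "contrast" in params:                         # 对比度调节：围绕均值拉伸
        mean = img.mean()
        img = (img - mean) * (1.0 + params["contrast"]) + mean
    if "color_gain" in params:                       # 颜色调节：逐通道增益
        img = img * np.asarray(params["color_gain"], dtype=np.float64)
    return np.clip(img, 0, 255).astype(np.uint8)
```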
在一种可能的实现方式中,解码侧可以通过以下方法实现对一张或多张局部图像的细节调节:
(1)获取多张参考图像,该多张参考图像和待处理图像是多个摄像头针对同一场景拍摄得到的,根据多张参考图像对对应局部图像进行细节调节。
编码端在采集待处理图像时,可以同时打开多个摄像头对同一场景进行拍摄,从而可以以不同的焦距、不同的角度等采集到多张参考图像,由于该多张参考图像拍摄的是同一场景,因此即使待处理图像没有拍到的细节,有可能在被其它摄像头捕捉到,这样编码侧可以基于该多张参考图像获取待处理图像的细节参数,从而使得解码侧可以基于该细节参数对待放大区域进行细节调节。
(2)获取与待处理图像的相似度超过预设阈值的多张历史图像,根据多张历史图像对对应局部图像进行细节调节。
不排除用户针对同一场景拍摄过多张图像,当历史图像和待处理图像的相似度较高时,可以认为历史图像可以给待处理图像提供细节参照,这样编码侧可以基于该多张历史图像获取待处理图像的细节参数,从而使得解码侧可以基于该细节参数对待放大区域进行细节调节。
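下面给出利用历史图像为局部图像补充细节的一种示意性Python实现（假设已安装OpenCV，输入为BGR三通道8比特图像；相似度用灰度直方图相关系数近似，细节融合用“叠加历史图像的高频分量”近似，阈值与权重均为假设值）：

```python
import cv2
import numpy as np

def detail_enhance_with_history(local_img, history_imgs, thr=0.8, weight=0.5):
    """用相似度超过阈值thr的历史图像为local_img补充高频细节。"""
    def luma_hist(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([gray], [0], None, [64], [0, 256])
        return cv2.normalize(h, h).flatten()

    base = luma_hist(local_img)
    out = local_img.astype(np.float64)
    for ref in history_imgs:
        if cv2.compareHist(base, luma_hist(ref), cv2.HISTCMP_CORREL) < thr:
            continue                                         # 相似度不够，跳过
        ref = cv2.resize(ref, (local_img.shape[1], local_img.shape[0]))
        high_freq = ref.astype(np.float64) - cv2.GaussianBlur(ref, (0, 0), 3)
        out += weight * high_freq                            # 叠加部分高频细节
    return np.clip(out, 0, 255).astype(np.uint8)
```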
在一种可能的实现方式中,解码侧可以将经处理的局部图像存储在本地,以便于后续用户再次放大同样区域时,将经处理的局部图像从存储器中提取出来直接显示。
在一种可能的实现方式中,解码侧可以将经处理的局部图像传输给显示装置(例如显示器)进行显示。
图12a为对局部图像的处理效果的示例图,如图12a所示,待放大区域是待处理图像中包含衬衣的区域,在全图视角下适应场的适应光源趋向于高色温,色彩符合感知。直接放大待放大区域,光源颜色适应场与全局的不匹配,比真实感知偏暖。采用本申请实施例的方法得到经处理的局部图像后,局部光源下光源颜色适应场重新计算,往低色温偏移,色彩符合感知。
图12b为对局部图像的处理效果的示例图,如图12b所示,待放大区域是待处理图像中包含顶灯的区域,在全图视角下对比度、亮度、细节都合适。直接放大待放大区域,细节丢失,依然过曝,对比度、亮度都没有带来视觉感官的提升。采用本申请实施例的方法得到经处理的局部图像后,过曝细节恢复,对比度提升,颜色饱和度提升,视觉感观明显提升。
此外,本申请实施例中,解码侧可以在完成上述步骤后,获取放大终止指令,该放大终止指令是由用户的两指在经处理的局部图像上进行的向内滑动操作产生的,或者,放大终止指令是由用户的单指在经处理的局部图像上进行的点击操作产生的,然后根据放大终止指令显示待处理图像。
即用户在看到经处理的放大的局部图像后,可以通过对前述局部图像执行操作以恢复到放大前的图像(待处理图像)。前述操作可以是用大拇指和食指在手机的屏幕上显示的放大图像上做两指缩小的手势,或者单指在手机的屏幕上显示的放大图像上点击。对此不做具体限定。
本申请实施例,编码侧在采集到待处理图像后,针对待处理图像获取多组视觉感官体验参数,编码侧将待处理图像和多组视觉感官体验参数编码发送给解码侧,这样解码侧可以在根据用户操作确定待放大区域后,基于待放大区域对应的视觉感官体验参数进行局部图像处理从而得到经处理的局部图像,该经处理的局部图像呈现的画面仿真(逼近或者增强)人眼感知待放大区域的真实场景的视觉感官体验,仿佛人真的走进待放大区域对应的真实场景时所看到的画面,一方面解决放大后的图像存在不清晰、图像模糊的问题,另一方面可以提升用户进行视觉放大的体验。
图13为本申请图像处理方法的一个示例性的流程图。过程1300可由编码侧和解码侧执行。为了方便,过程1300把编码侧和解码侧放在了一起描述。在具体实现的过程中,编码侧和解码侧可以单独执行。过程1300描述为一系列的步骤或操作,应当理解的是,过程1300可以以各种顺序执行和/或同时发生,不限于图13所示的执行顺序。
针对编码侧,过程1300包括如下步骤:
步骤1301、编码侧获取待处理图像,并对待处理图像进行划分得到多张备选局部图像。
步骤1301可以参照图7所示实施例的步骤701,此处不再赘述。
步骤1302、编码侧获取多组视觉感官体验参数。
步骤1302可以参照图7所示实施例的步骤702,此处不再赘述。
步骤1303、编码侧根据多组视觉感官体验参数分别对对应的备选局部图像进行处理以得到多张经处理的备选局部图像。
与图7所示实施例的区别在于:编码侧在得到多组视觉感官体验参数即对多张备选局部图像分别进行处理得到多张经处理的备选局部图像,而无需将多组视觉感官体验参数传输给解码侧,由解码侧执行图像处理的步骤。
针对每张备选局部图像,编码侧对其进行的图像处理可以参照图7所示实施例的步骤706的描述,区别在于:步骤706中是解码侧对待放大区域所包含的一张或多张局部区域分别进行处理,而步骤1303是编码侧对所有的备选局部区域分别进行处理。
步骤1304、编码侧对待处理图像和多张经处理的备选局部图像进行编码。
步骤1304可以参照图7所示实施例的步骤703,区别在于:编码侧只需要编码待处理图像和多张经处理的备选局部图像,而不需要基于视觉感官体验参数产生相应的元数据。
此外，编码侧可以在编码多张经处理的备选局部图像之前，对待处理图像的各个局部区域实施TM，从而提升待处理图像的局部区域和与其对应的经处理的备选局部图像之间的相似度，进而减少备选局部图像的残差数据量。
针对解码侧,过程1300包括如下步骤:
步骤1305、解码侧获取待处理图像和多张经处理的备选局部图像。
步骤1305可以参照图7所示实施例的步骤704,此处不再赘述。
步骤1306、解码侧获取放大操作指令,放大操作指令用于指示待处理图像中的待放大区域。
步骤1306可以参照图7所示实施例的步骤705,此处不再赘述。
步骤1307、解码侧根据待放大区域获取经处理的局部图像。
解码侧可以解码码流得到多张经处理的备选局部图像,然后从中获取与待放大区域对应的一张或多张经处理的局部图像,即待放大区域包含哪些经处理的备选局部图像,那么这些经处理的备选局部图像即为待放大区域对应的一张或多张经处理的局部图像。
与图7所示实施例的区别在于:解码侧解码得到的备选局部图像是编码侧已经处理过的图像,因此解码侧在获取待放大区域后,从多张备选局部图像中确定与待放大区域对应的一张或多张经处理的备选局部图像,该一张或多张经处理的备选局部图像直接构成经处理的局部图像。
本申请实施例,编码侧在采集到待处理图像后,针对待处理图像中的多张备选局部图像获取各自的视觉感官体验参数,然后根据多组视觉感官体验参数分别对对应的备选局部图像进行处理得到多张经处理的备选局部图像,编码侧将待处理图像和多张经处理的备选局部图像编码发送给解码侧,这样解码侧可以在根据用户操作确定待放大区域后,直接解码得到经处理的局部图像,该经处理的局部图像呈现的画面仿真(逼近或者增强)人眼感知待放大区域的真实场景的视觉感官体验,仿佛人真的走进待放大区域对应的真实场景时所看到的画面,一方面解决放大后的图像存在不清晰、图像模糊的问题,另一方面可以提升用户进行视觉放大的体验。
图14为本申请图像处理方法的一个示例性的流程图。过程1400可由编码侧和解码侧执行。为了方便,过程1400把编码侧和解码侧放在了一起描述。在具体实现的过程中,编码侧和解码侧可以单独执行。过程1400描述为一系列的步骤或操作,应当理解的是, 过程1400可以以各种顺序执行和/或同时发生,不限于图14所示的执行顺序。
针对编码侧,过程1400包括如下步骤:
步骤1401、编码侧获取待处理图像。
步骤1401和图7所示实施例的步骤701的区别在于:编码侧在获取待处理图像(全局图像)后,不需要对其进行划分。
步骤1402、编码侧对待处理图像进行编码。
编码侧对待处理图像的编码方式可以参照JPEG编码标准、混合视频编码标准或者可分级视频编码标准,还可以采用端到端编码方式,此处不再赘述。
针对解码侧,过程1400包括如下步骤:
步骤1403、解码侧获取待处理图像。
步骤1403可以参照图7所示实施例的步骤704,此处不再赘述。
步骤1404、解码侧获取放大操作指令,放大操作指令用于指示待处理图像中的待放大区域。
步骤1404可以参照图7所示实施例的步骤705,此处不再赘述。
步骤1405、解码侧根据预设规则获取与待放大区域对应的视觉感官体验参数。
本申请实施例中,编码侧只在码流中携带了待处理图像(即全局图像),没有多张备选局部图像,也没有多组视觉感官体验参数,更不会对待处理图像分割后再进行图像处理的操作,因此解码端解析码流后只能得到全局的重建图像。这样解码侧要对待放大区域进行处理就需要根据历史数据或经验信息来获取待放大区域对应的视觉感官体验参数。
解码侧可以先根据第一预设规则对待处理图像进行划分以得到多张备选局部图像,获取与待放大区域对应的一张或多张局部图像,多张备选局部图像包括一张或多张局部图像。然后根据第二预设规则获取与一张或多张局部图像对应的一组或多组视觉感官体验参数。上述预设规则包括第一预设规则和第二预设规则。
解码侧可以先采用图7所示实施例的步骤701中的关于划分方式的描述对重建得到的全局图像进行划分得到多张备选局部图像。再根据待放大区域的位置,确定其包含了哪些备选局部图像,这些备选局部图像即为与待放大区域对应的局部图像。
解码侧可以以以下方式获取一组或多组视觉感官体验参数:
对亮度的调整遵循如下原则:
（公式以图像形式给出：PCTCN2022133761-appb-000002，此处不再复现。）
其中,whole pixel number表示全局图像包含的像素数目;local pixel number表示待放大区域包含的像素数目;
（公式以图像形式给出：PCTCN2022133761-appb-000003）
表示待放大区域内的像素值的累加和。f()表示一种迭代规则使得等式成立,例如,如果等号左边大于等号右边,则对等号右边降低一部分像素的取值,使得等号成立。
对对比度的调整遵循如下原则:
（公式以图像形式给出：PCTCN2022133761-appb-000004，此处不再复现。）
与亮度不一样的地方:f1为求对比度的算子,例如拉普拉斯算子;
（公式以图像形式给出：PCTCN2022133761-appb-000005）
表示以像素值坐标(i,j)为中心的[M,N]窗口。
对细节的调整遵循如下原则:
（公式以图像形式给出：PCTCN2022133761-appb-000006，此处不再复现。）
与对比度不一样的地方在于：f2为细节提取算子，例如采用Sobel算子。
对颜色的调整遵循如下原则:
（公式以图像形式给出：PCTCN2022133761-appb-000007，此处不再复现。）
其中,R/B,B/G分别表示每个像素值的RGB单独分量的比值;f3为一种调整方法使得等式成立,例如,采用RGB分量乘以修正的gain值来迭代。
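由于上述各公式在原文中以图像形式给出，此处无法逐一复现；下面仅就亮度原则给出一种假设性的Python重构：迭代调整待放大区域的像素值，使其平均亮度逼近全局图像的平均亮度（迭代规则对应文中的f()，步长rate与迭代次数均为假设值）：

```python
import numpy as np

def match_brightness(global_img: np.ndarray, local_img: np.ndarray, iters: int = 8, rate: float = 0.5):
    """迭代调整局部图像亮度，使其均值逼近全局图像均值。"""
    target = global_img.astype(np.float64).mean()     # 全局平均像素值
    out = local_img.astype(np.float64)
    for _ in range(iters):
        diff = target - out.mean()
        if abs(diff) < 0.5:                           # 已足够接近，等式近似成立
            break
        out += rate * diff                            # 每次缩小一部分差距
    return np.clip(out, 0, 255).astype(np.uint8)
```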
需要说明的是,本申请实施例还可以采用其它方式获取与一张或多张局部图像对应的一组或多组视觉感官体验参数,对此不做具体限定。
步骤1406、解码侧根据视觉感官体验参数对待放大区域进行处理以得到经处理的局部图像。
步骤1406可以参照图7所示实施例的步骤707,此处不再赘述。
本申请实施例，编码侧在采集到待处理图像后，直接编码待处理图像，无需获取其多张备选局部图像，以及各自的视觉感官体验参数，这样可以减少码流的占用。而解码侧可以在根据用户操作确定待放大区域后，基于待放大区域得到对应的一张或多张局部图像，再基于预设规则得到一组或多组视觉感官体验参数，从而基于这些参数对前述局部图像进行处理从而得到经处理的局部图像，该经处理的局部图像呈现的画面仿真（逼近或者增强）人眼感知待放大区域的真实场景的视觉感官体验，仿佛人真的走进待放大区域对应的真实场景时所看到的画面，一方面解决放大后的图像存在不清晰、图像模糊的问题，另一方面可以提升用户进行视觉放大的体验。
结合图7或图13所示实施例,本申请实施例可以由编码侧先对多张备选局部图像进行处理,然后将待处理图像、多张经处理的备选局部图像以及多组视觉感官体验参数编入码流,传输至解码侧。此时,视觉感官体验参数可以分成两部分,一部分给编码侧使用,用于对多张备选局部图像进行处理,这样多张经处理的备选局部图像是已经从亮度、对比 度、颜色和细节中的至少其一进行了调整,得到较好的效果;另一部分编入码流传输至解码侧使用,也是用于对多张备选局部图像进行处理,这样再经过解码侧的处理,可以使得经处理的局部图像更符合显示端的需求,达到最好的显示效果,相应的,传输至解码侧的视觉感官体验参数可以包括对亮度、对比度等。解码侧解析码流,重建得到待处理的图像、多张经处理的备选局部图像以及多组视觉感官体验参数,采用多组视觉感官体验参数再次对多张经处理的备选局部图像进行处理得到最终的经处理的局部图像。
图15为本申请实施例解码装置1500的一个示例性的结构示意图,如图15所示,本实施例的装置1500可以应用于解码侧设备,也可以应用于具备解码侧功能的电子设备,该电子设备还具有编码侧功能。该装置1500可以包括:获取模块1501和处理模块1502。
可选的,获取模块1501,用于获取待处理图像;获取放大操作指令,所述放大操作指令用于指示所述待处理图像中的待放大区域,所述待放大区域对应一张或多张局部图像;获取与所述一张或多张局部图像对应的一组或多组视觉感官体验参数;处理模块1502,用于根据所述一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像。
在一种可能的实现方式中,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;所述处理模块1502,具体用于执行以下至少一种操作:当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
在一种可能的实现方式中,所述处理模块1502,具体用于当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
在一种可能的实现方式中,所述处理模块1502,具体用于获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;根据所述多张参考图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述处理模块1502,具体用于获取与所述待处理图像的相似度超过预设阈值的多张历史图像;根据所述多张历史图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述经处理的局部图像呈现的画面仿真人眼感知所述待放大区域的真实场景的视觉感官体验。
在一种可能的实现方式中,所述放大操作指令是由用户的两指在所述待放大区域进行的向外滑动操作产生的;或者,所述放大操作指令是由用户的两指在所述待放大区域进行的点击操作产生的。
在一种可能的实现方式中,所述获取模块1501,具体用于对获取的码流进行解码以得 到所述一组或多组视觉感官体验参数。
在一种可能的实现方式中,所述获取模块1501,具体用于对获取的码流进行可分级视频解码以得到所述待处理图像;或者,对获取的图像文件进行图像解压缩以得到所述待处理图像。
在一种可能的实现方式中,所述处理模块1502,还用于显示所述经处理的局部图像;或者,存储所述经处理的局部图像。
在一种可能的实现方式中,所述获取模块1501,还用于获取放大终止指令,所述放大终止指令是由用户的两指在所述经处理的局部图像上进行的向内滑动操作产生的,或者,所述放大终止指令是由用户的单指在所述经处理的局部图像上进行的点击操作产生的;所述处理模块1502,还用于根据所述放大终止指令显示所述待处理图像。
可选的,获取模块1501,用于获取待处理图像;获取放大操作指令,所述放大操作指令用于指示所述待处理图像中的待放大区域;根据预设规则获取与所述待放大区域对应的视觉感官体验参数;处理模块1502,用于根据所述视觉感官体验参数对所述待放大区域进行处理以得到经处理的局部图像。
在一种可能的实现方式中,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;所述处理模块1502,具体用于执行以下至少一种操作:当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
在一种可能的实现方式中,所述处理模块1502,具体用于当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
在一种可能的实现方式中,所述处理模块1502,具体用于获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;根据所述多张参考图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述处理模块1502,具体用于获取与所述待处理图像的相似度超过预设阈值的多张历史图像;根据所述多张历史图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述经处理的局部图像呈现的画面仿真人眼感知所述待放大区域的真实场景的视觉感官体验。
在一种可能的实现方式中,所述放大操作指令是由用户的两指在所述待放大区域进行的向外滑动操作产生的;或者,所述放大操作指令是由用户的两指在所述待放大区域进行的点击操作产生的。
在一种可能的实现方式中,所述获取模块1501,具体用于根据第一预设规则对所述待 处理图像进行划分以得到多张备选局部图像;获取与所述待放大区域对应的一张或多张局部图像,所述多张备选局部图像包括所述一张或多张局部图像;根据第二预设规则获取与所述一张或多张局部图像对应的一组或多组视觉感官体验参数;其中,所述预设规则包括所述第一预设规则和所述第二预设规则。
在一种可能的实现方式中,所述获取模块1501,具体用于对获取的码流进行可分级视频解码以得到所述待处理图像;或者,对获取的图像文件进行图像解压缩以得到所述待处理图像。
在一种可能的实现方式中,所述处理模块1502,还用于显示所述经处理的局部图像;或者,存储所述经处理的局部图像。
在一种可能的实现方式中,所述获取模块1501,还用于获取放大终止指令,所述放大终止指令是由用户的两指在所述经处理的局部图像上进行的向内滑动操作产生的,或者,所述放大终止指令是由用户的单指在所述经处理的局部图像上进行的点击操作产生的;所述处理模块1502,还用于根据所述放大终止指令显示所述待处理图像。
本实施例的装置,可以用于执行图7、图13或图14所示方法实施例中解码侧的技术方案,其实现原理和技术效果类似,此处不再赘述。
图16为本申请实施例编码装置1600的一个示例性的结构示意图,如图16所示,本实施例的装置1600可以应用于编码侧设备,也可以应用于具备编码侧功能的电子设备,该电子设备还具有解码侧功能。该装置1600可以包括:获取模块1601、编码模块1602和处理模块1603。其中,
可选的,获取模块1601,获取待处理图像;获取多组视觉感官体验参数;编码模块1602,用于对所述待处理图像和所述多组视觉感官体验参数进行编码。
可选的,获取模块1601,用于获取待处理图像;对所述待处理图像进行划分得到多张备选局部图像;获取多组视觉感官体验参数,所述多组视觉感官体验参数和所述多张备选局部图像对应;处理模块1603,用于根据所述多组视觉感官体验参数分别对对应的备选局部图像进行处理以得到多张经处理的备选局部图像;编码模块1602,用于对所述待处理图像和所述多张经处理的备选局部图像进行编码。
在一种可能的实现方式中,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;所述处理模块1603,具体用于执行以下至少一种操作:当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;其中,所述对应局部图像是所述多张备选局部图像的其中之一。
在一种可能的实现方式中,所述处理模块1603,具体用于当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;其中,所述对应局部图像是所述多张备选局部图像的其中之一。
在一种可能的实现方式中,所述处理模块1603,具体用于获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;根据所述多张参考图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述处理模块1603,具体用于获取与所述待处理图像的相似度超过预设阈值的多张历史图像;根据所述多张历史图像对所述对应局部图像进行细节调节。
在一种可能的实现方式中,所述获取模块1601,具体用于根据第三预设规则获取所述多组视觉感官体验参数。
在一种可能的实现方式中,所述编码模块1602,具体用于对所述待处理图像和所述多张经处理的备选局部图像进行可分级视频编码以得到码流;或者,对所述待处理图像和所述多张经处理的备选局部图像进行图像压缩以得到图像文件。
本实施例的装置,可以用于执行图7、图13或图14所示方法实施例中的编码侧的技术方案,其实现原理和技术效果类似,此处不再赘述。
在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。处理器可以是通用处理器、数字信号处理器(digital signal processor,DSP)、特定应用集成电路(application-specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。本申请实施例公开的方法的步骤可以直接体现为硬件编码处理器执行完成,或者用编码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
上述各实施例中提及的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本 申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (44)

  1. 一种图像处理方法,其特征在于,包括:
    获取待处理图像;
    获取放大操作指令,所述放大操作指令用于指示所述待处理图像中的待放大区域,所述待放大区域对应一张或多张局部图像;
    获取与所述一张或多张局部图像对应的一组或多组视觉感官体验参数;
    根据所述一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像。
  2. 根据权利要求1所述的方法,其特征在于,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;
    所述根据所述一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像包括以下至少一种:
    当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;
    当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;
    当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;
    当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;
    其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
  3. 根据权利要求1或2所述的方法,其特征在于,所述根据所述一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像,包括:
    当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,
    当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,
    当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;
    其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
  4. 根据权利要求2所述的方法,其特征在于,所述对所述对应局部图像进行细节调节,包括:
    获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;
    根据所述多张参考图像对所述对应局部图像进行细节调节。
  5. 根据权利要求2所述的方法,其特征在于,所述对所述对应局部图像进行细节调节,包括:
    获取与所述待处理图像的相似度超过预设阈值的多张历史图像;
    根据所述多张历史图像对所述对应局部图像进行细节调节。
  6. 权利要求1-5中任一项所述的方法,其特征在于,所述经处理的局部图像呈现的画面仿真人眼感知所述待放大区域的真实场景的视觉感官体验。
  7. 权利要求1-6中任一项所述的方法,其特征在于,所述放大操作指令是由用户的两 指在所述待放大区域进行的向外滑动操作产生的;或者,所述放大操作指令是由用户的两指在所述待放大区域进行的点击操作产生的。
  8. 根据权利要求1-7中任一项所述的方法,其特征在于,所述获取与所述一张或多张局部图像对应的一组或多组视觉感官体验参数,包括:
    对获取的码流进行解码以得到所述一组或多组视觉感官体验参数。
  9. 根据权利要求1-7中任一项所述的方法,其特征在于,所述获取与所述一张或多张局部图像对应的一组或多组视觉感官体验参数,包括:
    根据预设规则获取所述一组或多组视觉感官体验参数。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,所述获取待处理图像,包括:
    对获取的码流进行可分级视频解码以得到所述待处理图像;或者,
    对获取的图像文件进行图像解压缩以得到所述待处理图像。
  11. 根据权利要求1-10中任一项所述的方法,其特征在于,所述根据所述一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像之后,还包括:
    显示所述经处理的局部图像;或者,
    存储所述经处理的局部图像。
  12. 根据权利要求1-11中任一项所述的方法,其特征在于,所述方法还包括:
    获取放大终止指令,所述放大终止指令是由用户的两指在所述经处理的局部图像上进行的向内滑动操作产生的,或者,所述放大终止指令是由用户的单指在所述经处理的局部图像上进行的点击操作产生的;
    根据所述放大终止指令显示所述待处理图像。
  13. 一种图像处理方法,其特征在于,包括:
    获取待处理图像;
    获取多组视觉感官体验参数;
    对所述待处理图像和所述多组视觉感官体验参数进行编码。
  14. 一种图像处理方法,其特征在于,包括:
    获取待处理图像;
    对所述待处理图像进行划分得到多张备选局部图像;
    获取多组视觉感官体验参数,所述多组视觉感官体验参数和所述多张备选局部图像对应;
    根据所述多组视觉感官体验参数分别对对应的备选局部图像进行处理以得到多张经处理的备选局部图像;
    对所述待处理图像和所述多张经处理的备选局部图像进行编码。
  15. 根据权利要求14所述的方法,其特征在于,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;
    所述根据所述多组视觉感官体验参数分别对对应的备选局部图像进行处理以得到多张经处理的备选局部图像包括以下至少一种:
    当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;
    当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;
    当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;
    当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;
    其中,所述对应局部图像是所述多张备选局部图像的其中之一。
  16. 根据权利要求14或15所述的方法,其特征在于,所述根据所述多组视觉感官体验参数分别对对应的备选局部图像进行处理以得到多张经处理的备选局部图像,包括:
    当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,
    当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,
    当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;
    其中,所述对应局部图像是所述多张备选局部图像的其中之一。
  17. 根据权利要求15所述的方法,其特征在于,所述对所述对应局部图像进行细节调节,包括:
    获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;
    根据所述多张参考图像对所述对应局部图像进行细节调节。
  18. 根据权利要求15所述的方法,其特征在于,所述对所述对应局部图像进行细节调节,包括:
    获取与所述待处理图像的相似度超过预设阈值的多张历史图像;
    根据所述多张历史图像对所述对应局部图像进行细节调节。
  19. 根据权利要求14-18中任一项所述的方法,其特征在于,所述获取多组视觉感官体验参数,包括:
    根据第三预设规则获取所述多组视觉感官体验参数。
  20. 根据权利要求14-19中任一项所述的方法,其特征在于,所述对所述待处理图像和所述多张经处理的备选局部图像进行编码,包括:
    对所述待处理图像和所述多张经处理的备选局部图像进行可分级视频编码以得到码流;或者,
    对所述待处理图像和所述多张经处理的备选局部图像进行图像压缩以得到图像文件。
  21. 一种解码装置,其特征在于,包括:
    获取模块,用于获取待处理图像;获取放大操作指令,所述放大操作指令用于指示所述待处理图像中的待放大区域,所述待放大区域对应一张或多张局部图像;获取与所述一张或多张局部图像对应的一组或多组视觉感官体验参数;
    处理模块,用于根据所述一组或多组视觉感官体验参数分别对对应的局部图像进行处理以得到经处理的局部图像。
  22. 根据权利要求21所述的装置,其特征在于,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;
    所述处理模块,具体用于执行以下至少一种操作:
    当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;
    当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;
    当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;
    当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;
    其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
  23. 根据权利要求21或22所述的装置,其特征在于,所述处理模块,具体用于当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;其中,所述对应局部图像是所述一张或多张局部图像的其中之一。
  24. 根据权利要求22所述的装置,其特征在于,所述处理模块,具体用于获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;根据所述多张参考图像对所述对应局部图像进行细节调节。
  25. 根据权利要求22所述的装置,其特征在于,所述处理模块,具体用于获取与所述待处理图像的相似度超过预设阈值的多张历史图像;根据所述多张历史图像对所述对应局部图像进行细节调节。
  26. 权利要求21-25中任一项所述的装置,其特征在于,所述经处理的局部图像呈现的画面仿真人眼感知所述待放大区域的真实场景的视觉感官体验。
  27. 权利要求21-26中任一项所述的装置,其特征在于,所述放大操作指令是由用户的两指在所述待放大区域进行的向外滑动操作产生的;或者,所述放大操作指令是由用户的两指在所述待放大区域进行的点击操作产生的。
  28. 根据权利要求21-27中任一项所述的装置,其特征在于,所述获取模块,具体用于对获取的码流进行解码以得到所述一组或多组视觉感官体验参数。
  29. 根据权利要求21-27中任一项所述的装置,其特征在于,所述获取模块,具体用于根据预设规则获取所述一组或多组视觉感官体验参数。
  30. 根据权利要求21-29中任一项所述的装置,其特征在于,所述获取模块,具体用于对获取的码流进行可分级视频解码以得到所述待处理图像;或者,对获取的图像文件进行图像解压缩以得到所述待处理图像。
  31. 根据权利要求21-30中任一项所述的装置,其特征在于,所述处理模块,还用于显示所述经处理的局部图像;或者,存储所述经处理的局部图像。
  32. 根据权利要求21-31中任一项所述的装置,其特征在于,所述获取模块,还用于获取放大终止指令,所述放大终止指令是由用户的两指在所述经处理的局部图像上进行的向内滑动操作产生的,或者,所述放大终止指令是由用户的单指在所述经处理的局部图像上进行的点击操作产生的;
    所述处理模块,还用于根据所述放大终止指令显示所述待处理图像。
  33. 一种编码装置,其特征在于,包括:
    获取模块,获取待处理图像;获取多组视觉感官体验参数;
    编码模块,用于对所述待处理图像和所述多组视觉感官体验参数进行编码。
  34. 一种编码装置,其特征在于,包括:
    获取模块,用于获取待处理图像;对所述待处理图像进行划分得到多张备选局部图像;获取多组视觉感官体验参数,所述多组视觉感官体验参数和所述多张备选局部图像对应;
    处理模块,用于根据所述多组视觉感官体验参数分别对对应的备选局部图像进行处理以得到多张经处理的备选局部图像;
    编码模块,用于对所述待处理图像和所述多张经处理的备选局部图像进行编码。
  35. 根据权利要求34所述的装置,其特征在于,所述视觉感官体验参数包括亮度参数、对比度参数、颜色参数和细节参数中的至少之一;
    所述处理模块,具体用于执行以下至少一种操作:
    当所述视觉感官体验参数包括亮度参数时,对对应局部图像进行亮度调节;
    当所述视觉感官体验参数包括对比度参数时,对所述对应局部图像进行对比度调节;
    当所述视觉感官体验参数包括颜色参数时,对所述对应局部图像进行颜色调节;
    当所述视觉感官体验参数包括细节参数时,对所述对应局部图像进行细节调节;
    其中,所述对应局部图像是所述多张备选局部图像的其中之一。
  36. 根据权利要求34或35所述的装置,其特征在于,所述处理模块,具体用于当对应局部图像对应图像暗区时,对所述对应局部图像进行亮度提升、对比度提升、颜色适应暗区和欠曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像亮区时,对所述对应局部图像进行亮度降低、对比度降低、颜色适应亮区和过曝细节增加中的至少一个处理;或者,当所述对应局部图像对应图像主体区域时,对所述对应局部图像进行颜色适应主体的处理;其中,所述对应局部图像是所述多张备选局部图像的其中之一。
  37. 根据权利要求35所述的装置,其特征在于,所述处理模块,具体用于获取多张参考图像,所述多张参考图像和所述待处理图像是多个摄像头针对同一场景拍摄得到的;根据所述多张参考图像对所述对应局部图像进行细节调节。
  38. 根据权利要求35所述的装置,其特征在于,所述处理模块,具体用于获取与所述待处理图像的相似度超过预设阈值的多张历史图像;根据所述多张历史图像对所述对应局部图像进行细节调节。
  39. 根据权利要求34-38中任一项所述的装置,其特征在于,所述获取模块,具体用于根据第三预设规则获取所述多组视觉感官体验参数。
  40. 根据权利要求34-39中任一项所述的装置,其特征在于,所述编码模块,具体用于对所述待处理图像和所述多张经处理的备选局部图像进行可分级视频编码以得到码流;或者,对所述待处理图像和所述多张经处理的备选局部图像进行图像压缩以得到图像文件。
  41. 一种解码器,其特征在于,包括:
    一个或多个处理器;
    非瞬时性计算机可读存储介质,耦合到所述处理器并存储由所述处理器执行的程序,其中所述程序在由所述处理器执行时,使得所述解码器执行根据权利要求1-12中任一项所述的方法。
  42. 一种编码器,其特征在于,包括:
    一个或多个处理器;
    非瞬时性计算机可读存储介质,耦合到所述处理器并存储由所述处理器执行的程序,其中所述程序在由所述处理器执行时,使得所述编码器执行根据权利要求13-20中任一项所述的方法。
  43. 一种非瞬时性计算机可读存储介质,其特征在于,包括程序代码,当其由计算机设备执行时,用于执行根据权利要求1-20中任一项所述的方法。
  44. 一种非瞬时性存储介质,其特征在于,包括根据权利要求1-20中任一项所述的方法编码的比特流。
PCT/CN2022/133761 2021-11-27 2022-11-23 图像处理方法和装置 WO2023093768A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111449229.X 2021-11-27
CN202111449229.XA CN116188603A (zh) 2021-11-27 2021-11-27 图像处理方法和装置

Publications (1)

Publication Number Publication Date
WO2023093768A1 true WO2023093768A1 (zh) 2023-06-01

Family

ID=86431282

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/133761 WO2023093768A1 (zh) 2021-11-27 2022-11-23 图像处理方法和装置

Country Status (2)

Country Link
CN (1) CN116188603A (zh)
WO (1) WO2023093768A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101592729A (zh) * 2009-07-13 2009-12-02 中国船舶重工集团公司第七○九研究所 基于目标细节的雷达ppi图像局部放大显示装置和方法
WO2016063234A1 (en) * 2014-10-22 2016-04-28 Koninklijke Philips N.V. Sub-viewport location, size, shape and/or orientation
CN111489294A (zh) * 2017-03-27 2020-08-04 侯苏华 一种图像放大的处理方法
CN107153500A (zh) * 2017-04-21 2017-09-12 努比亚技术有限公司 一种实现图像显示的方法及设备
CN111324270A (zh) * 2020-02-24 2020-06-23 北京字节跳动网络技术有限公司 图像处理方法、组件、电子设备及存储介质

Also Published As

Publication number Publication date
CN116188603A (zh) 2023-05-30

Similar Documents

Publication Publication Date Title
CN113498605A (zh) 编码器、解码器及使用自适应环路滤波器的相应方法
WO2020211765A1 (en) An encoder, a decoder and corresponding methods harmonzting matrix-based intra prediction and secoundary transform core selection
WO2021109978A1 (zh) 视频编码的方法、视频解码的方法及相应装置
JP7314281B2 (ja) イントラ・サブパーティション・コーディング・ツールによって引き起こされるサブパーティション境界のためのデブロッキングフィルタ
JP7277586B2 (ja) モードおよびサイズに依存したブロックレベル制限の方法および装置
WO2020151768A1 (en) An encoder, a decoder and corresponding methods of deblocking filter adaptation
CN113785573A (zh) 编码器、解码器和使用自适应环路滤波器的对应方法
CA3114341A1 (en) An encoder, a decoder and corresponding methods using compact mv storage
CA3165820A1 (en) An encoder, a decoder and corresponding methods for adaptive loop filtering
WO2022121770A1 (zh) 增强层编解码方法和装置
WO2023072068A1 (zh) 图像编解码方法和装置
WO2023093768A1 (zh) 图像处理方法和装置
WO2020114393A1 (zh) 变换方法、反变换方法以及视频编码器和视频解码器
CN114846789A (zh) 用于指示条带的图像分割信息的解码器及对应方法
CN113228632A (zh) 用于局部亮度补偿的编码器、解码器、以及对应方法
WO2024093317A1 (zh) 图像分层编码方法和装置
WO2024093312A1 (zh) 图像编解码方法和装置
WO2024093305A1 (zh) 图像编解码方法和装置
WO2023160470A1 (zh) 编解码方法和装置
WO2023173916A1 (zh) 编解码方法和装置
WO2023173929A1 (zh) 编解码方法和装置
WO2024093994A1 (zh) 编解码方法和装置
CN113766227B (zh) 用于图像编码和解码的量化和反量化方法及装置
WO2020140889A1 (zh) 量化、反量化方法及装置
CN114650425A (zh) 视频帧格式转换方法和装置

Legal Events

Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22897851; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2022897851; Country of ref document: EP; Effective date: 20240429)