WO2021104079A1 - Video processing method and apparatus, storage medium, and computer device - Google Patents

Video processing method and apparatus, storage medium, and computer device

Info

Publication number
WO2021104079A1
WO2021104079A1 (PCT/CN2020/129077; CN2020129077W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
color attribute
pixel
yuv
Prior art date
Application number
PCT/CN2020/129077
Other languages
English (en)
French (fr)
Inventor
周雷
Original Assignee
深圳市万普拉斯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市万普拉斯科技有限公司
Publication of WO2021104079A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 - Details of colour television systems
    • H04N 9/64 - Circuits for processing colour signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 - Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 - Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440218 - Processing of video elementary streams involving reformatting operations by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 - Details of colour television systems
    • H04N 9/64 - Circuits for processing colour signals
    • H04N 9/646 - Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters

Definitions

  • the embodiments of the present disclosure aim to provide a video processing method, apparatus, storage medium, and computer device that help improve the visual effect of a video.
  • embodiments of the present disclosure also provide a video processing device, including:
  • a video acquisition module configured to acquire an original video to be processed, the original video including first YUV images in a universal bandwidth compression format;
  • An image processing module configured to perform color enhancement processing on the second YUV image to obtain a third YUV image in a linear format
  • the second conversion module is configured to convert the third YUV image from a linear format to a universal bandwidth compression format to obtain a fourth YUV image in a universal bandwidth compression format;
  • the image replacement module is configured to replace the corresponding first YUV image in the original video with the fourth YUV image to obtain a processed video.
  • embodiments of the present disclosure also provide an image processing method, including:
  • selecting a target vector from the target image, wherein the target vector includes a preset number of target pixels;
  • after the new color attribute values of all pixels in the target image are obtained, the new color attribute values are used to replace the corresponding initial color attribute values to obtain the processed image.
  • the target image acquisition module is configured to acquire the target image to be processed
  • the relevant pixel point determination module is configured to determine the relevant pixel point corresponding to each target pixel point in the target vector, and the relevant pixel point is a neighboring pixel point of the target pixel point;
  • the color attribute reconstruction module is configured to reconstruct the color attribute of each target pixel in the target vector according to the initial color attribute value of the target pixel and the corresponding initial color attribute values of the relevant pixels, to obtain the new color attribute value of each target pixel;
  • the color attribute replacement module is configured to, after obtaining the new color attribute values of all pixels in the target image, replace the corresponding initial color attribute values with the new color attribute values to obtain the processed image.
  • an embodiment of the present disclosure provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the above video processing method, or the steps of the above image processing method, when executing the computer program.
  • the embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above video processing method, or the steps of the above image processing method, are implemented.
  • the image in the original video undergoes compression-format conversion so that the converted image can be processed directly, which facilitates color enhancement processing; the compression-format conversion is then performed again so that the resulting image is in the universal bandwidth compression format, which facilitates video playback.
  • the color enhancement processing of the images in the video helps improve the visual effect of the video.
  • FIG. 1 is a schematic flowchart of an image processing method in an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of the arrangement of some pixels in a target image in an embodiment of the present disclosure
  • Figure 3 is a schematic flowchart, in an embodiment of the present disclosure, of performing color attribute reconstruction on each target pixel in the target vector according to the initial color attribute value of the target pixel and the corresponding initial color attribute values of the relevant pixels to obtain the new color attribute value of each target pixel;
  • FIG. 4 is a schematic diagram of the structure of an image processing device in an embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of a video processing method in an embodiment of the present disclosure.
  • FIG. 6 is a schematic flow chart of performing color enhancement processing on a second YUV image to obtain a third YUV image in a linear format in an embodiment of the present disclosure
  • FIG. 7 is a schematic flowchart of a method involving each buffer space in an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a video processing device in an embodiment of the present disclosure.
  • Fig. 9 is an internal structure diagram of a computer device in an embodiment of the present disclosure.
  • the embodiment of the present disclosure proposes an image processing method, which is mainly used to implement the calculation of the Laplacian operator of an image.
  • in the related art, image processing is generally implemented on a central processing unit (CPU, Central Processing Unit) or a graphics processing unit (GPU, Graphics Processing Unit).
  • the CPU can only process a single target object (such as a pixel) at a time, which leads to slow processing speed.
  • although the GPU can process multiple target objects at the same time, it has the problem of high power consumption.
  • therefore, neither of the above two implementations is suitable for mobile terminals such as mobile phones.
  • the image processing method in the embodiment of the present disclosure implements the calculation of the Laplacian operator on a computer digital signal processor (CDSP, Computer Digital Signal Processor). It has the characteristics of fast processing speed and low power consumption, so it can be used, for example, for image or video processing on mobile terminals such as mobile phones.
  • an image processing method which can be applied to CDSP, and the image processing method includes the following steps:
  • Step S110 Obtain a target image to be processed.
  • the target image acquired by the CDSP can be a single image or a certain frame of a video. That is to say, the image processing method in this application can process either a single image or a video, which is not limited here.
  • step S120 a target vector is selected from the target image.
  • the target vector contains a preset number of target pixels, and CDSP can select multiple target vectors (target pixels) for processing at the same time.
  • each element of the target vector is a target pixel, and a target pixel is a pixel to be processed.
  • the CDSP selects the pixels to be processed by selecting a target vector.
  • the target vector is composed of a preset number of consecutive pixels, and the preset number can be specifically determined according to the working characteristics of the CDSP itself. After selecting the target vector, each pixel in the target vector can be used as the target pixel.
  • Step S130 Determine the relevant pixel point corresponding to each target pixel point in the target vector.
  • the relevant pixels are the neighboring pixels of the target pixel, where a neighboring pixel can be understood as any other pixel in the nine-square (3×3) grid centered on the target pixel.
  • the relevant pixels may specifically be the 4-neighborhood or 8-neighborhood pixels of the target pixel: the 4-neighborhood pixels are the four pixels above, below, to the left of, and to the right of the central pixel in the nine-square grid, while the 8-neighborhood pixels are the eight pixels above, below, left, right, top-left, top-right, bottom-left, and bottom-right of the central pixel.
  • the CDSP can select a vector composed of pixels D4, D5, and D6 as the target vector.
  • the 4-neighborhood pixels of D4, namely C4, D3, D5, and E4, can be used as the relevant pixels of D4; or the 8-neighborhood pixels of D4, namely C3, C4, C5, D3, D5, E3, E4, and E5, can be used as the relevant pixels of D4.
  • the preset number of 3 is only an example; the preset number can be set according to the actual situation during processing.
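As a concrete sketch of the neighborhood definitions above, the following snippet computes the 4- and 8-neighborhood labels of a pixel such as D4 on the Figure 2 grid (A1..E7). The helper name is illustrative, not from the patent:

```python
# Hypothetical helper illustrating the 4- and 8-neighborhoods described above.
# Labels follow the Figure 2 grid convention (row letter + column number).

def neighbors(label, mode=4):
    """Return the neighborhood labels of a pixel such as 'D4'."""
    row, col = label[0], int(label[1:])
    r = ord(row) - ord('A')
    if mode == 4:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-neighborhood: the full 3x3 grid minus the center
        offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)]
    return sorted(chr(ord('A') + r + dr) + str(col + dc) for dr, dc in offsets)

# 4-neighborhood of D4: C4, D3, D5, E4 (as stated in the text)
print(neighbors('D4', 4))   # ['C4', 'D3', 'D5', 'E4']
# 8-neighborhood of D4: C3, C4, C5, D3, D5, E3, E4, E5
print(neighbors('D4', 8))
```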
  • Step S140 Perform color attribute reconstruction on each target pixel according to the initial color attribute value of the target pixel in the target vector and the corresponding initial color attribute value of the relevant pixel to obtain the new color attribute value of each target pixel.
  • the color attribute value refers to the attribute value of the pixel in different color coding formats.
  • the color coding format can be RGB (R for red channel, G for green channel, B for blue channel), HSV (H for hue, S for Saturation, V represents lightness), YUV (Y represents brightness, U and V represent chromaticity), etc.
  • Step S150 after obtaining the new color attribute values of all pixels in the target image, replace the corresponding initial color attribute values with the new color attribute values to obtain the processed image.
  • the CDSP performs the Laplacian calculation on all pixels in the target image, obtains the new color attribute values of all pixels according to the calculation results, and replaces the color attribute value of each pixel accordingly to obtain the processed image.
  • after the CDSP obtains the new color attribute values of some pixels, it does not immediately replace the color attribute values of those pixels; instead, it continues to select the next target vector of target pixels from the target image and processes it according to the foregoing steps S130 to S150, until the color attribute values of all pixels have been replaced with new values.
  • multiple target pixels are processed in parallel at the same time. Taking the target pixels in Figure 2 as an example, while D4 is being processed, D5 and D6 are processed at the same time; since the color attribute value of D5 is needed when processing D4, it is uniformly specified that the Laplacian calculation always uses the initial color attribute values of all pixels.
  • the image processing method provided in this embodiment performs color attribute reconstruction processing on a preset number of target pixels after the image is acquired.
  • the processing is realized based on the initial color attribute values of the target pixel and of the relevant pixels close to it, and the new color attribute value obtained by the reconstruction is then used as the color attribute value of the target pixel.
  • this process combines the color attribute values of the relevant pixels around the target pixel, which helps improve the contrast of the image;
  • moreover, the processing method of the embodiment of the present disclosure is implemented on a computer digital signal processor, which can process a preset number of target pixels at the same time, so the method of the present application has the characteristics of high speed and low power consumption.
  • selecting the target vector from the target image includes: taking the first pixel in the target image as the starting pixel, and selecting a preset number of pixels along the pixel arrangement direction as the target pixels, to obtain the target vector.
  • the pixel arrangement direction may refer to the horizontal direction in the image.
  • a preset number of pixels are taken as the target pixels, forming the target vector.
  • the target pixels A1, A2, and A3 can be selected to form the target vector.
  • the selection of the target vector can be implemented by the HVX_Vector instruction.
  • selecting the target vector from the target image includes: after completing the processing of the current target vector, the method of aligned fetching is adopted: taking the pixel after the last pixel in the current target vector as the starting pixel, a preset number of pixels are selected along the pixel arrangement direction as the target pixels, obtaining the next target vector to be processed.
  • the current target vector is composed of target pixels D4, D5, and D6.
  • the method of aligned fetching is adopted, and D7, D8, and D9 are selected as the new target pixels to form the next target vector to be processed.
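The aligned-fetch selection above can be sketched as follows; the function name and the preset size of 3 (mirroring the D4..D6 example) are illustrative assumptions, since a real CDSP/HVX vector is much wider:

```python
# Sketch of aligned-fetch target-vector selection: starting from the first
# pixel, consecutive pixels are grouped into vectors of a preset size, and
# the next vector begins at the pixel after the last one just processed.

def select_target_vectors(pixels, preset=3):
    """Split a flat pixel sequence into consecutive target vectors."""
    return [pixels[i:i + preset] for i in range(0, len(pixels), preset)]

row_d = ['D1', 'D2', 'D3', 'D4', 'D5', 'D6', 'D7', 'D8', 'D9']
vectors = select_target_vectors(row_d)
# After processing ['D4', 'D5', 'D6'], the next aligned vector is ['D7', 'D8', 'D9']
print(vectors)
```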
  • step S130 determines the relevant pixel corresponding to each target pixel in the target vector, which may include steps 132A to 136A.
  • Step 132A: according to the pixel position of the target vector, the method of unaligned fetching is adopted; the vector obtained by shifting the target vector forward by one pixel is used as the first correlation vector, and the vector obtained by shifting the target vector backward by one pixel is used as the second correlation vector;
  • Step 134A: according to the pixel position of the target vector, the vector obtained by shifting the target vector forward by a first number of pixels is determined as the third correlation vector, and the vector obtained by shifting the target vector backward by the first number of pixels is determined as the fourth correlation vector, where the first number is the pixel width of the target image;
  • Step 136A: from the first, second, third, and fourth correlation vectors, the 4-neighborhood pixels of each target pixel are selected as the relevant pixels corresponding to that target pixel.
  • when determining the relevant pixels of the target pixels, instead of directly selecting single pixels, the CDSP first determines the correlation vectors of the target vector by unaligned fetching, and then selects the relevant pixels corresponding to each target pixel from the correlation vectors.
  • the correlation vector can be cached in the cache space, and when the relevant pixels need to be selected, the selection is read from the cache space.
  • for example, if the target vector includes target pixels D4, D5, and D6, then the first correlation vector includes pixels D3, D4, and D5; the second correlation vector includes pixels D5, D6, and D7; the third correlation vector includes pixels C4, C5, and C6; and the fourth correlation vector includes pixels E4, E5, and E6.
  • the correlation vectors for the forward and backward offsets can be obtained with the vmemu instruction, and the correlation vectors for the upward and downward offsets can be obtained with the stride instruction.
  • for the target pixel D4, the pixel D3 in the first correlation vector, the pixel D5 in the second correlation vector, the pixel C4 in the third correlation vector, and the pixel E4 in the fourth correlation vector are regarded as the relevant pixels of D4.
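The correlation-vector idea can be illustrated on a flattened image: shifting the target vector's start index by one pixel yields the horizontal neighbors of every target pixel at once, and shifting by the image width yields the pixels above and below. This is only a Python sketch of the idea; the patent performs the equivalent with unaligned vmemu loads and stride instructions on the CDSP:

```python
# Sketch of correlation vectors on a flattened image (rows A..E, width 7,
# matching the Figure 2 example). Names are illustrative, not from the patent.

def shifted_vector(flat, start, preset, offset):
    """Return the vector starting at `start + offset` (a correlation vector)."""
    s = start + offset
    return flat[s:s + preset]

width = 7                                    # pixel width of the target image
flat = [r + str(c) for r in 'ABCDE' for c in range(1, width + 1)]  # A1..E7
start = flat.index('D4')                     # target vector D4, D5, D6
target = shifted_vector(flat, start, 3, 0)

first = shifted_vector(flat, start, 3, -1)       # D3, D4, D5 (left neighbors)
second = shifted_vector(flat, start, 3, +1)      # D5, D6, D7 (right neighbors)
third = shifted_vector(flat, start, 3, -width)   # C4, C5, C6 (pixels above)
fourth = shifted_vector(flat, start, 3, +width)  # E4, E5, E6 (pixels below)
print(target, first, second, third, fourth)
```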
  • step S130 determines the relevant pixel corresponding to each target pixel in the target vector, including steps 132B to 136B.
  • Step 132B: according to the pixel position of the target vector, the method of unaligned fetching is adopted; the vector obtained by shifting the target vector forward by one pixel is used as the fifth correlation vector, and the vector obtained by shifting the target vector backward by one pixel is used as the sixth correlation vector;
  • Step 134B: the vector obtained by shifting the target vector forward by a second number of pixels is determined as the seventh correlation vector; the vector obtained by shifting the target vector forward by a third number of pixels is determined as the eighth correlation vector; the vector obtained by shifting the target vector backward by the second number of pixels is determined as the ninth correlation vector; and the vector obtained by shifting the target vector backward by the third number of pixels is determined as the tenth correlation vector, where the second number is the pixel width of the target image plus one, and the third number is the pixel width of the target image minus one;
  • Step 136B: from the fifth, sixth, seventh, eighth, ninth, and tenth correlation vectors, the 8-neighborhood pixels of each target pixel are selected as the relevant pixels corresponding to that target pixel.
  • for example, if the target vector includes target pixels D4, D5, and D6, the fifth correlation vector includes pixels D3, D4, and D5, and the sixth correlation vector includes pixels D5, D6, and D7; the seventh correlation vector includes pixels C3, C4, and C5; the eighth correlation vector includes pixels C5, C6, and C7; the ninth correlation vector includes pixels E3, E4, and E5; and the tenth correlation vector includes pixels E5, E6, and E7.
  • correspondingly, the pixel D3 in the fifth correlation vector, the pixel D5 in the sixth correlation vector, the pixels C3 and C4 in the seventh correlation vector, the pixel C5 in the eighth correlation vector, the pixels E3 and E4 in the ninth correlation vector, and the pixel E5 in the tenth correlation vector are used as the relevant pixels of the target pixel D4.
  • when determining the relevant pixels of a target pixel, if the target pixel is a boundary pixel of the target image, the target pixel itself is used in place of any relevant pixel position that would otherwise be empty.
  • in some embodiments, step S140, performing color attribute reconstruction on each target pixel according to the initial color attribute value of the target pixel in the target vector and the corresponding initial color attribute values of the relevant pixels to obtain the new color attribute value of each target pixel, includes steps S141 to S149.
  • Step S141 buffer the initial color attribute values of all pixels in the target image in the first buffer space
  • Step S143 Read the initial color attribute value of the target pixel and the corresponding initial color attribute value of the relevant pixel from the first buffer space;
  • Step S145 Calculate the sum of the initial color attribute values of related pixels to obtain a first calculation result
  • Step S146 Calculate the product of the initial color attribute value of the target pixel and the number of related pixels to obtain a second calculation result
  • Step S149 Perform a difference calculation on the first calculation result and the second calculation result, and obtain the new color attribute value of the target pixel according to the difference calculation result.
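The arithmetic of steps S145 to S149 can be written out directly; this is a plain restatement of the steps above with illustrative names:

```python
# The Laplacian of a target pixel, per steps S145-S149: the sum of the
# relevant pixels' initial values minus the center value multiplied by the
# neighbor count. All reads use initial values, matching the parallel-
# processing rule stated earlier.

def laplacian(center_value, neighbor_values):
    first = sum(neighbor_values)                  # step S145: sum of neighbors
    second = center_value * len(neighbor_values)  # step S146: center * count
    return first - second                         # step S149: difference

# 4-neighborhood example: center 100, neighbors 80, 110, 95, 105
print(laplacian(100, [80, 110, 95, 105]))  # -10
```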
  • the Laplacian can be calculated by the following formula (1):
  • the new color attribute value can be further obtained according to the Laplacian operator and the calculation formula corresponding to each color attribute value.
  • the new color lightness can be realized by the following formula (2):
  • V' is the new color brightness
  • Y is the difference between the initial color brightness and the Laplacian.
  • the new color saturation can be achieved by the following formula (3):
  • S' is the new color saturation
  • H is the hue in HSV
  • S is the initial color saturation
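Formulas (1) to (3) themselves are not reproduced in this text. As a hedged, generic illustration consistent with the statement that Y is the difference between the initial color brightness and the Laplacian, classic Laplacian sharpening subtracts the operator from the pixel value to raise local contrast. This is a standard technique, not necessarily the patent's exact formulas (2) and (3):

```python
# Generic Laplacian sharpening sketch, g = f - lap(f). An assumption for
# illustration; the patent's own formulas are omitted from this extraction.

def sharpen_value(center, neighbors):
    lap = sum(neighbors) - center * len(neighbors)
    # Clamp to a typical 8-bit range after subtracting the Laplacian.
    return max(0, min(255, center - lap))

# A bright pixel on a darker background becomes brighter (contrast increases):
print(sharpen_value(120, [100, 100, 100, 100]))  # 120 - (400 - 480) = 200
```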
  • the method further includes: caching, according to the arrangement order of the initial color attribute values in the first buffer space, the new color attribute value corresponding to the initial color attribute value of each target pixel into the second buffer space.
  • in this way, the calculation results can be saved without interfering with the parallel processing of the target pixels (which must use the initial color attribute values for calculation); in addition, caching in order ensures the correspondence between the new and initial color attribute values when the color attribute values are replaced, avoiding the problem of mixed-up replacement.
  • in some embodiments, replacing the corresponding initial color attribute values with the new color attribute values to obtain the processed image includes: sequentially reading the new color attribute values of the target pixels from the second buffer space and replacing the initial color attribute values of the target pixels; when the initial color attribute values of all target pixels have been replaced, the processed image is obtained.
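The two-buffer scheme can be sketched as follows: new values are appended to a second buffer in the same order as the initial values sit in the first buffer, and replacement happens only after every pixel has a new value, so all parallel reads see initial values. The helper names are illustrative:

```python
# Double-buffering sketch: compute from the first (initial-value) buffer,
# cache results in order in a second buffer, replace only at the end.

def process_image(first_buffer, new_value_fn):
    second_buffer = []
    for i, _ in enumerate(first_buffer):
        # Every computation reads only from first_buffer (initial values).
        second_buffer.append(new_value_fn(i, first_buffer))
    # Replacement happens after all new values are cached, in order.
    return second_buffer

initial = [10, 20, 30]
processed = process_image(initial, lambda i, buf: buf[i] + 1)
print(processed)   # [11, 21, 31]; `initial` itself is left unchanged
```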
  • an image processing device is provided, and the image processing device includes the following structure:
  • the target image acquisition module 110 is configured to acquire the target image to be processed
  • the target vector selection module 120 is configured to select a target vector from a target image, and the target vector includes a preset number of target pixels;
  • the relevant pixel point determining module 130 is configured to determine the relevant pixel point corresponding to each target pixel point in the target vector, and the relevant pixel point is a neighboring pixel point of the target pixel point;
  • the color attribute reconstruction module 140 is configured to reconstruct the color attribute of each target pixel in the target vector according to the initial color attribute value of the target pixel and the corresponding initial color attribute values of the relevant pixels, to obtain the new color attribute value of each target pixel;
  • the color attribute replacement module 150 is configured to, after obtaining the new color attribute values of all pixels in the target image, replace the corresponding initial color attribute values with the new color attribute values to obtain the processed image.
  • the target vector selection module 120 is configured to use the first pixel in the target image as the starting pixel, and select a preset number of pixels according to the pixel arrangement direction as the The target pixel point, the target vector is obtained.
  • the target vector selection module 120 is configured to, after processing the current target vector, adopt a method of aligning and fetching points, taking the next pixel point of the last pixel point in the current target vector as For the starting pixel, a preset number of pixels are selected as the target pixel according to the arrangement direction of the pixel, and the next target vector to be processed is obtained.
  • the color attribute reconstruction module 140 is configured to: buffer the initial color attribute values of all pixels in the target image into a first buffer space; read the initial color attribute value of a target pixel and the corresponding initial color attribute values of the relevant pixels from the first buffer space; calculate the sum of the initial color attribute values of the relevant pixels to obtain a first calculation result; calculate the product of the initial color attribute value of the target pixel and the number of relevant pixels to obtain a second calculation result; and perform a difference operation on the first and second calculation results, obtaining the new color attribute value of the target pixel according to the difference result.
  • the color attribute reconstruction module 140 is further configured to cache, according to the arrangement order of the initial color attribute values in the first buffer space, the new color attribute value corresponding to the initial color attribute value of each target pixel into the second buffer space.
  • the color attribute replacement module 150 is configured to sequentially read the new color attribute values of the target pixels from the second buffer space and replace the initial color attribute values of the target pixels; when the initial color attribute values of all target pixels have been replaced, the processed image is obtained.
  • Each module in the above-mentioned image processing device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • an embodiment of the present disclosure also provides a video processing method.
  • the video processing method can be applied to mobile terminals such as mobile phones.
  • the video processing method includes the following steps:
  • Step S210 Obtain the original video to be processed.
  • the original video includes first YUV images in the Universal Bandwidth Compression (UBWC) format; that is, each frame of the original video can be regarded as a first YUV image.
  • YUV is a color coding format, which is often used in various video processing components.
  • YUV is a kind of color space used to encode true color. Proper nouns such as Y'UV, YUV, YCbCr, and YPbPr can all be referred to as YUV, and their scopes overlap with one another.
  • Y represents brightness (Luminance or Luma), which is the grayscale value
  • U and V represent chrominance (Chrominance or Chroma), which describe the color and saturation of the image and are used to specify the color of a pixel.
  • when YUV encodes photos or videos, it takes human perception into account and allows the bandwidth of the chroma channels to be reduced.
  • Step S220 Convert the first YUV image from the universal bandwidth compression format to the linear format to obtain the second YUV image in the linear format.
  • the video data is the YUV image in the universal bandwidth compression format, which can save bandwidth.
  • however, a YUV image in the universal bandwidth compression format cannot be directly processed by image algorithms. Therefore, the first YUV image in the original video is first converted in compression format, that is, from the universal bandwidth compression format to the linear format, thereby obtaining the second YUV image in linear format.
  • the linear format can be understood as a common linear encoding format.
  • Step S230 Perform color enhancement processing on the second YUV image to obtain a third YUV image in a linear format.
  • the second YUV image in the linear format can be directly processed without changing the compression format, for example, color enhancement processing can be performed to improve the visual effect of the image, and the processed third YUV image can be obtained.
  • the color enhancement process does not change the compression format of the original image, the third YUV image is still in a linear format.
  • Step S240 Convert the third YUV image from a linear format to a universal bandwidth compression format to obtain a fourth YUV image in a universal bandwidth compression format;
  • the third YUV image is compressed again, that is, converted from the linear format to the universal bandwidth compression format, thereby obtaining the fourth YUV image in the universal bandwidth compression format.
  • Step S250 Use the fourth YUV image to replace the corresponding first YUV image in the original video to obtain a processed video.
  • the obtained fourth YUV images are used to replace the corresponding first YUV images; after all the YUV images are replaced, the processed video, which corresponds to the original video but has a better visual effect, is obtained.
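Steps S210 to S250 can be outlined as a pipeline. The conversion and enhancement functions below are identity placeholders (the real conversions go through platform APIs); only the control flow mirrors the method:

```python
# End-to-end sketch of steps S210-S250 with placeholder stage functions.
# These stand-ins are assumptions for illustration, not real conversions.

def ubwc_to_linear(frame):
    return frame          # placeholder for UBWC -> linear conversion (S220)

def color_enhance(frame):
    return frame          # placeholder for color enhancement (S230)

def linear_to_ubwc(frame):
    return frame          # placeholder for linear -> UBWC conversion (S240)

def process_video(original_video):
    processed = []
    for first_yuv in original_video:              # step S210
        second_yuv = ubwc_to_linear(first_yuv)    # step S220
        third_yuv = color_enhance(second_yuv)     # step S230
        fourth_yuv = linear_to_ubwc(third_yuv)    # step S240
        processed.append(fourth_yuv)              # step S250: replacement
    return processed

video = ['frame0', 'frame1', 'frame2']
print(process_video(video))
```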
  • the compression format conversion can be implemented through relevant application programming interfaces (API, Application Programming Interface), such as Ubwcdma.
  • the image in the original video undergoes compression-format conversion so that the converted image can be processed directly, thereby facilitating color enhancement; the compression-format conversion is then performed again so that the resulting image is in the universal bandwidth compression format, which facilitates video playback.
  • the color enhancement of the image in the video is helpful to improve the visual effect of the video.
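The pipeline in the steps above can be sketched as follows. This is a minimal illustration rather than the patented implementation: `ubwc_to_linear`, `enhance_colors`, and `linear_to_ubwc` are hypothetical stand-ins for the platform's format-conversion and enhancement routines (e.g. the Ubwcdma API mentioned above).

```python
def process_video(first_yuv_frames, ubwc_to_linear, enhance_colors, linear_to_ubwc):
    """Convert each UBWC frame to linear, enhance it, and re-compress it.

    The three callables are placeholders: any real implementation would
    bind them to the platform's codec and enhancement routines.
    """
    processed = []
    for first_yuv in first_yuv_frames:          # each first YUV image (step S210)
        second_yuv = ubwc_to_linear(first_yuv)  # UBWC -> linear (step S220)
        third_yuv = enhance_colors(second_yuv)  # color enhancement, stays linear (S230)
        fourth_yuv = linear_to_ubwc(third_yuv)  # linear -> UBWC (step S240)
        processed.append(fourth_yuv)            # replaces the first YUV image (S250)
    return processed
```

A caller would pass the real conversion routines in place of the placeholders; the loop structure itself mirrors steps S220 to S250.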
  • Converting the first YUV image from the universal bandwidth compression format to the linear format includes: converting the first YUV images one by one from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video.
  • The conversion may be performed frame by frame, that is, in the order of the first frame, the second frame, ..., the i-th frame, the (i+1)-th frame, ..., and the last frame. This processing order is consistent with the display order of the frames during video playback, which facilitates direct playback after the subsequent processing.
  • In another embodiment, converting the first YUV image from the universal bandwidth compression format to the linear format includes: simultaneously converting multiple frames of first YUV images from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video.
  • the method further includes: buffering the second YUV image in the first buffer space.
  • The compression format conversion of the first YUV images is performed frame by frame, or on multiple frames at the same time, to obtain second YUV images in the linear format, which are then buffered in the first buffer space.
  • During subsequent processing, the second YUV image to be processed can be read from the first buffer space and processed. Buffering the images provides temporary storage.
  • In addition, compared with saving to memory, the cache rate is faster, which can improve the efficiency of image processing.
  • Step S230, performing color enhancement processing on the second YUV image to obtain a third YUV image in the linear format, includes steps S231 to S239.
  • Step S231: Read the second YUV image from the first buffer space, convert the second YUV image from the YUV color coding format to the RGB color coding format to obtain an RGB image, and buffer the RGB image in the second buffer space;
  • Step S233: Read the RGB image from the second buffer space, convert the RGB image from the RGB color coding format to the HSV color coding format to obtain an HSV image, and buffer the HSV image in the third buffer space;
  • Step S235: Read the HSV image from the third buffer space, perform color enhancement processing on the HSV image to obtain an enhanced HSV image, and replace the HSV image in the third buffer space with the enhanced HSV image;
  • Step S237: Read the enhanced HSV image from the third buffer space, convert the enhanced HSV image from the HSV color coding format to the RGB color coding format to obtain an enhanced RGB image, and replace the RGB image in the second buffer space with the enhanced RGB image;
  • Step S239: Read the enhanced RGB image from the second buffer space, convert the enhanced RGB image from the RGB color coding format to the YUV color coding format to obtain the third YUV image, and replace the second YUV image in the first buffer space with the third YUV image.
  • In practice, other numbers of buffer spaces can also be used to buffer the images; for example, six different buffer spaces can be used to respectively buffer the six different images above, so that the image data originally saved in a buffer space does not need to be replaced.
  • When performing image processing, buffering the images provides temporary storage; in addition, compared with saving to memory, the cache rate is faster, thereby improving the efficiency of image processing.
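The color-space round trip of steps S231 to S239 can be illustrated per pixel with the standard library. This is a sketch under stated assumptions: `enhance_pixel` is a hypothetical name, the BT.601 full-range conversion matrix and the simple multiplicative saturation/lightness boost are illustrative choices that the patent does not fix.

```python
import colorsys

def enhance_pixel(y, u, v, boost=1.2):
    """Per-pixel version of the chain YUV -> RGB -> HSV -> enhance -> RGB -> YUV.

    Assumes values in [0, 1] with u/v centered on 0 and a BT.601 full-range
    matrix; the multiplicative `boost` stands in for the enhancement step.
    """
    clamp = lambda x: min(1.0, max(0.0, x))
    # YUV -> RGB (BT.601)
    r = clamp(y + 1.402 * v)
    g = clamp(y - 0.344136 * u - 0.714136 * v)
    b = clamp(y + 1.772 * u)
    # RGB -> HSV, boost saturation and value, HSV -> RGB
    h, s, val = colorsys.rgb_to_hsv(r, g, b)
    r, g, b = colorsys.hsv_to_rgb(h, clamp(s * boost), clamp(val * boost))
    # RGB -> YUV
    y2 = 0.299 * r + 0.587 * g + 0.114 * b
    u2 = (b - y2) / 1.772
    v2 = (r - y2) / 1.402
    return y2, u2, v2
```

In the method itself each intermediate image is written to its own buffer space; here the intermediates are simply local variables.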
  • Performing color enhancement processing on the HSV image in step S235 includes: using the color adjustment curve formula corresponding to the HSV image to adjust the saturation and lightness of the HSV image.
  • Determining the new color attribute value of a target pixel according to the initial color attribute value of each target pixel in the HSV image and the initial color attribute values of the neighborhood pixels corresponding to each target pixel includes: buffering the initial color attribute values of all pixels in the HSV image in a fourth buffer space; reading the initial color attribute value of the target pixel and the initial color attribute values of the corresponding neighborhood pixels from the fourth buffer space; calculating the sum of the initial color attribute values of the neighborhood pixels to obtain a first calculation result; calculating the product of the initial color attribute value of the target pixel and the number of neighborhood pixels to obtain a second calculation result; and performing a difference operation on the first calculation result and the second calculation result, the new color attribute value of the target pixel being obtained according to the difference result.
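The computation just described (first result = sum of the neighbors, second result = center value times the neighbor count, difference of the two) is the discrete Laplacian. A vectorized NumPy sketch is shown below; it is only an illustration, since the patent performs this on a CDSP with explicit buffer reads, and the replicate padding mirrors the "fill empty neighbors with the pixel itself" rule used for boundary pixels.

```python
import numpy as np

def laplacian_4(attr):
    """Discrete 4-neighborhood Laplacian of a 2-D color attribute array.

    First result: sum of the four neighbors; second result: 4x the center
    value; the returned difference is the Laplacian. Edge pixels reuse the
    border value via replicate padding.
    """
    p = np.pad(attr, 1, mode="edge")
    neighbor_sum = (p[:-2, 1:-1] + p[2:, 1:-1] +      # up + down
                    p[1:-1, :-2] + p[1:-1, 2:])       # left + right  (first result)
    center_scaled = 4 * attr                          # second result
    return neighbor_sum - center_scaled               # difference operation
```

For a single bright pixel on a dark background, the center gets a strong negative response and its neighbors a small positive one, which is what the subsequent enhancement formulas exploit.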
  • an embodiment of the present disclosure further provides a video processing device, including:
  • the video acquisition module 210 is configured to acquire the original video to be processed, and the original video includes each first YUV image in a universal bandwidth compression format;
  • the first conversion module 220 is configured to convert the first YUV image from a universal bandwidth compression format to a linear format to obtain a second YUV image in a linear format;
  • the image processing module 230 is configured to perform color enhancement processing on the second YUV image to obtain a third YUV image in a linear format;
  • the second conversion module 240 is configured to convert the third YUV image from a linear format to a universal bandwidth compression format to obtain a fourth YUV image in a universal bandwidth compression format;
  • the image replacement module 250 is configured to replace the corresponding first YUV image in the original video with the fourth YUV image to obtain the processed video.
  • The first conversion module 220 is configured to convert the first YUV image from the universal bandwidth compression format to the linear format in one of the following ways: converting the first YUV images one by one from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video;
  • or simultaneously converting multiple frames of first YUV images from the universal bandwidth compression format to the linear format.
  • the device further includes a cache module configured to cache the second YUV image in the first cache space.
  • The image processing module 230 is configured to read the second YUV image from the first buffer space, convert the second YUV image from the YUV color coding format to the RGB color coding format to obtain an RGB image, and buffer the RGB image in the second buffer space; read the RGB image from the second buffer space, convert the RGB image from the RGB color coding format to the HSV color coding format to obtain an HSV image, and buffer the HSV image in the third buffer space; read the HSV image from the third buffer space, perform color enhancement processing on the HSV image to obtain an enhanced HSV image, and replace the HSV image in the third buffer space with the enhanced HSV image; read the enhanced HSV image from the third buffer space, convert the enhanced HSV image from the HSV color coding format to the RGB color coding format to obtain an enhanced RGB image, and replace the RGB image in the second buffer space with the enhanced RGB image; and read the enhanced RGB image from the second buffer space, convert the enhanced RGB image from the RGB color coding format to the YUV color coding format to obtain the third YUV image, and replace the second YUV image in the first buffer space with the third YUV image.
  • the image processing module 230 is configured to use the color adjustment curve formula corresponding to the HSV image to adjust the saturation and lightness of the HSV image.
  • The image processing module 230 is configured to determine a new color attribute value of each target pixel according to the initial color attribute value of each target pixel in the HSV image and the initial color attribute values of the neighborhood pixels corresponding to each target pixel, the color attribute values including saturation and lightness, and to replace the initial color attribute value of the target pixel with the new color attribute value.
  • The image processing module 230 is further configured to buffer the initial color attribute values of all pixels in the HSV image in a fourth buffer space; read the initial color attribute value of the target pixel and the initial color attribute values of the corresponding neighborhood pixels from the fourth buffer space; calculate the sum of the initial color attribute values of the neighborhood pixels to obtain a first calculation result; calculate the product of the initial color attribute value of the target pixel and the number of neighborhood pixels to obtain a second calculation result; and perform a difference operation on the first calculation result and the second calculation result, the new color attribute value of the target pixel being obtained according to the difference result.
  • Each module in the above-mentioned video processing device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • The above modules may be embedded in hardware form in, or independent of, the processor of the computer device, or may be stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
  • the embodiments of the present disclosure also provide a computer device, including a memory and a processor, and a computer program is stored in the memory.
  • the processor implements the steps of the image processing method or the video processing method in the foregoing embodiments when the computer program is executed.
  • Fig. 9 shows an internal structure diagram of a computer device in an embodiment.
  • the computer device may specifically be a terminal (or server).
  • the computer device includes a processor, a memory, a network interface, an input apparatus, a camera, a sound collection apparatus, a speaker, and a display screen connected through a system bus.
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the computer device stores an operating system, and may also store a computer program.
  • when the computer program is executed by the processor, it causes the processor to implement an image processing method or a video processing method.
  • the internal memory may also store a computer program, and when the computer program is executed by the processor, the processor can execute the image processing method or the video processing method.
  • the display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen.
  • the input apparatus of the computer device may be a touch layer covering the display screen, or a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
  • FIG. 9 is only a block diagram of part of the structure related to the embodiments of the present disclosure and does not constitute a limitation on the computer device to which the embodiments of the present disclosure are applied.
  • A specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the image processing method or the video processing method in the foregoing embodiments are implemented.
  • any reference to memory, storage, database or other media used in the various embodiments provided in the present disclosure may include non-volatile and/or volatile memory.
  • Non-volatile memory can include read-only memory (ROM, Read Only Memory), programmable ROM (PROM, Programmable Read-Only Memory), erasable programmable ROM (EPROM, Erasable Programmable Read-Only Memory), electrically erasable programmable ROM (EEPROM, Electrically Erasable Programmable Read-Only Memory), or flash memory. Volatile memory may include random access memory (RAM, Random Access Memory) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM, Static Random Access Memory), dynamic RAM (DRAM, Dynamic Random Access Memory), synchronous DRAM (SDRAM, Synchronous Dynamic Random Access Memory), double data rate SDRAM (DDR SDRAM, Double Data Rate Synchronous Dynamic Random Access Memory), enhanced SDRAM (ESDRAM, Enhanced Synchronous Dynamic Random Access Memory), synchronous link DRAM (SLDRAM, SyncLink Dynamic Random Access Memory), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present disclosure disclose a video processing method and apparatus, a storage medium, and a computer device. An original video to be processed is acquired, the original video including first YUV images in a universal bandwidth compression format; the first YUV images are converted from the universal bandwidth compression format to a linear format to obtain second YUV images in the linear format; color enhancement processing is performed on the second YUV images to obtain third YUV images in the linear format; the third YUV images are converted from the linear format to the universal bandwidth compression format to obtain fourth YUV images in the universal bandwidth compression format; and the fourth YUV images are used to replace the corresponding first YUV images in the original video to obtain a processed video.

Description

Video processing method and apparatus, storage medium, and computer device
CROSS-REFERENCE TO RELATED APPLICATION
The present disclosure is based on, and claims priority to, Chinese patent application No. 201911181080.4 filed on November 27, 2019, the entire contents of which are incorporated into the present disclosure by reference.
TECHNICAL FIELD
The present disclosure relates to the technical field of video processing, and in particular to a video processing method and apparatus, a storage medium, and a computer device.
BACKGROUND
A video is a continuous picture composed of multiple frames of images. When the images change at a rate above 24 frames per second, according to the principle of persistence of vision, the human eye can no longer distinguish a single static picture, so the video appears smooth and continuous to the eye.
In real life, people usually watch videos through various video platforms on mobile terminals. However, existing video platforms mostly use video data in the universal bandwidth compression format (UBWC, Universal BandWidth Compression), and the video platforms on mobile terminals usually cannot process video data in this compression format, which lowers the visual effect of a video whose image quality is already low.
SUMMARY
Embodiments of the present disclosure are expected to provide a video processing method and apparatus, a storage medium, and a computer device that help improve the visual effect of a video.
In a first aspect, an embodiment of the present disclosure provides a video processing method, including:
acquiring an original video to be processed, the original video including first YUV images in a universal bandwidth compression format;
converting the first YUV images from the universal bandwidth compression format to a linear format to obtain second YUV images in the linear format;
performing color enhancement processing on the second YUV images to obtain third YUV images in the linear format;
converting the third YUV images from the linear format to the universal bandwidth compression format to obtain fourth YUV images in the universal bandwidth compression format; and
using the fourth YUV images to replace the corresponding first YUV images in the original video to obtain a processed video.
In a second aspect, an embodiment of the present disclosure further provides a video processing apparatus, including:
a video acquisition module configured to acquire an original video to be processed, the original video including first YUV images in a universal bandwidth compression format;
a first conversion module configured to convert the first YUV images from the universal bandwidth compression format to a linear format to obtain second YUV images in the linear format;
an image processing module configured to perform color enhancement processing on the second YUV images to obtain third YUV images in the linear format;
a second conversion module configured to convert the third YUV images from the linear format to the universal bandwidth compression format to obtain fourth YUV images in the universal bandwidth compression format; and
an image replacement module configured to use the fourth YUV images to replace the corresponding first YUV images in the original video to obtain a processed video.
In a third aspect, an embodiment of the present disclosure further provides an image processing method, including:
acquiring a target image to be processed;
selecting a target vector from the target image, the target vector containing a preset number of target pixels;
determining related pixels corresponding to each target pixel in the target vector, the related pixels being neighborhood pixels of the target pixel;
performing color attribute reconstruction on each target pixel according to the initial color attribute value of the target pixel in the target vector and the initial color attribute values of the corresponding related pixels to obtain a new color attribute value of each target pixel; and
after the new color attribute values of all pixels in the target image are obtained, replacing the corresponding initial color attribute values with the new color attribute values to obtain a processed image.
In a fourth aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
a target image acquisition module configured to acquire a target image to be processed;
a target vector selection module configured to select a target vector from the target image, the target vector containing a preset number of target pixels;
a related pixel determination module configured to determine related pixels corresponding to each target pixel in the target vector, the related pixels being neighborhood pixels of the target pixel;
a color attribute reconstruction module configured to perform color attribute reconstruction on each target pixel in the target vector according to the initial color attribute value of the target pixel and the initial color attribute values of the corresponding related pixels to obtain a new color attribute value of each target pixel; and
a color attribute replacement module configured to, after the new color attribute values of all pixels in the target image are obtained, replace the corresponding initial color attribute values with the new color attribute values to obtain a processed image.
In a fifth aspect, an embodiment of the present disclosure provides a computer device, including a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the above video processing method; or the processor, when executing the computer program, implements the steps of the above image processing method.
In a sixth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the above video processing method; or the computer program, when executed by a processor, implements the steps of the above image processing method.
With the video processing method and apparatus, storage medium, and computer device of the embodiments of the present disclosure, after an original video in the universal bandwidth compression format is acquired, the compression format of the images in the original video is converted so that the converted images can be processed directly, which facilitates color enhancement processing; the compression format is then converted again so that the resulting images are in the universal bandwidth compression format, which facilitates video playback. Because the images in the video undergo color enhancement processing, the visual effect of the video is improved.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic flowchart of an image processing method in an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of the arrangement of some pixels in a target image in an embodiment of the present disclosure;
Fig. 3 is a schematic flowchart, in an embodiment of the present disclosure, of performing color attribute reconstruction on each target pixel in a target vector according to the initial color attribute value of the target pixel and the initial color attribute values of the corresponding related pixels to obtain a new color attribute value of each target pixel;
Fig. 4 is a schematic structural diagram of an image processing apparatus in an embodiment of the present disclosure;
Fig. 5 is a schematic flowchart of a video processing method in an embodiment of the present disclosure;
Fig. 6 is a schematic flowchart, in an embodiment of the present disclosure, of performing color enhancement processing on a second YUV image to obtain a third YUV image in a linear format;
Fig. 7 is a schematic flowchart of a method involving the buffer spaces in an embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of a video processing apparatus in an embodiment of the present disclosure;
Fig. 9 is an internal structure diagram of a computer device in an embodiment of the present disclosure.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the embodiments of the present disclosure and are not intended to limit them.
An embodiment of the present disclosure proposes an image processing method mainly used to compute the Laplacian operator of an image. In the related art, the Laplacian operator is usually computed on a central processing unit (CPU, Central Processing Unit) or a graphics processing unit (GPU, Graphics Processing Unit). However, a CPU can only process a single target object (for example, a pixel) at a time and is therefore slow, while a GPU, although able to process multiple target objects at the same time, consumes a lot of power; neither implementation suits mobile terminals such as mobile phones. The image processing method in the embodiments of the present disclosure computes the Laplacian operator on a computer digital signal processor (CDSP, Computer Digital Signal Processor), which is fast and consumes little power, and can therefore be used for image or video processing on mobile terminals such as mobile phones.
In one embodiment, as shown in Fig. 1, an image processing method is provided. The image processing method can be applied to a CDSP and includes the following steps:
Step S110: acquire a target image to be processed.
The target image acquired by the CDSP may be a standalone image or a frame of a video; that is, the image processing method of the present application may process a single image or a video, which is not limited here.
Step S120: select a target vector from the target image.
The target vector contains a preset number of target pixels, and the CDSP can select and process multiple target vectors (target pixels) at the same time. In some optional embodiments, the target vector is the target pixels, and the target pixels are the pixels to be processed. Specifically, the CDSP selects the pixels to be processed by selecting a target vector, which consists of a preset number of consecutive pixels; the preset number can be determined according to the working characteristics of the CDSP itself. After the target vector is selected, every pixel in the target vector can serve as a target pixel.
Step S130: determine the related pixels corresponding to each target pixel in the target vector.
The related pixels are neighborhood pixels of the target pixel; the neighborhood pixels can be understood as the other pixels in the 3x3 grid centered on the target pixel. Illustratively, the related pixels may be the 4-neighborhood pixels or the 8-neighborhood pixels of the target pixel, where the 4-neighborhood pixels are the neighborhood pixels located above, below, to the left of, and to the right of the center pixel, and the 8-neighborhood pixels are the neighborhood pixels located in the eight positions above, below, left, right, upper-left, upper-right, lower-left, and lower-right of the center pixel.
Fig. 2 is a schematic diagram of the arrangement of some pixels in the target image. Taking a preset number of 3 as an example, the CDSP may select the vector composed of pixels D4, D5, and D6 as the target vector. When determining the related pixels, taking the target pixel D4 in the target vector as an example, the 4-neighborhood pixels of D4, namely C4, D3, D5, and E4, may be taken as the related pixels of D4; alternatively, the 8-neighborhood pixels of D4, namely C3, C4, C5, D3, D5, E3, E4, and E5, may be taken as the related pixels of D4. It can be understood that a preset number of 3 is only an example; in actual processing, the number can be set according to the actual situation.
Step S140: perform color attribute reconstruction on each target pixel according to the initial color attribute value of the target pixel in the target vector and the initial color attribute values of the corresponding related pixels, to obtain a new color attribute value of each target pixel.
A color attribute value is the attribute value of a pixel in a given color coding format; the color coding format may be RGB (R denotes the red channel, G the green channel, B the blue channel), HSV (H denotes hue, S saturation, V lightness/value), YUV (Y denotes luminance, U and V chrominance), and so on. After determining the target pixels and the corresponding related pixels, the CDSP computes the Laplacian operator from the initial color attribute values of the pixels and obtains the new color attribute value of each target pixel from the computation result.
Step S150: after the new color attribute values of all pixels in the target image are obtained, replace the corresponding initial color attribute values with the new color attribute values to obtain a processed image.
Through the processing flow of steps S120 to S140, the CDSP computes the Laplacian operator for all pixels in the target image, obtains the new color attribute values of all pixels from the computation results, and replaces the color attribute values of all the pixels to obtain the processed image.
It should be noted that after obtaining the new color attribute values of some pixels, the CDSP does not immediately replace the color attribute values of those pixels; instead, it continues to select target vectors containing target pixels from the target image and processes them according to steps S130 to S150 until all pixels have been replaced with new color attribute values. The reason is that the CDSP processes multiple target pixels in parallel at the same time. Taking the target pixels in Fig. 2 as an example, while target pixel D4 is being processed, D5 and D6 are also being processed, and processing D4 requires the color attribute value of D5; therefore, it is uniformly stipulated that the Laplacian operator is computed from the initial color attribute values of all pixels.
With the image processing method provided in this embodiment, after an image is acquired, color attribute reconstruction is performed on a preset number of target pixels in the image. The processing is based on the initial color attribute values of each target pixel and of the related pixels close to it, and the new color attribute value obtained by the reconstruction is then used as the color attribute value of the target pixel. Because the processing combines the color attribute values of the related pixels around each target pixel, it helps improve the contrast of the image. In addition, the processing method of the embodiments of the present disclosure is implemented on a computer digital signal processor and can process a preset number of target pixels at the same time, so the method of the present application is fast and consumes little power.
In one embodiment, selecting a target vector from the target image includes: taking the first pixel in the target image as the starting pixel and selecting a preset number of pixels as target pixels along the pixel arrangement direction to obtain the target vector.
In some optional embodiments, the pixel arrangement direction may be the horizontal direction of the image. Illustratively, taking Fig. 2 as an example, a preset number of pixels starting from the first pixel A1 in the target image may be taken as the target pixels and form the target vector. For example, when the preset number is 3, target pixels A1, A2, and A3 may be selected to form the target vector. The selection of the target vector may specifically be implemented through the HVX_Vector instruction.
In one embodiment, selecting a target vector from the target image includes: after the processing of the current target vector is completed, using aligned point selection, taking the pixel after the last pixel in the current target vector as the starting pixel, and selecting a preset number of pixels as target pixels along the pixel arrangement direction to obtain the next target vector to be processed.
Specifically, taking Fig. 2 as an example, the current target vector consists of target pixels D4, D5, and D6. When selecting the target vector to be processed next, aligned point selection is used: D7, D8, and D9 are selected as the new target pixels and form the next target vector to be processed.
In one embodiment, when the related pixels are the 4-neighborhood pixels of the target pixels, step S130 of determining the related pixels corresponding to each target pixel in the target vector may include steps 132A to 136A.
Step 132A: according to the pixel positions of the target vector, using unaligned point selection, take the vector obtained by shifting the target vector forward by one pixel as the first related vector, and the vector obtained by shifting the target vector backward by one pixel as the second related vector;
Step 134A: according to the pixel positions of the target vector, determine the vector obtained by shifting the target vector forward by a first number of pixels as the third related vector, and the vector obtained by shifting the target vector backward by the first number of pixels as the fourth related vector, the first number being the pixel width of the target image;
Step 136A: from the first, second, third, and fourth related vectors, select the 4-neighborhood pixels of each target pixel as the related pixels corresponding to that target pixel.
In this embodiment, because the CDSP can select and process multiple pixels at the same time, individual pixels are not selected directly when determining the related pixels of the target pixels. The CDSP first determines the related vectors of the target vector by unaligned point selection, and then selects the related pixels corresponding to the target pixels from the related vectors. Optionally, the related vectors may be buffered in a buffer space and read from it when the related pixels need to be selected.
Specifically, taking Fig. 2 as an example, if the target vector includes target pixels D4, D5, and D6, then the first related vector includes pixels D3, D4, and D5; the second related vector includes pixels D5, D6, and D7; the third related vector includes pixels C4, C5, and C6; and the fourth related vector includes pixels E4, E5, and E6. The forward/backward-shifted related vectors can be obtained through the vmemu instruction, and the up/down-shifted related vectors through the stride instruction. When selecting the related pixels of target pixel D4, pixel D3 in the first related vector, pixel D5 in the second related vector, pixel C4 in the third related vector, and pixel E4 in the fourth related vector are determined as the related pixels of target pixel D4.
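The shifted "related vectors" can be illustrated with plain index arithmetic on a row-major flattened image. This is only a NumPy stand-in for interior pixels, not the actual HVX vmemu/stride hardware loads, and the function name and argument layout are illustrative.

```python
import numpy as np

def related_vectors_4(flat, start, count, width):
    """Related vectors for a target vector of `count` consecutive pixels
    beginning at index `start` in a row-major flattened image of pixel
    width `width` (interior pixels only, no boundary handling).
    """
    target = flat[start:start + count]
    first = flat[start - 1:start - 1 + count]           # shifted back one pixel
    second = flat[start + 1:start + 1 + count]          # shifted forward one pixel
    third = flat[start - width:start - width + count]   # row above (shift by width)
    fourth = flat[start + width:start + width + count]  # row below (shift by width)
    return target, first, second, third, fourth
```

For the Fig. 2 example (target D4, D5, D6), the four returned vectors line up with (D3, D4, D5), (D5, D6, D7), (C4, C5, C6), and (E4, E5, E6), so each target pixel's 4-neighborhood is available at its own lane position.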
In one embodiment, when the related pixels are the 8-neighborhood pixels of the target pixels, step S130 of determining the related pixels corresponding to each target pixel in the target vector includes steps 132B to 136B.
Step 132B: according to the pixel positions of the target vector, using unaligned point selection, take the vector obtained by shifting the target vector forward by one pixel as the fifth related vector, and the vector obtained by shifting the target vector backward by one pixel as the sixth related vector;
Step 134B: according to the pixel positions of the target vector, determine the vector obtained by shifting the target vector forward by a second number of pixels as the seventh related vector, the vector obtained by shifting the target vector forward by a third number of pixels as the eighth related vector, the vector obtained by shifting the target vector backward by the second number of pixels as the ninth related vector, and the vector obtained by shifting the target vector backward by the third number of pixels as the tenth related vector, the second number being the pixel width of the target image plus one and the third number being the pixel width of the target image minus one;
Step 136B: from the fifth, sixth, seventh, eighth, ninth, and tenth related vectors, select the 8-neighborhood pixels of each target pixel as the related pixels corresponding to that target pixel.
Illustratively, taking Fig. 2 as an example, if the target vector includes target pixels D4, D5, and D6, then the fifth related vector includes pixels D3, D4, and D5; the sixth related vector includes pixels D5, D6, and D7; the seventh related vector includes pixels C3, C4, and C5; the eighth related vector includes pixels C5, C6, and C7; the ninth related vector includes pixels E3, E4, and E5; and the tenth related vector includes pixels E5, E6, and E7. When selecting the related pixels of target pixel D4, pixel D3 in the fifth related vector, pixel D5 in the sixth related vector, pixels C3 and C4 in the seventh related vector, pixel C5 in the eighth related vector, pixels E3 and E4 in the ninth related vector, and pixel E5 in the tenth related vector are determined as the related pixels of target pixel D4.
In one embodiment, when determining the related pixels of a target pixel, if the target pixel is a boundary pixel of the target image, the target pixel itself is used to fill the related pixels that are empty.
Illustratively, taking Fig. 2 as an example, when the target pixel is A2, the pixel above A2 among its 4-neighborhood pixels is empty, so A2 is used to fill the position above A2 before the related pixels of target pixel A2 are selected.
In one embodiment, as shown in Fig. 3, step S140 of performing color attribute reconstruction on each target pixel according to the initial color attribute value of the target pixel in the target vector and the initial color attribute values of the corresponding related pixels to obtain a new color attribute value of each target pixel includes steps S141 to S149.
Step S141: buffer the initial color attribute values of all pixels in the target image in a first buffer space;
Step S143: read the initial color attribute value of the target pixel and the initial color attribute values of the corresponding related pixels from the first buffer space;
Step S145: calculate the sum of the initial color attribute values of the related pixels to obtain a first calculation result;
Step S146: calculate the product of the initial color attribute value of the target pixel and the number of related pixels to obtain a second calculation result;
Step S149: perform a difference operation on the first calculation result and the second calculation result, and obtain the new color attribute value of the target pixel according to the difference result.
Specifically, taking the case where the related pixels are the 4-neighborhood pixels as an example, during color attribute reconstruction the Laplacian operator can be computed by the following formula (1) (reconstructed here from the neighbor-sum and center-product steps above):
∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4·f(x, y)    (1)
where ∇²f(x, y) is the Laplacian computed for the pixel at coordinates (x, y), x is the abscissa of the pixel, y is the ordinate of the pixel, and f(x, y) is the initial color attribute value of the pixel at coordinates (x, y).
After the Laplacian operator is obtained, the new color attribute value can further be obtained according to the Laplacian operator and the calculation formula corresponding to each color attribute value.
For example, taking lightness (V) as an example, the new lightness can be obtained through the following formula (2):
Figure PCTCN2020129077-appb-000003
where V' is the new lightness and Y is the difference between the initial lightness and the Laplacian operator.
Taking saturation (S) as an example, the new saturation can be obtained through the following formula (3):
Figure PCTCN2020129077-appb-000004
where S' is the new saturation, H is the hue in HSV, and S is the initial saturation.
In one embodiment, after the new color attribute value of each target pixel is obtained, the method further includes: buffering the new color attribute values of the target pixels corresponding to the initial color attribute values in a second buffer space, in the order in which the initial color attribute values are arranged in the first buffer space.
In this embodiment, buffering the new color attribute values in the second buffer space preserves the computation results without interfering with the parallel processing of the target pixels (which needs the initial color attribute values for computation). In addition, buffering in order guarantees the correspondence between the new and initial color attribute values during replacement and avoids confusion in the replacement.
In one embodiment, after the new color attribute values of all pixels in the target image are obtained, replacing the corresponding initial color attribute values with the new color attribute values to obtain a processed image includes: reading the new color attribute values of the target pixels from the second buffer space in order and replacing the initial color attribute values of the target pixels; after the initial color attribute values of all target pixels have been replaced, the processed image is obtained.
In one embodiment, as shown in Fig. 4, an image processing apparatus is provided, which includes the following structures:
a target image acquisition module 110 configured to acquire a target image to be processed;
a target vector selection module 120 configured to select a target vector from the target image, the target vector containing a preset number of target pixels;
a related pixel determination module 130 configured to determine related pixels corresponding to each target pixel in the target vector, the related pixels being neighborhood pixels of the target pixel;
a color attribute reconstruction module 140 configured to perform color attribute reconstruction on each target pixel in the target vector according to the initial color attribute value of the target pixel and the initial color attribute values of the corresponding related pixels to obtain a new color attribute value of each target pixel;
a color attribute replacement module 150 configured to, after the new color attribute values of all pixels in the target image are obtained, replace the corresponding initial color attribute values with the new color attribute values to obtain a processed image.
In some optional embodiments, the target vector selection module 120 is configured to take the first pixel in the target image as the starting pixel and select a preset number of pixels as target pixels along the pixel arrangement direction to obtain the target vector.
In some optional embodiments, the target vector selection module 120 is configured to, after the processing of the current target vector is completed, use aligned point selection, take the pixel after the last pixel in the current target vector as the starting pixel, and select a preset number of pixels as target pixels along the pixel arrangement direction to obtain the next target vector to be processed.
In some optional embodiments, the color attribute reconstruction module 140 is configured to buffer the initial color attribute values of all pixels in the target image in a first buffer space; read the initial color attribute value of the target pixel and the initial color attribute values of the corresponding related pixels from the first buffer space; calculate the sum of the initial color attribute values of the related pixels to obtain a first calculation result; calculate the product of the initial color attribute value of the target pixel and the number of related pixels to obtain a second calculation result; and perform a difference operation on the first calculation result and the second calculation result, the new color attribute value of the target pixel being obtained according to the difference result.
In some optional embodiments, the color attribute reconstruction module 140 is further configured to buffer the new color attribute values of the target pixels corresponding to the initial color attribute values in a second buffer space, in the order in which the initial color attribute values are arranged in the first buffer space.
In some optional embodiments, the color attribute replacement module 150 is configured to read the new color attribute values of the target pixels from the second buffer space in order and replace the initial color attribute values of the target pixels; after the initial color attribute values of all target pixels have been replaced, the processed image is obtained.
For specific limitations of the image processing apparatus, reference may be made to the limitations of the image processing method above, which are not repeated here. Each module in the above image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, the processor of a computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
As shown in Fig. 5, an embodiment of the present disclosure further provides a video processing method, which can be applied to mobile terminals such as mobile phones and includes the following steps:
Step S210: acquire an original video to be processed.
The original video includes first YUV images in the universal bandwidth compression (UBWC) format; that is, each original frame in the original video can be regarded as a first YUV image. YUV is a color coding format commonly used in video processing components. YUV is a family of true-color color spaces; terms such as Y'UV, YUV, YCbCr, and YPbPr can all be called YUV and overlap with one another. "Y" denotes luminance (Luminance or Luma), that is, the grayscale value, while "U" and "V" denote chrominance (Chrominance or Chroma), which describes the color and saturation of the image and specifies the color of a pixel. When encoding photos or videos, YUV allows the bandwidth of the chrominance to be reduced, taking human perception into account.
Step S220: convert the first YUV images from the universal bandwidth compression format to the linear format to obtain second YUV images in the linear format.
On platforms after the Qualcomm Snapdragon 845, the video data consists of YUV images in the universal bandwidth compression format, which saves bandwidth. However, YUV images in the universal bandwidth compression format cannot be processed directly by image algorithms. Therefore, each first YUV image in the original video first undergoes compression format conversion, that is, from the universal bandwidth compression format to the linear (Linear) format, to obtain a second YUV image in the linear format. The linear format can be understood as an ordinary linear encoding format.
It can be understood that this step is performed for every frame in the original video; that is, the first YUV image of every frame undergoes the compression format conversion to obtain the corresponding second YUV image.
Step S230: perform color enhancement processing on the second YUV images to obtain third YUV images in the linear format.
After the compression format conversion, image processing that does not change the compression format can be performed directly on the second YUV image in the linear format; for example, color enhancement processing can be performed to improve the visual effect of the image, yielding the processed third YUV image. In addition, since the color enhancement processing does not change the compression format of the original image, the third YUV image is still in the linear format.
Step S240: convert the third YUV images from the linear format to the universal bandwidth compression format to obtain fourth YUV images in the universal bandwidth compression format.
After the processed third YUV image in the linear format is obtained, in order to keep the platform's video format consistent, the third YUV image undergoes compression format conversion again, that is, from the linear format to the bandwidth compression format, to obtain a fourth YUV image in the bandwidth compression format.
Step S250: use the fourth YUV images to replace the corresponding first YUV images in the original video to obtain a processed video.
After the first YUV images in the original video are processed according to the flow of steps S220 to S240 to obtain the corresponding fourth YUV images, the obtained fourth YUV images replace the corresponding first YUV images. After all the first YUV images have been replaced, the processed video, which corresponds to the original video but has a better visual effect, is obtained.
In addition, the compression format conversion of an image, that is, converting the image from the universal bandwidth compression format to the linear format or from the linear format to the universal bandwidth compression format, can be implemented by calling a relevant application programming interface (API, Application Programming Interface), for example Ubwcdma, which is not limited here.
With the video processing method provided in this embodiment, after an original video in the universal bandwidth compression format is acquired, the compression format of the images in the original video is converted so that the converted images can be processed directly, which facilitates color enhancement processing; the compression format is then converted again so that the resulting images are in the universal bandwidth compression format, which facilitates video playback. Because the images in the video undergo color enhancement processing, the visual effect of the video is improved.
In one embodiment, converting the first YUV images from the universal bandwidth compression format to the linear format includes: converting the first YUV images one by one from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video.
Specifically, the compression format conversion of the first YUV images in the original video may be performed frame by frame, that is, in the order of the first frame, the second frame, ..., the i-th frame, the (i+1)-th frame, ..., and the last frame. This processing order is consistent with the display order of the frames during video playback, which facilitates direct playback after the subsequent processing.
In one embodiment, converting the first YUV images from the universal bandwidth compression format to the linear format includes: simultaneously converting multiple frames of first YUV images from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video.
Specifically, the compression format conversion of the first YUV images in the original video may be performed on multiple frames at the same time. For example, in the first pass, the images of the first frame, the second frame, ..., and the i-th frame are processed at the same time; in the second pass, the images of the (i+1)-th frame, the (i+2)-th frame, ..., and the 2i-th frame are processed at the same time, and so on. By processing multiple frames at the same time, the processing can be carried out in parallel, which effectively improves the efficiency of image processing.
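The multi-frame variant can be sketched as below. This is an illustration only: `ubwc_to_linear` is a hypothetical placeholder for the platform's conversion call, and a thread pool stands in for whatever parallel path the platform actually provides; the batching preserves the frame order described above.

```python
from concurrent.futures import ThreadPoolExecutor

def convert_frames_parallel(first_yuv_frames, ubwc_to_linear, batch=4):
    """Converts `batch` frames at a time while preserving frame order.

    `ubwc_to_linear` is a placeholder for the real format-conversion call;
    each pass handles one batch of consecutive frames in parallel.
    """
    second_yuv = []
    with ThreadPoolExecutor(max_workers=batch) as pool:
        for i in range(0, len(first_yuv_frames), batch):
            chunk = first_yuv_frames[i:i + batch]
            second_yuv.extend(pool.map(ubwc_to_linear, chunk))  # one parallel pass
    return second_yuv
```

Because `pool.map` returns results in input order and the batches are taken sequentially, the output list matches the display order of the frames, which is what makes the later replacement step straightforward.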
In one embodiment, after the second YUV images in the linear format are obtained, the method further includes: buffering the second YUV images in a first buffer space.
In this embodiment, after the compression format conversion is performed on the first YUV images frame by frame, or on multiple frames at the same time, to obtain second YUV images in the linear format, the second YUV images are buffered in the first buffer space. In subsequent processing, the second YUV image to be processed can be read from the first buffer space and processed. Buffering the images provides temporary storage; in addition, compared with saving to memory, the cache rate is faster, which can improve the efficiency of image processing.
In one embodiment, as shown in Fig. 6, step S230 of performing color enhancement processing on the second YUV image to obtain a third YUV image in the linear format includes steps S231 to S239.
Step S231: read the second YUV image from the first buffer space, convert the second YUV image from the YUV color coding format to the RGB color coding format to obtain an RGB image, and buffer the RGB image in a second buffer space;
Step S233: read the RGB image from the second buffer space, convert the RGB image from the RGB color coding format to the HSV color coding format to obtain an HSV image, and buffer the HSV image in a third buffer space;
Step S235: read the HSV image from the third buffer space, perform color enhancement processing on the HSV image to obtain an enhanced HSV image, and replace the HSV image in the third buffer space with the enhanced HSV image;
Step S237: read the enhanced HSV image from the third buffer space, convert the enhanced HSV image from the HSV color coding format to the RGB color coding format to obtain an enhanced RGB image, and replace the RGB image in the second buffer space with the enhanced RGB image;
Step S239: read the enhanced RGB image from the second buffer space, convert the enhanced RGB image from the RGB color coding format to the YUV color coding format to obtain the third YUV image, and replace the second YUV image in the first buffer space with the third YUV image.
Illustratively, Fig. 7 is a schematic flowchart of the method involving the buffer spaces. This embodiment mainly uses three different buffer spaces: the first buffer space is mainly used to buffer the second YUV image and the third YUV image, the second buffer space is mainly used to buffer the RGB image and the enhanced RGB image, and the third buffer space is mainly used to buffer the HSV image and the enhanced HSV image. In addition, the color coding format conversions of the images can be performed using conversion methods in the prior art, which are not limited here.
It can be understood that in actual applications, other numbers of buffer spaces can also be used to buffer the images; for example, six different buffer spaces can be used to respectively buffer the six different images above, so that the image data originally saved in a buffer space does not need to be replaced.
In this embodiment, when performing image processing, buffering the images provides temporary storage; in addition, compared with saving to memory, the cache rate is faster, which can improve the efficiency of image processing.
In one embodiment, performing color enhancement processing on the HSV image in step S235 includes: using a color adjustment curve formula corresponding to the HSV image to adjust the saturation and lightness of the HSV image.
Specifically, a color adjustment curve formula is a formula applied to adjust the color (or color cast) of an image; the specific form of the formula differs for different processing procedures and is not specifically limited here. Adjusting the saturation and lightness of the HSV image with the color adjustment curve formula corresponding to the HSV image achieves color enhancement and thereby improves the visual effect.
In one embodiment, performing color enhancement processing on the HSV image in step S235 includes: determining a new color attribute value of each target pixel according to the initial color attribute value of each target pixel in the HSV image and the initial color attribute values of the neighborhood pixels corresponding to each target pixel, the color attribute values including saturation and lightness; and replacing the initial color attribute value of the target pixel with the new color attribute value.
Determining the new color attribute value of a target pixel according to the initial color attribute value of each target pixel in the HSV image and the initial color attribute values of the neighborhood pixels corresponding to each target pixel includes: buffering the initial color attribute values of all pixels in the HSV image in a fourth buffer space; reading the initial color attribute value of the target pixel and the initial color attribute values of the corresponding neighborhood pixels from the fourth buffer space; calculating the sum of the initial color attribute values of the neighborhood pixels to obtain a first calculation result; calculating the product of the initial color attribute value of the target pixel and the number of neighborhood pixels to obtain a second calculation result; and performing a difference operation on the first calculation result and the second calculation result, the new color attribute value of the target pixel being obtained according to the difference result.
In this embodiment, the color enhancement processing performed on the HSV image uses the image processing method described in the foregoing embodiments; the processing in this embodiment can be regarded as the process of computing the Laplacian operator of the pixels in an image described in the foregoing embodiments. Therefore, for the limitations of the method of this embodiment, reference may be made to the limitations of the image processing method in the previous embodiments, which are not repeated here.
As shown in Fig. 8, an embodiment of the present disclosure further provides a video processing apparatus, including:
a video acquisition module 210 configured to acquire an original video to be processed, the original video including first YUV images in a universal bandwidth compression format;
a first conversion module 220 configured to convert the first YUV images from the universal bandwidth compression format to a linear format to obtain second YUV images in the linear format;
an image processing module 230 configured to perform color enhancement processing on the second YUV images to obtain third YUV images in the linear format;
a second conversion module 240 configured to convert the third YUV images from the linear format to the universal bandwidth compression format to obtain fourth YUV images in the universal bandwidth compression format;
an image replacement module 250 configured to use the fourth YUV images to replace the corresponding first YUV images in the original video to obtain a processed video.
In some optional embodiments of the present disclosure, the first conversion module 220 is configured to convert the first YUV images from the universal bandwidth compression format to the linear format in one of the following ways: converting the first YUV images one by one from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video;
or simultaneously converting multiple frames of first YUV images from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video.
In some optional embodiments of the present disclosure, the apparatus further includes a buffer module configured to buffer the second YUV images in a first buffer space.
In some optional embodiments of the present disclosure, the image processing module 230 is configured to read the second YUV image from the first buffer space, convert the second YUV image from the YUV color coding format to the RGB color coding format to obtain an RGB image, and buffer the RGB image in a second buffer space; read the RGB image from the second buffer space, convert the RGB image from the RGB color coding format to the HSV color coding format to obtain an HSV image, and buffer the HSV image in a third buffer space; read the HSV image from the third buffer space, perform color enhancement processing on the HSV image to obtain an enhanced HSV image, and replace the HSV image in the third buffer space with the enhanced HSV image; read the enhanced HSV image from the third buffer space, convert the enhanced HSV image from the HSV color coding format to the RGB color coding format to obtain an enhanced RGB image, and replace the RGB image in the second buffer space with the enhanced RGB image; and read the enhanced RGB image from the second buffer space, convert the enhanced RGB image from the RGB color coding format to the YUV color coding format to obtain the third YUV image, and replace the second YUV image in the first buffer space with the third YUV image.
In some optional embodiments of the present disclosure, the image processing module 230 is configured to use a color adjustment curve formula corresponding to the HSV image to adjust the saturation and lightness of the HSV image.
In some optional embodiments of the present disclosure, the image processing module 230 is configured to determine a new color attribute value of each target pixel according to the initial color attribute value of each target pixel in the HSV image and the initial color attribute values of the neighborhood pixels corresponding to each target pixel, the color attribute values including saturation and lightness, and to replace the initial color attribute value of the target pixel with the new color attribute value.
In some optional embodiments of the present disclosure, the image processing module 230 is configured to buffer the initial color attribute values of all pixels in the HSV image in a fourth buffer space; read the initial color attribute value of the target pixel and the initial color attribute values of the corresponding neighborhood pixels from the fourth buffer space; calculate the sum of the initial color attribute values of the neighborhood pixels to obtain a first calculation result; calculate the product of the initial color attribute value of the target pixel and the number of neighborhood pixels to obtain a second calculation result; and perform a difference operation on the first calculation result and the second calculation result, the new color attribute value of the target pixel being obtained according to the difference result.
For specific limitations of the video processing apparatus, reference may be made to the limitations of the video processing method above, which are not repeated here. Each module in the above video processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, the processor of a computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
It should be understood that, where reasonable, although the steps in the flowcharts involved in the foregoing embodiments are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
An embodiment of the present disclosure further provides a computer device, including a memory and a processor, the memory storing a computer program; when executing the computer program, the processor implements the steps of the image processing method or the video processing method in the foregoing embodiments.
Fig. 9 shows an internal structure diagram of a computer device in an embodiment. The computer device may specifically be a terminal (or a server). As shown in Fig. 9, the computer device includes a processor, a memory, a network interface, an input apparatus, a camera, a sound collection apparatus, a speaker, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the image processing method or the video processing method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to execute the image processing method or the video processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input apparatus of the computer device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art can understand that the structure shown in Fig. 9 is only a block diagram of part of the structure related to the embodiments of the present disclosure and does not constitute a limitation on the computer device to which the embodiments of the present disclosure are applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the image processing method or the video processing method in the foregoing embodiments are implemented.
Those of ordinary skill in the art can understand that all or part of the flows in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the flows of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided in the present disclosure may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM, Read Only Memory), programmable ROM (PROM, Programmable Read-Only Memory), erasable programmable ROM (EPROM, Erasable Programmable Read-Only Memory), electrically erasable programmable ROM (EEPROM, Electrically Erasable Programmable Read-Only Memory), or flash memory. Volatile memory may include random access memory (RAM, Random Access Memory) or external cache memory. As an illustration and not a limitation, RAM is available in many forms, such as static RAM (SRAM, Static Random Access Memory), dynamic RAM (DRAM, Dynamic Random Access Memory), synchronous DRAM (SDRAM, Synchronous Dynamic Random Access Memory), double data rate SDRAM (DDR SDRAM, Double Data Rate Synchronous Dynamic Random Access Memory), enhanced SDRAM (ESDRAM, Enhanced Synchronous Dynamic Random Access Memory), synchronous link (Synchlink) DRAM (SLDRAM, SyncLink Dynamic Random Access Memory), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
The methods disclosed in the several method embodiments provided in the present disclosure can be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several product embodiments provided in the present disclosure can be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or device embodiments provided in the present disclosure can be combined arbitrarily without conflict to obtain new method or device embodiments.
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, they should all be considered within the scope of this specification.
The above embodiments only express several implementations of the present disclosure, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the embodiments of the present disclosure. It should be noted that, for those of ordinary skill in the art, several modifications and improvements can be made without departing from the concept of the embodiments of the present disclosure, and these all belong to the protection scope of the embodiments of the present disclosure. Therefore, the protection scope of the embodiments of the present disclosure shall be subject to the appended claims.

Claims (28)

  1. A video processing method, comprising:
    acquiring an original video to be processed, the original video comprising first YUV images in a universal bandwidth compression format;
    converting the first YUV images from the universal bandwidth compression format to a linear format to obtain second YUV images in the linear format;
    performing color enhancement processing on the second YUV images to obtain third YUV images in the linear format;
    converting the third YUV images from the linear format to the universal bandwidth compression format to obtain fourth YUV images in the universal bandwidth compression format; and
    using the fourth YUV images to replace the corresponding first YUV images in the original video to obtain a processed video.
  2. The method according to claim 1, wherein converting the first YUV images from the universal bandwidth compression format to the linear format comprises one of the following:
    converting the first YUV images one by one from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video; and
    simultaneously converting multiple frames of first YUV images from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video.
  3. The method according to claim 1, wherein after the second YUV images in the linear format are obtained, the method further comprises:
    buffering the second YUV images in a first buffer space.
  4. The method according to claim 3, wherein performing color enhancement processing on the second YUV image to obtain a third YUV image in the linear format comprises:
    reading the second YUV image from the first buffer space, converting the second YUV image from a YUV color coding format to an RGB color coding format to obtain an RGB image, and buffering the RGB image in a second buffer space;
    reading the RGB image from the second buffer space, converting the RGB image from the RGB color coding format to an HSV color coding format to obtain an HSV image, and buffering the HSV image in a third buffer space;
    reading the HSV image from the third buffer space, performing color enhancement processing on the HSV image to obtain an enhanced HSV image, and replacing the HSV image in the third buffer space with the enhanced HSV image;
    reading the enhanced HSV image from the third buffer space, converting the enhanced HSV image from the HSV color coding format to the RGB color coding format to obtain an enhanced RGB image, and replacing the RGB image in the second buffer space with the enhanced RGB image; and
    reading the enhanced RGB image from the second buffer space, converting the enhanced RGB image from the RGB color coding format to the YUV color coding format to obtain the third YUV image, and replacing the second YUV image in the first buffer space with the third YUV image.
  5. The method according to claim 4, wherein performing color enhancement processing on the HSV image comprises:
    using a color adjustment curve formula corresponding to the HSV image to adjust the saturation and lightness of the HSV image.
  6. The method according to claim 4, wherein performing color enhancement processing on the HSV image comprises:
    determining a new color attribute value of each target pixel according to an initial color attribute value of each target pixel in the HSV image and initial color attribute values of neighborhood pixels corresponding to each target pixel, the color attribute values comprising saturation and lightness; and
    replacing the initial color attribute value of the target pixel with the new color attribute value.
  7. The method according to claim 6, wherein determining the new color attribute value of the target pixel according to the initial color attribute value of each target pixel in the HSV image and the initial color attribute values of the neighborhood pixels corresponding to each target pixel comprises:
    buffering the initial color attribute values of all pixels in the HSV image in a fourth buffer space;
    reading the initial color attribute value of the target pixel and the initial color attribute values of the corresponding neighborhood pixels from the fourth buffer space;
    calculating a sum of the initial color attribute values of the neighborhood pixels to obtain a first calculation result;
    calculating a product of the initial color attribute value of the target pixel and the number of neighborhood pixels to obtain a second calculation result; and
    performing a difference operation on the first calculation result and the second calculation result, and obtaining the new color attribute value of the target pixel according to the difference result.
  8. A video processing apparatus, comprising:
    a video acquisition module configured to acquire an original video to be processed, the original video comprising first YUV images in a universal bandwidth compression format;
    a first conversion module configured to convert the first YUV images from the universal bandwidth compression format to a linear format to obtain second YUV images in the linear format;
    an image processing module configured to perform color enhancement processing on the second YUV images to obtain third YUV images in the linear format;
    a second conversion module configured to convert the third YUV images from the linear format to the universal bandwidth compression format to obtain fourth YUV images in the universal bandwidth compression format; and
    an image replacement module configured to use the fourth YUV images to replace the corresponding first YUV images in the original video to obtain a processed video.
  9. The apparatus according to claim 8, wherein the first conversion module is configured to convert the first YUV images from the universal bandwidth compression format to the linear format in one of the following ways:
    converting the first YUV images one by one from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video; and
    simultaneously converting multiple frames of first YUV images from the universal bandwidth compression format to the linear format according to the image frame order of the first YUV images in the original video.
  10. 根据权利要求8所述的装置,其中,所述装置还包括缓存模块,配置为将所述第二YUV图像缓存至第一缓存空间。
  11. 根据权利要求10所述的装置,其中,所述图像处理模块,配置为从所述第一缓存空间中读取所述第二YUV图像,将所述第二YUV图像从YUV颜色编码格式转换为RGB颜色编码格式,获得RGB图像,将所述RGB图像缓存至第二缓存空间;从所述第二缓存空间中读取所述RGB图像,将所述RGB图像从RGB颜色编码格式转换为HSV颜色编码格式,获得HSV图像,将所述HSV图像缓存至第三缓存空间;从所述第三缓存空间中读取所述HSV图像,对所述HSV图像进行色彩增强处理,获得增强HSV图像,使用所述增强HSV图像替换所述第三缓存空间中的所述HSV图像;从所述第三缓存空间中读取所述增强HSV图像,将所述增强HSV图像从HSV颜色编码格式转换为RGB颜色编码格式,获得增强RGB图像,使用所述增强RGB图像替换所述第二缓存空间中的所述RGB图像;从所述第二缓存空间中读取所述增强RGB图像,将将所述增强RGB图像从RGB颜色编码格式转换为YUV颜色编码格式,获得所述第三YUV图像,使用所述第三YUV图像替换所述第一缓存空间中的所述第二YUV图像。
  12. 根据权利要求11所述的装置,其中,所述图像处理模块,配置为采用所述HSV图像对应的色彩调节曲线公式,对所述HSV图像的饱和度和明度进行调节处理。
  13. 根据权利要求11所述的装置,其中,所述图像处理模块,配置为根据所述HSV图像中各目标像素点的初始颜色属性值以及所述各目标像素 点对应的邻域像素点的初始颜色属性值确定所述目标像素点的新的颜色属性值,所述颜色属性值包括饱和度和明度;使用所述新的颜色属性值替换所述目标像素点的初始颜色属性值。
  14. 根据权利要求13所述的装置,其中,所述图像处理模块,配置为将所述HSV图像中所有像素点的初始颜色属性值缓存至第四缓存空间;从所述第四缓存空间中读取所述目标像素点的初始颜色属性值以及对应的邻域像素点的初始颜色属性值;计算所述邻域像素点的初始颜色属性值的总和,得到第一计算结果;计算所述目标像素点的初始颜色属性值与邻域像素点个数的乘积,得到第二计算结果;对所述第一计算结果以及所述第二计算结果进行求差运算,根据求差运算结果得到所述目标像素点的新的颜色属性值。
  15. An image processing method, comprising:
    acquiring a target image to be processed;
    selecting a target vector from the target image, the target vector comprising a preset number of target pixels;
    determining related pixels corresponding to each target pixel in the target vector, the related pixels being neighborhood pixels of the target pixel;
    performing color attribute reconstruction on each target pixel according to the initial color attribute value of the target pixel in the target vector and the initial color attribute values of the corresponding related pixels, to obtain a new color attribute value of each target pixel;
    after the new color attribute values of all pixels in the target image are obtained, replacing the corresponding initial color attribute values with the new color attribute values to obtain a processed image.
  16. The method according to claim 15, wherein selecting the target vector from the target image comprises:
    taking the first pixel in the target image as a starting pixel, and selecting the preset number of pixels as target pixels in the pixel arrangement direction to obtain the target vector.
  17. The method according to claim 15 or 16, wherein selecting the target vector from the target image comprises:
    after the processing of a current target vector is completed, taking the pixel following the last pixel in the current target vector as a starting pixel in an aligned point-taking manner, and selecting the preset number of pixels as target pixels in the pixel arrangement direction to obtain the next target vector to be processed.
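As an illustrative sketch only of the selection recited in claims 16 and 17, assuming a flat pixel sequence and that a shorter tail vector is permitted (neither of which the claims specify):

```python
def select_vectors(pixels, vec_len):
    """Partition a flat pixel sequence into consecutive target vectors of a
    preset length. Each new vector starts at the pixel following the last
    pixel of the previous one (aligned point-taking); any shorter tail is
    kept as the final vector."""
    return [pixels[i:i + vec_len] for i in range(0, len(pixels), vec_len)]

print(select_vectors(list(range(10)), 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Selecting fixed-length, aligned vectors like this is what makes the per-pixel reconstruction amenable to SIMD-style processing, since each vector can be loaded and computed as one unit.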
  18. The method according to claim 15, wherein performing the color attribute reconstruction on each target pixel according to the initial color attribute value of the target pixel in the target vector and the initial color attribute values of the corresponding related pixels, to obtain the new color attribute value of each target pixel, comprises:
    buffering the initial color attribute values of all pixels in the target image into a first buffer space;
    reading the initial color attribute value of the target pixel and the initial color attribute values of the corresponding related pixels from the first buffer space;
    calculating the sum of the initial color attribute values of the related pixels to obtain a first calculation result;
    calculating the product of the initial color attribute value of the target pixel and the number of the related pixels to obtain a second calculation result;
    performing a difference operation on the first calculation result and the second calculation result, and obtaining the new color attribute value of the target pixel according to the result of the difference operation.
  19. The method according to claim 18, wherein the method further comprises:
    buffering the new color attribute values of the target pixels corresponding to the initial color attribute values into a second buffer space in the arrangement order of the initial color attribute values in the first buffer space.
  20. The method according to claim 19, wherein, after the new color attribute values of all pixels in the target image are obtained, replacing the corresponding initial color attribute values with the new color attribute values to obtain the processed image comprises:
    sequentially reading the new color attribute values of the target pixels from the second buffer space and replacing the initial color attribute values of the target pixels;
    obtaining the processed image after the initial color attribute values of all target pixels have been replaced.
  21. An image processing apparatus, comprising:
    a target image acquisition module, configured to acquire a target image to be processed;
    a target vector selection module, configured to select a target vector from the target image, the target vector comprising a preset number of target pixels;
    a related pixel determination module, configured to determine related pixels corresponding to each target pixel in the target vector, the related pixels being neighborhood pixels of the target pixel;
    a color attribute reconstruction module, configured to perform color attribute reconstruction on each target pixel in the target vector according to the initial color attribute value of the target pixel and the initial color attribute values of the corresponding related pixels, to obtain a new color attribute value of each target pixel;
    a color attribute replacement module, configured to, after the new color attribute values of all pixels in the target image are obtained, replace the corresponding initial color attribute values with the new color attribute values to obtain a processed image.
  22. The apparatus according to claim 21, wherein the target vector selection module is configured to take the first pixel in the target image as a starting pixel, and select the preset number of pixels as target pixels in the pixel arrangement direction to obtain the target vector.
  23. The apparatus according to claim 21 or 22, wherein the target vector selection module is configured to, after the processing of a current target vector is completed, take the pixel following the last pixel in the current target vector as a starting pixel in an aligned point-taking manner, and select the preset number of pixels as target pixels in the pixel arrangement direction to obtain the next target vector to be processed.
  24. The apparatus according to claim 21, wherein the color attribute reconstruction module is configured to buffer the initial color attribute values of all pixels in the target image into a first buffer space; read the initial color attribute value of the target pixel and the initial color attribute values of the corresponding related pixels from the first buffer space; calculate the sum of the initial color attribute values of the related pixels to obtain a first calculation result; calculate the product of the initial color attribute value of the target pixel and the number of the related pixels to obtain a second calculation result; and perform a difference operation on the first calculation result and the second calculation result, and obtain the new color attribute value of the target pixel according to the result of the difference operation.
  25. The apparatus according to claim 24, wherein the color attribute reconstruction module is further configured to buffer the new color attribute values of the target pixels corresponding to the initial color attribute values into a second buffer space in the arrangement order of the initial color attribute values in the first buffer space.
  26. The apparatus according to claim 25, wherein the color attribute replacement module is configured to sequentially read the new color attribute values of the target pixels from the second buffer space and replace the initial color attribute values of the target pixels; and obtain the processed image after the initial color attribute values of all target pixels have been replaced.
  27. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7; or the processor, when executing the computer program, implements the steps of the method according to any one of claims 15 to 20.
  28. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7; or the computer program, when executed by a processor, implements the steps of the method according to any one of claims 15 to 20.
PCT/CN2020/129077 2019-11-27 2020-11-16 Video processing method and apparatus, storage medium and computer device WO2021104079A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911181080.4 2019-11-27
CN201911181080.4A CN112866802B (zh) 2019-11-27 2019-11-27 Video processing method and apparatus, storage medium and computer device

Publications (1)

Publication Number Publication Date
WO2021104079A1 true WO2021104079A1 (zh) 2021-06-03

Family

ID=75985530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129077 WO2021104079A1 (zh) 2019-11-27 2020-11-16 Video processing method and apparatus, storage medium and computer device

Country Status (2)

Country Link
CN (1) CN112866802B (zh)
WO (1) WO2021104079A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095509B (zh) * 2021-11-05 2024-04-12 荣耀终端有限公司 Method and apparatus for generating video frames, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978945A (zh) * 2014-04-14 2015-10-14 深圳Tcl新技术有限公司 Image saturation enhancement method and apparatus
US20170372452A1 (en) * 2016-06-22 2017-12-28 Qualcomm Incorporated Image rotation method and apparatus
CN108053383A (zh) * 2017-12-28 2018-05-18 努比亚技术有限公司 Noise reduction method, device, and computer-readable storage medium
CN108282643A (zh) * 2018-02-12 2018-07-13 武汉斗鱼网络科技有限公司 Image processing method, image processing apparatus, and electronic device
CN109739609A (zh) * 2019-01-03 2019-05-10 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer-readable storage medium, and computer device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8139632B2 (en) * 2007-03-23 2012-03-20 Advanced Micro Devices, Inc. Video decoder with adaptive outputs
CN101951523B (zh) * 2010-09-21 2012-10-24 北京工业大学 Adaptive color image processing method and system
CN102223547B (zh) * 2011-06-16 2014-03-12 王洪剑 Image color enhancement apparatus and method
CN104702909B (zh) * 2014-04-17 2018-11-06 杭州海康威视数字技术股份有限公司 Video data processing method and apparatus
US10694197B2 (en) * 2018-01-17 2020-06-23 Qualcomm Incorporated Composition based dynamic panel mode switch
US10416808B2 (en) * 2018-01-17 2019-09-17 Qualcomm Incorporated Input event based dynamic panel mode switch
CN208001343U (zh) * 2018-01-27 2018-10-23 深圳市康帕斯科技发展有限公司 Video and image hyper-converged processor platform
CN109525901B (zh) * 2018-11-27 2020-08-25 Oppo广东移动通信有限公司 Video processing method and apparatus, electronic device, and computer-readable medium


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113473132A (zh) * 2021-07-26 2021-10-01 Oppo广东移动通信有限公司 Transparent video compression method and apparatus, storage medium, and terminal
CN113709489A (zh) * 2021-07-26 2021-11-26 山东云海国创云计算装备产业创新中心有限公司 Video compression method, apparatus, device, and readable storage medium
CN113709489B (zh) * 2021-07-26 2024-04-19 山东云海国创云计算装备产业创新中心有限公司 Video compression method, apparatus, device, and readable storage medium
CN113473132B (zh) * 2021-07-26 2024-04-26 Oppo广东移动通信有限公司 Transparent video compression method and apparatus, storage medium, and terminal
CN113706367A (zh) * 2021-08-26 2021-11-26 北京市商汤科技开发有限公司 Node arrangement determination method and apparatus, electronic device, and storage medium
CN113706367B (zh) * 2021-08-26 2024-05-17 北京市商汤科技开发有限公司 Node arrangement determination method and apparatus, electronic device, and storage medium
CN114489456A (zh) * 2022-01-04 2022-05-13 杭州涂鸦信息技术有限公司 Lighting system control method and apparatus, computer device, and readable storage medium
CN114489456B (zh) * 2022-01-04 2024-01-30 杭州涂鸦信息技术有限公司 Lighting system control method and apparatus, computer device, and readable storage medium
CN115063325A (zh) * 2022-08-17 2022-09-16 中央广播电视总台 Video signal processing method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN112866802B (zh) 2022-03-15
CN112866802A (zh) 2021-05-28

Similar Documents

Publication Publication Date Title
WO2021104079A1 (zh) Video processing method and apparatus, storage medium and computer device
US8094230B2 (en) Image processing apparatus, image processing method, and program
US8284271B2 (en) Chroma noise reduction for cameras
US8446493B2 (en) Image processing apparatus, imaging apparatus, computer readable storage medium storing image processing program, and image processing method for performing color processing on a raw image
US8861846B2 (en) Image processing apparatus, image processing method, and program for performing superimposition on raw image or full color image
US9300840B2 (en) Image processing device and computer-readable storage medium storing computer-readable instructions
WO2021115242A1 (zh) Super-resolution image processing method and related apparatus
US8878867B2 (en) Transparency information in image or video format not natively supporting transparency
WO2019200640A1 (zh) Gamut mapping method and apparatus
CN108846871B (zh) Image processing method and apparatus
CN114040246A (zh) Image format conversion method, apparatus, device, and storage medium for a graphics processor
JP2006129105A (ja) Visual processing device, visual processing method, visual processing program, and semiconductor device
US20180241977A1 (en) Image processing device, image processing method, and display device
CN117768774A (zh) Image processor, image processing method, photographing apparatus, and electronic device
CN113824914A (zh) Video processing method and apparatus, electronic device, and storage medium
JPWO2017203941A1 (ja) Image processing device, image processing method, and program
EP2675171B1 (en) Transparency information in image or video format not natively supporting transparency
CN112862905B (zh) Image processing method and apparatus, storage medium and computer device
CN116156140A (zh) Video display processing method and apparatus, computer device, and storage medium
US11350068B2 (en) Video tone mapping using a sequence of still images
US11195247B1 (en) Camera motion aware local tone mapping
JP2001339695A (ja) Image signal processing device and image signal processing method
CN110858389B (zh) Method, apparatus, terminal, and transcoding device for enhancing video image quality
US11363245B2 (en) Image processing device, image processing method, and image processing program
JP7022696B2 (ja) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20892414

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.09.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20892414

Country of ref document: EP

Kind code of ref document: A1