WO2023036111A1 - Video processing method, apparatus, device and medium - Google Patents

Video processing method, apparatus, device and medium

Info

Publication number
WO2023036111A1
WO2023036111A1 (PCT/CN2022/117204)
Authority
WO
WIPO (PCT)
Prior art keywords
linear
video frame
color space
special effect
color
Prior art date
Application number
PCT/CN2022/117204
Other languages
English (en)
French (fr)
Inventor
刘昂
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023036111A1

Classifications

    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G — ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 — Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02 — Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 — Details of television systems
    • H04N5/222 — Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268 — Signal distribution or switching

Definitions

  • the present disclosure relates to the technical field of data processing, and in particular to a video processing method, apparatus, device, and medium.
  • current video processing methods cause color deviation in the spliced video, so the color accuracy of the generated special effect video is insufficient.
  • the present disclosure provides a video processing method, apparatus, device, and medium.
  • an embodiment of the present disclosure provides a video processing method, the method comprising:
  • acquiring a linear special effect resource that uses the color gamut of the first color space, performing fusion processing on the first linear target video frame and the linear special effect resource, and generating a first linear special effect video frame that uses the color gamut of the first color space.
  • the non-linear video using the color gamut of the first color space includes: a non-linear high dynamic range (HDR) video using the Rec.2020 color space, the international standard for ultra-high-definition television broadcasting systems and program sources;
  • the non-linear video using the color gamut of the second color space includes: a non-linear standard dynamic range (SDR) video using the standard red-green-blue (sRGB) color space.
  • the acquiring the first linear target video frame using the color gamut of the first color space according to the first linear video frame and the third linear video frame includes:
  • the acquiring the linear special effect resources using the color gamut of the first color space includes:
  • the method further includes:
  • if the nonlinear special effect resource adopts the color gamut of the second color space, processing the nonlinear special effect resource to generate a corresponding linear special effect resource that adopts the color gamut of the second color space;
  • the method further includes:
  • encoding processing is performed on the first linear special effect video frame using the color gamut of the first color space to generate a first linear special effect video for display on a display device.
  • the method further includes:
  • encoding processing is performed on the first nonlinear special effect video frame to generate a first nonlinear special effect video, using the color gamut of the first color space, for storage.
  • the method further includes:
  • Encoding processing is performed on the second linear special effect video frame to generate a second linear special effect video for display on a display device.
  • the method further includes:
  • encoding processing is performed on the second nonlinear special effect video frame to generate a second nonlinear special effect video, using the color gamut of the second color space, for storage.
  • the method also includes:
  • the data storage precision of the video frame is determined according to the storage device or the display device.
  • an embodiment of the present disclosure provides a video processing device, the device comprising:
  • the decoding module is configured to decode the nonlinear video using the color gamut of the first color space to obtain a corresponding first nonlinear video frame, and to decode the nonlinear video using the color gamut of the second color space to obtain a corresponding second nonlinear video frame, wherein the color gamut of the first color space is larger than the color gamut of the second color space;
  • the first conversion module is configured to process the second nonlinear video frame to generate a corresponding second linear video frame, and to perform color space conversion processing on the second linear video frame to generate a corresponding third linear video frame that uses the color gamut of the first color space;
  • the first generating module is configured to process the first nonlinear video frame to generate a corresponding first linear video frame, and to obtain, according to the first linear video frame and the third linear video frame, a first linear target video frame that uses the color gamut of the first color space;
  • the second generating module is configured to acquire a linear special effect resource that uses the color gamut of the first color space, perform fusion processing on the first linear target video frame and the linear special effect resource, and generate a first linear special effect video frame that uses the color gamut of the first color space.
  • the present disclosure provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is made to implement the above method.
  • the present disclosure provides an electronic device, which includes: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to read the executable instructions from the memory and execute them to implement the above method.
  • the present disclosure provides a computer program product, where the computer program product includes a computer program/instruction, and when the computer program/instruction is executed by a processor, the above method is implemented.
  • the video processing method provided by the embodiments of the present disclosure performs linear processing on the second nonlinear video frame to generate the second linear video frame, so that special effect processing can be performed based on it, and performs color space conversion on the second linear video frame to obtain the third linear video frame in the first color space, which ensures that the video frames to be processed share one color space while expanding that color space, making the colors of the video frames richer;
  • the first nonlinear video frame is likewise processed to generate the first linear video frame, so that special effect processing can be performed based on it; since the first linear video frame and the third linear video frame are both in linear space, the first linear target video frame generated from them is also in linear space, and it is then fused with the linear special effect resource;
  • because the first linear target video frame and the linear special effect resource are both in linear space and both use the color gamut of the first color space, the color uniformity, accuracy, and richness of the first linear special effect video frame are guaranteed, the added special effect resources look more natural, and the realism of the special effect video is improved.
  • FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a color space provided by an embodiment of the present disclosure
  • FIG. 3a is a schematic diagram of a linear space from black to white provided by an embodiment of the present disclosure;
  • FIG. 3b is a schematic diagram of a nonlinear space from black to white provided by an embodiment of the present disclosure;
  • FIG. 3c is a schematic diagram comparing a linear space and a nonlinear space provided by an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram of the correspondence between a linear space and a nonlinear space provided by an embodiment of the present disclosure;
  • FIG. 5a is a schematic diagram of splicing a first linear video frame and a third linear video frame provided by an embodiment of the present disclosure;
  • FIG. 5b is another schematic diagram of splicing a first linear video frame and a third linear video frame provided by an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of superimposing a first linear video frame and a third linear video frame provided by an embodiment of the present disclosure;
  • FIG. 7 is a schematic diagram of different data storage precisions of a video frame provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic flowchart of another video processing method provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of a video processing device provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the term "comprise" and its variations are open-ended, i.e., "including but not limited to".
  • the term "based on" means "based at least in part on".
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • an embodiment of the present disclosure provides a video processing method, which will be introduced in conjunction with specific embodiments below.
  • FIG. 1 is a schematic flow chart of a video processing method provided by an embodiment of the present disclosure.
  • the method can be executed by a video processing device, where the device can be implemented by software and/or hardware, and generally can be integrated into an electronic device. As shown in Figure 1, the method includes:
  • Step 101: Decode the nonlinear video using the color gamut of the first color space to obtain the corresponding first nonlinear video frame, and decode the nonlinear video using the color gamut of the second color space to obtain the corresponding second nonlinear video frame, where the color gamut of the first color space is larger than the color gamut of the second color space.
  • the multiple non-linear videos to be processed, captured by a mobile phone and/or forwarded by other communication applications, include non-linear videos using the color gamut of the first color space and non-linear videos using the color gamut of the second color space, where the color gamut of the first color space is larger than that of the second color space.
  • HDR is short for High Dynamic Range; SDR is short for Standard Dynamic Range.
  • a color space is a model used to represent colors, and there are corresponding color ranges in different color spaces.
  • in the three-dimensional color space shown in Figure 2, the area covered by the triangle corresponding to each color space represents the range of colors that the color space can express, and the size of that area indicates the size of the color space.
  • the Rec.2020 color range is larger than the sRGB color range.
  • three-dimensional coordinates are used to represent the color, and the value of each dimension coordinate is 0 to 1, where 0 means that the color is not used, and 1 means that the color is taken to the maximum value of the color in this color space.
  • the upper vertex of the triangle corresponding to Rec.2020 is the point representing the green primary color of Rec.2020, expressed as (0,1,0);
  • the upper vertex of the triangle corresponding to sRGB is the point representing the green primary color of sRGB, also expressed as (0,1,0).
  • videos captured by mobile phones and/or videos processed by communication applications are generally nonlinear videos.
  • the non-linear video frames adopting the color gamut of the first color space and the color gamut of the second color space refer to video whose linear space has been converted into a non-linear space.
  • Figure 3a is a schematic diagram of a linear space from black to white provided by an embodiment of the present disclosure.
  • because the human eye is more sensitive to dark colors, it perceives more of Figure 3a as bright area than as dark area. To balance the two, so that the bright and dark areas seen by the human eye are of similar size, nonlinear processing can be applied to the linear space.
  • after the nonlinear processing shown in Figure 3b, the human eye sees more dark area and less bright area, so that the bright and dark areas appear similar in size.
  • the nonlinear space is a gamma-corrected space, and the corresponding gamma value is 2.2.
  • in Figure 3c, the dotted line represents the dividing line between the dark area and the bright area as seen by the human eye.
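The gamma relationship described above can be sketched as a pair of transfer functions. The 2.2 exponent follows the gamma value named in the text; the function names are illustrative, and a simple power law is assumed.

```python
# Sketch of the gamma-corrected nonlinear space described above, assuming a
# plain power-law transfer with the gamma value of 2.2 given in the text.

def gamma_encode(linear: float, gamma: float = 2.2) -> float:
    """Linear space -> nonlinear (gamma-corrected) space."""
    return linear ** (1.0 / gamma)

def gamma_decode(nonlinear: float, gamma: float = 2.2) -> float:
    """Nonlinear (gamma-corrected) space -> linear space."""
    return nonlinear ** gamma

# A mid-gray of 0.5 in the nonlinear space corresponds to a much darker
# linear intensity, which is why gamma encoding gives the dark range a
# larger share of the encoded [0, 1] interval.
mid_linear = gamma_decode(0.5)
```

Encoding with the exponent 1/2.2 stretches dark values apart and compresses bright ones, matching the bright/dark balance discussed for Figures 3a to 3c.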
  • the nonlinear video in different color spaces can be one video or multiple videos.
  • Step 102: Process the second nonlinear video frame to generate a corresponding second linear video frame, and perform color space conversion processing on the second linear video frame to generate a corresponding third linear video frame using the color gamut of the first color space.
  • conversion functions corresponding to different video formats are preset according to the application scenario of the video processing; for example, a non-linear HDR video frame is converted into a linear HDR video frame through the HDR conversion function corresponding to the HDR video format, or a non-linear SDR video frame is converted into a linear SDR video frame through the SDR conversion function corresponding to the SDR video format.
  • the SDR nonlinear video frame is used as an example to illustrate as follows:
  • Figure 4 shows, on coordinates running from black to white, the colors of the linear video frame against the colors of the SDR nonlinear video frame.
  • the curve in the figure shows the correspondence and conversion relationship between the two; according to this relationship, SDR nonlinear video frames can be converted into SDR linear video frames.
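The patent does not spell out the SDR conversion function. One plausible concrete instance is the standard piecewise sRGB transfer function (IEC 61966-2-1), sketched here for a single channel value in [0, 1]:

```python
def srgb_to_linear(c: float) -> float:
    """Decode a nonlinear sRGB channel value (0..1) to linear light using
    the piecewise sRGB transfer function: a linear toe near black and a
    2.4-exponent power segment elsewhere."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    """Inverse transfer: encode linear light back to nonlinear sRGB."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * (c ** (1.0 / 2.4)) - 0.055
```

Applying `srgb_to_linear` per channel to every pixel of an SDR nonlinear video frame yields the SDR linear video frame used in the following steps.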
  • it is necessary to determine the conversion relationship between the first color space and the second color space, and to perform color space conversion processing on the second linear video frame according to that relationship, generating a corresponding third linear video frame that uses the color gamut of the first color space.
  • for example, if the first color space is Rec.2020, the second color space is sRGB, and the second linear video frame is an SDR video frame using the sRGB color space, then a color space conversion matrix converts the color space of the second linear video frame from sRGB to Rec.2020, so that the obtained third linear video frame is an HDR video frame using Rec.2020.
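The text names a color space conversion matrix without giving its coefficients. A standard choice for converting linear sRGB/BT.709 primaries to Rec.2020 primaries is the ITU-R BT.2087 matrix, used here as an illustrative sketch on one linear-light RGB triple:

```python
# Primary conversion from linear sRGB/BT.709 RGB to linear Rec.2020 RGB
# (coefficients from ITU-R BT.2087, rounded to four decimals).
SRGB_TO_REC2020 = (
    (0.6274, 0.3293, 0.0433),
    (0.0691, 0.9195, 0.0113),
    (0.0164, 0.0880, 0.8956),
)

def convert_gamut(rgb, matrix=SRGB_TO_REC2020):
    """Apply a 3x3 color space conversion matrix to a linear RGB triple."""
    r, g, b = rgb
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in matrix)
```

Each row sums to about 1, so white (1, 1, 1) is preserved; a pure sRGB red maps to a less saturated Rec.2020 triple, reflecting that the sRGB red primary lies inside the wider gamut.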
  • Step 103: Process the first nonlinear video frame to generate a corresponding first linear video frame, and obtain a first linear target video frame using the color gamut of the first color space according to the first linear video frame and the third linear video frame.
  • the color deviation caused by the different color spaces is avoided, thereby ensuring the accuracy of the color.
  • the color space of the first linear target video frame is also the first color space.
  • splicing processing is performed on the first linear video frame and the third linear video frame to obtain the first linear target video frame using the color gamut of the first color space.
  • for example, N first linear video frames and M third linear video frames are selected, where N and M are integers, and are combined according to a preset splicing method; the combined video frames form the first linear target video frame.
  • Method 1: As shown in Figure 5a, part of the third linear video frames is spliced after part of the first linear video frames to obtain the first linear target video frame. In Figure 5a, 100 first linear video frames and 80 third linear video frames are selected; the spliced first linear target video frame has 100 frames in total, where frames 1-50 are frames 1-50 of the first linear video frames and frames 51-100 are frames 31-80 of the third linear video frames.
  • Method 2: As shown in Figure 5b, the first linear video frames and the third linear video frames are cross-spliced to obtain the first linear target video frame. For example, 100 first linear video frames and 80 third linear video frames are selected, and the spliced first linear target video frame has 80 frames in total: frames 1-20 are frames 1-20 of the first linear video frames, frames 21-40 are frames 21-40 of the third linear video frames, frames 41-60 are frames 21-40 of the first linear video frames, and frames 61-80 are frames 41-60 of the third linear video frames.
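The two splicing methods can be sketched as list operations over frame sequences. The function names and the fixed 20-frame chunk are illustrative, and the interleaved variant is a simplification whose exact windows differ slightly from the Figure 5b example:

```python
def splice_sequential(first, third, n_first, start_third, n_third):
    """Method 1: keep the leading frames of `first`, then append a window
    of `third` (all indices 0-based)."""
    return first[:n_first] + third[start_third:start_third + n_third]

def splice_interleaved(first, third, chunk=20, n_chunks=4):
    """Method 2 (simplified): alternate fixed-size runs of frames taken
    from the two sources."""
    out = []
    for k in range(n_chunks):
        src = first if k % 2 == 0 else third
        start = (k // 2) * chunk      # both sources advance every 2 chunks
        out.extend(src[start:start + chunk])
    return out
```

With 100 first frames and 80 third frames, `splice_sequential(first, third, 50, 30, 50)` reproduces the Figure 5a example: frames 1-50 from the first sequence followed by frames 31-80 of the third.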
  • pixels of the first linear video frame and the third linear video frame are superimposed to obtain the first linear target video frame using the color gamut of the first color space. Examples are as follows:
  • the number of frames of the first linear video frame is the same as that of the third linear video frame, and each frame of the first linear video frame and each frame of the third linear video frame may be The pixels of the frame are superimposed to obtain the first linear target video frame.
  • for example, if the first linear video frames have 3 frames and the third linear video frames also have 3 frames, the pixels of each first linear video frame are superimposed with the pixels of the corresponding third linear video frame; where the pixels of the first linear video frame overlap with those of the third linear video frame, the pixels of the third linear video frame are retained, thereby realizing the superposition of pixels and obtaining the corresponding first linear target video frame.
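The superposition rule above (keep the third frame's pixel wherever the two frames overlap) can be sketched per pixel. Representing pixels a frame does not cover as `None` is an assumption made for illustration:

```python
def superimpose(first_frame, third_frame):
    """Superimpose two frames given as 2D grids of RGB triples, where None
    marks a pixel the frame does not cover. Where both frames have a pixel,
    the third frame's pixel is retained, as the text specifies."""
    return [
        [t if t is not None else f
         for f, t in zip(first_row, third_row)]
        for first_row, third_row in zip(first_frame, third_frame)
    ]
```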
  • Step 104: Acquire the linear special effect resource using the color gamut of the first color space, perform fusion processing on the first linear target video frame and the linear special effect resource, and generate the first linear special effect video frame using the color gamut of the first color space.
  • the color space used by the linear special effect resource is the first color space color gamut.
  • the linear special effect resource is acquired, and the first linear target video frame and the linear special effect resource are fused to generate the first linear special effect video frame using the color gamut of the first color space. Fusion applies the linear special effect resource to the first linear target video frame, for example adding virtual stickers and/or blurring according to the special effect requirements, thereby generating the first linear special effect video frame.
  • for example, if the color space of the linear special effect resource and the first linear target video frame is Rec.2020 and the linear special effect resource is used to add a sticker effect, the resource is fused with the first linear target video frame: a sticker is added at the corresponding position of the first linear target video frame to generate a first linear special effect video frame using the color gamut of the first color space.
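One common way to realize such fusion is a per-pixel alpha blend; the patent only says "fusion processing", so the sticker alpha channel here is an assumption. Blending in linear light is exactly why the earlier linearization matters: averaging gamma-encoded values instead would darken the blended edges.

```python
def fuse_pixel(base, sticker, alpha):
    """Blend one linear-light RGB sticker pixel over the base frame pixel.
    `alpha` is the sticker's coverage at this pixel
    (0.0 = base only, 1.0 = sticker only)."""
    return tuple(alpha * s + (1.0 - alpha) * b for b, s in zip(base, sticker))
```

For example, blending a white sticker pixel at half coverage over black gives mid-gray (0.5, 0.5, 0.5) in linear light.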
  • each video frame in the above embodiments has a corresponding data storage precision, which represents the precision of the color that each pixel in the video frame can express; for the same color range, the more bits of data storage precision, the finer the granularity into which the color range can be divided.
  • for example, in Figure 7 the data storage precision is divided into 10-bit and 8-bit; as can be seen from the figure, the color granularity corresponding to 10-bit is finer, and that corresponding to 8-bit is coarser.
  • in some cases the display device cannot display finer-grained colors, or the storage device cannot provide enough storage space for video frames with a higher number of bits of data storage precision;
  • therefore, the storage device or the display device determines the data storage precision of each video frame in this embodiment. For example, if the display device cannot show the difference between 10-bit and 8-bit data storage precision, the data storage precision of the video frame is determined to be 8 bits.
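The precision comparison can be made concrete: over the same normalized [0, 1] color range, the smallest representable step at a given bit depth is 1/(2^bits − 1).

```python
def quant_step(bits: int) -> float:
    """Smallest color difference representable at the given bit depth,
    over a normalized [0, 1] channel range."""
    return 1.0 / (2 ** bits - 1)

step_8bit = quant_step(8)     # 1/255,  about 0.0039
step_10bit = quant_step(10)   # 1/1023, about 0.00098
```

A 10-bit channel divides the same range into roughly four times as many levels as an 8-bit channel, which is the finer granularity shown in Figure 7.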
  • in summary, the video processing method of the embodiments of the present disclosure performs linear processing on the second nonlinear video frame to generate the second linear video frame, so that special effect processing can be performed based on it, and performs color space conversion on the second linear video frame to obtain the third linear video frame in the first color space, ensuring a unified and expanded color space for the frames to be processed;
  • the first nonlinear video frame is likewise processed to generate the first linear video frame, so that special effect processing can be performed based on it; since the first linear video frame and the third linear video frame are both in linear space, the first linear target video frame generated from them is also in linear space, and it is fused with the linear special effect resource;
  • because the first linear target video frame and the linear special effect resource are both in linear space and both use the color gamut of the first color space, the color uniformity, accuracy, and richness of the first linear special effect video frame are guaranteed, the added special effect resources look more natural, and the realism of the special effect video is improved.
  • the following uses a non-linear special effect resource as an example to illustrate how the linear special effect resource with the required color space and color gamut is obtained, as shown in Figure 8.
  • acquiring the linear special effect resource using the color gamut of the first color space includes:
  • Step 801: Detect whether the nonlinear special effect resource adopts the color gamut of the first color space; if it does, process the nonlinear special effect resource to generate a linear special effect resource adopting the color gamut of the first color space.
  • if the non-linear special effect resource adopts the color gamut of the first color space, its color space is the same as that of the first linear target video frame, so only linearization is needed.
  • for example, if the first color space is Rec.2020, the non-linear special effect resource is parsed to obtain its color space parameters; if it is a non-linear HDR special effect resource using Rec.2020, linearization processing is performed on it, and the obtained linear special effect resource is a linear HDR special effect resource using Rec.2020.
  • Step 802: If the non-linear special effect resource adopts the color gamut of the second color space, process the non-linear special effect resource to generate a corresponding linear special effect resource adopting the color gamut of the second color space.
  • if the non-linear special effect resource adopts the color gamut of the second color space, its color space differs from that of the first linear target video frame to be processed; performing special effect processing directly would make the colors inaccurate and the result less realistic. To ensure the realism of the special effect processing and to allow special effect operations based on mathematical computation, such as blurring, color space conversion and linear processing must be performed on the non-linear special effect resource. First, linear processing is performed on the non-linear special effect resource to generate a corresponding linear special effect resource using the color gamut of the second color space.
  • for example, if the non-linear special effect resource is a non-linear SDR special effect resource using sRGB, it is first linearly processed, and the generated linear special effect resource is a linear SDR special effect resource using sRGB.
  • Step 803: Perform color space conversion processing on the linear special effect resource using the color gamut of the second color space to generate a corresponding linear special effect resource using the color gamut of the first color space.
  • a color space conversion process is performed on the linear special effect resource using the color gamut of the second color space to generate a corresponding linear special effect resource using the color gamut of the first color space.
  • the color space conversion process can be implemented by a conversion function, and the conversion function can be set according to the first color space and the second color space.
  • for example, if the linear special effect resource is a linear SDR special effect resource using sRGB, color space conversion is performed on it through a conversion function; since the target is the Rec.2020 HDR video frame format, the generated linear special effect resource is a linear HDR special effect resource using Rec.2020.
  • in this way, the special effect resource is guaranteed to be linear and to use the first color space, which ensures that the color range of the special effect resource is relatively large and that the generated first linear special effect video frame has rich colors;
  • the special effect resource is also consistent with the first linear target video frame in both linear space and the first color space, which ensures the color accuracy of the image and video after special effect processing, makes the added special effect assets more natural, and improves the realism of the generated first linear special effect video frames.
  • a corresponding video needs to be generated, and the video can be used for display on a display device or stored in a storage device.
  • examples of video generation methods in different application scenarios are as follows:
  • Scenario 1: In this scenario a video is generated for display on a display device, and the display device is suitable for displaying video using the color gamut of the first color space. The processing includes: encoding the first linear special effect video frame using the color gamut of the first color space to generate the first linear special effect video and display it on the display device.
  • the linear video can be displayed on the display device.
  • the first linear special effect video frame using the first color space color gamut is encoded to generate the first linear special effect video for display on the display device.
  • the encoding process can synthesize linear special effect video frames into corresponding linear special effect video.
  • Scenario 2: In this scenario the generated video is stored on a storage device, and the storage device is suitable for storing videos using the color gamut of the first color space. The processing includes:
  • the first linear special effect video frame is processed to generate the first nonlinear special effect video frame using the color gamut of the first color space.
  • the video stored in the storage medium is a nonlinear video, so it is necessary to process the first linear special effect video frame to generate the first nonlinear special effect video frame.
  • this processing converts the linear video frame into a nonlinear video frame; for details of the process, see the foregoing embodiments, which are not repeated here.
  • encoding processing is performed on the first non-linear special effect video frame to generate a first non-linear special effect video storage using the color gamut of the first color space.
  • the encoding process can synthesize the first nonlinear special effect video frame into a corresponding first nonlinear special effect video.
  • there are many types of encoding processing, which can be selected according to the application scenario; the first non-linear special effect video frame is encoded to generate the corresponding first non-linear special effect video, which is stored on the storage device.
  • Scenario 3: In this scenario the video is generated for display on a display device, and the video in this scenario uses the color gamut of the second color space. The processing includes:
  • the method of color space conversion can be selected according to the application scenario and is not limited in this embodiment; for example, a conversion function and/or a matrix can be used to realize the color space conversion, with the function or matrix selected and designed according to the second color space and the first color space.
  • for example, if the first color space is Rec.2020, the second color space is sRGB, and the first linear special effect video frame is a linear HDR special effect video frame using Rec.2020, the conversion function processes the linear HDR special effect video frame and converts it into a linear SDR video frame using sRGB, so that the generated second linear special effect video frame is a linear SDR special effect video frame using sRGB.
  • encoding processing is performed on the second linear special effect video frame to generate a second linear special effect video to be displayed on a display device.
  • the encoding processing can synthesize the second linear special effect video frames into the corresponding second linear special effect video.
  • there are many types of encoding processing, which can be selected according to the application scenario and are not limited in this embodiment, for example soft encoding and hard encoding.
  • If the display device only supports the second color space, displaying a video whose color space is the first color space (for example: Rec.2020) will not improve the quality, and it may cause overexposure (that is, whenever the color to be displayed exceeds what a pixel can display, the pixel shows its maximum brightness, thereby reducing the accuracy of the color).
  • Converting the first color space into the second color space therefore improves the accuracy of the color, reduces the storage space occupied by the special effect video frames, and improves the transmission efficiency of the special effect video.
  • Scenario 4: In this scenario, based on the above embodiment, before the second linear special effect video is displayed on the display device, the special effect video needs to be stored on a storage device.
  • The video stored on the storage medium is a non-linear video, so the linear special effect video frame needs to be processed to generate a non-linear special effect video frame.
  • Accordingly, after generating the second linear special effect video frame using the color gamut of the second color space, the method further includes:
  • Encoding processing is performed on the second non-linear special effect video frame to generate a second non-linear special effect video using the color gamut of the second color space, which is then stored.
  • the encoding process can synthesize the second non-linear special effect video frame into a corresponding second non-linear special effect video.
  • FIG. 9 is a schematic structural diagram of a video processing device provided by an embodiment of the present disclosure.
  • the device may be implemented by software and/or hardware, and may generally be integrated into an electronic device.
  • the device 900 includes:
  • The decoding module 901 is configured to decode the non-linear video using the color gamut of the first color space to obtain the corresponding first non-linear video frame, and to decode the non-linear video using the color gamut of the second color space to obtain the corresponding second non-linear video frame, wherein the color gamut of the first color space is larger than the color gamut of the second color space;
  • the first conversion module 902 is configured to process the second non-linear video frame to generate a corresponding second linear video frame, and to perform color space conversion processing on the second linear video frame to generate a corresponding third linear video frame using the color gamut of the first color space;
  • the first generation module 903 is configured to process the first non-linear video frame to generate a corresponding first linear video frame, and to obtain, according to the first linear video frame and the third linear video frame, a first linear target video frame using the color gamut of the first color space;
  • the second generation module 904 is configured to acquire a linear special effect resource using the color gamut of the first color space, to perform fusion processing on the first linear target video frame and the linear special effect resource, and to generate a first linear special effect video frame using the color gamut of the first color space.
  • the non-linear video using the color gamut of the first color space includes: non-linear HDR video using the Rec.2020 color space;
  • the non-linear video using the color gamut of the second color space includes: non-linear SDR video using the sRGB color space.
  • the first generation module 903 is configured to:
  • the second generating module 904 is configured to:
  • the device 900 further includes:
  • a first processing module configured to, if the non-linear special effect resource adopts the color gamut of the second color space, process the non-linear special effect resource to generate a corresponding linear special effect resource using the color gamut of the second color space;
  • the second processing module is configured to perform color space conversion processing on the linear special effect resources using the second color space color gamut, and generate corresponding linear special effect resources using the first color space color gamut.
  • the device 900 further includes:
  • the first encoding module is configured to encode the first linear special effect video frame using the color gamut of the first color space, and generate the first linear special effect video for display on a display device.
  • the device 900 further includes:
  • a third processing module configured to process the first linear special effect video frame to generate a first nonlinear special effect video frame using the color gamut of the first color space;
  • the second encoding module is configured to perform encoding processing on the first non-linear special effect video frame to generate a first non-linear special effect video using the color gamut of the first color space, which is then stored.
  • the device 900 further includes:
  • the second conversion module is configured to perform color space conversion on the first linear special effect video frame using the color gamut of the first color space, and generate a second linear special effect video frame using the second color space color gamut;
  • the third encoding module is configured to encode the second linear special effect video frame to generate a second linear special effect video for display on a display device.
  • the device 900 further includes:
  • a fourth processing module configured to process the second linear special effect video frame to generate a second nonlinear special effect video frame using the color gamut of the second color space;
  • the fourth encoding module is configured to encode the second non-linear special effect video frame to generate a second non-linear special effect video using the color gamut of the second color space, which is then stored.
  • the device 900 further includes:
  • a determining module configured to determine the data storage precision of the video frame according to the storage device or the display device.
  • the video processing device provided in the embodiments of the present disclosure can execute the video processing method provided in any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the method.
  • An embodiment of the present disclosure further provides a computer program product, including a computer program/instruction, and when the computer program/instruction is executed by a processor, the video processing method provided in any embodiment of the present disclosure is implemented.
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 10 shows a schematic structural diagram of an electronic device 1000 suitable for implementing an embodiment of the present disclosure.
  • The electronic device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablet computers), PMPs (Portable Multimedia Players), vehicle-mounted terminals (e.g., car navigation terminals), and wearable electronic devices, as well as fixed terminals such as digital TVs, desktop computers, and smart home devices.
  • the electronic device shown in FIG. 10 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • The electronic device 1000 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 1001, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1008 into a random access memory (RAM) 1003.
  • In the RAM 1003, various programs and data necessary for the operation of the electronic device 1000 are also stored.
  • the processing device 1001, ROM 1002, and RAM 1003 are connected to each other through a bus 1004.
  • An input/output (I/O) interface 1005 is also connected to the bus 1004 .
  • The following devices can be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 1007 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 1008 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 1009.
  • the communication means 1009 may allow the electronic device 1000 to perform wireless or wired communication with other devices to exchange data. While FIG. 10 shows electronic device 1000 having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via the communication means 1009, or from the storage means 1008, or from the ROM 1002.
  • When the computer program is executed by the processing device 1001, the above-mentioned functions defined in the video processing method of the embodiments of the present disclosure are executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • The client and the server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: decodes the non-linear video in the first color space to obtain the first non-linear video frame, and decodes the non-linear video in the second color space to obtain the second non-linear video frame; processes the second non-linear video frame to generate a second linear video frame, and performs color space conversion on the second linear video frame to generate a third linear video frame using the color gamut of the first color space; processes the first non-linear video frame to generate a first linear video frame, and obtains the first linear target video frame in the first color space according to the first linear video frame and the third linear video frame; acquires the linear special effect resource in the first color space, and fuses the first linear target video frame with the linear special effect resource to generate the first linear special effect video frame in the first color space.
  • The embodiment of the present disclosure thereby ensures the color accuracy and richness of the first linear special effect video frame, makes the added special effect resources more natural, and improves the realism of the special effect video.
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a unit does not, under certain circumstances, constitute a limitation of the unit itself.
  • For example, and without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • the present disclosure provides a video processing method, including:
  • acquiring a linear special effect resource using the color gamut of the first color space, performing fusion processing on the first linear target video frame and the linear special effect resource, and generating a first linear special effect video frame using the color gamut of the first color space.
  • the non-linear video using the color gamut of the first color space includes: non-linear HDR video using the Rec.2020 color space;
  • the non-linear video using the color gamut of the second color space includes: non-linear SDR video using the sRGB color space.
  • the obtaining, according to the first linear video frame and the third linear video frame, of the first linear target video frame using the color gamut of the first color space includes:
  • the acquisition of the linear special effect resource using the color gamut of the first color space includes:
  • the method further includes:
  • if the non-linear special effect resource adopts the color gamut of the second color space, processing the non-linear special effect resource to generate a corresponding linear special effect resource adopting the color gamut of the second color space;
  • the method further includes:
  • Encoding the first linear special effect video frame using the color gamut of the first color space to generate a first linear special effect video for display on a display device.
  • the method further includes:
  • Encoding is performed on the first non-linear special effect video frame to generate a first non-linear special effect video using the color gamut of the first color space, which is then stored.
  • the method further includes:
  • Encoding processing is performed on the second linear special effect video frame to generate a second linear special effect video for display on a display device.
  • the method further includes:
  • Encoding is performed on the second non-linear special effect video frame to generate a second non-linear special effect video using the color gamut of the second color space, which is then stored.
  • the method further includes:
  • the data storage precision of the video frame is determined according to the storage device or the display device.
  • the present disclosure provides a video processing device, including:
  • the decoding module is used to decode the non-linear video using the color gamut of the first color space to obtain the corresponding first non-linear video frame, and to decode the non-linear video using the color gamut of the second color space to obtain the corresponding second non-linear video frame, wherein the color gamut of the first color space is larger than the color gamut of the second color space;
  • the first conversion module is configured to process the second nonlinear video frame to generate a corresponding second linear video frame, and perform color space conversion processing on the second linear video frame to generate a corresponding second linear video frame using the first color A third linear video frame of the spatial color gamut;
  • a first generation module configured to process the first non-linear video frame to generate a corresponding first linear video frame, and to obtain, according to the first linear video frame and the third linear video frame, the first linear target video frame using the color gamut of the first color space;
  • a second generation module configured to acquire a linear special effect resource using the color gamut of the first color space, to perform fusion processing on the first linear target video frame and the linear special effect resource, and to generate the first linear special effect video frame using the color gamut of the first color space.
  • According to one or more embodiments of the present disclosure, in the video processing device provided by the present disclosure:
  • the non-linear video using the color gamut of the first color space includes: non-linear HDR video using the Rec.2020 color space;
  • the non-linear video using the color gamut of the second color space includes: non-linear SDR video using the sRGB color space.
  • the first generating module is configured to:
  • the second generating module is configured to:
  • the device further includes:
  • a first processing module configured to process the nonlinear special effect resource to generate a corresponding linear special effect resource using the second color space color gamut if the nonlinear special effect resource adopts the color gamut of the second color space ;
  • the second processing module is configured to perform color space conversion processing on the linear special effect resources using the second color space color gamut, and generate corresponding linear special effect resources using the first color space color gamut.
  • the device further includes:
  • the first encoding module is configured to encode the first linear special effect video frame using the color gamut of the first color space, and generate the first linear special effect video for display on a display device.
  • the device further includes:
  • a third processing module configured to process the first linear special effect video frame to generate a first nonlinear special effect video frame using the color gamut of the first color space;
  • the second encoding module is configured to perform encoding processing on the first non-linear special effect video frame to generate a first non-linear special effect video using the color gamut of the first color space, which is then stored.
  • the device further includes:
  • the second conversion module is configured to perform color space conversion on the first linear special effect video frame using the color gamut of the first color space, and generate a second linear special effect video frame using the second color space color gamut;
  • the third encoding module is configured to encode the second linear special effect video frame to generate a second linear special effect video for display on a display device.
  • the device further includes:
  • a fourth processing module configured to process the second linear special effect video frame to generate a second nonlinear special effect video frame using the color gamut of the second color space;
  • the fourth encoding module is configured to encode the second non-linear special effect video frame to generate a second non-linear special effect video using the color gamut of the second color space, which is then stored.
  • the device further includes:
  • a determining module configured to determine the data storage precision of the video frame according to the storage device or the display device.
  • the present disclosure provides an electronic device, including:
  • the processor is configured to read the executable instructions from the memory, and execute the instructions to implement any video processing method provided in the present disclosure.
  • the present disclosure provides a computer-readable storage medium, the storage medium storing a computer program, and the computer program being used to execute any one of the video processing methods provided by the present disclosure.


Abstract

Embodiments of the present disclosure relate to a video processing method, apparatus, device, and medium, including: decoding a non-linear video in the color gamut of a first color space to obtain first non-linear video frames, and decoding a non-linear video in the color gamut of a second color space to obtain second non-linear video frames; processing the second non-linear video frames to generate second linear video frames, and performing color space conversion on the second linear video frames to generate third linear video frames using the color gamut of the first color space; processing the first non-linear video frames to generate first linear video frames, and obtaining first linear target video frames in the first color space according to the first linear video frames and the third linear video frames; and fusing the first linear target video frames with linear special effect resources to generate first linear special effect video frames in the first color space. The embodiments of the present disclosure ensure the color accuracy and richness of the first linear special effect video frames, make the added special effect resources more natural, and improve the realism of the special effect video.

Description

Video processing method, apparatus, device, and medium
This application claims priority to the Chinese patent application No. 202111064216.0, entitled "Video processing method, apparatus, device, and medium", filed with the China National Intellectual Property Administration on September 10, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of data processing technology, and in particular to a video processing method, apparatus, device, and medium.
Background
With the development of computer technology, the application scenarios of video processing technology have become increasingly wide. In the process of video processing, it is common to clip and splice multiple videos and then apply special effect processing, for example, adding special effect stickers after splicing the videos.
However, current video processing methods cause color differences in the spliced video, resulting in insufficient color accuracy of the generated special effect video.
Summary
In order to solve the above technical problem, or at least partially solve it, the present disclosure provides a video processing method, apparatus, device, and medium.
In a first aspect, an embodiment of the present disclosure provides a video processing method, the method including:
decoding a non-linear video using the color gamut of a first color space to obtain corresponding first non-linear video frames, and decoding a non-linear video using the color gamut of a second color space to obtain corresponding second non-linear video frames, wherein the color gamut of the first color space is larger than the color gamut of the second color space;
processing the second non-linear video frames to generate corresponding second linear video frames, and performing color space conversion on the second linear video frames to generate corresponding third linear video frames using the color gamut of the first color space;
processing the first non-linear video frames to generate corresponding first linear video frames, and obtaining, according to the first linear video frames and the third linear video frames, first linear target video frames using the color gamut of the first color space;
acquiring a linear special effect resource using the color gamut of the first color space, fusing the first linear target video frames with the linear special effect resource, and generating first linear special effect video frames using the color gamut of the first color space.
In a possible implementation, the non-linear video using the color gamut of the first color space includes: a non-linear high dynamic range (HDR) video using the Rec.2020 color space, the international standard for ultra-high-definition television broadcasting and program source production;
the non-linear video using the color gamut of the second color space includes: a non-linear standard dynamic range (SDR) video using the standard Red Green Blue (sRGB) color space.
In a possible implementation, obtaining, according to the first linear video frames and the third linear video frames, the first linear target video frames using the color gamut of the first color space includes:
splicing the first linear video frames and the third linear video frames to obtain the first linear target video frames using the color gamut of the first color space; and/or,
superimposing the pixels of the first linear video frames and the third linear video frames to obtain the first linear target video frames using the color gamut of the first color space.
In a possible implementation, acquiring the linear special effect resource using the color gamut of the first color space includes:
detecting whether a non-linear special effect resource uses the color gamut of the first color space, and if so, processing the non-linear special effect resource to generate a linear special effect resource using the color gamut of the first color space.
In a possible implementation, after detecting whether the non-linear special effect resource uses the color gamut of the first color space, the method further includes:
if the non-linear special effect resource uses the color gamut of the second color space, processing the non-linear special effect resource to generate a corresponding linear special effect resource using the color gamut of the second color space;
performing color space conversion on the linear special effect resource using the color gamut of the second color space to generate a corresponding linear special effect resource using the color gamut of the first color space.
In a possible implementation, after generating the first linear special effect video frames using the color gamut of the first color space, the method further includes:
encoding the first linear special effect video frames using the color gamut of the first color space to generate a first linear special effect video for display on a display device.
In a possible implementation, after generating the first linear special effect video frames using the color gamut of the first color space, the method further includes:
processing the first linear special effect video frames to generate first non-linear special effect video frames using the color gamut of the first color space;
encoding the first non-linear special effect video frames to generate a first non-linear special effect video using the color gamut of the first color space for storage.
In a possible implementation, after generating the first linear special effect video frames using the color gamut of the first color space, the method further includes:
performing color space conversion on the first linear special effect video frames using the color gamut of the first color space to generate second linear special effect video frames using the color gamut of the second color space;
encoding the second linear special effect video frames to generate a second linear special effect video for display on a display device.
In a possible implementation, after generating the second linear special effect video frames using the color gamut of the second color space, the method further includes:
processing the second linear special effect video frames to generate second non-linear special effect video frames using the color gamut of the second color space;
encoding the second non-linear special effect video frames to generate a second non-linear special effect video using the color gamut of the second color space for storage.
In a possible implementation, the method further includes:
determining the data storage precision of the video frames according to the storage device or the display device.
In a second aspect, an embodiment of the present disclosure provides a video processing apparatus, the apparatus including:
a decoding module configured to decode a non-linear video using the color gamut of a first color space to obtain corresponding first non-linear video frames, and to decode a non-linear video using the color gamut of a second color space to obtain corresponding second non-linear video frames, wherein the color gamut of the first color space is larger than the color gamut of the second color space;
a first conversion module configured to process the second non-linear video frames to generate corresponding second linear video frames, and to perform color space conversion on the second linear video frames to generate corresponding third linear video frames using the color gamut of the first color space;
a first generation module configured to process the first non-linear video frames to generate corresponding first linear video frames, and to obtain, according to the first linear video frames and the third linear video frames, first linear target video frames using the color gamut of the first color space;
a second generation module configured to acquire a linear special effect resource using the color gamut of the first color space, to fuse the first linear target video frames with the linear special effect resource, and to generate first linear special effect video frames using the color gamut of the first color space.
In a third aspect, the present disclosure provides a computer-readable storage medium having instructions stored therein which, when run on a terminal device, cause the terminal device to implement the above method.
In a fourth aspect, the present disclosure provides an electronic device including: a processor; and a memory for storing instructions executable by the processor; the processor being configured to read the executable instructions from the memory and execute the instructions to implement the above method.
In a fifth aspect, the present disclosure provides a computer program product including a computer program/instructions which, when executed by a processor, implement the above method.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have at least the following advantages:
In the video processing method provided by the embodiments of the present disclosure, the second non-linear video frames are linearized to generate second linear video frames so that special effect processing can be based on them; the second linear video frames are color-space converted to the first color space to generate third linear video frames, which guarantees a unified color space for the video frames to be processed while enlarging the color space and enriching the colors of the video frames. The first non-linear video frames are processed to generate first linear video frames so that special effect processing can be based on them; since the first and third linear video frames are both in linear space, the first linear target video frames generated from them remain in linear space. The first linear target video frames are fused with the linear special effect resource, and because both are in linear space and both use the color gamut of the first color space, the color uniformity, accuracy, and richness of the first linear special effect video frames are guaranteed, the added special effect resources are more natural, and the realism of the special effect video is improved.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of color spaces provided by an embodiment of the present disclosure;
FIG. 3a is a schematic diagram of a linear space from black to white provided by an embodiment of the present disclosure;
FIG. 3b is a schematic diagram of a non-linear space from black to white provided by an embodiment of the present disclosure;
FIG. 3c is a schematic diagram comparing a linear space and a non-linear space provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the correspondence between a linear space and a non-linear space provided by an embodiment of the present disclosure;
FIG. 5a is a schematic diagram of splicing first linear video frames and third linear video frames provided by an embodiment of the present disclosure;
FIG. 5b is a schematic diagram of another way of splicing first linear video frames and third linear video frames provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of superimposing first linear video frames and third linear video frames provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of different data storage precisions of video frames provided by an embodiment of the present disclosure;
FIG. 8 is a schematic flowchart of another video processing method provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in different orders and/or in parallel. Furthermore, the method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not intended to limit the order or interdependence of the functions performed by them.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of those messages or information.
In order to solve the above problem, an embodiment of the present disclosure provides a video processing method, which is introduced below with reference to specific embodiments.
FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of the present disclosure. The method may be executed by a video processing apparatus, which may be implemented in software and/or hardware and may generally be integrated into an electronic device. As shown in FIG. 1, the method includes:
Step 101: decode a non-linear video using the color gamut of a first color space to obtain corresponding first non-linear video frames, and decode a non-linear video using the color gamut of a second color space to obtain corresponding second non-linear video frames, wherein the color gamut of the first color space is larger than the color gamut of the second color space.
In this embodiment, the multiple non-linear videos to be processed, obtained by shooting with a mobile phone and/or forwarded through other communication applications, include: a non-linear video using the color gamut of the first color space and a non-linear video using the color gamut of the second color space, where the color gamut of the first color space is larger than that of the second color space. For example: a non-linear high dynamic range (HDR) video using the Rec.2020 color space (Recommendation ITU-R BT.2020, the international standard for ultra-high-definition television broadcasting and program source production) captured with a mobile phone, and a forwarded non-linear standard dynamic range (SDR) video using the standard Red Green Blue (sRGB) color space obtained through a communication application.
It should be noted that a color space is a model for representing colors, and different color spaces have corresponding color ranges. Taking a three-dimensional color space as an example, as shown in FIG. 2, the color region covered by the triangle corresponding to each color space represents the range of colors that the color space can express, and the area of that region indicates the size of the color space. As can be seen from FIG. 2, among the four color spaces shown, the color range of Rec.2020 is larger than that of sRGB. In a three-dimensional color space, a color is represented by three-dimensional coordinates, with each coordinate taking a value from 0 to 1, where 0 means the primary is absent and 1 means the primary takes its maximum value in that color space. Referring to FIG. 2, the upper vertex of the triangle corresponding to Rec.2020 is the green primary of Rec.2020, represented as (0, 1, 0), and the upper vertex of the triangle corresponding to sRGB is the green primary of sRGB, also represented as (0, 1, 0). As can be seen from FIG. 2, although the green primary in Rec.2020 and the green primary in sRGB are both represented as (0, 1, 0), the actual colors they denote are different. It follows that even if the color representations in the first color space and in the second color space are identical, the actual colors are not. Mixing images from different color spaces may therefore cause a loss of color accuracy. To ensure that the colors of the video frames after special effect processing are accurate and that the quality of the effect colors is good, all video frames to be processed must use the same color space before special effect processing.
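The point above, that (0, 1, 0) names different real colors in Rec.2020 and sRGB, can be checked numerically: expressing the sRGB green primary in Rec.2020 coordinates yields a point strictly inside the Rec.2020 gamut rather than (0, 1, 0). The matrix values below are a commonly cited approximation of the linear Rec.2020-to-sRGB primaries conversion, assumed here only for illustration.

```python
import numpy as np

# Commonly cited approximate matrix: linear Rec.2020 RGB -> linear sRGB RGB.
BT2020_TO_SRGB = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

# Its inverse maps sRGB coordinates into Rec.2020 coordinates.
SRGB_TO_BT2020 = np.linalg.inv(BT2020_TO_SRGB)

# sRGB green (0, 1, 0), re-expressed in Rec.2020 coordinates: every
# component is strictly between 0 and 1, i.e. it is NOT the Rec.2020 green.
srgb_green_in_2020 = SRGB_TO_BT2020 @ np.array([0.0, 1.0, 0.0])
```

Since all three components come out nonzero and below 1, the same coordinate triple clearly denotes two different physical colors in the two spaces, which is exactly why mixed-space processing loses color accuracy.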
Further, it should be noted that videos obtained by shooting with a mobile phone and/or forwarded by communication applications are generally non-linear videos. In this embodiment, the non-linear video frames using the color gamut of the first color space and those using the color gamut of the second color space both refer to videos whose linear space has been converted into a non-linear space. In a linear video, the relationship between a pixel's value and its luminous power is linear, as shown in FIG. 3a, a schematic diagram of a linear space from black to white provided by an embodiment of the present disclosure. However, because the human eye is more sensitive to dark colors, the bright region in FIG. 3a appears larger to the eye than the dark region. To balance the areas of the bright and dark regions so that they appear similar in size to the human eye, the linear space can be processed non-linearly.
In a non-linear space, the relationship between a pixel's value and its luminous power is non-linear, as shown in FIG. 3b, a schematic diagram of a non-linear space from black to white provided by an embodiment of the present disclosure. Compared with FIG. 3a, the dark region seen by the human eye becomes larger and the bright region smaller, so that the bright and dark regions appear similar in size. In a possible implementation, the non-linear space is a gamma-corrected space with a gamma value of 2.2. To illustrate the relationship between the linear and non-linear spaces more clearly, see FIG. 3c: the dashed line indicates the boundary between the dark and bright regions as perceived by the human eye; this boundary corresponds to a scale value of 21.76% in the linear space and 50% in the non-linear space. It can be seen that, except at the 0 and 100% marks, the same perceived color corresponds to different scale values in the linear and non-linear spaces. Therefore, to guarantee color precision, the video frames must be unified into either linear or non-linear space before image processing.
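The gamma-2.2 relationship described above can be checked directly: the 50% mark of the non-linear (gamma-encoded) space corresponds to roughly 21.76% in linear space, matching the figure quoted for FIG. 3c. A minimal sketch, assuming a pure power-law gamma of 2.2:

```python
GAMMA = 2.2

def to_linear(v, gamma=GAMMA):
    # Decode a gamma-encoded value in [0, 1] into linear light.
    return v ** gamma

def to_nonlinear(v, gamma=GAMMA):
    # Encode a linear-light value in [0, 1] with gamma correction.
    return v ** (1.0 / gamma)

# The perceptual mid-point: 50% non-linear is about 21.76% linear.
mid_linear = to_linear(0.5)

# The two transforms are exact inverses, and 0 and 1 are fixed points,
# which is why only the interior of the scale disagrees between spaces.
roundtrip = to_nonlinear(to_linear(0.42))
```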
To apply special effect processing to the non-linear videos, the non-linear video using the color gamut of the first color space must be decoded to obtain the corresponding first non-linear video frames, and the non-linear video using the color gamut of the second color space must be decoded to obtain the corresponding second non-linear video frames. There are many decoding methods, which can be selected according to the application scenario and are not limited by this embodiment, for example: soft decoding and hard decoding. Moreover, the non-linear videos in different color spaces may each be a single video or multiple videos.
Step 102: process the second non-linear video frames to generate corresponding second linear video frames, and perform color space conversion on the second linear video frames to generate corresponding third linear video frames using the color gamut of the first color space.
To meet the needs of video processing, such as special effect processing (for example, lighting effects or face-smoothing computations), the video frames must be processed in linear space. However, since the color gamut of the first color space used by the first non-linear video frames is larger than that of the second color space used by the second non-linear video frames, as described above, to keep the color spaces unified and to use the first color space with its richer colors, the second non-linear video frames using the color gamut of the second color space need to be converted to the color gamut of the first color space. Because color space conversion of video frames must also be performed in linear space, the second non-linear video frames are first linearized to generate the corresponding second linear video frames, and the second linear video frames are then color-space converted. The specific process is as follows:
First, the video format of the second non-linear video frames to be processed is determined, and the conversion function corresponding to that video format is invoked to convert the non-linear video frames into the corresponding second linear video frames. It should be noted that conversion functions corresponding to different video formats are preset according to the application scenario of the video processing; for example, non-linear HDR video frames are converted into linear HDR video frames by the HDR conversion function corresponding to the HDR video format, or non-linear SDR video frames are converted into linear SDR video frames by the SDR conversion function corresponding to the SDR video format. To illustrate the linearization of non-linear video frames more clearly, take SDR non-linear video frames as an example: as shown in FIG. 4, the horizontal axis is a color diagram of linear video frames from black to white, and the vertical axis is a color diagram of SDR non-linear video frames from black to white; the curve in the figure expresses the correspondence and conversion relationship between the colors of linear video frames and of SDR non-linear video frames, according to which SDR non-linear video frames can be converted into SDR linear video frames.
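As one concrete example of a per-format conversion function of the kind described above, the sketch below uses the standard sRGB piecewise curve (IEC 61966-2-1) to move SDR values between non-linear and linear space. Whether the disclosure's SDR conversion function is exactly this curve is an assumption; it is shown only as a representative transfer function.

```python
def srgb_to_linear(c):
    """Decode a non-linear sRGB value in [0, 1] into linear light,
    using the standard piecewise linear/power curve (assumed here)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse encoding, used when converting a linear frame back
    into a non-linear frame for storage or display."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * (c ** (1 / 2.4)) - 0.055

# Sample points along the curve of FIG. 4: black and white are fixed,
# while mid-gray maps to roughly 21.4% linear light.
black = srgb_to_linear(0.0)
white = srgb_to_linear(1.0)
mid = srgb_to_linear(0.5)
roundtrip = linear_to_srgb(srgb_to_linear(0.73))
```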
Next, the conversion relationship between the first color space and the second color space is determined, and color space conversion is performed on the second linear video frames according to this relationship to generate the corresponding third linear video frames using the color gamut of the first color space. For example, suppose the first color space is Rec.2020, the second color space is sRGB, and the second linear video frames are SDR video frames using the sRGB color space; then, according to the color space conversion function or color space conversion matrix between Rec.2020 and sRGB, the color space of the second linear video frames is converted from sRGB to Rec.2020, so that the resulting third linear video frames are HDR video frames using Rec.2020.
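The two stages of step 102 (linearize, then convert the gamut) can be sketched together as follows. The sRGB piecewise curve and the sRGB-to-BT.2020 matrix values are commonly cited approximations assumed for illustration; they are not taken from this disclosure.

```python
import numpy as np

# Commonly cited approximate matrix: linear sRGB/BT.709 RGB -> linear
# Rec.2020 RGB (illustrative values, not normative).
SRGB_TO_BT2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def srgb_frame_to_linear_rec2020(frame):
    """frame: H x W x 3 array of non-linear sRGB values in [0, 1].
    Returns the linear Rec.2020 frame (the 'third linear video frame')."""
    frame = np.asarray(frame, dtype=float)
    # Stage 1: linearize with the piecewise sRGB curve.
    linear = np.where(frame <= 0.04045,
                      frame / 12.92,
                      ((frame + 0.055) / 1.055) ** 2.4)
    # Stage 2: rotate the primaries into Rec.2020, pixel by pixel.
    return linear @ SRGB_TO_BT2020.T

# A tiny 1 x 2 frame: one white pixel, one black pixel.
frame = np.array([[[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]])
out = srgb_frame_to_linear_rec2020(frame)
```

White and black are preserved by the conversion (the matrix rows each sum to 1), while chromatic pixels move to new coordinates inside the wider gamut.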
Step 103: process the first non-linear video frames to generate corresponding first linear video frames, and obtain, according to the first linear video frames and the third linear video frames, first linear target video frames using the color gamut of the first color space.
In this embodiment, to meet the needs of video processing, the first non-linear video frames also need to be linearized to generate the corresponding first linear video frames. This processing is similar to that used to generate the second linear video frames from the second non-linear video frames and is not repeated here.
After obtaining the first linear video frames and the third linear video frames, which share the same color space and are both in linear space, color deviations caused by differing color spaces are avoided, guaranteeing color accuracy. The first linear target video frames can then be obtained from the first linear video frames and the third linear video frames; the color space of the first linear target video frames is also the first color space.
It should be noted that there are many ways to obtain the above first linear target video frames, illustrated as follows:
In one embodiment, the first linear video frames and the third linear video frames are spliced to obtain the first linear target video frames using the color gamut of the first color space.
First, N first linear video frames are selected (N being an integer), and M third linear video frames are selected (M being an integer). Further, a splicing method is preset, and the selected N first linear video frames and M third linear video frames are combined according to the preset splicing method; the combined video frames are the first linear target video frames. There are many possible preset splicing methods, illustrated as follows:
Method one: as shown in FIG. 5a, some frames of the third linear video frames are appended after some frames of the first linear video frames to obtain the first linear target video frames. In FIG. 5a, 100 first linear video frames and 80 third linear video frames are selected, and the spliced first linear target video frames total 100 frames: frames 1-50 of the first linear target video frames are frames 1-50 of the first linear video frames, and frames 51-100 are frames 31-80 of the third linear video frames.
方法二:如图5b所示,将第一线性视频帧和第三线性视频帧进行交叉拼接,从而获得第一线性目标视频帧,图5b中,选取100帧第一线性视频帧,选取80帧第三线性视频帧,拼接得到的第一线性目标视频帧共有80帧,第一线性目标视频帧中,1~20帧为第一线性视频帧的1~20帧,21~40帧为第三线性视频帧的21~40帧,41~60帧为第一线性视频帧的21~40帧,61~80帧为第三线性视频帧的41~60帧。
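The two splicing methods above can be sketched directly from the figures' examples. Frames are stood in for by plain labels; the frame counts and the group size of 20 come from the FIG. 5a/5b examples and are illustrative, not required values:

```python
# Sketch of the two splicing methods of FIG. 5a and FIG. 5b.
def splice_append(first, third):
    """Method one: frames 1-50 of `first` followed by frames 31-80 of `third`."""
    return first[:50] + third[30:80]

def splice_interleave(first, third, group=20, total=80):
    """Method two: alternate `group`-sized runs; `third` starts one group in."""
    out, fi, ti, use_first = [], 0, group, True
    while len(out) < total:
        if use_first:
            out.extend(first[fi:fi + group]); fi += group
        else:
            out.extend(third[ti:ti + group]); ti += group
        use_first = not use_first
    return out

first = [f"F{i}" for i in range(1, 101)]   # 100 first linear frames
third = [f"T{i}" for i in range(1, 81)]    # 80 third linear frames
target = splice_append(first, third)       # 100-frame target sequence
```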
In another embodiment, the pixels of the first linear video frames and the third linear video frames are overlaid to obtain the first linear target video frames using the first color space gamut. For example:
In one possible implementation, as shown in FIG. 6, the first linear video frames and the third linear video frames have the same number of frames, and the pixels of each first linear video frame can be overlaid with those of the corresponding third linear video frame to obtain the first linear target video frames. In FIG. 6 there are 3 first linear video frames and 3 third linear video frames; the pixels of each pair of frames are overlaid, and where a pixel of the first linear video frame coincides with a pixel of the third linear video frame, the pixel of the third linear video frame is kept, thereby achieving the pixel overlay and obtaining the corresponding first linear target video frames.
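The keep-the-third-frame-on-overlap rule of FIG. 6 can be sketched with frames represented as sparse pixel maps (a simplifying assumption; real frames would be dense arrays):

```python
# Sketch of the pixel overlay in FIG. 6: a frame is modeled as a dict
# mapping (x, y) -> pixel value. Where a third-frame pixel coincides
# with a first-frame pixel, the third-frame pixel is kept.
def overlay_frame(first_frame: dict, third_frame: dict) -> dict:
    merged = dict(first_frame)   # start from the first linear frame's pixels
    merged.update(third_frame)   # coinciding pixels take the third frame's value
    return merged
```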
Step 104: obtain a linear special effect resource using the first color space gamut, and fuse the first linear target video frame with the linear special effect resource to generate a first linear special effect video frame using the first color space gamut.
In this embodiment, to guarantee color accuracy, the color space of the linear special effect resource is the first color space gamut. After the first linear target video frame to be processed is determined, the linear special effect resource is obtained and fused with the first linear target video frame to generate the first linear special effect video frame using the first color space gamut. Through the fusion, the linear special effect resource is applied to the first linear target video frame, adding virtual stickers and/or blurring and the like to the first linear target video frame according to the effect requirements, thereby generating the first linear special effect video frame. For example, if the linear special effect resource and the first linear target video frame both use the Rec.2020 color space and the resource is used to add a sticker effect, fusing the resource with the first linear target video frame adds a sticker at the corresponding position of the first linear target video frame, generating the first linear special effect video frame using the first color space gamut.
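The disclosure does not fix a particular fusion formula; per-pixel alpha blending is one plausible instance of it, sketched here under that assumption. Performing the blend on linear values is what keeps the mixed colors physically plausible:

```python
# Alpha blending as one possible fusion of an effect resource (e.g. a
# sticker with per-pixel alpha) onto a linear target pixel.
# Pixels are (r, g, b) triples in 0..1; alpha is in 0..1.
def blend_pixel(base, effect, alpha):
    """Mix `effect` over `base` with coverage `alpha` (1 = opaque sticker)."""
    return tuple(alpha * e + (1.0 - alpha) * b for b, e in zip(base, effect))
```

With alpha 1 the sticker replaces the pixel; with alpha 0 the target frame is left untouched, which is how a sticker only affects its own region.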
It should be noted that each of the video frames in the above embodiments has a corresponding data storage precision, which represents the precision of the colors each pixel in the frame can express: for the same color range, the more bits of storage precision, the finer the granularity into which that range can be divided. In one possible implementation, as shown in FIG. 7, the data storage precision is either 10-bit or 8-bit; FIG. 7 shows that 10-bit yields finer color granularity while 8-bit yields coarser granularity. However, in some application scenarios the display device cannot show the finer-grained colors, or the storage device lacks the space to store frames with more bits of storage precision; the data storage precision of the video frames in this embodiment is therefore determined according to the storage device or the display device. For example, if the display device cannot show any difference between 10-bit and 8-bit precision, the data storage precision of the video frames is set to 8-bit.
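The granularity difference between the two precisions can be made concrete: the same normalized 0..1 range is divided into 256 levels at 8-bit and 1024 levels at 10-bit.

```python
# Quantization granularity at 8-bit vs 10-bit storage precision.
def quantize(v: float, bits: int) -> int:
    """Map a normalized value to the nearest integer code at `bits` precision."""
    levels = (1 << bits) - 1
    return round(v * levels)

def step(bits: int) -> float:
    """Smallest representable difference between adjacent levels."""
    return 1.0 / ((1 << bits) - 1)

print(step(8))   # ~0.0039, coarser granularity
print(step(10))  # ~0.00098, finer granularity
```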
In summary, in the video processing method of the embodiments of the present disclosure, the second non-linear video frame is linearized to generate the second linear video frame so that subsequent processing can be based on it; the second linear video frame is color-space-converted into the first color space to generate the third linear video frame, which unifies the color spaces of the frames to be processed while enlarging the color space and enriching the colors of the video frames; the first non-linear video frame is processed to generate the first linear video frame so that special-effect processing can be based on it, and the first linear video frame and the third linear video frame are thus both unified into linear space, so the first linear target video frame generated from them is also linear; the first linear target video frame and the linear special effect resource are then fused, both being in linear space and both using the first color space gamut, which guarantees the color consistency, accuracy, and richness of the first linear special effect video frame, makes the added effect resources more natural, and improves the realism of the effect video.
Based on the above embodiments, to guarantee accurate colors and high-quality effect colors in the processed frames, a linear special effect resource must be obtained before processing that is in linear space and uses the same color space as the first linear target video frame to be processed; the first linear target video frame is then effect-processed according to that linear special effect resource. In practice, however, the color space used by an effect resource may differ from that of the first linear target video frame, and the effect resource may also be non-linear; a linear special effect resource matching the color space gamut of the first linear target video frame therefore still needs to be obtained. Taking a non-linear special effect resource as an example, as shown in FIG. 8, obtaining the linear special effect resource using the first color space gamut in the above embodiments includes:
Step 801: detect whether the non-linear special effect resource uses the first color space gamut; if it does, process the non-linear special effect resource to generate a linear special effect resource using the first color space gamut.
In this embodiment, for the effect processing to have accurate colors, it is detected whether the non-linear special effect resource uses the first color space gamut. If it does, the color space of the non-linear special effect resource matches that of the first linear target video frame to be processed; further, to enable the effect processing, the non-linear special effect resource is linearized to generate a linear special effect resource, whose color space is also the first color space. For example, if the first color space is Rec.2020, the non-linear special effect resource is parsed to obtain its color space parameters; if it is a non-linear HDR effect resource in Rec.2020, that resource is further linearized, and the obtained linear special effect resource is a linear HDR effect resource in Rec.2020.
Step 802: if the non-linear special effect resource uses the second color space gamut, process it to generate a corresponding linear special effect resource using the second color space gamut.
If the non-linear special effect resource uses the second color space gamut, its color space differs from that of the first linear target video frame to be processed; processing it directly would make the effect colors inaccurate and the effect results look less realistic. To ensure the realism of the effect processing and to enable effect processing methods based on mathematical operations, such as blurring, the non-linear special effect resource must undergo color space conversion and linearization: it is first linearized to generate a corresponding linear special effect resource using the second color space gamut. For example, if the second color space gamut is sRGB and the non-linear special effect resource is a non-linear SDR effect resource in sRGB, linearizing that resource yields a linear special effect resource that is a linear SDR effect resource in sRGB.
Step 803: perform color space conversion on the linear special effect resource using the second color space gamut to generate a corresponding linear special effect resource using the first color space gamut.
Further, color space conversion is performed on the linear special effect resource using the second color space gamut to generate a corresponding linear special effect resource using the first color space gamut. In one possible implementation, this color space conversion is implemented by a conversion function, which can be set according to the first color space and the second color space. For example, a linear SDR effect resource in sRGB is color-space-converted by a conversion function that converts sRGB SDR frames into Rec.2020 HDR frames, so the generated linear special effect resource is a linear HDR effect resource in Rec.2020.
In summary, in the video processing method of the embodiments of the present disclosure, the effect resource is made linear and its color space is set to the first color space, ensuring a large color range for the effect resource and rich colors in the generated first linear special effect video frame; the effect resource is also consistent with the first linear target video frame to be processed, both being in linear space and in the first color space, which ensures the color accuracy of the images and videos after effect processing, makes the added effect resources more natural, and improves the realism of the generated first linear special effect video frame.
Based on the above embodiments, further, after the first linear special effect video frame is generated, a corresponding video also needs to be generated, which may be displayed on a display device or stored on a storage device. There is a corresponding video generation method for each application scenario; examples follow:
Scenario one: in this scenario, the generated video is displayed on a display device, and the display device is suited to displaying video using the first color space gamut. This includes: encoding the first linear special effect video frames using the first color space gamut to generate a first linear special effect video for display on the display device.
Linear video can be displayed on a display device. To obtain the linear special effect video, the first linear special effect video frames using the first color space gamut are encoded to generate the first linear special effect video for display on the display device. The encoding combines the linear special effect video frames into the corresponding linear special effect video; there are multiple encoding methods, selectable according to the application scenario and not limited in this embodiment, for example software encoding or hardware encoding. In this scenario, displaying linear video using the first color space gamut on the display device makes the colors richer and more precise.
Scenario two: in this scenario, the generated video is stored on a storage device, and the storage device is suited to storing video using the first color space gamut, including:
Processing the first linear special effect video frames to generate first non-linear special effect video frames using the first color space gamut. Since video stored on a storage medium is non-linear, the first linear special effect video frames must be processed to generate the first non-linear special effect video frames; this processing converts linear video frames into non-linear ones, as described in the above embodiments and not repeated here.
Further, encoding the first non-linear special effect video frames to generate a first non-linear special effect video using the first color space gamut for storage. The encoding combines the first non-linear special effect video frames into the corresponding first non-linear special effect video; there are multiple encoding methods, selectable according to the application scenario. The first non-linear special effect video frames are processed with the chosen encoding method to generate the corresponding first non-linear special effect video, which is stored on the storage device.
Scenario three: in this scenario, the generated video is displayed on a display device, and the video in this scenario uses the second color space gamut, including:
Performing color space conversion on the first linear special effect video frames using the first color space gamut to generate second linear special effect video frames using the second color space gamut. There are multiple methods for this color space conversion, selectable according to the application scenario and not limited in this embodiment, for example using a conversion function and/or a matrix, which can be selected and designed according to the second color space and the first color space. In one possible embodiment, the first color space is Rec.2020 and the second is sRGB; the first linear special effect video frames are linear HDR effect frames in Rec.2020, and a conversion function that converts linear Rec.2020 HDR frames into linear sRGB SDR frames is applied to them, so the generated second linear special effect video frames are linear SDR effect frames in sRGB.
Further, the second linear special effect video frames are encoded to generate a second linear special effect video for display on the display device. The encoding combines the second linear special effect video frames into the corresponding second linear special effect video; there are multiple encoding methods, selectable according to the application scenario and not limited in this embodiment, for example software encoding or hardware encoding.
It should be noted that, when displaying video, showing an effect video in the first color space (e.g. Rec.2020) on a display that can only show video in the second color space (e.g. sRGB) does not improve color quality and may cause overexposure (that is, whenever the color to be displayed exceeds what a pixel can show, that pixel renders at maximum brightness, reducing color accuracy). Converting the first color space into the second color space therefore improves color accuracy, reduces the storage space occupied by the effect video frames, and improves the transmission efficiency of the effect video.
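The overexposure effect just described can be sketched as hard clipping: any value beyond what a pixel can display is clamped to the maximum brightness, so distinct bright colors collapse into one and detail above the display's range is lost.

```python
# Hard clipping at the display's maximum, illustrating why showing
# wide-gamut/high-range values on a narrower display loses accuracy.
def display_channel(v: float, max_displayable: float = 1.0) -> float:
    """Clamp a channel value to what the pixel can physically show."""
    return min(v, max_displayable)

# Two different out-of-range values collapse to the same displayed value:
print(display_channel(1.4), display_channel(2.0))  # -> 1.0 1.0
```

Converting to the display's own color space beforehand, as scenario three does, avoids feeding such out-of-range values to the display in the first place.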
Scenario four: in this scenario, based on the above embodiments, before the second linear special effect video is displayed on the display device, the effect video must be stored on a storage device. Since video stored on a storage medium is non-linear, the linear special effect video frames must be processed to generate non-linear special effect video frames. Specifically, in the above embodiments, after the second linear special effect video frames using the second color space gamut are generated, the method further includes:
Performing non-linear processing on the second linear special effect video frames to generate second non-linear special effect video frames using the second color space gamut. This processing converts linear video frames into non-linear ones; the conversion is described in the foregoing embodiments and not repeated here.
Further, the second non-linear special effect video frames are encoded to generate a second non-linear special effect video using the second color space gamut for storage. The encoding combines the second non-linear special effect video frames into the corresponding second non-linear special effect video; there are multiple encoding methods, selectable according to the application scenario. The second non-linear special effect video frames are processed with the chosen encoding method to generate the corresponding second non-linear special effect video, which is stored on the storage device.
FIG. 9 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present disclosure. The apparatus may be implemented in software and/or hardware and may generally be integrated into an electronic device. As shown in FIG. 9, the apparatus 900 includes:
a decoding module 901, configured to decode a non-linear video using a first color space gamut to obtain corresponding first non-linear video frames, and decode a non-linear video using a second color space gamut to obtain corresponding second non-linear video frames, wherein the gamut of the first color space is larger than the gamut of the second color space;
a first conversion module 902, configured to process the second non-linear video frame to generate a corresponding second linear video frame, and perform color space conversion on the second linear video frame to generate a corresponding third linear video frame using the first color space gamut;
a first generation module 903, configured to process the first non-linear video frame to generate a corresponding first linear video frame, and obtain, from the first linear video frame and the third linear video frame, a first linear target video frame using the first color space gamut;
a second generation module 904, configured to obtain a linear special effect resource using the first color space gamut, and fuse the first linear target video frame with the linear special effect resource to generate a first linear special effect video frame using the first color space gamut.
In one possible implementation, in the apparatus 900:
the non-linear video using the first color space gamut includes: non-linear HDR video using the Rec.2020 color space;
the non-linear video using the second color space gamut includes: non-linear SDR video using the sRGB color space.
In one possible implementation, the first generation module 903 is configured to:
splice the first linear video frame and the third linear video frame to obtain the first linear target video frame using the first color space gamut; and/or,
overlay the pixels of the first linear video frame and the third linear video frame to obtain the first linear target video frame using the first color space gamut.
In one possible implementation, the second generation module 904 is configured to:
detect whether a non-linear special effect resource uses the first color space gamut, and if it does, process the non-linear special effect resource to generate a linear special effect resource using the first color space gamut.
In one possible implementation, the apparatus 900 further includes:
a first processing module, configured to, if the non-linear special effect resource uses the second color space gamut, process the non-linear special effect resource to generate a corresponding linear special effect resource using the second color space gamut;
a second processing module, configured to perform color space conversion on the linear special effect resource using the second color space gamut to generate a corresponding linear special effect resource using the first color space gamut.
In one possible implementation, the apparatus 900 further includes:
a first encoding module, configured to encode the first linear special effect video frame using the first color space gamut to generate a first linear special effect video for display on a display device.
In one possible implementation, the apparatus 900 further includes:
a third processing module, configured to process the first linear special effect video frame to generate a first non-linear special effect video frame using the first color space gamut;
a second encoding module, configured to encode the first non-linear special effect video frame to generate a first non-linear special effect video using the first color space gamut for storage.
In one possible implementation, the apparatus 900 further includes:
a second conversion module, configured to perform color space conversion on the first linear special effect video frame using the first color space gamut to generate a second linear special effect video frame using the second color space gamut;
a third encoding module, configured to encode the second linear special effect video frame to generate a second linear special effect video for display on a display device.
In one possible implementation, the apparatus 900 further includes:
a fourth processing module, configured to process the second linear special effect video frame to generate a second non-linear special effect video frame using the second color space gamut;
a fourth encoding module, configured to encode the second non-linear special effect video frame to generate a second non-linear special effect video using the second color space gamut for storage.
In one possible implementation, the apparatus 900 further includes:
a determination module, configured to determine the data storage precision of the video frames according to the storage device or the display device.
The video processing apparatus provided by the embodiments of the present disclosure can execute the video processing method provided by any embodiment of the present disclosure and has the functional modules and beneficial effects corresponding to the executed method.
An embodiment of the present disclosure also provides a computer program product, including a computer program/instructions which, when executed by a processor, implement the video processing method provided by any embodiment of the present disclosure.
FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Referring now specifically to FIG. 10, it shows a schematic structural diagram of an electronic device 1000 suitable for implementing an embodiment of the present disclosure. The electronic device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (e.g. vehicle navigation terminals), and wearable electronic devices, and fixed terminals such as digital TVs, desktop computers, and smart home devices. The electronic device shown in FIG. 10 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 10, the electronic device 1000 may include a processing apparatus (e.g. a central processing unit, a graphics processing unit) 1001, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage apparatus 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores various programs and data required for the operation of the electronic device 1000. The processing apparatus 1001, the ROM 1002, and the RAM 1003 are connected to one another via a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following apparatuses may be connected to the I/O interface 1005: input apparatuses 1006 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output apparatuses 1007 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage apparatuses 1008 including, for example, a magnetic tape and hard disk; and a communication apparatus 1009. The communication apparatus 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 10 shows the electronic device 1000 with various apparatuses, it should be understood that implementing or possessing all of the apparatuses shown is not required; more or fewer apparatuses may alternatively be implemented or possessed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 1009, or installed from the storage apparatus 1008, or installed from the ROM 1002. When the computer program is executed by the processing apparatus 1001, the above functions defined in the video processing method of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to: electric wire, optical cable, RF (radio frequency), and the like, or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: decode non-linear video in a first color space to obtain first non-linear video frames, and decode non-linear video in a second color space to obtain second non-linear video frames; process the second non-linear video frames to generate second linear video frames, and perform color space conversion on the second linear video frames to generate third linear video frames using the first color space gamut; process the first non-linear video frames to generate first linear video frames, and obtain, from the first and third linear video frames, first linear target video frames in the first color space; obtain a linear special effect resource in the first color space, and fuse the first linear target video frames with the linear special effect resource to generate first linear special effect video frames in the first color space. The embodiments of the present disclosure guarantee the color accuracy and richness of the first linear special effect video frames, make the added effect resources more natural, and improve the realism of the effect video.
Computer program code for executing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code containing one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by combinations of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware, and the name of a unit does not in some cases constitute a limitation on the unit itself.
The functions described herein above may be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the above. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, the present disclosure provides a video processing method, including:
decoding a non-linear video using a first color space gamut to obtain corresponding first non-linear video frames, and decoding a non-linear video using a second color space gamut to obtain corresponding second non-linear video frames, wherein the gamut of the first color space is larger than the gamut of the second color space;
processing the second non-linear video frame to generate a corresponding second linear video frame, and performing color space conversion on the second linear video frame to generate a corresponding third linear video frame using the first color space gamut;
processing the first non-linear video frame to generate a corresponding first linear video frame, and obtaining, from the first linear video frame and the third linear video frame, a first linear target video frame using the first color space gamut;
obtaining a linear special effect resource using the first color space gamut, and fusing the first linear target video frame with the linear special effect resource to generate a first linear special effect video frame using the first color space gamut.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure,
the non-linear video using the first color space gamut includes: non-linear HDR video using the Rec.2020 color space;
the non-linear video using the second color space gamut includes: non-linear SDR video using the sRGB color space.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, the obtaining, from the first linear video frame and the third linear video frame, of the first linear target video frame using the first color space gamut includes:
splicing the first linear video frame and the third linear video frame to obtain the first linear target video frame using the first color space gamut; and/or,
overlaying the pixels of the first linear video frame and the third linear video frame to obtain the first linear target video frame using the first color space gamut.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, the obtaining of the linear special effect resource using the first color space gamut includes:
detecting whether a non-linear special effect resource uses the first color space gamut, and if it does, processing the non-linear special effect resource to generate a linear special effect resource using the first color space gamut.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, after the detecting of whether the non-linear special effect resource uses the first color space gamut, the method further includes:
if the non-linear special effect resource uses the second color space gamut, processing the non-linear special effect resource to generate a corresponding linear special effect resource using the second color space gamut;
performing color space conversion on the linear special effect resource using the second color space gamut to generate a corresponding linear special effect resource using the first color space gamut.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, after the generating of the first linear special effect video frame using the first color space gamut, the method further includes:
encoding the first linear special effect video frame using the first color space gamut to generate a first linear special effect video for display on a display device.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, after the generating of the first linear special effect video frame using the first color space gamut, the method further includes:
processing the first linear special effect video frame to generate a first non-linear special effect video frame using the first color space gamut;
encoding the first non-linear special effect video frame to generate a first non-linear special effect video using the first color space gamut for storage.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, after the generating of the first linear special effect video frame using the first color space gamut, the method further includes:
performing color space conversion on the first linear special effect video frame using the first color space gamut to generate a second linear special effect video frame using the second color space gamut;
encoding the second linear special effect video frame to generate a second linear special effect video for display on a display device.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, after the generating of the second linear special effect video frame using the second color space gamut, the method further includes:
processing the second linear special effect video frame to generate a second non-linear special effect video frame using the second color space gamut;
encoding the second non-linear special effect video frame to generate a second non-linear special effect video using the second color space gamut for storage.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, the method further includes:
determining the data storage precision of the video frames according to the storage device or the display device.
According to one or more embodiments of the present disclosure, the present disclosure provides a video processing apparatus, including:
a decoding module, configured to decode a non-linear video using a first color space gamut to obtain corresponding first non-linear video frames, and decode a non-linear video using a second color space gamut to obtain corresponding second non-linear video frames, wherein the gamut of the first color space is larger than the gamut of the second color space;
a first conversion module, configured to process the second non-linear video frame to generate a corresponding second linear video frame, and perform color space conversion on the second linear video frame to generate a corresponding third linear video frame using the first color space gamut;
a first generation module, configured to process the first non-linear video frame to generate a corresponding first linear video frame, and obtain, from the first linear video frame and the third linear video frame, a first linear target video frame using the first color space gamut;
a second generation module, configured to obtain a linear special effect resource using the first color space gamut, and fuse the first linear target video frame with the linear special effect resource to generate a first linear special effect video frame using the first color space gamut.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure:
the non-linear video using the first color space gamut includes: non-linear HDR video using the Rec.2020 color space;
the non-linear video using the second color space gamut includes: non-linear SDR video using the sRGB color space.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the first generation module is configured to:
splice the first linear video frame and the third linear video frame to obtain the first linear target video frame using the first color space gamut; and/or,
overlay the pixels of the first linear video frame and the third linear video frame to obtain the first linear target video frame using the first color space gamut.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the second generation module is configured to:
detect whether a non-linear special effect resource uses the first color space gamut, and if it does, process the non-linear special effect resource to generate a linear special effect resource using the first color space gamut.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the apparatus further includes:
a first processing module, configured to, if the non-linear special effect resource uses the second color space gamut, process the non-linear special effect resource to generate a corresponding linear special effect resource using the second color space gamut;
a second processing module, configured to perform color space conversion on the linear special effect resource using the second color space gamut to generate a corresponding linear special effect resource using the first color space gamut.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the apparatus further includes:
a first encoding module, configured to encode the first linear special effect video frame using the first color space gamut to generate a first linear special effect video for display on a display device.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the apparatus further includes:
a third processing module, configured to process the first linear special effect video frame to generate a first non-linear special effect video frame using the first color space gamut;
a second encoding module, configured to encode the first non-linear special effect video frame to generate a first non-linear special effect video using the first color space gamut for storage.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the apparatus further includes:
a second conversion module, configured to perform color space conversion on the first linear special effect video frame using the first color space gamut to generate a second linear special effect video frame using the second color space gamut;
a third encoding module, configured to encode the second linear special effect video frame to generate a second linear special effect video for display on a display device.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the apparatus further includes:
a fourth processing module, configured to process the second linear special effect video frame to generate a second non-linear special effect video frame using the second color space gamut;
a fourth encoding module, configured to encode the second non-linear special effect video frame to generate a second non-linear special effect video using the second color space gamut for storage.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the apparatus further includes:
a determination module, configured to determine the data storage precision of the video frames according to the storage device or the display device.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, including:
a processor;
a memory for storing instructions executable by the processor;
the processor being configured to read the executable instructions from the memory and execute the instructions to implement any of the video processing methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium storing a computer program for executing any of the video processing methods provided by the present disclosure.
The above description is merely of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring that they be executed in the specific order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be interpreted as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (14)

  1. A video processing method, comprising:
    decoding a non-linear video using a first color space gamut to obtain corresponding first non-linear video frames, and decoding a non-linear video using a second color space gamut to obtain corresponding second non-linear video frames, wherein the gamut of the first color space is larger than the gamut of the second color space;
    processing the second non-linear video frame to generate a corresponding second linear video frame, and performing color space conversion on the second linear video frame to generate a corresponding third linear video frame using the first color space gamut;
    processing the first non-linear video frame to generate a corresponding first linear video frame;
    obtaining, from the first linear video frame and the third linear video frame, a first linear target video frame using the first color space gamut;
    obtaining a linear special effect resource using the first color space gamut;
    fusing the first linear target video frame with the linear special effect resource to generate a first linear special effect video frame using the first color space gamut.
  2. The method according to claim 1, wherein the non-linear video using the first color space gamut comprises: non-linear high dynamic range (HDR) video using the Rec.2020 color space, the international standard for ultra-high-definition television broadcasting and program source production;
    the non-linear video using the second color space gamut comprises: non-linear standard dynamic range (SDR) video using the standard red-green-blue (sRGB) color space.
  3. The method according to claim 1, wherein the obtaining, from the first linear video frame and the third linear video frame, of the first linear target video frame using the first color space gamut comprises:
    splicing the first linear video frame and the third linear video frame to obtain the first linear target video frame using the first color space gamut; and/or,
    overlaying the pixels of the first linear video frame and the third linear video frame to obtain the first linear target video frame using the first color space gamut.
  4. The method according to claim 1, wherein the obtaining of the linear special effect resource using the first color space gamut comprises:
    detecting whether a non-linear special effect resource uses the first color space gamut, and if it does, processing the non-linear special effect resource to generate a linear special effect resource using the first color space gamut.
  5. The method according to claim 4, wherein after the detecting of whether the non-linear special effect resource uses the first color space gamut, the method further comprises:
    if the non-linear special effect resource uses the second color space gamut, processing the non-linear special effect resource to generate a corresponding linear special effect resource using the second color space gamut;
    performing color space conversion on the linear special effect resource using the second color space gamut to generate a corresponding linear special effect resource using the first color space gamut.
  6. The method according to claim 1, wherein after the generating of the first linear special effect video frame using the first color space gamut, the method further comprises:
    encoding the first linear special effect video frame using the first color space gamut to generate a first linear special effect video for display on a display device.
  7. The method according to claim 1, wherein after the generating of the first linear special effect video frame using the first color space gamut, the method further comprises:
    processing the first linear special effect video frame to generate a first non-linear special effect video frame using the first color space gamut;
    encoding the first non-linear special effect video frame to generate a first non-linear special effect video using the first color space gamut for storage.
  8. The method according to claim 1, wherein after the generating of the first linear special effect video frame using the first color space gamut, the method further comprises:
    performing color space conversion on the first linear special effect video frame using the first color space gamut to generate a second linear special effect video frame using the second color space gamut;
    encoding the second linear special effect video frame to generate a second linear special effect video for display on a display device.
  9. The method according to claim 8, wherein after the generating of the second linear special effect video frame using the second color space gamut, the method further comprises:
    processing the second linear special effect video frame to generate a second non-linear special effect video frame using the second color space gamut;
    encoding the second non-linear special effect video frame to generate a second non-linear special effect video using the second color space gamut for storage.
  10. The method according to any one of claims 1-9, wherein the method further comprises:
    determining the data storage precision of the video frames according to the storage device or the display device.
  11. A video special effect processing apparatus, the apparatus comprising:
    a decoding module, configured to decode a non-linear video using a first color space gamut to obtain corresponding first non-linear video frames, and decode a non-linear video using a second color space gamut to obtain corresponding second non-linear video frames, wherein the gamut of the first color space is larger than the gamut of the second color space;
    a first conversion module, configured to process the second non-linear video frame to generate a corresponding second linear video frame, and perform color space conversion on the second linear video frame to generate a corresponding third linear video frame using the first color space gamut;
    a first generation module, configured to process the first non-linear video frame to generate a corresponding first linear video frame, and obtain, from the first linear video frame and the third linear video frame, a first linear target video frame using the first color space gamut;
    a second generation module, configured to obtain a linear special effect resource using the first color space gamut, and fuse the first linear target video frame with the linear special effect resource to generate a first linear special effect video frame using the first color space gamut.
  12. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    the processor being configured to read the executable instructions from the memory and execute the instructions to implement the video processing method according to any one of claims 1-10.
  13. A computer-readable storage medium having instructions stored therein which, when run on a terminal device, cause the terminal device to implement the video processing method according to any one of claims 1-10.
  14. A computer program product comprising a computer program/instructions which, when executed by a processor, implement the video processing method according to any one of claims 1-10.
PCT/CN2022/117204 2021-09-10 2022-09-06 Video processing method and apparatus, device, and medium WO2023036111A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111064216.0 2021-09-10
CN202111064216.0A CN115801976A (zh) 2021-09-10 2021-09-10 Video processing method and apparatus, device, and medium

Publications (1)

Publication Number Publication Date
WO2023036111A1 true WO2023036111A1 (zh) 2023-03-16

Family

ID=85416870

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/117204 WO2023036111A1 (zh) 2021-09-10 2022-09-06 Video processing method and apparatus, device, and medium

Country Status (2)

Country Link
CN (1) CN115801976A (zh)
WO (1) WO2023036111A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150245004A1 (en) * 2014-02-24 2015-08-27 Apple Inc. User interface and graphics composition with high dynamic range video
CN105578145A (zh) * 2015-12-30 2016-05-11 Tianjin Deqin Hechuang Technology Development Co., Ltd. Method for real-time intelligent fusion of a three-dimensional virtual scene and video surveillance
CN106233721A (zh) * 2014-04-30 2016-12-14 Sony Corporation Information processing device, information recording medium, image processing method, and program
CN108184079A (zh) * 2017-12-29 2018-06-19 Beijing Qihoo Technology Co., Ltd. Method and apparatus for merging multimedia files
CN109474793A (zh) * 2017-09-08 2019-03-15 Avago Technologies Systems and methods for combining video and graphics sources for display
CN112799224A (zh) * 2019-11-14 2021-05-14 Leica Instruments (Singapore) Pte. Ltd. System and method for generating output image data, and microscope


Also Published As

Publication number Publication date
CN115801976A (zh) 2023-03-14

Similar Documents

Publication Publication Date Title
CN110021052B (zh) Method and apparatus for generating a fundus image generation model
KR102617258B1 (ko) Image processing method and apparatus
CN110211030B (zh) Image generation method and apparatus
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
US11481927B2 (en) Method and apparatus for determining text color
JP2023509429A (ja) Image processing method and apparatus
WO2024037556A1 (zh) Image processing method and apparatus, device, and storage medium
CN111738950B (zh) Image processing method and apparatus
WO2023231918A1 (zh) Image processing method and apparatus, electronic device, and storage medium
CN112967193A (zh) Image calibration method and apparatus, computer-readable medium, and electronic device
WO2023169287A1 (zh) Method, apparatus, device, storage medium, and program product for generating beauty makeup special effects
WO2023098576A1 (zh) Image processing method and apparatus, device, and medium
WO2023036111A1 (zh) Video processing method and apparatus, device, and medium
WO2023035973A1 (zh) Video processing method and apparatus, device, and medium
CN111415393B (zh) Method, apparatus, medium, and electronic device for adjusting a multimedia blackboard display
JP2023550970A (ja) Method, device, storage medium, and program product for changing the background in a screen
CN114119413A (zh) Image processing method and apparatus, readable medium, and mobile terminal
CN111292245A (zh) Image processing method and apparatus
US20220292733A1 (en) Method and apparatus for text effect processing
WO2021031846A1 (zh) Method and apparatus for implementing a water ripple effect, electronic device, and computer-readable storage medium
WO2021018176A1 (zh) Method and apparatus for processing text special effects
RU2802724C1 (ru) Image processing method and apparatus, electronic device, and machine-readable storage medium
CN111738899B (zh) Method, apparatus, device, and computer-readable medium for generating watermarks
WO2023125500A1 (zh) Image processing method and apparatus, electronic device, and storage medium
CN110189279B (zh) Model training method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22866574

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE