CN117061817A - Video processing method and apparatus, electronic device, and storage medium - Google Patents


Info

Publication number
CN117061817A
CN117061817A (application CN202210494202.0A)
Authority
CN
China
Prior art keywords
video
color
original video
information
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210494202.0A
Other languages
Chinese (zh)
Inventor
赖守波 (Lai Shoubo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210494202.0A
Publication of CN117061817A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 - for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of the disclosure relate to a video processing method and apparatus, an electronic device, and a storage medium. The method includes: receiving a freeze-frame picture sent by a client, wherein the freeze-frame picture is a picture frame corresponding to a freeze-frame time in an original video of the client; parsing associated information corresponding to the freeze-frame picture to obtain color parameter information corresponding to the original video; rendering the freeze-frame picture according to the color parameter information, and setting a duration for the freeze-frame picture to generate a freeze-frame video segment; and returning the freeze-frame video segment to the client for display, wherein the color information of the freeze-frame video segment is consistent with that of the original video. With this technical scheme, color processing is performed using the color parameter information of the original video in the rendering operation that generates the freeze-frame video segment, so the generated freeze-frame video segment is consistent in color with the original video, which improves the quality of video freeze-frame processing and the visual effect of the freeze-frame video.

Description

Video processing method and apparatus, electronic device, and storage medium
Technical Field
The disclosure relates to the technical field of video processing, and in particular to a video processing method and apparatus, an electronic device, and a storage medium.
Background
With the popularity of video applications, users increasingly produce videos themselves and add various special effects to them, such as freeze-frame effects used to create freeze-frame videos.
In the related art, when a freeze-frame video is produced, a frame at a specific time is selected from the video material and decoded, and the decoded frame is stored on the local disk as a picture in the Joint Photographic Experts Group (JPEG) image file format or the Portable Network Graphics (PNG) image file format; the stored JPEG or PNG picture is then passed to a video editing engine architecture for freeze-frame processing to produce the freeze-frame video.
However, for some video materials with a high dynamic range effect, the freeze-frame video segment generated from the stored JPEG/PNG picture alone shows a noticeable color difference from the original video, which degrades the visual effect of the freeze-frame video.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, the present disclosure provides a video processing method and apparatus, an electronic device, and a storage medium.
The embodiment of the disclosure provides a video processing method, which comprises the following steps:
Receiving a freeze-frame picture sent by a client, wherein the freeze-frame picture is a picture frame corresponding to a freeze-frame time in an original video of the client;
parsing associated information corresponding to the freeze-frame picture to obtain color parameter information corresponding to the original video;
rendering the freeze-frame picture according to the color parameter information, and setting a duration for the freeze-frame picture to generate a freeze-frame video segment;
and returning the freeze-frame video segment to the client for display, wherein the color information of the freeze-frame video segment is consistent with that of the original video.
The embodiment of the disclosure also provides a video processing device, which comprises:
a receiving module, configured to receive a freeze-frame picture sent by a client, wherein the freeze-frame picture is a picture frame corresponding to a freeze-frame time in an original video of the client;
a parsing module, configured to parse associated information corresponding to the freeze-frame picture and obtain color parameter information corresponding to the original video;
a rendering module, configured to render the freeze-frame picture according to the color parameter information and set a duration for the freeze-frame picture to generate a freeze-frame video segment;
and a sending module, configured to return the freeze-frame video segment to the client for display, wherein the color information of the freeze-frame video segment is consistent with that of the original video.
The embodiment of the disclosure also provides an electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to read the executable instructions from the memory and execute them to implement the video processing method provided by the embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium storing a computer program for executing the video processing method as provided by the embodiments of the present disclosure.
The disclosed embodiments also provide a computer program product comprising a computer program/instructions which, when executed by a processor, implement a video processing method as provided by the disclosed embodiments.
Compared with the prior art, the technical scheme provided by the embodiments of the disclosure has the following advantages. According to the video processing scheme provided by the embodiments of the disclosure, a freeze-frame picture sent by a client is received, associated information corresponding to the freeze-frame picture is parsed to obtain color parameter information corresponding to the original video, the freeze-frame picture is rendered according to the color parameter information and given a duration to generate a freeze-frame video segment, and finally the freeze-frame video segment is returned to the client for display. Color processing is thus performed using the color parameter information of the original video in the rendering operation that generates the freeze-frame video segment, which ensures that the generated freeze-frame video segment is consistent in color with the original video, improves the quality of video freeze-frame processing, and improves the visual effect of the freeze-frame video.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the disclosure;
fig. 2 is a flowchart of a video processing method according to another embodiment of the disclosure;
fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "a plurality" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
Currently, for the freeze-frame processing of video material, the available non-linear video editing engine architecture designs are as follows:
(1) Select a specific time frame and a set duration as the freeze-frame segment, add the source video directly to the video editing engine architecture, set the frame time and segment duration corresponding to the source video material, and, while clips are rendered frame by frame, decode the time frame of the source material corresponding to the freeze-frame segment and perform the freeze-frame processing. The main drawback of this architecture design is poor performance, which is also unstable and uncontrollable across different types of video material. For example, for video materials with a long group of pictures (Group of Pictures, GOP), repeated and redundant decoding causes performance loss, and for video types such as high dynamic range (High-Dynamic Range, HDR), decoding takes long and performance on low- and mid-range devices is poor.
(2) Select a specific time frame and a set duration as the freeze-frame segment, add the source video directly to the video editing engine architecture, set the frame time and segment duration corresponding to the source video material, decode the time frame of the source material corresponding to the freeze-frame segment during clipping, place the decoded time frame into a cache pool, and then perform the freeze-frame processing. This architecture design mainly aims to address the poor performance of approach (1) by adding a cache pool on top of it. Its main drawback is that the first freeze-frame processing still suffers from poor, unstable, and uncontrollable performance, while adding a cache pool to the existing editing engine architecture increases the complexity of its design logic.
(3) Select a specific time frame and a set duration as the freeze-frame segment, first decode the frame at the freeze-frame time from the source video and save it as a local JPEG/PNG picture, then add the saved JPEG/PNG picture to the video editing engine architecture as an independent picture segment; the freeze-frame processing logic is consistent with the processing logic for adding freeze-frame segments in approaches (1) and (2).
The video editing engine architecture commonly used in video freeze-frame processing at present is the design provided in approach (3), which can guarantee the performance of freeze-frame clipping. However, its main drawback is that, for HDR video material, existing JPEG/PNG pictures cannot store HDR-related color information. When the JPEG/PNG picture is passed to the video editing engine architecture for freeze-frame processing to generate the freeze-frame video segment, the architecture cannot obtain the HDR color information of the HDR video material from the JPEG/PNG picture. The freeze-frame video segment generated from the JPEG/PNG picture therefore lacks HDR color information, so there is an obvious color difference between the freeze-frame video segment and a direct clip preview of the original video, which affects the visual effect of the freeze-frame video.
In view of the above problems, the present disclosure provides a video processing method: a freeze-frame picture sent by a client is received, associated information corresponding to the freeze-frame picture is parsed to obtain color parameter information corresponding to the original video, the freeze-frame picture is rendered according to the color parameter information and given a duration to generate a freeze-frame video segment, and finally the freeze-frame video segment is returned to the client for display. Color processing is thus performed using the color parameter information of the original video in the rendering operation that generates the freeze-frame video segment, ensuring that the generated freeze-frame video segment is consistent in color with the original video, improving the quality of video freeze-frame processing and the visual effect of the freeze-frame video.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the disclosure. The method may be performed by a video processing apparatus, which may be implemented in software and/or hardware and is generally integrated in an electronic device configured with a video editing engine architecture; the electronic device may be, for example, a server. As shown in fig. 1, the video processing method may include the following steps:
Step 101, receiving a freeze-frame picture sent by a client, wherein the freeze-frame picture is a picture frame corresponding to a freeze-frame time in an original video of the client.
The client may be an application program supporting a video production function, such as video editing software, and the client interacts with an electronic device configured with a video editing engine architecture, and sends a freeze-frame picture to the video editing engine architecture in the electronic device, so that the video editing engine architecture receives the freeze-frame picture sent by the client and performs freeze-frame processing.
In the embodiment of the disclosure, the freeze-frame picture may be obtained by any existing method of generating freeze-frame pictures, and is stored in JPEG or PNG format.
The user uploads the original video to be freeze-frame processed to the client and selects the freeze-frame time in the original video through the client; the client decodes the picture frame corresponding to the freeze-frame time in the original video and saves it as a local JPEG/PNG picture, obtaining a freeze-frame picture corresponding to the freeze-frame time in the original video. The client then transmits the saved JPEG/PNG picture to the video editing engine architecture for freeze-frame processing.
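The patent does not prescribe how the client performs this decode-and-save step. The following is a minimal sketch assuming the ffmpeg command-line tool is available; the file names and the 5-second freeze-frame time are chosen purely for illustration (the picture name carries the color fields used by the naming scheme described later).

```python
# Client-side sketch: decode the frame at the freeze-frame time and save it
# as a local picture. Assumes the ffmpeg CLI is installed; paths and the
# 5-second freeze-frame time are illustrative.
import subprocess

def save_freeze_frame(video_path: str, freeze_time_s: float, out_path: str) -> None:
    subprocess.run(
        [
            "ffmpeg",
            "-ss", str(freeze_time_s),  # seek to the freeze-frame time
            "-i", video_path,
            "-frames:v", "1",           # decode exactly one frame
            "-y",                       # overwrite an existing file
            out_path,
        ],
        check=True,
    )

# e.g. name the picture after the original video's color fields (HLG, 709)
save_freeze_frame("original.mp4", 5.0, "hlg_709.png")
```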
Step 102, parsing the associated information corresponding to the freeze-frame picture, and obtaining color parameter information corresponding to the original video.
In the embodiment of the disclosure, after receiving the freeze-frame picture sent by the client, the video editing engine architecture may parse the associated information corresponding to the freeze-frame picture to obtain the color parameter information corresponding to the original video.
For example, the video editing engine architecture may parse the color parameter information of the original video from the picture name of the freeze-frame picture.
Alternatively, the associated information corresponding to the freeze-frame picture may be the other data information in a data packet besides the freeze-frame picture itself: when sending the freeze-frame picture to the video editing engine architecture, the client may obtain the color parameter information of the original video, package the obtained color parameter information together with the freeze-frame picture into a data packet, and send the data packet to the video editing engine architecture. The video editing engine architecture extracts the freeze-frame picture from the data packet and parses the other data information in the data packet to obtain the color parameter information of the original video.
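The patent leaves the packet format open. A minimal sketch of one possible packaging follows, assuming a JSON envelope with the picture bytes base64-encoded and field names borrowed from the color_transfer/color_primary fields introduced later; none of this is the patent's actual wire format.

```python
# Sketch of a client-built data packet: the JPEG/PNG bytes plus the color
# parameter information of the original video. The JSON/base64 envelope and
# field names are assumptions.
import base64
import json

def build_packet(picture_path: str, color_transfer: str, color_primaries: str) -> bytes:
    with open(picture_path, "rb") as f:
        picture_bytes = f.read()
    packet = {
        "picture": base64.b64encode(picture_bytes).decode("ascii"),
        "color_transfer": color_transfer,  # e.g. "hlg" or "pq"
        "color_primary": color_primaries,  # e.g. "709", "p3", "2020"
    }
    return json.dumps(packet).encode("utf-8")
```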
Step 103, rendering the freeze-frame picture according to the color parameter information, and setting a duration for the freeze-frame picture to generate a freeze-frame video segment.
In the embodiment of the disclosure, after the color parameter information of the original video is obtained, the freeze-frame picture may be rendered according to the obtained color parameter information and given a duration to generate the freeze-frame video segment.
When setting the duration of the freeze-frame picture, the duration may be a default value or a value sent by the client to the video editing engine architecture. The duration may be set by the user through the client according to actual requirements and then sent by the client to the video editing engine architecture; for example, the client may send the user-set freeze-frame duration together with the freeze-frame picture.
It should be noted that the freeze-frame processing logic here is basically identical to that of the video editing engine architectures commonly used at present; the difference is that, in the embodiment of the present disclosure, the color parameter information of the original video obtained by parsing is passed to the subsequent rendering operation for color processing, so that the finally generated freeze-frame video segment is consistent in color with the original video.
Step 104, returning the freeze-frame video segment to the client for display, wherein the freeze-frame video segment is consistent with the color information of the original video.
In the embodiment of the disclosure, after the freeze-frame video segment is generated, the video editing engine architecture may return it to the client for display. Since the freeze-frame video segment is obtained by rendering the freeze-frame picture according to the color parameter information of the original video, it carries the same color information as the original video; the freeze-frame video segment displayed in the client therefore shows no color difference from the original video, which improves the visual effect of the freeze-frame video.
According to the video processing method provided by the embodiment of the disclosure, the freeze-frame picture sent by the client is received, the associated information corresponding to the freeze-frame picture is parsed to obtain the color parameter information corresponding to the original video, the freeze-frame picture is rendered according to the color parameter information and given a duration to generate the freeze-frame video segment, and finally the freeze-frame video segment is returned to the client for display. Color processing is thus performed using the color parameter information of the original video in the rendering operation that generates the freeze-frame video segment, ensuring that the generated freeze-frame video segment is consistent in color with the original video, improving the quality of video freeze-frame processing and the visual effect of the freeze-frame video.
In an alternative implementation of the embodiment of the present disclosure, as shown in fig. 2, step 102 may include the following sub-steps, based on the embodiment shown in fig. 1:
Step 201, obtaining a file name of the freeze-frame picture, wherein the file name includes color field information corresponding to the original video.
When generating and naming the freeze-frame picture, the client may name it based on a preset file naming format and the color parameter information corresponding to the original video, generating a file name containing color field information corresponding to the original video; in the file name of the freeze-frame picture, each piece of color field information corresponds to color parameter information of the original video.
In the embodiment of the disclosure, the video editing engine architecture may obtain the file name of the freeze-frame picture either in real time or from the storage path of the freeze-frame picture; both cases are explained below.
The video editing engine architecture may obtain the file name of the freeze-frame picture according to the transmission information of the freeze-frame picture when receiving the freeze-frame picture sent by the client.
In an exemplary embodiment, the freeze-frame pictures received by the video editing engine architecture may be stored on the local disk by default, so their storage path is determined as well; when obtaining the file name of a freeze-frame picture, the video editing engine architecture may therefore parse the path information under which the freeze-frame picture is stored on the local disk and extract the file name.
For example, suppose the freeze-frame picture is saved on the local disk under a path on drive D ending in abc.jpeg; parsing the path information extracts the file name of the freeze-frame picture as abc, where the file name does not contain the extension, i.e., the suffix.
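A minimal sketch of this extraction, assuming a Windows-style path whose directory component is invented for illustration:

```python
# Extract the extension-free file name ("abc") from the saved path; the
# directory "D:\freeze" is a made-up example.
from pathlib import PureWindowsPath

def freeze_picture_stem(saved_path: str) -> str:
    return PureWindowsPath(saved_path).stem

print(freeze_picture_stem(r"D:\freeze\abc.jpeg"))  # -> abc
```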
Step 202, parsing the color field information to obtain color parameter information corresponding to the original video.
In the embodiment of the disclosure, after the file name of the freeze-frame picture is obtained, the color field information in the file name may be parsed to obtain the color parameter information corresponding to the original video.
It can be understood that the color field information in the file name corresponds to the color parameter information of the original video, so parsing the color field information allows the color parameter information of the original video to be extracted from it.
For example, if the original video is an HDR video, the file name of the freeze-frame picture includes the relevant HDR color information of the original video. For instance, if the color space of the original video is HDR Rec.2020 HLG, the file name of the freeze-frame picture may include the color field information HLG; parsing the color field information in the file name then yields color parameter information corresponding to the original video that includes HLG.
According to the video processing method provided by the embodiment of the disclosure, the file name of the freeze-frame picture, which includes the color field information corresponding to the original video, is obtained, and the color field information is parsed to obtain the color parameter information corresponding to the original video. The color parameter information of the original video can thus be parsed from the file name of the freeze-frame picture, providing data support for the subsequent color processing based on the color parameter information of the original video.
In an optional embodiment of the present disclosure, parsing the color field information of the file name to obtain the color parameter information of the original video may include:
matching the color field information of the file name against a preset candidate color parameter set, wherein the candidate color parameter set is a color parameter set of a target video type;
if the matching succeeds, determining, from the file name, target color parameter information consistent with the candidate color parameter set, wherein the target color parameter information is the color parameter information corresponding to the original video when the original video belongs to the target video type;
if the matching fails, determining preset default color parameter information as the color parameter information corresponding to the original video.
The target video type may include HDR video, and as technology advances it may also include video types with richer color than HDR video. The candidate color parameter set is related to the target video type and includes color parameter information related to it. For example, when the target video type includes HDR video: since the color transfer characteristics of HDR video include the two standards PQ and HLG, and the color primaries information of HDR video includes 709, p3, and 2020, the candidate color parameter set may include PQ, HLG, 709, p3, and 2020. If the target video type includes not only HDR video but also other types, the candidate color parameter set includes not only the color parameter information related to HDR video but also color parameter information related to the other target video types.
It can be appreciated that the candidate color parameter set may include only part of the color parameter information for the target video type; for example, when the target video type is HDR video, the candidate color parameter set may include PQ, HLG, 709, and p3 but not 2020.
The candidate color parameter set may be represented by a regular expression. For example, when the target video type is HDR video, the candidate color parameter set may be represented as the regular expression "((PQ)|(HLG))((709)|(p3))".
In the embodiment of the disclosure, after the file name of the freeze-frame picture is extracted, each piece of color field information of the extracted file name may be matched against the preset candidate color parameter set. When the matching succeeds, target color parameter information consistent with the candidate color parameter set is determined from the file name as the color parameter information corresponding to the original video; when the matching fails, preset default color parameter information is determined as the color parameter information corresponding to the original video.
For example, assume the candidate color parameter set includes PQ, HLG, 709, and p3, and that for an original video in HDR format the file name of the corresponding freeze-frame picture is 1_pq_709, where 1, pq, and 709 each correspond to one field and pq and 709 are color field information. By matching each piece of color field information of the file name against the candidate color parameter set, pq and 709 are matched successfully, and the target color parameter information is determined to include PQ and 709. If the video type of the original video is not the target video type, matching each field of the file name of the freeze-frame picture against the candidate color parameter set fails, and the default color parameter information, which is preset, is determined as the color parameter information corresponding to the original video.
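A sketch of this matching step, with the patent's candidate-set expression collapsed into a single per-field alternation for simplicity; treating an empty match list as the fallback-to-defaults case is an assumption consistent with the example above.

```python
# Match each underscore-separated field of the file name against the
# candidate color parameter set {PQ, HLG, 709, p3}.
import re

CANDIDATE_RE = re.compile(r"(pq|hlg|709|p3)", re.IGNORECASE)

def match_color_fields(file_stem: str) -> list[str]:
    return [f for f in file_stem.split("_") if CANDIDATE_RE.fullmatch(f)]

print(match_color_fields("1_pq_709"))  # ['pq', '709'] -> matching succeeds
print(match_color_fields("1"))         # []            -> fall back to defaults
```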
In the embodiment of the disclosure, the color field information of the file name is matched against the preset candidate color parameter set; when the matching succeeds, target color parameter information consistent with the candidate color parameter set is determined from the file name, and when the matching fails, preset default color parameter information is determined as the color parameter information corresponding to the original video. The color parameter information of the original video can thus be determined, providing data support for the subsequent rendering processing based on it to generate the freeze-frame video segment.
In an optional embodiment of the disclosure, in a case where the original video is an HDR video and the HDR-related color information obtained from the original video includes both color transfer characteristic information and color primaries information, the color field information corresponding to the original video included in the file name of the freeze-frame picture comprises:
first field information representing the HDR color transfer characteristic, and second field information representing the HDR color primaries information.
The first field is a color transfer characteristic field (denoted color_transfer) used to record the color transfer characteristic information of the original video; the first field information can represent the HDR color transfer characteristic of the original video. The second field is a color primaries field (denoted color_primary) used to record the color primaries information of the original video; the second field information can represent the HDR color primaries information of the original video.
Correspondingly, parsing the color field information to obtain the color parameter information corresponding to the original video includes:
parsing the first field information to extract the HDR color transfer characteristic; and
parsing the second field information to extract the HDR color primaries information.
For example, assume the original video is an HDR video, the HDR-related color information acquired by the client from the original video includes the color transfer characteristic information HLG and the color primaries information 709, and the file name of the freeze-frame picture generated by the client includes the color field information HLG and 709, where HLG is the first field information and 709 is the second field information. The file name (without extension) of the freeze-frame picture may then be denoted hlg_709. After the video editing engine architecture obtains the file name of the freeze-frame picture, parsing the first field information extracts the HDR color transfer characteristic HLG, and parsing the second field information extracts the HDR color primaries information 709. The extracted HDR color transfer characteristic HLG and HDR color primaries information 709 are the color parameter information corresponding to the original video parsed from the color field information of the file name; rendering the freeze-frame picture with the parsed color parameters then yields a freeze-frame video segment consistent in color with the original video, improving the visual effect of the freeze-frame video.
In an optional embodiment of the disclosure, in a case where the original video is an HDR video and the HDR-related color information acquired from the original video includes either color transfer characteristic information or color primaries information, the color field information corresponding to the original video included in the file name of the freeze-frame picture comprises:
first field information representing the HDR color transfer characteristic, or second field information representing the HDR color primaries information.
For explanations of the first field, the first field information, the second field, and the second field information, refer to the relevant content of the foregoing embodiment; they are not repeated here.
Correspondingly, parsing the color field information to obtain the color parameter information corresponding to the original video includes:
when the file name includes only the first field information, parsing the first field information to extract the HDR color transfer characteristic, and using preset default color primaries information as the HDR color primaries information; or,
when the file name includes only the second field information, parsing the second field information to extract the HDR color primaries information, and using a preset default color transfer characteristic as the HDR color transfer characteristic.
That is, in the embodiment of the present disclosure, when only one of the color transfer characteristic and the color primaries information is included in the file name, the other may be determined according to preset default information.
The default color primaries information and the default color transfer characteristic may be preset; for example, the default color primaries information is set to 2020 and the default color transfer characteristic to PQ.
For example, let the default color primaries information be 2020, the default color transfer characteristic be PQ, the target video type be HDR video, and the original video be an HDR video, i.e., of the target video type. Assume the HDR-related color information acquired from the original video includes the color transfer characteristic information HLG, so the file name of the freeze-frame picture generated by the client includes the color field information HLG, i.e., only the first field information HLG. After the video editing engine architecture obtains the file name of the freeze-frame picture, parsing the first field information extracts the HDR color transfer characteristic HLG, and the default color primaries information 2020 is determined as the HDR color primaries information. Rendering with the determined HDR color transfer characteristic HLG and HDR color primaries information 2020 then yields a freeze-frame video segment with a better visual effect, reducing the color difference between the freeze-frame video segment and the original video.
Similarly, assume the HDR-related color information acquired from the original video includes the color primaries information 709, so the file name of the freeze-frame picture generated by the client includes the color field information 709, i.e., only the second field information 709. After the video editing engine architecture obtains the file name of the freeze-frame picture, parsing the second field information extracts the HDR color primaries information 709, and the default color transfer characteristic PQ is determined as the HDR color transfer characteristic. Rendering with the determined HDR color transfer characteristic PQ and HDR color primaries information 709 then yields a freeze-frame video segment with a good visual effect, reducing the color difference between the freeze-frame video segment and the original video.
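The two single-field cases above, together with the both-fields case of the previous embodiment, can be folded into one small parser. The following sketch assumes the preset defaults from the examples (PQ and 2020) and an underscore-separated file name; the field vocabulary is an assumption.

```python
# Parse the color fields of a freeze-frame picture name, falling back to
# the preset defaults when a field is absent.
DEFAULT_TRANSFER = "pq"     # preset default color transfer characteristic
DEFAULT_PRIMARIES = "2020"  # preset default color primaries information

def parse_hdr_fields(file_stem: str) -> tuple[str, str]:
    transfer = primaries = None
    for field in file_stem.lower().split("_"):
        if field in ("pq", "hlg"):
            transfer = field           # first field: color_transfer
        elif field in ("709", "p3", "2020"):
            primaries = field          # second field: color_primary
    return transfer or DEFAULT_TRANSFER, primaries or DEFAULT_PRIMARIES

print(parse_hdr_fields("hlg_709"))  # ('hlg', '709')
print(parse_hdr_fields("hlg"))      # ('hlg', '2020') default primaries
print(parse_hdr_fields("709"))      # ('pq', '709')   default transfer
```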
Further, in an optional implementation of the disclosed embodiment, after the HDR color transfer characteristic and the HDR color primaries information are determined, color processing may be performed according to the determined color parameter information. Thus, rendering the freeze-frame picture according to the color parameter information includes:
rendering the freeze-frame picture according to the HDR color transfer characteristic and the HDR color primaries information to generate a freeze-frame video frame.
The generated freeze-frame video frame is then given a duration to obtain the freeze-frame video segment.
In the embodiment of the disclosure, rendering the freeze-frame picture according to the determined HDR color transfer characteristic and HDR color primaries information reduces the color difference between the freeze-frame video frame and the original video, producing a freeze-frame video frame whose color is the same as, or basically the same as, that of the original video and improving the visual effect of the freeze-frame video; the sketch below illustrates one step such a renderer might perform.
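The patent does not spell out the rendering math. As one concrete illustration of honoring an HLG transfer characteristic, a renderer would linearize the picture with the BT.2100 HLG inverse OETF before compositing; the pipeline around this step is an assumption.

```python
# Illustration only: linearize an HLG-encoded picture with the BT.2100 HLG
# inverse OETF (non-linear signal E' in [0, 1] -> linear scene light).
import numpy as np

A = 0.17883277
B = 1.0 - 4.0 * A              # 0.28466892
C = 0.5 - A * np.log(4.0 * A)  # 0.55991073

def hlg_inverse_oetf(e_prime: np.ndarray) -> np.ndarray:
    return np.where(
        e_prime <= 0.5,
        (e_prime ** 2) / 3.0,
        (np.exp((e_prime - C) / A) + B) / 12.0,
    )

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in picture
linear = hlg_inverse_oetf(frame)
```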
In an optional embodiment of the disclosure, the video editing engine architecture may further receive the original video sent by the client, perform fusion editing processing on the freeze-frame video segment and the original video to generate a target video, and return the target video to the client for display.
In an exemplary embodiment, when performing fusion editing processing on the generated freeze-frame video segment and the original video, the video editing engine architecture may use image recognition to identify, in the original video, the video frame consistent with the freeze-frame video frame in the freeze-frame video segment, and then splice the freeze-frame video segment at that video frame to generate the target video containing the freeze-frame video segment.
In another exemplary embodiment, when performing fusion editing processing on the generated freeze-frame video segment and the original video, the video editing engine architecture may receive a freeze-frame time point sent by the client and splice the freeze-frame video segment into the original video at that time point, generating a target video containing the freeze-frame video segment. For example, if the freeze-frame time point sent by the client is the 5-second mark of the original video, the video editing engine architecture splices the freeze-frame video segment in after the 5-second mark of the original video, as illustrated by the sketch below.
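A timeline sketch of this time-point splice; the Clip type is a hypothetical stand-in for whatever segment representation the engine uses internally.

```python
# Splice the freeze-frame segment into the original video at the
# freeze-frame time point. Clip is a made-up stand-in type.
from dataclasses import dataclass

@dataclass
class Clip:
    source: str
    start_s: float      # in-point within the source
    duration_s: float

def splice_freeze(original: Clip, freeze: Clip, freeze_point_s: float) -> list[Clip]:
    head = Clip(original.source, 0.0, freeze_point_s)
    tail = Clip(original.source, freeze_point_s,
                original.duration_s - freeze_point_s)
    return [head, freeze, tail]

timeline = splice_freeze(Clip("original.mp4", 0.0, 30.0),
                         Clip("freeze.png", 0.0, 2.0),
                         freeze_point_s=5.0)
# total duration 30 + 2 = 32 s: longer than the original, as noted below
```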
It can be understood that, since the freeze-frame video segment has a certain duration, after the freeze-frame video segment and the original video undergo fusion editing processing, the playing duration of the generated target video is longer than that of the original video.
In the embodiment of the disclosure, the original video sent by the client is received, the freeze-frame video segment and the original video undergo fusion editing processing to generate the target video, and the target video is returned to the client for display.
In practical applications, a user may need to freeze several clips within one original video, in which case multiple freeze-frame pictures must be determined from the original video. To make the generated freeze-frame pictures easy to identify and to avoid duplicate file names, the file names of the freeze-frame pictures should be distinguishable. Thus, in an optional implementation of the embodiment of the present disclosure, when the client determines multiple freeze-frame pictures from the original video, the file name of each freeze-frame picture further includes:
identification field information for distinguishing the multiple freeze-frame pictures.
Illustratively, the identification field may be a single field, and the identification field information may be a sequence number determined according to the order in which the freeze-frame pictures are determined from the original video; for example, the identification field information included in the file name of the first freeze-frame picture determined from the original video is 1, that of the second is 2, and so on.
Illustratively, the identification field may also comprise multiple fields; for example, it may include a global identification field and a private identification field. The global identification field is used to record a globally unique identifier, which may be randomly generated according to the current machine environment; the private identification field is used to record a private identifier, which may be generated from information about the original video (such as its path information). It should be noted that a globally unique identifier and a private identifier may be randomly generated for each freeze-frame picture determined from the original video; since each randomly generated globally unique identifier and private identifier differs, each freeze-frame picture can be distinguished by them.
Alternatively, only one globally unique identifier and one private identifier may be randomly generated for the same original video, in which case the file names of the multiple freeze-frame pictures determined from that original video contain the same globally unique identifier and the same private identifier and cannot be distinguished by them alone. The identification field may then include, in addition to the global identification field and the private identification field, a sequence identification field used to record the sequence number of the freeze-frame picture within the original video; for example, the sequence number of the first freeze-frame picture determined from the original video is 1, so the sequence identification field information in its file name is 1. Each freeze-frame picture is distinguished by the sequence identification field information, while the global identification field information and the private identification field information distinguish different original videos. A sketch of building such a name follows.
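In the sketch below, the exact field order, the hash-based private identifier, and the trailing color fields are assumptions, not the patent's prescribed layout.

```python
# Build a distinguishable freeze-frame picture name: globally unique
# identifier + private identifier (derived from the original video's path)
# + sequence number + color fields.
import hashlib
import uuid

def freeze_picture_name(video_path: str, seq: int,
                        transfer: str, primaries: str) -> str:
    global_id = uuid.uuid4().hex                                          # global field
    private_id = hashlib.md5(video_path.encode("utf-8")).hexdigest()[:8]  # private field
    return f"{global_id}_{private_id}_{seq}_{transfer}_{primaries}"

print(freeze_picture_name("D:/videos/original.mp4", 1, "hlg", "709"))
```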
In the embodiment of the disclosure, when multiple freeze-frame pictures are determined from an original video, the file name of each freeze-frame picture also includes identification field information for distinguishing them, so different freeze-frame pictures of the original video can be accurately distinguished, confusion and misuse of freeze-frame pictures are avoided, and the accuracy of freeze-frame processing of the original video according to the multiple freeze-frame pictures is improved.
To implement the above embodiments, the present disclosure also provides a video processing apparatus, which may be implemented in software and/or hardware and may generally be integrated in an electronic device configured with a video editing engine architecture; the electronic device may be, for example, a server.
Fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure, and as shown in fig. 3, the video processing apparatus 30 may include:
the receiving module 301 is configured to receive a freeze-frame picture sent by a client, where the freeze-frame picture is a picture frame corresponding to a freeze-frame time in an original video of the client;
the parsing module 302 is configured to parse the associated information corresponding to the freeze-frame picture and obtain color parameter information corresponding to the original video;
the rendering module 303 is configured to render the freeze-frame picture according to the color parameter information and set a duration for the freeze-frame picture to generate a freeze-frame video segment;
and the sending module 304 is configured to return the freeze-frame video segment to the client for display, where the freeze-frame video segment is consistent with the color information of the original video.
In an alternative implementation of the disclosed embodiment, the parsing module 302 includes:
the acquisition unit is used for acquiring the file name of the freeze-frame picture, wherein the file name comprises color field information corresponding to the original video;
and the analysis unit is used for analyzing the color field information to obtain color parameter information corresponding to the original video.
In an optional implementation manner of the embodiment of the disclosure, the obtaining unit is further configured to:
parse the path information under which the freeze-frame picture is stored on the local disk, and extract the file name of the freeze-frame picture.
In an optional implementation of the disclosed embodiment, the parsing unit is further configured to:
match the color field information of the file name against a preset candidate color parameter set, wherein the candidate color parameter set is a color parameter set of a target video type;
if the matching succeeds, determine, from the file name, target color parameter information consistent with the candidate color parameter set, wherein the target color parameter information is the color parameter information corresponding to the original video when the original video belongs to the target video type;
if the matching fails, determine preset default color parameter information as the color parameter information corresponding to the original video.
In an optional implementation of the disclosed embodiment, in a case where the original video is a high dynamic range (HDR) video, the file name of the freeze-frame picture includes: first field information representing the HDR color transfer characteristic, and second field information representing the HDR color primaries information; correspondingly, the parsing unit is further configured to:
parse the first field information to extract the HDR color transfer characteristic; and
parse the second field information to extract the HDR color primaries information.
In an optional implementation of the disclosed embodiment, in a case where the original video is a high dynamic range (HDR) video, the file name of the freeze-frame picture includes: first field information representing the HDR color transfer characteristic, or second field information representing the HDR color primaries information; correspondingly, the parsing unit is further configured to:
when the file name includes only the first field information, parse the first field information to extract the HDR color transfer characteristic, and use preset default color primaries information as the HDR color primaries information; or,
when the file name includes only the second field information, parse the second field information to extract the HDR color primaries information, and use a preset default color transfer characteristic as the HDR color transfer characteristic.
In an alternative implementation of the disclosed embodiment, the receiving module 301 is further configured to:
receiving the original video sent by the client;
the video processing device 30 further includes:
the fusion module is configured to perform fusion editing processing on the freeze-frame video segment and the original video to generate a target video;
the sending module 304 is further configured to:
and returning the target video to the client for display.
The video processing device provided by the embodiment of the disclosure can execute the video processing method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
To achieve the above embodiments, the present disclosure also proposes a computer program product comprising a computer program/instruction which, when executed by a processor, implements the video processing method in the above embodiments.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Referring now in particular to fig. 4, a schematic diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various suitable actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage means 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. When executed by the processing device 401, the computer program performs the above-described functions defined in the video processing method of the embodiment of the present disclosure.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a freeze-frame picture sent by a client, wherein the freeze-frame picture is the picture frame corresponding to the freeze-frame time in an original video of the client; parse the association information corresponding to the freeze-frame picture to obtain color parameter information corresponding to the original video; render the freeze-frame picture according to the color parameter information, and set a duration for the freeze-frame picture to generate a freeze-frame video segment; and return the freeze-frame video segment to the client for display, wherein the color information of the freeze-frame video segment is consistent with that of the original video.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The names of the units do not, in some cases, constitute a limitation of the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In accordance with one or more embodiments of the present disclosure, the present disclosure provides a video processing method, including:
receiving a freeze-frame picture sent by a client, wherein the freeze-frame picture is the picture frame corresponding to the freeze-frame time in an original video of the client;
parsing the association information corresponding to the freeze-frame picture to obtain color parameter information corresponding to the original video;
rendering the freeze-frame picture according to the color parameter information, and setting a duration for the freeze-frame picture to generate a freeze-frame video segment;
and returning the freeze-frame video segment to the client for display, wherein the color information of the freeze-frame video segment is consistent with that of the original video.
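By way of non-limiting illustration only, the following minimal sketch shows how the above steps might be realized on a server, assuming the ffmpeg command-line tool is available on the PATH; the function name, the duration handling, and the example color parameter values are assumptions for this sketch rather than part of the disclosed implementation.

```python
import subprocess
from pathlib import Path

def make_freeze_segment(picture: str, duration: float,
                        color_trc: str, color_primaries: str) -> str:
    """Render a received freeze-frame picture into a fixed-duration video
    segment tagged with the color parameters of the original video (sketch)."""
    out = str(Path(picture).with_suffix(".mp4"))
    subprocess.run([
        "ffmpeg", "-y",
        "-loop", "1", "-i", picture,           # loop the still picture
        "-t", str(duration),                   # set the freeze-frame duration
        "-color_trc", color_trc,               # e.g. "smpte2084" (HDR PQ)
        "-color_primaries", color_primaries,   # e.g. "bt2020"
        out,
    ], check=True)
    return out  # segment path, returned to the client for display

# Example: a 2-second freeze-frame segment carrying HDR color tags.
# make_freeze_segment("freeze.png", 2.0, "smpte2084", "bt2020")
```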
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, the parsing the association information corresponding to the freeze-frame picture to obtain the color parameter information corresponding to the original video includes:
acquiring a file name of the freeze-frame picture, wherein the file name includes color field information corresponding to the original video;
and parsing the color field information to obtain the color parameter information corresponding to the original video.
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, the acquiring the file name of the freeze-frame picture includes:
parsing the path information of the freeze-frame picture as stored on a local disk, and extracting the file name of the freeze-frame picture.
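As a small hedged illustration, the file name can be recovered from the stored path with standard library calls; the underscore-separated field layout in the example path is an assumption for this sketch, not a naming scheme fixed by the disclosure.

```python
from pathlib import Path

def extract_file_name(picture_path: str) -> str:
    # Parse the local-disk path and keep only the file name (no extension).
    return Path(picture_path).stem

# Assumed layout for the example: color fields joined by underscores.
name = extract_file_name("/data/uploads/freeze_0421_smpte2084_bt2020.png")
fields = name.split("_")  # ["freeze", "0421", "smpte2084", "bt2020"]
```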
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, the parsing the color field information to obtain the color parameter information corresponding to the original video includes:
matching the color field information of the file name against a preset candidate color parameter set, wherein the candidate color parameter set is a color parameter set of a target video type;
if the matching is successful, determining, from the file name, target color parameter information consistent with the candidate color parameter set, wherein the target color parameter information is the color parameter information corresponding to the original video when the original video belongs to the target video type;
if the matching fails, determining the preset default color parameter information as the color parameter information corresponding to the original video.
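A minimal sketch of this matching step follows, assuming HDR is the target video type; the candidate values and the SDR defaults shown are illustrative assumptions, not values fixed by the disclosure.

```python
# Candidate color parameter set for the target video type (HDR) -- assumed values.
HDR_CANDIDATES = {"smpte2084", "arib-std-b67", "bt2020"}
# Preset default color parameter information (SDR) used when matching fails.
DEFAULT_PARAMS = ["bt709", "bt709"]

def resolve_color_parameters(fields: list[str]) -> list[str]:
    matched = [f for f in fields if f in HDR_CANDIDATES]
    if matched:
        # Matching succeeded: the original video belongs to the target type,
        # and the matched fields are its target color parameter information.
        return matched
    # Matching failed: fall back to the preset default color parameters.
    return DEFAULT_PARAMS

# resolve_color_parameters(["freeze", "0421", "smpte2084", "bt2020"])
# -> ["smpte2084", "bt2020"]
```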
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, in a case where the original video is a high dynamic range (HDR) video, the color field information included in the file name includes:
first field information representing an HDR color transfer characteristic, and second field information representing HDR color primaries information;
and the parsing the color field information to obtain the color parameter information corresponding to the original video includes:
parsing the first field information to extract the HDR color transfer characteristic; and
parsing the second field information to extract the HDR color primaries information.
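Under the assumption that the two HDR fields are drawn from disjoint value sets and are both present in the file name, the two parse steps above might look like the following sketch; the recognized values are examples only.

```python
# Assumed value sets: transfer characteristics and primaries are disjoint.
TRANSFER_VALUES = {"smpte2084", "arib-std-b67"}   # PQ and HLG transfer curves
PRIMARIES_VALUES = {"bt2020", "display-p3"}

def parse_both_hdr_fields(fields: list[str]) -> tuple[str, str]:
    # Both fields are assumed present in this variant of the method.
    transfer = next(f for f in fields if f in TRANSFER_VALUES)    # first field
    primaries = next(f for f in fields if f in PRIMARIES_VALUES)  # second field
    return transfer, primaries
```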
According to one or more embodiments of the present disclosure, in the video processing method provided by the present disclosure, in a case where the original video is a high dynamic range (HDR) video, the color field information included in the file name includes:
first field information representing an HDR color transfer characteristic, or second field information representing HDR color primaries information;
and the parsing the color field information to obtain the color parameter information corresponding to the original video includes:
in a case where the file name includes only the first field information, parsing the first field information to extract the HDR color transfer characteristic, and taking preset default color primaries information as the HDR color primaries information; or
in a case where the file name includes only the second field information, parsing the second field information to extract the HDR color primaries information, and taking a preset default color transfer characteristic as the HDR color transfer characteristic.
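For the single-field case, a sketch under the same assumed value sets follows; the defaults chosen here (PQ transfer, BT.2020 primaries) are illustrative assumptions, not values fixed by the disclosure.

```python
TRANSFER_VALUES = {"smpte2084", "arib-std-b67"}
PRIMARIES_VALUES = {"bt2020", "display-p3"}
DEFAULT_TRANSFER = "smpte2084"   # preset default color transfer characteristic
DEFAULT_PRIMARIES = "bt2020"     # preset default color primaries information

def parse_single_hdr_field(fields):
    transfer = next((f for f in fields if f in TRANSFER_VALUES), None)
    primaries = next((f for f in fields if f in PRIMARIES_VALUES), None)
    if transfer is not None and primaries is None:
        primaries = DEFAULT_PRIMARIES   # only the first field was present
    elif primaries is not None and transfer is None:
        transfer = DEFAULT_TRANSFER     # only the second field was present
    return transfer, primaries
```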
In accordance with one or more embodiments of the present disclosure, in a video processing method provided by the present disclosure, the method further includes:
receiving the original video sent by the client;
and performing fusion editing processing on the freeze-frame video segment and the original video to generate a target video, and returning the target video to the client for display.
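The fusion editing step could, for instance, be sketched with ffmpeg's concat demuxer, assuming the freeze-frame segment was encoded with the same codec and resolution as the original video so that the streams can be copied without re-encoding; the function and file names are hypothetical.

```python
import subprocess
import tempfile
from pathlib import Path

def fuse_segments(original: str, freeze_segment: str, target: str) -> str:
    # Write the list file consumed by ffmpeg's concat demuxer.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(f"file '{Path(original).resolve()}'\n")
        f.write(f"file '{Path(freeze_segment).resolve()}'\n")
        list_file = f.name
    subprocess.run([
        "ffmpeg", "-y", "-f", "concat", "-safe", "0",
        "-i", list_file, "-c", "copy", target,   # splice without re-encoding
    ], check=True)
    return target  # target video, returned to the client for display

# fuse_segments("original.mp4", "freeze.mp4", "target.mp4")
```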
According to one or more embodiments of the present disclosure, there is provided a video processing apparatus including:
a receiving module, configured to receive a freeze-frame picture sent by a client, wherein the freeze-frame picture is the picture frame corresponding to the freeze-frame time in an original video of the client;
a parsing module, configured to parse the association information corresponding to the freeze-frame picture and obtain color parameter information corresponding to the original video;
a rendering module, configured to render the freeze-frame picture according to the color parameter information and set a duration for the freeze-frame picture to generate a freeze-frame video segment;
and a sending module, configured to return the freeze-frame video segment to the client for display, wherein the color information of the freeze-frame video segment is consistent with that of the original video.
In accordance with one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the parsing module 302 includes:
an acquisition unit, configured to acquire the file name of the freeze-frame picture, wherein the file name includes color field information corresponding to the original video;
and a parsing unit, configured to parse the color field information to obtain the color parameter information corresponding to the original video.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the acquisition unit is further configured to:
parse the path information of the freeze-frame picture as stored on a local disk, and extract the file name of the freeze-frame picture.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the parsing unit is further configured to:
match the color field information of the file name against a preset candidate color parameter set, wherein the candidate color parameter set is a color parameter set of a target video type;
if the matching is successful, determine, from the file name, target color parameter information consistent with the candidate color parameter set, wherein the target color parameter information is the color parameter information corresponding to the original video when the original video belongs to the target video type;
if the matching fails, determine the preset default color parameter information as the color parameter information corresponding to the original video.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, in a case where the original video is a high dynamic range (HDR) video, the file name of the freeze-frame picture includes: first field information representing an HDR color transfer characteristic, and second field information representing HDR color primaries information; correspondingly, the parsing unit is further configured to:
parse the first field information to extract the HDR color transfer characteristic; and
parse the second field information to extract the HDR color primaries information.
According to one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, in a case where the original video is a high dynamic range (HDR) video, the file name of the freeze-frame picture includes: first field information representing an HDR color transfer characteristic, or second field information representing HDR color primaries information; correspondingly, the parsing unit is further configured to:
in a case where the file name includes only the first field information, parse the first field information to extract the HDR color transfer characteristic, and take preset default color primaries information as the HDR color primaries information; or
in a case where the file name includes only the second field information, parse the second field information to extract the HDR color primaries information, and take a preset default color transfer characteristic as the HDR color transfer characteristic.
In accordance with one or more embodiments of the present disclosure, in the video processing apparatus provided by the present disclosure, the receiving module 301 is further configured to:
receive the original video sent by the client;
the video processing device 30 further includes:
a fusion module, configured to perform fusion editing processing on the freeze-frame video segment and the original video to generate a target video;
the sending module 304 is further configured to:
return the target video to the client for display.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the video processing methods provided in the present disclosure.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium storing a computer program for performing any one of the video processing methods provided by the present disclosure.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to herein is not limited to the specific combinations of the features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, embodiments formed by substituting the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (10)

1. A video processing method, comprising:
receiving a freeze-frame picture sent by a client, wherein the freeze-frame picture is the picture frame corresponding to the freeze-frame time in an original video of the client;
parsing the association information corresponding to the freeze-frame picture to obtain color parameter information corresponding to the original video;
rendering the freeze-frame picture according to the color parameter information, and setting a duration for the freeze-frame picture to generate a freeze-frame video segment;
and returning the freeze-frame video segment to the client for display, wherein the color information of the freeze-frame video segment is consistent with that of the original video.
2. The method according to claim 1, wherein the parsing the association information corresponding to the freeze-frame picture to obtain color parameter information corresponding to the original video comprises:
acquiring a file name of the freeze-frame picture, wherein the file name comprises color field information corresponding to the original video;
and parsing the color field information to obtain the color parameter information corresponding to the original video.
3. The method according to claim 2, wherein the acquiring the file name of the freeze-frame picture comprises:
parsing the path information of the freeze-frame picture as stored on a local disk, and extracting the file name of the freeze-frame picture.
4. The method of claim 2, wherein said parsing the color field information to obtain color parameter information corresponding to the original video comprises:
matching the color field information of the file name against a preset candidate color parameter set, wherein the candidate color parameter set is a color parameter set of a target video type;
if the matching is successful, determining, from the file name, target color parameter information consistent with the candidate color parameter set, wherein the target color parameter information is the color parameter information corresponding to the original video when the original video belongs to the target video type;
if the matching fails, determining the preset default color parameter information as the color parameter information corresponding to the original video.
5. The method of claim 2, wherein, in a case where the original video is a high dynamic range (HDR) video, the color field information included in the file name comprises:
first field information representing an HDR color transfer characteristic, and second field information representing HDR color primaries information;
and the parsing the color field information to obtain the color parameter information corresponding to the original video comprises:
parsing the first field information to extract the HDR color transfer characteristic; and
parsing the second field information to extract the HDR color primaries information.
6. The method of claim 2, wherein, in a case where the original video is a high dynamic range (HDR) video, the color field information included in the file name comprises:
first field information representing an HDR color transfer characteristic, or second field information representing HDR color primaries information;
and the parsing the color field information to obtain the color parameter information corresponding to the original video comprises:
in a case where the file name includes only the first field information, parsing the first field information to extract the HDR color transfer characteristic, and taking preset default color primaries information as the HDR color primaries information; or
in a case where the file name includes only the second field information, parsing the second field information to extract the HDR color primaries information, and taking a preset default color transfer characteristic as the HDR color transfer characteristic.
7. The method of any one of claims 1-6, further comprising:
receiving the original video sent by the client;
and performing fusion editing processing on the freeze-frame video segment and the original video to generate a target video, and returning the target video to the client for display.
8. A video processing apparatus, comprising:
a receiving module, configured to receive a freeze-frame picture sent by a client, wherein the freeze-frame picture is the picture frame corresponding to the freeze-frame time in an original video of the client;
a parsing module, configured to parse the association information corresponding to the freeze-frame picture and obtain color parameter information corresponding to the original video;
a rendering module, configured to render the freeze-frame picture according to the color parameter information and set a duration for the freeze-frame picture to generate a freeze-frame video segment;
and a sending module, configured to return the freeze-frame video segment to the client for display, wherein the color information of the freeze-frame video segment is consistent with that of the original video.
9. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the video processing method of any of the preceding claims 1-7.
10. A computer readable storage medium, characterized in that the storage medium stores a computer program for executing the video processing method according to any one of the preceding claims 1-7.
CN202210494202.0A 2022-05-07 2022-05-07 Video processing method, device, electronic equipment and storage medium Pending CN117061817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210494202.0A CN117061817A (en) 2022-05-07 2022-05-07 Video processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210494202.0A CN117061817A (en) 2022-05-07 2022-05-07 Video processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117061817A 2023-11-14

Family

ID=88667932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210494202.0A Pending CN117061817A (en) 2022-05-07 2022-05-07 Video processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117061817A (en)

Similar Documents

Publication Publication Date Title
KR102158557B1 (en) Method and device for determining response time
US9961398B2 (en) Method and device for switching video streams
US11818424B2 (en) Method and apparatus for generating video, electronic device, and computer readable medium
CN111629251B (en) Video playing method and device, storage medium and electronic equipment
US11928152B2 (en) Search result display method, readable medium, and terminal device
US11785195B2 (en) Method and apparatus for processing three-dimensional video, readable storage medium and electronic device
CN110545472B (en) Video data processing method and device, electronic equipment and computer readable medium
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
KR20220144857A (en) Multimedia data publishing method and apparatus, device and recording medium
CN113507637A (en) Media file processing method, device, equipment, readable storage medium and product
CN111818383A (en) Video data generation method, system, device, electronic equipment and storage medium
CN114567812A (en) Audio playing method, device, system, electronic equipment and storage medium
CN114125551B (en) Video generation method, device, electronic equipment and computer readable medium
CN113535105A (en) Media file processing method, device, equipment, readable storage medium and product
CN113839829A (en) Cloud game delay testing method, device and system and electronic equipment
WO2023098576A1 (en) Image processing method and apparatus, device, and medium
CN114584808B (en) Video stream acquisition method, device, system, equipment and medium
CN117061817A (en) Video processing method, device, electronic equipment and storage medium
CN111246254A (en) Video recommendation method and device, server, terminal equipment and storage medium
CN111385638B (en) Video processing method and device
CN113139090A (en) Interaction method, interaction device, electronic equipment and computer-readable storage medium
CN113382293A (en) Content display method, device, equipment and computer readable storage medium
CN112287171A (en) Information processing method and device and electronic equipment
CN111447490A (en) Streaming media file processing method and device
CN115334360B (en) Audio and video playing method and device, electronic equipment and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination