CN115250377A - Video processing method, processing platform, electronic device and storage medium - Google Patents


Info

Publication number
CN115250377A
Authority
CN
China
Prior art keywords
video
segment
video segment
segments
label
Prior art date
Legal status
Granted
Application number
CN202110458994.1A
Other languages
Chinese (zh)
Other versions
CN115250377B (en)
Inventor
张民
吕德政
崔刚
张彤
张艳
Current Assignee
Shenzhen Frame Color Film And Television Technology Co ltd
Original Assignee
Shenzhen Frame Color Film And Television Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Frame Color Film And Television Technology Co ltd filed Critical Shenzhen Frame Color Film And Television Technology Co ltd
Priority to CN202110458994.1A priority Critical patent/CN115250377B/en
Publication of CN115250377A publication Critical patent/CN115250377A/en
Application granted granted Critical
Publication of CN115250377B publication Critical patent/CN115250377B/en
Legal status: Active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/84 — Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8456 — Structuring of content by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, DB Structures and FS Structures Therefor (AREA)

Abstract

The application provides a video processing method, a processing platform, an electronic device and a storage medium. Target video information to be processed is acquired and then divided into a plurality of first video segments. The first video segments and an annotation file are issued to each client, where the annotation file contains a plurality of preset video characteristic labels. Annotation information sets corresponding to the first video segments are received from each client, where the annotation information set for a first video segment contains at least one of the video characteristic labels. Each first video segment is then processed according to its annotation information set to obtain the processed video information. Because the target video information is divided into video segments, multiple people can process the same video at the same time, which increases the video processing speed.

Description

Video processing method, processing platform, electronic device and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video processing method, a processing platform, an electronic device, and a storage medium.
Background
At present, when performing post-processing on a video, a post-production worker usually has to browse the video in time order and process it according to personal experience, for example, adjusting a video image to increase its brightness or contrast.
However, because video post-processing places high demands on the processing capability of the video processing platform, the number of post-production personnel is limited by equipment resources. As video durations keep increasing, the workload of post-production personnel grows, so the video post-processing speed drops. The application therefore provides a new video processing method to improve the speed of video post-processing.
Disclosure of Invention
The application provides a video processing method, a processing platform, an electronic device and a storage medium, which are used for solving the problem of low video processing speed in the prior art.
A first aspect of the present application provides a video processing method, where the method includes:
acquiring target video information to be processed;
dividing the target video information to obtain a plurality of first video segments;
respectively issuing a plurality of first video segments and an annotation file to each client, wherein the annotation file comprises a plurality of preset video characteristic labels;
receiving annotation information sets corresponding to a plurality of first video segments returned by each client, wherein the annotation information set corresponding to the first video segment comprises at least one video characteristic tag in the plurality of video characteristic tags;
and processing the first video segment according to the annotation information set corresponding to each first video segment to obtain the processed video information.
In a possible implementation manner, the dividing the target video information to obtain a plurality of first video segments includes:
dividing the target video information according to a lens to obtain a plurality of second video segments;
and allocating each second video segment to the first video segment to obtain a plurality of first video segments, wherein each first video segment comprises at least one second video segment.
In one possible implementation, the assigning each second video segment to a first video segment to obtain a plurality of first video segments includes:
determining the number of the plurality of first video segments according to the number of the current online clients;
detecting the duration of each second video segment;
and distributing each second video segment to the first video segment according to the number of the plurality of first video segments and the time length of each second video segment to obtain a plurality of first video segments, wherein the time lengths of the plurality of first video segments are equal.
In one possible implementation, the assigning each second video segment to a first video segment to obtain a plurality of first video segments includes:
identifying each second video segment, and determining the content type of each second video segment;
and distributing each second video segment to the first video segments according to the content type of each second video segment to obtain a plurality of first video segments, wherein the content types of the second video segments corresponding to each first video segment are the same.
In a possible implementation manner, the processing the first video segment according to the annotation information set corresponding to each first video segment to obtain processed video information includes:
determining a video processing program corresponding to the video characteristic label in the annotation information set according to the annotation information set corresponding to each first video segment;
and calling a video processing program corresponding to the video characteristic label in the annotation information set to process the first video segment.
In a possible implementation manner, the annotation file further includes: label grade sets corresponding to the video characteristic labels, where each label grade set includes a plurality of label grade parameters; and the annotation information set corresponding to the first video segment further includes: the label grade parameters corresponding to the video characteristic labels in the annotation information set;
the calling a video processing program corresponding to the video characteristic tag in the annotation information set to process the first video segment includes: and calling a video processing program corresponding to the video characteristic label in the annotation information set, and processing the first video segment by adopting the label grade parameter corresponding to the video characteristic label in the annotation information set.
In one possible implementation, the video characteristic labels in the annotation file include two or more of the following: a contrast label, a brightness label, a motion label, and a face label.
In a second aspect, the present application provides a video processing platform, the platform comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring target video information to be processed;
the dividing unit is used for dividing the target video information to obtain a plurality of first video segments;
the system comprises a sending unit, a receiving unit and a processing unit, wherein the sending unit is used for respectively sending a plurality of first video segments and an annotation file to each client, and the annotation file comprises a plurality of preset video characteristic labels;
the receiving unit is used for receiving annotation information sets corresponding to a plurality of first video segments returned by each client, wherein the annotation information set corresponding to the first video segment comprises at least one video characteristic tag in the plurality of video characteristic tags;
and the processing unit is used for processing the first video segments according to the annotation information set corresponding to each first video segment to obtain the processed video information.
In a possible implementation manner, the dividing unit includes:
the lens detection module is used for dividing the target video information according to lenses to obtain a plurality of second video segments;
and the dividing module is used for distributing each second video segment to the first video segments to obtain a plurality of first video segments, wherein each first video segment comprises at least one second video segment.
In one possible implementation, the dividing module includes:
the time detection module is used for determining the number of the plurality of first video segments according to the number of the current online clients;
the time detection module is further configured to detect a time duration of each second video segment;
the time detection module is further configured to allocate each second video segment to a first video segment according to the number of the plurality of first video segments and the time length of each second video segment, so as to obtain a plurality of first video segments, where the time lengths of the plurality of first video segments are equal.
In one possible implementation, the dividing module includes:
the identification module is used for identifying each second video segment and determining the content type of each second video segment;
the identification module is further configured to allocate each second video segment to the first video segment according to the content type of each second video segment to obtain a plurality of first video segments, where the content types of the second video segments corresponding to each first video segment are the same.
In one possible implementation, the processing unit includes: the system comprises an analysis module and a calling module;
the analysis module is used for determining a video processing program corresponding to the video characteristic label in the annotation information set according to the annotation information set corresponding to each first video segment;
and the calling module is used for calling a video processing program corresponding to the video characteristic label in the annotation information set to process the first video segment.
In a possible implementation manner, the annotation file further includes: label level sets corresponding to the video characteristic labels, where each label level set includes a plurality of label level parameters; and the annotation information set corresponding to the first video segment further includes: the label level parameters corresponding to the video characteristic labels in the annotation information set;
the calling module is specifically configured to call a video processing program corresponding to the video characteristic tag in the annotation information set, and process the first video segment by using the tag level parameter corresponding to the video characteristic tag in the annotation information set.
In one possible implementation, the video characteristic labels in the annotation file include two or more of the following: a contrast label, a brightness label, a motion label, and a face label.
In a third aspect, the present application provides an electronic device, comprising: a memory, a processor;
the memory is used for storing instructions executable by the processor;
wherein the processor is configured to perform the method according to any one of the first aspect according to the executable instructions.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method according to any one of the first aspect when executed by a processor.
In a fifth aspect, the present application provides a computer program product comprising a computer program that, when executed by a processor, implements the method according to any one of the first aspect.
According to the video processing method, the processing platform, the electronic device and the storage medium provided by the application, target video information to be processed is acquired and then divided into a plurality of first video segments; the first video segments and an annotation file are issued to each client, where the annotation file contains a plurality of preset video characteristic labels; annotation information sets corresponding to the first video segments are received from each client, where the annotation information set for a first video segment contains at least one of the video characteristic labels; and each first video segment is processed according to its annotation information set to obtain the processed video information. Because the target video information is divided into video segments, multiple people can process the same video at the same time, which increases the video processing speed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of an application scenario of video processing;
fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 3 is a schematic view of an application scenario of video processing provided in the present application;
fig. 4 is a schematic flowchart of a video information dividing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a video information processing method provided in the present application;
fig. 6 is a schematic structural diagram of a video processing platform provided in the present application;
FIG. 7 is a schematic structural diagram of another video processing platform provided in the present application;
fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
The above drawings show specific embodiments of the present application, which are described in more detail below. The drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate those concepts for those skilled in the art with reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Fig. 1 is a schematic view of an application scenario of video processing. A shooting device shoots an object to obtain original video information. The original video information is then sent to a video processing platform, which processes it (for example, clipping, adding subtitles, rendering images, and so on); the video processing platform may be a server in the cloud, which is not limited here. The processed video is sent to a playing device (such as a television, a mobile phone, or cinema playback equipment), which plays it after receiving it. The processed video thus gives the audience a better viewing experience.
At present, when a video is post-processed, post-production personnel generally have to browse the video on a video processing platform in time order and process it according to personal experience. Because video post-processing places high demands on the processing capability of the video processing platform, and equipment is limited, there are few post-production personnel. As video durations increase, their workload grows, resulting in a decrease in the video post-processing speed.
The following describes the technical solution of the present application and how to solve the above technical problems in detail by specific embodiments. These several specific embodiments may be combined with each other below, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the present application. As shown in fig. 2, the method includes:
s101, obtaining target video information to be processed;
s102, dividing the target video information to obtain a plurality of first video segments.
Exemplarily, an application scenario of the video processing method provided by the embodiment of the present application is shown in fig. 3. Fig. 3 is a schematic view of an application scenario of video processing provided in the present application, where the application scenario includes a client 1, a client 2, a client 3, and a video processing platform 4. The video processing platform may be a remote server. The client may be an application installed on a device, and the application places low demands on the processing power of the device.
After the video processing platform receives the target video information, it splits the received target video information into a plurality of first video segments. In a possible implementation manner, the target video information may be divided evenly or randomly by duration, directly according to the number of clients connected to the video processing platform.
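The even split by client count can be sketched as follows. This is a minimal illustration; representing segments as (start, end) times in seconds is an assumption for clarity — the platform's real splitting operates on actual video data.

```python
def split_evenly(total_duration, num_clients):
    """Split [0, total_duration) into num_clients equal-length time segments."""
    if num_clients <= 0:
        raise ValueError("need at least one client")
    length = total_duration / num_clients
    # each first video segment covers one contiguous, equal-length interval
    return [(i * length, (i + 1) * length) for i in range(num_clients)]
```

For a 100-minute video and 5 online clients, each client would receive one 20-minute segment.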
S103, respectively issuing the plurality of first video segments and the annotation file to each client, wherein the annotation file comprises a plurality of preset video characteristic labels.
Illustratively, after obtaining a plurality of first video segments, a preset annotation file and the first video segments are sent to each client. For example, the obtained first video segments are: a first video segment 1, a first video segment 2 and a first video segment 3. Then, the first video segment 1 is sent to the client 1, the first video segment 2 is sent to the client 2, and the first video segment 3 is sent to the client 3.
In addition, when the first video segments are issued to each client, an annotation file is sent to each client, where the annotation file includes a plurality of preset video characteristic labels. For example, the preset video characteristic labels may include two or more of the following: a contrast label, a brightness label, a motion label, and a face label. It should be noted that the video characteristic labels in the present application include, but are not limited to, the labels illustrated above. The face label can be used to mark key face details in a video segment that need to be emphasized and whose resolution needs to be improved; for example, when details such as the facial expression of a leading character in a movie need to be highlighted, a face characteristic label can be added to them. Furthermore, when it is known in advance that the acquired target video information is a video describing animal growth, in which no face appears, the preset video characteristic labels may include only a contrast label, a brightness label, and a motion label.
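The application does not specify a concrete file format for the annotation file. As a rough illustration only — the field names and tag strings below are assumptions — it could be a small structure listing the preset labels, against which client selections are validated:

```python
# Assumed shape of the annotation file sent to each client: a list of preset
# video characteristic labels (optionally with per-label grade parameters).
annotation_file = {
    "tags": ["contrast", "brightness", "motion", "face"],
    "levels": {  # grade parameters a labeler may pick from (illustrative)
        "contrast": [1, 2, 3],
        "brightness": [1, 2, 3],
    },
}

def is_valid_tag(tag):
    """Check that a client-selected label is one of the preset labels."""
    return tag in annotation_file["tags"]
```

For the animal-growth example above, the `"tags"` list would simply omit `"face"`.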
S104, receiving annotation information sets corresponding to a plurality of first video segments returned by the clients, wherein the annotation information set corresponding to the first video segment comprises at least one video characteristic label in a plurality of video characteristic labels.
Illustratively, after the clients receive the first video segments, each user selects video characteristic labels from the received annotation file to generate the annotation information set corresponding to that first video segment, and uploads it to the video processing platform. That is, by adding a video characteristic label to a video segment at the client, the user indicates that the segment needs one or more types of video processing; for example, selecting the contrast label for a first video segment indicates that the contrast of that segment needs to be adjusted. The generated annotation information set is then sent to the video processing platform.
Optionally, in order to associate a first video segment with its annotation information set, start time information, end time information, and the total number of frames corresponding to the first video segment can be recorded in the annotation information set.
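Such an annotation information set can be sketched as a small record; the field names below are assumptions chosen to mirror the fields just listed (start time, end time, total frames, selected labels):

```python
from dataclasses import dataclass, field

@dataclass
class AnnotationSet:
    """Hypothetical annotation information set a client returns for one
    first video segment, carrying the association fields described above."""
    segment_id: int
    start_time: float        # seconds, within the original target video
    end_time: float
    total_frames: int
    tags: list = field(default_factory=list)  # selected characteristic labels

example = AnnotationSet(segment_id=1, start_time=0.0, end_time=20.0,
                        total_frames=480, tags=["contrast"])
```

The platform can then match a returned set back to its segment by `segment_id` or by the (start, end) interval.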
And S105, processing the first video segments according to the annotation information set corresponding to each first video segment to obtain the processed video information.
For example, after receiving the annotation information sets, the video processing platform processes each first video segment according to the video characteristic labels included in its annotation information set. Specifically, each time the platform receives an annotation information set, it looks up the corresponding first video segment on the platform and processes it; after all the first video segments have been processed, the processed segments are merged in the original temporal order of the video to obtain the processed video.
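The look-up, per-label processing, and in-order merge just described can be sketched as follows. The tag names and the stand-in processing routines are assumptions; real processing programs would operate on video frames rather than strings.

```python
# Map each video characteristic label to its processing program (stand-ins).
PROCESSORS = {
    "contrast": lambda seg: seg + "+contrast",
    "brightness": lambda seg: seg + "+brightness",
}

def process_and_merge(segments, annotations):
    """segments: {seg_id: data}; annotations: {seg_id: [labels]}.
    Process each segment per its labels, then merge in original time order."""
    processed = {}
    for seg_id, data in segments.items():
        for tag in annotations.get(seg_id, []):
            data = PROCESSORS[tag](data)   # apply each labeled processing step
        processed[seg_id] = data
    # merge according to the original temporal order (ascending segment id)
    return [processed[i] for i in sorted(processed)]
```

Note that segments may arrive and be processed in any order; only the final merge restores the original sequence.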
It should be noted that when the video processing platform and the clients communicate (for example, a client uploads an annotation information set to the platform, or the platform issues a video segment and an annotation file to a client), various communication modes may be used, including but not limited to the fourth generation mobile communication technology (4G), the fifth generation mobile communication technology (5G), and so on.
In this embodiment, the processing programs required for video post-processing place high demands on device processing capability, so the video processing platform may be a device with strong processing capability, such as a remote server. The target video information is divided by the video processing platform and then distributed to client devices with weaker processing capability; labels are added to the video segments at multiple clients to generate annotation information sets, which are uploaded to the video processing platform, and the platform then processes each first video segment according to its annotation information set. In this way, multiple people can simultaneously process the segments of the same video, the video processing speed is improved, and the method places low demands on the processing capability of client devices and is easy to implement.
In practical applications, when the target video information is divided (i.e. when step S102 is executed), the division can be realized by the steps shown in fig. 4. Fig. 4 is a schematic flowchart of a method for dividing video information according to an embodiment of the present application, as shown in fig. 4, including the following steps:
s201, dividing the target video information according to the shot to obtain a plurality of second video segments.
For example, when dividing the target video information, the target video information may be divided according to the existing shot detection technology to obtain a plurality of second video segments, where each second video segment corresponds to a shot.
For example, conventional shot detection usually judges according to the differences in contrast, luminance, and motion vectors between adjacent frames of the target video; if the differences are large, the adjacent frames belong to two different shots.
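A toy version of this idea can be written as follows. Here each frame is reduced to a single feature value (standing in for a luminance or contrast measurement — an assumption for brevity), and a shot boundary is declared wherever consecutive values differ by more than a threshold:

```python
def detect_shot_boundaries(frame_features, threshold):
    """Return indices i where a cut occurs before frame i, i.e. where the
    adjacent-frame feature difference exceeds the threshold."""
    return [i for i in range(1, len(frame_features))
            if abs(frame_features[i] - frame_features[i - 1]) > threshold]
```

Each detected boundary index marks the start of a new second video segment.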
S202, distributing each second video segment to the first video segments to obtain a plurality of first video segments, wherein each first video segment comprises at least one second video segment.
In one example, step S202 may be performed by:
the method comprises the steps of firstly, determining the number of a plurality of first video segments according to the number of current online clients;
secondly, detecting the duration of each second video segment;
and thirdly, distributing each second video segment to the first video segment according to the number of the first video segments and the time length of each second video segment to obtain a plurality of first video segments, wherein the time lengths of the first video segments are equal.
Illustratively, after obtaining the plurality of second video segments, the video processing platform detects the number of currently online clients, and takes the number of currently online clients as the number of first video segments. In another case, the video processing platform can also determine the number of the first video segments according to the number of the clients specified by the user.
Then, a plurality of first video segments with equal durations are obtained from the detected duration of each second video segment and the number of first video segments, where the duration of a first video segment is the sum of the durations of the second video segments it contains.
In another case, the durations of the first video segments can differ, as long as the duration of each divided first video segment falls within a preset duration range. For example, if the total duration of the target video information is 100 min and the number of clients is 5, the average duration per client is 20 min, and the preset duration range may be 18 min to 22 min. When the first video segments are obtained from the durations of the second video segments and the number of first video segments, the integrity of each second video segment is preserved, that is, each second video segment corresponds to only one first video segment, and the durations of the resulting first video segments all fall between 18 min and 22 min, which makes the post-processing more accurate.
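One simple way to realize this allocation — a greedy sketch under the stated constraint that each second video segment stays whole — is to always assign the next segment to whichever first video segment currently has the smallest total duration:

```python
def balance_segments(durations, num_bins):
    """Assign whole second video segments (given by their durations) to
    num_bins first video segments, keeping total durations roughly equal.
    Returns (bins, totals): index lists per bin and each bin's total."""
    bins = [[] for _ in range(num_bins)]
    totals = [0.0] * num_bins
    # longest-first greedy keeps the bin totals closer together
    for idx in sorted(range(len(durations)), key=lambda i: -durations[i]):
        target = totals.index(min(totals))
        bins[target].append(idx)
        totals[target] += durations[idx]
    return bins, totals
```

The platform could then verify that every total lies within the preset range (e.g. 18–22 min) before issuing the segments.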
In one example, to allocate each second video segment to a first video segment to obtain a plurality of first video segments (i.e., when performing step S202), the following steps can be further performed:
step one, identifying each second video segment, and determining the content type of each second video segment;
and secondly, distributing each second video section to the first video section according to the content type of each second video section to obtain a plurality of first video sections, wherein the content type of the second video section corresponding to each first video section is the same.
Illustratively, after obtaining the plurality of second video segments, the plurality of first video segments can be generated by the following method. First, the video processing platform identifies the frame images in each second video segment and determines the content categories of that segment. For example, the content categories of second video segment 1 include sky, mountain, grassland and tree; those of second video segment 2 include sky, river and grass; those of second video segment 3 include sky, building and pedestrian; and those of second video segment 4 include sky, river and grass. Since second video segment 2 and second video segment 4 contain the same content categories, second video segment 1 and second video segment 3 can be allocated to first video segment 1 and first video segment 2 respectively, while second video segment 2 and second video segment 4 are allocated to the same first video segment 3, that is, delivered to the same client. The tags for segments with the same content are thereby selected by the same user, so that subjective human factors do not cause large differences in the post-processing of the same content.
In addition, in another embodiment, since the content categories of second video segments 1 and 2 are highly similar, a similarity threshold may also be set, and second video segments whose mutual similarity exceeds the threshold are allocated to the same first video segment. That is, since the similarity among second video segment 1, second video segment 2 and second video segment 4 is high, they can be allocated to first video segment 1, while second video segment 3, whose similarity to the remaining second video segments is low, can be allocated to first video segment 2.
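The similarity-threshold grouping can be sketched, for instance, with Jaccard similarity over the content-category sets. This is an assumption for illustration: the patent does not name a similarity metric, and the function names and the widening of each group's representative set are choices made here, not prescribed by the source.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of content categories."""
    return len(a & b) / len(a | b) if a | b else 1.0

def group_by_similarity(segments, threshold):
    """Assign each second video segment (given as a set of content
    categories) to an existing group (first video segment) whose
    representative categories are similar enough; otherwise open a
    new group."""
    groups = []  # each group: list of segment indices
    reps = []    # representative category set per group
    for idx, cats in enumerate(segments):
        for g, rep in enumerate(reps):
            if jaccard(cats, rep) >= threshold:
                groups[g].append(idx)
                reps[g] = rep | cats  # widen the representative set
                break
        else:
            groups.append([idx])
            reps.append(set(cats))
    return groups
```

With the four example segments from the text and a high threshold, segments 2 and 4 (identical categories) land in one group, while segments 1 and 3 each form their own.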
Alternatively, in this example, when the second video segments are allocated according to content category, the allocation may also take the duration of each second video segment into account, that is, the method of this example is combined with the method of the previous example. Subsequently, when the client determines the annotation information set corresponding to a first video segment, it can include the annotation information sets of the several second video segments contained in that first video segment; in other words, the annotation information sets of the second video segments in a first video segment constitute the annotation information set of that first video segment. In another possible case, the annotation information set corresponding to a first video segment is composed of the annotation information sets corresponding to the individual frame images in that first video segment.
In this embodiment, when dividing the video, the target video information may first be divided into a plurality of second video segments according to shots, and the second video segments are then distributed to form a plurality of first video segments. Specifically, the allocation can be performed according to the duration of each second video segment and the number of first video segments, so that the first video segments have equal durations, each client carries the same workload, and the video processing speed is increased. Alternatively, the allocation can be performed according to the content categories of the second video segments, so that the second video segments within each first video segment have the same or highly similar content types; during post-processing, the same content is then not processed with large differences caused by human factors, which increases both processing speed and processing accuracy.
In practical application, when the video processing platform receives the annotation information set sent by the client and processes the plurality of first video segments, the following steps may be implemented (that is, step S105 includes the following steps), as shown in fig. 5, fig. 5 is a schematic flow diagram of a video information processing method provided by the present application:
s1051, determining a video processing program corresponding to a video characteristic label in an annotation information set according to the annotation information set corresponding to each first video segment;
and S1052, calling a video processing program corresponding to the video characteristic label in the annotation information set, and processing the first video segment.
Illustratively, after the video processing platform acquires each annotation information set, it searches, for each first video segment, the annotation information set corresponding to that segment for video characteristic labels, and then determines the video processing program corresponding to each video characteristic label found. For example, if the annotation information set corresponding to a certain first video segment includes a brightness characteristic label, the brightness adjustment program corresponding to that label is determined, where the correspondence between video characteristic labels and video processing programs may be stored in the video processing platform in advance.
And then, after determining a video processing program corresponding to the first video segment, the video processing platform calls the video processing program to process the first video segment.
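The stored correspondence between labels and processing programs can be sketched as a simple dispatch table. The label names and processing functions below are illustrative assumptions, not taken from the patent; they only show the lookup-and-invoke pattern the text describes.

```python
# Hypothetical processing programs; real ones would transform frames.
def adjust_brightness(segment):
    return f"brightness-adjusted({segment})"

def adjust_contrast(segment):
    return f"contrast-adjusted({segment})"

# Correspondence between video characteristic labels and programs,
# stored in the platform in advance (label names are assumptions).
TAG_PROGRAMS = {
    "brightness": adjust_brightness,
    "contrast": adjust_contrast,
}

def process_first_segment(segment, annotation_set):
    """Look up and invoke the processing program for each video
    characteristic label found in the segment's annotation set;
    labels without a registered program are skipped."""
    for tag in annotation_set:
        program = TAG_PROGRAMS.get(tag)
        if program is not None:
            segment = program(segment)
    return segment
```

A call such as `process_first_segment("seg1", ["brightness"])` then applies exactly the programs named by the labels in the annotation information set.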
In one example, the annotation file further comprises tag level sets corresponding to the plurality of video characteristic labels, where each tag level set includes a plurality of tag level parameters, and the annotation information set corresponding to a first video segment further comprises the tag level parameters corresponding to the video characteristic labels in that set. That is, when the video processing platform issues the first video segments and the annotation file to the clients, the annotation file may also carry a tag level set for each of the predetermined video characteristic labels. For example, when the annotation file contains a brightness characteristic label A, a contrast characteristic label B and a motion characteristic label C, the brightness characteristic label A may be divided into five tag level parameters A1, A2, A3, A4 and A5, where, for the same video segment, a larger tag level parameter yields a higher brightness after adjustment; that is, for the same video segment annotated with tag level parameter A2, the adjusted brightness is brighter than with A1 and darker than with A3.
At this time, step S1052 specifically includes: and calling a video processing program corresponding to the video characteristic label in the labeling information set, and processing the first video segment by adopting a label grade parameter corresponding to the video characteristic label in the labeling information set.
In this embodiment, when the video processing platform processes the first video segment, the processing program corresponding to the video property label is called according to the video property label in the annotation information set corresponding to the first video segment sent by the client, so as to process the first video segment. Furthermore, the annotation file can further include a plurality of label level parameters corresponding to each video characteristic label, after the client receives the annotation file, the client selects the video characteristic label required by the video segment in the annotation file, and selects the label level parameter corresponding to the video characteristic label, so that when the video processing platform processes the first video segment, the video processing platform can call a video processing program according to the video characteristic label corresponding to the first video segment, and adopts the label level parameter corresponding to the first video segment to select the processing parameter, execute the video processing program, and process the first video segment, so that the effect of the processed video information is more accurate, and the user has a better viewing effect.
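The role of the tag level parameter can be sketched as selecting the strength of the invoked processing program. The mapping from levels A1–A5 to concrete gain values below is an assumption made for illustration; the patent only states that a larger level parameter yields a brighter result.

```python
# Assumed level-to-gain mapping; only the monotonic ordering
# (larger level -> brighter) comes from the source text.
LEVEL_GAIN = {"A1": 0.8, "A2": 0.9, "A3": 1.0, "A4": 1.1, "A5": 1.2}

def apply_brightness(pixels, level):
    """Scale 8-bit pixel values by the gain selected by the tag level
    parameter, clamping to the valid 0-255 range."""
    gain = LEVEL_GAIN[level]
    return [min(255, round(p * gain)) for p in pixels]
```

For the same input, `apply_brightness(p, "A4")` is brighter than `apply_brightness(p, "A2")`, matching the ordering the embodiment describes.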
Fig. 6 is a schematic structural diagram of a video processing platform provided in the present application, and as shown in fig. 6, the platform includes:
an obtaining unit 61, configured to obtain target video information to be processed;
a dividing unit 62, configured to divide the target video information to obtain a plurality of first video segments;
a sending unit 63, configured to issue the multiple first video segments and the annotation file to each client, respectively, where the annotation file includes multiple predetermined video characteristic labels;
a receiving unit 64, configured to receive annotation information sets corresponding to the multiple first video segments returned by each client, where the annotation information set corresponding to a first video segment includes at least one of the multiple video characteristic labels;
the processing unit 65 is configured to process each first video segment according to the annotation information set corresponding to the first video segment, so as to obtain processed video information.
The video processing platform provided by this embodiment is used to implement the technical solution provided by the above method; the implementation principle and technical effect are similar and are not described again.
Fig. 7 is a schematic structural diagram of another video processing platform provided in the present application, and as shown in fig. 7, based on the structure shown in fig. 6, the dividing unit 62 includes:
the shot detection module 621 is configured to divide the target video information according to a shot to obtain a plurality of second video segments;
the partitioning module 622 is configured to allocate each of the second video segments to a first video segment, resulting in a plurality of first video segments, wherein each of the first video segments comprises at least one of the second video segments.
In one possible implementation, the dividing module 622 includes:
the time detection module is used for determining the number of the plurality of first video segments according to the number of the current online clients;
the time detection module is also used for detecting the time length of each second video segment;
the time detection module is further configured to allocate each second video segment to a first video segment according to the number of the plurality of first video segments and the time length of each second video segment, so as to obtain a plurality of first video segments, where the time lengths of the plurality of first video segments are equal.
In one possible implementation, the dividing module 622 includes:
the identification module is used for identifying each second video segment and determining the content type of each second video segment;
the identification module is further configured to allocate each second video segment to the first video segment according to the content type of each second video segment to obtain a plurality of first video segments, where the content types of the second video segments corresponding to each first video segment are the same.
In one possible implementation, the processing unit 65 includes: parsing module 651 and calling module 652;
the analysis module 651 is configured to determine, according to the annotation information set corresponding to each first video segment, a video processing program corresponding to the video characteristic tag in the annotation information set;
the invoking module 652 is configured to invoke a video processing program corresponding to the video property tag in the annotation information set to process the first video segment.
In a possible implementation manner, the markup file further includes: label grade sets corresponding to a plurality of video characteristic labels, wherein each label grade set comprises a plurality of label grade parameters; the annotation information set corresponding to the first video segment further comprises: label grade parameters corresponding to video characteristic labels in the label information set;
the invoking module 652 is specifically configured to invoke a video processing program corresponding to the video characteristic tag in the annotation information set, and process the first video segment by using the tag level parameter corresponding to the video characteristic tag in the annotation information set.
In one possible implementation, the video characteristic labels in the annotation file include a plurality of the following: a contrast label, a brightness label, a motion label, and a human face label.
The apparatus provided in this embodiment is used to implement the technical solution provided by the above method, and the implementation principle and the technical effect are similar and will not be described again.
Fig. 8 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, and as shown in fig. 8, the electronic device includes:
a processor (processor) 291; the electronic device further includes a memory (memory) 292, and may also include a communication interface (Communication Interface) 293 and a bus 294. The processor 291, the memory 292, and the communication interface 293 may communicate with each other through the bus 294. The communication interface 293 may be used for the transmission of information. The processor 291 may call logic instructions in the memory 292 to perform the methods of the embodiments described above.
Further, the logic instructions in the memory 292 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 292 is a computer-readable storage medium for storing software programs, computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present application. The processor 291 executes the software programs, instructions and modules stored in the memory 292 to execute functional applications and data processing, i.e., to implement the methods in the above-described method embodiments.
The memory 292 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. Further, the memory 292 may include a high speed random access memory and may also include a non-volatile memory.
The embodiment of the application provides a computer-readable storage medium, in which computer-executable instructions are stored, and the computer-executable instructions are executed by a processor to implement the method provided by the above embodiment.
The embodiment of the present application provides a computer program product, including a computer program, which when executed by a processor implements the method provided by the above embodiment.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of video processing, the method comprising:
acquiring target video information to be processed;
dividing the target video information to obtain a plurality of first video segments;
respectively issuing a plurality of first video segments and annotation files to each client, wherein the annotation files comprise a plurality of preset video characteristic labels;
receiving annotation information sets corresponding to a plurality of first video segments returned by each client, wherein the annotation information set corresponding to the first video segment comprises at least one video characteristic label in the plurality of video characteristic labels;
and processing the first video segment according to the annotation information set corresponding to each first video segment to obtain the processed video information.
2. The method of claim 1, wherein said dividing the target video information into a plurality of first video segments comprises:
dividing the target video information according to shots to obtain a plurality of second video segments;
and allocating each second video segment to the first video segment to obtain a plurality of first video segments, wherein each first video segment comprises at least one second video segment.
3. The method of claim 2, wherein said assigning each second video segment to a first video segment resulting in a plurality of first video segments comprises:
determining the number of the plurality of first video segments according to the number of the current online clients;
detecting the duration of each second video segment;
and distributing each second video segment to the first video segment according to the number of the plurality of first video segments and the time length of each second video segment to obtain a plurality of first video segments, wherein the time lengths of the plurality of first video segments are equal.
4. The method of claim 2, wherein said assigning each second video segment to a first video segment resulting in a plurality of first video segments comprises:
identifying each second video segment and determining the content type of each second video segment;
and distributing each second video segment to the first video segments according to the content type of each second video segment to obtain a plurality of first video segments, wherein the content types of the second video segments corresponding to each first video segment are the same.
5. The method according to claim 1, wherein said processing each first video segment according to its corresponding annotation information set to obtain processed video information comprises:
determining a video processing program corresponding to the video characteristic label in the annotation information set according to the annotation information set corresponding to each first video segment;
and calling a video processing program corresponding to the video characteristic label in the annotation information set to process the first video segment.
6. The method of claim 5, wherein the markup file further comprises: label grade sets corresponding to the video characteristic labels, wherein each label grade set comprises a plurality of label grade parameters; the annotation information set corresponding to the first video segment further comprises: label grade parameters corresponding to video characteristic labels in the label information set;
the calling a video processing program corresponding to the video characteristic tag in the annotation information set to process the first video segment includes: and calling a video processing program corresponding to the video characteristic label in the annotation information set, and processing the first video segment by adopting the label level parameter corresponding to the video characteristic label in the annotation information set.
7. The method of claim 1, wherein the video characteristic labels in the annotation file comprise a plurality of the following: a contrast label, a brightness label, a motion label, and a human face label.
8. A video processing platform, the platform comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring target video information to be processed;
the dividing unit is used for dividing the target video information to obtain a plurality of first video segments;
the system comprises a sending unit, a receiving unit and a processing unit, wherein the sending unit is used for respectively sending a plurality of first video segments and an annotation file to each client, and the annotation file comprises a plurality of preset video characteristic labels;
the receiving unit is used for receiving annotation information sets corresponding to a plurality of first video segments returned by each client, wherein the annotation information set corresponding to the first video segment comprises at least one video characteristic tag in the plurality of video characteristic tags;
and the processing unit is used for processing the first video segments according to the annotation information set corresponding to each first video segment to obtain the processed video information.
9. An electronic device, comprising: a memory, a processor;
a memory; a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the method according to the executable instructions of any one of claims 1-7.
10. A computer-readable storage medium having computer-executable instructions stored thereon, which when executed by a processor, perform the method of any one of claims 1-7.
CN202110458994.1A 2021-04-27 2021-04-27 Video processing method, processing platform, electronic device and storage medium Active CN115250377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110458994.1A CN115250377B (en) 2021-04-27 2021-04-27 Video processing method, processing platform, electronic device and storage medium


Publications (2)

Publication Number Publication Date
CN115250377A true CN115250377A (en) 2022-10-28
CN115250377B CN115250377B (en) 2024-04-02

Family

ID=83697510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110458994.1A Active CN115250377B (en) 2021-04-27 2021-04-27 Video processing method, processing platform, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115250377B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006074742A (en) * 2004-08-04 2006-03-16 Noritsu Koki Co Ltd Photographing scene correcting method, program, and photographing scene correction processing system implementing the method
CN106162323A (en) * 2015-03-26 2016-11-23 无锡天脉聚源传媒科技有限公司 A kind of video data handling procedure and device
CN109525901A (en) * 2018-11-27 2019-03-26 Oppo广东移动通信有限公司 Method for processing video frequency, device, electronic equipment and computer-readable medium
CN109614517A (en) * 2018-12-04 2019-04-12 广州市百果园信息技术有限公司 Classification method, device, equipment and the storage medium of video
US20190174189A1 (en) * 2017-12-04 2019-06-06 Boe Technology Group Co., Ltd. Video playing method, video playing device, video playing system, apparatus and computer-readable storage medium
CN111416950A (en) * 2020-03-26 2020-07-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN111901535A (en) * 2020-07-23 2020-11-06 北京达佳互联信息技术有限公司 Video editing method, device, electronic equipment, system and storage medium


Also Published As

Publication number Publication date
CN115250377B (en) 2024-04-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant