CN117156224A - Video processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117156224A
CN117156224A (application number CN202311155211.8A)
Authority
CN
China
Prior art keywords
video
video segment
live broadcasting
live
broadcasting room
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311155211.8A
Other languages
Chinese (zh)
Inventor
李琼
欧阳天鹏
刘行
张晓光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202311155211.8A priority Critical patent/CN117156224A/en
Publication of CN117156224A publication Critical patent/CN117156224A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331Caching operations, e.g. of an advertisement for later insertion during playback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of the disclosure provide a video processing method and apparatus, an electronic device and a storage medium. The method comprises: determining a first video segment corresponding to a first live broadcasting room; determining the start-stop time of a second video segment corresponding to the first video segment, wherein the second video segment is highlight moment video segment material of the first video segment, selected from a third video segment corresponding to a second live broadcasting room as material that can increase the access amount of the second live broadcasting room, and the live content of the first and second live broadcasting rooms meets a preset association condition; and pushing the first live broadcasting room based on the first video segment, the start-stop time of the second video segment and the third video segment. The scheme allows highlight moment video segment material to be applied to a live broadcasting room immediately for highlight display, ensuring the timeliness of the material.

Description

Video processing method, device, electronic equipment and storage medium
Technical Field
The embodiment of the disclosure relates to a data processing technology, in particular to a video processing method, a video processing device, electronic equipment and a storage medium.
Background
With the continuous development of live broadcast technology, many business parties use short videos to promote live broadcasting rooms. To obtain a better promotion effect, highlight moment video segments, i.e. the more striking or attractive parts of a video, are extracted from the video. In general, however, such highlight segments are clipped manually by a video editor, so the highlight moment video segments have poor timeliness and cannot be used immediately, and the access amount of the live broadcasting room suffers.
Disclosure of Invention
The disclosure provides a video processing method and apparatus, an electronic device and a storage medium, so that suitable highlight moment video segment material can be quickly clipped into a short video, thereby increasing the access amount of a live broadcasting room.
In a first aspect, an embodiment of the present disclosure provides a video processing method, including:
determining a first video segment corresponding to a first live broadcasting room, wherein the first video segment is real-time live video data of the first live broadcasting room;
determining the start-stop time of a second video segment corresponding to a first video segment, wherein the second video segment is a highlight moment video segment material of the first video segment, the second video segment is a highlight moment video segment material which is selected from a third video segment corresponding to a second live broadcast room and can promote the access amount of the second live broadcast room, the third video segment is historical live broadcast video data of the second live broadcast room, and the live broadcast content between the first live broadcast room and the second live broadcast room meets preset association conditions;
and pushing the first live broadcasting room based on the first video segment, the start-stop time of the second video segment and the third video segment.
In a second aspect, embodiments of the present disclosure further provide a video processing apparatus, the apparatus including:
the first determining module is used for determining a first video segment corresponding to a first live broadcasting room, wherein the first video segment is real-time live broadcasting video data of the first live broadcasting room;
the second determining module is used for determining a start-stop time of a second video segment corresponding to the first video segment, wherein the second video segment is a highlight moment video segment material of the first video segment, the second video segment is a highlight moment video segment material which is selected from a third video segment corresponding to a second live broadcasting room and can promote the access amount of the second live broadcasting room, the third video segment is historical live broadcasting video data of the second live broadcasting room, and the live broadcasting contents of the first live broadcasting room and the second live broadcasting room meet preset association conditions;
and the video processing module is used for pushing the first live broadcasting room based on the first video clip, the start-stop time of the second video clip and the third video clip.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the video processing method of any one of the above embodiments.
In a fourth aspect, there is also provided in an embodiment of the disclosure a computer readable medium storing computer instructions for causing a processor to execute the video processing method according to any one of the above embodiments.
In the embodiment of the disclosure, when a short video is used to increase the access amount of a live broadcasting room, a first video segment corresponding to the first live broadcasting room and the start-stop time of a second video segment corresponding to that first video segment are determined. The second video segment is highlight moment video segment material, selected from a third video segment corresponding to a second live broadcasting room, that can increase the access amount of the second live broadcasting room; the third video segment is historical live video data of the second live broadcasting room, and the live content of the first and second live broadcasting rooms meets a preset association condition. The first live broadcasting room is then pushed using the first video segment, the start-stop time of the second video segment and the third video segment. Because the second video segment used to promote the first live broadcasting room is drawn from the third video segment of a second live broadcasting room that satisfies the preset association condition, the highlight moment material indicated by the second video segment can fully represent the core of the first live broadcasting room's live content. The highlight display can therefore take place while the first live broadcasting room is still live, ensuring the timeliness of the highlight moment video segment material: there is no need to wait until the broadcast ends and review the whole recording before suitable material can be generated. Meanwhile, when the second video segment is determined, only its start-stop time is generated and stored rather than the segment itself, which reduces both the storage cost and the transmission traffic cost of the highlight moment video segment material.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the disclosure;
FIG. 2 is an interactive schematic diagram of a video processing flow to which embodiments of the present disclosure are applicable;
FIG. 3 is a flow chart of a video stitching method applicable to embodiments of the present disclosure;
FIG. 4 is a flowchart of another video processing method according to an embodiment of the present disclosure;
fig. 5 is a schematic drawing of extracting video clip materials at a highlight moment in a video processing flow, which is applicable to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure;
Fig. 7 is a schematic structural diagram of an electronic device for implementing a video processing method according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e. "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifiers "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner in accordance with relevant laws and regulations, of the type, scope of use and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the operation the user requests will require the user's personal information to be obtained and used. The user can thus autonomously decide, based on the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server or storage medium, that performs the operations of the technical scheme of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by means of, for example, a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control allowing the user to choose to "agree" or "disagree" to provide personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
It will be appreciated that the data (including but not limited to the data itself, the acquisition or use of the data) involved in the present technical solution should comply with the corresponding legal regulations and the requirements of the relevant regulations.
Fig. 1 is a schematic flow chart of a video processing method provided by an embodiment of the present disclosure. The embodiment of the present disclosure is applicable to the case of using a short video to increase the access amount of a live broadcasting room. The method may be performed by a video processing apparatus, which may be implemented in software and/or hardware and is generally integrated on any electronic device having a network communication function, such as a mobile terminal, a PC or a server.
As shown in fig. 1, the video processing method of the embodiment of the present disclosure may include the following processes:
s110, determining a first video segment corresponding to the first live broadcasting room, wherein the first video segment is real-time live broadcasting video data of the first live broadcasting room.
Live broadcasting promotion takes two main forms: promoting a live broadcasting room with short videos, and directly delivering the live broadcasting room itself. In the short video form, short videos in the recommendation stream lead users into the live broadcasting room; live advertisements, a conversion component and a breathing-light effect are overlaid, and the main entrances into the live broadcasting room are the conversion component and clicking the live broadcasting room icon. In the direct delivery form, the live broadcasting room is pushed directly in the recommendation stream: the whole screen is clickable, the information stream pulls the live broadcasting room for real-time rendering, and a conversion card style is overlaid to raise users' interest in clicking and guide them into the live broadcasting room.
The first live broadcasting room may be a live broadcasting room whose access amount is to be increased in short video form, and the first video segment may be the video stream data generated by the real-time broadcast of the first live broadcasting room.
S120, determining starting and stopping time of a second video segment corresponding to the first video segment, wherein the second video segment is a highlight moment video segment material of the first video segment, the second video segment is a highlight moment video segment material which is selected from a third video segment corresponding to the second live broadcasting room and can improve access quantity of the second live broadcasting room, the third video segment is historical live broadcasting video data of the second live broadcasting room, and live broadcasting contents of the first live broadcasting room and the second live broadcasting room meet preset association conditions.
When a live broadcasting room is delivered directly, short videos are also used to guide traffic and increase the access amount of the live broadcasting room, and the short videos are post-processed with highlight moment video segment material of the live content so as to increase the access amount more effectively. However, the following problems arise when short videos are used in this way: the timeliness of a short video is low, generating one takes considerable time and cost, it is difficult to use immediately, and storing the generated short videos is expensive.
Referring to fig. 2, when highlight moment video segment material is generated for the first live broadcasting room, a second live broadcasting room whose live content meets the preset association condition with the first live broadcasting room is determined from the live broadcasting rooms whose past broadcasts have produced historical live video data. Highlight moment video segment material capable of increasing the access amount of the second live broadcasting room can then be extracted from the third video segment corresponding to the second live broadcasting room, and this material is taken as the highlight moment video segment material corresponding to the first video segment. In this way, the second video segment extracted from the historical live video data of the second live broadcasting room can be pushed directly as the highlight moment material of the first live broadcasting room, so the highlight moment video segment can be displayed while the first live broadcasting room is broadcasting, without waiting for the whole broadcast to be reviewed and the material to be selected frame by frame afterwards, ensuring the timeliness of the highlight moment video segment material.
Referring to fig. 2, when a second video segment containing highlight moment video segment material is generated for the first live broadcasting room, the second video segment corresponding to the first video segment would normally need to be stored so that the material can be sent to the client corresponding to the first live broadcasting room on request. However, storing the second video segment itself occupies storage resources, and this occupation grows as the demand for highlight moment material increases; moreover, as time goes on, the highlight moment material required by the first live broadcasting room changes dynamically, occupying storage resources further. Therefore, when the second video segment corresponding to the first video segment is determined, only the start-stop time of the second video segment within the third video segment needs to be determined. No large amount of extra storage is occupied for the highlight moment material, which reduces the storage cost, relieves the storage pressure the server would otherwise face in generating the material, saves the transmission traffic cost of that material when the start-stop time is used, and allows the coverage of highlight moment video segment material to be expanded while the storage cost is reduced.
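To make the storage saving concrete, the server-side bookkeeping described above can be sketched as a store that keeps only (start, end) pairs per live broadcasting room. This is a minimal sketch: the class names, the in-memory dictionary and the seconds-based times are illustrative assumptions, not part of the disclosed scheme.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class HighlightRange:
    """Start-stop time (in seconds) of a highlight segment inside the
    source (third) video segment; only these numbers are stored, never
    the clip bytes themselves."""
    start: float
    end: float

    def duration(self) -> float:
        return self.end - self.start


class HighlightStore:
    """Hypothetical in-memory store: one list of ranges per live room."""

    def __init__(self) -> None:
        self._ranges: dict[str, list[HighlightRange]] = {}

    def add(self, room_id: str, start: float, end: float) -> None:
        if end <= start:
            raise ValueError("end must come after start")
        self._ranges.setdefault(room_id, []).append(HighlightRange(start, end))

    def ranges_for(self, room_id: str) -> list[HighlightRange]:
        # Each range costs a few bytes, versus megabytes for a
        # re-encoded clip file of the same highlight.
        return list(self._ranges.get(room_id, []))
```

A stored range stays the same size regardless of clip length, whereas materializing each second video segment would cost storage proportional to its duration and bitrate.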
S130, pushing the first live broadcasting room based on the first video segment, the start-stop time of the second video segment and the third video segment.
The second video segment is highlight moment video segment material selected from the third video segment corresponding to the second live broadcasting room because it can increase the access amount of the second live broadcasting room, and the first and second live broadcasting rooms meet the preset association condition, so the second video segment can serve as highlight moment video segment material for the first video segment of the first live broadcasting room. The second video segment can therefore be extracted from the third video segment by its start-stop time, and the extracted second video segment and the first video segment can be used to push the first live broadcasting room.
As an optional but non-limiting implementation, pushing the first live broadcasting room based on the first video segment, the start-stop time of the second video segment and the third video segment comprises the following steps A1-A2:
and A1, transmitting at least one second video segment start-stop time to the corresponding client side of the first live broadcasting room so that the corresponding client side of the first live broadcasting room can extract the second video segment from the third video segment based on the second video segment start-stop time.
And A2, sending the first video clip to the corresponding client of the first live broadcasting room, so that the corresponding client of the first live broadcasting room carries out video clip preprocessing on the second video clip and the first video clip and displays the second video clip and the first video clip on the corresponding client of the first live broadcasting room.
Referring to fig. 2, when the client corresponding to the first live broadcasting room sends the server a request for highlight moment video segment material for the first live broadcasting room, the server may send the client the start-stop times of the second video segments extracted from the third video segment corresponding to the second live broadcasting room. A second video segment start-stop time describes the start time and end time, within the third video segment, of highlight moment material that can increase the access amount of the second live broadcasting room. The server thus only needs to store the start-stop times of the highlight moment video segment material to represent the extracted segments; it does not need to store the segments themselves, reducing the occupation of storage resources.
Referring to fig. 2 and fig. 3, at least one second video segment start-stop time is sent to the client, and the client corresponding to the first live broadcasting room can extract and download the second video segment from the third video segment supplied by the server according to the received start-stop time. The client can thus extract the second video segments that meet its own needs from the third video segment, making the generation of highlight moment material more flexible and dynamic: the material can be adjusted continuously by modifying a second video segment start-stop time or obtaining an updated one. At the same time, the transmission traffic cost of pulling highlight moment material on the client is reduced.
Referring to fig. 3, the second video segment is the video segment material, extracted from the third video segment corresponding to the second live broadcasting room, that can increase the access amount of the second live broadcasting room and serves as the highlight moment material corresponding to the first video segment. When the client corresponding to the first live broadcasting room receives the first video segment sent by the server together with at least one second video segment start-stop time corresponding to the first video segment, it extracts and downloads video segments from the third video segment according to the indicated start-stop times. The at least one second video segment start-stop time may be recorded as: highlight 1 (start time 1, end time 1), highlight 2 (start time 2, end time 2), highlight 3 (start time 3, end time 3), ..., highlight N (start time N, end time N).
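The client-side extraction step, cutting each recorded highlight out of the downloaded third video segment by its start-stop time, might be sketched as follows. The frame-list representation and the function name are assumptions for illustration; a real client would seek within an encoded stream rather than slice decoded frames.

```python
def extract_highlights(frames, fps, highlight_times):
    """Cut sub-clips out of a decoded third video segment.

    `frames` is the full frame list of the third video segment, `fps` its
    frame rate, and `highlight_times` the received list of
    (start_seconds, end_seconds) pairs, i.e. the second video segment
    start-stop times. Returns one frame list per highlight.
    """
    clips = []
    for start, end in highlight_times:
        first = int(start * fps)
        # Clamp to the segment length in case a range overruns the video.
        last = min(int(end * fps), len(frames))
        clips.append(frames[first:last])
    return clips
```

Because only the time pairs travel over the network, the client pulls exactly the ranges it needs from the third video segment instead of downloading pre-cut clips.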
Referring to fig. 2 and fig. 3, the client corresponding to the first live broadcasting room may receive the first video segment sent by the server, and then perform video segment preprocessing on the second video segment and the first video segment before rendering and displaying them. The video segment preprocessing may include at least one of: splicing video segments, adding special effects, adding stickers, adding transition animations, and adding watermarks.
Referring to fig. 3, optionally, after the splicing of the first video segment with its corresponding second video segment is completed, the spliced video is configured so that the first video in the video list is preloaded before the spliced video is triggered for display, and the remaining videos in the list are preloaded when the spliced video is triggered to display the first video. Splicing the second video segment with the first video segment involves the video display style (vertical or horizontal), highlight moment judgment, highlight moment duration and the splicing mode (for example, setting the playing progress so that the highlight moment video segment material plays from the nth second, i.e. the highlight moment, to the end, and plays from the beginning on the next playback).
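The splicing mode mentioned above, where the highlight moment video segment material plays from the nth second to the end on the first pass and from the beginning on later passes, can be sketched as a small helper. The function name and the loop-index convention are illustrative assumptions:

```python
def playback_offset(loop_index: int, highlight_offset: float) -> float:
    """Start position (seconds) for each playback pass of the spliced clip.

    The first pass jumps straight to the highlight offset (the "nth
    second"); every subsequent loop restarts from the beginning, matching
    the splicing mode described in the text.
    """
    return highlight_offset if loop_index == 0 else 0.0
```

A player would call this once per loop to set the playing progress before starting the pass.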
According to the technical scheme, when a short video is used to increase the access amount of a live broadcasting room, the access amount of the first live broadcasting room can be increased using the first video segment, the start-stop time of the second video segment and the third video segment. Because the second video segment used to promote the first live broadcasting room comes from the third video segment of a second live broadcasting room that meets the preset association condition with the first live broadcasting room, the highlight moment video segment material indicated by the second video segment can fully represent the core of the first live broadcasting room's live content, and the second video segment can be displayed as a highlight in the first live broadcasting room while it is still broadcasting. The timeliness of the highlight moment material is thus ensured: suitable material can be generated without waiting for the broadcast of the first live broadcasting room to finish and its content to be reviewed. Meanwhile, when the second video segment is determined, only its start-stop time is generated and stored rather than the segment itself, which reduces both the storage cost and the transmission traffic cost of the highlight moment video segment material, so that its coverage can be expanded while the storage cost is reduced.
Fig. 4 is a schematic flow chart of another video processing method provided by an embodiment of the present disclosure, in which the process of determining the start-stop time of the second video segment corresponding to the first video segment is further optimized on the basis of the foregoing embodiment; this embodiment may be combined with each alternative in one or more of the foregoing embodiments.
As shown in fig. 4, the video processing method of the embodiment of the present disclosure may include the following processes:
s410, determining a first video segment corresponding to a first live broadcasting room, wherein the first video segment is real-time live broadcasting video data of the first live broadcasting room.
S420, determining at least two third video clips in a second live broadcasting room and their corresponding interaction parameter values, wherein the interaction parameter is a detection index for detecting whether a video clip contains highlight moment video clip material capable of driving an increase in the access volume of the second live broadcasting room.
The third video segment is historical live video data of the second live broadcasting room, and the live contents of the first live broadcasting room and the second live broadcasting room meet a preset association condition, so that the core of the live content of the second live broadcasting room can represent, to a certain extent, the core of the live content of the first live broadcasting room. Optionally, the live contents of the first and second live broadcasting rooms meeting the preset association condition includes the product identification information indicated by the live content of the first live broadcasting room and the product identification information indicated by the live content of the second live broadcasting room meeting a preset similarity condition. The product identification information includes a product type, a product name, and the like.
Optionally, the interaction parameter may be a video duration of the video clip, a click rate of the video clip, a conversion rate of the video clip, a product of the click rate and the conversion rate of the video clip, a total play volume of the video clip, an instantaneous play rate (such as a 3 s play rate) of the video clip, a play completion rate of the video clip, an average play duration of the video clip, a like rate of the video clip, a comment rate of the video clip, an activation rate of the video clip, a feedback rate of the video clip, a payment count of the video clip, a payment rate of the video clip, and the like. The interaction parameter may also be the number of plays of the video clip, the attention attracted by the video clip, the penetration rate of the video clip, and the like.
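Several of the rate-type interaction parameters listed above can be derived from raw per-clip counters. The sketch below is illustrative only; the counter names and formulas are assumptions, since the disclosure does not define them.

```python
# Minimal sketch of deriving per-clip interaction parameters (click rate,
# completion rate, like rate, etc.) from raw counters. All names and
# formulas are illustrative assumptions, not defined by the disclosure.

def interaction_params(stats: dict) -> dict:
    plays = stats["plays"] or 1  # guard against division by zero
    return {
        "click_rate": stats["clicks"] / stats["impressions"],
        "completion_rate": stats["completions"] / plays,
        "like_rate": stats["likes"] / plays,
        "comment_rate": stats["comments"] / plays,
        "avg_play_duration_s": stats["total_play_seconds"] / plays,
    }

params = interaction_params({
    "impressions": 1000, "clicks": 120, "plays": 100,
    "completions": 40, "likes": 25, "comments": 5,
    "total_play_seconds": 3200,
})
# e.g. click_rate 0.12, completion_rate 0.4, avg_play_duration_s 32.0
```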
As an optional but non-limiting implementation manner, determining at least two third video clips in the second live broadcasting room and their corresponding interaction parameter values includes the following steps B1-B2:
And B1, determining a second live broadcasting room associated with the first live broadcasting room, wherein the product identification information of the products pushed in the live broadcast of the first live broadcasting room and the product identification information of the products pushed in the live broadcast of the second live broadcasting room meet a preset similarity condition.
And B2, acquiring at least two third video segments in the second live broadcasting room and interaction parameter values corresponding to the third video segments from the historical live broadcasting video data corresponding to the second live broadcasting room.
Referring to fig. 2 and fig. 5, when the client corresponding to the first live broadcasting room sends the server a request for the highlight moment video clip material corresponding to the first live broadcasting room, a second live broadcasting room whose live content meets the preset association condition with the first live broadcasting room is determined from a plurality of live broadcasting rooms whose live broadcasts have formed historical live video data, and it is ensured that the product identification information of the products pushed in the first live broadcasting room and that of the second live broadcasting room meet the preset similarity condition. At least two third video segments in the second live broadcasting room can then be obtained from the historical live video data corresponding to the second live broadcasting room, and the interaction parameter value corresponding to each third video segment also needs to be marked. For example, referring to fig. 5, for six 1-minute third video clips, each third video clip is marked with its interaction parameter value.
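Steps B1-B2 can be sketched as follows. The disclosure does not specify the similarity measure, so a simple token-overlap (Jaccard) score over product identifiers stands in for it here; the threshold value and data layout are likewise assumptions.

```python
# Sketch of step B1: selecting a second live broadcasting room whose pushed
# product identification information is similar to the first room's.
# The Jaccard measure and the 0.5 threshold are illustrative assumptions.

def jaccard(a: set, b: set) -> float:
    """Token-overlap similarity between two product-identifier sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def find_associated_room(first_products: set, candidate_rooms: dict,
                         threshold: float = 0.5):
    """Return the candidate room id with the highest similarity >= threshold."""
    best_id, best_score = None, threshold
    for room_id, products in candidate_rooms.items():
        score = jaccard(first_products, products)
        if score >= best_score:
            best_id, best_score = room_id, score
    return best_id

room = find_associated_room(
    {"lipstick", "cosmetics"},
    {"roomA": {"lipstick", "cosmetics", "skincare"},
     "roomB": {"laptop", "phone"}},
)
# roomA matches (similarity 2/3); roomB does not
```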
S430, selecting at least one fourth video segment from the at least two third video segments according to the corresponding interaction parameter values of the at least two third video segments.
Referring to fig. 5, according to the interaction parameter values corresponding to the third video segments, it may be determined whether each third video segment contains highlight moment video segment material capable of driving an increase in the access volume of the second live broadcasting room, thereby implementing a coarse screening of the third video segments; the remaining third video segments are determined as fourth video segments. A fourth video segment contains highlight moment video segment material capable of driving an increase in the access volume of the second live broadcasting room. Optionally, if the interaction parameter value corresponding to a third video segment is detected to be greater than or equal to a preset interaction parameter threshold, it is determined that the third video segment contains such highlight moment video segment material; if the interaction parameter value is detected to be smaller than the preset interaction parameter threshold, it is determined that the third video segment does not.
For example, referring to fig. 5, based on a series of strategies, whether a third video segment contains highlight moment video segment material is determined according to the interaction parameter values corresponding to the third video segments and the corresponding interaction parameter thresholds. For instance, the number of plays of a video segment, the attention attracted by a video segment, and the penetration rate of a video segment can be used as the interaction parameter indexes defining highlight moment video segment material, so that relatively rich live broadcast formats (flash sales, lotteries, popular product reviews, live broadcast previews, etc.) can be located.
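The coarse screening described above reduces to a threshold filter. This sketch is illustrative: the clip record layout, the single scalar `interaction_value`, and the 0.6 threshold are all assumptions.

```python
# Sketch of the coarse screening in S430: keep third video clips whose
# interaction parameter value meets a preset threshold; the survivors
# become the fourth video clips. Data layout and threshold are assumed.

def coarse_screen(third_clips: list, threshold: float) -> list:
    """A clip with interaction value >= threshold is judged to contain
    highlight moment material and is kept as a fourth clip."""
    return [clip for clip in third_clips
            if clip["interaction_value"] >= threshold]

third_clips = [
    {"clip_id": 1, "interaction_value": 0.82},
    {"clip_id": 2, "interaction_value": 0.31},
    {"clip_id": 3, "interaction_value": 0.67},
]
fourth_clips = coarse_screen(third_clips, threshold=0.6)
# clips 1 and 3 survive; clip 2 is discarded
```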
S440, identifying the starting and ending time of the second video segment corresponding to the first video segment from at least one fourth video segment.
Referring to fig. 5, for the six 1-minute third video segments, each third video segment is marked with its interaction parameter value, so that the text, punctuation characters, and respective start and end times contained in a fourth video segment can be identified using speech-to-text technology, and thus the start-stop time of at least one second video segment contained in the fourth video segment can be identified.
As an optional but non-limiting implementation manner, identifying the start-stop time of the second video segment corresponding to the first video segment from at least one fourth video segment includes the following steps C1-C3:
And C1, identifying at least three clause texts from the fourth video segment, and predicting keywords corresponding to the clause texts.
And C2, generating at least two candidate compound sentence texts based on keywords corresponding to at least three sentence texts, wherein each candidate compound sentence text is associated with at least one start-stop time of a fifth video segment, and the fifth video segment is a local video segment in the fourth video segment, which is associated with the keywords corresponding to the sentence texts.
And C3, determining a target compound sentence text from at least two candidate compound sentence texts, and determining the start-stop time of a second video segment corresponding to the first video segment based on at least one start-stop time of a fifth video segment associated with the target compound sentence text.
Referring to fig. 5, for a fourth video segment, at least three clause texts can be identified from the fourth video segment using speech-to-text technology, and an interface service is called to predict the word segmentation label of each clause text to obtain the keyword corresponding to each clause text. Keywords of a clause text can be a product name, a product function, product promotion information, live speech speed, the applicable object of a product, and the like.
Referring to fig. 5, by combining the keywords corresponding to the clause texts, at least two candidate compound sentence texts can be generated while ensuring sentence integrity (for example, product or selling point information labels are attached to the clause texts, which are then combined into candidate compound sentence texts). Each candidate compound sentence text is associated with the start-stop time of at least one fifth video segment, where the fifth video segment is the local video segment in the fourth video segment associated with the keywords corresponding to the clause texts. A video is generally about 2 minutes long, so 2-5 compound sentences can typically be generated.
Referring to fig. 5, optionally, determining a target compound sentence text from the at least two candidate compound sentence texts includes: performing a weighted calculation on the keywords included in each candidate compound sentence text, and determining the target compound sentence text from the at least two candidate compound sentence texts based on the weighted calculation result corresponding to each candidate compound sentence text. For example, a score is calculated by weighting the keywords included in a candidate compound sentence text; if the score is greater than a threshold, the candidate compound sentence text is determined as the target compound sentence text, and otherwise it is discarded. Furthermore, the start-stop time of the at least one fifth video segment associated with the target compound sentence text can be determined as the start-stop time of the second video segment corresponding to the first video segment, so that highlight moment video segment material for the first live broadcasting room can be produced from the second live broadcasting room.
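The weighted selection in steps C2-C3 can be sketched as below. The keyword categories, their weights, and the threshold are all illustrative assumptions; the disclosure only states that a weighted score over keywords is compared against a threshold.

```python
# Sketch of steps C2-C3: score candidate compound sentence texts by a
# weighted sum over their keyword categories; the best candidate above
# the threshold supplies the start-stop times of its associated fifth
# segments. Weights and threshold are assumptions for illustration.

KEYWORD_WEIGHTS = {"product_name": 3.0, "selling_point": 2.0,
                   "promotion": 2.5, "applicable_object": 1.0}

def score(candidate: dict) -> float:
    return sum(KEYWORD_WEIGHTS.get(k, 0.0) for k in candidate["keywords"])

def select_target(candidates: list, threshold: float = 4.0):
    """Return the fifth-segment (start, stop) spans of the best-scoring
    candidate whose score meets the threshold, or None if none qualifies."""
    viable = [c for c in candidates if score(c) >= threshold]
    if not viable:
        return None
    best = max(viable, key=score)
    return best["spans"]

spans = select_target([
    {"keywords": ["product_name", "promotion"], "spans": [(12.0, 27.5)]},
    {"keywords": ["applicable_object"], "spans": [(40.0, 55.0)]},
])
# first candidate scores 5.5 >= 4.0 and is selected
```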
S450, pushing the first live broadcasting room based on the first video clip, the start-stop time of the second video clip and the third video clip.
According to the technical scheme, when a short video is used to increase the access volume of a live broadcasting room, the access volume of the first live broadcasting room can be increased by using the first video segment, the start-stop time of the second video segment, and the third video segment. Because the second video segment used to increase the access volume of the first live broadcasting room is derived from the third video segment of a second live broadcasting room that meets a preset association condition with the first live broadcasting room, the highlight moment video segment material indicated by the second video segment can be ensured to fully represent the core of the live broadcast content of the first live broadcasting room, and the highlighted display of the second video segment in the first live broadcasting room can be realized. This also ensures the timeliness of the highlight moment video segment material: suitable highlight moment video segment material can be generated without waiting to review the live broadcast content of the first live broadcasting room after its live broadcast has ended, which reduces the skill requirements on editing personnel, shortens the editing process, and improves editing efficiency and quality while reducing editing time, thereby improving the effect on the access volume of the live broadcasting room. Meanwhile, when the second video segment is determined, only its start-stop time is generated and stored rather than the second video segment itself, which reduces the storage cost and the transmission traffic cost of the highlight moment video segment material, and in turn enlarges the coverage of the highlight moment video segment material while reducing storage resource cost.
Fig. 6 is a schematic structural diagram of a video processing apparatus provided in an embodiment of the present disclosure. The embodiment of the present disclosure is applicable to the case of using a short video to increase the access volume of a live broadcasting room. The video processing apparatus may be implemented in software and/or hardware, and is generally integrated on any electronic device having a network communication function, where the electronic device may be a mobile terminal, a PC, or a server.
As shown in fig. 6, the video processing apparatus of the embodiment of the present disclosure may include the following: a first determination module 610, a second determination module 620, and a video processing module 630. Wherein:
a first determining module 610, configured to determine a first video segment corresponding to a first live room, where the first video segment is live video data of the first live room;
a second determining module 620, configured to determine a start-stop time of a second video segment corresponding to the first video segment, where the second video segment is highlight moment video segment material of the first video segment, selected from a third video segment corresponding to a second live broadcasting room and capable of increasing the access volume of the second live broadcasting room; the third video segment is historical live video data of the second live broadcasting room, and the live contents of the first live broadcasting room and the second live broadcasting room meet a preset association condition;
The video processing module 630 is configured to push the first live broadcast room based on the first video clip, the start-stop time of the second video clip, and the third video clip.
On the basis of the foregoing embodiment, optionally, determining a start-stop time of the second video segment corresponding to the first video segment includes:
determining at least two third video clips in a second live broadcasting room and their corresponding interaction parameter values, wherein the interaction parameter is a detection index for detecting whether a video clip contains highlight moment video clip material capable of driving an increase in the access volume of the second live broadcasting room;
screening at least one fourth video segment from the at least two third video segments according to the values of the interaction parameters corresponding to the at least two third video segments;
and identifying the starting and ending time of the second video segment corresponding to the first video segment from the at least one fourth video segment.
On the basis of the foregoing embodiment, optionally, determining at least two third video segments and corresponding values of the interaction parameters in the second live broadcast room includes:
determining a second live broadcasting room associated with the first live broadcasting room, wherein product identification information of live broadcasting and pushing in the first live broadcasting room and product identification information of live broadcasting and pushing in the second live broadcasting room meet a preset similarity condition;
And acquiring at least two third video segments in the second live broadcasting room and interaction parameter values corresponding to the third video segments from the historical live broadcasting video data corresponding to the second live broadcasting room.
On the basis of the foregoing embodiment, optionally, identifying, from the at least one fourth video segment, a start-stop time of a second video segment corresponding to the first video segment includes:
identifying at least three clause texts from the fourth video segment, and predicting keywords corresponding to the clause texts;
generating at least two candidate compound sentence texts based on keywords corresponding to at least three sentence texts, wherein each candidate compound sentence text is associated with at least one fifth video segment start-stop time, and the fifth video segment is a local video segment in the fourth video segment, which is associated with the keywords corresponding to the sentence texts;
and determining a target compound sentence text from the at least two candidate compound sentence texts, and determining a second video segment start-stop time corresponding to the first video segment based on at least one fifth video segment start-stop time associated with the target compound sentence text.
On the basis of the foregoing embodiment, optionally, determining the target compound sentence text from the at least two candidate compound sentence texts includes:
And carrying out weighted calculation on keywords included in the candidate compound sentence texts, and determining a target compound sentence text from the at least two candidate compound sentence texts based on weighted calculation results corresponding to the candidate compound sentence texts.
On the basis of the foregoing embodiment, optionally, pushing the first live room based on the first video clip, the start-stop time of the second video clip, and the third video clip includes:
transmitting at least one second video segment start-stop time to the client corresponding to the first live broadcasting room, so that the client corresponding to the first live broadcasting room extracts the second video segment from the third video segment based on the second video segment start-stop time;
and sending the first video segment to the client corresponding to the first live broadcasting room, so that the client corresponding to the first live broadcasting room performs video segment preprocessing on the second video segment and the first video segment and displays them on the client corresponding to the first live broadcasting room.
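The client-side extraction step above can be sketched as a simple cut by start-stop time. A list of frame indices stands in for real decoded video, and the frame rate is an assumption; a real client would trim the container stream instead.

```python
# Sketch of the client-side step: given a second-video-segment start-stop
# time received from the server, cut that span out of the locally obtained
# third video clip before preprocessing/splicing. A frame list stands in
# for decoded video; the fps value is an illustrative assumption.

def extract_segment(frames: list, fps: int, start_s: float,
                    stop_s: float) -> list:
    """Return the frames falling inside [start_s, stop_s)."""
    start_idx = int(start_s * fps)
    stop_idx = int(stop_s * fps)
    return frames[start_idx:stop_idx]

fps = 2                        # deliberately low to keep the example small
third_clip = list(range(20))   # 10 seconds of stand-in "frames"
second_segment = extract_segment(third_clip, fps, start_s=2.0, stop_s=5.0)
# frames 4 through 9 form the extracted second video segment
```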
On the basis of the above embodiment, optionally, the video clip preprocessing includes at least one of: splicing video clips, adding special effects in the video clips, adding stickers in the video clips, adding transition animations in the video clips and adding watermarks in the video clips.
According to the technical scheme provided by the embodiment of the present disclosure, when a short video is used to increase the access volume of a live broadcasting room, the access volume of the first live broadcasting room can be increased by using the first video segment, the start-stop time of the second video segment, and the third video segment. Because the second video segment used to increase the access volume of the first live broadcasting room is derived from the third video segment of a second live broadcasting room that meets a preset association condition with the first live broadcasting room, the highlight moment video segment material indicated by the second video segment can be ensured to fully represent the core of the live broadcast content of the first live broadcasting room, and the highlighted display of the second video segment in the first live broadcasting room can be realized. This also ensures the timeliness of the highlight moment video segment material: suitable highlight moment video segment material can be generated without waiting to review the live broadcast content of the first live broadcasting room after its live broadcast has ended. Meanwhile, when the second video segment is determined, only its start-stop time is generated and stored rather than the second video segment itself, which reduces the storage cost and the transmission traffic cost of the highlight moment video segment material, and in turn enlarges the coverage of the highlight moment video segment material while reducing storage resource cost.
The video processing device provided by the embodiment of the disclosure can execute the video processing method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
It should be noted that each unit and module included in the above apparatus are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present disclosure.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring now to fig. 7, a schematic diagram of an electronic device (e.g., a terminal device or server in fig. 7) 500 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 7 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The electronic device provided by the embodiment of the present disclosure and the video processing method provided by the foregoing embodiment belong to the same inventive concept, and technical details not described in detail in the present embodiment may be referred to the foregoing embodiment, and the present embodiment has the same beneficial effects as the foregoing embodiment.
The present disclosure provides a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the video processing method provided by the above embodiments.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine a first video segment corresponding to a first live broadcasting room, wherein the first video segment is real-time live video data of the first live broadcasting room; determine the start-stop time of a second video segment corresponding to the first video segment, wherein the second video segment is highlight moment video segment material of the first video segment, selected from a third video segment corresponding to a second live broadcasting room and capable of increasing the access volume of the second live broadcasting room, the third video segment is historical live video data of the second live broadcasting room, and the live contents of the first live broadcasting room and the second live broadcasting room meet a preset association condition; and push the first live broadcasting room based on the first video segment, the start-stop time of the second video segment, and the third video segment.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other technical solutions formed by any combination of the features described above, or of their equivalents, without departing from the spirit of the disclosure, for example, solutions in which the features described above are substituted with technical features having similar functions disclosed (but not limited to those disclosed) in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (10)

1. A video processing method, the method comprising:
determining a first video segment corresponding to a first live broadcasting room, wherein the first video segment is real-time live video data of the first live broadcasting room;
determining a start-stop time of a second video segment corresponding to the first video segment, wherein the second video segment is highlight-moment video material for the first video segment, selected from a third video segment corresponding to a second live broadcasting room as material capable of increasing the visit volume of the second live broadcasting room, the third video segment is historical live video data of the second live broadcasting room, and the live content of the first live broadcasting room and the second live broadcasting room satisfies a preset association condition;
and pushing content to the first live broadcasting room based on the first video segment, the start-stop time of the second video segment, and the third video segment.
2. The method of claim 1, wherein determining the start-stop time of the second video segment corresponding to the first video segment comprises:
determining at least two third video segments in the second live broadcasting room and their corresponding interaction parameter values, wherein the interaction parameter is an index for detecting whether a video segment contains highlight-moment video material capable of driving an increase in visits to the second live broadcasting room;
screening at least one fourth video segment from the at least two third video segments according to the interaction parameter values corresponding to the at least two third video segments;
and identifying the start-stop time of the second video segment corresponding to the first video segment from the at least one fourth video segment.
3. The method of claim 2, wherein determining the at least two third video segments in the second live broadcasting room and their corresponding interaction parameter values comprises:
determining the second live broadcasting room associated with the first live broadcasting room, wherein the product identification information promoted in the first live broadcasting room and the product identification information promoted in the second live broadcasting room satisfy a preset similarity condition;
and acquiring, from the historical live video data corresponding to the second live broadcasting room, the at least two third video segments in the second live broadcasting room and the interaction parameter values corresponding to the third video segments.
4. The method of claim 2, wherein identifying the start-stop time of the second video segment corresponding to the first video segment from the at least one fourth video segment comprises:
identifying at least three clause texts from the fourth video segment, and predicting keywords corresponding to the clause texts;
generating at least two candidate compound-sentence texts based on the keywords corresponding to the at least three clause texts, wherein each candidate compound-sentence text is associated with a start-stop time of at least one fifth video segment, and the fifth video segment is a local video segment, within the fourth video segment, that is associated with the keywords corresponding to the clause texts;
and determining a target compound-sentence text from the at least two candidate compound-sentence texts, and determining the start-stop time of the second video segment corresponding to the first video segment based on the start-stop time of the at least one fifth video segment associated with the target compound-sentence text.
5. The method of claim 4, wherein determining the target compound-sentence text from the at least two candidate compound-sentence texts comprises:
performing a weighted calculation on the keywords included in each candidate compound-sentence text, and determining the target compound-sentence text from the at least two candidate compound-sentence texts based on the weighted calculation result corresponding to each candidate compound-sentence text.
6. The method of any of claims 1-5, wherein pushing content to the first live broadcasting room based on the first video segment, the start-stop time of the second video segment, and the third video segment comprises:
sending the start-stop time of at least one second video segment to the client corresponding to the first live broadcasting room, so that the client corresponding to the first live broadcasting room extracts the second video segment from the third video segment based on the start-stop time of the second video segment;
and sending the first video segment to the client corresponding to the first live broadcasting room, so that the client corresponding to the first live broadcasting room performs video segment preprocessing on the second video segment and the first video segment and displays them on the client corresponding to the first live broadcasting room.
7. The method of claim 6, wherein the video segment preprocessing comprises at least one of: splicing video segments, adding special effects to a video segment, adding stickers to a video segment, adding a transition animation to a video segment, and adding a watermark to a video segment.
8. A video processing apparatus, the apparatus comprising:
a first determining module, configured to determine a first video segment corresponding to a first live broadcasting room, wherein the first video segment is real-time live video data of the first live broadcasting room;
a second determining module, configured to determine a start-stop time of a second video segment corresponding to the first video segment, wherein the second video segment is highlight-moment video material for the first video segment, selected from a third video segment corresponding to a second live broadcasting room as material capable of increasing the visit volume of the second live broadcasting room, the third video segment is historical live video data of the second live broadcasting room, and the live content of the first live broadcasting room and the second live broadcasting room satisfies a preset association condition;
and a video processing module, configured to push content to the first live broadcasting room based on the first video segment, the start-stop time of the second video segment, and the third video segment.
9. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video processing method of any of claims 1-7.
10. A storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the video processing method of any of claims 1-7.
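The selection logic recited in claims 2, 4, and 5 (screening historical segments by an interaction parameter, then picking a target compound-sentence text by a weighted keyword score) can be sketched as follows. This is a hypothetical illustration only, not the claimed implementation; all names, thresholds, and keyword weights are assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float        # segment start time in the historical stream (seconds)
    end: float          # segment end time (seconds)
    interaction: float  # interaction parameter value, e.g. normalized likes/comments

def screen_segments(segments, threshold=0.5):
    """Claim 2: keep 'fourth' segments whose interaction value passes a threshold."""
    return [s for s in segments if s.interaction >= threshold]

def score_compound_text(keywords, weights):
    """Claim 5: weighted sum over the keywords of one candidate compound-sentence text."""
    return sum(weights.get(k, 0.0) for k in keywords)

def pick_target(candidates, weights):
    """Claims 4-5: choose the candidate compound-sentence text with the highest
    weighted score; its 'spans' are the start-stop times of the associated
    'fifth' video segments."""
    return max(candidates, key=lambda c: score_compound_text(c["keywords"], weights))

# Hypothetical historical ('third') segments from the second live broadcasting room.
segments = [Segment(0, 12, 0.9), Segment(12, 30, 0.2), Segment(30, 41, 0.7)]
fourth = screen_segments(segments, threshold=0.5)  # keeps the 0.9 and 0.7 segments

# Hypothetical candidate compound-sentence texts with their keywords and spans.
candidates = [
    {"keywords": ["discount", "brand"], "spans": [(0, 12)]},
    {"keywords": ["shipping"], "spans": [(30, 41)]},
]
weights = {"discount": 0.8, "brand": 0.5, "shipping": 0.3}
target = pick_target(candidates, weights)  # score 1.3 beats 0.3
print(len(fourth), target["spans"])  # → 2 [(0, 12)]
```

The chosen spans would then serve as the second video segment's start-stop times sent to the client, which extracts and splices the material as in claims 6 and 7.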
CN202311155211.8A (priority and filing date: 2023-09-07), "Video processing method, device, electronic equipment and storage medium", published as CN117156224A, status: Pending.


Publications (1)

CN117156224A, published 2023-12-01

Family

ID=88886618




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination