CN110933511A - Video sharing method, electronic device and medium


Info

Publication number
CN110933511A
Authority
CN
China
Prior art keywords
video
target
screen information
bullet screen
user
Prior art date
Legal status
Granted
Application number
CN201911206986.7A
Other languages
Chinese (zh)
Other versions
CN110933511B (en)
Inventor
秦兴兴
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911206986.7A
Publication of CN110933511A
Application granted
Publication of CN110933511B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788: Supplemental services, e.g. displaying phone caller identification, shopping application, communicating with other users, e.g. chatting
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Abstract

The embodiment of the invention discloses a video sharing method, an electronic device and a medium. The video sharing method comprises the following steps: acquiring N pieces of target bullet screen information of a first video; acquiring M video clips associated with the N pieces of target bullet screen information; generating a second video based on the M video clips; and sending the second video to N target objects, where a target object is an object that sent target bullet screen information, and N and M are integers greater than 1. The method and device address the problem that, from comment information alone, a user cannot quickly and accurately identify the video segment the comment refers to.

Description

Video sharing method, electronic device and medium
Technical Field
Embodiments of the present invention relate to the field of communications technologies, and in particular, to a video sharing method, an electronic device, and a medium.
Background
At present, when a user watches a video, the user can generally express his own real-time idea and viewpoint only through characters and expressions, for example, when the user sees an interesting picture or video clip, the user can input comment information in a video comment area in a video playing interface for playing the video, and express his own idea and viewpoint of the interesting picture or video clip through characters and expressions.
When a plurality of users watch the same video together, the video clip commented by the comment information cannot be quickly and accurately known only through the comment information consisting of characters and expressions in the video comment area.
Disclosure of Invention
The embodiment of the invention provides a video sharing method, electronic equipment and a medium, and aims to solve the problem that a user cannot quickly and accurately know a video clip commented by comment information through the comment information.
In order to solve the technical problem, the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a video sharing method applied to an electronic device, the method including:
acquiring N pieces of target bullet screen information of a first video;
acquiring M video clips associated with the N pieces of target bullet screen information;
generating a second video based on the M video clips;
sending the second video to N target objects, where a target object is an object that sent target bullet screen information;
wherein N and M are integers greater than 1.
In a second aspect, an embodiment of the present invention provides an electronic device, including:
an information acquisition module, configured to acquire N pieces of target bullet screen information of a first video;
a video acquisition module, configured to acquire M video clips associated with the N pieces of target bullet screen information;
a video synthesis module, configured to generate a second video based on the M video clips;
a video sending module, configured to send the second video to N target objects, where a target object is an object that sent target bullet screen information;
wherein N and M are integers greater than 1.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and being executable on the processor, where the computer program, when executed by the processor, implements the steps of the video sharing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the video sharing method according to the first aspect.
In the embodiment of the invention, a second video can be generated from the video clips associated with the target bullet screen information of a first video, and the second video is sent to the target objects that sent the target bullet screen information. In this way, the video clips associated with the target bullet screen information can be shared quickly, a user can quickly and accurately identify the video clips the target bullet screen information comments on, and multiple users can effectively interact and share video clips of interest through bullet screen information.
Drawings
Fig. 1 is a schematic flowchart of a video sharing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a video playing interface of a first video according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a video playing interface of a first video according to another embodiment of the present invention;
fig. 4 is a schematic diagram of a video playing interface of a first video according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problem of the prior art, embodiments of the present invention provide a video sharing method, an electronic device, and a medium. First, a video sharing method provided by an embodiment of the present invention is described below.
Fig. 1 is a flowchart illustrating a video sharing method according to an embodiment of the present invention. As shown in fig. 1, the video sharing method may include:
step 110, acquiring N pieces of target bullet screen information of a first video;
step 120, acquiring M video clips associated with the N pieces of target bullet screen information;
step 130, generating a second video based on the M video clips;
step 140, sending the second video to N target objects, where a target object is an object that sent target bullet screen information;
wherein N and M are integers greater than 1.
The video sharing method provided by the embodiment of the invention can be applied to an electronic device. In some embodiments, the electronic device may be a server that provides the first video, and the server may be a high-performance computer that stores and processes data. In other embodiments, the electronic device may also be the first video playing device; specifically, the first video playing device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
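To make the four-step flow above more concrete, the following is a minimal sketch in Python. It is an illustration only, not the patented implementation: all names (Danmaku, Clip, get_target_danmaku, and so on), the fixed key "6666", and the assumption that each piece of target bullet screen information maps to a ten-second clip around its playback position are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Danmaku:                 # one piece of bullet screen information
    sender_id: str
    text: str
    timestamp: float           # playback position in the first video (seconds)

@dataclass
class Clip:                    # one video clip, identified by its time range
    start: float
    end: float

KEY = "6666"                   # hypothetical first target element ("key")

def get_target_danmaku(danmaku: List[Danmaku]) -> List[Danmaku]:
    """Step 110: keep only bullet screen information containing the key."""
    return [d for d in danmaku if KEY in d.text]

def get_associated_clips(targets: List[Danmaku]) -> List[Clip]:
    """Step 120: assume each target item is associated with a fixed-length
    clip around its playback position (a simplification for illustration)."""
    return [Clip(max(0.0, d.timestamp - 5), d.timestamp + 5) for d in targets]

def generate_second_video(clips: List[Clip]) -> List[Clip]:
    """Step 130: here the 'second video' is just the clips spliced in order."""
    return sorted(clips, key=lambda c: c.start)

def send_second_video(targets: List[Danmaku], second_video: List[Clip]) -> None:
    """Step 140: send the result to every object that sent target information."""
    for sender in {d.sender_id for d in targets}:
        print(f"sending second video ({len(second_video)} clips) to {sender}")

if __name__ == "__main__":
    all_danmaku = [
        Danmaku("user_a", "6666 great scene", 42.0),
        Danmaku("user_b", "nice", 50.0),
        Danmaku("user_c", "6666", 118.0),
    ]
    targets = get_target_danmaku(all_danmaku)
    send_second_video(targets, generate_second_video(get_associated_clips(targets)))
```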
In the embodiment of the invention, a second video can be generated from the video clips associated with the target bullet screen information of the first video, and the second video is sent to the target objects that sent the target bullet screen information. In this way, the video clips associated with the target bullet screen information can be shared quickly, a user can quickly and accurately identify the video clips the target bullet screen information comments on, and multiple users can effectively interact and share video clips of interest through bullet screen information.
In the embodiment of the present invention, at least one piece of target bullet screen information is acquired; that is, video clips may also be shared when two or more pieces of target bullet screen information are acquired.
In some embodiments of the present invention, when the electronic device is a server or the first video playing device, the multiple pieces of bullet screen information associated with the first video may be automatically analyzed and identified, and target bullet screen information containing a first target element is determined among them. In other embodiments of the present invention, when the electronic device is the first video playing device, a user may select the target bullet screen information from the multiple pieces of bullet screen information displayed in the video playing interface of the first video.
Next, the above two cases will be separately explained.
First, the electronic device is a server or the first video playing device
In some embodiments of the present invention, the specific method of step 110 may include:
identifying bullet screen information containing a first target element among at least two pieces of bullet screen information of the first video, where the first target element comprises at least one of a target character, a target symbol, and a target image; and
acquiring the identified N pieces of bullet screen information containing the first target element when the number of sending objects of the bullet screen information containing the first target element reaches a preset number threshold.
Fig. 2 is a schematic diagram illustrating a video playing interface of a first video according to an embodiment of the present invention. As shown in fig. 2, the barrage comment display area at the top of the video playing interface of the first video is used for displaying barrage information 210 sent by different users. The bullet screen information 210 may be information input by a user in a bullet screen review area in the video playing interface of the first video, and the content of the bullet screen information 210 may be any character, symbol, and image. Wherein the characters may include a combination of one or more of numbers, letters, and words, and the symbols may include punctuation or emoticons.
The target character may be a combination of one or more of preset or user-set numbers, letters and characters, the target symbol may be a preset or user-set punctuation mark or emoticon, and the target image may be a preset or user-set image.
In some embodiments of the invention, the first target element may be used as a "key" that triggers the generation of the second video for video sharing. Different users can input bullet screen information containing the same first target element in the bullet screen comment area, that is, input the same "key", so as to share the video clips they are interested in. The first target element may be at least one of the elements listed above.
The following description takes the case where the first target element is "6666" in the bullet screen information 210 shown in fig. 2, that is, the "key" is "6666", as an example.
In some embodiments, the electronic device may filter the bullet screen information containing "6666" from the at least two pieces of bullet screen information of the first video. When the number of sending objects that upload bullet screen information containing "6666" reaches a preset number threshold, for example 20 people, that is, when 20 users have each input bullet screen information containing "6666" in the bullet screen comment area, the electronic device triggers the video sharing function, takes the bullet screen information containing "6666" as target bullet screen information, and acquires the target bullet screen information.
In other embodiments, the electronic device may filter the bullet screen information containing "6666" from the at least two pieces of bullet screen information of the first video, and when the number of uploaded pieces of bullet screen information containing "6666" reaches a preset bullet screen count, the electronic device triggers the video sharing function, takes the bullet screen information containing "6666" as target bullet screen information, and acquires it.
In the above embodiments, optionally, all the bullet screen information containing the first target element may be used as the target bullet screen information, or only the bullet screen information containing the first target element within a first predetermined time period may be used. The first predetermined time period may be a day, a week, or a month, which is not limited herein.
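As a rough illustration of this trigger condition (hypothetical names and values; the threshold of 20 senders and the one-day window are only the examples used above):

```python
import time
from typing import List, Tuple

SENDER_THRESHOLD = 20        # preset number threshold (20 users in the example)
KEY = "6666"                 # first target element
WINDOW_SECONDS = 24 * 3600   # first predetermined time period (one day here)

def collect_target_danmaku(
    danmaku: List[Tuple[str, str, float]],
) -> List[Tuple[str, str, float]]:
    """Entries are (sender_id, text, upload_time) tuples.

    Keeps key-containing items uploaded within the window, and returns them
    only once the number of distinct senders reaches the threshold."""
    now = time.time()
    hits = [d for d in danmaku if KEY in d[1] and now - d[2] <= WINDOW_SECONDS]
    senders = {sender for sender, _, _ in hits}
    return hits if len(senders) >= SENDER_THRESHOLD else []
```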
Therefore, this way of acquiring target bullet screen information suits scenes where many people actively send bullet screen information: the video sharing function is triggered by different users inputting the same "key", so the electronic device can automatically identify and acquire the target bullet screen information based on the first target element, the information content of the bullet screen information is reused, and video clips are shared automatically.
In some embodiments of the present invention, each piece of target bullet screen information among the N pieces is stored in association with at least one video clip, so that M is greater than or equal to N. In this case, the at least one video clip stored in association with a piece of target bullet screen information may be a video clip sent at the same time the user sent that piece of target bullet screen information.
Therefore, in the embodiment of the present invention, optionally, the specific method of step 120 may include:
acquiring M video clips stored in association with the N pieces of target bullet screen information, where the ith video clip among the M video clips is a video clip intercepted from the first video in response to a first input of an ith user;
where i is an integer greater than 1 and not greater than M, the ith video clip can be any one of the M video clips, and one first input intercepts at least one video clip.
In some embodiments of the present invention, the first input may be a click input to a display screen of the electronic device, a long press input, an input to a key of the electronic device, and the like.
In other embodiments of the present invention, the first input may also be at least one video capturing operation. The user may perform the video capturing operation on the first video within the video playing interface of the first video playing device, so that the first video playing device obtains at least one video clip based on that operation. After the video clip is obtained, the user may input bullet screen information containing the first target element in a bullet screen comment area associated with the obtained video clip, so that the bullet screen information is associated with the obtained video clip.
The video capturing operation may be a shortcut gesture, a combination of several gestures, or a key combination.
For example, when a user sees a video picture or video clip of interest, the user may intercept the video clip with a shortcut gesture, for example by tapping the timeline of the progress bar with two fingers simultaneously to determine a start time point and an end time point, so that the electronic device intercepts the video clip between the start time point and the end time point. As another example, the user may intercept the video picture or video clip with a key combination, for example by actively intercepting the corresponding video picture or video clip through a combination of on-screen shortcut keys or physical keys.
In some embodiments, the barrage comment area associated with the retrieved video clip may be a barrage comment area within a sharing interface or a storage interface of the retrieved video clip.
In some embodiments of the present invention, when the electronic device is the first video playing device, the obtained video clip may be stored locally on the first video playing device in association with its bullet screen information. In other embodiments, when the electronic device is a server, the obtained video clip may be stored locally on the first video playing device in association with its bullet screen information and uploaded to the server when the server needs the video clip associated with the target bullet screen information. In still other embodiments, when the electronic device is a server, the obtained video clip may also be uploaded directly to the server together with its associated bullet screen information, and the server stores the target bullet screen information in association with the obtained video clip.
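The clip interception and association described above might be modelled as follows; the class and method names (ClipStore, intercept, attach_danmaku) are invented for this sketch and are not named in the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CapturedClip:
    start: float                                       # start time point from the first input
    end: float                                         # end time point from the first input
    danmaku: List[str] = field(default_factory=list)   # associated bullet screen information

class ClipStore:
    """Hypothetical local store on the first video playing device: each
    intercepted clip is kept in association with its bullet screen information."""

    def __init__(self) -> None:
        self._clips: List[CapturedClip] = []

    def intercept(self, t1: float, t2: float) -> CapturedClip:
        """First input, e.g. two fingers tapping the progress bar at t1 and t2."""
        clip = CapturedClip(start=min(t1, t2), end=max(t1, t2))
        self._clips.append(clip)
        return clip

    def attach_danmaku(self, clip: CapturedClip, text: str) -> None:
        """Associate bullet screen information entered after interception."""
        clip.danmaku.append(text)

    def clips_for(self, key: str) -> List[CapturedClip]:
        """Return the clips whose associated information contains the key."""
        return [c for c in self._clips if any(key in t for t in c.danmaku)]

# Usage: intercept a clip, attach "6666" bullet screen information, look it up later.
store = ClipStore()
clip = store.intercept(61.0, 75.0)
store.attach_danmaku(clip, "6666 this part is great")
assert store.clips_for("6666") == [clip]
```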
Therefore, in the embodiment of the invention, the user can intercept the video clip through simple and quick operation and upload the bullet screen information associated with the video clip, so that the user can share the interested video clip by using the target bullet screen information.
In some embodiments of the present invention, the specific method of step 130 may include:
determining a first video processing mode based on the first target element;
processing the M video clips according to a first video processing mode to generate a second video;
wherein, the first video processing mode comprises: at least one of an editing manner of video contents of the video clips and a composition order of the video clips.
After the electronic device obtains the target bullet screen information, a first keyword corresponding to the first target element can be determined, a first video processing mode corresponding to the first keyword is searched, the M video segments are processed according to the searched first video processing mode, and a second video is generated.
The first target element may be analyzed to determine the first keyword corresponding to the target character, target symbol, or target image. If the first target element includes a target character, the character itself may be used directly as the first keyword. If the first target element includes a target symbol such as an emoticon, the content the emoticon expresses may be determined and the first keyword derived from it. If the first target element includes a target image, the first keyword may be determined from the image content; for example, if the first target element is a ragged image, the keyword may be determined to be "ragged".
In some embodiments of the present invention, the electronic device may preset a first video processing manner corresponding to each first target element, and the user may set the first video processing manner corresponding to each first target element.
The first video processing mode includes at least one of an editing manner of the video content of the video clips and a composition order of the video clips. Editing of the video content may include changing the speaking rate and action frequency of the characters within a video clip, changing the dubbing of the characters, and the like. The composition order of the video clips may be the playing order of the M video clips in the second video.
For example, when the first keyword is "laugh", the speaking rate and action frequency of the characters in the video clips can be changed, or the dubbing of the characters can be changed, to modify the video clips, and the modified video clips are spliced end to end to generate the second video.
As another example, when the first keyword is "xx actor action highlights", the video clips may be arranged in descending order of the number of pieces of bullet screen information corresponding to each clip and spliced end to end in that order to generate the second video, so that the clips with higher attention and more comments are played first.
As another example, when the first keyword is "character reversal", the actions and lines of character A and character B in a video clip may be swapped to modify the clip, and the modified clips are spliced end to end to generate the second video.
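One way to picture this keyword-to-processing-mode lookup is sketched below; the mapping table, the two mode functions, and the dictionary representation of a clip are all assumptions made for this illustration.

```python
from typing import Callable, Dict, List

Clip = dict  # e.g. {"start": 10.0, "end": 25.0, "danmaku_count": 3, "speed": 1.0}

def speed_up(clips: List[Clip]) -> List[Clip]:
    """'laugh': exaggerate speech rate and action frequency via playback speed."""
    return [{**c, "speed": c.get("speed", 1.0) * 1.5} for c in clips]

def by_popularity(clips: List[Clip]) -> List[Clip]:
    """'highlights': compose in descending order of associated bullet screen count."""
    return sorted(clips, key=lambda c: c.get("danmaku_count", 0), reverse=True)

# Hypothetical table mapping a first keyword to a first video processing mode.
PROCESSING_MODES: Dict[str, Callable[[List[Clip]], List[Clip]]] = {
    "laugh": speed_up,
    "highlights": by_popularity,
}

def process_clips(first_keyword: str, clips: List[Clip]) -> List[Clip]:
    """Look up the processing mode for the keyword and apply it; the returned
    list order is the composition order used when splicing the second video."""
    mode = PROCESSING_MODES.get(first_keyword, lambda cs: cs)
    return mode(clips)

print(process_clips("highlights", [{"start": 0, "end": 5, "danmaku_count": 2},
                                   {"start": 9, "end": 15, "danmaku_count": 7}]))
```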
Therefore, in the embodiment of the invention, the video clip associated with the target bullet screen information can be processed according to different first video processing modes based on different first target elements in the target bullet screen information to generate the second video, so that the interestingness of the synthesized second video is improved, and the interactivity of the shared second video is improved.
In the embodiment of the present invention, the generated second video may be optionally stored in the electronic device.
Second, the electronic device is the first video playing device
In other embodiments of the present invention, the specific method of step 110 may include:
receiving a second input by the user on N pieces of target bullet screen information among at least two pieces of bullet screen information displayed in the video playing interface of the first video;
displaying the N pieces of target bullet screen information in a target area in response to the second input; and
acquiring the N pieces of target bullet screen information displayed in the target area.
In some embodiments, the second input may be a bullet screen selection operation on the target bullet screen information, for example a click operation, a long-press operation, or a check operation, used to let the user select the target bullet screen information from the at least two pieces of bullet screen information displayed in the video playing interface.
In some embodiments, the target area may be located within a video playback interface. In other embodiments, the electronic device may include a first display area and a second display area, the video playback interface may be located in the first display area of the electronic device, and the target area may be located in the second display area of the electronic device. The first display area and the second display area may be different display screens, or may be different areas in one display screen.
Fig. 3 is a schematic diagram illustrating a video playing interface of a first video according to another embodiment of the present invention. As shown in fig. 3, a video playing interface of the first video has a target area for displaying target bullet screen information, that is, a video sharing area, and the video sharing area is located in a right area of the video playing interface. The top of the video playing interface of the first video and the left area of the video sharing area are bullet screen comment display areas used for displaying bullet screen information 210 sent by different users. The contents of the bullet screen information 210 may be any characters, symbols, and images. Wherein the characters may include a combination of one or more of numbers, letters, and words, and the symbols may include punctuation or emoticons.
In some embodiments, the bullet screen selection operation may be a click operation. Referring to fig. 3, a user may click on interesting bullet screen information 210, such as "XXXX, fun", "XXX program", and "XXXX actor", among a plurality of bullet screen information 210 displayed on a video playing interface of a first video, as target bullet screen information 220, and the target bullet screen information 220 may be added to a video sharing area 230 for display.
In the above embodiment, optionally, the bullet screen information selected by the second input within a second predetermined time period may be used as the target bullet screen information. That is, starting from the moment the first piece of bullet screen information is selected by the second input, the user may continue to select bullet screen information as target bullet screen information within the second predetermined time period. The second predetermined time period may be ten minutes, half an hour, or an hour, which is not limited herein.
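A small sketch of how the target area selection and the second predetermined time period could interact follows; the TargetArea class, the half-hour default, and the remove method (corresponding to the deletion step described just below) are assumptions for illustration.

```python
import time
from typing import List, Optional

class TargetArea:
    """Hypothetical model of the video sharing area of fig. 3: bullet screen
    information selected by the second input, limited to a time window."""

    def __init__(self, window_seconds: float = 1800.0) -> None:   # half an hour
        self.window = window_seconds          # second predetermined time period
        self.selected: List[str] = []
        self._first_pick: Optional[float] = None

    def select(self, text: str) -> bool:
        """Second input: add the clicked bullet screen information if it falls
        within the window opened by the first selection."""
        now = time.time()
        if self._first_pick is None:
            self._first_pick = now
        if now - self._first_pick > self.window:
            return False
        self.selected.append(text)
        return True

    def remove(self, text: str) -> None:
        """Fourth input: a long press deletes an item from the target area."""
        if text in self.selected:
            self.selected.remove(text)
```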
In this embodiment of the present invention, optionally, before the N pieces of target bullet screen information displayed in the target area are acquired, the specific method of step 110 may further include:
receiving a fourth input by the user on target bullet screen information displayed in the target area; and
in response to the fourth input, deleting the selected target bullet screen information from the target area.
In these embodiments, acquiring the N pieces of target bullet screen information displayed in the target area may specifically include:
acquiring the target bullet screen information still displayed in the target area after the target bullet screen information selected by the fourth input has been deleted.
In some embodiments, the fourth input may be a delete operation, for example a long-press operation on target bullet screen information in the target area. With continued reference to fig. 3, the user may long-press any target bullet screen information 220 in the video sharing area 230, for example the target bullet screen information "XXXX, fun", and the electronic device deletes the pressed target bullet screen information 220 from the video sharing area 230; at that point "XXXX, fun" is no longer target bullet screen information.
Therefore, in the embodiment of the invention, the user can freely select interested bullet screen information through simple and quick operation, and the interestingness of interaction by utilizing the bullet screen information is improved.
In some embodiments of the present invention, after the user inputs the second input, the electronic device may trigger the video sharing function to obtain the target bullet screen information displayed in the target area.
In other embodiments of the present invention, the specific method of step 120 may include:
receiving a third input by the user on a second target element in the N pieces of target bullet screen information;
searching the first video for M video segments matching the second target element in response to a third input;
wherein the second target element comprises at least one keyword.
In some embodiments, the third input may be a confirmation operation on the target area, for example a double-click on the target area. With continued reference to fig. 3, the user may double-click any position in the video sharing area 230; the electronic device then triggers the video sharing function, automatically obtains the second target element from the target bullet screen information displayed in the video sharing area 230, and searches for M video clips matching the second target element. Here, the target bullet screen information may be matched against keywords preset on the electronic device, so that at least one keyword contained in the target bullet screen information is determined as the second target element.
In still other embodiments of the present invention, the third input may be an operation by which the user arranges the target bullet screen information in the target area. With continued reference to fig. 3, the user may order the target bullet screen information "XXXX, fun", "XXX program", and "XXXX actor" in the video sharing area 230 and drag the target bullet screen information 220 one by one to the video playing interface on the left side of the video sharing area 230.
Fig. 4 is a schematic diagram illustrating a video playing interface of a first video according to another embodiment of the present invention. As shown in fig. 4, the video playing interface on the left side of the video sharing area displays only the dragged target bullet screen information 220 and hides the originally displayed bullet screen information; after the user has arranged all the target bullet screen information 220 in the video sharing area, the electronic device may hide the video sharing area. Once the user has dragged all the target bullet screen information 220 from the video sharing area to the video playing interface on its left side, the electronic device may trigger the video sharing function, acquire the target bullet screen information 220 in the arranged order, and search for M video clips matching the second target element. Here too, the target bullet screen information 220 may be matched against keywords preset on the electronic device, so that at least one keyword contained in the target bullet screen information 220 is determined as the second target element.
In some embodiments of the invention, the first video may be searched for M video segments that match the second target element. In another embodiment of the present invention, the network video resource may also be searched for M video segments matching the second target element, which is not limited herein.
When video clips are searched for in network video resources, the network videos may be summarized or classified through system settings of the electronic device or user customization, and video clips of the same or similar type as the first video may then be searched, so that the video clips of interest among the network videos can be shared with users more quickly and accurately.
In the embodiment of the present invention, the preset keyword may include a video type, and at this time, the second target element may include a video type corresponding to a character, a symbol, and an image in the target bullet screen information. In other embodiments of the present invention, the preset keywords may include preset words, and in this case, the second target element may include preset words that are the same as or similar to contents expressed by characters, symbols, and images in the target bullet screen information.
When the N pieces of target bullet screen information match one keyword, the second target element may include that keyword; when they match two or more keywords, the second target element may include those two or more keywords.
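The keyword matching described above might look roughly like the sketch below; the preset keyword set, the clip catalog, and both function names are assumptions made for illustration only.

```python
from typing import Dict, List, Set

# Hypothetical preset keywords (video types and preset words) on the device.
PRESET_KEYWORDS: Set[str] = {"fun", "XXX program", "XXXX actor"}

def extract_second_target_element(targets: List[str]) -> List[str]:
    """Match each piece of target bullet screen information against the preset
    keywords; the matched keywords together form the second target element."""
    matched: List[str] = []
    for text in targets:
        for keyword in PRESET_KEYWORDS:
            if keyword.lower() in text.lower() and keyword not in matched:
                matched.append(keyword)
    return matched

def search_clips(catalog: Dict[str, Set[str]], keywords: List[str]) -> List[str]:
    """catalog maps a clip id to the keywords tagged on that clip; return the
    clips that match at least one keyword of the second target element."""
    return [clip for clip, tags in catalog.items() if tags & set(keywords)]

element = extract_second_target_element(["XXXX, fun", "XXX program", "XXXX actor"])
print(search_clips({"clip_1": {"fun"}, "clip_2": {"drama"}}, element))  # ['clip_1']
```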
Therefore, video clips matching the keywords can be searched for based on the keywords in the target bullet screen information selected by the user, the second video is generated from them, and the video clips the user is interested in are shared quickly.
In some embodiments of the present invention, in a case where the second target element includes at least two keywords, a specific method of searching the first video for M video segments matching the second target element may include:
arranging at least two keywords according to a target sequence to obtain a target keyword combination;
in the first video, M video segments matching the target keyword combination are searched.
In some embodiments, when the user sorts the target barrage information, the keywords related to each target barrage information may be arranged according to the sorting order of the target barrage information by the user to obtain a target keyword combination, that is, a search formula, and the search formula is used to search for a video segment matching the target keyword combination.
With continued reference to fig. 4, the user sorts the target bullet screen information 220 in the order "XXXX actor", "XXX program", "XXXX, fun", so that the target keyword combination "XXXX actor + XXX program + fun" may be generated; video clips having video elements corresponding to all of these keywords are then screened from the network videos, yielding clips of the actor's funny moments in that program.
In other embodiments, when the user does not sort the target bullet screen information, the keywords related to each target bullet screen information may be further arranged according to the priority of each keyword to obtain a target keyword combination, that is, a search formula, and the search formula is used to search for a video clip matching the target keyword combination.
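A minimal sketch of the search formula described in the last two paragraphs is given below, assuming the keyword arrangement and the tag-based catalog from the earlier sketch; the function names and the priority table are hypothetical.

```python
from typing import Dict, List, Optional, Set

def build_search_formula(keywords: List[str],
                         priority: Optional[Dict[str, int]] = None) -> List[str]:
    """Arrange at least two keywords in the target order: keep the user's own
    arrangement if no priority table is given, otherwise sort by priority."""
    if priority:
        return sorted(keywords, key=lambda k: priority.get(k, 0), reverse=True)
    return list(keywords)

def search_by_formula(catalog: Dict[str, Set[str]], formula: List[str]) -> List[str]:
    """Keep only the clips whose tags contain every keyword of the formula,
    e.g. 'XXXX actor + XXX program + fun'."""
    wanted = set(formula)
    return [clip for clip, tags in catalog.items() if wanted <= tags]

formula = build_search_formula(["XXXX actor", "XXX program", "fun"])
catalog = {"clip_1": {"XXXX actor", "XXX program", "fun"}, "clip_2": {"fun"}}
print(search_by_formula(catalog, formula))   # ['clip_1']
```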
Therefore, according to the embodiment of the invention, a corresponding search formula can be generated from a keyword ordering preset on the electronic device or customized by the user, and video clips matching the search formula are screened, so that the video clips the user is interested in can be shared with the user more accurately and efficiently.
In other embodiments of the present invention, the specific method of step 130 may include:
determining a second video processing mode based on the second target element;
processing the M video clips according to a second video processing mode to generate a second video;
wherein, the second video processing mode comprises: at least one of a manner of synthesizing the video clips and an order of synthesizing the video clips.
In some embodiments, the synthesizing manner may include a processing manner for the video content, similar to the first video processing mode described above, which is not repeated here. In other embodiments, the synthesizing manner may further include a splicing manner, for example splicing the clips end to end. In still other embodiments, the composition order may follow the number of clicks of the searched video clips.
Therefore, in the embodiment of the invention, the video clips associated with the target bullet screen information the user is interested in can be integrated and spliced, so that all the video clips can be conveniently shared at once to the sending objects that uploaded the target bullet screen information, which facilitates their downloading and viewing.
In some embodiments, the second video processing method may further include performing duplicate removal processing on the video segments associated with the target barrage information, and sequentially splicing the duplicate-removed video segments to generate the second video.
Specifically, if the content of two video clips overlaps, the duplicated content can be deleted from one of them, and the non-duplicated content of the two clips spliced in chronological order to form a deduplicated video clip.
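The deduplication step can be pictured as merging overlapping time ranges, as in the sketch below (the interval representation of a clip is an assumption for illustration):

```python
from typing import List, Tuple

def deduplicate_clips(clips: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Merge clips whose time ranges overlap, so duplicated content is kept
    only once, and return the merged clips in chronological order."""
    merged: List[Tuple[float, float]] = []
    for start, end in sorted(clips):
        if merged and start <= merged[-1][1]:                 # overlaps previous clip
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Two clips sharing ten seconds of content become one spliced clip.
print(deduplicate_clips([(0.0, 30.0), (20.0, 50.0), (70.0, 90.0)]))
# [(0.0, 50.0), (70.0, 90.0)]
```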
Therefore, in the embodiment of the invention, the same video clip can be prevented from being repeatedly sent to the sending object of the uploaded target barrage information, and the user experience is improved.
In the embodiment of the present invention, the target object is an object for sending target bullet screen information, and specifically, may be a user account for sending the target bullet screen information.
In some embodiments of the present invention, after the second video is sent to the user account sending the target barrage information in step 140, a video prompt identifier corresponding to the second video may be displayed in a video playing interface corresponding to the user account, and the user may perform a fifth input on the video prompt identifier to obtain and view the second video.
For example, a flashing small window may be displayed in the video playing interface of the user account that sent the target bullet screen information to prompt that a second video is available, and the user may click the small window to obtain the second video associated with it. As another example, a flashing "!" mark may be displayed to prompt that the second video has been updated.
In other embodiments of the present invention, when the target barrage information is barrage information selected by the user in the video playing interface, the second video may be sent to the user account triggering the video sharing function, so that the user triggering the video sharing function may obtain and watch the second video.
Therefore, in the embodiment of the invention, the second video can be shared with the sending object of the target barrage information and the object triggering the video sharing function, so that the interactivity and interestingness of the barrage information are improved, and the interactive content of the barrage information is enriched.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 5, the electronic device may include:
an information acquisition module 310, configured to acquire N pieces of target bullet screen information of a first video;
a video acquisition module 320, configured to acquire M video clips associated with the N pieces of target bullet screen information;
a video synthesis module 330, configured to generate a second video based on the M video clips;
a video sending module 340, configured to send the second video to N target objects, where a target object is an object that sent target bullet screen information;
wherein N and M are integers greater than 1.
In some embodiments, the electronic device may be a server that provides the first video, and the server may be a high-performance computer that stores and processes data. In other embodiments, the electronic device may also be the first video playing device; specifically, the first video playing device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
In the embodiment of the invention, a second video can be generated from the video clips associated with the target bullet screen information of the first video, and the second video is sent to the target objects that sent the target bullet screen information, so that the video clips associated with the target bullet screen information can be shared quickly, and bullet screen information is no longer used only to express the user's ideas and viewpoints through text and emoticons.
In the embodiment of the present invention, at least one piece of target bullet screen information is acquired; that is, video clips may also be shared when two or more pieces of target bullet screen information are acquired.
In some embodiments of the present invention, when the electronic device is a server or the first video playing device, the multiple pieces of bullet screen information associated with the first video may be automatically analyzed and identified, and target bullet screen information containing a first target element is determined among them.
In some embodiments of the present invention, the information obtaining module 310 may specifically be configured to:
identifying bullet screen information containing a first target element among at least two pieces of bullet screen information of the first video, where the first target element comprises at least one of a target character, a target symbol, and a target image; and
acquiring the identified N pieces of bullet screen information containing the first target element when the number of sending objects of the bullet screen information containing the first target element reaches a preset number threshold.
The target character may be a combination of one or more of preset or user-set numbers, letters and characters, the target symbol may be a preset or user-set punctuation mark or emoticon, and the target image may be a preset or user-set image.
In some embodiments of the invention, the first target element may be used as a "key" that triggers the generation of the second video for video sharing. Different users can input bullet screen information containing the same first target element in the bullet screen comment area, that is, input the same "key", so as to share the video clips they are interested in.
Therefore, this way of acquiring target bullet screen information suits scenes where many people actively send bullet screen information: the video sharing function is triggered by different users inputting the same "key", so the electronic device can automatically identify and acquire the target bullet screen information based on the first target element, the information content of the bullet screen information is reused, and video clips are shared automatically.
In some embodiments of the present invention, each piece of target bullet screen information among the N pieces is stored in association with at least one video clip.
In these embodiments, optionally, the video obtaining module 320 may be specifically configured to:
acquiring M video clips stored in association with the N pieces of target bullet screen information, where the ith video clip among the M video clips is a video clip intercepted from the first video in response to a first input of an ith user;
where i is an integer greater than 1 and not greater than M, the ith video clip can be any one of the M video clips, and one first input intercepts at least one video clip.
Therefore, in the embodiment of the invention, the user can intercept the video clip through simple and quick operation and upload the bullet screen information associated with the video clip, so that the user can share the interested video clip by using the target bullet screen information.
In some embodiments of the present invention, the video composition module 330 may be specifically configured to:
determining a first video processing mode based on the first target element;
processing the M video clips according to a first video processing mode to generate a second video;
wherein, the first video processing mode comprises: at least one of an editing manner of video contents of the video clips and a composition order of the video clips.
After the electronic device obtains the target bullet screen information, a first keyword corresponding to the first target element can be determined, a first video processing mode corresponding to the first keyword is searched, the M video segments are processed according to the searched first video processing mode, and a second video is generated.
In some embodiments of the present invention, the electronic device may preset a first video processing manner corresponding to each first target element, and the user may set the first video processing manner corresponding to each first target element.
Therefore, in the embodiment of the invention, the video clip associated with the target bullet screen information can be processed according to different first video processing modes based on different first target elements in the target bullet screen information to generate the second video, so that the interestingness of the synthesized second video is improved, and the interactivity of the shared second video is improved.
In other embodiments of the present invention, when the electronic device is a first video playing device, a user may select target bullet screen information from a plurality of bullet screen information displayed in a video playing interface of the first video.
In other embodiments of the present invention, the information obtaining module 310 may be specifically configured to:
receiving a second input by the user on N pieces of target bullet screen information among at least two pieces of bullet screen information displayed in the video playing interface of the first video;
displaying the N pieces of target bullet screen information in a target area in response to the second input; and
acquiring the N pieces of target bullet screen information displayed in the target area.
In some embodiments, the second input may be a bullet screen selection operation on the target bullet screen information, for example a click operation, a long-press operation, or a check operation, used to let the user select the target bullet screen information from the at least two pieces of bullet screen information displayed in the video playing interface.
In some embodiments, the target area may be located within a video playback interface. In other embodiments, the electronic device may include a first display area and a second display area, the video playback interface may be located in the first display area of the electronic device, and the target area may be located in the second display area of the electronic device. The first display area and the second display area may be different display screens, or may be different areas in one display screen.
Therefore, in the embodiment of the invention, the user can freely select interested bullet screen information through simple and quick operation, and the interestingness of interaction by utilizing the bullet screen information is improved.
In other embodiments of the present invention, the video obtaining module 320 may be specifically configured to:
receiving a third input by the user on a second target element in the N pieces of target bullet screen information;
searching the first video for M video segments matching the second target element in response to a third input;
wherein the second target element comprises at least one keyword.
In some embodiments of the invention, the first video may be searched for M video segments that match the second target element. In another embodiment of the present invention, the network video resource may also be searched for M video segments matching the second target element, which is not limited herein.
When video clips are searched for in network video resources, the network videos may be summarized or classified through system settings of the electronic device or user customization, and video clips of the same or similar type as the first video may then be searched, so that the video clips of interest among the network videos can be shared with users more quickly and accurately.
In the embodiment of the present invention, the preset keyword may include a video type, and at this time, the second target element may include a video type corresponding to a character, a symbol, and an image in the target bullet screen information. In other embodiments of the present invention, the preset keywords may include preset words, and in this case, the second target element may include preset words that are the same as or similar to contents expressed by characters, symbols, and images in the target bullet screen information.
When the N pieces of target bullet screen information match one keyword, the second target element may include that keyword; when they match two or more keywords, the second target element may include those two or more keywords.
Therefore, video clips matching the keywords can be searched for based on the keywords in the target bullet screen information selected by the user, the second video is generated from them, and the video clips the user is interested in are shared quickly.
In other embodiments of the present invention, the second target element includes at least two keywords.
In these embodiments, optionally, the video obtaining module 320 may further be configured to:
arranging at least two keywords according to a target sequence to obtain a target keyword combination;
in the first video, M video segments matching the target keyword combination are searched.
In some embodiments, when the user sorts the target barrage information, the keywords related to each target barrage information may be arranged according to the sorting order of the target barrage information by the user to obtain a target keyword combination, that is, a search formula, and the search formula is used to search for a video segment matching the target keyword combination.
In other embodiments, when the user does not sort the target bullet screen information, the keywords related to each target bullet screen information may be further arranged according to the priority of each keyword to obtain a target keyword combination, that is, a search formula, and the search formula is used to search for a video clip matching the target keyword combination.
Therefore, according to the embodiment of the invention, a corresponding search formula can be generated from a keyword ordering preset on the electronic device or customized by the user, and video clips matching the search formula are screened, so that the video clips the user is interested in can be shared with the user more accurately and efficiently.
In other embodiments of the present invention, the video composition module 330 may be specifically configured to:
determining a second video processing mode based on the second target element;
processing the M video clips according to a second video processing mode to generate a second video;
wherein, the second video processing mode comprises: at least one of a manner of synthesizing the video clips and an order of synthesizing the video clips.
In some embodiments, the synthesizing manner may include a processing manner for the video content, similar to the first video processing mode described above, which is not repeated here. In other embodiments, the synthesizing manner may further include a splicing manner, for example splicing the clips end to end. In still other embodiments, the composition order may follow the number of clicks of the searched video clips.
Therefore, in the embodiment of the invention, the video clips associated with the target bullet screen information the user is interested in can be integrated and spliced, so that all the video clips can be conveniently shared at once to the sending objects that uploaded the target bullet screen information, which facilitates their downloading and viewing.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention. As shown in fig. 6, the electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power supply 411. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 410 is configured to: acquire N pieces of target bullet screen information of a first video; acquire M video clips associated with the N pieces of target bullet screen information; generate a second video based on the M video clips; and send the second video to N target objects, where a target object is an object that sent target bullet screen information, and N and M are integers greater than 1.
In the embodiment of the invention, a second video can be generated from the video clips associated with the target bullet screen information of the first video, and the second video is sent to the target objects that sent the target bullet screen information, so that the video clips associated with the target bullet screen information can be shared quickly, and bullet screen information is no longer used only to express the user's ideas and viewpoints through text and emoticons.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 410 for processing, and sends uplink data to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 402, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402, or stored in the memory 409, into an audio signal and output it as sound. Moreover, the audio output unit 403 may also provide audio output related to a specific function performed by the electronic device 400 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or other storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 401.
The electronic device 400 also includes at least one sensor 405, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 4061 and/or the backlight when the electronic apparatus 400 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 406 is used to display information input by the user or information provided to the user. The Display unit 406 may include a Display panel 4061, and the Display panel 4061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, can collect touch operations by a user on or near it (e.g., operations performed by the user on or near the touch panel 4071 with a finger, a stylus, or any suitable object or accessory). The touch panel 4071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 410, and receives and executes commands returned by the processor 410. In addition, the touch panel 4071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch panel 4071, the user input unit 407 may also include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 4071 can be overlaid on the display panel 4061, and when the touch panel 4071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 410 to determine the type of the touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in fig. 6, the touch panel 4071 and the display panel 4061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the electronic device, and this is not limited herein.
The interface unit 408 is an interface for connecting an external device to the electronic apparatus 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 409 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may further include a power supply 411 (e.g., a battery) for supplying power to various components, and preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 400 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 410, a memory 409, and a computer program stored in the memory 409 and executable on the processor 410. When executed by the processor 410, the computer program implements each process of the foregoing video sharing method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video sharing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A video sharing method, applied to an electronic device, the method comprising:
acquiring N pieces of target bullet screen information of a first video;
acquiring M video clips associated with the N pieces of target bullet screen information;
generating a second video based on the M video clips;
sending the second video to N target objects, wherein a target object is an object that sent the target bullet screen information;
wherein N and M are integers greater than 1.
2. The method of claim 1, wherein the acquiring N pieces of target bullet screen information comprises:
identifying bullet screen information containing a first target element in at least two pieces of bullet screen information of the first video; wherein the first target element comprises at least one of a target character, a target symbol, and a target image;
and in a case that the number of sending objects of the bullet screen information containing the first target element reaches a preset number threshold, acquiring the identified N pieces of bullet screen information containing the first target element.
3. The method according to claim 2, wherein at least one video clip is stored in association with each piece of target bullet screen information in the N pieces of target bullet screen information;
the acquiring M video clips associated with the N pieces of target bullet screen information comprises:
acquiring the M video clips stored in association with the N pieces of target bullet screen information, wherein an i-th video clip in the M video clips is a video clip intercepted from the first video in response to a first input of an i-th user;
and i is an integer greater than 1 and not greater than M, and one first input correspondingly intercepts at least one video clip.
4. The method of claim 2, wherein the generating a second video based on the M video clips comprises:
determining a first video processing mode based on the first target element;
processing the M video clips according to the first video processing mode to generate a second video;
wherein the first video processing mode comprises: at least one of an editing manner of video contents of the video clips and a composition order of the video clips.
5. The method of claim 1, wherein the acquiring N pieces of target bullet screen information comprises:
receiving a second input of a user on the N pieces of target bullet screen information among the at least two pieces of bullet screen information displayed in a video playing interface of the first video;
displaying the N pieces of target bullet screen information in a target area in response to the second input;
and acquiring the N pieces of target bullet screen information displayed in the target area.
6. The method of claim 5, wherein the acquiring M video clips associated with the N pieces of target bullet screen information comprises:
receiving a third input of the user on a second target element in the N pieces of target bullet screen information;
searching the first video for M video clips that match the second target element in response to the third input;
wherein the second target element comprises at least one keyword.
7. The method of claim 6, wherein the second target element comprises at least two keywords;
wherein the searching the first video for M video clips that match the second target element comprises:
arranging the at least two keywords according to a target sequence to obtain a target keyword combination;
and searching the first video for M video clips matched with the target keyword combination.
8. The method of claim 7, wherein the generating a second video based on the M video clips comprises:
determining a second video processing mode based on the second target element;
processing the M video clips according to the second video processing mode to generate a second video;
wherein the second video processing mode comprises: at least one of a manner of synthesizing the video clips and an order of synthesizing the video clips.
9. An electronic device, comprising:
an information acquisition module, configured to acquire N pieces of target bullet screen information of a first video;
a video acquisition module, configured to acquire M video clips associated with the N pieces of target bullet screen information;
a video synthesis module, configured to generate a second video based on the M video clips;
and a video sending module, configured to send the second video to N target objects, wherein a target object is an object that sent the target bullet screen information;
wherein N and M are integers greater than 1.
10. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video sharing method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video sharing method according to any one of claims 1 to 8.
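To make the filtering and searching steps of claims 2, 6 and 7 above easier to follow, the following Python sketch gives one possible, purely hypothetical reading: bullet screens are kept only if they contain the first target element and enough distinct senders used it, and video segments are searched by arranging at least two keywords into an ordered combination and matching them against per-segment subtitle text. The data shapes, the threshold value and the subtitle-based matching are assumptions for illustration only; the claims do not fix any of these details.

def select_target_bullets(bullets, target_element, sender_threshold=3):
    # Claim 2 sketch: keep bullet screens containing the first target element,
    # but only if the number of distinct senders reaches the (assumed) threshold.
    hits = [b for b in bullets if target_element in b["text"]]
    if len({b["sender"] for b in hits}) < sender_threshold:
        return []
    return hits

def search_segments(segments, keywords, order=None):
    # Claims 6-7 sketch: arrange the keywords in a target order and return the segments
    # whose (assumed) subtitle text contains the keywords in that order.
    ordered = [keywords[i] for i in order] if order else list(keywords)
    def matches(text):
        pos = 0
        for kw in ordered:
            pos = text.find(kw, pos)
            if pos < 0:
                return False
            pos += len(kw)
        return True
    return [s for s in segments if matches(s["subtitle"])]

# Illustrative use:
bullets = [{"text": "the chase scene", "sender": "u1"},
           {"text": "chase again!", "sender": "u2"},
           {"text": "that chase!!", "sender": "u3"}]
segments = [{"id": 1, "subtitle": "rooftop chase at night"},
            {"id": 2, "subtitle": "quiet dialogue"}]
print(select_target_bullets(bullets, "chase"))          # all three pieces pass the sender threshold
print(search_segments(segments, ["rooftop", "chase"]))  # [{'id': 1, 'subtitle': 'rooftop chase at night'}]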
CN201911206986.7A 2019-11-29 2019-11-29 Video sharing method, electronic device and medium Active CN110933511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911206986.7A CN110933511B (en) 2019-11-29 2019-11-29 Video sharing method, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911206986.7A CN110933511B (en) 2019-11-29 2019-11-29 Video sharing method, electronic device and medium

Publications (2)

Publication Number Publication Date
CN110933511A true CN110933511A (en) 2020-03-27
CN110933511B CN110933511B (en) 2021-12-14

Family

ID=69848025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911206986.7A Active CN110933511B (en) 2019-11-29 2019-11-29 Video sharing method, electronic device and medium

Country Status (1)

Country Link
CN (1) CN110933511B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140344853A1 (en) * 2013-05-16 2014-11-20 Panasonic Corporation Comment information generation device, and comment display device
CN104469508A (en) * 2013-09-13 2015-03-25 中国电信股份有限公司 Method, server and system for performing video positioning based on bullet screen information content
CN106921891A (en) * 2015-12-24 2017-07-04 北京奇虎科技有限公司 The methods of exhibiting and device of a kind of video feature information
JP2018101892A (en) * 2016-12-20 2018-06-28 デフセッション株式会社 Video generation system, comment video generation device, an video generation method
CN107071587A (en) * 2017-04-25 2017-08-18 腾讯科技(深圳)有限公司 The acquisition methods and device of video segment
CN107801106A (en) * 2017-10-24 2018-03-13 维沃移动通信有限公司 A kind of video segment intercept method and electronic equipment
CN109947981A (en) * 2017-10-30 2019-06-28 上海全土豆文化传播有限公司 Video sharing method and device
CN108769822A (en) * 2018-04-13 2018-11-06 维沃移动通信有限公司 A kind of image display method and terminal device
CN109089127A (en) * 2018-07-10 2018-12-25 武汉斗鱼网络科技有限公司 A kind of video-splicing method, apparatus, equipment and medium
CN109104642A (en) * 2018-09-26 2018-12-28 北京搜狗科技发展有限公司 A kind of video generation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
袁政 等: "基于跨屏互动的媒体内容分享系统研究与实践", 《有线电视技术》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111600931A (en) * 2020-04-13 2020-08-28 维沃移动通信有限公司 Information sharing method and electronic equipment
CN113747223A (en) * 2020-05-29 2021-12-03 口碑(上海)信息技术有限公司 Video comment method and device and electronic equipment
CN113747223B (en) * 2020-05-29 2023-11-21 口碑(上海)信息技术有限公司 Video comment method and device and electronic equipment
WO2022253141A1 (en) * 2021-06-02 2022-12-08 北京字跳网络技术有限公司 Video sharing method and apparatus, device, and medium
CN114339362A (en) * 2021-12-08 2022-04-12 腾讯科技(深圳)有限公司 Video bullet screen matching method and device, computer equipment and storage medium
CN114979054A (en) * 2022-05-13 2022-08-30 维沃移动通信有限公司 Video generation method and device, electronic equipment and readable storage medium
CN115086742A (en) * 2022-06-13 2022-09-20 北京达佳互联信息技术有限公司 Audio and video generation method and device
WO2023246395A1 (en) * 2022-06-21 2023-12-28 北京字跳网络技术有限公司 Method and apparatus for audio-visual content sharing, device, and storage medium

Also Published As

Publication number Publication date
CN110933511B (en) 2021-12-14

Similar Documents

Publication Publication Date Title
CN110933511B (en) Video sharing method, electronic device and medium
CN110087117B (en) Video playing method and terminal
CN113360238A (en) Message processing method and device, electronic equipment and storage medium
CN108958867B (en) Task operation method and device for application
CN109525874B (en) Screen capturing method and terminal equipment
CN108737904B (en) Video data processing method and mobile terminal
CN109343755B (en) File processing method and terminal equipment
CN107943390B (en) Character copying method and mobile terminal
CN110602565A (en) Image processing method and electronic equipment
CN109213416B (en) Display information processing method and mobile terminal
CN110830362B (en) Content generation method and mobile terminal
CN112969087B (en) Information display method, client, electronic equipment and storage medium
CN109495638B (en) Information display method and terminal
CN109271262B (en) Display method and terminal
CN108600079B (en) Chat record display method and mobile terminal
CN111770374B (en) Video playing method and device
CN110913261A (en) Multimedia file generation method and electronic equipment
CN114422461A (en) Message reference method and device
CN108710521B (en) Note generation method and terminal equipment
CN111143614A (en) Video display method and electronic equipment
CN111212316A (en) Video generation method and electronic equipment
CN109166164B (en) Expression picture generation method and terminal
CN108540649B (en) Content display method and mobile terminal
CN111049977B (en) Alarm clock reminding method and electronic equipment
CN110333803B (en) Multimedia object selection method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant