CN108989691B - Video shooting method and device, electronic equipment and computer readable storage medium - Google Patents

Info

Publication number
CN108989691B
Authority
CN
China
Prior art keywords
video
user
shooting
window
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811223743.XA
Other languages
Chinese (zh)
Other versions
CN108989691A (en)
Inventor
王海婷 (Wang Haiting)
郝一鹏 (Hao Yipeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd filed Critical Beijing Microlive Vision Technology Co Ltd
Priority to CN201811223743.XA
Publication of CN108989691A
Priority to PCT/CN2018/124065 (WO2020077855A1)
Application granted
Publication of CN108989691B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Abstract

The embodiment of the disclosure provides a video shooting method and apparatus, an electronic device and a computer-readable storage medium, wherein the method comprises the following steps: receiving a video shooting triggering operation of a user through a video playing interface of an original video; in response to the video shooting triggering operation, displaying a video shooting window superimposed on the video playing interface; receiving a video shooting operation of the user through the video playing interface; in response to the video shooting operation, shooting a user video while playing the original video, and displaying the user video through the video shooting window; and synthesizing the user video and the original video to obtain a co-shot video. With this scheme, the user obtains the co-shooting function simply by performing the relevant user-video shooting operations on the video playing interface, so the operation process is simple and fast; and because the user video can reflect the user's reaction to the original video, the scheme makes it convenient for the user to express that reaction, thereby improving the user's interactive experience.

Description

Video shooting method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a video shooting method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In a video interaction platform, a user can publish his or her own ideas about, or reactions to, other videos on the platform in the form of a video, thereby interacting with those videos.
In the prior art, when a user wants to shoot an interactive video based on a certain video on a video platform, the user generally downloads and stores the original video from the platform, completes the recording of the interactive video with one or more professional video recording tools, and then uploads the finished interactive video to the video platform. The shooting process of the whole interactive video therefore cannot be completed through the video platform alone, which degrades the user's interactive experience.
Therefore, the existing way of recording interactive videos is cumbersome, provides a poor interactive experience, and cannot meet users' actual application requirements.
Disclosure of Invention
The purpose of the present disclosure is to address at least one of the above technical drawbacks and to improve the user experience. The technical solution adopted by the present disclosure is as follows:
in a first aspect, the present disclosure provides a video shooting method, including:
receiving a video shooting triggering operation of a user through a video playing interface of an original video;
in response to the video shooting triggering operation, displaying a video shooting window superimposed on the video playing interface;
receiving a video shooting operation of the user through the video playing interface;
in response to the video shooting operation, shooting a user video while playing the original video, and displaying the user video through the video shooting window;
and synthesizing the user video and the original video to obtain a co-shot video.
In the embodiment of the present disclosure, synthesizing the user video and the original video to obtain the co-shot video includes:
synthesizing the audio information of the user video and the audio information of the original video to obtain the audio information of the co-shot video;
synthesizing the video information of the user video and the video information of the original video to obtain the video information of the co-shot video;
and synthesizing the audio information of the co-shot video and the video information of the co-shot video to obtain the co-shot video.
In an embodiment of the present disclosure, the method further includes:
receiving a volume adjustment operation of the user through the video playing interface;
and in response to the volume adjustment operation, correspondingly adjusting the volume of the audio information of the original video and/or the audio information of the user video.
In an embodiment of the present disclosure, the method further includes:
receiving a special effect adding operation of the user for a special effect to be added through the video playing interface;
and in response to the special effect adding operation, adding the special effect to be added to the user video.
In an embodiment of the present disclosure, the method further includes:
and providing an operation prompt option to the user, wherein the operation prompt option is used for providing the user with prompt information on the co-shooting operation when an operation of the user is received.
In the embodiment of the present disclosure, before shooting the user video, playing the original video and displaying the user video through the video shooting window in response to the video shooting operation, the method further includes:
receiving a recording selection operation of the user for a recording mode of the user video through the video playing interface, wherein the recording mode includes at least one of a fast recording mode, a slow recording mode and a standard recording mode;
and in response to the recording selection operation, determining the recording mode of the user video.
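For illustration only, the recording-mode selection above can be sketched as a mapping from each mode to a playback-speed factor applied when the user video is composited. The mode names match the disclosure, but the 2x/0.5x factors and the helper function are assumptions, not part of the patent:

```python
# Assumed speed factors for the three recording modes named in the disclosure.
SPEED_FACTORS = {
    "fast": 2.0,      # assumed: fast mode plays back at double speed
    "slow": 0.5,      # assumed: slow mode plays back at half speed
    "standard": 1.0,  # standard mode keeps real-time speed
}

def playback_duration(recorded_seconds, mode):
    """Duration the recorded clip occupies once its speed factor is applied."""
    return recorded_seconds / SPEED_FACTORS[mode]

print(playback_duration(10.0, "fast"))  # 5.0 -- a sped-up clip is shorter
```

Under these assumptions, a 10-second clip recorded in fast mode would occupy 5 seconds of the co-shot video.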
In the embodiment of the present disclosure, after synthesizing the user video and the original video to obtain the co-shot video, the method further includes:
receiving a video saving operation and/or a video publishing operation of the user;
and in response to the video saving operation, saving the co-shot video locally, and/or, in response to the video publishing operation, publishing the co-shot video.
In the embodiment of the present disclosure, publishing the co-shot video in response to the video publishing operation includes:
in response to the video publishing operation, acquiring the user's viewing permission setting for the co-shot video;
and publishing the co-shot video according to the viewing permission of the co-shot video.
In an embodiment of the present disclosure, when publishing the co-shot video in response to the video publishing operation, the method further includes:
generating a push message for the co-shot video;
and sending the push message to associated users of the user and/or associated users of the original video.
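The push-message step above can be sketched as follows. The data model (follower lists passed in as plain lists) and all function names are assumptions for illustration, since the disclosure does not specify how associated users are stored:

```python
def push_recipients(user_followers, original_author_followers):
    """Union of the two associated-user groups, without duplicates."""
    return sorted(set(user_followers) | set(original_author_followers))

def build_push_message(co_shot_video_id, publisher):
    """Push message for the co-shot video (fields are illustrative)."""
    return {"video": co_shot_video_id,
            "text": f"{publisher} published a co-shot video"}

recipients = push_recipients(["alice", "bob"], ["bob", "carol"])
print(recipients)  # ['alice', 'bob', 'carol']
```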
In the embodiment of the present disclosure, if the duration of the user video is less than the duration of the original video, synthesizing the user video and the original video to obtain the co-shot video includes:
determining, according to the recording start time of the user video, a first video in the original video that corresponds to the recording start time and has the same duration as the user video;
synthesizing the user video and the first video into a second video;
and obtaining the co-shot video from the second video and the portions of the original video other than the first video.
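The segmentation described above, where only the slice of the original video overlapping the user recording (the "first video") is composited and the rest is kept unchanged, can be sketched with time spans in seconds; the helper name and the span representation are assumptions for illustration:

```python
def split_for_composite(original_duration, user_start, user_duration):
    """Return (first_video_span, remaining_spans) of the original video,
    where each span is a (start, end) pair in seconds."""
    first = (user_start, user_start + user_duration)
    remaining = []
    if user_start > 0:
        remaining.append((0.0, user_start))              # part before the overlap
    if first[1] < original_duration:
        remaining.append((first[1], original_duration))  # part after the overlap
    return first, remaining

first, rest = split_for_composite(60.0, user_start=10.0, user_duration=20.0)
print(first)  # (10.0, 30.0) -- synthesized with the user video into the second video
print(rest)   # [(0.0, 10.0), (30.0, 60.0)] -- kept unchanged
```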
In a second aspect, the present disclosure provides a video shooting apparatus, the apparatus comprising:
a trigger operation receiving module, used for receiving a video shooting triggering operation of a user through a video playing interface of an original video;
a shooting window display module, used for, in response to the video shooting triggering operation, displaying a video shooting window superimposed on the video playing interface;
a shooting operation receiving module, used for receiving a video shooting operation of the user through the video playing interface;
a user video shooting module, used for, in response to the video shooting operation, shooting a user video while playing the original video, and displaying the user video through the video shooting window;
and a co-shot video generation module, used for synthesizing the user video and the original video to obtain a co-shot video.
In the embodiment of the present disclosure, when synthesizing the user video and the original video to obtain the co-shot video, the co-shot video generation module is specifically configured to:
synthesize the audio information of the user video and the audio information of the original video to obtain the audio information of the co-shot video;
synthesize the video information of the user video and the video information of the original video to obtain the video information of the co-shot video;
and synthesize the audio information of the co-shot video and the video information of the co-shot video to obtain the co-shot video.
In an embodiment of the present disclosure, the apparatus further includes:
and the volume adjustment module is used for receiving a volume adjustment operation of the user through the video playing interface, and, in response to the volume adjustment operation, correspondingly adjusting the volume of the audio information of the original video and/or the audio information of the user video.
In an embodiment of the present disclosure, the apparatus further includes:
and the special effect adding module is used for receiving a special effect adding operation of the user for a special effect to be added through the video playing interface, and, in response to the special effect adding operation, adding the special effect to be added to the user video.
In an embodiment of the present disclosure, the apparatus further includes:
and the operation prompt module is used for providing an operation prompt option to the user, the operation prompt option being used for providing the user with prompt information on the co-shooting operation.
In an embodiment of the present disclosure, the user video shooting module is further configured to:
before shooting the user video, playing the original video and displaying the user video through the video shooting window in response to the video shooting operation, receive, through the video playing interface, a recording selection operation of the user for a recording mode of the user video, and, in response to the recording selection operation, determine the recording mode of the user video, wherein the recording mode includes at least one of a fast recording mode, a slow recording mode and a standard recording mode.
In an embodiment of the present disclosure, the apparatus further includes:
and the co-shot video processing module is used for, after the user video and the original video are synthesized into the co-shot video, receiving a video saving operation and/or a video publishing operation of the user, and, in response to the video saving operation, saving the co-shot video locally, and/or, in response to the video publishing operation, publishing the co-shot video.
In the embodiment of the present disclosure, when publishing the co-shot video in response to the video publishing operation, the co-shot video processing module is specifically configured to:
in response to the video publishing operation, acquire the user's viewing permission setting for the co-shot video;
and publish the co-shot video according to the viewing permission of the co-shot video.
In an embodiment of the present disclosure, the apparatus further includes:
and the push message sending module is used for generating a push message for the co-shot video and sending the push message to associated users of the user and/or associated users of the original video.
In the embodiment of the present disclosure, if the duration of the user video is less than the duration of the original video, when synthesizing the user video and the original video to obtain the co-shot video, the co-shot video generation module is specifically configured to:
determine, according to the recording start time of the user video, a first video in the original video that corresponds to the recording start time and has the same duration as the user video;
synthesize the user video and the first video into a second video;
and obtain the co-shot video from the second video and the portions of the original video other than the first video.
In a third aspect, the present disclosure provides an electronic device comprising a processor and a memory;
a memory for storing operation instructions;
a processor for executing the method shown in any embodiment of the first aspect of the present disclosure by invoking the operation instructions.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method shown in any embodiment of the first aspect of the present disclosure.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
the video shooting method, the video shooting device, the electronic equipment and the computer readable storage medium can superpose and display the video shooting window on the video playing interface based on the video shooting triggering operation of the user, and can finish the shooting of the video of the user through the video shooting window on the basis of the original video; through this scheme, the user only needs carry out the relevant operation that the user video was shot at the video broadcast interface, can record the user video on former video basis through the video shooting window realization, finally obtain the video function of taking a match of user video and former video synthesis, the operation process is simple quick, because can reflect the user to the perception of former video through the user video, comment or watch the reaction, consequently, can conveniently demonstrate its opinion or reaction to former video through this scheme user, the user's of satisfying user's that can be better practical application demand, user's interactive experience has been improved, promote the interest that the video was shot.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings required for describing the embodiments are briefly introduced below.
Fig. 1 is a schematic flowchart of a video shooting method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a video playing interface provided in an embodiment of the present disclosure;
fig. 3A is a schematic diagram of a volume adjustment manner provided by an embodiment of the disclosure;
fig. 3B is a schematic diagram of another volume adjustment manner provided by the embodiment of the disclosure;
fig. 4A is a schematic diagram of another video playing interface provided by an embodiment of the present disclosure;
fig. 4B is a schematic diagram of another video playing interface provided by the embodiment of the present disclosure;
fig. 5 is a schematic view of still another video playing interface provided by an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a video shooting device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the technical ideas of the present disclosure, and are not to be construed as limiting the present disclosure.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
An embodiment of the present disclosure provides a video shooting method which, as shown in fig. 1, may include:
Step S110: receiving a video shooting triggering operation of a user through a video playing interface of an original video.
The video shooting triggering operation indicates that the user wants to shoot a user video based on the original video in the video playing interface; that is, it is the action by which the user triggers the start of user-video shooting. The specific form of the operation can be configured as required; for example, it may be a touch action at an operation position on the interface of the client application. The video playing interface is used for interaction between the terminal device and the user, and relevant operations of the user on the original video, such as sharing or co-shooting, can be received through this interface.
In practical applications, the operation may be triggered through a relevant trigger on the client, and the specific form of the trigger may be configured according to actual needs; for example, it may be a designated trigger button or input box on the client interface, or a voice instruction of the user. As a concrete example, a virtual "co-shoot" button may be displayed on the client application's interface, and the user's click on that button is the user's video shooting triggering operation.
Step S120: in response to the video shooting triggering operation, displaying the video shooting window superimposed on the video playing interface.
In practical applications, the video shooting window may be displayed superimposed at a preset position on the video playing interface. The preset position may be a display position pre-configured based on the size of the display interface of the user's terminal device, for example the upper left corner of the video playing interface. The size of the video shooting window is smaller than that of the display window of the original video, so that the video shooting window blocks only part of the original video's picture. The initial size of the video shooting window can be configured according to actual needs, and may be chosen to block as little of the original picture as possible, so that playback of the original video is not disturbed while the recorded picture remains large enough to watch during shooting. For example, the size of the video shooting window displayed on the terminal device may be adjusted automatically according to the size of the terminal device's display interface, such as one tenth or one fifth of the display interface.
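As a minimal sketch of the sizing rule mentioned above, the shooting window can be scaled so that its area is a fixed fraction (such as one fifth or one tenth) of the display interface and anchored near the upper-left corner. The area-based interpretation, the margin, and the function name are assumptions for illustration:

```python
def shooting_window_rect(display_w, display_h, fraction=0.2, margin=16):
    """Upper-left-anchored window whose area is `fraction` of the display."""
    scale = fraction ** 0.5           # same scale on both axes
    w = int(display_w * scale)
    h = int(display_h * scale)
    return (margin, margin, w, h)     # (x, y, width, height) in pixels

# On an assumed 1080x1920 display, one fifth of the screen area:
print(shooting_window_rect(1080, 1920, fraction=0.2))
```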
Step S130: receiving the video shooting operation of the user through the video playing interface.
Similarly, the video playing interface includes a relevant trigger for triggering the video shooting operation, such as a designated trigger button or input box, or a voice instruction of the user. Specifically, a virtual "shoot" button may be displayed on the client application's interface; the user's click on that button is the user's video shooting operation, which triggers the shooting function of the user's terminal device to capture the content to be shot.
Step S140: in response to the video shooting operation, shooting the user video while playing the original video, and displaying the user video through the video shooting window.
To make the comment content in the user video correspond to the content in the original video, the user video can be recorded synchronously while the original video is played. That is, when the video shooting operation is received, shooting of the user video starts and the original video is played synchronously. In this way, while recording the user video, the user can record the desired content or comments based on the video content currently being played in the original video, which further improves the user's interactive experience.
In practical applications, if the original video was in the playing state before the video shooting operation was received through the video playing interface and was paused automatically when that operation was received, or was paused by the user, the paused original video can resume playing when the video shooting operation is received; the original video is then played while the user video is shot, and the user video is displayed through the video shooting window.
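The pause/resume behavior described above can be sketched as a small state machine; the state names and the class are assumptions for illustration, not from the patent:

```python
class OriginalVideoPlayer:
    def __init__(self):
        self.state = "playing"   # the original video starts out playing
        self.recording = False

    def on_shooting_trigger(self):
        """Receiving the shooting trigger pauses the original video."""
        self.state = "paused"

    def on_shooting_operation(self):
        """Shooting starts: resume playback and record synchronously."""
        self.state = "playing"
        self.recording = True

player = OriginalVideoPlayer()
player.on_shooting_trigger()     # window shown, original video paused
player.on_shooting_operation()   # user video recording starts
print(player.state, player.recording)  # playing True
```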
It should be noted that, in the embodiment of the present disclosure, the user video may be a video that contains the user, that is, a video in which the user himself or herself is recorded. Of course, it may also be a video of another scene, recorded after the user adjusts the camera as required.
Step S150: synthesizing the user video and the original video to obtain a co-shot video.
The user video and the original video may be synthesized while the user video is being shot, or after shooting finishes. The resulting co-shot video contains the content of both the original video and the user video, so a viewer can watch the user video while watching the original video; when the user video is a reaction video, the viewer can learn the user's reaction to, or thoughts about, the original video by watching the co-shot video.
In practical applications, the original video may be a video that has not yet been co-shot, or a co-shot video obtained from a previous co-shooting.
According to the scheme of the embodiment of the present disclosure, the video shooting window can be displayed superimposed on the video playing interface based on the user's video shooting triggering operation, and the shooting of the user video can be completed through the video shooting window on the basis of the original video. With this scheme, the user only needs to perform the relevant shooting operations on the video playing interface to record a user video on the basis of the original video through the video shooting window and finally obtain a co-shot video synthesized from the user video and the original video; the operation process is simple and fast. Because the user video can reflect the user's feelings about, comments on, or reaction to the original video, the scheme makes it convenient for the user to express those opinions or reactions, better meets users' actual application requirements, improves the user's interactive experience, and makes video shooting more interesting.
As an example, fig. 2 is a schematic diagram of the video playing interface of an original video in a client application on a terminal device. The "co-shoot" virtual button displayed in the interface is the video shooting trigger button, and the user's click on that button is the user's video shooting triggering operation. After the video shooting triggering operation is received, a video shooting window A is displayed superimposed on the video playing interface B. The "shoot" virtual button shown in the interface is the shooting trigger button, and the user's click on that button is the user's video shooting operation; after this operation is received, the user video is shot through the video shooting window A, realizing the function of shooting the user video on the basis of the original video.
It should be noted that, in practical applications, the specific form of the video playing interface and the form of each button may be configured according to practical needs, and the above example is only an optional implementation manner.
In the embodiment of the present disclosure, synthesizing the user video and the original video to obtain the co-shot video may include:
synthesizing the audio information of the user video and the audio information of the original video to obtain the audio information of the co-shot video;
synthesizing the video information of the user video and the video information of the original video to obtain the video information of the co-shot video;
and synthesizing the audio information of the co-shot video and the video information of the co-shot video to obtain the co-shot video.
Since a video comprises video information and audio information, in the process of synthesizing the user video and the original video, the respective video information and audio information can be synthesized separately, and the synthesized video information and audio information are finally combined into the co-shot video.
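As one possible implementation of the three synthesis steps, not named in the patent, an ffmpeg filter graph can mix the two audio streams, overlay the scaled user video on the original video, and mux the results; the window position, scale, and file names are assumptions for illustration:

```python
def build_duet_command(original, user_clip, output):
    """Build an ffmpeg command that mixes audio, overlays video, then muxes."""
    filter_graph = (
        "[1:v]scale=iw/3:ih/3[pip];"          # shrink the user video to a window
        "[0:v][pip]overlay=W-w-10:10[vout];"  # overlay it near the top-right corner
        "[0:a][1:a]amix=inputs=2[aout]"       # mix the two audio tracks
    )
    return [
        "ffmpeg", "-i", original, "-i", user_clip,
        "-filter_complex", filter_graph,
        "-map", "[vout]", "-map", "[aout]", output,
    ]

cmd = build_duet_command("original.mp4", "user.mp4", "co_shot.mp4")
print(" ".join(cmd))
```

The command is only constructed here; in a real pipeline it would be passed to `subprocess.run` once both clips exist.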
In an embodiment of the present disclosure, the method may further include:
receiving a volume adjustment operation of the user through the video playing interface;
and in response to the volume adjustment operation, correspondingly adjusting the volume of the audio information of the original video and/or the audio information of the user video.
To further improve the user's interactive experience, the volume of the original video and/or the user video can be adjusted to meet different users' playing requirements. In practical applications, if the user does not need to adjust the volumes of the original video and the user video, the volume of the shot user video can be a preset volume, for example a volume consistent with that of the original video, or a preset value.
In practical applications, volume adjustment is realized through volume adjustment virtual buttons in the video playing interface. A volume adjustment virtual button may be a volume adjustment progress bar; two progress bars can then be configured, one for the original video and one for the user video, for example progress bar a and progress bar b, where the volume of the original video is adjusted through progress bar a and the volume of the user video through progress bar b. Different progress bars can be distinguished by different identifiers.
As an example, fig. 3A shows a schematic diagram of a volume adjustment progress bar in a volume adjustment interface. The user can adjust the volume by sliding the progress bar: sliding toward the top of the interface (the "+" direction) turns the volume up, and sliding toward the bottom (the "-" direction) turns it down. According to actual requirements, the progress bar can also be set horizontally, as shown in fig. 3B: sliding toward the left of the interface (the "-" direction) turns the volume down, and sliding toward the right (the "+" direction) turns it up.
It should be noted that, in practical applications, the volume adjustment interface and the video playing interface may be the same display interface or different display interfaces. If they are different, the volume adjustment interface can be displayed when a volume adjustment operation of the user is received through the video playing interface, and the volume is adjusted through that interface. Optionally, so as not to affect the recording and playing of the video, the volume adjustment interface can be displayed overlaid on the video playing interface, for example at an edge of the video playing interface.
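As a minimal sketch of the two-slider design described above (the function names, slider range and linear gain mapping are hypothetical illustrations, not part of the disclosure), each progress bar can be modeled as an independent gain factor applied to its audio track before mixing:

```python
def slider_to_gain(position: float) -> float:
    """Map a progress-bar position in [0.0, 1.0] to a linear gain factor.
    0.0 corresponds to the "-" end (muted), 1.0 to the "+" end (full volume)."""
    if not 0.0 <= position <= 1.0:
        raise ValueError("slider position must lie in [0, 1]")
    return position

def mix_tracks(original_samples, user_samples, original_pos, user_pos):
    """Mix two equal-length sample sequences, each scaled by its own
    volume progress bar (bar a for the original video, bar b for the
    user video)."""
    gain_a = slider_to_gain(original_pos)
    gain_b = slider_to_gain(user_pos)
    return [a * gain_a + b * gain_b
            for a, b in zip(original_samples, user_samples)]
```

Keeping the two gains independent is what lets a user, for example, mute the original soundtrack entirely while keeping the user video's audio at full volume.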
In an embodiment of the present disclosure, the method may further include:
receiving special effect adding operation of a user aiming at a special effect to be added through a video playing interface;
and responding to the special effect adding operation, and adding the special effect to be added into the user video.
In order to meet the video shooting requirements of different users, a function of adding a special effect to the user video can be provided; that is, the selected special effect to be added is added to the user video through the special effect adding operation of the user. The special effect can be added before, during, or after the shooting of the user video.
In practical application, the function of adding special effects in the user video can be realized by at least one of the following ways:
The first way: the special effect adding function can be realized through a special effect virtual button displayed on the video playing interface. The user's operation of clicking the button is the special effect adding operation for the special effect to be added, and the special effect corresponding to the button is added to the user video.
The second way: the special effect can be added by sliding the display interface of the user video. The user can add the corresponding special effect to the user video by sliding the display interface of the user video left or right with an operator, such as a finger.
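The two interaction paths above can be sketched as follows (the effect names and cycling behavior are hypothetical placeholders; the disclosure does not specify how swiping maps to effects):

```python
EFFECTS = ["sticker", "filter_warm", "filter_cool"]  # hypothetical effect names

def effect_from_button(effect_name: str) -> str:
    """First way: a special-effect virtual button maps directly to one effect."""
    if effect_name not in EFFECTS:
        raise ValueError(f"unknown effect: {effect_name}")
    return effect_name

def effect_from_swipe(current_index: int, direction: str) -> int:
    """Second way: swiping the user-video display left or right cycles
    through the available effects; returns the new effect index."""
    step = 1 if direction == "left" else -1
    return (current_index + step) % len(EFFECTS)
```

Both paths resolve to the same underlying operation of attaching the selected effect to the user video.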
In an embodiment of the present disclosure, the method may further include:
and providing an operation prompt option for the user, wherein the operation prompt option is used for providing, when the operation of the user is received, prompt information on the shooting operation of the co-shot video.
If the user is not clear about how to operate the co-shooting function, that is, how to shoot the user video on the basis of the original video and obtain the co-shot video, a prompt can be given through the operation prompt option. In practical application, the operation prompt option can be displayed in the video playing interface as a 'help' virtual button; the user obtains the corresponding prompt information by clicking the button. The prompt information can be presented to the user as an operation preview, or the user can be prompted how to operate in text form; the presentation mode of the prompt information is not limited in the present disclosure.
In the embodiment of the present disclosure, in response to a video shooting operation, before shooting a user video and playing an original video at the same time and displaying the user video through a video shooting window, the method may further include:
receiving a recording selection operation of a user aiming at a recording mode of a user video through a video playing interface, wherein the recording mode comprises at least one of a fast recording mode, a slow recording mode and a standard recording mode;
and responding to the recording selection operation, and determining the recording mode of the user video.
In order to meet the requirements of different users, before the user video is shot, a function of selecting the recording mode of the user video can be provided; that is, the user video is recorded in the recording mode selected through the recording selection operation of the user. The recording rates of the fast recording mode, the standard recording mode and the slow recording mode decrease in that order; through the selection of different recording modes, variable-speed recording of the user video can be realized, further improving the interactive experience of the user.
It can be understood that the fast recording mode, the slow recording mode and the standard recording mode are relative, the recording rates of the different recording modes are different, and the recording rate of each recording mode can be configured as required. For example, the fast recording mode is a recording mode with a recording rate of a first rate, the slow recording mode is a recording mode with a recording rate of a second rate, and the standard recording mode is a recording mode with a recording rate of a third rate, wherein the first rate is greater than the third rate, and the third rate is greater than the second rate.
In the embodiment of the present disclosure, after synthesizing the user video and the original video to obtain the snap-shot video, the method may further include:
receiving video storage operation and/or video release operation of a user;
and responding to the video saving operation, saving the co-shooting video locally, and/or responding to the video publishing operation, and publishing the co-shooting video.
After the co-shot video is obtained, a function of publishing and/or saving it can be provided for the user: the co-shot video is published to a specified video platform through the video publishing operation of the user so as to share it, or saved locally through the video saving operation of the user for later viewing. In practical application, after the co-shot video is obtained, the interface can jump to a video publishing interface through which the video publishing operation of the user is received, or the video publishing operation can be received directly through the video playing interface; the video publishing operation can be the user clicking a 'publish' virtual button.
In the embodiment of the present disclosure, issuing a snap-shot video in response to a video issuing operation may include:
responding to video release operation, and acquiring a user's view permission of a video in close shot;
and issuing the snap-shot video according to the checking permission of the snap-shot video.
In order to meet the privacy requirements of the user for the co-shot video, a function of configuring its viewing permission is provided: the user's viewing permission for the co-shot video is obtained through the video publishing operation, and the co-shot video is published according to that permission. Through the viewing permission, only the users covered by the permission can view the co-shot video, and users outside the permission cannot. In practical application, the viewing permission can be configured in advance and applied to any co-shot video to be published, or it can be configured when the current co-shot video is published, in which case the current co-shot video is published according to the permission configured at that time.
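A minimal sketch of such a permission check might look as follows (the permission data shape, with a "public" mode and a user allowlist, is an assumption for illustration; the disclosure does not fix a concrete format):

```python
def can_view(viewer: str, permission: dict) -> bool:
    """Decide whether `viewer` may watch a co-shot video.
    `permission` is a hypothetical structure such as
    {"mode": "public"} or {"mode": "allowlist", "users": {"b", "c"}}."""
    if permission.get("mode") == "public":
        return True
    # Otherwise only users explicitly covered by the permission may view.
    return viewer in permission.get("users", set())
```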
For example, the viewing permission may specify a group of users, and only the users in that group can view the co-shot video.
In an embodiment of the present disclosure, the method may further include:
generating a push message of the co-shooting video;
and sending the push information to the associated user of the user and/or the associated user of the original video.
In order to inform the people related to the co-shot video, a push message of the co-shot video can be generated when it is published, so that the associated users of the user and/or the associated users of the original video learn of the publication in time through the push message. An associated user of the user is a user having a relationship with the user; the scope of the relationship can be configured as required and may include, but is not limited to, the people the user follows or the people who follow the user. The associated users of the original video may include, but are not limited to, the publisher of the original video and the people related to the original video. For example, if the original video is itself a co-shot video whose publisher is user a, and the initial original video before that co-shooting was authored by user b, then the associated users of the original video may include user a and user b.
In practical applications, when the co-shot video is published, related attention information can be added to its title to indicate which users the publication is intended to reach; a receiver of the push message can be designated in the form of @ a certain user.
In an example, user a follows user b. User a publishes the co-shot video and associates user b, that is, user a @ user b, where 'user a @ user b' can be displayed in the title of the co-shot video; the push message of the co-shot video is then sent to user b, informing user b that user a has published the video.
In yet another example, user a follows user b and publishes the co-shot video, but does not @ user b; user b then does not receive the push message of the co-shot video.
In yet another example, user a does not follow user b, but @s user b when publishing the co-shot video; user b can then receive the push message of the co-shot video.
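The three examples above can be condensed into a recipient-selection sketch (the function and data shapes are hypothetical; per the examples, being @-mentioned, not following the publisher, is what triggers delivery, and associated users of the original video may also be notified):

```python
def push_recipients(mentioned_users, original_video_associates):
    """Return the set of users who receive the co-shot video push message:
    users @-mentioned in the title, plus the associated users of the
    original video (e.g. its publisher and earlier co-shot authors).
    Followers who are not mentioned receive nothing, matching the
    examples above."""
    return set(mentioned_users) | set(original_video_associates)
```

For instance, with user b mentioned and no original-video associates, only user b is notified, regardless of whether b follows the publisher.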
In the embodiment of the present disclosure, if the duration of the user video is less than the duration of the original video, synthesizing the user video and the original video to obtain the co-shot video may include:
determining a first video which corresponds to the recording starting time and is consistent with the duration of the user video in the original video according to the recording starting time of the user video;
synthesizing the user video and the first video into a second video;
and obtaining a snap shot video according to the second video and the videos except the first video in the original video.
Based on the playing content of the original video, the duration of the user video recorded by the user may or may not be consistent with the duration of the original video, and the user can select the recording starting time of the user video based on the content of the original video, so that when the co-shot video is played, the content of the user video corresponds to the content of the original video, further improving the interactive experience of the user.
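The segmentation described in the steps above can be sketched as follows (units and names are hypothetical illustrations, not the disclosed implementation): the original video splits into a head played as-is, a middle slice composited with the user video (the "first video" and "second video"), and a tail played as-is.

```python
def compose_timeline(original_len, user_len, start):
    """Split the co-shot timeline when the user video is shorter than the
    original. Returns (head, middle, tail) segment lengths in seconds:
    head  - original only, before the recording starting time
    middle - the user video composited with the matching original slice
    tail  - original only, after the user video ends."""
    if start + user_len > original_len:
        raise ValueError("user video must fit inside the original video")
    head = start
    middle = user_len
    tail = original_len - start - user_len
    return head, middle, tail
```

The three segments always cover the full original duration, so the co-shot video keeps the original video's length.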
In an embodiment of the present disclosure, the method may further include: and hiding the virtual buttons of the corresponding functions in the video playing interface.
In practical applications, virtual identifiers representing different functions can be displayed in the video playing interface, such as a virtual button a indicating the start of shooting, a progress bar b indicating the shooting progress, a virtual button c for adding a special effect, and a virtual button d for publishing the co-shot video; schematic diagrams of the video playing interface are shown in fig. 4A and 4B. In order to further improve the interaction experience of the user, the virtual identifiers in fig. 4A other than the virtual button a and the progress bar b can be hidden, for example the virtual buttons c and d; the interface after hiding is shown in fig. 4B. Hiding the virtual identifiers keeps the video playing interface tidy.
In practical applications, a virtual button for hiding function buttons can also be set in the interface, through which the user sets which function buttons are hidden or restored to display. Specifically, when an operation of the user on this button is received, the user can select which virtual buttons to hide, or select to restore the display of virtual buttons that were previously hidden.
In the embodiment of the present disclosure, the shape of the video shooting window is not limited, and includes a circle, a rectangle, and other shapes, and may be configured according to actual requirements.
In an embodiment of the present disclosure, the method may further include:
receiving window moving operation of a user for a video shooting window;
and responding to the window moving operation, and adjusting the video shooting window to a corresponding area above the video playing interface.
The user can adjust the position of the video shooting window so as to meet the position requirements of different users for the video shooting window on the video playing interface. In practical application, the position of the video shooting window can be adjusted through either of the following window moving operations of the user:
The first way: the user can drag the video shooting window with an operator, such as a finger, to adjust its position. When the operator contacts the video shooting window and drags it, the position of the window is being adjusted; when the operator leaves the window, that is, the dragging stops, the position where the dragging stopped is the corresponding area of the video shooting window on the video playing interface.
The second way: the user can adjust the position of the video shooting window through a position progress bar displayed in the video playing interface, determining the corresponding area of the video shooting window on the video playing interface by sliding the position progress bar.
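The second way can be sketched as a simple mapping from a progress value to a window coordinate (the linear mapping along one axis is an assumed illustration; the disclosure does not specify how the progress bar maps to positions):

```python
def window_x_from_progress(progress, interface_width, window_width):
    """Map a position progress bar value in [0.0, 1.0] to the left edge
    of the video shooting window, keeping the window fully inside the
    video playing interface."""
    progress = min(max(progress, 0.0), 1.0)  # clamp out-of-range input
    return progress * (interface_width - window_width)
```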
In the embodiment of the present disclosure, in step S140, in response to the window moving operation, adjusting the video shooting window to a corresponding area on the video playing interface may include:
responding to window moving operation, and displaying a pre-configured window adjusting boundary line on a video playing interface, wherein the window adjusting boundary line is used for limiting a display area of a video shooting window;
determining a current display area of a video shooting window according to window moving operation and a window adjusting boundary line;
and adjusting the video shooting window to a corresponding position above the video playing interface according to the current display area.
In practical application, the window adjustment boundary line can be preconfigured based on the display interface sizes of various terminal devices, so that the content shot in the video shooting window can be adaptively displayed in the display interface of any terminal device. Based on this configuration, when the window moving operation of the user is received, the preconfigured window adjustment boundary line is displayed on the video playing interface, giving the user a basis for adjusting the video shooting window.
In practical applications, the window adjustment boundary line may be configured according to requirements. For example, the window adjustment boundary line may be a guiding line located at a preconfigured position in the video playing interface; the preconfigured position may include at least one of the top, bottom, left side, and right side of the video playing interface, and the guiding lines at different positions define the adjustment range of the corresponding side of the video shooting window within the video playing interface.
As shown in fig. 5, in a video playing interface, the top and left guiding lines are taken as the window adjustment boundary lines a and b, which are two mutually perpendicular lines. When the user drags the video shooting window, a window adjustment operation is triggered and the boundary lines a and b are displayed in the video playing interface. The user drags the video shooting window f from position A to position B, and based on position B, the video shooting window f is adjusted to the position on the video playing interface corresponding to position B, thereby realizing the adjustment of the video shooting window.
In the embodiment of the present disclosure, determining the current display area of the video capture window according to the window moving operation and the window adjusting boundary line may include:
determining a first display area of a video shooting window according to window moving operation;
if the distance between the first display area and any window adjusting boundary line is not less than the set distance, determining the first display area as the current display area;
if the distance between the first display area and any window adjusting boundary line is smaller than the set distance, determining the second display area as the current display area;
the second display area is an area obtained by translating the first display area to any window adjustment boundary line, and at least one position point of the second display area is overlapped with any window adjustment boundary line.
Within the adjustment range defined by the window adjustment boundary lines, the video shooting window has relatively better display positions, for example display areas close to a boundary line. During the adjustment, unless the user has a specific requirement for the window's display area on the video playing interface, the user cannot accurately locate such a better position; therefore, the distance between the display area of the video shooting window and the window adjustment boundary lines during the adjustment can be used to help the user move the window to a relatively better position on the video playing interface.
Specifically, during the adjustment of the video shooting window, when the distance between the first display area of the window and every window adjustment boundary line is not less than the set distance, it indicates that the user may wish to place the window in a non-edge area of the video playing interface, and the first display area is used as the area to which the window is to be adjusted, that is, the current display area. When the distance between the first display area and any window adjustment boundary line is less than the set distance, it indicates that the user may wish to place the window at the edge of the video playing interface to minimize the occlusion of the original video, and the current display area is determined as the second display area at that boundary line.
In practical application, if the video shooting window is rectangular and the window adjustment boundary line is a straight line, the first display area is rectangular, and the area obtained by translating the first display area to the boundary line is the area where one edge of the first display area coincides with the boundary line; if the video shooting window is circular and the window adjustment boundary line is a straight line, the first display area is circular, and the translated area is the area where at least one position point of the first display area coincides with the boundary line. It can be understood that, when an adjustment boundary line exists, the display area of the shooting window cannot exceed the boundary line no matter how the window is adjusted.
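The snap-to-edge behavior described above can be sketched as follows (coordinates, boundary representation, and the axis-aligned translation are assumed for illustration; the disclosure does not prescribe a coordinate system):

```python
def snap_to_boundary(left, top, boundaries, snap_distance):
    """Determine the current display area of a dragged rectangular window.
    `boundaries` holds window-adjustment boundary lines, e.g.
    {"left": 0, "top": 120} (hypothetical coordinates). If the first
    display area (left, top) ends within `snap_distance` of a line, the
    window is translated so the corresponding edge coincides with that
    line (the second display area); otherwise the dragged position is kept."""
    if "left" in boundaries and abs(left - boundaries["left"]) < snap_distance:
        left = boundaries["left"]
    if "top" in boundaries and abs(top - boundaries["top"]) < snap_distance:
        top = boundaries["top"]
    return left, top
```

Dropping the window 5 px from the left boundary line with a 12 px set distance snaps it flush to that line, while a window far from every line stays where the drag ended.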
In an embodiment of the present disclosure, the method may further include:
receiving window size adjusting operation of a user for a video shooting window;
and responding to the window size adjusting operation, and adjusting the video shooting window to the corresponding display size.
The size of the video shooting window can be set to a preset default value, or adjusted by the user based on actual requirements. In practical application, the video playing interface includes a trigger identifier for triggering the window size adjustment operation, such as a specified trigger button or input box; the operation can also be triggered by the user's voice. Specifically, the trigger identifier can be a 'window' virtual button displayed on the video playing interface, through which the user triggers the window size adjustment operation and adjusts the video shooting window to the corresponding display size.
Based on the same principle as the method shown in fig. 1, an embodiment of the present disclosure also provides a video shooting apparatus 20, as shown in fig. 6, where the apparatus 20 may include: a trigger operation reception module 210, a photographing window display module 220, a photographing operation reception module 230, a user video photographing module 240, and a snap video generation module 250, wherein,
the trigger operation receiving module 210 is configured to receive a video shooting trigger operation of a user through a video playing interface of an original video;
a shooting window display module 220, configured to display a video shooting window overlaid on a video playing interface in response to a video shooting trigger operation;
a shooting operation receiving module 230, configured to receive a video shooting operation of a user through a video playing interface;
a user video shooting module 240, configured to, in response to a video shooting operation, shoot a user video while playing an original video, and display the user video through a video shooting window;
and a co-shooting video generating module 250, configured to synthesize the user video and the original video to obtain a co-shooting video.
According to the scheme in the embodiment of the present disclosure, a video shooting window can be displayed overlaid on the video playing interface based on the video shooting trigger operation of the user, and the shooting of the user video can be completed through the video shooting window on the basis of the original video. Through this scheme, the user only needs to perform the relevant operations of user video shooting on the video playing interface to record the user video on the basis of the original video through the video shooting window, and finally obtain the co-shot video synthesized from the user video and the original video; the operation process is simple and fast. Since the user video can reflect the user's perception of, comments on, or reaction to the original video, the user can conveniently express opinions or reactions to the original video through this scheme, which better satisfies the practical application demands of users, improves the interactive experience, and enhances the interest of video shooting.
In the embodiment of the present disclosure, when the snap video generation module 250 synthesizes the user video and the original video to obtain the snap video, it is specifically configured to:
synthesizing the audio information of the user video and the audio information of the original video to obtain the audio information of the co-shooting video;
synthesizing video information of a user video and video information of an original video to obtain video information of a co-shooting video;
and synthesizing the audio information of the co-shooting video and the video information of the co-shooting video to obtain the co-shooting video.
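The three-step composition performed by the module above can be illustrated with a toy sketch (plain lists stand in for real audio buffers and video frames; a real implementation would use a media framework, which the disclosure does not specify):

```python
def synthesize_co_shot(user_audio, orig_audio, user_frames, orig_frames):
    """Mirror the three steps above: (1) synthesize the two audio tracks,
    (2) synthesize the two video tracks as picture-in-picture frames,
    (3) combine the results into the co-shot video."""
    mixed_audio = [u + o for u, o in zip(user_audio, orig_audio)]          # step 1
    overlaid = [{"bg": o, "pip": u}                                        # step 2
                for u, o in zip(user_frames, orig_frames)]
    return {"audio": mixed_audio, "video": overlaid}                       # step 3
```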
In an embodiment of the present disclosure, the apparatus further includes:
and the volume adjusting module is used for receiving volume adjusting operation of a user through the video playing interface, responding to the volume adjusting operation, and correspondingly adjusting the volume of the audio information of the original video and/or the audio information of the user video.
In an embodiment of the present disclosure, the apparatus may further include:
and the special effect adding module is used for receiving the special effect adding operation of the user aiming at the special effect to be added through the video playing interface, responding to the special effect adding operation and adding the special effect to be added into the video of the user.
In an embodiment of the present disclosure, the apparatus may further include:
and the operation prompting module is used for providing an operation prompt option for the user, wherein the operation prompt option is used for providing, when the operation of the user is received, prompt information on the shooting operation of the co-shot video.
In the embodiment of the present disclosure, the user video shooting module 240 is further configured to:
before shooting the user video in response to the video shooting operation while playing the original video and displaying the user video through the video shooting window, receiving, through the video playing interface, a recording selection operation of the user for the recording mode of the user video, and determining the recording mode of the user video in response to the recording selection operation, wherein the recording mode may comprise at least one of a fast recording mode, a slow recording mode and a standard recording mode.
In an embodiment of the present disclosure, the apparatus may further include:
and the co-shot video processing module is used for, after the co-shot video is obtained by synthesizing the user video and the original video, receiving the video saving operation and/or the video publishing operation of the user, saving the co-shot video locally in response to the video saving operation, and/or publishing the co-shot video in response to the video publishing operation.
In the embodiment of the present disclosure, when responding to a video publishing operation and publishing a snap video, the snap video processing module is specifically configured to:
responding to video release operation, and acquiring a user's view permission of a video in close shot;
and issuing the snap-shot video according to the checking permission of the snap-shot video.
In an embodiment of the present disclosure, the apparatus may further include:
and the push message sending module is used for generating a push message of the co-shooting video and sending the push message to a user associated with the user and/or a user associated with the original video.
In the embodiment of the present disclosure, if the duration of the user video is less than the duration of the original video, the co-shot video generating module 250 is specifically configured, when synthesizing the user video and the original video to obtain the co-shot video, to:
determining a first video which corresponds to the recording starting time and is consistent with the duration of the user video in the original video according to the recording starting time of the user video;
synthesizing the user video and the first video into a second video;
and obtaining a snap shot video according to the second video and the videos except the first video in the original video.
The video shooting apparatus of the embodiment of the present disclosure can execute the video shooting method provided by the embodiments of the present disclosure, and their implementation principles are similar. The actions executed by the modules of the video shooting apparatus correspond to the steps of the video shooting method in the embodiments of the present disclosure; for the detailed functional description of the modules, reference may be made to the description of the corresponding video shooting method shown above, and details are not repeated here.
Based on the same principle as the video photographing method in the embodiment of the present disclosure, the present disclosure provides an electronic device including a processor and a memory; a memory for storing operating instructions; a processor for executing the method as shown in any one of the embodiments of the video capturing method of the present disclosure by calling an operation instruction.
Based on the same principles as the video capturing methods in the embodiments of the present disclosure, the present disclosure provides a computer readable storage medium storing at least one instruction, at least one program, code set, or set of instructions that is loaded and executed by a processor to implement a method as shown in any one of the embodiments of the video capturing methods of the present disclosure.
In the embodiment of the present disclosure, as shown in fig. 7, a schematic structural diagram of an electronic device 30 (e.g., a terminal device or a server implementing the method shown in fig. 1) suitable for implementing the embodiment of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 30 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic device 30 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 30 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 30 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire at least two internet protocol addresses; send a node evaluation request comprising the at least two internet protocol addresses to a node evaluation device, wherein the node evaluation device selects an internet protocol address from the at least two internet protocol addresses and returns it; and receive the internet protocol address returned by the node evaluation device; wherein the acquired internet protocol address indicates an edge node in a content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a node evaluation request comprising at least two internet protocol addresses; select an internet protocol address from the at least two internet protocol addresses; and return the selected internet protocol address; wherein the received internet protocol address indicates an edge node in a content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or any combination thereof, including object oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself; for example, the first acquiring unit may also be described as a "unit for acquiring at least two internet protocol addresses".
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, a technical solution may be formed by replacing the features described above with (but not limited to) features disclosed in the present disclosure that have similar functions.

Claims (13)

1. A video capture method, comprising:
receiving a video shooting triggering operation of a user through a video playing interface of an original video;
responding to the video shooting triggering operation, and displaying a video shooting window on the video playing interface in an overlapping mode;
receiving the video shooting operation of the user through the video playing interface;
responding to the video shooting operation, shooting a user video, simultaneously playing the original video, and displaying the user video through the video shooting window, wherein the user video is an interactive video aiming at the original video;
synthesizing the user video and the original video to obtain a co-shot video;
the method further comprises the following steps:
receiving window moving operation of the user aiming at the video shooting window;
responding to the window moving operation, and displaying a pre-configured window adjusting boundary line on the video playing interface, wherein the window adjusting boundary line is used for limiting the display area of the video shooting window;
determining the current display area of the video shooting window according to the window moving operation and the window adjusting boundary line;
adjusting the video shooting window to a corresponding position above the video playing interface according to the current display area;
the window adjusting boundary line is a guiding line located at a preconfigured position in the video playing interface, the preconfigured position includes at least one position of the top, the bottom, the left side and the right side of the video playing interface, and the guiding lines at different positions can define an adjusting range of a corresponding position of the video shooting window in the video playing interface.
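The window-adjustment step of claim 1 can be pictured with a small sketch. This is purely illustrative and not the patented implementation: the preconfigured guide lines are modeled as fixed pixel margins, and the dragged shooting window's position is clamped to the region they bound. All names and the margin value are assumptions.

```python
# Hypothetical model of claim 1's guide lines: fixed margins at the top,
# bottom, left, and right of the playing interface. A dragged window's
# top-left corner is clamped so the window stays inside that region.

def clamp_window(drag_x, drag_y, win_w, win_h, screen_w, screen_h, margin=40):
    """Clamp the window's top-left corner inside the guide-line region."""
    min_x, min_y = margin, margin              # left and top guide lines
    max_x = screen_w - margin - win_w          # right guide line
    max_y = screen_h - margin - win_h          # bottom guide line
    x = max(min_x, min(drag_x, max_x))
    y = max(min_y, min(drag_y, max_y))
    return x, y

# A drag far past the right edge snaps back inside the boundary lines.
pos = clamp_window(drag_x=2000, drag_y=10, win_w=320, win_h=180,
                   screen_w=1080, screen_h=1920)
```

In this model, "determining the current display area" reduces to clamping against each guide line, which matches the claim's statement that the lines define the adjusting range of the corresponding window edge.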
2. The method according to claim 1, wherein the synthesizing the user video and the original video to obtain a co-shot video comprises:
synthesizing the audio information of the user video and the audio information of the original video to obtain audio information of the co-shot video;
synthesizing the video information of the user video and the video information of the original video to obtain video information of the co-shot video;
and synthesizing the audio information of the co-shot video and the video information of the co-shot video to obtain the co-shot video.
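The three synthesis sub-steps of claim 2 can be sketched at the sample/frame level. This is an illustrative model only; `mix_audio` and `overlay_frame` are invented names standing in for audio synthesis and picture-in-picture video synthesis, not the patented implementation.

```python
import numpy as np

# Illustrative model: audio synthesis as a sample average of the two tracks,
# video synthesis as pasting the shooting-window frame onto the original frame.

def mix_audio(user_audio, original_audio):
    """Synthesize the two audio tracks by a simple sample average."""
    n = min(len(user_audio), len(original_audio))
    return (user_audio[:n] + original_audio[:n]) / 2.0

def overlay_frame(original_frame, user_frame, top=0, left=0):
    """Synthesize video by pasting the user frame onto the original frame."""
    out = original_frame.copy()
    h, w = user_frame.shape[:2]
    out[top:top + h, left:left + w] = user_frame
    return out

audio = mix_audio(np.ones(4), np.zeros(6))
frame = overlay_frame(np.zeros((4, 4), dtype=np.uint8),
                      np.full((2, 2), 255, dtype=np.uint8), top=1, left=1)
```

The third sub-step of claim 2 (muxing the synthesized audio with the synthesized frames into a container file) is a standard encoding step and is omitted here.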
3. The method of claim 2, further comprising:
receiving volume adjustment operation of the user through the video playing interface;
and responding to the volume adjustment operation, and correspondingly adjusting the volume of the audio information of the original video and/or the audio information of the user video.
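Claim 3's volume adjustment can be sketched as a gain factor applied independently to either track. `adjust_volume` is a hypothetical helper invented for illustration, not taken from the patent.

```python
# Illustrative sketch of claim 3: a value chosen through the playing interface
# becomes a gain factor applied to the original-video and/or user-video track.

def adjust_volume(samples, gain):
    """Scale each audio sample by the user-selected gain (0.0 mutes, 1.0 keeps)."""
    return [s * gain for s in samples]

original_track = adjust_volume([0.2, -0.4, 0.8], gain=0.5)  # halve the original video
user_track = adjust_volume([0.1, 0.1, 0.1], gain=0.0)       # mute the user video
```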
4. The method of any of claims 1 to 3, further comprising:
receiving special effect adding operation of the user for the special effect to be added through the video playing interface;
responding to the special effect adding operation, and adding the special effect to be added into the user video.
5. The method of any of claims 1 to 3, further comprising:
and providing an operation prompt option for the user, wherein the operation prompt option is used for providing the user with prompt information on the shooting operation of the co-shot video when an operation of the user is received.
6. The method according to any one of claims 1 to 3, wherein before the shooting a user video while playing the original video and displaying the user video through the video shooting window in response to the video shooting operation, the method further comprises:
receiving a recording selection operation of the user for a recording mode of a user video through the video playing interface, wherein the recording mode comprises at least one of a fast recording mode, a slow recording mode and a standard recording mode;
and responding to the recording selection operation, and determining the recording mode of the user video.
7. The method according to any one of claims 1 to 3, wherein after the synthesizing the user video and the original video to obtain a co-shot video, the method further comprises:
receiving a video saving operation and/or a video publishing operation of the user;
and responding to the video saving operation by saving the co-shot video locally, and/or responding to the video publishing operation by publishing the co-shot video.
8. The method of claim 7, wherein the publishing the co-shot video in response to the video publishing operation comprises:
responding to the video publishing operation by acquiring a viewing permission of the user's co-shot video;
and publishing the co-shot video according to the viewing permission of the co-shot video.
9. The method of any of claims 1 to 3, further comprising:
generating a push message of the co-shot video;
and sending the push message to the associated user of the user and/or the associated user of the original video.
10. The method according to any one of claims 1 to 3, wherein if the duration of the user video is less than the duration of the original video, the synthesizing the user video and the original video to obtain a co-shot video comprises:
determining, according to the recording start moment of the user video, a first video in the original video that corresponds to the recording start moment and matches the duration of the user video;
synthesizing the user video and the first video into a second video;
and obtaining the co-shot video according to the second video and the portions of the original video other than the first video.
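The splicing logic of claim 10 can be sketched on frame lists. This is an illustrative model only: representing composited frames as tuples and videos as Python lists is an assumption made for clarity, not the patented implementation.

```python
# Illustrative model of claim 10: when the user video is shorter than the
# original, only the co-occurring segment (the "first video") is composited
# into a "second video"; the rest of the original is kept unchanged.

def compose_short_user_video(original, user, start):
    """original and user are frame lists; start is the recording-start index."""
    first = original[start:start + len(user)]        # first video: overlaps the user video
    second = [(o, u) for o, u in zip(first, user)]   # second video: composited frame pairs
    return original[:start] + second + original[start + len(user):]

# A 2-frame user video recorded starting at frame 2 of a 6-frame original.
timeline = compose_short_user_video(list("ABCDEF"), list("xy"), start=2)
```

The resulting timeline keeps the original frames before and after the recording window and composites only the overlapping segment, which is exactly the split the claim describes.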
11. A video shooting apparatus, comprising:
the trigger operation receiving module is used for receiving the video shooting trigger operation of a user through a video playing interface of an original video;
the shooting window display module is used for responding to the video shooting triggering operation and displaying the video shooting window on the video playing interface in an overlapping mode;
the shooting operation receiving module is used for receiving the video shooting operation of the user through the video playing interface;
the user video shooting module is used for responding to the video shooting operation, shooting a user video, simultaneously playing the original video, and displaying the user video through the video shooting window, wherein the user video is an interactive video aiming at the original video;
the co-shot video generation module is used for synthesizing the user video and the original video to obtain a co-shot video;
the device further comprises:
the window position adjusting module is used for receiving window moving operation of the user aiming at the video shooting window; responding to the window moving operation, and displaying a pre-configured window adjusting boundary line on the video playing interface, wherein the window adjusting boundary line is used for limiting the display area of the video shooting window; determining a current display area of the video shooting window according to the window moving operation and the window adjusting boundary line, and adjusting the video shooting window to a corresponding position above the video playing interface according to the current display area;
the window adjusting boundary line is a guiding line located at a preconfigured position in the video playing interface, the preconfigured position includes at least one position of the top, the bottom, the left side and the right side of the video playing interface, and the guiding lines at different positions can define an adjusting range of a corresponding position of the video shooting window in the video playing interface.
12. An electronic device, comprising:
a processor and a memory;
the memory is used for storing computer operation instructions;
the processor is used for executing the method of any one of claims 1 to 10 by calling the computer operation instructions.
13. A computer readable storage medium having stored thereon at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method of any one of claims 1 to 10.
CN201811223743.XA 2018-10-19 2018-10-19 Video shooting method and device, electronic equipment and computer readable storage medium Active CN108989691B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811223743.XA CN108989691B (en) 2018-10-19 2018-10-19 Video shooting method and device, electronic equipment and computer readable storage medium
PCT/CN2018/124065 WO2020077855A1 (en) 2018-10-19 2018-12-26 Video photographing method and apparatus, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811223743.XA CN108989691B (en) 2018-10-19 2018-10-19 Video shooting method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108989691A CN108989691A (en) 2018-12-11
CN108989691B true CN108989691B (en) 2021-04-06

Family

ID=64544498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811223743.XA Active CN108989691B (en) 2018-10-19 2018-10-19 Video shooting method and device, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN108989691B (en)
WO (1) WO2020077855A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108989691B (en) * 2018-10-19 2021-04-06 北京微播视界科技有限公司 Video shooting method and device, electronic equipment and computer readable storage medium
CN109547841B (en) * 2018-12-20 2020-02-07 北京微播视界科技有限公司 Short video data processing method and device and electronic equipment
CN109862412B (en) * 2019-03-14 2021-08-13 广州酷狗计算机科技有限公司 Method and device for video co-shooting and storage medium
CN110087143B (en) * 2019-04-26 2020-06-09 北京谦仁科技有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN110209870B (en) * 2019-05-10 2021-11-09 杭州网易云音乐科技有限公司 Music log generation method, device, medium and computing equipment
CN110225020A (en) * 2019-06-04 2019-09-10 杭州网易云音乐科技有限公司 Audio frequency transmission method, system, electronic equipment and computer readable storage medium
CN110336968A (en) * 2019-07-17 2019-10-15 广州酷狗计算机科技有限公司 Video recording method, device, terminal device and storage medium
CN110602394A (en) * 2019-09-06 2019-12-20 北京达佳互联信息技术有限公司 Video shooting method and device and electronic equipment
CN110784652A (en) 2019-11-15 2020-02-11 北京达佳互联信息技术有限公司 Video shooting method and device, electronic equipment and storage medium
CN111629151B (en) * 2020-06-12 2023-01-24 北京字节跳动网络技术有限公司 Video co-shooting method and device, electronic equipment and computer readable medium
CN111726536B (en) * 2020-07-03 2024-01-05 腾讯科技(深圳)有限公司 Video generation method, device, storage medium and computer equipment
CN112004108B (en) * 2020-08-26 2022-11-01 深圳创维-Rgb电子有限公司 Live video recording processing method and device, intelligent terminal and storage medium
CN113068053A (en) * 2021-03-15 2021-07-02 北京字跳网络技术有限公司 Interaction method, device, equipment and storage medium in live broadcast room
CN113395588A (en) * 2021-06-23 2021-09-14 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN113473224B (en) * 2021-06-29 2023-05-23 北京达佳互联信息技术有限公司 Video processing method, video processing device, electronic equipment and computer readable storage medium
CN113590076B (en) * 2021-07-12 2024-03-29 杭州网易云音乐科技有限公司 Audio processing method and device
CN113542844A (en) * 2021-07-28 2021-10-22 北京优酷科技有限公司 Video data processing method, device and storage medium
CN115720292A (en) * 2021-08-23 2023-02-28 北京字跳网络技术有限公司 Video recording method, apparatus, storage medium, and program product
CN113783997B (en) * 2021-09-13 2022-08-23 北京字跳网络技术有限公司 Video publishing method and device, electronic equipment and storage medium
CN115442519B (en) * 2022-08-08 2023-12-15 珠海普罗米修斯视觉技术有限公司 Video processing method, apparatus and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104994314A (en) * 2015-08-10 2015-10-21 合一网络技术(北京)有限公司 Method and system for controlling picture in picture video on mobile terminal through gesture
CN106802759A (en) * 2016-12-21 2017-06-06 华为技术有限公司 The method and terminal device of video playback

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102255830B1 (en) * 2014-02-05 2021-05-25 삼성전자주식회사 Apparatus and Method for displaying plural windows
CN104125412B (en) * 2014-06-16 2018-07-06 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN104967902B (en) * 2014-09-17 2018-10-12 腾讯科技(北京)有限公司 Video sharing method, apparatus and system
CN107920274B (en) * 2017-10-27 2020-08-04 优酷网络技术(北京)有限公司 Video processing method, client and server
CN107944397A (en) * 2017-11-27 2018-04-20 腾讯音乐娱乐科技(深圳)有限公司 Video recording method, device and computer-readable recording medium
CN108566519B (en) * 2018-04-28 2022-04-12 腾讯科技(深圳)有限公司 Video production method, device, terminal and storage medium
CN108989691B (en) * 2018-10-19 2021-04-06 北京微播视界科技有限公司 Video shooting method and device, electronic equipment and computer readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104994314A (en) * 2015-08-10 2015-10-21 合一网络技术(北京)有限公司 Method and system for controlling picture in picture video on mobile terminal through gesture
CN106802759A (en) * 2016-12-21 2017-06-06 华为技术有限公司 The method and terminal device of video playback

Also Published As

Publication number Publication date
CN108989691A (en) 2018-12-11
WO2020077855A1 (en) 2020-04-23

Similar Documents

Publication Publication Date Title
CN108989691B (en) Video shooting method and device, electronic equipment and computer readable storage medium
JP7053869B2 (en) Video generation methods, devices, electronics and computer readable storage media
WO2020077856A1 (en) Video photographing method and apparatus, electronic device and computer readable storage medium
CN109167950B (en) Video recording method, video playing method, device, equipment and storage medium
CN109275028B (en) Video acquisition method, device, terminal and medium
US11037600B2 (en) Video processing method and apparatus, terminal and medium
CN109600656B (en) Video list display method and device, terminal equipment and storage medium
CN104796795A (en) Video content publishing method and device
EP4343580A1 (en) Media file processing method and apparatus, device, readable storage medium, and product
CN111970571A (en) Video production method, device, equipment and storage medium
CN110740261A (en) Video recording method, device, terminal and storage medium
CN113225483A (en) Image fusion method and device, electronic equipment and storage medium
CN111367447A (en) Information display method and device, electronic equipment and computer readable storage medium
CN110784674B (en) Video processing method, device, terminal and storage medium
CN109636917B (en) Three-dimensional model generation method, device and hardware device
CN115830224A (en) Multimedia data editing method and device, electronic equipment and storage medium
KR20150048961A (en) System for servicing hot scene, method of servicing hot scene and apparatus for the same
CN110769129B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111385638B (en) Video processing method and device
EP4354885A1 (en) Video generation method and apparatus, device, storage medium, and program product
US20240129427A1 (en) Video processing method and apparatus, and terminal and storage medium
CN116112617A (en) Method and device for processing performance picture, electronic equipment and storage medium
CN116301528A (en) Interaction method, device, equipment and storage medium
CN116304132A (en) Interaction method, device, equipment and storage medium based on media content
CN116847147A (en) Special effect video determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant