WO2020077855A1 - Video shooting method, apparatus, electronic device, and computer-readable storage medium - Google Patents
Video shooting method, apparatus, electronic device, and computer-readable storage medium
- Publication number: WO2020077855A1
- Application number: PCT/CN2018/124065
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- user
- shooting
- original
- production
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
Definitions
- The present disclosure relates to the field of Internet technology, and in particular to a video shooting method, apparatus, electronic device, and computer-readable storage medium.
- Users can express their thoughts on, or viewing experience of, other videos on the platform in the form of videos, thereby interacting with those videos.
- the present disclosure provides a video shooting method, the method includes:
- the video shooting window is superimposed and displayed on the video playback interface
- synthesizing the user video and the original video to obtain a co-production video includes:
- the method further includes:
- the volume of the audio information of the original video and/or the audio information of the user video is adjusted accordingly.
- the method further includes:
- the special effect to be added is added to the user video.
- the method further includes:
- An operation prompt option is provided to the user; the operation prompt option is used to provide the user with prompt information for the co-shooting operation when the user's operation on the option is received.
- the method further includes: before shooting the user video in response to the video shooting operation, playing the original video, and displaying the user video through the video shooting window, receiving the user's recording-mode selection operation, and determining the recording mode of the user video in response to that operation, where the recording mode includes at least one of a fast recording mode, a slow recording mode, and a standard recording mode.
- the method further includes: after synthesizing the user video and the original video to obtain a co-production video,
- the co-production video is saved locally, and/or, in response to the video publishing operation, the co-production video is published.
- publishing the co-produced video in response to the video publishing operation includes:
- the method further includes: when publishing the co-production video in response to the video publishing operation,
- synthesizing the user video and the original video to obtain a co-production video includes:
- according to the recording start time of the user video, determining a first video in the original video that corresponds to the recording start time and is consistent with the duration of the user video;
- the co-production video is obtained.
- the present disclosure provides a video shooting device including:
- a trigger operation receiving module, configured to receive the user's video shooting trigger operation through the video playback interface of the original video;
- a shooting window display module, configured to superimpose and display the video shooting window on the video playback interface in response to the video shooting trigger operation;
- a shooting operation receiving module, configured to receive the user's video shooting operation through the video playback interface;
- a user video shooting module, configured to shoot the user video in response to the video shooting operation, play the original video at the same time, and display the user video through the video shooting window;
- a co-production video generation module, configured to synthesize the user video and the original video to obtain the co-production video.
- the co-production video generation module may be configured to:
- the device further includes:
- the volume adjustment module is used to receive the user's volume adjustment operation through the video playback interface, and adjust the volume of the audio information of the original video and/or the audio information of the user video in response to the volume adjustment operation.
- the device further includes:
- the special effect adding module is used to receive the user's special effect adding operation through the video playing interface, and add the special effect to be added to the user video in response to the special effect adding operation.
- the device further includes:
- the operation prompt module is used to provide an operation prompt option to the user, and the operation prompt option is used to provide the user with prompt information for the video shooting operation.
- the user video shooting module may be further configured to:
- the recording mode includes at least one of the fast recording mode, the slow recording mode, and the standard recording mode.
- the device further includes:
- a co-production video processing module, configured to receive the user's video saving operation and/or video publishing operation after the user video and the original video are synthesized to obtain the co-production video, and, in response to the video saving operation, save the co-production video locally, and/or, in response to the video publishing operation, publish the co-production video.
- the co-production video processing module may be configured to:
- the device further includes:
- the push message sending module is used to generate a push message for the co-production video, and send the push message to the associated users of the user and/or the associated users of the original video.
- the co-production video generation module may be configured to:
- according to the recording start time of the user video, determining a first video in the original video that corresponds to the recording start time and is consistent with the duration of the user video;
- the co-production video is obtained.
- the present disclosure provides an electronic device including:
- a memory for storing computer operation instructions;
- a processor configured to execute the method as shown in any embodiment of the first aspect of the present disclosure by invoking the computer operation instructions.
- the present disclosure provides a computer-readable storage medium that stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a computer to implement the method as shown in any embodiment of the first aspect of the present disclosure.
- A user only needs to perform operations related to user video shooting on the video playback interface; the user video can then be recorded on the basis of the original video through the video shooting window, finally obtaining a co-production video that synchronizes the user video with the original video.
- The operation process is simple and fast.
- The user video can reflect the user's feelings about, comments on, or viewing reactions to the original video, so the user can conveniently show his or her views on the original video. This better meets users' actual application needs, improves the interactive experience, and makes video shooting more fun.
- FIG. 1 is a schematic flowchart of a video shooting method provided by an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of a video playback interface provided by an embodiment of the present disclosure
- FIG. 3A is a schematic diagram of a volume adjustment method provided by an embodiment of the present disclosure.
- FIG. 3B is a schematic diagram of yet another volume adjustment method provided by an embodiment of the present disclosure.
- FIG. 4A is a schematic diagram of another video playback interface provided by an embodiment of the present disclosure.
- FIG. 4B is a schematic diagram of another video playback interface provided by an embodiment of the present disclosure.
- FIG. 5 is a schematic diagram of yet another video playback interface provided by an embodiment of the present disclosure.
- FIG. 6 is a schematic structural diagram of a video shooting device provided by an embodiment of the present disclosure.
- FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
- An embodiment of the present disclosure provides a video shooting method. As shown in FIG. 1, the method may include:
- Step S110: Receive the user's video shooting trigger operation through the video playback interface of the original video.
- The video shooting trigger operation indicates that the user wants to shoot a user video based on the original video in the video playback interface; that is, it is the operation used to trigger the start of user video shooting. The specific form of the operation can be configured as needed; for example, it can be a trigger action at an operating position on the interface of the client application.
- The video playback interface is used for interaction between the terminal device and the user; through this interface, the user's related operations on the original video can be received, for example, sharing the original video or performing co-shooting.
- The operation can be triggered through a relevant trigger identifier of the client, where the specific form of the trigger identifier can be configured according to actual needs; for example, it can be a designated trigger button or input box on the client interface, or the user's voice instruction. Specifically, it may be a virtual button labeled "co-shoot" displayed on the application interface of the client, and the user's click on that button is the video shooting trigger operation.
- Step S120: In response to the video shooting trigger operation, the video shooting window is superimposed and displayed on the video playback interface.
- The video shooting window may be superimposed and displayed at a preset position on the video playback interface, where the preset position may be a display position pre-configured based on the size of the display interface of the user's terminal device, for example the upper-left corner of the video playback interface. The size of the video shooting window is smaller than the display window of the original video, so that the video shooting window blocks only part of the original video's content.
- The initial size of the video shooting window can be configured according to actual needs. It can be chosen to minimize occlusion of the original video picture during playback, so as not to affect the user's viewing of the original video, while remaining large enough not to affect the user's view of the recorded picture.
- The size of the video shooting window displayed on the terminal device can also be adjusted automatically according to the size of the terminal device's display interface.
- For example, the video shooting window may occupy one tenth or one fifth of the terminal device's display interface.
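The sizing rule above can be sketched as follows. This is an illustrative model, not part of the disclosure: the one-fifth area ratio, the square-root scaling (which preserves the display's aspect ratio), and the upper-left margin are all assumed values.

```python
def shooting_window_rect(display_w, display_h, fraction=0.2, margin=16):
    """Compute an upper-left window rect whose area is `fraction`
    of the display area, keeping the display's aspect ratio."""
    scale = fraction ** 0.5  # applied to both width and height
    w = int(display_w * scale)
    h = int(display_h * scale)
    return margin, margin, w, h  # x, y, width, height

# e.g. a 1080x1920 portrait display
x, y, w, h = shooting_window_rect(1080, 1920)
```

A real client would recompute this rect on device-rotation or window-resize events, which is why the disclosure ties the window size to the terminal's display interface.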
- Step S130: Receive the user's video shooting operation through the video playback interface.
- The video playback interface includes a relevant trigger identifier for triggering the video shooting operation, such as a designated trigger button or input box; it can also be the user's voice instruction. Specifically, it can be a virtual button labeled "shoot" displayed on the client's application interface, and the user's click on that button is the video shooting operation. The video shooting operation can trigger the shooting function of the user's terminal device to capture the content to be shot, such as the user himself or herself.
- Step S140: In response to the video shooting operation, shoot the user video, play the original video at the same time, and display the user video through the video shooting window.
- In order to make the comment content in the user video correspond to the content in the original video, the user video can be recorded synchronously while the original video is playing; that is, when the video shooting operation is received, shooting of the user video starts and the original video is played synchronously.
- This realizes simultaneous recording of the user video while the original video plays, so that during recording the user can synchronously record thoughts or comments based on the video content being played in the original video, further improving the user's interactive experience.
- If the original video is in the playback state before the user's video shooting operation is received through the video playback interface, the original video may be paused automatically when the video shooting operation is received, or paused by the user. When the video shooting operation is received, the paused original video can be resumed, the user video is shot, the original video is played at the same time, and the user video is displayed through the video shooting window.
- The user video in the embodiments of the present disclosure may be a video that includes the user, that is, a recording of the user; it may also be a video of another scene recorded by the user after adjustment as needed.
- Step S150: Synthesize the user video and the original video to obtain a co-production video.
- The method of synthesizing the user video and the original video can be configured according to actual needs: the user video can be combined with the original video during shooting of the user video, or the two can be synthesized after shooting of the user video is completed. The resulting co-production video includes the content of both the original video and the user video.
- Through the co-production video, a viewer can watch the user video while watching the original video; for example, the user video may be the user's reaction video to the original video.
- The original video may be a video that has not yet been co-shot, or a co-production video obtained from an earlier co-shot.
- A user only needs to perform operations related to user video shooting on the video playback interface; the user video can then be recorded on the basis of the original video through the video shooting window, finally obtaining a co-production video that synchronizes the user video with the original video.
- The operation process is simple and fast.
- The user video can reflect the user's feelings about, comments on, or viewing reactions to the original video, so the user can conveniently show his or her views on the original video. This better meets users' actual application needs, improves the interactive experience, and makes video shooting more fun.
- FIG. 2 shows a schematic diagram of a video playback interface of the original video of the client application in the terminal device.
- The virtual button labeled "co-shoot" displayed on the interface is the video shooting trigger button, and the user's click on this button is the video shooting trigger operation. After the video shooting trigger operation is received on the video playback interface, the video shooting window A is superimposed and displayed on the video playback interface B.
- The "shoot" virtual button is the shooting trigger button, and the user's click on this button is the video shooting operation. After the operation is received, the user video is shot through the video shooting window A, realizing the function of shooting the user video based on the original video.
- synthesizing the user video and the original video to obtain a co-production video may include:
- Since a video includes both video information and audio information, in the process of synthesizing the user video and the original video, the respective video information and audio information can be synthesized separately, and the synthesized video information and audio information are then combined.
- The above synthesis method facilitates the processing of the information.
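The separate synthesis of the two streams can be sketched as follows. This is an illustrative Python model, not the patent's implementation: frames and audio samples are simple stand-ins for real pixel buffers and PCM data, and the picture-in-picture overlay is represented as a tuple of (original frame, user frame).

```python
def compose_frames(original_frames, user_frames):
    # Overlay each user frame onto the matching original frame
    # (picture-in-picture), modeled here as a pairing.
    return [(o, u) for o, u in zip(original_frames, user_frames)]

def mix_audio(original_audio, user_audio, orig_vol=1.0, user_vol=1.0):
    # Mix the two audio tracks sample by sample, with a separate
    # volume factor for each track (see the volume adjustment below).
    return [o * orig_vol + u * user_vol
            for o, u in zip(original_audio, user_audio)]

def synthesize(original, user, orig_vol=1.0, user_vol=1.0):
    # Synthesize video information and audio information separately,
    # then combine them into one co-production video.
    return {
        "frames": compose_frames(original["frames"], user["frames"]),
        "audio": mix_audio(original["audio"], user["audio"],
                           orig_vol, user_vol),
    }
```

Keeping the two pipelines separate is what allows per-track volume adjustment and special effects to be applied to one stream without touching the other.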
- the method may further include:
- the volume of the audio information of the original video and/or the audio information of the user video is adjusted accordingly.
- the volume of the original video and / or user video can also be adjusted to meet the video playback requirements of different users.
- the volume of the captured user video may be a pre-configured volume, for example, a volume consistent with the volume in the original video, or a preset volume.
- the volume adjustment virtual button in the video playback interface can be used to adjust the volume.
- The volume adjustment virtual button can be a volume adjustment progress bar, and the original video volume and the user video volume can each be given a corresponding progress bar, for example volume adjustment progress bar a and volume adjustment progress bar b: the volume of the original video is adjusted through progress bar a, and the volume of the user video through progress bar b.
- Different volume adjustment progress bars can be distinguished by a logo.
- FIG. 3A shows a schematic diagram of the volume adjustment progress bar in the volume adjustment interface.
- The user can adjust the volume by sliding the volume adjustment progress bar: sliding toward the top of the interface (the "+" direction) increases the volume, and sliding toward the bottom of the interface (the "-" direction) decreases it.
- The volume adjustment interface and the video playback interface may be the same display interface or different display interfaces. If they are different, the volume adjustment interface can be displayed when the user's volume adjustment operation is received through the video playback interface, and the volume can be adjusted through that interface. Optionally, so as not to affect recording and playback of the video, the volume adjustment interface can be superimposed on the video playback interface, for example displayed at an edge of the video playback interface.
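As a rough illustration of the progress-bar mapping, a slider position can be converted into a volume factor as below. The linear mapping and the bottom-is-mute convention are assumptions; the disclosure does not prescribe a scale.

```python
def slider_to_volume(position, bar_height):
    """Map a slider position (pixels from the bottom of the bar,
    the "-" end) to a volume factor in [0.0, 1.0]."""
    position = max(0, min(position, bar_height))  # clamp to the bar
    return position / bar_height
```

The resulting factor would feed directly into the per-track mixing step of the synthesis, one factor per progress bar.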
- the method may further include:
- the special effect to be added is added to the user video.
- the user can also be provided with the function of adding special effects to the user video, that is, adding the selected special effects to be added to the user video through the user's special effect adding operation.
- the special effect to be added may be added before the user's video shooting, may also be added during the user's video shooting, or may be added after the user's video shooting is completed.
- the disclosure does not limit the timing of adding the special effect.
- the function of adding special effects to user videos can be achieved in at least one of the following ways:
- First: the special-effect adding function can be realized through a "special effects" virtual button displayed on the video playback interface.
- Second: special effects can be added by sliding the display interface of the user video.
- For example, the user can slide the display interface of the user video left or right with an operating object, such as a finger, to add the corresponding special effect to the user video.
- the method may further include:
- An operation prompt option is provided to the user; the operation prompt option is used to provide the user with prompt information for the co-shooting operation when the user's operation on the option is received.
- The operation prompt option can be displayed on the video playback interface as a "Help" virtual button.
- the user can get the corresponding prompt information by clicking the button.
- The prompt information can be presented to the user in the form of an operation preview, or through text that tells the user how to operate; the present disclosure does not limit the presentation form of the prompt information.
- The method may further include: before shooting the user video in response to the video shooting operation, playing the original video, and displaying the user video through the video shooting window, receiving the user's recording-mode selection operation, and determining the recording mode of the user video in response to that operation, where the recording mode includes at least one of a fast recording mode, a slow recording mode, and a standard recording mode.
- Before shooting, the user can be provided with a function to select the recording mode of the user video; that is, the user video is recorded according to the mode chosen through the user's recording selection operation.
- The recording rate of the fast recording mode, the recording rate of the standard recording mode, and the recording rate of the slow recording mode decrease in that order; by selecting different recording modes, variable-speed recording of the user video can be realized, further improving the user's interactive experience.
- "Fast", "slow", and "standard" among the above recording modes are relative terms: different recording modes have different recording rates, and the rate of each mode can be configured as needed.
- The fast recording mode refers to the recording mode with a first recording rate, the slow recording mode refers to the recording mode with a second recording rate, and the standard recording mode refers to the recording mode with a third recording rate, where the first rate is greater than the third rate, and the third rate is greater than the second rate.
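The rate ordering above can be sketched as follows. The numeric factors are illustrative assumptions; the disclosure only requires first rate > third rate > second rate. Whether a given rate is then played back as sped-up or slowed-down footage is an implementation choice (here, frames captured at a higher rate play as slow motion at the standard frame rate).

```python
# Assumed illustrative rates: fast (first) > standard (third) > slow (second)
RECORDING_RATES = {"fast": 2.0, "standard": 1.0, "slow": 0.5}

def frames_captured(mode, seconds, base_fps=30):
    """Number of frames captured when recording `seconds` of wall-clock
    time in the given mode, relative to a standard base frame rate."""
    return int(seconds * base_fps * RECORDING_RATES[mode])
```

With these assumed factors, two seconds recorded in fast mode yields twice the frames of standard mode, i.e. half-speed playback of the same moment.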
- the method may further include: after synthesizing the user video and the original video to obtain a co-production video,
- in response to the video saving operation, the co-production video is saved locally, and/or, in response to the video publishing operation, the co-production video is published.
- The user can be provided with the function of publishing and/or saving the co-production video; that is, through the user's video publishing operation, the co-production video is published to a designated video platform so as to share it, or, through the user's video saving operation, the co-production video is saved locally for the user to view.
- The video publishing operation can be obtained through the user clicking a "publish" virtual button.
- publishing the co-produced video in response to the video publishing operation may include:
- In order to meet the user's privacy requirements for the co-production video, the user is provided with the function of configuring the co-production video's viewing permission; that is, the viewing permission is obtained through the user's video publishing operation, and the co-production video is published according to that permission.
- The co-production video can only be viewed by the users covered by the viewing permission; users outside the viewing permission cannot view it.
- The viewing permission can be pre-configured, or configured when the video is published, and the current co-production video is released according to the configured privacy permission.
- The permission to view the co-production video includes at least one of: anyone, friends, and only yourself.
- "Anyone" indicates that the co-production video can be viewed by anyone.
- "Friends" means that only the user's friends can view the co-production video.
- "Only yourself" means that only the user himself or herself can view the co-production video.
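The three permission levels can be sketched as a simple check. The function and level names are hypothetical; the disclosure only names the three levels and does not fix an API.

```python
VIEW_ANYONE, VIEW_FRIENDS, VIEW_SELF = "anyone", "friends", "only_self"

def can_view(permission, viewer, publisher, publisher_friends):
    """Decide whether `viewer` may watch a co-production video
    published by `publisher` under the given permission level."""
    if viewer == publisher:          # the publisher can always view
        return True
    if permission == VIEW_ANYONE:
        return True
    if permission == VIEW_FRIENDS:
        return viewer in publisher_friends
    return False                     # "only yourself"
```

A platform would evaluate this check both when listing videos in a feed and when serving the video stream itself.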
- the method may further include:
- A push message for the co-production video may be generated; through the push message, the associated users of the user and/or the associated users of the original video can be informed of the release of the co-production video in time.
- An associated user of the user is a user who has an association relationship with the user; the scope of the association can be configured as needed, and may include, but is not limited to, the people the user follows or the people following the user.
- The associated users of the original video are users associated with the publisher of the original video; for example, they may include, but are not limited to, the publisher of the original video and the people involved in the original video.
- For example, if the original video is itself a co-production video whose publisher is user a, and the author of the original original video before that co-production is user b, then the associated users of the original video may include user a and user b.
- For example, if user a follows user b and posts a co-production video in which user a mentions user b (i.e., user a @ user b, where "user a @ user b" can be displayed in the title of the co-production video), a push message for the co-production video is sent to user b, so that user b knows that user a has posted the video.
- If user a does not follow user b and does not mention user b, user b does not receive the push message of the co-production video.
- If user a does not follow user b but mentions user b (@ user b) when posting the co-production video, user b can still receive the push message for the co-production video.
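The recipient logic in the examples above can be sketched as follows. The rule set is an assumption drawn from those examples: the publisher's followers, the original video's associated users, and any @-mentioned users are notified; the disclosure leaves the exact scope configurable.

```python
def push_recipients(publisher_followers, original_associated, mentioned):
    """Users who receive a push message when a co-production video is
    published. An @-mentioned user is notified even without a follow
    relationship, per the examples in the disclosure."""
    return set(publisher_followers) | set(original_associated) | set(mentioned)
```

Deduplicating via a set matters here: the original video's publisher is often also a follower or a mentioned user, and should be pushed to only once.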
- synthesizing the user video and the original video to obtain a co-production video may include:
- according to the recording start time of the user video, determining a first video in the original video that corresponds to the recording start time and is consistent with the duration of the user video;
- the co-production video is obtained.
- The duration of the user video recorded by the user may be the same as or different from the duration of the original video, and the user may select the recording start time of the user video based on the content of the original video, so that during playback of the co-production video the content of the user video corresponds to the content of the original video, further enhancing the user's interactive experience.
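The selection of the "first video" (the original-video segment matching the user video's start time and duration) can be sketched as a frame-range slice. Frames stand in for real decoded frames, and the constant frame rate is an assumption for illustration.

```python
def first_video_segment(original_frames, fps, start_seconds, user_duration_seconds):
    """Return the segment of the original video that starts at the user
    video's recording start time and has the same duration, assuming a
    constant frame rate `fps`."""
    start = int(start_seconds * fps)
    count = int(user_duration_seconds * fps)
    return original_frames[start:start + count]

# e.g. a 10-second original at 30 fps; the user started recording at 2 s
# and recorded for 3 s:
segment = first_video_segment(list(range(300)), 30, 2, 3)
```

Only this segment, rather than the whole original video, is then synthesized with the user video, keeping the two time-aligned in the co-production.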
- the method may further include: hiding virtual buttons of corresponding functions in the video playback interface.
- Virtual identifiers representing different functions can be displayed on the video playback interface, for example: a virtual button a indicating the start of shooting, a progress bar b indicating shooting progress, a virtual button c for adding special effects, and a virtual button d for publishing the co-production video; schematic diagrams of such a video playback interface are shown in FIGS. 4A and 4B.
- Virtual identifiers other than the virtual button a and the progress bar b in the video playback interface of FIG. 4A can be hidden, for example the virtual buttons c and d; the interface after hiding is shown in FIG. 4B. By hiding virtual identifiers, the video playback interface can be kept tidy.
- A virtual button for hiding function buttons can also be set in the interface, through which the user can choose which function buttons to hide or to display again. Specifically, when the user's operation on this button is received, the user can choose which virtual buttons to hide, or choose to restore previously hidden virtual buttons.
- The shape of the video shooting window is not limited; it may be a circle, a rectangle, or another shape, configured according to actual needs.
- the method may further include:
- in response to the window movement operation, the video shooting window is adjusted to the corresponding area on the video playback interface.
- The user can adjust the position of the video shooting window to meet different users' needs for the window's position on the video playback interface.
- the position of the video shooting window can be adjusted by any of the following user window movement operations:
- First: the user can adjust the position of the video shooting window by dragging it with an operating object, such as a finger. When the operating object touches the video shooting window and drags it, the position of the window is being adjusted; when the operating object leaves the video shooting window, that is, when dragging stops, the position where dragging stopped is the corresponding area of the video shooting window on the video playback interface.
- Second: the user can adjust the position of the video shooting window through a position progress bar displayed in the video playback interface; by sliding the position progress bar, the user determines the corresponding area of the video shooting window on the video playback interface.
- Adjusting the video shooting window to the corresponding area on the video playback interface in response to the window movement operation may include:
- in response to the window movement operation, displaying a pre-configured window adjustment boundary line on the video playback interface, where the window adjustment boundary line is used to define the display area of the video shooting window;
- The video playback interface has a pre-configured window adjustment boundary line, which is used to limit the display area of the video shooting window on the video playback interface.
- The window adjustment boundary line may be pre-configured based on the display interface sizes of various terminal devices, so that the content captured in the video shooting window can be adapted for display on the display interface of any terminal device. Based on this configuration, when the user's window movement operation is received, the pre-configured window adjustment boundary line is displayed on the video playback interface at the same time, serving as a reference while the user adjusts the video shooting window.
- The window adjustment boundary line for the video shooting window can be configured according to requirements.
- The window adjustment boundary line can be a guide line located at a pre-configured position in the video playback interface; the pre-configured position can include at least one of the top, bottom, left, and right of the video playback interface, and guide lines at different positions define the adjustment range of the corresponding side of the video shooting window in the video playback interface.
- in a video playback interface as shown in FIG. 5, taking the two guide lines at the top and left of the video playback interface as the window adjustment lines (i.e., window adjustment boundary lines a and b) as an example, the user can trigger the window adjustment operation by dragging the video shooting window.
- the window adjustment boundary lines a and b are displayed in the video playback interface.
- the window adjustment boundary lines a and b are two lines that are perpendicular to each other.
- the window adjustment boundary lines a and b can be marked in an eye-catching color, such as red, or marked with a distinctive shape, such as a zigzag.
- the user drags the video shooting window f from position A to position B; based on position B, the video shooting window f is adjusted to the position corresponding to position B above the video playback interface, thereby realizing the adjustment of the video shooting window.
- determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line may include:
- determining a first display area of the video shooting window according to the window movement operation;
- if the distance between the first display area and each window adjustment boundary line is not less than a set distance, determining the first display area to be the current display area;
- if the distance between the first display area and any window adjustment boundary line is less than the set distance, determining a second display area to be the current display area;
- wherein the second display area is the area obtained by translating the first display area to that window adjustment boundary line, and at least one position point of the second display area coincides with the boundary line.
- within the adjustment range defined by the window adjustment boundary line, the video shooting window has relatively better display positions, for example, display areas near the boundary line. During adjustment, apart from users who have specific requirements for where the window sits above the video playback interface, users cannot accurately hit such a relatively better position, so the distance between the window's display area during adjustment and the window adjustment boundary line can be used to help the user move the video shooting window to a relatively better position above the video playback interface.
- specifically, when the distance between the first display area and every window adjustment boundary line is not less than the set distance, the user likely wants to adjust the video shooting window to a non-edge position of the video playback interface, so the first display area is taken as the area the window is about to be adjusted to, i.e., the current display area.
- when the distance between the first display area and any window adjustment boundary line is less than the set distance, the user likely wants to adjust the video shooting window to the edge area of the video playback interface, so as to cover the playback interface of the original video as little as possible; in this case, the current display area can be determined to be the second display area at that boundary line.
- if the video shooting window is rectangular and the window adjustment boundary line is a straight line, the first display area is rectangular, and the area obtained by translating the first display area to any window adjustment boundary line is the area in which one border of the first display area coincides with that boundary line; if the video shooting window is circular and the window adjustment boundary line is a straight line, the first display area is circular, and the translated area is the area in which at least one position point of the first display area coincides with that boundary line. It can be understood that, when an adjustment boundary line exists, the display area of the shooting window cannot exceed the boundary line no matter how the window is adjusted.
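The threshold-based edge snapping described above can be sketched as follows. This is a minimal illustration, not part of the disclosure: the function name `snap_window`, the pixel values, and the coordinate convention (origin at the top-left, y growing downward) are all assumptions.

```python
# Snap a rectangular shooting window to the pre-configured boundary
# lines when it is dragged within a set distance of them; otherwise
# keep the dragged ("first") display area unchanged.

SNAP_DISTANCE = 20   # the "set distance", in pixels (assumed value)
TOP_BOUNDARY_Y = 40  # window adjustment boundary line a (assumed)
LEFT_BOUNDARY_X = 30 # window adjustment boundary line b (assumed)

def snap_window(x, y, w, h):
    """Return the current display area for a window whose first display
    area after a drag is (x, y, w, h)."""
    if abs(y - TOP_BOUNDARY_Y) < SNAP_DISTANCE:
        y = TOP_BOUNDARY_Y   # second display area: edge coincides with line a
    if abs(x - LEFT_BOUNDARY_X) < SNAP_DISTANCE:
        x = LEFT_BOUNDARY_X  # second display area: edge coincides with line b
    # The display area may never cross a boundary line.
    x = max(x, LEFT_BOUNDARY_X)
    y = max(y, TOP_BOUNDARY_Y)
    return (x, y, w, h)
```

With these assumed values, a window dragged to within 20 px of boundary line a or b is translated so that its edge coincides with the line; a window dragged elsewhere keeps its dragged position.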
- the method may further include:
- receiving the user's window size adjustment operation for the video shooting window; and in response to the window size adjustment operation, adjusting the video shooting window to the corresponding display size.
- the size of the video shooting window can be set according to a pre-configured default value, or adjusted by the user based on actual needs.
- the video playback interface includes a trigger identifier for triggering the window size adjustment operation, such as a designated trigger button or input box; it can also be triggered by the user's voice. Specifically, it can be a virtual "window" button displayed on the video playback interface, through which the user triggers the window size adjustment operation to adjust the size of the video shooting window.
- an embodiment of the present disclosure also provides a video shooting device 20.
- the device 20 may include:
- the trigger operation receiving module 210 is used to receive the user's video shooting trigger operation through the video playback interface of the original video;
- the shooting window display module 220 is used to superimpose and display the video shooting window on the video playback interface in response to the video shooting trigger operation;
- the shooting operation receiving module 230 is used to receive the user's video shooting operation through the video playback interface
- the user video shooting module 240 is used for shooting user video in response to the video shooting operation, playing the original video at the same time, and displaying the user video through the video shooting window;
- the co-production video generation module 250 is used to synthesize user video and original video to obtain co-production video.
- the co-production video generation module 250 may be configured to: synthesize the audio information of the user video and the audio information of the original video to obtain audio information of the co-production video; synthesize the video information of the user video and the video information of the original video to obtain video information of the co-production video; and synthesize the audio information of the co-production video and the video information of the co-production video to obtain the co-production video.
- the device may further include:
- the volume adjustment module is used to receive the user's volume adjustment operation through the video playback interface, and adjust the volume of the audio information of the original video and / or the audio information of the user video in response to the volume adjustment operation.
- the device may further include:
- the special effect adding module is used to receive the user's special effect adding operation for the special effect to be added through the video playing interface, and add the special effect to be added to the user video in response to the special effect adding operation.
- the device may further include:
- the operation prompt module is used to provide the user with an operation prompt option, and the operation prompt option is used to provide the user with prompt information of the cooperative video shooting operation when the user's operation is received.
- the user video shooting module 240 may also be configured to: before shooting the user video in response to the video shooting operation, receive, through the video playback interface, the user's recording selection operation for the recording mode of the user video and, in response to the recording selection operation, determine the recording mode of the user video.
- the recording mode may include at least one of a fast recording mode, a slow recording mode, and a standard recording mode.
- the device may further include:
- the co-production video processing module is used to receive the user's video saving operation and/or video publishing operation after the user video and the original video are synthesized into the co-production video, and, in response to the video saving operation, save the co-production video locally, and/or, in response to the video publishing operation, publish the co-production video.
- the co-production video processing module may be configured to: in response to the video publishing operation, obtain the user's co-production video viewing permission, and publish the co-production video according to the viewing permission.
- the device may further include:
- the push message sending module is used to generate a push message for the co-production video, and send the push message to the associated users of the user and/or the associated users of the original video.
- if the duration of the user video is less than that of the original video, the co-production video generation module 250 may be configured to:
- determine, according to the recording start time of the user video, a first video in the original video that corresponds to the recording start time and has the same duration as the user video;
- synthesize the user video and the first video into a second video; and obtain the co-production video according to the second video and the portion of the original video other than the first video.
- the video shooting device of the embodiments of the present disclosure can perform the video shooting method provided by the embodiments of the present disclosure, and its implementation principle is similar.
- the actions performed by the modules of the video shooting device in the embodiments of the present disclosure correspond to the steps of the video shooting method in the embodiments of the present disclosure; for a detailed functional description of each module of the video shooting device, reference may be made to the description of the corresponding video shooting method above, which is not repeated here.
- the present disclosure provides an electronic device including a processor and a memory, wherein the memory is used to store computer operation instructions, and the processor is used to call the computer operation instructions to execute the method shown in any embodiment of the video shooting method of the present disclosure.
- the present disclosure provides a computer-readable storage medium that stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a computer to implement the method shown in any embodiment of the video shooting method of the present disclosure.
- FIG. 7 shows a schematic structural diagram of an electronic device 30 (for example, a terminal device or a server that implements the method shown in FIG. 1) suitable for implementing the embodiment of the present disclosure.
- the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (for example, car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 7 is only an example, and should not bring any limitation to the functions and use scope of the embodiments of the present disclosure.
- the electronic device 30 may include a processing device (for example, a central processing unit, a graphics processor, etc.) 301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303.
- the RAM 303 also stores various programs and data necessary for the operation of the electronic device 30.
- the processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304.
- an input/output (I/O) interface 305 is also connected to the bus 304.
- the following devices can be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 307 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 308 including, for example, a magnetic tape, hard disk, etc.; and a communication device 309.
- the communication device 309 may allow the electronic device 30 to perform wireless or wired communication with other devices to exchange data.
- although FIG. 7 shows an electronic device 30 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may be implemented or provided instead.
- the process described above with reference to the flowchart may be implemented as a computer software program.
- embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 309, or from the storage device 308, or from the ROM 302.
- when the computer program is executed by the processing device 301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
- the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
- the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
- the program code contained on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: electric wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
- the computer-readable medium may be included in the electronic device; or it may exist alone without being assembled into the electronic device.
- the computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device is caused to: obtain at least two Internet protocol addresses; send, to a node evaluation device, a node evaluation request including the at least two Internet protocol addresses, where the node evaluation device selects and returns an Internet protocol address from the at least two Internet protocol addresses; and receive the Internet protocol address returned by the node evaluation device; wherein the obtained Internet protocol address indicates an edge node in a content distribution network.
- alternatively, the computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: receive a node evaluation request including at least two Internet protocol addresses; select an Internet protocol address from the at least two Internet protocol addresses; and return the selected Internet protocol address; wherein the received Internet protocol address indicates an edge node in a content distribution network.
- the computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
- the programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowchart or block diagrams may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logical function.
- in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures; for example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
Abstract
Embodiments of the present disclosure provide a video shooting method, apparatus, electronic device, and computer-readable storage medium. The method includes: receiving a user's video shooting trigger operation through a video playback interface of an original video; in response to the video shooting trigger operation, superimposing a video shooting window on the video playback interface; receiving the user's video shooting operation through the video playback interface; in response to the video shooting operation, shooting a user video while playing the original video, and displaying the user video through the video shooting window; and synthesizing the user video and the original video to obtain a co-production video. According to the embodiments of the present disclosure, the user only needs to perform the operations related to user-video shooting on the video playback interface to obtain a co-production video, and the operation process is simple and fast. Since the user video can reflect the user's impressions of the original video, the user can conveniently express those impressions, which improves the user's interactive experience.
Description
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to the patent application No. 201811223743X filed with the China National Intellectual Property Administration on October 19, 2018, the entire content of which is incorporated herein by reference for all purposes.
The present disclosure relates to the field of Internet technology, and in particular, to a video shooting method, apparatus, electronic device, and computer-readable storage medium.
In a video interaction platform, users can express their thoughts on, or viewing experience of, other videos in the platform in the form of a video, thereby interacting with those videos.
In the prior art, when a user wants to shoot an interactive video based on a certain video in a video platform, the user usually has to first download and save the original video from the platform, then record the interactive video with some professional video recording tool, and finally upload the finished interactive video to the video platform. The whole shooting process of the interactive video cannot be completed through the video platform alone, which degrades the user's interactive experience.
It can be seen that the existing way of recording interactive videos is complicated and provides a poor interactive experience, and cannot meet users' practical needs.
SUMMARY
In a first aspect, the present disclosure provides a video shooting method, the method including:
receiving a user's video shooting trigger operation through a video playback interface of an original video;
in response to the video shooting trigger operation, superimposing a video shooting window on the video playback interface;
receiving the user's video shooting operation through the video playback interface;
in response to the video shooting operation, shooting a user video while playing the original video, and displaying the user video through the video shooting window; and
synthesizing the user video and the original video to obtain a co-production video.
In an embodiment of the present disclosure, synthesizing the user video and the original video to obtain the co-production video includes:
synthesizing audio information of the user video and audio information of the original video to obtain audio information of the co-production video;
synthesizing video information of the user video and video information of the original video to obtain video information of the co-production video; and
synthesizing the audio information of the co-production video and the video information of the co-production video to obtain the co-production video.
In an embodiment of the present disclosure, the method further includes:
receiving the user's volume adjustment operation through the video playback interface; and
in response to the volume adjustment operation, adjusting the volume of the audio information of the original video and/or the audio information of the user video accordingly.
In an embodiment of the present disclosure, the method further includes:
receiving, through the video playback interface, the user's special-effect adding operation for a special effect to be added; and
in response to the special-effect adding operation, adding the special effect to be added to the user video.
In an embodiment of the present disclosure, the method further includes:
providing the user with an operation prompt option, the operation prompt option being used to provide the user with prompt information on the co-production video shooting operation when the user's operation is received.
In an embodiment of the present disclosure, the method further includes: before shooting the user video in response to the video shooting operation while playing the original video and displaying the user video through the video shooting window,
receiving, through the video playback interface, the user's recording selection operation for the recording mode of the user video, the recording mode including at least one of a fast recording mode, a slow recording mode, and a standard recording mode; and
in response to the recording selection operation, determining the recording mode of the user video.
In an embodiment of the present disclosure, the method further includes: after synthesizing the user video and the original video to obtain the co-production video,
receiving the user's video saving operation and/or video publishing operation; and
in response to the video saving operation, saving the co-production video locally, and/or, in response to the video publishing operation, publishing the co-production video.
In an embodiment of the present disclosure, publishing the co-production video in response to the video publishing operation includes:
in response to the video publishing operation, obtaining the user's co-production video viewing permission; and
publishing the co-production video according to the co-production video viewing permission.
In an embodiment of the present disclosure, the method further includes: when publishing the co-production video in response to the video publishing operation,
generating a push message for the co-production video; and
sending the push message to associated users of the user and/or associated users of the original video.
In an embodiment of the present disclosure, if the duration of the user video is less than the duration of the original video, synthesizing the user video and the original video to obtain the co-production video includes:
determining, according to a recording start time of the user video, a first video in the original video that corresponds to the recording start time and has the same duration as the user video;
synthesizing the user video and the first video into a second video; and
obtaining the co-production video according to the second video and the portion of the original video other than the first video.
In a second aspect, the present disclosure provides a video shooting apparatus, the apparatus including:
a trigger operation receiving module, configured to receive a user's video shooting trigger operation through a video playback interface of an original video;
a shooting window display module, configured to superimpose a video shooting window on the video playback interface in response to the video shooting trigger operation;
a shooting operation receiving module, configured to receive the user's video shooting operation through the video playback interface;
a user video shooting module, configured to shoot a user video in response to the video shooting operation while playing the original video, and display the user video through the video shooting window; and
a co-production video generation module, configured to synthesize the user video and the original video to obtain a co-production video.
In an embodiment of the present disclosure, the co-production video generation module may be configured to:
synthesize audio information of the user video and audio information of the original video to obtain audio information of the co-production video;
synthesize video information of the user video and video information of the original video to obtain video information of the co-production video; and
synthesize the audio information of the co-production video and the video information of the co-production video to obtain the co-production video.
In an embodiment of the present disclosure, the apparatus further includes:
a volume adjustment module, configured to receive the user's volume adjustment operation through the video playback interface and, in response to the volume adjustment operation, adjust the volume of the audio information of the original video and/or the audio information of the user video accordingly.
In an embodiment of the present disclosure, the apparatus further includes:
a special effect adding module, configured to receive the user's special-effect adding operation through the video playback interface and, in response to the special-effect adding operation, add the special effect to be added to the user video.
In an embodiment of the present disclosure, the apparatus further includes:
an operation prompt module, configured to provide the user with an operation prompt option, the operation prompt option being used to provide the user with prompt information on the shooting operation of the user video.
In an embodiment of the present disclosure, the user video shooting module may further be configured to:
before shooting the user video in response to the video shooting operation while playing the original video and displaying the user video through the video shooting window, receive, through the video playback interface, the user's recording selection operation for the recording mode of the user video and, in response to the recording selection operation, determine the recording mode of the user video, the recording mode including at least one of a fast recording mode, a slow recording mode, and a standard recording mode.
In an embodiment of the present disclosure, the apparatus further includes:
a co-production video processing module, configured to receive the user's video saving operation and/or video publishing operation after the user video and the original video are synthesized into the co-production video, and, in response to the video saving operation, save the co-production video locally, and/or, in response to the video publishing operation, publish the co-production video.
In an embodiment of the present disclosure, the co-production video processing module may be configured to:
in response to the video publishing operation, obtain the user's co-production video viewing permission; and
publish the co-production video according to the co-production video viewing permission.
In an embodiment of the present disclosure, the apparatus further includes:
a push message sending module, configured to generate a push message for the co-production video and send the push message to associated users of the user and/or associated users of the original video.
In an embodiment of the present disclosure, if the duration of the user video is less than the duration of the original video, the co-production video generation module may be configured to:
determine, according to a recording start time of the user video, a first video in the original video that corresponds to the recording start time and has the same duration as the user video;
synthesize the user video and the first video into a second video; and
obtain the co-production video according to the second video and the portion of the original video other than the first video.
In a third aspect, the present disclosure provides an electronic device, including:
a memory, configured to store computer operation instructions; and
a processor, configured to execute, by calling the computer operation instructions, the method shown in any embodiment of the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a computer to implement the method shown in any embodiment of the first aspect of the present disclosure.
According to the embodiments of the present disclosure, the user only needs to perform the operations related to user-video shooting on the video playback interface to record, through the video shooting window, a user video on the basis of the original video and finally obtain a co-production video synthesized from the user video and the original video; the operation process is simple and fast. Since the user video can reflect the user's impressions of, comments on, or viewing reactions to the original video, the user can conveniently show their views or reactions to the original video, which better meets users' practical needs, improves the user's interactive experience, and makes video shooting more interesting.
In order to more clearly explain the technical solutions in the embodiments of the present disclosure, the drawings needed in the description of the embodiments of the present disclosure are briefly introduced below.
FIG. 1 is a schematic flowchart of a video shooting method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a video playback interface provided by an embodiment of the present disclosure;
FIG. 3A is a schematic diagram of a volume adjustment manner provided by an embodiment of the present disclosure;
FIG. 3B is a schematic diagram of another volume adjustment manner provided by an embodiment of the present disclosure;
FIG. 4A is a schematic diagram of another video playback interface provided by an embodiment of the present disclosure;
FIG. 4B is a schematic diagram of yet another video playback interface provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of still another video playback interface provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a video shooting apparatus provided by an embodiment of the present disclosure; and
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Embodiments of the present disclosure are described in detail below; examples of the embodiments are shown in the drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are only used to explain the technical solutions of the present disclosure, and should not be construed as limiting the present disclosure.
Those skilled in the art will understand that, unless specifically stated otherwise, the singular forms "a", "an", and "the" used herein may also include the plural forms. It should be further understood that the word "include" used in the specification of the present disclosure refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or an intermediate element may be present. In addition, "connected" or "coupled" as used herein may include a wireless connection or wireless coupling. The word "and/or" as used herein includes all or any unit and all combinations of one or more of the associated listed items.
The technical solutions of the present disclosure, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present disclosure are described below with reference to the drawings.
An embodiment of the present disclosure provides a video shooting method. As shown in FIG. 1, the method may include:
Step S110: receiving a user's video shooting trigger operation through a video playback interface of an original video.
The video shooting trigger operation indicates that the user wants to shoot a user video based on the original video in the video playback interface, i.e., it is the action by which the user triggers the start of user-video shooting; its specific form can be configured as needed, for example, it may be a trigger action at an operation position on the interface of a client application. The video playback interface is used for interaction between the terminal device and the user; through this interface, the user's operations related to the original video, such as sharing the original video or making a co-production, can be received.
In practical applications, the operation can be triggered through a relevant trigger identifier of the client, where the specific form of the trigger identifier can be configured according to actual needs; for example, it can be a designated trigger button or input box on the client interface, or the user's voice instruction. Specifically, it can be a virtual "co-production" button displayed on the application interface of the client, and the user's click on this button is the user's video shooting trigger operation.
Step S120: in response to the video shooting trigger operation, superimposing a video shooting window on the video playback interface.
In practical applications, the video shooting window may be superimposed at a preset position on the video playback interface; the preset position may be a display position pre-configured based on the size of the display interface of the user's terminal device, for example, the upper-left corner of the video playback interface. The video shooting window is smaller than the display window of the original video, so that it blocks only part of the original video's picture. The initial size of the video shooting window can be configured according to actual needs, preferably so that, when the original video is playing, it blocks the original video's picture as little as possible and does not affect the user's viewing of the original video, and, when the user video is being shot, it does not affect the user's viewing of the recorded picture. For example, the size of the video shooting window displayed on the terminal device can be automatically adjusted according to the size of the display interface of the user's terminal device, for example, to one tenth or one fifth of the display interface.
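As a rough illustration of such a pre-configured initial size and position, the window geometry could be derived from the screen size as below. This is only a sketch: the function name, the one-fifth fraction, the margin, and the 3:4 portrait aspect ratio are assumed values, not specified by the disclosure.

```python
def initial_window_rect(screen_w, screen_h, fraction=0.2, margin=16):
    """Place the shooting window near the upper-left corner of the
    playback interface, sized to `fraction` of the display width
    (one fifth here), with an assumed 3:4 portrait aspect ratio."""
    w = int(screen_w * fraction)
    # Keep the window inside the display even on short screens.
    h = min(int(w * 4 / 3), screen_h - 2 * margin)
    return (margin, margin, w, h)
```

Computing the rectangle from the actual display size is what lets the same window configuration adapt to terminal devices with different screens.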
Step S130: receiving the user's video shooting operation through the video playback interface.
Similarly, the video playback interface includes a relevant trigger identifier for triggering the video shooting operation, such as a designated trigger button or input box, or the user's voice instruction. Specifically, it can be a virtual "shoot" button displayed on the application interface of the client; the user's click on this button is the user's video shooting operation, which triggers the shooting function of the user's terminal device to capture the content to be shot, for example, the user themselves.
Step S140: in response to the video shooting operation, shooting a user video while playing the original video, and displaying the user video through the video shooting window.
In order to make the commentary content in the user video correspond to the content in the original video, the user video can be recorded synchronously while the original video plays; that is, when the video shooting operation is received, shooting of the user video starts and the original video is played synchronously. In this way, the original video plays while the user video is recorded in sync, so that, during recording, the user can synchronously record impressions or comments based on the video content being played in the original video, which further improves the user's interactive experience.
In practical applications, if the original video was playing before the user's video shooting operation is received through its video playback interface, and the original video is paused automatically or by the user, then when the video shooting operation is received, the paused original video can be resumed: the user video is shot while the original video plays, and the user video is displayed through the video shooting window.
It should be noted that the user video in the embodiments of the present disclosure is preferably a video that includes the user, i.e., a video of the user is recorded. Of course, it can also be a video of another scene recorded after the user makes adjustments as needed.
Step S150: synthesizing the user video and the original video to obtain a co-production video.
The way the user video and the original video are synthesized can be configured according to actual needs; the user video may be synthesized with the original video during shooting, or after the user video has been shot. The resulting co-production video includes the content of the original video and the content of the user video; through the co-production video, the user video can be watched at the same time as the original video. When the user video is the user's reaction video, watching the co-production video reveals the user's viewing reaction to, or thoughts on, the original video.
In practical applications, the original video may be a video that has never been co-produced, or a co-production video obtained from a previous co-production.
According to the embodiments of the present disclosure, the user only needs to perform the operations related to user-video shooting on the video playback interface to record, through the video shooting window, a user video on the basis of the original video and finally obtain a co-production video synthesized from the user video and the original video; the operation process is simple and fast. Since the user video can reflect the user's impressions of, comments on, or viewing reactions to the original video, the user can conveniently show their views or reactions, which better meets users' practical needs, improves the user's interactive experience, and makes video shooting more interesting.
As an example, FIG. 2 shows a schematic diagram of a video playback interface of an original video in a client application on a terminal device. The virtual "co-production" button displayed in this interface is the video shooting trigger button, and the user's click on it is the video shooting trigger operation. After the video shooting trigger operation is received in the video playback interface, a video shooting window A is superimposed on the video playback interface B; the virtual "shoot" button shown in this interface is the shooting trigger button, and the user's click on it is the video shooting operation. After this operation is received, the user video is shot through the video shooting window A, realizing the function of shooting a user video on the basis of the original video.
It should be noted that, in practical applications, the specific form of the video playback interface and the form of each button can be configured according to actual needs; the above example is only one optional implementation.
In an embodiment of the present disclosure, synthesizing the user video and the original video to obtain the co-production video may include:
synthesizing the audio information of the user video and the audio information of the original video to obtain the audio information of the co-production video;
synthesizing the video information of the user video and the video information of the original video to obtain the video information of the co-production video; and
synthesizing the audio information of the co-production video and the video information of the co-production video to obtain the co-production video.
A video includes two parts, video information and audio information. In the process of synthesizing the user video and the original video, the respective video information and audio information can be synthesized separately, and the synthesized video information and audio information are finally combined into the co-production video; this synthesis approach facilitates information processing.
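The separate audio and video synthesis described above can be sketched as follows. This is an illustrative sketch only: the sample/frame representation, the additive audio mix, and the function names are assumptions, not the patented implementation.

```python
def mix_audio(orig_samples, user_samples):
    """Synthesize the two audio tracks sample-by-sample into the
    co-production audio (simple additive mix, clipped to [-1.0, 1.0];
    the shorter track is padded with silence)."""
    n = max(len(orig_samples), len(user_samples))
    orig = orig_samples + [0.0] * (n - len(orig_samples))
    user = user_samples + [0.0] * (n - len(user_samples))
    return [max(-1.0, min(1.0, a + b)) for a, b in zip(orig, user)]

def compose(user_video, original_video, window_rect):
    """Synthesize the two recordings into a co-production video: the
    audio tracks are mixed first, the frame streams are paired so each
    user frame is overlaid on the original frame inside the shooting
    window rectangle, and the two results are combined at the end."""
    audio = mix_audio(original_video["audio"], user_video["audio"])
    frames = [{"base": o, "overlay": u, "rect": window_rect}
              for o, u in zip(original_video["frames"], user_video["frames"])]
    return {"audio": audio, "frames": frames}
```

In a real implementation the per-frame overlay and the final mux would be delegated to a media framework; the point of the sketch is only the order of operations: audio with audio, video with video, then the combined result.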
In an embodiment of the present disclosure, the method may further include:
receiving the user's volume adjustment operation through the video playback interface; and
in response to the volume adjustment operation, adjusting the volume of the audio information of the original video and/or the audio information of the user video accordingly.
To further improve the user's interactive experience, the volume of the original video and/or the user video can be adjusted to meet different users' playback needs. In practical applications, if the user does not need to adjust the volume of the original video and the user video, the volume of the shot user video can be a pre-configured volume, for example, a volume consistent with that of the original video, or a preset value.
In practical applications, the volume can be adjusted through volume adjustment virtual buttons in the video playback interface, which may be volume adjustment progress bars. Corresponding to the volume of the original video and that of the user video, two volume adjustment progress bars can be configured, for example, volume adjustment progress bar a and volume adjustment progress bar b: the volume of the original video is adjusted through progress bar a, the volume of the user video through progress bar b, and the different progress bars can be distinguished by different identifiers.
As an example, FIG. 3A shows a schematic diagram of a volume adjustment progress bar in a volume adjustment interface. The user can adjust the volume by sliding the progress bar: sliding toward the top of the interface (the "+" direction) turns the volume up, and sliding toward the bottom (the "-" direction) turns it down. According to actual needs, the volume adjustment progress bar can also be set horizontally, as shown in FIG. 3B: sliding toward the left of the interface (the "-" direction) turns the volume down, and sliding toward the right (the "+" direction) turns it up.
It should be noted that, in practical applications, the volume adjustment interface and the video playback interface may be the same display interface or different display interfaces. If they are different, the volume adjustment interface can be displayed when the user's volume adjustment operation is received through the video playback interface, and the volume is adjusted through it; optionally, in order not to affect the recording and playback of videos, the volume adjustment interface can be superimposed on the video playback interface, for example, at an edge position above the video playback interface.
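The per-track volume adjustment can be modeled as a gain applied to each track before they are mixed. A minimal sketch, assuming gains in [0, 1] mapped from the positions of the two progress bars; the function and parameter names are assumptions:

```python
def adjusted_mix(orig, user, orig_gain=1.0, user_gain=1.0):
    """Mix the original-video and user-video audio tracks after applying
    the gains chosen via the two volume progress bars: 0.0 mutes a
    track, 1.0 keeps its original volume. The result is clipped to
    the valid sample range [-1.0, 1.0]."""
    n = max(len(orig), len(user))
    orig = orig + [0.0] * (n - len(orig))
    user = user + [0.0] * (n - len(user))
    return [max(-1.0, min(1.0, o * orig_gain + u * user_gain))
            for o, u in zip(orig, user)]
```

Setting `user_gain=0.0`, for example, keeps only the original video's sound in the co-production audio.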
In an embodiment of the present disclosure, the method may further include:
receiving, through the video playback interface, the user's special-effect adding operation for a special effect to be added; and
in response to the special-effect adding operation, adding the special effect to be added to the user video.
To meet the shooting needs of different users, a function of adding special effects to the user video can also be provided; that is, through the user's special-effect adding operation, the selected special effect is added to the user video. The special effect can be added before, during, or after the shooting of the user video; the present disclosure does not limit the timing of adding special effects.
In practical applications, the function of adding special effects to the user video can be realized in at least one of the following ways:
First: the special-effect adding function can be realized through a virtual "effects" button displayed on the video playback interface; the user's click on this button is the user's special-effect adding operation for the special effect to be added, and the special effect corresponding to the button is added to the user video.
Second: special effects can be added by sliding on the display interface of the user video; by sliding left or right on the display interface of the user video with an operating object, such as a finger, the user can add the corresponding special effect to the user video.
In an embodiment of the present disclosure, the method may further include:
providing the user with an operation prompt option, the operation prompt option being used to provide the user with prompt information on the co-production video shooting operation when the user's operation is received.
If the user is not clear about how to use the co-production function, i.e., shooting a user video on the basis of the original video and obtaining a co-production video, the user can be prompted through the operation prompt option. In practical applications, the operation prompt option can be displayed in the video playback interface as a virtual "help" button; by clicking the button, the user obtains the corresponding prompt information, which may be presented to the user as an operation preview or as text explaining how to operate; the present disclosure does not limit the form of the prompt information.
In an embodiment of the present disclosure, the method may further include: before shooting the user video in response to the video shooting operation while playing the original video and displaying the user video through the video shooting window,
receiving, through the video playback interface, the user's recording selection operation for the recording mode of the user video, the recording mode including at least one of a fast recording mode, a slow recording mode, and a standard recording mode; and
in response to the recording selection operation, determining the recording mode of the user video.
To meet the needs of different users, before the user video is shot, a function of selecting the recording mode of the user video can be provided; that is, through the user's recording selection operation, the user video is recorded in the selected recording mode. The recording rates of the fast recording mode, the standard recording mode, and the slow recording mode decrease in that order; by selecting different recording modes, variable-speed recording of the user video can be realized, which further improves the user's interactive experience.
It can be understood that "fast", "slow", and "standard" in the above recording modes are relative terms; different recording modes have different recording rates, and the recording rate of each mode can be configured as needed. For example, the fast recording mode is a mode whose recording rate is a first rate, the slow recording mode is a mode whose recording rate is a second rate, and the standard recording mode is a mode whose recording rate is a third rate, where the first rate is greater than the third rate, and the third rate is greater than the second rate.
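The three recording modes only fix an ordering of rates (first rate > third rate > second rate); a sketch with assumed concrete values, since the disclosure leaves the actual rates configurable:

```python
# Assumed concrete rates; the description only requires that the fast
# mode's (first) rate exceed the standard mode's (third) rate, which
# in turn exceeds the slow mode's (second) rate.
RECORDING_RATES = {"fast": 2.0, "standard": 1.0, "slow": 0.5}

def recording_rate(mode):
    """Return the recording rate configured for the selected mode."""
    return RECORDING_RATES[mode]
```

Any mapping that preserves the ordering would satisfy the description; the 2.0/1.0/0.5 values are purely illustrative.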
In an embodiment of the present disclosure, the method may further include: after synthesizing the user video and the original video to obtain the co-production video,
receiving the user's video saving operation and/or video publishing operation; and
in response to the video saving operation, saving the co-production video locally, and/or, in response to the video publishing operation, publishing the co-production video.
After the co-production video is obtained, the user can be provided with a function of publishing and/or saving it; that is, through the user's video publishing operation, the co-production video is published to a designated video platform to share it, or, through the user's video saving operation, the co-production video is saved locally for the user to view. In practical applications, after the co-production video is obtained, the client can jump to a video publishing interface and receive the user's video publishing operation through it, or the operation can be received directly through the video playback interface; the video publishing operation can be obtained through the user's click on a virtual "publish" button.
In an embodiment of the present disclosure, publishing the co-production video in response to the video publishing operation may include:
in response to the video publishing operation, obtaining the user's co-production video viewing permission; and
publishing the co-production video according to the co-production video viewing permission.
To meet the user's privacy needs for the co-production video, the user is provided with a function of configuring the viewing permission of the co-production video; that is, through the user's video publishing operation, the user's co-production video viewing permission is obtained, and the co-production video is published according to it. With the viewing permission, the co-production video can only be viewed by the users covered by the permission; users not covered by it cannot view the co-production video. In practical applications, the viewing permission may be pre-configured and apply to every co-production video to be published, or it may be configured when the current co-production video is published, in which case the current co-production video is published according to the configured privacy permission.
The co-production video viewing permission includes at least one of "anyone", "friends", and "only me": "anyone" means anyone can view the co-production video, "friends" means only the user's friends can view it, and "only me" means only the user themselves can view it.
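The viewing-permission rule can be expressed as a simple check; a sketch, assuming the permission strings `anyone`, `friends`, and `only_me`, and assuming the publisher can always view their own video:

```python
def can_view(viewer, owner, permission, owner_friends):
    """Decide whether `viewer` may watch a co-production video published
    by `owner` under the given viewing permission."""
    if viewer == owner:
        return True          # the publisher can always view their own video
    if permission == "anyone":
        return True
    if permission == "friends":
        return viewer in owner_friends
    return False             # "only_me": nobody else may view it
```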
In an embodiment of the present disclosure, the method may further include:
generating a push message for the co-production video; and
sending the push message to associated users of the user and/or associated users of the original video.
To inform the people related to the co-production video, a push message for the co-production video can be generated when it is published; through the push message, the user's associated users and/or the original video's associated users can learn of the publication of the co-production video in time. The user's associated users are users who have an association relationship with the user; the scope of the association can be configured as needed, and may include, but is not limited to, the people the user follows or the people who follow the user. The associated users of the original video are users who have an association relationship with the publisher of the original video, and may include, but are not limited to, the publisher of the original video and the people involved in the original video. For example, if the original video is itself a once co-produced video published by user a, and the author of the initial original video before that co-production is user b, the associated users of the original video may include user a and user b.
In practical applications, when publishing the co-production video, relevant attention information can be added to the title of the co-production video to indicate which user the publication is intended to reach; the recipient of the push message can be expressed in the form of @some-user.
In one example, user a follows user b, user a publishes a co-production video, and user a associates user b, i.e., user a @user b, where "user a @user b" can be displayed in the title of the co-production video; the push message for the co-production video is then sent to user b, so that user b learns that user a has published a video.
In another example, although user a follows user b and user a publishes a co-production video, user a does not @user b, so user b does not receive the push message for the co-production video.
In yet another example, user a does not follow user b, user a publishes a co-production video and @-mentions user b when publishing it, and user b can then receive the push message for the co-production video.
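The three examples above can be summarized in a small helper that collects push-message recipients. This is a sketch of one possible reading: per the examples, an @-mention in the title is what decides whether a followed/following user is notified, while the original video's associated users (its publisher, and the author of the initial original when the original is itself a co-production) are assumed to always be notified. The function and parameter names are assumptions.

```python
def push_recipients(mentioned_users, original_publisher, original_author):
    """Collect users to whom the push message for a newly published
    co-production video is sent: the users @-mentioned in the title
    (following the publisher is not, by itself, enough) plus the
    original video's associated users, when they exist."""
    recipients = set(mentioned_users)
    recipients.update(u for u in (original_publisher, original_author)
                      if u is not None)
    return recipients
```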
In an embodiment of the present disclosure, if the duration of the user video is less than the duration of the original video, synthesizing the user video and the original video to obtain the co-production video may include:
determining, according to the recording start time of the user video, a first video in the original video that corresponds to the recording start time and has the same duration as the user video;
synthesizing the user video and the first video into a second video; and
obtaining the co-production video according to the second video and the portion of the original video other than the first video.
Based on the content played in the original video, the duration of the user video recorded by the user may or may not be the same as that of the original video; the user can choose the recording start time of the user video based on the content of the original video, so that, when the co-production video is played, the content of the user video corresponds to the content of the original video, which further improves the user's interactive experience.
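The segmentation described above (the first video, the second video, and the remainder of the original) can be sketched on the timeline as follows; the tuple representation and the function name are assumptions made for illustration:

```python
def compose_timeline(original_duration, record_start, user_duration):
    """Split the original video's timeline when the user video is
    shorter: the "first video" segment matching the user recording is
    composited into the "second video", and the rest of the original
    plays unchanged on either side of it."""
    assert user_duration < original_duration
    end = record_start + user_duration
    segments = []
    if record_start > 0:
        segments.append(("original", 0, record_start))
    segments.append(("composited", record_start, end))
    if end < original_duration:
        segments.append(("original", end, original_duration))
    return segments
```

For a 60 s original with a 20 s user video recorded from second 10, this yields an untouched lead-in, a composited middle segment, and an untouched tail.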
In an embodiment of the present disclosure, the method may further include: hiding the virtual buttons of corresponding functions in the video playback interface.
In practical applications, virtual identifiers representing different functions can be displayed in the video playback interface, for example, a virtual button a representing the start of shooting, a progress bar b representing the shooting progress, a virtual button c representing adding special effects, and a virtual button d representing publishing the co-production video, as in the schematic video playback interfaces shown in FIGS. 4A and 4B. To further improve the user's interactive experience, the virtual identifiers in the video playback interface of FIG. 4A other than the virtual button a and the progress bar b can be hidden, for example, virtual buttons c and d; the interface after hiding is shown in FIG. 4B. Hiding virtual identifiers keeps the video playback interface tidy.
In practical applications, a virtual button for hiding function buttons can also be set in the interface; through this button, the user can set which function buttons to hide or restore. Specifically, when the user's operation on this button is received, the user can use it to choose which virtual buttons to hide, or choose to restore the display of previously hidden virtual buttons.
In an embodiment of the present disclosure, the shape of the video shooting window is not limited and includes a circle, a rectangle, and other shapes, which can be configured according to actual needs.
In an embodiment of the present disclosure, the method may further include:
receiving the user's window movement operation for the video shooting window; and
in response to the window movement operation, adjusting the video shooting window to the corresponding area above the video playback interface.
The user can adjust the position of the video shooting window to meet different users' needs for the window's position above the video playback interface. In practical applications, the position of the video shooting window can be adjusted through either of the following user window movement operations:
First: the user can adjust the position of the video shooting window by dragging it with an operating object, such as a finger. When the operating object touches the video shooting window and drags it, the position of the window is being adjusted; when the operating object leaves the video shooting window, i.e., stops dragging it, the position where the drag stops is the corresponding area of the video shooting window above the video playback interface.
Second: the user can adjust the position of the video shooting window through a position progress bar displayed in the video playback interface; by sliding the position progress bar, the user determines the corresponding area of the video shooting window above the video playback interface.
In an embodiment of the present disclosure, adjusting the video shooting window to the corresponding area above the video playback interface in response to the window movement operation may include:
in response to the window movement operation, displaying a pre-configured window adjustment boundary line on the video playback interface, the window adjustment boundary line being used to define the display area of the video shooting window;
determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line; and
adjusting the video shooting window to the corresponding position above the video playback interface according to the current display area.
The video playback interface has a pre-configured window adjustment boundary line, which is used to limit the display area of the video shooting window above the video playback interface. In practical applications, the window adjustment boundary line can be pre-configured based on the display interface sizes of various terminal devices, so that the content captured in the video shooting window can be adapted for display on the display interface of any terminal device. Based on this configuration, when the user's window movement operation is received, the pre-configured window adjustment boundary line is displayed on the video playback interface at the same time, giving the user a reference while adjusting the video shooting window.
In practical applications, the window adjustment boundary line can be configured according to requirements. For example, it can be a guide line located at a pre-configured position in the video playback interface, where the pre-configured position may include at least one of the top, bottom, left, and right of the video playback interface; guide lines at different positions define the adjustment range of the video shooting window at the corresponding position of the video playback interface.
In the video playback interface shown in FIG. 5, taking the two guide lines at the top and left of the video playback interface as the window adjustment lines (i.e., window adjustment boundary lines a and b) as an example, the user can trigger the window adjustment operation by dragging the video shooting window; at the same time, the window adjustment boundary lines a and b, two mutually perpendicular lines, are displayed in the video playback interface. In practical applications, to make them easy to recognize, the window adjustment boundary lines a and b can be marked in an eye-catching color, such as red, or with a distinctive shape, such as a zigzag. The user drags the video shooting window f from position A to position B, and, based on position B, the video shooting window f is adjusted to the position corresponding to position B above the video playback interface, realizing the adjustment of the video shooting window.
In an embodiment of the present disclosure, determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line may include:
determining a first display area of the video shooting window according to the window movement operation;
if the distance between the first display area and each window adjustment boundary line is not less than a set distance, determining the first display area to be the current display area; and
if the distance between the first display area and any window adjustment boundary line is less than the set distance, determining a second display area to be the current display area;
where the second display area is the area obtained by translating the first display area to that window adjustment boundary line, and at least one position point of the second display area coincides with the boundary line.
Within the adjustment range defined by the window adjustment boundary line, the video shooting window has relatively better display positions, for example, display areas near the boundary line. During adjustment, apart from users who have specific requirements for where the window sits above the video playback interface, users cannot accurately hit such a relatively better position, so the distance between the window's display area during adjustment and the window adjustment boundary line can be used to help the user move the video shooting window to a relatively better position above the video playback interface.
Specifically, in the process of adjusting the video shooting window, when the distance between the first display area of the video shooting window and each window adjustment boundary line is not less than the set distance, the user likely wants to adjust the video shooting window to a non-edge position of the video playback interface, so the first display area is taken as the area the window is about to be adjusted to, i.e., the current display area. When the distance between the first display area and any window adjustment boundary line is less than the set distance, the user likely wants to adjust the video shooting window to the edge area of the video playback interface, so as to cover the playback interface of the original video as little as possible; in this case, the current display area can be determined to be the second display area at that boundary line.
In practical applications, if the video shooting window is rectangular and the window adjustment boundary line is a straight line, the first display area is rectangular, and the area obtained by translating the first display area to any boundary line is the area in which one border of the first display area coincides with that boundary line; if the video shooting window is circular and the boundary line is a straight line, the first display area is circular, and the translated area is the area in which at least one position point of the first display area coincides with that boundary line. It can be understood that, when an adjustment boundary line exists, the display area of the shooting window cannot exceed the boundary line no matter how the window is adjusted.
In an embodiment of the present disclosure, the method may further include:
receiving the user's window size adjustment operation for the video shooting window; and
in response to the window size adjustment operation, adjusting the video shooting window to the corresponding display size.
The size of the video shooting window can be set according to a pre-configured default value, or adjusted by the user based on actual needs. In practical applications, the video playback interface includes a trigger identifier for triggering the window size adjustment operation, such as a designated trigger button or input box, or the user's voice. Specifically, it can be a virtual "window" button displayed on the video playback interface, through which the user triggers the window size adjustment operation to adjust the size of the video shooting window.
Based on the same principle as the method shown in FIG. 1, an embodiment of the present disclosure further provides a video shooting apparatus 20. As shown in FIG. 6, the apparatus 20 may include:
a trigger operation receiving module 210, configured to receive a user's video shooting trigger operation through a video playback interface of an original video;
a shooting window display module 220, configured to superimpose a video shooting window on the video playback interface in response to the video shooting trigger operation;
a shooting operation receiving module 230, configured to receive the user's video shooting operation through the video playback interface;
a user video shooting module 240, configured to shoot a user video in response to the video shooting operation while playing the original video, and display the user video through the video shooting window; and
a co-production video generation module 250, configured to synthesize the user video and the original video to obtain a co-production video.
In an embodiment of the present disclosure, the co-production video generation module 250 may be configured to:
synthesize the audio information of the user video and the audio information of the original video to obtain the audio information of the co-production video;
synthesize the video information of the user video and the video information of the original video to obtain the video information of the co-production video; and
synthesize the audio information of the co-production video and the video information of the co-production video to obtain the co-production video.
In an embodiment of the present disclosure, the apparatus may further include:
a volume adjustment module, configured to receive the user's volume adjustment operation through the video playback interface and, in response to the volume adjustment operation, adjust the volume of the audio information of the original video and/or the audio information of the user video accordingly.
In an embodiment of the present disclosure, the apparatus may further include:
a special effect adding module, configured to receive, through the video playback interface, the user's special-effect adding operation for a special effect to be added and, in response to the special-effect adding operation, add the special effect to be added to the user video.
In an embodiment of the present disclosure, the apparatus may further include:
an operation prompt module, configured to provide the user with an operation prompt option, the operation prompt option being used to provide the user with prompt information on the co-production video shooting operation when the user's operation is received.
In an embodiment of the present disclosure, the user video shooting module 240 may further be configured to:
before shooting the user video in response to the video shooting operation while playing the original video and displaying the user video through the video shooting window, receive, through the video playback interface, the user's recording selection operation for the recording mode of the user video and, in response to the recording selection operation, determine the recording mode of the user video, where the recording mode may include at least one of a fast recording mode, a slow recording mode, and a standard recording mode.
In an embodiment of the present disclosure, the apparatus may further include:
a co-production video processing module, configured to receive the user's video saving operation and/or video publishing operation after the user video and the original video are synthesized into the co-production video, and, in response to the video saving operation, save the co-production video locally, and/or, in response to the video publishing operation, publish the co-production video.
In an embodiment of the present disclosure, the co-production video processing module may be configured to:
in response to the video publishing operation, obtain the user's co-production video viewing permission; and
publish the co-production video according to the co-production video viewing permission.
In an embodiment of the present disclosure, the apparatus may further include:
a push message sending module, configured to generate a push message for the co-production video and send the push message to associated users of the user and/or associated users of the original video.
In an embodiment of the present disclosure, if the duration of the user video is less than the duration of the original video, the co-production video generation module 250 may be configured to:
determine, according to the recording start time of the user video, a first video in the original video that corresponds to the recording start time and has the same duration as the user video;
synthesize the user video and the first video into a second video; and
obtain the co-production video according to the second video and the portion of the original video other than the first video.
The video shooting apparatus of the embodiments of the present disclosure can perform the video shooting method provided by the embodiments of the present disclosure, and its implementation principle is similar; the actions performed by the modules of the video shooting apparatus in the embodiments of the present disclosure correspond to the steps of the video shooting method in the embodiments of the present disclosure. For a detailed functional description of each module of the video shooting apparatus, reference may be made to the description of the corresponding video shooting method above, which is not repeated here.
基于与本公开的实施例中的视频拍摄方法相同的原理,本公开提供了一种电子设备,该电子设备包括处理器和存储器,其中,存储器用于存储计算机操作指令;处理器用于通过调用该计算机操作指令,执行如本公开的视频拍摄方法中的任一实施例中所示的方法。
基于与本公开的实施例中的视频拍摄方法相同的原理,本公开提供了一种计算机可读存储介质,该存储介质存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由计算机加载并执行以实现如本公开的视频拍摄方法中的任一实施例中所示的方法。
In an embodiment of the present disclosure, FIG. 7 shows a schematic structural diagram of an electronic device 30 (for example, a terminal device or a server implementing the method shown in FIG. 1) suitable for implementing the embodiments of the present disclosure. The electronic device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 7 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the electronic device 30 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 301, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage apparatus 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data required for the operation of the electronic device 30. The processing apparatus 301, the ROM 302, and the RAM 303 are connected to one another through a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following apparatuses may be connected to the I/O interface 305: an input apparatus 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 307 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 308 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 309. The communication apparatus 309 may allow the electronic device 30 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 7 shows the electronic device 30 with various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or provided. More or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 309, or installed from the storage apparatus 308, or installed from the ROM 302. When the computer program is executed by the processing apparatus 301, the above-described functions defined in the method of the embodiments of the present disclosure are performed.
It should be noted that the above-mentioned computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs, and when the above one or more programs are executed by the electronic device, the electronic device is caused to: acquire at least two Internet Protocol addresses; send a node evaluation request including the at least two Internet Protocol addresses to a node evaluation device, where the node evaluation device selects an Internet Protocol address from the at least two Internet Protocol addresses and returns it; and receive the Internet Protocol address returned by the node evaluation device, where the acquired Internet Protocol address indicates an edge node in a content delivery network.
Alternatively, the above computer-readable medium carries one or more programs, and when the above one or more programs are executed by the electronic device, the electronic device is caused to: receive a node evaluation request including at least two Internet Protocol addresses; select an Internet Protocol address from the at least two Internet Protocol addresses; and return the selected Internet Protocol address, where the received Internet Protocol address indicates an edge node in a content delivery network.
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, the above programming languages including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The above description is only a description of the preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Claims (13)
- A video shooting method, comprising: receiving a user's video shooting trigger operation through a video playback interface of an original video; in response to the video shooting trigger operation, superimposing and displaying a video shooting window on the video playback interface; receiving the user's video shooting operation through the video playback interface; in response to the video shooting operation, shooting a user video while playing the original video, and displaying the user video through the video shooting window; and synthesizing the user video and the original video to obtain a co-production video.
- The method according to claim 1, wherein synthesizing the user video and the original video to obtain the co-production video comprises: synthesizing the audio information of the user video and the audio information of the original video to obtain the audio information of the co-production video; synthesizing the video information of the user video and the video information of the original video to obtain the video information of the co-production video; and synthesizing the audio information of the co-production video and the video information of the co-production video to obtain the co-production video.
- The method according to claim 2, further comprising: receiving the user's volume adjustment operation through the video playback interface; and in response to the volume adjustment operation, adjusting the volume of the audio information of the original video and/or the audio information of the user video accordingly.
- The method according to any one of claims 1 to 3, further comprising: receiving, through the video playback interface, the user's special effect adding operation for a special effect to be added; and in response to the special effect adding operation, adding the special effect to be added to the user video.
- The method according to any one of claims 1 to 3, further comprising: providing the user with an operation prompt option, the operation prompt option being used to provide the user with prompt information on co-production video shooting operations upon receiving the user's operation.
- The method according to any one of claims 1 to 3, further comprising: before shooting the user video in response to the video shooting operation while playing the original video and displaying the user video through the video shooting window, receiving, through the video playback interface, the user's recording selection operation for a recording mode of the user video, wherein the recording mode comprises at least one of a fast recording mode, a slow recording mode, and a standard recording mode; and in response to the recording selection operation, determining the recording mode of the user video.
- The method according to any one of claims 1 to 3, further comprising: after synthesizing the user video and the original video to obtain the co-production video, receiving the user's video saving operation and/or video publishing operation; and in response to the video saving operation, saving the co-production video locally, and/or, in response to the video publishing operation, publishing the co-production video.
- The method according to claim 7, wherein publishing the co-production video in response to the video publishing operation comprises: in response to the video publishing operation, acquiring the user's co-production video viewing permission; and publishing the co-production video according to the co-production video viewing permission.
- The method according to any one of claims 1 to 3, further comprising: generating a push message of the co-production video; and sending the push message to associated users of the user and/or associated users of the original video.
- The method according to any one of claims 1 to 3, wherein, if the duration of the user video is shorter than the duration of the original video, synthesizing the user video and the original video to obtain the co-production video comprises: determining, according to the recording start moment of the user video, a first video in the original video that corresponds to the recording start moment and has the same duration as the user video; synthesizing the user video and the first video into a second video; and obtaining the co-production video according to the second video and the portion of the original video other than the first video.
- A video shooting apparatus, comprising: a trigger operation receiving module, configured to receive a user's video shooting trigger operation through a video playback interface of an original video; a shooting window display module, configured to, in response to the video shooting trigger operation, superimpose and display a video shooting window on the video playback interface; a shooting operation receiving module, configured to receive the user's video shooting operation through the video playback interface; a user video shooting module, configured to, in response to the video shooting operation, shoot a user video while playing the original video, and display the user video through the video shooting window; and a co-production video generation module, configured to synthesize the user video and the original video to obtain a co-production video.
- An electronic device, comprising: a memory for storing computer operation instructions; and a processor for executing, by invoking the computer operation instructions, the method according to any one of claims 1 to 10.
- A computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, wherein the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a computer to implement the method according to any one of claims 1 to 10.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811223743.XA CN108989691B (zh) | 2018-10-19 | 2018-10-19 | 视频拍摄方法、装置、电子设备及计算机可读存储介质 |
CN201811223743.X | 2018-10-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020077855A1 true WO2020077855A1 (zh) | 2020-04-23 |
Family
ID=64544498
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/124065 WO2020077855A1 (zh) | 2018-10-19 | 2018-12-26 | 视频拍摄方法、装置、电子设备及计算机可读存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108989691B (zh) |
WO (1) | WO2020077855A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113590076A (zh) * | 2021-07-12 | 2021-11-02 | 杭州网易云音乐科技有限公司 | 一种音频处理方法及装置 |
CN115720292A (zh) * | 2021-08-23 | 2023-02-28 | 北京字跳网络技术有限公司 | 视频录制方法、设备、存储介质及程序产品 |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108989691B (zh) * | 2018-10-19 | 2021-04-06 | 北京微播视界科技有限公司 | 视频拍摄方法、装置、电子设备及计算机可读存储介质 |
CN109547841B (zh) * | 2018-12-20 | 2020-02-07 | 北京微播视界科技有限公司 | 短视频数据的处理方法、装置及电子设备 |
CN109862412B (zh) * | 2019-03-14 | 2021-08-13 | 广州酷狗计算机科技有限公司 | 合拍视频的方法、装置及存储介质 |
CN110087143B (zh) * | 2019-04-26 | 2020-06-09 | 北京谦仁科技有限公司 | 视频处理方法和装置、电子设备及计算机可读存储介质 |
CN110209870B (zh) * | 2019-05-10 | 2021-11-09 | 杭州网易云音乐科技有限公司 | 音乐日志生成方法、装置、介质和计算设备 |
CN110225020A (zh) * | 2019-06-04 | 2019-09-10 | 杭州网易云音乐科技有限公司 | 音频传输方法、系统、电子设备以及计算机可读存储介质 |
CN110336968A (zh) * | 2019-07-17 | 2019-10-15 | 广州酷狗计算机科技有限公司 | 视频录制方法、装置、终端设备及存储介质 |
CN110602394A (zh) * | 2019-09-06 | 2019-12-20 | 北京达佳互联信息技术有限公司 | 一种视频拍摄方法、装置及电子设备 |
CN110784652A (zh) | 2019-11-15 | 2020-02-11 | 北京达佳互联信息技术有限公司 | 视频拍摄方法、装置、电子设备及存储介质 |
CN111629151B (zh) * | 2020-06-12 | 2023-01-24 | 北京字节跳动网络技术有限公司 | 视频合拍方法、装置、电子设备及计算机可读介质 |
CN111726536B (zh) * | 2020-07-03 | 2024-01-05 | 腾讯科技(深圳)有限公司 | 视频生成方法、装置、存储介质及计算机设备 |
CN112004108B (zh) * | 2020-08-26 | 2022-11-01 | 深圳创维-Rgb电子有限公司 | 一种视频直播录制处理方法、装置、智能终端及存储介质 |
CN113068053A (zh) * | 2021-03-15 | 2021-07-02 | 北京字跳网络技术有限公司 | 一种直播间内的交互方法、装置、设备及存储介质 |
CN113395588A (zh) * | 2021-06-23 | 2021-09-14 | 北京字跳网络技术有限公司 | 一种视频处理方法、装置、设备及存储介质 |
CN113473224B (zh) * | 2021-06-29 | 2023-05-23 | 北京达佳互联信息技术有限公司 | 视频处理方法、装置、电子设备及计算机可读存储介质 |
CN113542844A (zh) * | 2021-07-28 | 2021-10-22 | 北京优酷科技有限公司 | 视频数据处理方法、装置及存储介质 |
CN113783997B (zh) * | 2021-09-13 | 2022-08-23 | 北京字跳网络技术有限公司 | 一种视频发布方法、装置、电子设备及存储介质 |
CN115442519B (zh) * | 2022-08-08 | 2023-12-15 | 珠海普罗米修斯视觉技术有限公司 | 视频处理方法、装置及计算机可读存储介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104125412A (zh) * | 2014-06-16 | 2014-10-29 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
CN104967902A (zh) * | 2014-09-17 | 2015-10-07 | 腾讯科技(北京)有限公司 | 视频分享方法、装置及系统 |
CN108989691A (zh) * | 2018-10-19 | 2018-12-11 | 北京微播视界科技有限公司 | 视频拍摄方法、装置、电子设备及计算机可读存储介质 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102255830B1 (ko) * | 2014-02-05 | 2021-05-25 | 삼성전자주식회사 | 복수 개의 윈도우를 디스플레이하는 방법 및 장치 |
CN104994314B (zh) * | 2015-08-10 | 2019-04-09 | 优酷网络技术(北京)有限公司 | 在移动终端上通过手势控制画中画视频的方法及系统 |
CN106802759A (zh) * | 2016-12-21 | 2017-06-06 | 华为技术有限公司 | 视频播放的方法及终端设备 |
CN107920274B (zh) * | 2017-10-27 | 2020-08-04 | 优酷网络技术(北京)有限公司 | 一种视频处理方法、客户端及服务器 |
CN107944397A (zh) * | 2017-11-27 | 2018-04-20 | 腾讯音乐娱乐科技(深圳)有限公司 | 视频录制方法、装置及计算机可读存储介质 |
CN108566519B (zh) * | 2018-04-28 | 2022-04-12 | 腾讯科技(深圳)有限公司 | 视频制作方法、装置、终端和存储介质 |
- 2018-10-19: CN application CN201811223743.XA filed (granted as CN108989691B, status: Active)
- 2018-12-26: PCT application PCT/CN2018/124065 filed (published as WO2020077855A1, status: Application Filing)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104125412A (zh) * | 2014-06-16 | 2014-10-29 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
CN104967902A (zh) * | 2014-09-17 | 2015-10-07 | 腾讯科技(北京)有限公司 | 视频分享方法、装置及系统 |
CN108989691A (zh) * | 2018-10-19 | 2018-12-11 | 北京微播视界科技有限公司 | 视频拍摄方法、装置、电子设备及计算机可读存储介质 |
Non-Patent Citations (2)
Title |
---|
"How does Kuaishou co-produce video with others", BAIDU EXPERIENCE, 25 June 2018 (2018-06-25), pages 1 - 7, XP055704136, Retrieved from the Internet <URL:https://jingyan.baidu.com/article/ff42efa9fb7b16c19e2202f0.html> * |
ANONYMOUS: "Introduction of Douyin Short Video Release Co-production Video Method", PCONLINE, 10 July 2018 (2018-07-10), pages 1 - 3, XP055704133, Retrieved from the Internet <URL:https://pcedu.pconline.com.cn/1145/11450490.html> * |
Also Published As
Publication number | Publication date |
---|---|
CN108989691A (zh) | 2018-12-11 |
CN108989691B (zh) | 2021-04-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18937188 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.08.2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18937188 Country of ref document: EP Kind code of ref document: A1 |