WO2020077856A1 - Video shooting method, apparatus, electronic device, and computer-readable storage medium - Google Patents

Video shooting method, apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2020077856A1
WO2020077856A1 (PCT/CN2018/124066)
Authority
WO
WIPO (PCT)
Prior art keywords
video
user
window
shooting
display area
Prior art date
Application number
PCT/CN2018/124066
Other languages
English (en)
French (fr)
Inventor
陈海东
郝一鹏
王海婷
林俊杰
Original Assignee
北京微播视界科技有限公司 (Beijing Microlive Vision Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京微播视界科技有限公司 (Beijing Microlive Vision Technology Co., Ltd.)
Priority to GB2017755.6A priority Critical patent/GB2590545B/en
Priority to US16/980,213 priority patent/US11895426B2/en
Priority to JP2021510503A priority patent/JP7139515B2/ja
Publication of WO2020077856A1 publication Critical patent/WO2020077856A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Definitions

  • the present disclosure relates to the field of Internet technology, and in particular to a video shooting method, apparatus, electronic device, and computer-readable storage medium.
  • users can express their thoughts on, or viewing experience of, other videos on the platform in the form of videos, thereby interacting with those videos.
  • the present disclosure provides a video shooting method, the method includes:
  • the video shooting window is superimposed and displayed on the video playback interface
  • the user video is captured and displayed through the video shooting window.
  • the method further includes:
  • the video shooting window is adjusted to the corresponding area on the video playback interface.
  • adjusting the video shooting window to the corresponding area on the video playback interface in response to the window movement operation includes:
  • determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line includes:
  • according to the window movement operation, determining the first display area of the video shooting window;
  • when the distance between the first display area and each window adjustment boundary line is not less than a set distance, the first display area is determined to be the current display area;
  • when the distance between the first display area and any window adjustment boundary line is less than the set distance, the second display area is determined to be the current display area;
  • the second display area is the area obtained by translating the first display area to that window adjustment boundary line, with at least one position point of the second display area coinciding with the boundary line.
  • taking a user video in response to a video shooting operation and displaying the user video through the video shooting window includes:
  • the user video is shot, the original video is simultaneously played, and the user video is displayed through the video shooting window.
  • the method further includes:
  • the video shooting window is adjusted to the corresponding display size.
  • the method further includes:
  • the special effect to be added is added to the user video.
  • the method further includes, before shooting the user video in response to the video shooting operation and displaying the user video through the video shooting window:
  • receiving the user's recording selection operation, where the recording method includes at least one of a fast recording method, a slow recording method, and a standard recording method;
  • determining the recording method of the user video according to the recording selection operation.
  • the method further includes: synthesizing the user video and the original video to obtain a co-production video.
  • the method further includes: receiving the user's volume adjustment operation through the video playback interface, and in response, adjusting the volume of the audio information of the user video and/or the audio information of the original video accordingly.
  • the method further includes:
  • an operation prompt option is provided to the user; when the user's operation on this option is received, it provides the user with prompt information about the co-shooting operation.
  • the present disclosure provides a video shooting device including:
  • a trigger operation receiving module, used to receive the user's video shooting trigger operation through the video playback interface of the original video;
  • the shooting window display module is used to superimpose and display the video shooting window on the video playback interface in response to the video shooting trigger operation;
  • the shooting operation receiving module is used to receive the user's video shooting operation through the video playback interface
  • the user video shooting module is used to shoot the user video in response to the video shooting operation and display the user video through the video shooting window.
  • the device further includes:
  • the window position adjustment module is configured to receive a user's window movement operation for the video shooting window, and in response to the window movement operation, adjust the video shooting window to a corresponding area on the video playback interface.
  • the window position adjustment module may be configured to: determine the current display area of the video shooting window according to the window movement operation and a pre-configured window adjustment boundary line, and adjust the video shooting window to the current display area.
  • the window position adjustment module may be configured as:
  • according to the window movement operation, determine the first display area of the video shooting window;
  • when the distance between the first display area and each window adjustment boundary line is not less than a set distance, the first display area is determined to be the current display area;
  • when the distance between the first display area and any window adjustment boundary line is less than the set distance, the second display area is determined to be the current display area;
  • the second display area is the area obtained by translating the first display area to that window adjustment boundary line, with at least one position point of the second display area coinciding with the boundary line.
  • the user video shooting module may be configured as:
  • the user video is shot, the original video is simultaneously played, and the user video is displayed through the video shooting window.
  • the device further includes:
  • the window size adjustment module is used to receive the user's window size adjustment operation for the video shooting window, and in response to the window size adjustment operation, adjust the video shooting window to the corresponding display size.
  • the device further includes:
  • the special effect adding module is used to receive the user's special effect adding operation for the special effect to be added through the video playing interface, and add the special effect to be added to the user video in response to the special effect adding operation.
  • the user video shooting module may also be configured as:
  • the recording method includes at least one of the fast recording method, the slow recording method, and the standard recording method.
  • the device further includes:
  • a co-production video generation module, used to synthesize the user video and the original video to obtain a co-production video.
  • the device further includes:
  • the volume adjustment module is used to receive the user's volume adjustment operation through the video playback interface, and adjust the volume of the audio information of the user video and/or the audio information of the original video in response to the volume adjustment operation.
  • the device further includes:
  • the operation prompt module is used to provide the user with an operation prompt option; when the user's operation on this option is received, the option provides the user with prompt information about the co-shooting operation.
  • the present disclosure provides an electronic device including a processor and a memory
  • the memory is used to store computer operation instructions
  • the processor is configured to execute the method as shown in any embodiment of the first aspect of the present disclosure by invoking the computer operation instruction.
  • the present disclosure provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method shown in any embodiment of the first aspect of the present disclosure.
  • a user only needs to perform operations related to user video shooting on the video playback interface; the user video can then be recorded on the basis of the original video through the video shooting window, so the operation process is simple and fast.
  • the user video can reflect the user's feelings, comments, or reactions while viewing the original video, so the user can conveniently present his or her views on the original video; this better meets the user's actual application needs, improves the user's interactive experience, and makes video shooting more fun.
  • FIG. 1 is a schematic flowchart of a video shooting method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a video playback interface provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of yet another video playback interface provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of yet another video playback interface provided by an embodiment of the present disclosure.
  • FIG. 5A is a schematic diagram of a volume adjustment method provided by an embodiment of the present disclosure.
  • FIG. 5B is a schematic diagram of yet another volume adjustment method provided by an embodiment of the present disclosure.
  • FIG. 6A is a schematic diagram of another video playback interface provided by an embodiment of the present disclosure.
  • FIG. 6B is a schematic diagram of yet another video playback interface provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of a video shooting device provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • An embodiment of the present disclosure provides a video shooting method. As shown in FIG. 1, the method may include:
  • Step S110 Receive the user's video shooting trigger operation through the video playback interface of the original video.
  • the video shooting trigger operation indicates that the user wants to shoot a user video based on the original video in the video playback interface; that is, it is the operation the user uses to trigger the start of user video shooting. Its specific form can be configured as needed; for example, it can be a trigger action at a given operating position on the interface of the client application.
  • the video playback interface is used for interaction between the terminal device and the user; through this interface, the user's operations on the original video can be received, for example, sharing the original video or initiating a co-shoot.
  • the operation can be triggered through a relevant trigger identifier of the client, such as a specified trigger button or input box on the client interface, or by the user's voice. Specifically, it can be a "co-shoot" virtual button displayed on the client's application interface; the user clicks the button to trigger the video shooting operation.
  • the original video may be a video that has not been co-shot, or a co-production video obtained after co-shooting.
  • step S120 in response to the video shooting trigger operation, the video shooting window is superimposed and displayed on the video playback interface.
  • the video shooting window may be superimposed and displayed at a preset position on the video playback interface; the preset position may be a display position pre-configured based on the size of the display interface of the user's terminal device, for example, the upper-left corner of the video playback interface. The size of the video shooting window is smaller than the display window of the original video, so the video shooting window blocks only part of the original video's content.
  • the initial size of the video shooting window can be configured according to actual needs. It can be chosen to minimize occlusion of the original video picture during playback, so that it neither affects the user's viewing of the original video nor makes the recorded picture too small to view.
  • the size of the video shooting window displayed on the terminal device can also be adjusted automatically according to the size of the display interface of the user's terminal device.
  • for example, the video shooting window may be one-tenth or one-fifth of the display interface of the terminal device.
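As an illustration of the sizing rule above, the window dimensions can be derived from the display size and an area fraction. The function name, the square-root scaling, and the rounding below are assumptions for this sketch; the disclosure states only that the window may be one-tenth or one-fifth of the display interface.

```python
def capture_window_size(display_w: int, display_h: int, fraction: float = 0.2) -> tuple:
    """Return (w, h) of a capture window whose area is `fraction` of the
    display area, keeping the display's aspect ratio (illustrative only)."""
    scale = fraction ** 0.5  # linear scale so that the area ratio equals `fraction`
    return round(display_w * scale), round(display_h * scale)
```

For a 1080 x 1920 display with a one-quarter area fraction, this yields a 540 x 960 window; any real implementation would also clamp against the window adjustment boundary lines described later.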
  • Step S130 Receive a user's video shooting operation through the video playback interface.
  • the video playback interface includes a relevant trigger identifier for triggering the video shooting operation, such as a specified trigger button or input box; it can also be a user's voice instruction. Specifically, it can be a "shoot" virtual button displayed on the client's application interface: the user's click on this button is the video shooting operation, which triggers the shooting function of the user's terminal device to capture the content to be shot, such as the user himself or herself.
  • step S140 in response to the video shooting operation, the user video is shot, and the user video is displayed through the video shooting window.
  • the playback state of the original video is not limited; that is, the original video may be playing, or may be paused at a certain video frame, and this may be configured based on actual needs.
  • the original video may be a video that has not been co-shot, or a co-production video obtained after co-shooting.
  • the user video in the embodiments of the present disclosure may be a video that includes the user, that is, a recording of the user himself or herself; it may also be a video of another scene that the user records after adjusting the camera as needed.
  • a user only needs to perform operations related to user video shooting on the video playback interface; the user video can then be recorded on the basis of the original video through the video shooting window, so the operation process is simple and fast.
  • the user video can reflect the user's feelings, comments, or reactions while viewing the original video, so the user can conveniently present his or her views on the original video; this better meets the user's actual application needs, improves the user's interactive experience, and makes video shooting more fun.
  • FIG. 2 shows a schematic diagram of a video playback interface of the original video of the client application in the terminal device.
  • the "co-shoot" virtual button displayed on the interface is the video shooting trigger button, and the user's click on this button is the video shooting trigger operation. After the video shooting trigger operation is received on the video playback interface, the video shooting window A is superimposed and displayed on the video playback interface B.
  • the "shoot" virtual button is the shooting trigger button, and the user's click on this button is the video shooting operation. After the operation is received, the user video is shot and displayed through the video shooting window A, realizing the function of shooting the user video on the basis of the original video.
  • the shape of the video shooting window is not limited, including a circle, a rectangle, or other shapes, and can be configured according to actual needs.
  • the method may further include:
  • the video shooting window is adjusted to the corresponding area on the video playback interface.
  • the user can adjust the position of the video shooting window to meet different users' needs for the window's position on the video playback interface.
  • the position of the video shooting window can be adjusted by any of the following user window movement operations:
  • the first type: the user can adjust the position of the video shooting window by dragging it with an operating object, such as a finger.
  • the second type: the user can adjust the position of the video shooting window through a position progress bar displayed in the video playback interface.
  • different positions on the position progress bar correspond to different positions of the shooting window on the video playback interface; the user can slide the progress bar to determine the area of the video shooting window on the video playback interface.
  • adjusting the video shooting window to the corresponding area on the video playback interface in response to the window movement operation may include:
  • the video playback interface has a pre-configured window adjustment boundary line.
  • the window adjustment boundary line is used to limit the display area of the video shooting window above the video playback interface.
  • the window adjustment boundary line may be pre-configured based on the sizes of the display interfaces of various terminal devices, so that the content captured in the video shooting window can be displayed properly on the display interface of any terminal device. With this configuration, when the user's window movement operation is received, the pre-configured window adjustment boundary line is displayed on the video playback interface at the same time, giving the user a reference while adjusting the video shooting window.
  • the video shooting window can be configured according to requirements.
  • the window adjustment boundary line may be a guide line at a pre-configured position in the video playback interface; the pre-configured position may include at least one of the top, bottom, left, and right of the video playback interface, and guide lines at different positions limit the adjustment range of the video shooting window at the corresponding position in the video playback interface.
  • the two guide lines at the top and left in the video playback interface are used as window adjustment lines (that is, window adjustment boundary lines a and b) as an example.
  • the user can trigger the window adjustment operation by dragging the video shooting window f.
  • the window adjustment boundary lines a and b will be displayed in the video playback interface; they are two mutually perpendicular lines. In practical applications, to make them easy for the user to identify, the window adjustment boundary lines a and b can be marked in an eye-catching color, such as red, or with a distinctive shape, such as a zigzag.
  • the user drags the video shooting window f from position A to position B; based on position B, the video shooting window f is adjusted to the corresponding position on the video playback interface, realizing the adjustment of the video shooting window.
  • determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line may include:
  • the window movement operation determine the first display area of the video shooting window
  • the first display area is determined to be the current display area
  • the second display area is determined to be the current display area
  • the second display area is an area after the first display area is translated to any window adjustment boundary line, and at least one position point of the second display area coincides with any window adjustment boundary line.
  • the video shooting window has a relatively better display position within the adjustment range defined by the window adjustment boundary lines, for example, a display area near a boundary line; during adjustment, the user may be unable to place the window at that position accurately.
  • in that case, the distance between the display area of the video shooting window during adjustment and the window adjustment boundary line can be used to help the user adjust the video shooting window to a relatively better position on the video playback interface.
  • when the distance between the first display area and each window adjustment boundary line is not less than the set distance, it indicates that the user wants a display position in a non-edge area, and the first display area can be used as the area to which the video shooting window is adjusted, that is, the current display area.
  • when the distance between the first display area and any window adjustment boundary line is less than the set distance, it indicates that the user may want to move the video shooting window to the edge area of the video playback interface so as to block the original video as little as possible; in this case, the second display area at that boundary line may be determined as the current display area.
  • if the video shooting window is rectangular and the window adjustment boundary line is a straight line, the first display area is rectangular, and the area obtained by translating the first display area to any window adjustment boundary line is an area in which one border of the first display area coincides with that boundary line; if the video shooting window is circular and the window adjustment boundary line is a straight line, the first display area is circular, and the translated area is an area in which at least one position point of the first display area coincides with that boundary line. It can be understood that, when an adjustment boundary line exists, no matter how the shooting window is adjusted, its display area cannot cross the boundary line.
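The snap-to-boundary rule above can be sketched in Python as follows. This is an illustrative reading of the claim language, not an implementation from the disclosure; the coordinate convention, the boundary representation, and the 20-pixel set distance are assumptions.

```python
SNAP_DIST = 20  # "set distance" in pixels (assumed value)

def current_display_area(x, y, w, h, boundaries, snap_dist=SNAP_DIST):
    """(x, y) is the top-left corner of the rectangular first display area;
    `boundaries` maps 'top'/'bottom'/'left'/'right' to a line coordinate.
    Returns the top-left corner of the current display area: the first
    display area, translated to any boundary line it lands close to."""
    if 'left' in boundaries and abs(x - boundaries['left']) < snap_dist:
        x = boundaries['left']            # second display area: edge coincides with line
    if 'right' in boundaries and abs((x + w) - boundaries['right']) < snap_dist:
        x = boundaries['right'] - w
    if 'top' in boundaries and abs(y - boundaries['top']) < snap_dist:
        y = boundaries['top']
    if 'bottom' in boundaries and abs((y + h) - boundaries['bottom']) < snap_dist:
        y = boundaries['bottom'] - h
    return x, y
```

With boundary lines at the top and left (the lines a and b of the example), a window dragged to within the set distance of the left line snaps flush to it, while a window dropped in a non-edge area is left where the drag ended.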
  • taking a user video in response to a video shooting operation and displaying the user video through a video shooting window may include:
  • the user video is shot, the original video is simultaneously played, and the user video is displayed through the video shooting window.
  • in order to make the comment content in the user video correspond to the content in the original video, the user video can be recorded synchronously while the original video plays; that is, when the video shooting operation is received, shooting of the user video starts and the original video is played synchronously.
  • in this way, the user video is recorded while the original video plays, so that during recording the user can synchronize the thoughts or comments in the user video with the video content being played in the original video, further improving the user's interactive experience.
  • if the original video was playing before the user's video shooting operation was received through the video playback interface, the original video may be paused automatically when the operation is received, or paused by the user; when the video shooting operation is received, the paused original video can be resumed, the user video shot, and the user video displayed through the video shooting window.
  • the method may further include:
  • the video shooting window is adjusted to the corresponding display size.
  • the size of the video shooting window can be set to a pre-configured default value, or adjusted by the user based on actual needs.
  • the video playback interface includes a trigger identifier related to the window size adjustment operation, such as a specified trigger button or input box, or a user's voice instruction. Specifically, it can be a "window" virtual button displayed on the video playback interface; through this button the user can trigger the window size adjustment operation and adjust the size of the video shooting window.
  • the method may further include:
  • the special effect to be added is added to the user video.
  • the user can also be provided with the function of adding special effects to the user video; that is, through the user's special effect adding operation, the selected special effect is added to the user video.
  • the special effect to be added may be added before the user's video shooting, may also be added during the user's video shooting, or may be added after the user's video shooting is completed.
  • the disclosure does not limit the timing of adding the special effect.
  • the function of adding special effects to user videos can be achieved in at least one of the following ways:
  • the first type: the function of adding special effects can be realized through a "special effects" virtual button displayed on the video playback interface.
  • the second type: special effects can be added by sliding on the display interface of the user video.
  • the user can slide the display interface of the user video left or right with an operating object, such as a finger, to add the corresponding special effect to the user video.
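The swipe-based selection above can be modeled as cycling through an ordered list of effects. The effect names, the cyclic ordering, and the direction convention below are illustrative assumptions, not details from the disclosure.

```python
# Hypothetical effect list; the disclosure does not name specific effects.
EFFECTS = ["none", "beauty", "vintage", "sticker"]

def next_effect(current: str, direction: str) -> str:
    """Map a left/right swipe on the user-video window to the next or
    previous effect in a cyclic list."""
    i = EFFECTS.index(current)
    step = 1 if direction == "left" else -1
    return EFFECTS[(i + step) % len(EFFECTS)]
```

Cycling keeps every effect reachable from either swipe direction, which matches the left-and-right sliding interaction described above.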
  • the method may further include, before shooting the user video in response to the video shooting operation and displaying the user video through the video shooting window:
  • receiving the user's recording selection operation, where the recording method includes at least one of a fast recording method, a slow recording method, and a standard recording method;
  • determining the recording method of the user video according to the recording selection operation.
  • before shooting, the user can be provided with the function of selecting the recording mode of the user video; that is, through the user's recording selection operation, the user video is recorded in the selected recording mode.
  • the recording rate decreases in order from the fast recording mode to the standard recording mode to the slow recording mode; by selecting among the recording methods, variable-speed recording of the user video can be realized, further improving the user's interactive experience.
  • "fast", "slow", and "standard" in the above recording modes are relative terms: different recording modes have different recording rates, and the rate of each mode can be configured as required.
  • the fast recording mode refers to the recording mode with a first recording rate, the slow recording mode refers to the recording mode with a second recording rate, and the standard recording mode refers to the recording mode with a third recording rate, where the first rate is greater than the third rate, and the third rate is greater than the second rate.
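The three modes above can be modeled as relative capture rates. The disclosure states only the ordering (first rate > third rate > second rate); the concrete factors, the base frame rate, and the function name below are illustrative assumptions.

```python
# Relative recording rates: first (fast) > third (standard) > second (slow).
# The 2.0 / 1.0 / 0.5 factors are assumed for illustration.
RECORDING_RATES = {"fast": 2.0, "standard": 1.0, "slow": 0.5}

def frames_captured(seconds: float, mode: str, base_fps: int = 30) -> int:
    """Number of frames captured during `seconds` of wall-clock recording
    in the given mode, assuming the rate scales the capture frame rate."""
    return round(seconds * base_fps * RECORDING_RATES[mode])
```

Ten seconds of recording thus yields twice as many frames in fast mode and half as many in slow mode as in standard mode, which is one way the variable-speed effect described above can be realized.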
  • the method may further include: synthesizing the user video and the original video to obtain a co-production video.
  • the method of synthesizing the user video and the original video can be configured according to actual needs: the user video can be combined with the original video during shooting of the user video, or the user video and the original video can be synthesized after shooting is completed. The resulting co-production video includes the content of both the original video and the user video.
  • through the co-production video, the user can watch the user video while watching the original video.
  • for example, if the user video is the user's reaction video, the video frame images of the co-production video include the video frame images of the user video and those of the original video, with the user-video frame image displayed on top of the original-video frame image.
  • specifically, the video frame images of the user video are combined with the corresponding video frame images of the original video, the audio information corresponding to the user-video frames is synthesized with the audio information corresponding to the original-video frames, and the synthesized video frame images and the corresponding audio information are then combined to obtain the co-production video.
  • synthesizing one video frame image with another means combining the two corresponding frames into a single frame, in which the user-video frame image lies on top of the original-video frame image.
  • the size of the video frame image of the user video is smaller than the size of the video frame image of the original video.
  • for example, suppose the duration of the user video is 10 s and the duration of the original video is also 10 s.
  • when synthesizing the video frame images of the user video with the corresponding video frame images of the original video, the video frame images of the first second of the user video are combined with the video frame images of the first second of the original video, and the resulting video frame images are the video frame images of the first second of the co-production video.
  • in the same way, each video frame image in the user video is sequentially combined with the corresponding video frame image in the original video to obtain the co-production video.
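The frame-by-frame, picture-in-picture composition described above can be sketched with frames modeled as 2-D lists of pixel values. The paste position and the list-of-lists representation are illustrative assumptions; a real implementation would operate on decoded image buffers.

```python
def composite_frame(original, user, top=0, left=0):
    """Overlay a (smaller) user-video frame onto an original-video frame.

    The user frame is pasted at (top, left), matching the layout in which
    the user video is displayed on top of the original video.
    """
    out = [row[:] for row in original]      # copy the original frame
    for r, row in enumerate(user):
        for c, px in enumerate(row):
            out[top + r][left + c] = px     # user pixel wins in the overlap
    return out

def composite_video(original_frames, user_frames, top=0, left=0):
    """Combine the i-th user frame with the i-th original frame in order."""
    return [composite_frame(o, u, top, left)
            for o, u in zip(original_frames, user_frames)]
```

Each output frame is a copy, so the original frames are left untouched.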
  • FIG. 4 shows a schematic diagram of a video frame image in a composite video obtained by synthesizing a video frame image of the user video with a video frame image of the original video. As shown in the figure,
  • image a is the video frame image from the original video,
  • image b is the video frame image from the user video,
  • and the image shown after image a and image b are synthesized is the composite video frame image.
  • the method may further include:
  • the volume of the audio information of the user video and/or the audio information of the original video is adjusted accordingly.
  • the volume of the original video and/or the user video can also be adjusted to meet the video playback requirements of different users.
  • the volume of the captured user video may be a preconfigured volume, for example, a volume consistent with that of the original video, or a preset volume.
  • a volume adjustment virtual button in the video playback interface can be used to adjust the volume.
  • the volume adjustment virtual button may be a volume adjustment progress bar, and the adjustment of the original video volume and of the user video volume can each be given a corresponding progress bar.
  • for example, two volume adjustment progress bars, progress bar a and progress bar b, may be provided: the volume of the original video is adjusted through progress bar a, and the volume of the user video is adjusted through progress bar b.
  • different volume adjustment progress bars can be distinguished by a label or logo.
  • a schematic diagram of a volume adjustment method is shown in FIG. 5A.
  • the user can adjust the volume by sliding the volume adjustment progress bar: sliding upward on the interface (that is, toward the "+" sign) turns the volume up, and sliding downward on the interface (that is, toward the "-" sign) turns the volume down.
  • the volume adjustment progress bar can also be set in the horizontal direction, as in the schematic diagram of another volume adjustment method shown in FIG. 5B: sliding toward the left of the interface (that is, toward the "-" sign) turns the volume down, and sliding toward the right of the interface (that is, toward the "+" sign) turns the volume up.
  • the volume adjustment interface and the video playback interface may be the same display interface or different display interfaces. If they are different display interfaces, the volume adjustment interface can be displayed when the user's volume adjustment operation is received through the video playback interface, and the volume can be adjusted through that interface. Optionally, so as not to affect the recording and playback of the video, the volume adjustment interface can be superimposed on the video playback interface, for example, displayed at an edge position of the video playback interface.
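The per-track volume adjustment described above amounts to applying an independent gain to each audio track (one per progress bar) and then mixing the two. The sketch below assumes float samples in [-1.0, 1.0] and a simple hard-clipping mix; both choices are illustrative assumptions rather than details from the disclosure.

```python
def mix_tracks(user_samples, original_samples, user_gain=1.0, original_gain=1.0):
    """Apply per-track volume gains (one per volume progress bar) and mix
    the user audio and original audio sample by sample.

    Samples are floats in [-1.0, 1.0]; the mixed signal is clipped back
    into that range so extreme gain settings cannot overflow.
    """
    mixed = []
    for u, o in zip(user_samples, original_samples):
        s = u * user_gain + o * original_gain
        mixed.append(max(-1.0, min(1.0, s)))
    return mixed
```

Setting a gain to 0.0 mutes that track entirely, which matches muting one video while keeping the other audible.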
  • the method may further include:
  • An operation prompt option is provided to the user, and the operation prompt option is used to provide the user with prompt information of the cooperative video shooting operation when the user's operation is received.
  • the operation prompt option can be displayed on the video playback interface as a "Help" virtual button.
  • the user can obtain the corresponding prompt information by clicking the button.
  • the prompt information can be presented to the user in the form of an operation preview, or displayed as text.
  • either way, the prompt information tells the user how to operate; the present disclosure does not limit the presentation form of the prompt information.
  • synthesizing the user video and the original video to obtain a co-production video may include:
  • since a video includes two parts, video information and audio information, in the process of synthesizing the user video and the original video, the respective video information and audio information can be synthesized separately, and the synthesized video information and audio information are then combined.
  • the above synthesis method facilitates the processing of the information.
  • the method may further include: after synthesizing the user video and the original video to obtain a co-production video,
  • the co-production video is saved locally, and / or, in response to the video publishing operation, the co-production video is published.
  • the user can be provided with the function of publishing and/or saving the co-production video; that is, through the user's video publishing operation, the co-production video is published to a designated video platform to share it, or through the user's video saving operation, the co-production video is saved locally for the user to view.
  • the video publishing operation can be triggered by the user clicking a "publish" virtual button.
  • publishing the co-produced video in response to the video publishing operation may include:
  • to meet the user's privacy requirements for co-production videos, the user is provided with the function of configuring viewing permissions for the co-production video; that is, the user's viewing permission for the co-production video is obtained through the user's video publishing operation, and the co-production video is published according to that viewing permission.
  • the co-production video can only be viewed by users covered by the viewing permission; users not covered by the viewing permission cannot view it.
  • the viewing permission for the co-production video can be preconfigured.
  • the viewing permission for the co-production video can also be configured at publishing time.
  • the current co-production video is then released according to the configured privacy permission.
  • the viewing permission for the co-production video includes at least one of anyone, friends, and only yourself.
  • anyone indicates that the co-production video can be viewed by anyone.
  • friends means that only the user's friends can view the co-production video.
  • only yourself means that only the user himself or herself can view the co-production video.
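The three permission levels above (anyone, friends, only yourself) can be sketched as a simple access check. The permission names, the set-of-friends data shape, and the rule that the owner can always view their own video are illustrative assumptions layered on the disclosure's description.

```python
PERMISSIONS = {"anyone", "friends", "only_self"}

def can_view(permission, viewer, owner, friends_of_owner):
    """Decide whether `viewer` may watch a co-production video published
    by `owner` under the given viewing permission level."""
    if permission not in PERMISSIONS:
        raise ValueError(f"unknown permission: {permission}")
    if permission == "anyone":
        return True                       # visible to everyone
    if permission == "friends":
        return viewer == owner or viewer in friends_of_owner
    return viewer == owner                # "only_self"
```

A platform would typically evaluate this check at playback-request time rather than at publishing time, so the owner can change the permission later.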
  • the method may further include:
  • a push message for the co-production video may be generated so that, through the push message, the associated users of the user and/or the associated users of the original video are informed of the release of the co-production video in time.
  • an associated user of the user refers to a user who has an association relationship with the user; the scope of the association relationship can be configured as needed and may include, but is not limited to, the people the user follows or the people following the user.
  • the associated users of the original video are users associated with the publisher of the original video; for example, they may include, but are not limited to, the publisher of the original video and the people involved in the original video.
  • for example, suppose the original video is itself a co-production video,
  • the publisher of that video is user a,
  • and the author of the original video before co-production is user b;
  • then the associated users of the original video may include user a and user b.
  • for example, if user a follows user b and user a publishes a co-production video mentioning user b (that is, user a @ user b, where "user a @ user b" can be displayed in the title of the co-production video), a push message for the co-production video is sent to user b, so that user b knows that user a has published the video.
  • otherwise, user b does not receive the push message for the co-production video.
  • even if user a does not follow user b, when user a publishes a co-production video and mentions user b (@ user b), user b can still receive the push message for the co-production video.
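The push-notification behavior above can be sketched as collecting recipients from two sources: the publisher's followers and any users @-mentioned in the title, where a mention triggers the push even without a follow relationship. Treating followers as a recipient source and excluding the publisher are illustrative assumptions.

```python
def push_recipients(publisher, followers, mentioned):
    """Collect the users who should receive a push message for a newly
    published co-production video.

    A mentioned user (e.g. `user b` in a title like `user a @ user b`)
    receives the push even without following the publisher, matching the
    example in the text; the publisher never notifies themself.
    """
    recipients = set(followers) | set(mentioned)
    recipients.discard(publisher)
    return recipients
```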
  • synthesizing the user video and the original video to obtain a co-production video may include:
  • according to the recording start time of the user video, a first video in the original video that corresponds to the recording start time and the duration of the user video is determined; the user video and the first video are synthesized into a second video; and based on the second video and the part of the original video other than the first video, the co-production video is obtained.
  • the duration of the user video recorded by the user may be the same as, or different from, the duration of the original video, and the user may select the recording start time of the user video based on the content of the original video, so that when the co-production video is played, the content of the user video corresponds to the content of the original video, further enhancing the user's interactive experience.
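The segment alignment above (determine the "first video", composite it with the user video into the "second video", then splice it back between the untouched parts of the original) can be sketched on frame lists. Frames are modeled as opaque values and per-frame composition as pairing; both are illustrative simplifications of the real image compositing.

```python
def synthesize_by_start_time(original_frames, user_frames, start_index):
    """Align a user video with the original video at the chosen recording
    start position and splice the composited segment into the original.

    The slice of the original overlapping the user video is the 'first
    video'; pairing it frame-by-frame with the user video stands in for
    compositing and yields the 'second video'.
    """
    end_index = start_index + len(user_frames)
    first_video = original_frames[start_index:end_index]
    second_video = [(o, u) for o, u in zip(first_video, user_frames)]
    return (original_frames[:start_index]    # before the co-shot segment
            + second_video                   # composited segment
            + original_frames[end_index:])   # after the co-shot segment
```

The result has the same length as the original, so playback outside the co-shot segment is unchanged.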
  • the method may further include: hiding virtual buttons of corresponding functions in the video playback interface.
  • virtual identifiers representing different functions can be displayed on the video playback interface, for example: a virtual button a indicating the start of shooting, a progress bar b indicating the shooting progress, a virtual button c for adding special effects, a virtual button d for publishing the co-production video, and so on; schematic diagrams of such a video playback interface are shown in FIGS. 6A and 6B.
  • the virtual identifiers other than the virtual button a and the progress bar b in the video playback interface in FIG. 6A can be hidden; for example, the virtual buttons c and d are hidden as shown in the figure. By hiding virtual identifiers, the video playback interface can be kept tidy.
  • a virtual button for hiding function buttons can also be provided in the interface, through which the user can set which function buttons to hide, or display and restore them. Specifically, when the user's operation on this button is received, the user can choose which virtual buttons to hide, or choose to restore previously hidden virtual buttons.
  • an embodiment of the present disclosure also provides a video shooting device 20.
  • the device 20 may include:
  • the trigger operation receiving module 210 is used to receive the user's video shooting trigger operation through the video playback interface of the original video;
  • the shooting window display module 220 is used to superimpose and display the video shooting window on the video playback interface in response to the video shooting trigger operation;
  • the shooting operation receiving module 230 is used to receive the user's video shooting operation through the video playback interface.
  • the user video shooting module 240 is used to shoot user video in response to the video shooting operation and display the user video through the video shooting window.
  • the device may further include:
  • the window position adjustment module is configured to receive a user's window movement operation for the video shooting window, and in response to the window movement operation, adjust the video shooting window to a corresponding area on the video playback interface.
  • the window position adjustment module may be configured to: in response to the window movement operation, display a preconfigured window adjustment boundary line on the video playback interface, where the window adjustment boundary line is used to limit the display area of the video shooting window; determine the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line; and adjust the video shooting window to the corresponding position on the video playback interface according to the current display area.
  • the window position adjustment module may further be configured to:
  • determine a first display area of the video shooting window according to the window movement operation;
  • if the distance between the first display area and any window adjustment boundary line is not less than a set distance, determine the first display area to be the current display area; and
  • if the distance between the first display area and any window adjustment boundary line is less than the set distance, determine a second display area to be the current display area;
  • where the second display area is the area obtained by translating the first display area toward that window adjustment boundary line, and at least one position point of the second display area coincides with that window adjustment boundary line.
  • the user video shooting module 240 may be configured as:
  • the user video is shot, the original video is simultaneously played, and the user video is displayed through the video shooting window.
  • the device may further include:
  • the window size adjustment module is used to receive the user's window size adjustment operation for the video shooting window, and in response to the window size adjustment operation, adjust the video shooting window to the corresponding display size.
  • the device may further include:
  • the special effect adding module is used to receive the user's special effect adding operation for the special effect to be added through the video playing interface, and add the special effect to be added to the user video in response to the special effect adding operation.
  • the user video shooting module 240 may further be configured to: before shooting the user video in response to the video shooting operation and displaying it through the video shooting window, receive, through the video playback interface, the user's recording selection operation for the recording mode of the user video, and determine the recording mode of the user video according to the recording selection operation;
  • the recording mode may include at least one of a fast recording mode, a slow recording mode, and a standard recording mode.
  • the device may further include:
  • Co-production video generation module used to synthesize user video and original video to obtain co-production video.
  • the device may further include:
  • the volume adjustment module is used to receive the user's volume adjustment operation through the video playback interface, and adjust the volume of the audio information of the user's video and / or the audio information of the original video in response to the volume adjustment operation.
  • the device may further include:
  • the operation prompt module is used to provide the user with an operation prompt option, and the operation prompt option is used to provide the user with prompt information of the cooperative video shooting operation when the user's operation is received.
  • the video shooting device of the embodiments of the present disclosure may perform the video shooting method provided by the embodiments of the present disclosure, and its implementation principle is similar.
  • the actions performed by the modules of the video shooting device in the embodiments of the present disclosure correspond to the steps of the video shooting method in the embodiments of the present disclosure; for a detailed functional description of each module of the video shooting device, refer to the description of the corresponding video shooting method above, which is not repeated here.
  • the present disclosure provides an electronic device including a processor and a memory, where the memory is used to store computer operation instructions, and the processor is used to execute the method shown in any embodiment of the video shooting method of the present disclosure by invoking the computer operation instructions.
  • the present disclosure provides a computer-readable storage medium that stores at least one instruction, at least one program, code set, or instruction set, which is loaded and executed by a computer to implement the method shown in any embodiment of the video shooting method of the present disclosure.
  • FIG. 8 shows a schematic structural diagram of an electronic device 800 (for example, a terminal device or a server that implements the method shown in FIG. 1) suitable for implementing the embodiments of the present disclosure.
  • Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 8 is only an example, and should not bring any limitation to the functions and use scope of the embodiments of the present disclosure.
  • the electronic device 800 may include a processing device (such as a central processing unit or a graphics processor) 801, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from the storage device 808 into a random access memory (RAM) 803.
  • in the RAM 803, various programs and data necessary for the operation of the electronic device 800 are also stored.
  • the processing device 801, ROM 802, and RAM 803 are connected to each other through a bus 804.
  • An input / output (I / O) interface 805 is also connected to the bus 804.
  • the following devices can be connected to the I/O interface 805: input devices 806 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 807 such as a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 808 such as a magnetic tape and a hard disk; and a communication device 809.
  • the communication device 809 may allow the electronic device 800 to perform wireless or wired communication with other devices to exchange data.
  • although FIG. 8 shows an electronic device 800 having various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may be implemented or provided instead.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network through the communication device 809, or installed from the storage device 808, or installed from the ROM 802.
  • when the computer program is executed by the processing device 801, the above-described functions defined in the method of the embodiments of the present disclosure are executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • the computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electric wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the computer-readable medium may be included in the electronic device; or it may exist alone without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs.
  • when the one or more programs are executed by the electronic device, the electronic device is caused to: obtain at least two Internet protocol addresses; send a node evaluation request including the at least two Internet protocol addresses to a node evaluation device, where the node evaluation device selects and returns an Internet protocol address from the at least two Internet protocol addresses; and receive the Internet protocol address returned by the node evaluation device; where the obtained Internet protocol address indicates an edge node in a content distribution network.
  • alternatively, the computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: receive a node evaluation request including at least two Internet protocol addresses; select an Internet protocol address from the at least two Internet protocol addresses; and return the selected Internet protocol address; where the received Internet protocol address indicates an edge node in a content distribution network.
  • the computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
  • the above programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through an Internet connection using an Internet service provider such as AT&T, MCI, Sprint, EarthLink, MSN, or GTE).
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code that contains one or more executable instructions for implementing specified logic functions.
  • the functions noted in the blocks may also occur out of the order noted in the figures; for example, two blocks shown in succession may actually be executed substantially in parallel, and sometimes in the reverse order, depending on the functions involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or with a combination of dedicated hardware and computer instructions.


Abstract

The present invention provides a video shooting method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: receiving a user's video shooting trigger operation through the video playback interface of an original video; in response to the video shooting trigger operation, superimposing a video shooting window on the video playback interface; receiving the user's video shooting operation through the video playback interface; and in response to the video shooting operation, shooting a user video and displaying the user video through the video shooting window. According to the present invention, the user can obtain a co-production video simply by performing the relevant user-video shooting operations on the video playback interface, and the operation process is simple and fast.

Description

Video shooting method and apparatus, electronic device, and computer-readable storage medium
Cross-reference to related applications
This application claims priority to the patent application No. 2018112237887 filed with the State Intellectual Property Office of China on October 19, 2018, the entire content of which is incorporated herein by reference for all purposes.
Technical field
The present disclosure relates to the field of Internet technology, and in particular to a video shooting method and apparatus, an electronic device, and a computer-readable storage medium.
Background
On a video interaction platform, users can express their thoughts on, or viewing reactions to, other videos on the platform in the form of a video, thereby interacting with those videos.
In the prior art, when a user wants to shoot an interactive video based on a video on a video platform, the user usually has to download and save the original video first, then record the interactive video with specialized video recording tools, and finally upload the finished interactive video to the video platform. The entire shooting process cannot be completed within the video platform alone, which degrades the user's interactive experience.
It can be seen that existing ways of recording interactive videos are complicated and offer a poor interactive experience, and thus cannot meet users' practical needs.
Summary
In a first aspect, the present disclosure provides a video shooting method, the method including:
receiving a user's video shooting trigger operation through the video playback interface of an original video;
in response to the video shooting trigger operation, superimposing a video shooting window on the video playback interface;
receiving the user's video shooting operation through the video playback interface; and
in response to the video shooting operation, shooting a user video and displaying the user video through the video shooting window.
In an embodiment of the present disclosure, the method further includes:
receiving the user's window movement operation for the video shooting window; and
in response to the window movement operation, adjusting the video shooting window to a corresponding area on the video playback interface.
In an embodiment of the present disclosure, adjusting the video shooting window to a corresponding area on the video playback interface in response to the window movement operation includes:
in response to the window movement operation, displaying a preconfigured window adjustment boundary line on the video playback interface, where the window adjustment boundary line is used to limit the display area of the video shooting window;
determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line; and
adjusting the video shooting window to the corresponding position on the video playback interface according to the current display area.
In an embodiment of the present disclosure, determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line includes:
determining a first display area of the video shooting window according to the window movement operation;
if the distance between the first display area and any window adjustment boundary line is not less than a set distance, determining the first display area to be the current display area; and
if the distance between the first display area and any window adjustment boundary line is less than the set distance, determining a second display area to be the current display area;
where the second display area is the area obtained by translating the first display area toward that window adjustment boundary line, and at least one position point of the second display area coincides with that window adjustment boundary line.
In an embodiment of the present disclosure, shooting the user video in response to the video shooting operation and displaying the user video through the video shooting window includes:
in response to the video shooting operation, shooting the user video while playing the original video, and displaying the user video through the video shooting window.
In an embodiment of the present disclosure, the method further includes:
receiving the user's window size adjustment operation for the video shooting window; and
in response to the window size adjustment operation, adjusting the video shooting window to the corresponding display size.
In an embodiment of the present disclosure, the method further includes:
receiving, through the video playback interface, the user's special-effect adding operation for a special effect to be added; and
in response to the special-effect adding operation, adding the special effect to be added to the user video.
In an embodiment of the present disclosure, the method further includes: before shooting the user video in response to the video shooting operation and displaying the user video through the video shooting window,
receiving, through the video playback interface, the user's recording selection operation for the recording mode of the user video, the recording mode including at least one of a fast recording mode, a slow recording mode, and a standard recording mode; and
determining the recording mode of the user video according to the recording selection operation.
In an embodiment of the present disclosure, the method further includes:
synthesizing the user video and the original video to obtain a co-production video.
In an embodiment of the present disclosure, the method further includes:
receiving the user's volume adjustment operation through the video playback interface; and
in response to the volume adjustment operation, adjusting the volume of the audio information of the user video and/or the audio information of the original video accordingly.
In an embodiment of the present disclosure, the method further includes:
providing the user with an operation prompt option, the operation prompt option being used to provide the user with prompt information for the co-production video shooting operation when the user's operation is received.
In a second aspect, the present disclosure provides a video shooting apparatus, the apparatus including:
a trigger operation receiving module, configured to receive a user's video shooting trigger operation through the video playback interface of an original video;
a shooting window display module, configured to superimpose a video shooting window on the video playback interface in response to the video shooting trigger operation;
a shooting operation receiving module, configured to receive the user's video shooting operation through the video playback interface; and
a user video shooting module, configured to shoot a user video in response to the video shooting operation and display the user video through the video shooting window.
In an embodiment of the present disclosure, the apparatus further includes:
a window position adjustment module, configured to receive the user's window movement operation for the video shooting window and, in response to the window movement operation, adjust the video shooting window to a corresponding area on the video playback interface.
In an embodiment of the present disclosure, the window position adjustment module may be configured to:
in response to the window movement operation, display a preconfigured window adjustment boundary line on the video playback interface, where the window adjustment boundary line is used to limit the display area of the video shooting window;
determine the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line; and
adjust the video shooting window to the corresponding position on the video playback interface according to the current display area.
In an embodiment of the present disclosure, the window position adjustment module may be configured to:
determine a first display area of the video shooting window according to the window movement operation;
if the distance between the first display area and any window adjustment boundary line is not less than a set distance, determine the first display area to be the current display area;
if the distance between the first display area and any window adjustment boundary line is less than the set distance, determine a second display area to be the current display area;
where the second display area is the area obtained by translating the first display area toward that window adjustment boundary line, and at least one position point of the second display area coincides with that window adjustment boundary line.
In an embodiment of the present disclosure, the user video shooting module may be configured to:
in response to the video shooting operation, shoot the user video while playing the original video, and display the user video through the video shooting window.
In an embodiment of the present disclosure, the apparatus further includes:
a window size adjustment module, configured to receive the user's window size adjustment operation for the video shooting window and, in response to the window size adjustment operation, adjust the video shooting window to the corresponding display size.
In an embodiment of the present disclosure, the apparatus further includes:
a special effect adding module, configured to receive, through the video playback interface, the user's special-effect adding operation for a special effect to be added and, in response to the special-effect adding operation, add the special effect to be added to the user video.
In an embodiment of the present disclosure, the user video shooting module may further be configured to:
before shooting the user video in response to the video shooting operation and displaying the user video through the video shooting window, receive, through the video playback interface, the user's recording selection operation for the recording mode of the user video and determine the recording mode of the user video according to the recording selection operation, the recording mode including at least one of a fast recording mode, a slow recording mode, and a standard recording mode.
In an embodiment of the present disclosure, the apparatus further includes:
a co-production video generation module, configured to synthesize the user video and the original video to obtain a co-production video.
In an embodiment of the present disclosure, the apparatus further includes:
a volume adjustment module, configured to receive the user's volume adjustment operation through the video playback interface and, in response to the volume adjustment operation, adjust the volume of the audio information of the user video and/or the audio information of the original video accordingly.
In an embodiment of the present disclosure, the apparatus further includes:
an operation prompt module, configured to provide the user with an operation prompt option, the operation prompt option being used to provide the user with prompt information for the co-production video shooting operation when the user's operation is received.
In a third aspect, the present disclosure provides an electronic device including a processor and a memory,
where the memory is configured to store computer operation instructions; and
the processor is configured to execute the method shown in any embodiment of the first aspect of the present disclosure by invoking the computer operation instructions.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing at least one instruction, at least one program, code set, or instruction set, which is loaded and executed by a computer to implement the method shown in any embodiment of the first aspect of the present disclosure.
According to the embodiments of the present disclosure, the user only needs to perform the relevant user-video shooting operations on the video playback interface to record a user video on the basis of the original video through the video shooting window, and the operation process is simple and fast. Since the user video can reflect the user's impressions of, comments on, or viewing reactions to the original video, users can conveniently present their views on or reactions to the original video, which better meets users' practical needs, enhances the interactive experience, and makes video shooting more fun.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments of the present disclosure are briefly introduced below.
FIG. 1 is a schematic flowchart of a video shooting method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a video playback interface according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another video playback interface according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of yet another video playback interface according to an embodiment of the present disclosure;
FIG. 5A is a schematic diagram of a volume adjustment method according to an embodiment of the present disclosure;
FIG. 5B is a schematic diagram of another volume adjustment method according to an embodiment of the present disclosure;
FIG. 6A is a schematic diagram of another video playback interface according to an embodiment of the present disclosure;
FIG. 6B is a schematic diagram of yet another video playback interface according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a video shooting apparatus according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure are described in detail below; examples of the embodiments are shown in the drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary and are only intended to explain the technical solutions of the present disclosure; they should not be construed as limiting the present disclosure.
Those skilled in the art will understand that, unless specifically stated otherwise, the singular forms "a", "an", and "the" used here may also include the plural forms. It should be further understood that the word "include" used in the specification of the present disclosure refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present. In addition, "connected" or "coupled" as used here may include a wireless connection or wireless coupling. The word "and/or" as used here includes all or any unit of, and all combinations of, one or more of the associated listed items.
The technical solutions of the present disclosure and how they solve the above technical problems are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and identical or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present disclosure are described below with reference to the drawings.
An embodiment of the present disclosure provides a video shooting method; as shown in FIG. 1, the method may include:
Step S110: receiving a user's video shooting trigger operation through the video playback interface of an original video.
The video shooting trigger operation indicates that the user wants to shoot a user video based on the original video in the video playback interface, that is, the action by which the user triggers the start of user-video shooting. The specific form of this operation can be configured as needed; for example, it may be the user's trigger action at an operation position on the interface of the client application. The video playback interface is used for interaction between the terminal device and the user; through this interface, the user's operations on the original video can be received, for example, sharing the original video or shooting a co-production video.
In practical applications, this operation may be triggered through a relevant trigger identifier on the client, such as a designated trigger button or input box on the client interface, or by the user's voice. Specifically, it may be a virtual "co-shoot" button displayed on the application interface of the client, and the user's click on this button is the user's video shooting trigger operation. In practical applications, the original video may be a video that has not been co-shot, or a video obtained after co-shooting.
Step S120: in response to the video shooting trigger operation, superimposing a video shooting window on the video playback interface.
In practical applications, the video shooting window may be superimposed at a preset position on the video playback interface, which may be a display position preconfigured based on the size of the display interface of the user's terminal device, for example, the upper-left corner of the video playback interface. The video shooting window is smaller than the display window of the original video, so that it blocks only part of the original video's picture. The initial size of the video shooting window can be configured according to actual needs, preferably a size that minimizes occlusion of the original video during playback so as not to affect the user's viewing of the original video, and that does not affect the user's viewing of the recorded picture when shooting the user video. For example, the size of the video shooting window displayed on the terminal device may be adjusted automatically according to the size of the display interface of the user's terminal device, for example, to one tenth or one fifth of the display interface of the terminal device.
Step S130: receiving the user's video shooting operation through the video playback interface.
Similarly, the video playback interface includes a relevant trigger identifier for triggering the video shooting operation, such as a designated trigger button or input box, or the user's voice command. Specifically, it may be a virtual "shoot" button displayed on the application interface of the client; the user's click on this button is the user's video shooting operation, which can trigger the shooting function of the user's terminal device to capture the content to be shot, such as the user himself or herself.
Step S140: in response to the video shooting operation, shooting the user video and displaying the user video through the video shooting window.
When shooting the user video, the playback state of the original video is not limited; that is, the original video may be playing or paused at the image of a certain video frame, which can be configured based on actual needs.
In practical applications, the original video may be a video that has not been co-shot, or a co-production video obtained after co-shooting.
It should be noted that the user video in the embodiments of the present disclosure is preferably a video that includes the user, that is, a video of the user is recorded. Of course, it may also be a video of another scene recorded after the user makes adjustments as needed.
According to the embodiments of the present disclosure, the user only needs to perform the relevant user-video shooting operations on the video playback interface to record a user video on the basis of the original video through the video shooting window, and the operation process is simple and fast. Since the user video can reflect the user's impressions of, comments on, or viewing reactions to the original video, users can conveniently present their views on or reactions to the original video, which better meets users' practical needs, enhances the interactive experience, and makes video shooting more fun.
As an example, FIG. 2 shows a schematic diagram of the video playback interface of an original video in a client application on a terminal device. The virtual "co-shoot" button displayed in this interface is the video shooting trigger button, and the user's click on this button is the user's video shooting trigger operation. After the video shooting trigger operation is received on the video playback interface, the video shooting window A is superimposed on the video playback interface B. The virtual "shoot" button shown in this interface is the shooting trigger button, and the user's click on this button is the user's video shooting operation; after this operation is received, the user video is shot through the video shooting window A, realizing the function of shooting a user video on the basis of the original video.
It should be noted that, in practical applications, the specific form of the video playback interface and the form of each button can be configured according to actual needs; the above example is only an optional implementation.
In the embodiments of the present disclosure, the shape of the video shooting window is not limited and includes a circle, a rectangle, or other shapes, which can be configured according to actual needs.
In an embodiment of the present disclosure, the method may further include:
receiving the user's window movement operation for the video shooting window; and
in response to the window movement operation, adjusting the video shooting window to a corresponding area on the video playback interface.
The user can adjust the position of the video shooting window to meet different users' requirements for the position of the video shooting window on the video playback interface. In practical applications, the position of the video shooting window can be adjusted through either of the following window movement operations by the user:
First: the user can drag the video shooting window with an operating object, such as a finger, to adjust its position. When the operating object touches the video shooting window and drags it, the position of the video shooting window is being adjusted; when the operating object leaves the video shooting window, that is, stops dragging it, the position at which dragging stops is the corresponding area of the video shooting window on the video playback interface.
Second: the user can adjust the position of the video shooting window through a position progress bar displayed in the video playback interface. Different positions on the position progress bar may represent positions of the shooting window on the video playback interface, and the user can determine the corresponding area of the video shooting window on the video playback interface by sliding the position progress bar.
In an embodiment of the present disclosure, adjusting the video shooting window to a corresponding area on the video playback interface in response to the window movement operation may include:
in response to the window movement operation, displaying a preconfigured window adjustment boundary line on the video playback interface, where the window adjustment boundary line is used to limit the display area of the video shooting window;
determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line; and
adjusting the video shooting window to the corresponding position on the video playback interface according to the current display area.
The video playback interface has a preconfigured window adjustment boundary line, which is used to limit the display area of the video shooting window on the video playback interface. In practical applications, the window adjustment boundary line may be preconfigured based on the display interface sizes of various terminal devices, so that the content shot in the video shooting window can be adapted for display on the display interface of any terminal device. Based on the configuration of the window adjustment boundary line, when the user's window movement operation is received, the preconfigured window adjustment boundary line is displayed on the video playback interface at the same time, so that the user has a basis for adjusting the video shooting window.
In practical applications, the video shooting window can be configured according to needs. For example, the window adjustment boundary line may be a guide line located at a preconfigured position in the video playback interface; the preconfigured position may include at least one of the top, bottom, left, and right of the video playback interface, and guide lines at different positions can limit the adjustment range of the video shooting window at the corresponding position in the video playback interface.
In the schematic diagram of a video playback interface shown in FIG. 3, the two guide lines at the top and left of the video playback interface are taken as the window adjustment lines (that is, window adjustment boundary lines a and b). The user can trigger the window adjustment operation by dragging the video shooting window f; when this operation is received, the window adjustment boundary lines a and b, which are two mutually perpendicular lines, are displayed in the video playback interface. In practical applications, to make them easy to recognize, the window adjustment boundary lines a and b may be marked with an eye-catching color, such as red, or with a distinctive shape, such as a zigzag. In this example, the user drags the video shooting window f from position A to position B, and based on position B, the video shooting window f is adjusted to the position corresponding to position B on the video playback interface, realizing the adjustment of the video shooting window.
在本公开的实施例中,依据窗口移动操作和窗口调整边界线确定视频拍摄窗口的当前显示区域可以包括:
依据窗口移动操作,确定视频拍摄窗口的第一显示区域;
若第一显示区域和任一窗口调整边界线的距离不小于设定距离,则确定第一显示区域为当前显示区域;
若第一显示区域和任一窗口调整边界线的距离小于设定距离,则确定 第二显示区域为当前显示区域;
其中,第二显示区域为将第一显示区域向任一窗口调整边界线平移后的区域,第二显示区域的至少一个位置点与任一窗口调整边界线重合。
其中,视频拍摄窗口在窗口调整边界线限定的调整范围内具有相对较佳的显示位置,比如靠近窗口调整边界线的显示区域,用户在对视频窗口调整过程中,除了对视频拍摄窗口在视频播放界面之上的显示区域有要求的用户之外,用户无法准确获取该相对较佳的显示位置,则可以通过视频拍摄窗口在调整过程中的显示区域与窗口调整边界线的距离来帮助用户将视频拍摄窗口调整到视频播放界面之上的相对较佳的位置。
具体地,在调整视频拍摄窗口的过程中,当视频拍摄窗口的第一显示区域和任一窗口调整边界线的距离不小于设定距离时,表示用户可能希望将视频拍摄窗口调整至视频播放界面的非边缘区域的显示位置,则可将第一显示区域作为视频拍摄窗口即将调整至的区域,即当前显示区域。当第一显示区域和任一窗口调整边界线的距离小于设定距离时,表示用户可能希望将视频拍摄窗口调整至视频播放界面的边缘区域,以尽可能较少对原视频的播放界面的遮挡,此时,则可以将当前显示区域确定为边界线处的第二显示区域。
在实际应用中,如果视频拍摄窗口为矩形,窗口调整边界线为直线,则第一显示区域为矩形,将第一显示区域向任一窗口调整边界线平移后的区域为第一显示区域的任一边界线与任一窗口调整边界线重合所对应的区域;如果视频拍摄窗口为圆形,窗口调整边界线为直线,则第一显示区域为圆形,将第一显示区域向任一窗口调整边界线平移后的区域为第一显示区域的至少一个位置点与任一窗口调整边界线重合所对应的区域。可以理解的是,在存在调整边界线时,无论如何调整拍摄窗口,拍摄窗口的显示区域均不能够超出边界线。
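The snap-to-boundary behaviour described above can be sketched as follows. The function names, the rectangle representation and the default set distance are illustrative assumptions; the disclosure only requires "snap when closer than a set distance":

```python
def resolve_window_region(x, y, w, h, boundaries, snap_distance=16):
    """Determine the current display region of a rectangular shooting
    window (x, y, w, h): if the dragged position is closer than
    snap_distance to a boundary line, translate the window so one edge
    coincides with that line (the "second display region"); otherwise
    keep the dragged position (the "first display region").

    boundaries: list of ("left" | "top", coordinate) guide lines.
    All names and the default threshold are illustrative assumptions.
    """
    for side, line in boundaries:
        if side == "left" and abs(x - line) < snap_distance:
            x = line  # left edge now coincides with the boundary line
        elif side == "top" and abs(y - line) < snap_distance:
            y = line  # top edge now coincides with the boundary line
    return (x, y)
```

For example, a window released 10 px from the left guide line snaps onto it, while one released 40 px away stays where it was dropped.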
In the embodiments of the present disclosure, capturing the user video in response to the video shooting operation and displaying the user video in the video shooting window may include:
in response to the video shooting operation, capturing the user video while playing the original video, and displaying the user video in the video shooting window.
To make the commentary in the user video correspond to the content of the original video, the user video can be recorded synchronously while the original video plays: when the video shooting operation is received, capture of the user video starts and the original video plays at the same time. The original video thus plays while the user video is recorded in sync, so that while recording the user video the user can record impressions or comments keyed to the content being played in the original video, which further improves the user's interactive experience.
In practical applications, if the original video was playing before the video shooting operation was received through its playback interface and was paused automatically when the operation was received, or was paused by the user, then when the video shooting operation is received, the paused original video can be resumed, the user video captured, and the user video displayed in the video shooting window.
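The state transition above, where a paused original resumes as user-video capture begins, can be sketched as a minimal pure function over a state dictionary. The key names are illustrative assumptions:

```python
def on_shoot(state):
    """Transition applied when the video shooting operation is received:
    the original video (whether previously paused or already playing)
    ends up playing, and user-video recording begins in the shooting
    window. The state keys are illustrative assumptions."""
    return {**state, "original_playing": True, "recording": True}
```

Starting from a paused, idle state, one shoot operation yields a state in which the original plays and the user video records in sync.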
In the embodiments of the present disclosure, the method may further include:
receiving a window resizing operation performed by the user on the video shooting window; and
in response to the window resizing operation, adjusting the video shooting window to the corresponding display size.
The size of the video shooting window can be set to a preconfigured default value, or adjusted by the user according to actual needs. In practical applications, the video playback interface includes a trigger identifier for triggering the window resizing operation, such as a designated trigger button or an input box; it may also be a voice instruction from the user. Specifically, it may be a virtual "Window" button displayed on the video playback interface, through which the user triggers the window resizing operation and thereby adjusts the size of the video shooting window.
In the embodiments of the present disclosure, the method may further include:
receiving, through the video playback interface, an effect adding operation performed by the user for an effect to be added; and
in response to the effect adding operation, adding the effect to the user video.
To meet the video shooting needs of different users, a function of adding special effects to the user video can also be provided: through the user's effect adding operation, the selected effect is added to the user video. The effect may be added before the user video is shot, while it is being shot, or after shooting is completed; the present disclosure does not limit when the effect is added.
In practical applications, effects can be added to the user video in at least one of the following ways:
First: through a virtual "Effects" button displayed on the video playback interface; the user's tap on this button is the effect adding operation, and the effect corresponding to the button is added to the user video.
Second: by swiping on the display area of the user video; the user swipes left or right on the display area with an operating object, such as a finger, and the corresponding effect is added to the user video.
In the embodiments of the present disclosure, the method may further include, before the user video is captured and displayed in the video shooting window in response to the video shooting operation:
receiving, through the video playback interface, a recording selection operation performed by the user for the recording mode of the user video, the recording mode including at least one of a fast mode, a slow mode and a standard mode; and
determining the recording mode of the user video according to the recording selection operation.
To meet the needs of different users, before the user video is shot, the user can be offered a choice of recording mode for the user video: through the user's recording selection operation, the user video is recorded in the selected mode. The recording rates of the fast mode, the standard mode and the slow mode decrease in that order; by choosing among the modes, the user video can be recorded at varying speed, which further improves the user's interactive experience.
It should be understood that "fast", "slow" and "standard" in the above modes are relative terms: the recording rates of the different modes differ, and the rate of each mode can be configured as needed. For example, the fast mode records at a first rate, the slow mode at a second rate and the standard mode at a third rate, where the first rate is greater than the third rate and the third rate is greater than the second rate.
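The ordering constraint above (fast rate > standard rate > slow rate) can be sketched by computing the interval between captured frames for each mode. The concrete speed factors (2x, 1x, 0.5x) are illustrative assumptions; the disclosure only fixes their relative order:

```python
def capture_interval(base_fps, mode):
    """Seconds between captured frames for a recording mode.

    The speed factors are illustrative: 'fast' records at 2x, 'standard'
    at 1x and 'slow' at 0.5x the base frame rate, satisfying the required
    ordering first rate > third rate > second rate."""
    rates = {"fast": 2.0, "standard": 1.0, "slow": 0.5}
    return 1.0 / (base_fps * rates[mode])
```

A higher recording rate means a shorter interval between frames, so fast-mode footage plays back as slow motion at the standard playback rate.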
In the embodiments of the present disclosure, the method may further include:
synthesizing the user video and the original video to obtain a duet video.
How the user video and the original video are synthesized can be configured according to actual requirements: they may be merged while the user video is being shot, or merged after shooting of the user video is complete. The resulting duet video contains the content of both the original video and the user video, so through the duet video the user video can be watched at the same time as the original video; when the user video is a reaction video of the user, watching the duet video reveals the user's reaction to, or thoughts on, the original video.
In the embodiments of the present disclosure, a video frame image of the duet video includes a video frame image of the user video and a video frame image of the original video, the video frame image of the user video being displayed on top of the video frame image of the original video.
It should be noted that, in the video generation method provided by the embodiments of the present disclosure, when the original video and the user video are synthesized into the duet video, each video frame image of the user video is merged with the corresponding video frame image of the original video, the audio information corresponding to each video frame image of the user video is merged with the audio information corresponding to the corresponding video frame image of the original video, and the merged video frame images are then combined with the corresponding merged audio information to obtain the synthesized video. Optionally, merging a video frame image with a video frame image means combining the two corresponding frames into a single frame, in which the frame image of the user video lies on top of the frame image of the original video. When the original video and the user video are synthesized into the duet video, the size of the user video's frame images is smaller than that of the original video's frame images.
In one example, suppose the duration of the user video is 10 s and the duration of the original video is also 10 s. When the frame images of the user video are merged with the corresponding frame images of the original video, the frame image at the 1st second of the user video is merged with the frame image at the 1st second of the original video, and the resulting frame image is the frame image at the 1st second of the duet video; following the same synthesis method, every frame image of the user video is merged in turn with the corresponding frame image of the original video to obtain the duet video.
As an example, Fig. 4 shows a schematic diagram of a frame image of the synthesized video obtained by merging one frame image of the user video with one frame image of the original video. As shown in the figure, image a is the portion from the frame image of the original video, image b is the portion from the frame image of the user video, and the image shown after merging images a and b is the synthesized video frame image.
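The frame-by-frame synthesis described above can be sketched as pairing equal-index frames and marking the user frame as the overlay. Frames are treated as opaque objects; real code would blend pixel buffers, and all names here are illustrative assumptions:

```python
def compose_duet(user_frames, original_frames, position=(0, 0)):
    """Pair each user-video frame with the original-video frame at the
    same index, producing one duet frame per pair with the user frame
    overlaid on the original frame. Assumes equal durations, as in the
    10 s example above; names are illustrative assumptions."""
    assert len(user_frames) == len(original_frames)
    return [{"background": orig, "overlay": user, "overlay_pos": position}
            for user, orig in zip(user_frames, original_frames)]
```

Two 2-frame inputs yield a 2-frame duet in which frame i is built from frame i of each source.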
In the embodiments of the present disclosure, the method may further include:
receiving a volume adjustment operation of the user through the video playback interface; and
in response to the volume adjustment operation, adjusting the volume of the audio information of the user video and/or of the audio information of the original video accordingly.
To further improve the user's interactive experience, the volume of the original video and/or the user video can also be adjusted to satisfy different users' video playback needs. In practical applications, if the user does not need to adjust the volumes of the original video and the user video, the volume of the captured user video can be a preconfigured volume, for example the same volume as the original video, or a preset value.
In practical applications, volume can be adjusted through virtual volume adjustment buttons in the video playback interface. A volume adjustment button may be a volume progress bar; correspondingly, for adjusting the volumes of the original video and of the user video, two volume progress bars can be configured, for example volume progress bar a and volume progress bar b, with progress bar a adjusting the volume of the original video and progress bar b adjusting the volume of the user video, and different identifiers distinguishing the different progress bars.
As an example, Fig. 5A shows a schematic diagram of one volume adjustment scheme: the user can slide a volume progress bar to change the volume, sliding toward the top of the interface (the "+" direction) to turn the volume up and toward the bottom of the interface (the "-" direction) to turn it down. Depending on actual requirements, the volume progress bar can also be set horizontally, as in the alternative scheme shown in Fig. 5B: sliding toward the left of the interface (the "-" direction) turns the volume down, and sliding toward the right of the interface (the "+" direction) turns it up.
It should be noted that, in practical applications, the volume adjustment interface and the video playback interface may be the same display interface or different display interfaces. If they are different, the volume adjustment interface can be displayed when the user's volume adjustment operation is received through the video playback interface, and the volume is adjusted through that interface; optionally, so as not to interfere with the recording and playback of the videos, the volume adjustment interface can be displayed superimposed on the video playback interface, for example at an edge position on the playback interface.
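The two independent sliders described above can be sketched as a small state update, with one clamped volume per video. The slider names and the [0, 100] scale are illustrative assumptions:

```python
def adjust_volumes(volumes, slider, delta):
    """Apply a slider gesture: slider 'original' adjusts the original
    video's volume, slider 'user' adjusts the user video's, each
    independently clamped to [0, 100]. delta is the signed gesture
    amount toward the "+" direction. Names/scale are assumptions."""
    updated = dict(volumes)  # leave the untouched slider unchanged
    updated[slider] = max(0, min(100, updated[slider] + delta))
    return updated
```

Sliding one bar never affects the other video's volume, matching the two-bar design above.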
In the embodiments of the present disclosure, the method may further include:
providing the user with an operation prompt option, the operation prompt option being used to provide the user, upon receipt of an operation of the user, with prompt information on duet-video shooting operations.
If a user who wants to use the duet function, i.e. shooting a user video on the basis of an original video to obtain a duet video, is not sure exactly how to operate it, the operation prompt option can guide the user. In practical applications, the prompt option can be displayed in the video playback interface as a "Help" virtual button; by tapping it the user obtains the corresponding prompt information, which may be presented to the user as an operation preview or as text explaining how to operate; the present disclosure does not limit the form of the prompt information.
In the embodiments of the present disclosure, synthesizing the user video and the original video to obtain the duet video may include:
merging the audio information of the user video and the audio information of the original video to obtain the audio information of the duet video;
merging the video information of the user video and the video information of the original video to obtain the video information of the duet video; and
combining the audio information of the duet video and the video information of the duet video to obtain the duet video.
A video consists of two parts, video information and audio information; in the process of synthesizing the user video and the original video, their respective video information and audio information can therefore be merged separately, and the merged video information and audio information are finally combined into the duet video. This way of synthesizing facilitates the processing of the information.
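The three-step synthesis above (merge audio tracks, merge video tracks, then combine the results) can be sketched as a small pipeline. The mix/overlay/mux helpers are stand-ins for real media operations; all names are illustrative assumptions:

```python
def mix_audio(user_audio, original_audio):
    """Stand-in for mixing two audio tracks into one."""
    return ("mixed-audio", user_audio, original_audio)

def overlay_video(user_video, original_video):
    """Stand-in for compositing the user video over the original."""
    return ("overlay-video", user_video, original_video)

def mux(video, audio):
    """Stand-in for combining a video track and an audio track."""
    return {"video": video, "audio": audio}

def merge_duet(user_av, original_av):
    """Follow the synthesis order described above: audio with audio,
    video with video, then mux the two merged tracks into the duet
    video. The dict keys are illustrative assumptions."""
    audio = mix_audio(user_av["audio"], original_av["audio"])
    video = overlay_video(user_av["video"], original_av["video"])
    return mux(video, audio)
```

In a real implementation the three stand-ins would be media operations (e.g. an audio mix filter, a picture-in-picture overlay, and a container muxer).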
In the embodiments of the present disclosure, the method may further include, after the user video and the original video are synthesized into the duet video:
receiving a video saving operation and/or a video publishing operation of the user; and
in response to the video saving operation, saving the duet video locally, and/or, in response to the video publishing operation, publishing the duet video.
After the duet video is obtained, the user can be offered the functions of publishing and/or saving the duet video: through the user's video publishing operation, the duet video is published to a designated video platform, so as to share the duet video; or, through the user's video saving operation, the duet video is saved locally for the user to view. In practical applications, after the duet video is obtained, the client can jump to a video publishing interface and receive the user's video publishing operation there, or the publishing operation can be received directly through the video playback interface; the video publishing operation may be obtained through the user tapping a "Publish" virtual button.
In the embodiments of the present disclosure, publishing the duet video in response to the video publishing operation may include:
in response to the video publishing operation, acquiring the user's duet-video viewing permission; and
publishing the duet video according to the duet-video viewing permission.
To satisfy users' privacy requirements for duet videos, the user is provided with a function of configuring the viewing permission of the duet video: through the user's video publishing operation, the user's duet-video viewing permission is acquired, and the duet video is published according to that permission. With the viewing permission, the duet video can be viewed only by the users covered by the permission; users outside the permission cannot view the duet video. In practical applications, the viewing permission may be preconfigured, in which case it applies to every duet video to be published; or it may be configured when the current duet video is published, in which case the current duet video is published according to the configured privacy permission.
The duet-video viewing permission includes at least one of "anyone", "friends" and "only me": "anyone" means the duet video can be viewed by anyone, "friends" means only the user's friends can view it, and "only me" means only the user himself or herself can view it.
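The three permission levels above can be sketched as a single check. The video's field names and the string values are illustrative assumptions:

```python
def can_view(viewer, video):
    """Apply the "anyone" / "friends" / "only me" levels described above.

    video: {"owner": str, "visibility": "anyone" | "friends" | "self",
            "friends": set of the owner's friends}.
    Field names and level strings are illustrative assumptions."""
    if viewer == video["owner"] or video["visibility"] == "anyone":
        return True  # the owner can always view; "anyone" opens it to all
    if video["visibility"] == "friends":
        return viewer in video["friends"]
    return False  # "self": only the owner may view
```

A friend passes the "friends" check, a stranger fails it, and under "self" everyone but the owner is refused.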
In the embodiments of the present disclosure, the method may further include:
generating a push message for the duet video; and
sending the push message to users associated with the user and/or users associated with the original video.
To inform the people related to the duet video, a push message for the duet video can be generated when the duet video is published, so that through the push message the users associated with the user and/or the users associated with the original video learn of the publication of the duet video in time. A user associated with the user is a user who has an association relationship with the user; the scope of this relationship can be configured as needed, and may include, but is not limited to, the people the user follows or the people who follow the user. A user associated with the original video is a user who has an association relationship with the publisher of the original video, and may include, but is not limited to, the publisher of the original video and the people involved in the original video; for example, if the original video is itself a once-duetted video published by user a, and the author of the initial original video before that duet was user b, the users associated with the original video may include user a and user b.
In practical applications, when the duet video is published, related attention information can be added to the title of the duet video to indicate which user the publication should be made known to; the recipient of the push message can be expressed in the form of @some-user.
In one example, user a follows user b; user a publishes a duet video and associates user b with it, i.e. user a @user b (which can be displayed in the title of the duet video); the push message of the duet video is then sent to user b, so that user b learns that user a has published a video.
In another example, although user a follows user b and publishes a duet video, user a does not @user b, so user b does not receive the push message of the duet video.
In yet another example, user a does not follow user b; user a publishes a duet video and @s user b when publishing it, so user b can receive the push message of the duet video.
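Following the three examples above, in which the @-mention rather than the follow relationship determines delivery, the recipient set can be sketched as follows. The function and parameter names are illustrative assumptions:

```python
def push_recipients(publisher, mentioned_users):
    """Per the examples above, a user receives the duet video's push
    message iff the publisher @-mentioned them when publishing,
    regardless of any follow relationship. Names are illustrative
    assumptions; self-mentions are ignored."""
    return {user for user in mentioned_users if user != publisher}
```

In the second example above, user b is not mentioned and therefore never appears in the recipient set, even though user a follows user b.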
In the embodiments of the present disclosure, if the duration of the user video is less than the duration of the original video, synthesizing the user video and the original video to obtain the duet video may include:
determining, according to the recording start moment of the user video, a first video in the original video that corresponds to the recording start moment and has the same duration as the user video; merging the user video and the first video into a second video; and obtaining the duet video from the second video and the part of the original video other than the first video.
Depending on the content played in the original video, the duration of the user video recorded by the user may or may not equal the duration of the original video. Based on the content of the original video, the user can choose the recording start moment of the user video, so that when the duet video plays, the content of the user video corresponds to the content of the original video, which further improves the user's interactive experience.
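The partial-duration case above can be sketched at the frame level: only the segment of the original that overlaps the user video is merged, and the remaining original frames pass through unchanged. Frames are opaque objects and all names are illustrative assumptions:

```python
def compose_partial_duet(user_frames, original_frames, start):
    """Composite a user video shorter than the original: frames
    [start, start + len(user_frames)) of the original (the "first
    video") are merged with the user video into the "second video";
    the rest of the original is kept as-is. Illustrative sketch."""
    end = start + len(user_frames)
    merged = [("merged", orig, user)
              for orig, user in zip(original_frames[start:end], user_frames)]
    return original_frames[:start] + merged + original_frames[end:]
```

With a 1-frame user video starting at index 1 of a 3-frame original, only the middle frame is composited; the first and last original frames are untouched.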
In the embodiments of the present disclosure, the method may further include: hiding the virtual buttons of the corresponding functions in the video playback interface.
In practical applications, virtual identifiers representing different functions can be displayed in the video playback interface, for example a virtual button a indicating the start of shooting, a progress bar b indicating shooting progress, a virtual button c for adding effects, and a virtual button d for publishing the duet video, as in the schematic diagrams of a video playback interface shown in Figs. 6A and 6B. To further improve the user's interactive experience, the virtual identifiers in the video playback interface of Fig. 6A other than virtual button a and progress bar b can be hidden, for example virtual buttons c and d; the interface after hiding is shown in Fig. 6B. Hiding the virtual identifiers keeps the video playback interface uncluttered.
In practical applications, a virtual button for hiding function buttons can also be provided in the interface, through which the user can set which function buttons to hide or to restore; specifically, when the user's operation on this button is received, the user can select through it which virtual buttons to hide, or choose to restore the display of virtual buttons that were previously hidden.
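The hide-or-restore behaviour above can be sketched as toggling membership in the set of visible controls. The control names are illustrative assumptions:

```python
def toggle_controls(visible, selection):
    """Hide or restore interface controls, as described above: each
    control in selection is removed from the visible set if currently
    shown, and restored otherwise. Set symmetric difference performs
    exactly this per-element toggle. Names are assumptions."""
    return visible ^ set(selection)
```

Applying the same selection twice returns the interface to its previous state, matching the hide/restore pairing described above.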
Based on the same principle as the method shown in Fig. 1, the embodiments of the present disclosure further provide a video shooting apparatus 20. As shown in Fig. 7, the apparatus 20 may include:
a trigger operation receiving module 210 configured to receive, through the video playback interface of an original video, a video shooting trigger operation of a user;
a shooting window display module 220 configured to display, in response to the video shooting trigger operation, a video shooting window superimposed on the video playback interface;
a shooting operation receiving module 230 configured to receive, through the video playback interface, a video shooting operation of the user; and
a user video shooting module 240 configured to capture, in response to the video shooting operation, a user video and display the user video in the video shooting window.
In the embodiments of the present disclosure, the apparatus may further include:
a window position adjustment module configured to receive a window movement operation performed by the user on the video shooting window and, in response to the window movement operation, adjust the video shooting window to a corresponding region on the video playback interface.
In the embodiments of the present disclosure, the window position adjustment module may be configured to:
display, in response to the window movement operation, a preconfigured window adjustment boundary line on the video playback interface, the window adjustment boundary line being used to delimit the display region of the video shooting window;
determine the current display region of the video shooting window according to the window movement operation and the window adjustment boundary line; and
adjust the video shooting window to the corresponding position on the video playback interface according to the current display region.
In the embodiments of the present disclosure, the window position adjustment module may further be configured to:
determine a first display region of the video shooting window according to the window movement operation;
if the distance between the first display region and any window adjustment boundary line is not less than a set distance, determine the first display region as the current display region; and
if the distance between the first display region and any window adjustment boundary line is less than the set distance, determine a second display region as the current display region;
wherein the second display region is a region obtained by translating the first display region toward that window adjustment boundary line, and at least one point of the second display region coincides with that boundary line.
In the embodiments of the present disclosure, the user video shooting module 240 may be configured to:
capture, in response to the video shooting operation, the user video while playing the original video, and display the user video in the video shooting window.
In the embodiments of the present disclosure, the apparatus may further include:
a window size adjustment module configured to receive a window resizing operation performed by the user on the video shooting window and, in response to the window resizing operation, adjust the video shooting window to the corresponding display size.
In the embodiments of the present disclosure, the apparatus may further include:
an effect adding module configured to receive, through the video playback interface, an effect adding operation performed by the user for an effect to be added and, in response to the effect adding operation, add the effect to the user video.
In the embodiments of the present disclosure, the user video shooting module 240 may further be configured to:
before the user video is captured and displayed in the video shooting window in response to the video shooting operation, receive, through the video playback interface, a recording selection operation performed by the user for the recording mode of the user video, and determine the recording mode of the user video according to the recording selection operation, the recording mode possibly including at least one of a fast mode, a slow mode and a standard mode.
In the embodiments of the present disclosure, the apparatus may further include:
a duet video generation module configured to synthesize the user video and the original video to obtain a duet video.
In the embodiments of the present disclosure, the apparatus may further include:
a volume adjustment module configured to receive a volume adjustment operation of the user through the video playback interface and, in response to the volume adjustment operation, adjust the volume of the audio information of the user video and/or of the audio information of the original video accordingly.
In the embodiments of the present disclosure, the apparatus may further include:
an operation prompt module configured to provide the user with an operation prompt option, the operation prompt option being used to provide the user, upon receipt of an operation of the user, with prompt information on duet-video shooting operations.
The video shooting apparatus of the embodiments of the present disclosure can perform the video shooting methods provided by the embodiments of the present disclosure, and its implementation principles are similar: the actions performed by the modules of the video shooting apparatus in the embodiments correspond to the steps of the video shooting methods in the embodiments; for detailed functional descriptions of the modules of the video shooting apparatus, reference can be made to the descriptions of the corresponding video shooting methods above, which are not repeated here.
Based on the same principle as the video shooting method in the embodiments of the present disclosure, the present disclosure provides an electronic device including a processor and a memory, wherein the memory is configured to store computer operation instructions, and the processor is configured to perform, by invoking the computer operation instructions, the method shown in any embodiment of the video shooting method of the present disclosure.
Based on the same principle as the video shooting method in the embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, the at least one instruction, the at least one program, the code set or the instruction set being loaded and executed by a computer to implement the method shown in any embodiment of the video shooting method of the present disclosure.
In one example, Fig. 8 shows a schematic structural diagram of an electronic device 800 (for example a terminal device or a server implementing the method shown in Fig. 1) suitable for implementing the embodiments of the present disclosure. The electronic device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (e.g. vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 8, the electronic device 800 may include a processing apparatus (e.g. a central processing unit, a graphics processing unit, etc.) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage apparatus 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing apparatus 801, the ROM 802 and the RAM 803 are connected to one another via a bus 804; an input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following apparatuses can be connected to the I/O interface 805: input apparatuses 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output apparatuses 807 including, for example, a liquid crystal display (LCD), speaker and vibrator; storage apparatuses 808 including, for example, a magnetic tape and hard disk; and a communication apparatus 809. The communication apparatus 809 can allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 8 shows the electronic device 800 with various apparatuses, it should be understood that it is not required to implement or possess all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or possessed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts can be implemented as computer software programs. For example, the embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such embodiments, the computer program can be downloaded and installed from a network through the communication apparatus 809, or installed from the storage apparatus 808, or installed from the ROM 802. When the computer program is executed by the processing apparatus 801, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium can be transmitted over any appropriate medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The computer-readable medium may be contained in the above electronic device, or it may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire at least two Internet Protocol addresses; send to a node evaluation device a node evaluation request including the at least two Internet Protocol addresses, wherein the node evaluation device selects an Internet Protocol address from the at least two Internet Protocol addresses and returns it; and receive the Internet Protocol address returned by the node evaluation device; wherein the acquired Internet Protocol address indicates an edge node in a content distribution network.
Alternatively, the computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a node evaluation request including at least two Internet Protocol addresses; select an Internet Protocol address from the at least two Internet Protocol addresses; and return the selected Internet Protocol address; wherein the received Internet Protocol address indicates an edge node in a content distribution network.
Computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof, the programming languages including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram can represent a module, a program segment or a part of code, the module, program segment or part of code containing one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks can occur in an order different from that noted in the drawings; for example, two blocks shown in succession can in fact be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The above description is only a description of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example technical solutions formed by substituting the above features with technical features with similar functions disclosed in (but not limited to) the present disclosure.

Claims (14)

  1. A video shooting method, comprising:
    receiving, through a video playback interface of an original video, a video shooting trigger operation of a user;
    displaying, in response to the video shooting trigger operation, a video shooting window superimposed on the video playback interface;
    receiving, through the video playback interface, a video shooting operation of the user; and
    capturing, in response to the video shooting operation, a user video and displaying the user video in the video shooting window.
  2. The method according to claim 1, further comprising:
    receiving a window movement operation performed by the user on the video shooting window; and
    adjusting, in response to the window movement operation, the video shooting window to a corresponding region on the video playback interface.
  3. The method according to claim 2, wherein adjusting the video shooting window to the corresponding region on the video playback interface comprises:
    displaying, in response to the window movement operation, a preconfigured window adjustment boundary line on the video playback interface, wherein the window adjustment boundary line is used to delimit a display region of the video shooting window;
    determining a current display region of the video shooting window according to the window movement operation and the window adjustment boundary line; and
    adjusting the video shooting window to a corresponding position on the video playback interface according to the current display region.
  4. The method according to claim 3, wherein determining the current display region of the video shooting window according to the window movement operation and the window adjustment boundary line comprises:
    determining a first display region of the video shooting window according to the window movement operation;
    if a distance between the first display region and any one of the window adjustment boundary lines is not less than a set distance, determining the first display region as the current display region; and
    if the distance between the first display region and any one of the window adjustment boundary lines is less than the set distance, determining a second display region as the current display region;
    wherein the second display region is a region obtained by translating the first display region toward said window adjustment boundary line, and at least one point of the second display region coincides with said window adjustment boundary line.
  5. The method according to any one of claims 1 to 4, wherein capturing the user video in response to the video shooting operation and displaying the user video in the video shooting window comprises:
    capturing, in response to the video shooting operation, the user video while playing the original video, and displaying the user video in the video shooting window.
  6. The method according to any one of claims 1 to 4, further comprising:
    receiving a window resizing operation performed by the user on the video shooting window; and
    adjusting, in response to the window resizing operation, the video shooting window to a corresponding display size.
  7. The method according to any one of claims 1 to 4, further comprising:
    receiving, through the video playback interface, an effect adding operation performed by the user for an effect to be added; and
    adding, in response to the effect adding operation, the effect to be added to the user video.
  8. The method according to any one of claims 1 to 4, further comprising, before capturing the user video in response to the video shooting operation and displaying the user video in the video shooting window:
    receiving, through the video playback interface, a recording selection operation performed by the user for a recording mode of the user video, the recording mode comprising at least one of a fast recording mode, a slow recording mode and a standard recording mode; and
    determining the recording mode of the user video according to the recording selection operation.
  9. The method according to any one of claims 1 to 4, further comprising:
    synthesizing the user video and the original video to obtain a duet video.
  10. The method according to claim 9, further comprising:
    receiving a volume adjustment operation of the user through the video playback interface; and
    adjusting, in response to the volume adjustment operation, a volume of audio information of the user video and/or audio information of the original video accordingly.
  11. The method according to any one of claims 1 to 4, further comprising:
    providing the user with an operation prompt option, the operation prompt option being used to provide the user, upon receipt of an operation of the user, with prompt information on duet-video shooting operations.
  12. A video shooting apparatus, comprising:
    a trigger operation receiving module configured to receive, through a video playback interface of an original video, a video shooting trigger operation of a user;
    a shooting window display module configured to display, in response to the video shooting trigger operation, a video shooting window superimposed on the video playback interface;
    a shooting operation receiving module configured to receive, through the video playback interface, a video shooting operation of the user; and
    a user video shooting module configured to capture, in response to the video shooting operation, a user video and display the user video in the video shooting window.
  13. An electronic device, comprising:
    a memory configured to store computer operation instructions; and
    a processor configured to perform, by invoking the computer operation instructions, the method according to any one of claims 1 to 11.
  14. A computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, the at least one instruction, the at least one program, the code set or the instruction set being loaded and executed by a computer to implement the method according to any one of claims 1 to 11.
PCT/CN2018/124066 2018-10-19 2018-12-26 Method and apparatus for capturing video, electronic device and computer-readable storage medium WO2020077856A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB2017755.6A GB2590545B (en) 2018-10-19 2018-12-26 Method and apparatus for capturing video, electronic device and computer-readable storage medium
US16/980,213 US11895426B2 (en) 2018-10-19 2018-12-26 Method and apparatus for capturing video, electronic device and computer-readable storage medium
JP2021510503A JP7139515B2 (ja) 2018-10-19 2018-12-26 動画撮像方法、動画撮像装置、電子機器、およびコンピューター読取可能な記憶媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811223788.7A 2018-10-19 2018-10-19 Method and apparatus for capturing video, electronic device and computer-readable storage medium
CN201811223788.7 2018-10-19

Publications (1)

Publication Number Publication Date
WO2020077856A1 true WO2020077856A1 (zh) 2020-04-23

Family

ID=64544476

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124066 WO2020077856A1 (zh) 2018-10-19 2020-04-23 Method and apparatus for capturing video, electronic device and computer-readable storage medium

Country Status (5)

Country Link
US (1) US11895426B2 (zh)
JP (1) JP7139515B2 (zh)
CN (1) CN108989692A (zh)
GB (1) GB2590545B (zh)
WO (1) WO2020077856A1 (zh)

Also Published As

Publication number Publication date
JP7139515B2 (ja) 2022-09-20
US20210014431A1 (en) 2021-01-14
JP2021520764A (ja) 2021-08-19
US11895426B2 (en) 2024-02-06
GB2590545B (en) 2023-02-22
GB202017755D0 (en) 2020-12-23
CN108989692A (zh) 2018-12-11
GB2590545A (en) 2021-06-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937347

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021510503

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 202017755

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20181226

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18937347

Country of ref document: EP

Kind code of ref document: A1