WO2020077856A1 - Video shooting method and apparatus, electronic device, and computer-readable storage medium - Google Patents
Video shooting method and apparatus, electronic device, and computer-readable storage medium
- Publication number
- WO2020077856A1 (PCT/CN2018/124066)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- user
- window
- shooting
- display area
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
- H04N5/772—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
Definitions
- The present disclosure relates to the field of Internet technology, and in particular to a video shooting method, apparatus, electronic device, and computer-readable storage medium.
- Users can express their thoughts on, or viewing experience of, other videos on the platform in the form of videos, thereby interacting with those videos.
- the present disclosure provides a video shooting method, the method includes:
- the video shooting window is superimposed and displayed on the video playback interface
- the user video is captured, and the user video is displayed through the video capturing window.
- the method further includes:
- the video shooting window is adjusted to the corresponding area on the video playback interface.
- adjusting the video shooting window to the corresponding area above the video playback interface in response to the window movement operation includes:
- determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line includes:
- determining, according to the window movement operation, the first display area of the video shooting window;
- if the distance between the first display area and any window adjustment boundary line is not less than a set distance, the first display area is determined to be the current display area;
- if the distance between the first display area and any window adjustment boundary line is less than the set distance, the second display area is determined to be the current display area;
- the second display area is an area obtained by translating the first display area toward that window adjustment boundary line, and at least one position point of the second display area coincides with that window adjustment boundary line.
- taking a user video in response to a video shooting operation and displaying the user video through the video shooting window includes:
- the user video is shot, the original video is simultaneously played, and the user video is displayed through the video shooting window.
- the method further includes:
- the video shooting window is adjusted to the corresponding display size.
- the method further includes:
- the special effect to be added is added to the user video.
- the method further includes: before shooting the user video in response to the video shooting operation and displaying the user video through the video shooting window,
- receiving, through the video playback interface, the user's recording selection operation for the recording method of the user video, where the recording method includes at least one of a fast recording method, a slow recording method, and a standard recording method;
- determining the recording method of the user video according to the recording selection operation.
- the method further includes: synthesizing the user video and the original video to obtain a co-production video.
- the method further includes:
- the volume of the audio information of the user video and / or the audio information of the original video is adjusted accordingly.
- the method further includes:
- An operation prompt option is provided to the user, and the operation prompt option is used to provide the user with prompt information of the cooperative video shooting operation when the user's operation is received.
- the present disclosure provides a video shooting device including:
- the trigger operation receiving module is used to receive the user's video shooting trigger operation through the video playback interface of the original video;
- the shooting window display module is used to superimpose and display the video shooting window on the video playback interface in response to the video shooting trigger operation;
- the shooting operation receiving module is used to receive the user's video shooting operation through the video playback interface
- the user video shooting module is used to shoot the user video in response to the video shooting operation and display the user video through the video shooting window.
- the device further includes:
- the window position adjustment module is configured to receive a user's window movement operation for the video shooting window, and in response to the window movement operation, adjust the video shooting window to a corresponding area on the video playback interface.
- the window position adjustment module may be configured as:
- the window position adjustment module may be configured as:
- determining, according to the window movement operation, the first display area of the video shooting window;
- if the distance between the first display area and any window adjustment boundary line is not less than a set distance, the first display area is determined to be the current display area;
- if the distance between the first display area and any window adjustment boundary line is less than the set distance, the second display area is determined to be the current display area;
- the second display area is an area obtained by translating the first display area toward that window adjustment boundary line, and at least one position point of the second display area coincides with that window adjustment boundary line.
- the user video shooting module may be configured as:
- the user video is shot, the original video is simultaneously played, and the user video is displayed through the video shooting window.
- the device further includes:
- the window size adjustment module is used to receive the user's window size adjustment operation for the video shooting window, and in response to the window size adjustment operation, adjust the video shooting window to the corresponding display size.
- the device further includes:
- the special effect adding module is used to receive the user's special effect adding operation for the special effect to be added through the video playing interface, and add the special effect to be added to the user video in response to the special effect adding operation.
- the user video shooting module may also be configured as:
- the recording method includes at least one of the fast recording method, the slow recording method, and the standard recording method.
- the device further includes:
- the co-production video generation module is used to synthesize the user video and the original video to obtain a co-production video.
- the device further includes:
- the volume adjustment module is used to receive the user's volume adjustment operation through the video playback interface, and adjust the volume of the audio information of the user's video and / or the audio information of the original video in response to the volume adjustment operation.
- the device further includes:
- the operation prompt module is used to provide the user with an operation prompt option, and the operation prompt option is used to provide the user with prompt information of the cooperative video shooting operation when the user's operation is received.
- the present disclosure provides an electronic device including a processor and a memory
- the memory is used to store computer operation instructions
- the processor is configured to execute the method as shown in any embodiment of the first aspect of the present disclosure by invoking the computer operation instruction.
- the present disclosure provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a computer to implement the method as shown in any embodiment of the first aspect of the present disclosure.
- a user only needs to perform operations related to user video shooting on the video playback interface, and the user video can be recorded on the basis of the original video through the video shooting window, and the operation process is simple and fast.
- the user video can reflect the user's feelings, comments, or viewing reactions to the original video, so that the user can conveniently express his or her views on, or reactions to, the original video; this can better meet the user's actual application needs, improve the user's interactive experience, and make video shooting more fun.
- FIG. 1 is a schematic flowchart of a video shooting method provided by an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of a video playback interface provided by an embodiment of the present disclosure
- FIG. 3 is a schematic diagram of yet another video playback interface provided by an embodiment of the present disclosure.
- FIG. 4 is a schematic diagram of yet another video playback interface provided by an embodiment of the present disclosure.
- FIG. 5A is a schematic diagram of a volume adjustment method provided by an embodiment of the present disclosure.
- FIG. 5B is a schematic diagram of yet another volume adjustment method provided by an embodiment of the present disclosure.
- FIG. 6A is a schematic diagram of another video playback interface provided by an embodiment of the present disclosure.
- FIG. 6B is a schematic diagram of yet another video playback interface provided by an embodiment of the present disclosure.
- FIG. 7 is a schematic structural diagram of a video shooting device provided by an embodiment of the present disclosure.
- FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
- An embodiment of the present disclosure provides a video shooting method. As shown in FIG. 1, the method may include:
- Step S110: Receive the user's video shooting trigger operation through the video playback interface of the original video.
- The video shooting trigger operation indicates that the user wants to shoot a user video based on the original video in the video playback interface; that is, it is the operation by which the user triggers the start of user video shooting. The specific form of the operation can be configured as needed; for example, it can be a trigger action performed by the user at an operating position on the interface of the client application.
- The video playback interface is used for interaction between the terminal device and the user, and the user's related operations on the original video, such as sharing the original video or performing co-shooting, can be received through this interface.
- The operation can be triggered through a relevant trigger identifier of the client, such as a designated trigger button or input box on the client interface, or through the user's voice. Specifically, it can be a "co-shooting" virtual button displayed on the client's application interface, and the user's click on this button triggers the video shooting trigger operation.
- The original video may be a video that has not been co-shot, or a co-production video obtained after co-shooting.
- Step S120: In response to the video shooting trigger operation, the video shooting window is superimposed and displayed on the video playback interface.
- The video shooting window may be superimposed and displayed at a preset position on the video playback interface. The preset position may be a display position pre-configured based on the size of the display interface of the user's terminal device, for example, the upper left corner of the video playback interface. The size of the video capture window is smaller than the display window of the original video, so that the video capture window blocks only part of the content of the original video.
- The initial size of the video shooting window can be configured according to actual needs. It can be chosen so that, while the original video is playing, it occludes as little of the original video picture as possible and does not affect the user's viewing of the original video, while still being large enough not to affect the user's viewing of the recorded picture.
- The size of the video capture window displayed on the terminal device can be configured to adjust automatically according to the size of the display interface of the user's terminal device.
- For example, the video capture window may be one-tenth or one-fifth of the display interface of the terminal device, as in the sketch below.
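- As a non-limiting illustration only (none of the following code appears in the patent), here is a minimal sketch of deriving the initial window size from the display interface size, assuming the "one-tenth or one-fifth" ratio refers to the window-to-display area ratio; the function and parameter names are hypothetical.

```python
# Illustrative sketch: compute the initial size of the video shooting window as a
# configurable fraction of the terminal's display interface. The fraction values
# (1/10, 1/5) follow the example above; names are hypothetical.

def initial_window_size(display_width: int, display_height: int,
                        fraction: float = 0.2) -> tuple[int, int]:
    """Return (width, height) of the video shooting window.

    `fraction` is the window-area-to-display-area ratio, e.g. 0.1 (one tenth)
    or 0.2 (one fifth) as mentioned in the description.
    """
    scale = fraction ** 0.5  # scale each side so the *area* ratio equals `fraction`
    return int(display_width * scale), int(display_height * scale)

# Example: a 1080x1920 display with a one-fifth window
print(initial_window_size(1080, 1920, fraction=0.2))  # -> (482, 858)
```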
- Step S130: Receive the user's video shooting operation through the video playback interface.
- The video playback interface includes a related trigger identifier for triggering the video shooting operation, such as a designated trigger button or input box; the operation can also be a voice instruction from the user. Specifically, it can be a "shooting" virtual button displayed on the client's application interface, and the user's click on this button is the video shooting operation. The video shooting operation can trigger the shooting function of the user's terminal device to capture the content to be shot, such as the user himself or herself.
- Step S140: In response to the video shooting operation, the user video is shot, and the user video is displayed through the video shooting window.
- The playback state of the original video is not limited here; that is, the original video may be in a playback state or paused at a certain video frame, which may be configured based on actual needs.
- the original video may be a video that has not been co-shot, or a co-produced video that has been obtained after co-shot.
- The user video in the embodiments of the present disclosure may be a video that includes the user, that is, a video in which the user is recorded; it may also be a video of another scene that the user records after adjusting the camera as needed.
- a user only needs to perform operations related to user video shooting on the video playback interface, and the user video can be recorded on the basis of the original video through the video shooting window, and the operation process is simple and fast.
- the user video can reflect the user's feelings, comments, or viewing reactions to the original video, so that the user can conveniently express his or her views on, or reactions to, the original video; this can better meet the user's actual application needs, improve the user's interactive experience, and make video shooting more fun.
- FIG. 2 shows a schematic diagram of a video playback interface of the original video of the client application in the terminal device.
- The "co-shooting" virtual button displayed on the interface is the video shooting trigger button, and the user's click on this button is the video shooting trigger operation. After the video shooting trigger operation is received through the video playback interface, the video shooting window A is superimposed and displayed on the video playback interface B.
- The "shooting" virtual button is the shooting trigger button, and the user's click on this button is the video shooting operation. After this operation is received, the user video is shot and shown through the video shooting window A, realizing the function of shooting the user video based on the original video. This flow is summarized in the sketch below.
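- The interaction flow described above (trigger operation, overlaid shooting window, shooting operation) can be summarized by the following minimal, self-contained sketch; the class and method names are hypothetical and only model the state changes, not a real camera or UI API.

```python
# Hypothetical sketch of the co-shooting interaction flow: the trigger overlays the
# shooting window on the playback interface, and the shooting operation starts recording.

from dataclasses import dataclass, field

@dataclass
class CoShootSession:
    window_visible: bool = False      # video shooting window A superimposed on interface B
    recording: bool = False           # user video being captured
    events: list = field(default_factory=list)

    def on_co_shoot_trigger(self):    # Steps S110 -> S120
        self.window_visible = True
        self.events.append("overlay video shooting window on playback interface")

    def on_shoot(self):               # Steps S130 -> S140
        if not self.window_visible:
            raise RuntimeError("shooting window must be displayed first")
        self.recording = True
        self.events.append("capture user video and preview it in the shooting window")

session = CoShootSession()
session.on_co_shoot_trigger()
session.on_shoot()
print(session.events)
```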
- the shape of the video shooting window is not limited, including a circle, a rectangle, or other shapes, and can be configured according to actual needs.
- the method may further include:
- the video shooting window is adjusted to the corresponding area on the video playback interface.
- the user can adjust the position of the video shooting window to meet the needs of different users for the position of the video shooting window above the video playback interface.
- the position of the video shooting window can be adjusted by any of the following user window movement operations:
- The first type: the user can adjust the position of the video shooting window by dragging it with an operating object, such as a finger.
- The second type: the user can adjust the position of the video shooting window through a position progress bar displayed in the video playback interface. Different positions on the progress bar represent different positions of the shooting window on the video playback interface, and the user can slide the progress bar to determine the corresponding area of the video shooting window on the video playback interface.
- adjusting the video shooting window to the corresponding area above the video playback interface in response to the window movement operation may include:
- the video playback interface has a pre-configured window adjustment boundary line.
- the window adjustment boundary line is used to limit the display area of the video shooting window above the video playback interface.
- The window adjustment boundary line may be pre-configured based on the display interface sizes of various terminal devices, so that the content captured in the video shooting window can be adapted for display on the display interface of any terminal device. Based on this configuration, when the user's window movement operation is received, the pre-configured window adjustment boundary line is displayed on the video playback interface at the same time, so that it serves as a reference while the user adjusts the video shooting window.
- the video shooting window can be configured according to requirements.
- The window adjustment boundary line may be a guide line located at a pre-configured position in the video playback interface. The pre-configured position may include at least one of the top, bottom, left, and right of the video playback interface, and guide lines at different positions limit the adjustment range of the video shooting window at the corresponding position in the video playback interface.
- the two guide lines at the top and left in the video playback interface are used as window adjustment lines (that is, window adjustment boundary lines a and b) as an example.
- the user can trigger the window adjustment operation by dragging the video shooting window f.
- The window adjustment boundary lines a and b are displayed in the video playback interface; they are two mutually perpendicular lines. In practical applications, to make them easy for the user to identify, the window adjustment boundary lines a and b can be marked with eye-catching colors, such as red, or with distinctive shapes, such as a zigzag pattern.
- The user drags the video shooting window f from position A to position B. Based on position B, the video shooting window f is adjusted to the corresponding position on the video playback interface, realizing the adjustment of the video shooting window.
- determining the current display area of the video shooting window according to the window movement operation and the window adjustment boundary line may include:
- determining, according to the window movement operation, the first display area of the video shooting window;
- if the distance between the first display area and any window adjustment boundary line is not less than a set distance, the first display area is determined to be the current display area;
- if the distance between the first display area and any window adjustment boundary line is less than the set distance, the second display area is determined to be the current display area;
- the second display area is an area obtained by translating the first display area toward that window adjustment boundary line, and at least one position point of the second display area coincides with that window adjustment boundary line.
- The video shooting window has relatively better display positions within the adjustment range defined by the window adjustment boundary line, for example, display areas near the boundary line. In the process of adjusting the video shooting window, if the user cannot accurately reach such a position, the distance between the display area of the video shooting window during the adjustment and the window adjustment boundary line can be used to help the user adjust the video shooting window to a relatively better position on the video playback interface.
- If the distance between the first display area and each window adjustment boundary line is not less than the set distance, it means that the user may want to place the video shooting window at a non-edge display position of the video playback interface, and the first display area can be used as the area to which the video shooting window is to be adjusted, that is, the current display area. If the distance between the first display area and any window adjustment boundary line is less than the set distance, it means that the user may want to move the video shooting window to the edge area of the video playback interface so as to cover the original video as little as possible, and in this case the second display area, which lies against that boundary line, may be determined as the current display area.
- If the video shooting window is rectangular and the window adjustment boundary line is a straight line, the first display area is rectangular, and the area obtained by translating the first display area toward any window adjustment boundary line is an area in which one border of the first display area coincides with that window adjustment boundary line. If the video shooting window is circular and the window adjustment boundary line is a straight line, the first display area is circular, and the area obtained by translating the first display area toward any window adjustment boundary line is an area in which at least one position point of the first display area coincides with that window adjustment boundary line. It can be understood that, when an adjustment boundary line exists, no matter how the shooting window is adjusted, the display area of the shooting window cannot exceed the boundary line; a sketch of this snapping rule follows.
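- A hedged sketch of the snapping rule above, assuming a rectangular shooting window, straight boundary lines at the top and left of the playback interface, and a simple screen coordinate system; the names and coordinate convention are assumptions rather than the patent's implementation.

```python
# Sketch: snap the dragged window against the top/left window adjustment boundary lines
# when it comes within `set_distance` of them, and never let it cross a boundary line.

def snap_to_boundary(x, y, w, h, left_x, top_y, set_distance):
    """Return the current display area (x, y, w, h) of the shooting window.

    (x, y) is the top-left corner of the first display area produced by the drag.
    If the area is closer than `set_distance` to a boundary line, it is translated
    so that its edge coincides with that line (the second display area); otherwise
    the first display area is kept.
    """
    # The window may never exceed the boundary lines, regardless of the drag.
    x = max(x, left_x)
    y = max(y, top_y)

    # Snap flush against a boundary line when within the set distance of it.
    if x - left_x < set_distance:
        x = left_x
    if y - top_y < set_distance:
        y = top_y
    return x, y, w, h

# Example: a 200x300 window dragged to (28, 400), boundary lines at x=20 and y=60,
# set distance 16 -> the window snaps against the left boundary line.
print(snap_to_boundary(28, 400, 200, 300, left_x=20, top_y=60, set_distance=16))
# -> (20, 400, 200, 300)
```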
- taking a user video in response to a video shooting operation and displaying the user video through a video shooting window may include:
- the user video is shot, the original video is simultaneously played, and the user video is displayed through the video shooting window.
- In order to make the comment content in the user video correspond to the content in the original video, the user video can be recorded synchronously while the original video is playing; that is, when the video shooting operation is received, shooting of the user video starts and the original video is played synchronously.
- In this way, the user video can be recorded simultaneously while the original video is playing, so that during the recording of the user video the user can synchronously record thoughts or comments based on the video content being played in the original video, further improving the user's interactive experience.
- If the original video is in the playback state before the user's video shooting operation is received through the video playback interface of the original video, the original video may be paused automatically or paused by the user; then, when the video shooting operation is received, the paused original video can be resumed, the user video shot, and the user video displayed through the video shooting window.
- the method may further include:
- the video shooting window is adjusted to the corresponding display size.
- the size of the video shooting window can be set according to the pre-configured default value, or the size of the video shooting window can be adjusted by the user based on the actual needs of the user.
- The video playback interface includes a trigger identifier related to the window size adjustment operation, such as a designated trigger button or input box; the operation can also be a voice instruction from the user. Specifically, it can be a "window" virtual button displayed on the video playback interface; the user can trigger the window size adjustment operation through this button and thereby adjust the size of the video shooting window.
- the method may further include:
- the special effect to be added is added to the user video.
- the user can also be provided with the function of adding special effects to the user video, that is, adding the selected special effects to be added to the user video through the user's special effect adding operation.
- the special effect to be added may be added before the user's video shooting, may also be added during the user's video shooting, or may be added after the user's video shooting is completed.
- the disclosure does not limit the timing of adding the special effect.
- the function of adding special effects to user videos can be achieved in at least one of the following ways:
- The first type: the function of adding special effects can be realized through a "special effects" virtual button displayed on the video playback interface.
- The second type: special effects can be added by sliding on the display interface of the user video. The user can slide the display interface of the user video left or right with an operating object, such as a finger, to add the corresponding special effect to the user video.
- Optionally, before shooting the user video in response to the video shooting operation and displaying the user video through the video shooting window, the method may further include:
- receiving, through the video playback interface, the user's recording selection operation for the recording method of the user video, where the recording method includes at least one of a fast recording method, a slow recording method, and a standard recording method;
- determining the recording method of the user video according to the recording selection operation.
- Before shooting, the user can be provided with a function to select the recording method of the user video; that is, through the user's recording selection operation, the user video is recorded according to the selected recording method.
- The recording rates of the fast recording method, the standard recording method, and the slow recording method decrease in that order. By selecting among different recording methods, variable-speed recording of the user video can be realized, further improving the user's interactive experience.
- "Fast", "slow", and "standard" in the above recording methods are relative terms: the recording rates of the different recording methods differ, and the recording rate of each recording method can be configured as required.
- The fast recording method refers to a recording method with a first recording rate, the slow recording method refers to a recording method with a second recording rate, and the standard recording method refers to a recording method with a third recording rate, where the first rate is greater than the third rate and the third rate is greater than the second rate. One possible reading of these rates is sketched below.
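- One possible reading of the variable-speed recording described above is that frame timestamps captured under a given recording method are rescaled by a rate multiplier before playback. The following sketch uses assumed multiplier values; the description only requires fast > standard > slow.

```python
# Illustrative sketch (assumed values, not from the patent): map the recording methods to
# relative rates and rescale captured frame timestamps for playback.

from enum import Enum

class RecordingMethod(Enum):
    FAST = "fast"          # first rate  (highest)
    STANDARD = "standard"  # third rate  (middle)
    SLOW = "slow"          # second rate (lowest)

# Hypothetical rate multipliers satisfying fast > standard > slow.
RATE = {RecordingMethod.FAST: 2.0, RecordingMethod.STANDARD: 1.0, RecordingMethod.SLOW: 0.5}

def remap_timestamps(capture_timestamps_s, method: RecordingMethod):
    """Convert capture-time timestamps into playback timestamps for the chosen method."""
    rate = RATE[method]
    return [t / rate for t in capture_timestamps_s]

# 3 seconds of capture played back in 1.5 s (fast) or 6 s (slow).
print(remap_timestamps([0.0, 1.0, 2.0, 3.0], RecordingMethod.FAST))  # [0.0, 0.5, 1.0, 1.5]
print(remap_timestamps([0.0, 1.0, 2.0, 3.0], RecordingMethod.SLOW))  # [0.0, 2.0, 4.0, 6.0]
```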
- the method may further include: synthesizing the user video and the original video to obtain a co-production video.
- The method of synthesizing the user video and the original video can be configured according to actual needs: the user video can be combined with the original video during the shooting of the user video, or the user video and the original video can be synthesized after the shooting of the user video is completed. The resulting co-production video includes the content of both the original video and the user video.
- Through the co-production video, the user video can be watched while watching the original video; for example, the user video may be the user's reaction video. The video frame images of the co-production video include the video frame images of the user video and the video frame images of the original video, with the video frame image of the user video displayed on top of the video frame image of the original video.
- The video frame images of the user video are combined with the corresponding video frame images of the original video, the audio information corresponding to the video frame images of the user video is synthesized with the audio information corresponding to the corresponding video frame images of the original video, and the synthesized video frame images and the corresponding audio information are then combined to obtain the co-production video.
- Synthesizing a video frame image with another video frame image refers to combining the two corresponding video frame images into a single frame image, in which the video frame image of the user video is located on top of the video frame image of the original video.
- the size of the video frame image of the user video is smaller than the size of the video frame image of the original video.
- For example, the duration of the user video is 10 s, and the duration of the original video is also 10 s. When synthesizing the video frame images of the user video with the corresponding video frame images of the original video, the video frame image of the first second of the user video is combined with the video frame image of the first second of the original video, and the resulting frame image is the video frame image of the first second of the co-production video. In this way, each video frame image in the user video is sequentially combined with the corresponding video frame image in the original video to obtain the co-production video, as sketched below.
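- The frame-by-frame synthesis described above can be sketched with numpy as a simple paste of each user-video frame onto the corresponding original-video frame at the shooting window's position; frame sizes, the window position, and the function names are assumptions.

```python
# Minimal numpy sketch of pairwise frame compositing for the co-production video.

import numpy as np

def composite_frame(original_frame: np.ndarray, user_frame: np.ndarray,
                    top: int, left: int) -> np.ndarray:
    """Overlay `user_frame` (smaller) onto `original_frame` (larger) at (top, left)."""
    out = original_frame.copy()
    h, w = user_frame.shape[:2]
    out[top:top + h, left:left + w] = user_frame
    return out

def composite_video(original_frames, user_frames, top=40, left=24):
    """Combine the two frame sequences pairwise into co-production frames."""
    return [composite_frame(o, u, top, left) for o, u in zip(original_frames, user_frames)]

# Example with dummy frames: 1080x1920 original, 270x480 user window.
original = [np.zeros((1920, 1080, 3), dtype=np.uint8) for _ in range(3)]
user = [np.full((480, 270, 3), 255, dtype=np.uint8) for _ in range(3)]
co_production = composite_video(original, user)
print(co_production[0].shape)  # (1920, 1080, 3) -- same size as the original frame
```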
- FIG. 4 shows a schematic diagram of a video frame image in a synthesized video obtained by synthesizing a video frame image in a user video and a video frame image in an original video, as shown in the figure
- Image a is the video frame image portion from the original video, image b is the video frame image portion from the user video, and the image shown after image a and image b are synthesized is the synthesized video frame image.
- the method may further include:
- the volume of the audio information of the user video and / or the audio information of the original video is adjusted accordingly.
- the volume of the original video and / or user video can also be adjusted to meet the video playback requirements of different users.
- the volume of the captured user video may be a pre-configured volume, for example, a volume consistent with the volume in the original video, or a preset volume.
- the volume adjustment virtual button in the video playback interface can be used to adjust the volume.
- The volume adjustment virtual button can be a volume adjustment progress bar. To adjust the volume of the original video and the volume of the user video separately, two volume adjustment progress bars can be configured, such as volume adjustment progress bar a and volume adjustment progress bar b: the volume of the original video is adjusted through progress bar a, and the volume of the user video is adjusted through progress bar b. Different identifiers can be used to distinguish the different volume adjustment progress bars.
- A schematic diagram of a volume adjustment method is shown in FIG. 5A.
- The user can adjust the volume by sliding the volume adjustment progress bar: sliding upward on the interface (that is, toward the "+" sign) turns the volume up, and sliding downward (that is, toward the "-" sign) turns the volume down.
- The volume adjustment progress bar can also be set in the horizontal direction, as in the schematic diagram of another volume adjustment method shown in FIG. 5B: sliding toward the left of the interface (that is, toward the "-" sign) turns the volume down, and sliding toward the right (that is, toward the "+" sign) turns the volume up.
- The volume adjustment interface and the video playback interface may be the same display interface or different display interfaces. If they are different display interfaces, the volume adjustment interface can be displayed when the user's volume adjustment operation is received through the video playback interface, and the volume can be adjusted through that interface. Optionally, in order not to affect the recording and playback of the video, the volume adjustment interface can be superimposed on the video playback interface, for example at an edge position of the video playback interface. A sketch modeling the two progress bars as gain factors follows.
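- The two volume adjustment progress bars can be modeled as two independent gain factors applied to the audio of the original video and the user video before mixing; the following is a minimal sketch under that assumption (names and the mixing strategy are not from the patent).

```python
# Sketch: apply per-track gains (progress bar positions) and mix the two audio tracks.

import numpy as np

def mix_audio(original_samples: np.ndarray, user_samples: np.ndarray,
              original_gain: float = 1.0, user_gain: float = 1.0) -> np.ndarray:
    """Scale each track by its progress-bar gain and mix them into one track."""
    n = min(len(original_samples), len(user_samples))
    mixed = original_gain * original_samples[:n] + user_gain * user_samples[:n]
    return np.clip(mixed, -1.0, 1.0)  # keep samples in the normalized [-1, 1] range

# Example: turn the original video down (bar a) and keep the user video at full volume (bar b).
t = np.linspace(0, 1, 8)
original = 0.8 * np.sin(2 * np.pi * 3 * t)
user = 0.4 * np.sin(2 * np.pi * 5 * t)
print(mix_audio(original, user, original_gain=0.3, user_gain=1.0).round(3))
```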
- the method may further include:
- An operation prompt option is provided to the user, and the operation prompt option is used to provide the user with prompt information of the cooperative video shooting operation when the user's operation is received.
- The operation prompt option can be displayed on the video playback interface as a "Help" virtual button.
- the user can get the corresponding prompt information by clicking the button.
- The prompt information can be presented to the user in the form of an operation preview, or through text that tells the user how to operate; the present disclosure does not limit the presentation form of the prompt information.
- synthesizing the user video and the original video to obtain a co-production video may include:
- A video consists of two parts, video information and audio information. In the process of synthesizing the user video and the original video, the respective video information and audio information can first be synthesized separately, and the synthesized video information and synthesized audio information can then be combined to obtain the co-production video.
- the above synthesis method can facilitate the processing of information.
- Optionally, after synthesizing the user video and the original video to obtain the co-production video, the method may further include:
- in response to a video saving operation, saving the co-production video locally; and/or, in response to a video publishing operation, publishing the co-production video.
- the user can be provided with the function of publishing and / or saving the co-produced video, that is, through the user's video publishing operation, the co-produced video is published to the designated video platform to share the co-produced video; Or through the user's video saving operation, the co-production video is saved locally for the user to view.
- The video publishing operation can be received when the user clicks the "publish" virtual button.
- publishing the co-produced video in response to the video publishing operation may include:
- In order to meet the user's privacy requirements for co-production videos, the user is provided with a function for configuring the viewing permission of the co-production video; that is, the user's co-production video viewing permission is obtained through the user's video publishing operation, and the co-production video is published according to that viewing permission.
- the co-produced video can only be viewed by users corresponding to the permission to view the co-produced video, and users who are not in the permission to view the co-produced video cannot view the co-produced video.
- the permission to view the co-produced video can be pre-configured.
- The permission to view the co-production video can also be configured by the user, and the current co-production video is then released according to the configured privacy permission.
- The permission to view the co-production video includes at least one of "anyone", "friends", and "only yourself": "anyone" indicates that the co-production video can be viewed by anyone, "friends" indicates that only the user's friends can view it, and "only yourself" indicates that only the user himself or herself can view it. A simple sketch of this check follows.
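- Here is a simple sketch of the viewing-permission check described above ("anyone", "friends", "only yourself"); the data model (user ids, friend sets) is a hypothetical stand-in.

```python
# Sketch: decide whether a viewer may watch a co-production video under its permission setting.

from enum import Enum

class ViewPermission(Enum):
    ANYONE = "anyone"
    FRIENDS = "friends"
    ONLY_SELF = "only_self"

def can_view(viewer_id: str, publisher_id: str, permission: ViewPermission,
             publisher_friends: set[str]) -> bool:
    if viewer_id == publisher_id:
        return True                           # the publisher can always view the video
    if permission is ViewPermission.ANYONE:
        return True
    if permission is ViewPermission.FRIENDS:
        return viewer_id in publisher_friends
    return False                              # ONLY_SELF: nobody else may view it

print(can_view("user_c", "user_a", ViewPermission.FRIENDS, {"user_b"}))  # False
print(can_view("user_b", "user_a", ViewPermission.FRIENDS, {"user_b"}))  # True
```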
- the method may further include:
- A push message for the co-production video may be generated so that, through the push message, the associated users of the user and/or the associated users of the original video are informed of the release of the co-production video in a timely manner.
- the associated user of the user refers to a user who has an associated relationship with the user, and the related scope of the associated relationship can be configured according to needs, for example, it can include, but not limited to, the person concerned by the user or the person following the user.
- the users associated with the original video are associated with the publisher of the original video. For example, they may include, but are not limited to, the publisher of the original video and the people involved in the original video.
- For example, if the original video is itself a co-production video, the publisher of that video is user a, and the author of the original video before that co-production is user b, then the associated users of the original video may include user a and user b.
- For example, if user a follows user b and posts a co-production video in which user a mentions user b (that is, user a @ user b, where "user a @ user b" can be displayed in the title of the co-production video), a push message for the co-production video is sent to user b so that user b knows that user a has posted the video. In other cases, user b cannot receive the push message of the co-production video; however, even if user a does not follow user b, when user a mentions user b (@ user b) while posting the co-production video, user b can receive a push message for the co-production video. The sketch below illustrates this recipient logic.
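- The push-message logic in the examples above can be sketched as computing a recipient set from the publisher's associated users and any users mentioned with "@"; the data structures below are hypothetical stand-ins.

```python
# Sketch: recipients of the co-production video push message are the publisher's associated
# users (e.g. followers), users mentioned via "@", and optionally the original video's
# associated users.

def push_recipients(publisher_followers: set[str], mentioned_users: set[str],
                    original_video_users: set[str],
                    notify_original_authors: bool = True) -> set[str]:
    recipients = set(publisher_followers) | set(mentioned_users)
    if notify_original_authors:
        recipients |= set(original_video_users)
    return recipients

# user a posts a co-production video and @-mentions user b; user b is notified even though
# user b does not follow user a.
print(push_recipients(publisher_followers={"user_c"},
                      mentioned_users={"user_b"},
                      original_video_users={"user_a", "user_b"}))
```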
- synthesizing the user video and the original video to obtain a co-production video may include:
- According to the recording start time of the user video, a first video is determined in the original video that corresponds to the recording start time and the duration of the user video; the user video and the first video are synthesized into a second video; and the co-production video is obtained based on the second video and the portion of the original video other than the first video.
- The duration of the user video recorded by the user may be the same as or different from the duration of the original video, and the user may select the recording start time of the user video based on the content of the original video, so that when the co-production video is played, the content of the user video corresponds to the content of the original video, further enhancing the user's interactive experience. This offset-based synthesis is sketched below.
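- A hedged sketch of this offset-based synthesis: the "first video" is the slice of the original video starting at the recording start time with the same duration as the user video, it is composited with the user video into the "second video", and the untouched head and tail of the original video are kept around it. Frames are represented abstractly, and `composite_frame` is an assumed overlay function (see the earlier compositing sketch).

```python
# Sketch: composite only the overlapping segment and keep the rest of the original video.

def synthesize_with_offset(original_frames, user_frames, start_index, composite_frame):
    """Return the co-production frame sequence."""
    end_index = start_index + len(user_frames)
    first_video = original_frames[start_index:end_index]          # overlaps the user video
    second_video = [composite_frame(o, u) for o, u in zip(first_video, user_frames)]
    # Co-production video = untouched head + composited middle + untouched tail.
    return original_frames[:start_index] + second_video + original_frames[end_index:]

# Example with symbolic "frames": a 3-frame user video starting at frame 2 of a
# 6-frame original video.
overlay = lambda o, u: f"{o}+{u}"
print(synthesize_with_offset(["o0", "o1", "o2", "o3", "o4", "o5"],
                             ["u0", "u1", "u2"], start_index=2, composite_frame=overlay))
# ['o0', 'o1', 'o2+u0', 'o3+u1', 'o4+u2', 'o5']
```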
- the method may further include: hiding virtual buttons of corresponding functions in the video playback interface.
- Virtual identifiers representing different functions can be displayed on the video playback interface, for example: a virtual button a for starting shooting, a progress bar b indicating the shooting progress, a virtual button c for adding special effects, and a virtual button d for publishing the co-production video, as shown in the schematic diagrams of the video playback interface in FIGS. 6A and 6B.
- The virtual identifiers other than the virtual button a and the progress bar b in the video playback interface of FIG. 6A can be hidden; for example, the virtual buttons c and d are hidden, as shown in the figure. By hiding the virtual identifiers, the video playback interface can be kept tidy.
- A virtual button for hiding function buttons can also be provided in the interface, through which the user can set which function buttons to hide, display, or restore. Specifically, when the user's operation on this button is received, the user can choose which virtual buttons to hide, or choose to restore previously hidden virtual buttons.
- an embodiment of the present disclosure also provides a video shooting device 20.
- the device 20 may include:
- the trigger operation receiving module 210 is used to receive the user's video shooting trigger operation through the video playback interface of the original video;
- the shooting window display module 220 is used to superimpose and display the video shooting window on the video playback interface in response to the video shooting trigger operation;
- the shooting operation receiving module 230 is used to receive the user's video shooting operation through the video playback interface.
- the user video shooting module 240 is used to shoot user video in response to the video shooting operation and display the user video through the video shooting window.
- the device may further include:
- the window position adjustment module is configured to receive a user's window movement operation for the video shooting window, and in response to the window movement operation, adjust the video shooting window to a corresponding area on the video playback interface.
- the window position adjustment module may be configured as:
- the window position adjustment module may be configured as:
- determining, according to the window movement operation, the first display area of the video shooting window;
- if the distance between the first display area and any window adjustment boundary line is not less than a set distance, the first display area is determined to be the current display area;
- if the distance between the first display area and any window adjustment boundary line is less than the set distance, the second display area is determined to be the current display area;
- the second display area is an area obtained by translating the first display area toward that window adjustment boundary line, and at least one position point of the second display area coincides with that window adjustment boundary line.
- the user video shooting module 240 may be configured as:
- the user video is shot, the original video is simultaneously played, and the user video is displayed through the video shooting window.
- the device may further include:
- the window size adjustment module is used to receive the user's window size adjustment operation for the video shooting window, and in response to the window size adjustment operation, adjust the video shooting window to the corresponding display size.
- the device may further include:
- the special effect adding module is used to receive the user's special effect adding operation for the special effect to be added through the video playing interface, and add the special effect to be added to the user video in response to the special effect adding operation.
- the user video shooting module 240 may also be configured as:
- the recording method may include at least one of a fast recording method, a slow recording method, and a standard recording method.
- the device may further include:
- the co-production video generation module is used to synthesize the user video and the original video to obtain a co-production video.
- the device may further include:
- the volume adjustment module is used to receive the user's volume adjustment operation through the video playback interface, and adjust the volume of the audio information of the user's video and / or the audio information of the original video in response to the volume adjustment operation.
- the device may further include:
- the operation prompt module is used to provide the user with an operation prompt option, and the operation prompt option is used to provide the user with prompt information of the cooperative video shooting operation when the user's operation is received.
- the video shooting device of the embodiments of the present disclosure may perform a video shooting method provided by the embodiments of the present disclosure, and the implementation principle is similar.
- The actions performed by the modules in the video shooting device in the embodiments of the present disclosure correspond to the steps in the video shooting method in the embodiments of the present disclosure. For a detailed functional description of each module of the video shooting device, refer to the description of the corresponding video shooting method above, which is not repeated here.
- The present disclosure provides an electronic device including a processor and a memory, wherein the memory is used to store computer operation instructions, and the processor is used to execute, by invoking the computer operation instructions, the method shown in any embodiment of the video shooting method of the present disclosure.
- The present disclosure provides a computer-readable storage medium that stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a computer to implement the method shown in any embodiment of the video shooting method of the present disclosure.
- FIG. 8 shows a schematic structural diagram of an electronic device 800 (for example, a terminal device or a server that implements the method shown in FIG. 1) suitable for implementing the embodiments of the present disclosure.
- Electronic devices in the embodiments of the present disclosure may include, but are not limited to, such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g. Mobile terminals such as car navigation terminals) and fixed terminals such as digital TVs, desktop computers and the like.
- the electronic device shown in FIG. 8 is only an example, and should not bring any limitation to the functions and use scope of the embodiments of the present disclosure.
- The electronic device 800 may include a processing device (such as a central processing unit or a graphics processor) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803.
- In the RAM 803, various programs and data necessary for the operation of the electronic device 800 are also stored.
- the processing device 801, ROM 802, and RAM 803 are connected to each other through a bus 804.
- An input / output (I / O) interface 805 is also connected to the bus 804.
- The following devices can be connected to the I/O interface 805: an input device 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, or gyroscope; an output device 807 including, for example, a liquid crystal display (LCD), a speaker, or a vibrator; a storage device 808 including, for example, a magnetic tape or a hard disk; and a communication device 809.
- the communication device 809 may allow the electronic device 800 to perform wireless or wired communication with other devices to exchange data.
- Although FIG. 8 shows an electronic device 800 having various devices, it should be understood that not all of the devices shown are required to be implemented or provided; more or fewer devices may be implemented or provided instead.
- the process described above with reference to the flowchart may be implemented as a computer software program.
- embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
- the computer program may be downloaded and installed from the network through the communication device 809, or installed from the storage device 808, or installed from the ROM 802.
- When the computer program is executed by the processing device 801, the above-described functions defined in the method of the embodiments of the present disclosure are executed.
- the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
- the computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination of the above. More specific examples of computer readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the foregoing.
- the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
- the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
- the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device .
- the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electric wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
- the computer-readable medium may be included in the electronic device; or it may exist alone without being assembled into the electronic device.
- the computer-readable medium carries one or more programs.
- When the one or more programs are executed by the electronic device, the electronic device is caused to: obtain at least two Internet protocol addresses; send a node evaluation request including the at least two Internet protocol addresses to a node evaluation device, where the node evaluation device selects and returns an Internet protocol address from the at least two Internet protocol addresses; and receive the Internet protocol address returned by the node evaluation device; wherein the obtained Internet protocol address indicates an edge node in a content distribution network.
- the computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: receive a node evaluation request including at least two Internet protocol addresses; from at least two Among the Internet protocol addresses, select the Internet protocol address; return the selected Internet protocol address; wherein, the received Internet protocol address indicates an edge node in the content distribution network.
- the computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
- These programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through an Internet service provider Internet connection).
- Each block in the flowchart or block diagram may represent a module, program segment, or part of code, which contains one or more executable instructions for implementing the specified logic functions.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession can actually be executed in parallel, and sometimes they can also be executed in reverse order, depending on the functions involved.
- Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
- Studio Devices (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Controls And Circuits For Display Device (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
Claims (14)
- 一种视频拍摄方法,包括:通过原视频的视频播放界面,接收用户的视频拍摄触发操作;响应于所述视频拍摄触发操作,将视频拍摄窗口叠加显示在所述视频播放界面之上;通过所述视频播放界面,接收所述用户的视频拍摄操作;以及响应于所述视频拍摄操作,拍摄用户视频,并通过所述视频拍摄窗口显示所述用户视频。
- 根据权利要求1所述的方法,还包括:接收所述用户针对所述视频拍摄窗口的窗口移动操作;以及响应于所述窗口移动操作,将所述视频拍摄窗口调整到所述视频播放界面之上的相应区域。
- 根据权利要求2所述的方法,其中,将所述视频拍摄窗口调整到所述视频播放界面之上的相应区域包括:响应于所述窗口移动操作,将预配置的窗口调整边界线显示于所述视频播放界面,其中,所述窗口调整边界线用于限定所述视频拍摄窗口的显示区域;依据所述窗口移动操作和所述窗口调整边界线,确定所述视频拍摄窗口的当前显示区域;以及根据所述当前显示区域,将所述视频拍摄窗口调整到所述视频播放界面之上的相应位置。
- 根据权利要求3所述的方法,其中,依据所述窗口移动操作和所述窗口调整边界线确定所述视频拍摄窗口的当前显示区域包括:依据所述窗口移动操作,确定所述视频拍摄窗口的第一显示区域;若所述第一显示区域和所述任一所述窗口调整边界线的距离不小于 设定距离,则确定所述第一显示区域为所述当前显示区域;若所述第一显示区域和所述任一所述窗口调整边界线的距离小于所述设定距离,则确定第二显示区域为所述当前显示区域;其中,所述第二显示区域为将所述第一显示区域向所述任一所述窗口调整边界线平移后的区域,所述第二显示区域的至少一个位置点与所述任一所述窗口调整边界线重合。
- 根据权利要求1至4中任一项所述的方法,其中,响应于所述视频拍摄操作拍摄用户视频并通过所述视频拍摄窗口显示所述用户视频包括:响应于所述视频拍摄操作,拍摄用户视频,同时播放所述原视频,并通过所述视频拍摄窗口显示所述用户视频。
- 根据权利要求1至4中任一项所述的方法,还包括:接收所述用户针对所述视频拍摄窗口的窗口大小调节操作;以及响应于所述窗口大小调节操作,将所述视频拍摄窗口调整到相应的显示大小。
- The method according to any one of claims 1 to 4, further comprising: receiving, through the video playback interface, a special effect adding operation of the user for a special effect to be added; and in response to the special effect adding operation, adding the special effect to be added to the user video.
- The method according to any one of claims 1 to 4, further comprising: before shooting the user video in response to the video shooting operation and displaying the user video through the video shooting window, receiving, through the video playback interface, a recording selection operation of the user for a recording mode of the user video, the recording mode including at least one of a fast recording mode, a slow recording mode and a standard recording mode; and determining the recording mode of the user video according to the recording selection operation (see the recording-mode sketch after the claims).
- The method according to any one of claims 1 to 4, further comprising: compositing the user video and the original video to obtain a co-shot video.
- The method according to claim 9, further comprising: receiving a volume adjustment operation of the user through the video playback interface; and in response to the volume adjustment operation, adjusting the volume of audio information of the user video and/or audio information of the original video accordingly (see the audio-mixing sketch after the claims).
- The method according to any one of claims 1 to 4, further comprising: providing the user with an operation prompt option, wherein the operation prompt option is used to provide the user with prompt information for a co-shot video shooting operation when an operation of the user is received.
- A video shooting apparatus, comprising: a trigger operation receiving module configured to receive a video shooting trigger operation of a user through a video playback interface of an original video; a shooting window display module configured to display, in response to the video shooting trigger operation, a video shooting window superimposed on the video playback interface; a shooting operation receiving module configured to receive a video shooting operation of the user through the video playback interface; and a user video shooting module configured to shoot, in response to the video shooting operation, a user video and display the user video through the video shooting window.
- An electronic device, comprising: a memory configured to store computer operation instructions; and a processor configured to perform the method according to any one of claims 1 to 11 by invoking the computer operation instructions.
- A computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, wherein the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a computer to implement the method according to any one of claims 1 to 11.
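Claims 3 and 4 above describe how a dragged shooting window is positioned: the window first follows the drag (the first display area), and if that area comes within a set distance of a preconfigured window adjustment boundary line, it is translated so that at least one point coincides with that line (the second display area). The sketch below is a minimal Python illustration of that snapping rule, assuming axis-aligned rectangular windows and vertical boundary lines; the names `Rect` and `snap_to_boundaries` and the coordinate conventions are illustrative, not from the source.

```python
from dataclasses import dataclass


@dataclass
class Rect:
    """Display area of the shooting window, in screen coordinates."""
    left: float
    top: float
    width: float
    height: float

    @property
    def right(self) -> float:
        return self.left + self.width


def snap_to_boundaries(first_area: Rect, boundary_xs: list[float], set_distance: float) -> Rect:
    """Return the current display area per claims 3-4.

    If the dragged area (first display area) is at least `set_distance` away
    from every vertical boundary line, it is kept as-is; otherwise it is
    translated horizontally so one edge coincides with the nearest line
    (the translated result is the "second display area").
    """
    for x in sorted(boundary_xs, key=lambda b: min(abs(first_area.left - b), abs(first_area.right - b))):
        d_left, d_right = abs(first_area.left - x), abs(first_area.right - x)
        if min(d_left, d_right) < set_distance:
            dx = (x - first_area.left) if d_left <= d_right else (x - first_area.right)
            return Rect(first_area.left + dx, first_area.top, first_area.width, first_area.height)
    return first_area


# Example: a window dragged to x=12 snaps onto the boundary line at x=0
# when the set distance is 16 pixels.
print(snap_to_boundaries(Rect(12, 40, 180, 320), boundary_xs=[0, 360], set_distance=16))
```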
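Claim 8 lets the user pick a fast, slow or standard recording mode before capture. The claim only requires that a mode be selected and applied; the sketch below assumes one common realisation, namely retiming captured frame timestamps by a per-mode speed factor, and the concrete factor values are illustrative.

```python
from enum import Enum


class RecordingMode(Enum):
    FAST = "fast"          # captured footage plays back faster than real time
    SLOW = "slow"          # captured footage plays back slower than real time
    STANDARD = "standard"  # captured footage plays back at real-time speed


# Illustrative speed factors; the disclosure does not prescribe concrete values.
SPEED_FACTORS = {RecordingMode.FAST: 2.0, RecordingMode.SLOW: 0.5, RecordingMode.STANDARD: 1.0}


def retime_frame_timestamps(timestamps_ms: list[float], mode: RecordingMode) -> list[float]:
    """Map capture timestamps to playback timestamps for the chosen mode.

    Dividing by the speed factor makes FAST footage occupy less playback time
    and SLOW footage occupy more, relative to the standard mode.
    """
    factor = SPEED_FACTORS[mode]
    return [t / factor for t in timestamps_ms]


# A ~1-second clip captured at ~30 fps becomes a ~0.5-second clip in FAST mode.
captured = [i * 33.3 for i in range(30)]
print(retime_frame_timestamps(captured, RecordingMode.FAST)[-1])  # ≈ 483 ms
```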
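Claims 9 and 10 composite the user video with the original video into a co-shot video and let the user adjust the volume of either audio track. The following sketch mixes two mono PCM tracks with per-track gains; the sample format, padding and clipping behaviour are assumptions for illustration and are not prescribed by the disclosure.

```python
def mix_audio(user_samples: list[float], original_samples: list[float],
              user_gain: float = 1.0, original_gain: float = 1.0) -> list[float]:
    """Mix two mono PCM tracks (samples in [-1.0, 1.0]) with per-track gains.

    `user_gain` and `original_gain` correspond to the volume adjustment of the
    user video audio and the original video audio in claim 10; the shorter
    track is padded with silence so the co-shot video keeps both tracks aligned.
    """
    length = max(len(user_samples), len(original_samples))

    def pad(samples: list[float]) -> list[float]:
        return list(samples) + [0.0] * (length - len(samples))

    u, o = pad(user_samples), pad(original_samples)
    # Sum the weighted samples and clamp to the valid range to avoid clipping.
    return [max(-1.0, min(1.0, user_gain * a + original_gain * b)) for a, b in zip(u, o)]


# Lower the original video to 30% volume while keeping the user video at full volume.
print(mix_audio([0.2, 0.4, 0.6], [0.5, 0.5], user_gain=1.0, original_gain=0.3))
```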
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2017755.6A GB2590545B (en) | 2018-10-19 | 2018-12-26 | Method and apparatus for capturing video, electronic device and computer-readable storage medium |
US16/980,213 US11895426B2 (en) | 2018-10-19 | 2018-12-26 | Method and apparatus for capturing video, electronic device and computer-readable storage medium |
JP2021510503A JP7139515B2 (ja) | 2018-10-19 | 2018-12-26 | Video capturing method, video capturing apparatus, electronic device, and computer-readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811223788.7A CN108989692A (zh) | 2018-10-19 | 2018-10-19 | Video shooting method and apparatus, electronic device, and computer-readable storage medium |
CN201811223788.7 | 2018-10-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020077856A1 true WO2020077856A1 (zh) | 2020-04-23 |
Family
ID=64544476
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/124066 WO2020077856A1 (zh) | 2018-10-19 | 2018-12-26 | Video shooting method and apparatus, electronic device, and computer-readable storage medium |
Country Status (5)
Country | Link |
---|---|
US (1) | US11895426B2 (zh) |
JP (1) | JP7139515B2 (zh) |
CN (1) | CN108989692A (zh) |
GB (1) | GB2590545B (zh) |
WO (1) | WO2020077856A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114845152A (zh) * | 2021-02-01 | 2022-08-02 | 腾讯科技(深圳)有限公司 | Display method and apparatus for a playback control, electronic device, and storage medium |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108769814B (zh) * | 2018-06-01 | 2022-02-01 | 腾讯科技(深圳)有限公司 | Video interaction method and apparatus, terminal, and readable storage medium |
CN109089059A (zh) * | 2018-10-19 | 2018-12-25 | 北京微播视界科技有限公司 | Video generation method and apparatus, electronic device, and computer storage medium |
CN108989692A (zh) | 2018-10-19 | 2018-12-11 | 北京微播视界科技有限公司 | Video shooting method and apparatus, electronic device, and computer-readable storage medium |
CN109862412B (zh) * | 2019-03-14 | 2021-08-13 | 广州酷狗计算机科技有限公司 | Method and apparatus for co-shooting video, and storage medium |
CN110336968A (zh) * | 2019-07-17 | 2019-10-15 | 广州酷狗计算机科技有限公司 | Video recording method and apparatus, terminal device, and storage medium |
CN112449210A (zh) * | 2019-08-28 | 2021-03-05 | 北京字节跳动网络技术有限公司 | Sound processing method and apparatus, electronic device, and computer-readable storage medium |
CN111629151B (zh) * | 2020-06-12 | 2023-01-24 | 北京字节跳动网络技术有限公司 | Video co-shooting method and apparatus, electronic device, and computer-readable medium |
CN114079822A (zh) * | 2020-08-21 | 2022-02-22 | 聚好看科技股份有限公司 | Display device |
CN112004045A (zh) * | 2020-08-26 | 2020-11-27 | Oppo(重庆)智能科技有限公司 | Video processing method, apparatus, and storage medium |
CN112263388A (zh) * | 2020-10-14 | 2021-01-26 | 深圳市乐升科技有限公司 | Ear-cleaning device control method and system |
CN113542844A (zh) * | 2021-07-28 | 2021-10-22 | 北京优酷科技有限公司 | Video data processing method and apparatus, and storage medium |
CN113672326B (zh) * | 2021-08-13 | 2024-05-28 | 康佳集团股份有限公司 | Application window screen recording method and apparatus, terminal device, and storage medium |
CN115720292B (zh) * | 2021-08-23 | 2024-08-23 | 北京字跳网络技术有限公司 | Video recording method, device, storage medium, and program product |
CN113783997B (zh) * | 2021-09-13 | 2022-08-23 | 北京字跳网络技术有限公司 | Video publishing method and apparatus, electronic device, and storage medium |
CN114095793A (zh) * | 2021-11-12 | 2022-02-25 | 广州博冠信息科技有限公司 | Video playback method and apparatus, computer device, and storage medium |
CN114546229B (zh) * | 2022-01-14 | 2023-09-22 | 阿里巴巴(中国)有限公司 | Information processing method, screenshot method, and electronic device |
CN116546130A (zh) * | 2022-01-26 | 2023-08-04 | 广州三星通信技术研究有限公司 | Multimedia data control method and apparatus, terminal, and storage medium |
CN114666648B (zh) * | 2022-03-30 | 2023-04-28 | 阿里巴巴(中国)有限公司 | Video playback method and electronic device |
CN115082301B (zh) * | 2022-08-22 | 2022-12-02 | 中关村科学城城市大脑股份有限公司 | Customized video generation method and apparatus, device, and computer-readable medium |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7269334B2 (en) * | 2001-07-27 | 2007-09-11 | Thomson Licensing | Recording and playing back multiple programs |
EP2243078A4 (en) * | 2008-01-07 | 2011-05-11 | Smart Technologies Ulc | METHOD FOR STARTING A SELECTED APPLICATION IN A COMPUTER SYSTEM WITH MULTIPLE MONITORS AND COMPUTER SYSTEM WITH MULTIPLE MONITORS FOR USING THIS METHOD |
US8434006B2 (en) * | 2009-07-31 | 2013-04-30 | Echostar Technologies L.L.C. | Systems and methods for adjusting volume of combined audio channels |
JP2011238125A (ja) | 2010-05-12 | 2011-11-24 | Sony Corp | Image processing apparatus and method, and program |
US8866943B2 (en) * | 2012-03-09 | 2014-10-21 | Apple Inc. | Video camera providing a composite video sequence |
KR102182398B1 (ko) * | 2013-07-10 | 2020-11-24 | 엘지전자 주식회사 | Electronic device and control method thereof |
JP6210220B2 (ja) | 2014-02-28 | 2017-10-11 | ブラザー工業株式会社 | Karaoke apparatus |
CN104394481B (zh) * | 2014-09-30 | 2016-09-21 | 腾讯科技(深圳)有限公司 | Video playback method and apparatus |
CN104994314B (zh) * | 2015-08-10 | 2019-04-09 | 优酷网络技术(北京)有限公司 | Method and system for controlling picture-in-picture video on a mobile terminal through gestures |
KR20170029329A (ko) | 2015-09-07 | 2017-03-15 | 엘지전자 주식회사 | Mobile terminal and control method thereof |
US9349414B1 (en) * | 2015-09-18 | 2016-05-24 | Odile Aimee Furment | System and method for simultaneous capture of two video streams |
JP6478162B2 (ja) | 2016-02-29 | 2019-03-06 | 株式会社Hearr | Mobile terminal device and content distribution system |
JP6267381B1 (ja) | 2017-02-28 | 2018-01-24 | 株式会社東宣エイディ | Signboard installation data generation and transmission program, information processing communication terminal that executes the signboard installation data generation and transmission program, and information processing communication server |
FR3066671B1 (fr) * | 2017-05-18 | 2020-07-24 | Darmon Yves | Method for inlaying images or video within another video sequence |
2018
- 2018-10-19 CN CN201811223788.7A patent/CN108989692A/zh active Pending
- 2018-12-26 JP JP2021510503A patent/JP7139515B2/ja active Active
- 2018-12-26 US US16/980,213 patent/US11895426B2/en active Active
- 2018-12-26 WO PCT/CN2018/124066 patent/WO2020077856A1/zh active Application Filing
- 2018-12-26 GB GB2017755.6A patent/GB2590545B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20030041034A (ko) * | 2001-11-19 | 2003-05-23 | 쓰리에스휴먼 주식회사 | Apparatus for posture correction exercise through motion comparison, motion comparison method, and recording medium storing the motion comparison method |
CN104967902A (zh) * | 2014-09-17 | 2015-10-07 | 腾讯科技(北京)有限公司 | Video sharing method, apparatus, and system |
CN105898133A (zh) * | 2015-08-19 | 2016-08-24 | 乐视网信息技术(北京)股份有限公司 | Video shooting method and apparatus |
CN107920274A (zh) * | 2017-10-27 | 2018-04-17 | 优酷网络技术(北京)有限公司 | Video processing method, client, and server |
CN108566519A (zh) * | 2018-04-28 | 2018-09-21 | 腾讯科技(深圳)有限公司 | Video production method and apparatus, terminal, and storage medium |
CN108989692A (zh) * | 2018-10-19 | 2018-12-11 | 北京微播视界科技有限公司 | Video shooting method and apparatus, electronic device, and computer-readable storage medium |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114845152A (zh) * | 2021-02-01 | 2022-08-02 | 腾讯科技(深圳)有限公司 | Display method and apparatus for a playback control, electronic device, and storage medium |
CN114845152B (zh) * | 2021-02-01 | 2023-06-30 | 腾讯科技(深圳)有限公司 | Display method and apparatus for a playback control, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP7139515B2 (ja) | 2022-09-20 |
US20210014431A1 (en) | 2021-01-14 |
JP2021520764A (ja) | 2021-08-19 |
US11895426B2 (en) | 2024-02-06 |
GB2590545B (en) | 2023-02-22 |
GB202017755D0 (en) | 2020-12-23 |
CN108989692A (zh) | 2018-12-11 |
GB2590545A (en) | 2021-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020077856A1 (zh) | Video shooting method and apparatus, electronic device, and computer-readable storage medium | |
WO2020077855A1 (zh) | Video shooting method and apparatus, electronic device, and computer-readable storage medium | |
WO2020077854A1 (zh) | Video generation method and apparatus, electronic device, and computer storage medium | |
US11943486B2 (en) | Live video broadcast method, live broadcast device and storage medium | |
WO2020029526A1 (zh) | Video special-effect adding method and apparatus, terminal device, and storage medium | |
US11670339B2 (en) | Video acquisition method and device, terminal and medium | |
WO2020062684A1 (zh) | Video processing method and apparatus, terminal, and medium | |
US11037600B2 (en) | Video processing method and apparatus, terminal and medium | |
WO2021218518A1 (zh) | Video processing method and apparatus, device, and medium | |
WO2022253141A1 (zh) | Video sharing method and apparatus, device, and medium | |
WO2023104102A1 (zh) | Live-streaming comment display method and apparatus, device, program product, and medium | |
WO2022042035A1 (zh) | Video production method and apparatus, device, and storage medium | |
WO2020220773A1 (zh) | Method and apparatus for displaying picture preview information, electronic device, and computer-readable storage medium | |
US11076121B2 (en) | Apparatus and associated methods for video presentation | |
WO2024037491A1 (zh) | Media content processing method and apparatus, device, and storage medium | |
WO2023273692A1 (zh) | Information reply method and apparatus, electronic device, computer storage medium, and product | |
JP2024529251A (ja) | Media file processing method, apparatus, device, readable storage medium, and product | |
WO2023098011A1 (zh) | Video playback method and electronic device | |
US20140282000A1 (en) | Animated character conversation generator | |
WO2024104333A1 (zh) | Method and apparatus for processing a broadcast picture, electronic device, and storage medium | |
CN109636917B (zh) | Three-dimensional model generation method and apparatus, and hardware apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18937347 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021510503 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 202017755 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20181226 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.08.2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18937347 Country of ref document: EP Kind code of ref document: A1 |