WO2020207080A1 - Video shooting method and apparatus, electronic device, and storage medium - Google Patents

Video shooting method and apparatus, electronic device, and storage medium

Info

Publication number
WO2020207080A1
WO2020207080A1 (PCT/CN2020/071136; CN2020071136W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
action
interactive
user
prompt
Prior art date
Application number
PCT/CN2020/071136
Other languages
English (en)
French (fr)
Inventor
王俊豪
Original Assignee
北京字节跳动网络技术有限公司 (Beijing Bytedance Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 (Beijing Bytedance Network Technology Co., Ltd.)
Publication of WO2020207080A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N 21/44213 Monitoring of end-user related data
    • H04N 21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/488 Data services, e.g. news ticker
    • H04N 21/4884 Data services, e.g. news ticker for displaying subtitles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8455 Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream

Definitions

  • the embodiments of the present disclosure relate to video processing technology, for example, to a video shooting method, device, electronic device, and storage medium.
  • In the related art, when a user wants to shoot a co-production video based on a video on the video interaction platform, the video shooting window and the video playback window are usually displayed on the terminal screen at the same time, for example, with the video playback window placed on the right side and the video shooting window placed on the left side. The original video is played while the user video is being shot, the user video is displayed in the video shooting window, and the user video and the original video are then synthesized to obtain a co-production video.
  • The disadvantage of the related art is that, when shooting a co-production video, most ordinary co-production users cannot find a shooting approach that works well through a simple single take and often need multiple takes. This makes co-production video shooting inefficient and time-consuming, the interactive effect of the co-production video cannot be guaranteed, and the user experience is poor.
  • the present disclosure provides a video shooting method, device, electronic equipment, and storage medium, so as to optimize the co-production video shooting scheme in related technologies, guide co-production users, and improve the efficiency of co-production video shooting.
  • an embodiment of the present disclosure provides a video shooting method, including:
  • acquiring a basic pairing video matching a video co-production request; during shooting of a user pairing video, acquiring at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video; and combining the basic pairing video with the user pairing video that has been shot to form a co-production video.
  • an embodiment of the present disclosure further provides a video shooting device, including:
  • a video acquisition module configured to acquire a basic pairing video matching a video co-production request;
  • a user prompt module configured to acquire, during shooting of a user pairing video, at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video; and
  • a video combination module configured to combine the basic pairing video with the user pairing video that has been shot to form a co-production video.
  • an embodiment of the present disclosure further provides an electronic device, including:
  • one or more processors; and
  • a storage device configured to store one or more programs,
  • where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video shooting method according to the embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video shooting method according to the embodiments of the present disclosure.
  • FIG. 1 is a flowchart of a video shooting method provided by an embodiment of the disclosure
  • FIG. 2 is a flowchart of another video shooting method provided by an embodiment of the disclosure.
  • FIG. 3 is a flowchart of still another video shooting method provided by an embodiment of the disclosure.
  • FIG. 4 is a schematic structural diagram of a video shooting device provided by an embodiment of the disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
  • FIG. 1 is a flowchart of a video shooting method provided by an embodiment of the disclosure. This embodiment is applicable to the situation of shooting co-production video.
  • the method can be executed by a video shooting device, which can be implemented in software and/or hardware and can be configured in an electronic device, for example, a terminal device or a server. As shown in FIG. 1, the method may include the following steps:
  • Step 110 Obtain a basic paired video matching the video co-production request.
  • Other users of the video interaction platform use the shooting equipment on their mobile terminals to shoot videos and upload them to the video interaction platform.
  • The user can play these videos to watch them. When watching a video of interest, the user can also send a video co-production request, requesting that a user pairing video be shot based on the video played in the video playback interface, and that the basic pairing video be combined with the user pairing video that has been shot to form a co-production video.
  • Optionally, a co-production control is set on the video playback interface. The user can send a video co-production request by clicking the co-production control on the video playback interface. The video played in the video playback interface is the basic pairing video matching the video co-production request.
  • Optionally, the video co-production request carries identification information of the basic pairing video. After the video co-production request sent by the user is obtained, the basic pairing video matching the video co-production request is obtained according to the identification information in the video co-production request.
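To make the request flow above concrete, the following is a minimal sketch of resolving a video co-production request to its basic pairing video via the identification information it carries. The request structure, the names, and the in-memory `VIDEO_CATALOG` are illustrative assumptions, not part of the disclosure; a real platform would query its video storage service instead.

```python
# Minimal sketch of resolving a video co-production request to its basic pairing
# video via the identification information carried in the request.
from dataclasses import dataclass

@dataclass
class CoProductionRequest:
    video_id: str   # identification information of the basic pairing video
    user_id: str

VIDEO_CATALOG = {          # hypothetical mapping: video id -> stored video location
    "vid_001": "/videos/vid_001.mp4",
    "vid_002": "/videos/vid_002.mp4",
}

def get_basic_pairing_video(request: CoProductionRequest) -> str:
    """Return the basic pairing video matching the video co-production request."""
    try:
        return VIDEO_CATALOG[request.video_id]
    except KeyError:
        raise ValueError(f"no basic pairing video found for id {request.video_id!r}")

# Example: get_basic_pairing_video(CoProductionRequest(video_id="vid_001", user_id="u42"))
```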
  • Step 120 In the process of shooting the user pairing video, obtain at least one item of interactive prompt information that matches the video content of the basic pairing video for user prompting.
  • The interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video.
  • After the basic pairing video matching the video co-production request is obtained, the video shooting interface is displayed.
  • Optionally, a shooting control is set on the video shooting interface. The user can click the shooting control to trigger the shooting function and start shooting the user pairing video.
  • the interactive prompt information is prompt information generated in advance based on the video content of the basic pairing video.
  • the interactive prompt information includes: interactive prompt time and interactive prompt action.
  • the interactive prompt action matches the action shown in the video image of the basic pairing video at the interactive prompt time.
  • the interactive prompt action is the interactive action best suited to the action shown in the video image at the interactive prompt time, that is, the optimal interactive action. For example, if the action shown in the video image of the basic pairing video at the interactive prompt time is a left-hand finger-heart gesture ("left hand heart"), the matching interactive prompt action is a right-hand finger-heart gesture ("right hand heart").
  • In an embodiment, during shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in target interactive prompt information, the user is prompted with the interactive prompt action in the target interactive prompt information according to a preset interactive prompt mode, so as to guide the user to perform an interactive action matching the action shown in the video image.
  • In one specific example, a video shooting window and a video playback window are set on the video shooting interface. For example, the video shooting window and the video playback window have the same size, the video playback window is located on the right side of the video shooting interface, and the video shooting window is located on the left side. While the user pairing video being shot is displayed in real time in the video shooting window, the basic pairing video is played in the video playback window. The user can shoot the user pairing video according to the user prompts and the basic pairing video played synchronously in the video playback window.
  • In another specific example, only the video shooting window is set on the video shooting interface, and the user shoots the user pairing video according to the user prompts.
  • Step 130 Combine the basic pairing video and the user pairing video that has been shot to form a co-production video.
  • After the shooting of the user video is completed, the basic pairing video and the user pairing video that has been shot are combined to form a co-production video. The co-production video includes both the content of the basic pairing video and the content of the completed user pairing video.
  • Optionally, a video includes two parts: video frame images and audio information. The video frame images of the basic pairing video are combined with the video frame images of the completed user pairing video, the corresponding audio information of the two videos is combined, and the combined video frame images and combined audio information are then synthesized into the co-production video.
  • Optionally, the size of the video frame images of the basic pairing video is equal to the size of the video frame images of the completed user pairing video. Combining the video frame images of the two videos means sequentially combining each frame image of the basic pairing video with the corresponding frame image of the completed user pairing video into a single frame. In the combined frame, the video frame image of the basic pairing video is located to the right of the video frame image of the completed user pairing video.
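The side-by-side frame combination described above can be sketched as follows, assuming both videos have frames of equal size. OpenCV and NumPy are illustrative choices rather than anything mandated by the disclosure, and the audio combination and muxing step is omitted for brevity.

```python
# Minimal sketch of combining frames side by side, assuming equal frame sizes.
import cv2
import numpy as np

def combine_frames(user_frame: np.ndarray, basic_frame: np.ndarray) -> np.ndarray:
    """Place the basic pairing video frame to the right of the user pairing video frame."""
    assert user_frame.shape == basic_frame.shape, "frames must have equal size"
    return np.hstack((user_frame, basic_frame))

def combine_videos(user_path: str, basic_path: str, out_path: str, fps: float = 30.0) -> None:
    user_cap, basic_cap = cv2.VideoCapture(user_path), cv2.VideoCapture(basic_path)
    writer = None
    while True:
        ok_u, user_frame = user_cap.read()
        ok_b, basic_frame = basic_cap.read()
        if not (ok_u and ok_b):          # stop at the end of the shorter video
            break
        frame = combine_frames(user_frame, basic_frame)
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        writer.write(frame)
    for resource in (user_cap, basic_cap, writer):
        if resource is not None:
            resource.release()
```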
  • The technical solution of this embodiment solves the problem in the related art that most ordinary co-production users cannot find an effective shooting approach through a simple single take, making co-production video shooting inefficient and time-consuming. Based on the interactive prompt information, the co-production user can be prompted to perform interactive actions matching the video, so as to guide the co-production user to shoot the co-production video in a better way (for example, with a more attractive or more interesting shooting effect) and improve shooting efficiency.
  • FIG. 2 is a flowchart of another video shooting method provided by an embodiment of the disclosure. This embodiment can be combined with multiple alternatives in one or more of the above embodiments.
  • In this embodiment, the interactive prompt information may include an interactive prompt time and an interactive prompt action, where the interactive prompt action matches the action shown in the video image of the basic pairing video at the interactive prompt time.
  • Acquiring at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user during shooting of the user pairing video may include: if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in target interactive prompt information, prompting the user with the interactive prompt action in the target interactive prompt information according to a preset interactive prompt mode.
  • Before the basic pairing video matching the video co-production request is acquired, the method may further include: performing action recognition on the basic pairing video through an action recognition model to obtain at least one target action; determining an interactive prompt action matching the at least one target action; and generating, according to the time information of the at least one target action and the interactive prompt action, interactive prompt information matching the at least one target action.
  • the method may include the following steps:
  • Step 201 Perform action recognition on the basic paired video through the action recognition model to obtain at least one target action.
  • an action recognition model is created in advance, and the action recognition model is trained based on preset action recognition sample data, so that the action recognition model can recognize the body parts and human actions in the video.
  • the basic paired video is input to the action recognition model to obtain at least one human action corresponding to the basic paired video, that is, the target action.
  • the action recognition model is a neural network model used to recognize human body parts and human actions in the video.
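A minimal sketch of Step 201 is given below: the basic pairing video is scanned frame by frame and every recognized human action is recorded together with its timestamp. The `recognize_action` callable stands in for the trained action recognition model described above, so any per-frame human action classifier can be plugged in; OpenCV is an illustrative choice for decoding the video.

```python
# Minimal sketch of Step 201: collect target actions and their timestamps from the
# basic pairing video using a pluggable action recognition function.
import cv2

def extract_target_actions(video_path: str, recognize_action):
    """recognize_action(frame) should return an action label such as 'left hand heart', or None."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    target_actions = []          # list of (timestamp_in_seconds, action_label)
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        label = recognize_action(frame)
        if label is not None:
            target_actions.append((frame_index / fps, label))
        frame_index += 1
    cap.release()
    return target_actions
```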
  • Step 202 Determine an interactive prompt action matching at least one target action.
  • Optionally, the interactive prompt action matching the at least one target action is determined according to the author remark information of the basic pairing video.
  • When producing the basic pairing video, the author of the basic pairing video can use the author remark information to annotate interaction modes suitable for the video. In an embodiment, the author remark information may include human actions in the basic pairing video and the interactive action corresponding to each human action.
  • The target action is queried in the author remark information. After a human action consistent with the target action is found, the interactive action corresponding to that human action is acquired and determined as the interactive prompt action matching the target action.
  • Optionally, the interactive prompt action matching the at least one target action is determined according to interactive recommendation information of the basic pairing video. Other users of the video interaction platform, for example the platform operator or high-quality video creators, can recommend interaction modes suitable for the basic pairing video through the interactive recommendation information.
  • In an embodiment, the interactive recommendation information may include human actions in the basic pairing video and the interactive action corresponding to each human action.
  • The target action is queried in the interactive recommendation information. After a human action consistent with the target action is found, the interactive action corresponding to that human action is acquired and determined as the interactive prompt action matching the target action.
  • Optionally, the interactive prompt action matching the at least one target action is determined according to historical co-production videos on the video interaction platform.
  • A set number of historical co-production videos on the video interaction platform are acquired and recognized to obtain the various human actions included in the historical co-production videos. Optionally, the set number of historical co-production videos includes historical co-production videos obtained from the historical information of users associated with the author of the basic pairing video. Statistics are performed on the user data associated with each human action to determine the interactive action corresponding to that human action; for example, the human action that has been used to interact with a given human action the largest number of times is determined as its corresponding interactive action.
  • In this way, the various human actions appearing in co-production videos and the interactive action corresponding to each of them are obtained from the historical co-production videos on the video interaction platform. The target action is queried among these human actions; after a human action consistent with the target action is found, the interactive action corresponding to that human action is acquired and determined as the interactive prompt action matching the target action.
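The historical co-production option can be sketched as a simple counting procedure. Each historical record is assumed here to be a (human action, interactive action) pair observed in a past co-production video; that data layout is an assumption made for illustration, not something given by the disclosure.

```python
# Minimal sketch: derive, from historical co-production videos, the interactive
# action used most often with each human action, then query the target action.
from collections import Counter, defaultdict

def build_interaction_map(history):
    """history: iterable of (human_action, interactive_action) pairs from past co-productions."""
    counts = defaultdict(Counter)
    for human_action, interactive_action in history:
        counts[human_action][interactive_action] += 1
    # Keep, for every human action, the interactive action with the highest count.
    return {action: counter.most_common(1)[0][0] for action, counter in counts.items()}

def match_prompt_action(target_action, interaction_map):
    """Query the target action and return its matching interactive prompt action, if any."""
    return interaction_map.get(target_action)

# Example with toy data:
history = [("left hand heart", "right hand heart"),
           ("left hand heart", "right hand heart"),
           ("wave", "wave back")]
print(match_prompt_action("left hand heart", build_interaction_map(history)))  # right hand heart
```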
  • Step 203 Generate interactive prompt information matching the at least one target action based on the time information of the at least one target action and the interactive prompt action.
  • the time information of the target action is the time when the target action appears in the basic paired video.
  • For example, if the duration of the basic pairing video is 30 seconds and the target action appears in the basic pairing video at the 15th second, the 15th second is determined as the interactive prompt time. The interactive prompt time and the interactive prompt action together form the interactive prompt information matching the target action. In this way, interactive prompt information matching each target action of the basic pairing video is generated.
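Putting the previous sketches together, a minimal sketch of generating the interactive prompt information from the recognized target actions might look like the following. The `InteractivePrompt` record is an illustrative structure, not one specified by the disclosure.

```python
# Minimal sketch of Step 203: combine each target action's time information with
# its matched interactive prompt action into interactive prompt information.
from dataclasses import dataclass

@dataclass
class InteractivePrompt:
    prompt_time: float   # interactive prompt time, in seconds into the basic pairing video
    prompt_action: str   # interactive prompt action to suggest to the user

def generate_prompts(target_actions, interaction_map):
    """target_actions: (timestamp, action_label) pairs; interaction_map: action -> prompt action."""
    prompts = []
    for timestamp, action in target_actions:
        matched = interaction_map.get(action)
        if matched is not None:
            prompts.append(InteractivePrompt(prompt_time=timestamp, prompt_action=matched))
    return prompts

# Example: generate_prompts([(15.0, "left hand heart")], {"left hand heart": "right hand heart"})
# -> [InteractivePrompt(prompt_time=15.0, prompt_action='right hand heart')]
```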
  • Step 204 Obtain a basic paired video matching the video co-production request.
  • Step 205 If it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, the user prompts the interactive prompt action in the target interactive prompt information according to a preset interactive prompt manner.
  • During shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, the user is prompted with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode.
  • the types of preset interactive prompting methods may include at least one of the following: subtitle prompts, sticker prompts, and voice prompts.
  • In one specific example, the preset interactive prompt mode is a subtitle prompt. During shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, a prompt subtitle matching the interactive prompt action in the target interactive prompt information is displayed in the video shooting window, so that the user performs the interactive prompt action according to the prompt subtitle.
  • For example, the prompt subtitle "right hand heart" is displayed in the video shooting window, so that the user performs the interactive prompt action of making a finger-heart gesture with the right hand according to the prompt subtitle.
  • In another specific example, the preset interactive prompt mode is a sticker prompt. During shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, a prompt sticker matching the interactive prompt action in the target interactive prompt information is displayed in the video shooting window, so that the user performs the interactive prompt action according to the prompt sticker.
  • The prompt sticker may be a static sticker, such as a picture, or a dynamic sticker, such as an animation. For example, a prompt sticker matching the interactive prompt action "right hand heart" is displayed in the video shooting window, so that the user performs that interactive prompt action according to the prompt sticker.
  • In another specific example, the preset interactive prompt mode is a voice prompt. During shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, the terminal device is controlled to play a prompt voice matching the interactive prompt action in the target interactive prompt information, so that the user performs the interactive prompt action according to the prompt voice.
  • For example, the terminal device is controlled to play the prompt voice "right hand heart", so that the user performs the interactive prompt action of making a finger-heart gesture with the right hand according to the prompt voice.
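A minimal sketch of the prompting logic in Step 205 is shown below: while the user pairing video is being shot, the current shooting time is compared against each interactive prompt time and, on a match, the prompt is issued in the preset mode. The `prompts` layout, the matching tolerance, and the display/play helpers are illustrative placeholders for the terminal's real UI and audio APIs.

```python
# Minimal sketch of Step 205: time matching plus dispatch to the preset prompt mode.
TIME_TOLERANCE = 0.2  # seconds; illustrative matching window, not from the disclosure

def show_subtitle(action: str) -> None: print(f"[subtitle] {action}")
def show_sticker(action: str) -> None:  print(f"[sticker] {action}")
def play_voice(action: str) -> None:    print(f"[voice] {action}")

DISPATCH = {"subtitle": show_subtitle, "sticker": show_sticker, "voice": play_voice}

def maybe_prompt(current_time: float, prompts, mode: str, already_shown: set) -> None:
    """prompts: list of (interactive prompt time in seconds, interactive prompt action) pairs."""
    for index, (prompt_time, prompt_action) in enumerate(prompts):
        if index in already_shown:
            continue
        if abs(current_time - prompt_time) <= TIME_TOLERANCE:
            DISPATCH[mode](prompt_action)  # subtitle, sticker, or voice prompt
            already_shown.add(index)

# Example: maybe_prompt(15.0, [(15.0, "right hand heart")], "subtitle", set())
```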
  • Step 206 Combine the basic paired video with the user paired video that has been shot to form a co-production video.
  • The technical solution of this embodiment performs action recognition on the basic pairing video through an action recognition model to obtain at least one target action, determines an interactive prompt action matching the at least one target action, and generates, according to the time information of the at least one target action and the interactive prompt action, interactive prompt information matching the at least one target action. When it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, the user is prompted with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode.
  • In this way, the co-production user can be guided during shooting of the user pairing video to perform interactive actions matching the basic pairing video, which enhances the interactive effect of the co-production video, improves the efficiency of co-production video shooting, and improves the user experience.
  • FIG. 3 is a flowchart of still another video shooting method provided by an embodiment of the disclosure. This embodiment can be combined with multiple alternatives in one or more of the above embodiments.
  • In this embodiment, before the user is prompted with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode, the method may further include: determining the preset interactive prompt mode matching the user according to the user's historical co-production information; or determining the preset interactive prompt mode matching the user according to the user's co-production setting information.
  • the method may include the following steps:
  • Step 301 Obtain a basic paired video that matches the video co-production request.
  • Step 302 Determine a preset interactive prompt mode matching the user according to the user's historical co-production information.
  • the preset interactive prompting methods include subtitle prompts, sticker prompts, and voice prompts.
  • According to the user's historical co-production information, the number of times the user has used each interactive prompt mode is determined. The usage counts are sorted from high to low, and the interactive prompt mode ranked first is taken as the preset interactive prompt mode matching the user. In this way, the interactive prompt mode used most often is determined as the preset interactive prompt mode matching the user.
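A minimal sketch of this usage-count selection follows, assuming the historical co-production information is available as a simple list of previously used prompt-mode names; that format is an assumption made for illustration.

```python
# Minimal sketch of choosing the preset interactive prompt mode by usage count.
from collections import Counter

def preferred_prompt_mode(history, default="subtitle"):
    """Return the interactive prompt mode the user has used most often."""
    if not history:
        return default
    return Counter(history).most_common(1)[0][0]

print(preferred_prompt_mode(["subtitle", "voice", "subtitle", "sticker"]))  # subtitle
```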
  • Alternatively, the preset interactive prompt mode matching the user can be determined according to the user's co-production setting information.
  • The user can specify a preferred interactive prompt mode in the co-production setting information according to the user's own preference. The interactive prompt mode specified by the user is then determined as the preset interactive prompt mode matching the user.
  • Step 303 If it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, the user prompts the interactive prompt action in the target interactive prompt information according to a preset interactive prompt manner.
  • Step 304 Combine the basic pairing video and the user pairing video that has been shot to form a co-production video.
  • In the technical solution of this embodiment, before the user is prompted with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode, the preset interactive prompt mode matching the user is determined according to the user's historical co-production information. The user's preferred interactive prompt mode is thus determined from the historical co-production information, and the interactive prompt action in the target interactive prompt information is presented to the user in that preferred mode, which improves the user experience.
  • FIG. 4 is a schematic structural diagram of a video shooting device provided by an embodiment of the disclosure. This embodiment is applicable to the situation of shooting co-production video.
  • the device can be implemented in software and/or hardware, and the device can be configured in an electronic device. As shown in Figure 4, the device may include: a video acquisition module 401, a user prompt module 402, and a video combination module 403.
  • the video acquisition module 401 is configured to acquire the basic pairing video matching the video co-production request;
  • the user prompt module 402 is configured to acquire, during shooting of the user pairing video, at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video;
  • the video combination module 403 is configured to combine the basic pairing video with the completed user pairing video to form a co-production video.
  • Optionally, the interactive prompt information may include an interactive prompt time and an interactive prompt action, where the interactive prompt action matches the action shown in the video image of the basic pairing video at the interactive prompt time.
  • The user prompt module 402 may include an action prompt unit configured to, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, prompt the user with the interactive prompt action in the target interactive prompt information according to a preset interactive prompt mode.
  • Optionally, the device may further include: an action recognition module configured to perform action recognition on the basic pairing video through the action recognition model to obtain at least one target action; an action determining module configured to determine an interactive prompt action matching the at least one target action; and a prompt information generating module configured to generate, according to the time information of the at least one target action and the interactive prompt action, interactive prompt information matching the at least one target action.
  • Optionally, the action determining module may include: a first action determining unit configured to determine the interactive prompt action matching the at least one target action according to the author remark information of the basic pairing video; or a second action determining unit configured to determine the interactive prompt action matching the at least one target action according to the interactive recommendation information of the basic pairing video; or a third action determining unit configured to determine the interactive prompt action matching the at least one target action according to historical co-production videos on the video interaction platform.
  • the preset interactive prompt mode type may include at least one of the following: subtitle prompt, sticker prompt, and voice prompt.
  • Optionally, the device may further include: a first mode determining unit configured to determine the preset interactive prompt mode matching the user according to the user's historical co-production information; or a second mode determining unit configured to determine the preset interactive prompt mode matching the user according to the user's co-production setting information.
  • Optionally, the preset interactive prompt mode is a subtitle prompt, and the action prompt unit may include a subtitle prompt subunit configured to display, in the video shooting window, a prompt subtitle matching the interactive prompt action in the target interactive prompt information, so that the user performs the interactive prompt action according to the prompt subtitle.
  • the video shooting device provided in the embodiment of the present disclosure can execute the video shooting method provided in the embodiment of the present disclosure, and has functional modules and effects corresponding to the execution method.
  • FIG. 5 shows a schematic structural diagram of an electronic device (such as a terminal device or a server) 500 suitable for implementing the embodiments of the present disclosure.
  • the terminal devices in the embodiments of the present disclosure may include mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Portable Media Player, PMP), mobile terminals such as in-vehicle terminals (for example, in-vehicle navigation terminals), and fixed terminals such as digital televisions (Television, TV), desktop computers, and the like.
  • the electronic device shown in FIG. 5 is only an example.
  • As shown in FIG. 5, the electronic device 500 may include a processing device (such as a central processing unit or a graphics processor) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500.
  • the processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
  • An input/output (Input/Output, I/O) interface 505 is also connected to the bus 504.
  • the following devices can be connected to the I/O interface 505: including input devices 506 such as touch screens, touch pads, keyboards, mice, cameras, microphones, accelerometers, and/or gyroscopes; including, for example, liquid crystal displays (Liquid Crystal Display, LCD), output devices 507 such as speakers and/or vibrators; storage devices 508 including, for example, magnetic tapes and/or hard disks; and communication devices 509.
  • the communication device 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 5 shows the electronic device 500 having various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 509, or installed from the storage device 508, or installed from the ROM 502.
  • When the computer program is executed by the processing device 501, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the aforementioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • The computer-readable storage medium may include: an electrical connection with one or more wires, a portable computer disk, a hard disk, RAM, ROM, erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be a variety of tangible media containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including electromagnetic signals, optical signals, or any suitable combination of the above.
  • The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted using a variety of suitable media, including: wire, optical cable, radio frequency (RF), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a basic pairing video matching a video co-production request; during shooting of a user pairing video, acquire at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video; and combine the basic pairing video with the completed user pairing video to form a co-production video.
  • the computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
  • The above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network-including Local Area Network (LAN) or Wide Area Network (WAN)-or it can be connected to an external computer (for example, use an Internet service provider to connect via the Internet).
  • Each block in the flowchart or block diagram can represent a module, a program segment, or a part of code, which contains one or more executable instructions for realizing the specified logical functions.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the modules, units, and subunits involved in the described embodiments of the present disclosure may be implemented in software or hardware.
  • the names of modules, units, and sub-units do not constitute a limitation on the module or the unit itself under certain circumstances.
  • For example, the video acquisition module can also be described as "a module for acquiring a basic pairing video matching a video co-production request", the first action determining unit can also be described as "a unit for determining an interactive prompt action matching the target action according to the author remark information of the basic pairing video", and the subtitle prompt subunit can also be described as "a subunit for displaying, in the video shooting window, a prompt subtitle matching the interactive prompt action in the target interactive prompt information, so that the user performs the interactive prompt action according to the prompt subtitle".

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Disclosed herein are a video shooting method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a basic pairing video matching a video co-production request; during shooting of a user pairing video, acquiring at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video; and combining the basic pairing video with the user pairing video that has been shot to form a co-production video.

Description

Video shooting method and apparatus, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 201910289648.8, filed with the Chinese Patent Office on April 11, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present disclosure relate to video processing technology, for example, to a video shooting method and apparatus, an electronic device, and a storage medium.
Background
With the development of the mobile Internet and the popularization of mobile terminals, a group of creators of high-quality user-generated content has gradually risen in the video industry. Content creators use the shooting equipment on their mobile terminals to shoot videos they like and upload them to a video interaction platform to share with other users. Other users can play and watch the videos; when they see a video of interest, they can like, comment on, and/or forward and share it, and can also interact with videos on the platform by shooting co-production videos.
In the related art, when a user wants to shoot a co-production video based on a video on the video interaction platform, the video shooting window and the video playback window are usually displayed on the terminal screen at the same time, for example, with the video playback window placed on the right side and the video shooting window placed on the left side. The original video is played while the user video is being shot, the user video is displayed in the video shooting window, and the user video and the original video are then synthesized to obtain a co-production video.
The disadvantage of the related art is that, when shooting a co-production video, most ordinary co-production users cannot find a shooting approach that works well through a simple single take and often need multiple takes. This makes co-production video shooting inefficient and time-consuming, the interactive effect of the co-production video cannot be guaranteed, and the user experience is poor.
Summary
The present disclosure provides a video shooting method and apparatus, an electronic device, and a storage medium, so as to optimize the co-production video shooting scheme in the related art, guide co-production users, and improve the efficiency of co-production video shooting.
In an embodiment, an embodiment of the present disclosure provides a video shooting method, including:
acquiring a basic pairing video matching a video co-production request;
during shooting of a user pairing video, acquiring at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video; and
combining the basic pairing video with the user pairing video that has been shot to form a co-production video.
In an embodiment, an embodiment of the present disclosure further provides a video shooting apparatus, including:
a video acquisition module configured to acquire a basic pairing video matching a video co-production request;
a user prompt module configured to acquire, during shooting of a user pairing video, at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video; and
a video combination module configured to combine the basic pairing video with the user pairing video that has been shot to form a co-production video.
In an embodiment, an embodiment of the present disclosure further provides an electronic device, including:
one or more processors; and
a storage device configured to store one or more programs,
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video shooting method according to the embodiments of the present disclosure.
In an embodiment, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video shooting method according to the embodiments of the present disclosure.
Brief Description of the Drawings
FIG. 1 is a flowchart of a video shooting method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of another video shooting method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of yet another video shooting method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a video shooting apparatus provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the present disclosure and do not limit it. For ease of description, only the parts related to the present disclosure, rather than the entire structure, are shown in the drawings.
FIG. 1 is a flowchart of a video shooting method provided by an embodiment of the present disclosure. This embodiment is applicable to the case of shooting a co-production video. The method can be executed by a video shooting apparatus, which can be implemented in software and/or hardware and can be configured in an electronic device, for example, a terminal device or a server. As shown in FIG. 1, the method may include the following steps.
Step 110: acquire a basic pairing video matching a video co-production request.
Other users of the video interaction platform use the shooting equipment on their mobile terminals to shoot videos and upload them to the video interaction platform. The user can play and watch these videos; when watching a video of interest, the user can also send a video co-production request, requesting that a user pairing video be shot based on the video played in the video playback interface and that the basic pairing video be combined with the user pairing video that has been shot to form a co-production video.
Optionally, a co-production control is set on the video playback interface. The user can send a video co-production request by clicking the co-production control on the video playback interface. The video played in the video playback interface is the basic pairing video matching the video co-production request.
Optionally, the video co-production request carries identification information of the basic pairing video. After the video co-production request sent by the user is obtained, the basic pairing video matching the video co-production request is obtained according to the identification information in the video co-production request.
Step 120: during shooting of the user pairing video, acquire at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video.
After the basic pairing video matching the video co-production request is obtained, the video shooting interface is displayed. Optionally, a shooting control is set on the video shooting interface. The user can click the shooting control to trigger the shooting function and start shooting the user pairing video.
The interactive prompt information is prompt information generated in advance according to the video content of the basic pairing video. Optionally, the interactive prompt information includes an interactive prompt time and an interactive prompt action. The interactive prompt action matches the action shown in the video image of the basic pairing video at the interactive prompt time; it is the interactive action best suited to that action, that is, the optimal interactive action. For example, if the action shown in the video image of the basic pairing video at the interactive prompt time is a left-hand finger-heart gesture, the matching interactive prompt action is a right-hand finger-heart gesture.
In an embodiment, during shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in target interactive prompt information, the user is prompted with the interactive prompt action in the target interactive prompt information according to a preset interactive prompt mode, so as to guide the user to perform an interactive action matching the action shown in the video image.
In one specific example, a video shooting window and a video playback window are set on the video shooting interface. For example, the video shooting window and the video playback window have the same size, the video playback window is located on the right side of the video shooting interface, and the video shooting window is located on the left side. While the user pairing video being shot is displayed in real time in the video shooting window, the basic pairing video is played in the video playback window. The user can shoot the user pairing video according to the user prompts and the basic pairing video played synchronously in the video playback window.
In another specific example, only the video shooting window is set on the video shooting interface, and the user shoots the user pairing video according to the user prompts.
Step 130: combine the basic pairing video with the user pairing video that has been shot to form a co-production video.
After the shooting of the user video is completed, the basic pairing video and the user pairing video that has been shot are combined to form a co-production video. The co-production video includes both the content of the basic pairing video and the content of the completed user pairing video.
Optionally, a video includes two parts: video frame images and audio information. The video frame images of the basic pairing video are combined with the video frame images of the completed user pairing video, the audio information corresponding to the basic pairing video is combined with the audio information corresponding to the completed user pairing video, and the combined video frame images and the corresponding combined audio information are then synthesized into the co-production video.
Optionally, the size of the video frame images of the basic pairing video is equal to the size of the video frame images of the completed user pairing video. Combining the video frame images of the basic pairing video with those of the completed user pairing video means sequentially combining each frame image of the basic pairing video with the corresponding frame image of the completed user pairing video into a single frame. In the combined frame, the video frame image of the basic pairing video is located to the right of the video frame image of the completed user pairing video.
In the technical solution of this embodiment, during shooting of the user pairing video, at least one item of interactive prompt information matching the video content of the basic pairing video is acquired to prompt the user, the user captured in the user pairing video is guided by the interactive prompt information to perform an interactive action matching the basic pairing video, and the basic pairing video is combined with the user pairing video that has been shot to form a co-production video. This solves the problem in the related art that most ordinary co-production users cannot find an effective shooting approach through a simple single take, making co-production video shooting inefficient and time-consuming. Based on the interactive prompt information, the co-production user can be prompted to perform interactive actions matching the video, so as to guide the co-production user to shoot the co-production video in a better way (for example, with a more attractive or more interesting shooting effect) and improve shooting efficiency.
FIG. 2 is a flowchart of another video shooting method provided by an embodiment of the present disclosure. This embodiment may be combined with the optional solutions in one or more of the above embodiments. In this embodiment, the interactive prompt information may include an interactive prompt time and an interactive prompt action, where the interactive prompt action matches the action shown in the video image of the basic pairing video at the interactive prompt time.
Further, acquiring at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user during shooting of the user pairing video may include: if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in target interactive prompt information, prompting the user with the interactive prompt action in the target interactive prompt information according to a preset interactive prompt mode.
Further, before acquiring the basic pairing video matching the video co-production request, the method may further include: performing action recognition on the basic pairing video through an action recognition model to obtain at least one target action; determining an interactive prompt action matching the at least one target action; and generating, according to the time information of the at least one target action and the interactive prompt action, interactive prompt information matching the at least one target action.
As shown in FIG. 2, the method may include the following steps.
Step 201: perform action recognition on the basic pairing video through the action recognition model to obtain at least one target action.
An action recognition model is created in advance and trained with preset action recognition sample data, so that the action recognition model can recognize human body parts and human actions in a video. The basic pairing video is input into the action recognition model to obtain at least one human action corresponding to the basic pairing video, that is, the target action.
Optionally, the action recognition model is a neural network model used to recognize human body parts and human actions in a video.
Step 202: determine an interactive prompt action matching the at least one target action.
For all the target actions corresponding to the basic pairing video, an interactive prompt action matching each target action is determined.
Optionally, the interactive prompt action matching the at least one target action is determined according to the author remark information of the basic pairing video.
When producing the basic pairing video, the author of the basic pairing video can use the author remark information to annotate interaction modes suitable for the basic pairing video. In an embodiment, the author remark information may include human actions in the basic pairing video and the interactive action corresponding to each human action. The target action is queried in the author remark information; after a human action consistent with the target action is found, the interactive action corresponding to that human action is acquired and determined as the interactive prompt action matching the target action.
Optionally, the interactive prompt action matching the at least one target action is determined according to interactive recommendation information of the basic pairing video.
Other users of the video interaction platform, for example the platform operator or high-quality video creators, can recommend interaction modes suitable for the basic pairing video through the interactive recommendation information. In an embodiment, the interactive recommendation information may include human actions in the basic pairing video and the interactive action corresponding to each human action. The target action is queried in the interactive recommendation information; after a human action consistent with the target action is found, the interactive action corresponding to that human action is acquired and determined as the interactive prompt action matching the target action.
Optionally, the interactive prompt action matching the at least one target action is determined according to historical co-production videos on the video interaction platform.
A set number of historical co-production videos on the video interaction platform are acquired and recognized to obtain the various human actions included in the historical co-production videos. Optionally, the set number of historical co-production videos includes historical co-production videos obtained from the historical information of users associated with the author of the basic pairing video. Statistics are performed on the user data associated with each human action to determine the interactive action corresponding to that human action; for example, the human action that has been used to interact with a given human action the largest number of times is determined as its corresponding interactive action.
In this way, the various human actions appearing in co-production videos and the interactive action corresponding to each of them are obtained from the historical co-production videos on the video interaction platform. The target action is queried among these human actions; after a human action consistent with the target action is found, the interactive action corresponding to that human action is acquired and determined as the interactive prompt action matching the target action.
Step 203: generate, according to the time information of the at least one target action and the interactive prompt action, interactive prompt information matching the at least one target action.
The time information of a target action is the time at which the target action appears in the basic pairing video. For example, the duration of the basic pairing video is 30 seconds and the target action appears in the basic pairing video at the 15th second.
The time at which the target action appears in the basic pairing video is determined as the interactive prompt time, and the interactive prompt time and the interactive prompt action are determined as the interactive prompt information matching the target action.
In this way, interactive prompt information matching each target action corresponding to the basic pairing video is generated.
Step 204: acquire a basic pairing video matching a video co-production request.
Step 205: if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, prompt the user with the interactive prompt action in the target interactive prompt information according to a preset interactive prompt mode.
During shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, the user is prompted with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode. The type of the preset interactive prompt mode may include at least one of the following: a subtitle prompt, a sticker prompt, and a voice prompt.
In one specific example, the type of the preset interactive prompt mode is a subtitle prompt. During shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, a prompt subtitle matching the interactive prompt action in the target interactive prompt information is displayed in the video shooting window, so that the user performs the interactive prompt action according to the prompt subtitle. For example, the prompt subtitle "right hand heart" is displayed in the video shooting window, so that the user performs the interactive prompt action of making a finger-heart gesture with the right hand according to the prompt subtitle.
In another specific example, the type of the preset interactive prompt mode is a sticker prompt. During shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, a prompt sticker matching the interactive prompt action in the target interactive prompt information is displayed in the video shooting window, so that the user performs the interactive prompt action according to the prompt sticker. The prompt sticker may be a static sticker, such as a picture, or a dynamic sticker, such as an animation. For example, a prompt sticker matching the interactive prompt action "right hand heart" is displayed in the video shooting window, so that the user performs that interactive prompt action according to the prompt sticker.
In another specific example, the type of the preset interactive prompt mode is a voice prompt. During shooting of the user pairing video, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, the terminal device is controlled to play a prompt voice matching the interactive prompt action in the target interactive prompt information, so that the user performs the interactive prompt action according to the prompt voice. For example, the terminal device is controlled to play the prompt voice "right hand heart", so that the user performs that interactive prompt action according to the prompt voice.
Step 206: combine the basic pairing video with the user pairing video that has been shot to form a co-production video.
In the technical solution of this embodiment, action recognition is performed on the basic pairing video through an action recognition model to obtain at least one target action, an interactive prompt action matching the at least one target action is determined, and interactive prompt information matching the at least one target action is generated according to the time information of the at least one target action and the interactive prompt action. When it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, the user is prompted with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode. In this way, the co-production user can be guided during shooting of the user pairing video to perform interactive actions matching the basic pairing video, which enhances the interactive effect of the co-production video, improves the efficiency of co-production video shooting, and improves the user experience.
FIG. 3 is a flowchart of yet another video shooting method provided by an embodiment of the present disclosure. This embodiment may be combined with the optional solutions in one or more of the above embodiments. In this embodiment, before the user is prompted with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode, the method may further include: determining the preset interactive prompt mode matching the user according to the user's historical co-production information; or determining the preset interactive prompt mode matching the user according to the user's co-production setting information.
As shown in FIG. 3, the method may include the following steps.
Step 301: acquire a basic pairing video matching a video co-production request.
Step 302: determine the preset interactive prompt mode matching the user according to the user's historical co-production information.
The types of the preset interactive prompt mode include a subtitle prompt, a sticker prompt, and a voice prompt.
According to the user's historical co-production information, the number of times the user has used each interactive prompt mode is determined. The usage counts are sorted from high to low, and the interactive prompt mode ranked first is taken as the preset interactive prompt mode matching the user. In this way, the interactive prompt mode used most often is determined as the preset interactive prompt mode matching the user.
Alternatively, the preset interactive prompt mode matching the user can be determined according to the user's co-production setting information.
The user can specify a preferred interactive prompt mode in the co-production setting information according to the user's own preference. The interactive prompt mode specified by the user is determined as the preset interactive prompt mode matching the user.
Step 303: if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in the target interactive prompt information, prompt the user with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode.
Step 304: combine the basic pairing video with the user pairing video that has been shot to form a co-production video.
In the technical solution of this embodiment, before the user is prompted with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode, the preset interactive prompt mode matching the user is determined according to the user's historical co-production information. The user's preferred interactive prompt mode can thus be determined from the historical co-production information, and the interactive prompt action in the target interactive prompt information is presented to the user in that preferred mode, which improves the user experience.
FIG. 4 is a schematic structural diagram of a video shooting apparatus provided by an embodiment of the present disclosure. This embodiment is applicable to the case of shooting a co-production video. The apparatus can be implemented in software and/or hardware and can be configured in an electronic device. As shown in FIG. 4, the apparatus may include a video acquisition module 401, a user prompt module 402, and a video combination module 403.
The video acquisition module 401 is configured to acquire a basic pairing video matching a video co-production request. The user prompt module 402 is configured to acquire, during shooting of a user pairing video, at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video. The video combination module 403 is configured to combine the basic pairing video with the user pairing video that has been shot to form a co-production video.
In the technical solution of this embodiment, during shooting of the user pairing video, at least one item of interactive prompt information matching the video content of the basic pairing video is acquired to prompt the user, the user captured in the user pairing video is guided by the interactive prompt information to perform an interactive action matching the basic pairing video, and the basic pairing video is combined with the user pairing video that has been shot to form a co-production video. This solves the problem in the related art that most ordinary co-production users cannot find an effective shooting approach through a simple single take, making co-production video shooting inefficient and time-consuming. Based on the interactive prompt information, the co-production user can be prompted to perform interactive actions matching the video, so as to guide the co-production user to shoot the co-production video in a better way and improve shooting efficiency.
Optionally, on the basis of the above technical solution, the interactive prompt information may include an interactive prompt time and an interactive prompt action, where the interactive prompt action matches the action shown in the video image of the basic pairing video at the interactive prompt time. The user prompt module 402 may include an action prompt unit configured to, if it is determined that the current shooting time of the user pairing video matches the interactive prompt time in target interactive prompt information, prompt the user with the interactive prompt action in the target interactive prompt information according to a preset interactive prompt mode.
Optionally, on the basis of the above technical solution, the apparatus may further include: an action recognition module configured to perform action recognition on the basic pairing video through an action recognition model to obtain at least one target action; an action determining module configured to determine an interactive prompt action matching the at least one target action; and a prompt information generating module configured to generate, according to the time information of the at least one target action and the interactive prompt action, interactive prompt information matching the at least one target action.
Optionally, on the basis of the above technical solution, the action determining module may include: a first action determining unit configured to determine the interactive prompt action matching the at least one target action according to the author remark information of the basic pairing video; or a second action determining unit configured to determine the interactive prompt action matching the at least one target action according to interactive recommendation information of the basic pairing video; or a third action determining unit configured to determine the interactive prompt action matching the at least one target action according to historical co-production videos on the video interaction platform.
Optionally, on the basis of the above technical solution, the type of the preset interactive prompt mode may include at least one of the following: a subtitle prompt, a sticker prompt, and a voice prompt.
Optionally, on the basis of the above technical solution, the apparatus may further include: a first mode determining unit configured to determine the preset interactive prompt mode matching the user according to the user's historical co-production information; or a second mode determining unit configured to determine the preset interactive prompt mode matching the user according to the user's co-production setting information.
Optionally, on the basis of the above technical solution, the type of the preset interactive prompt mode is a subtitle prompt, and the action prompt unit may include a subtitle prompt subunit configured to display, in the video shooting window, a prompt subtitle matching the interactive prompt action in the target interactive prompt information, so that the user performs the interactive prompt action according to the prompt subtitle.
The video shooting apparatus provided in the embodiments of the present disclosure can execute the video shooting method provided in the embodiments of the present disclosure, and has the functional modules and effects corresponding to the executed method.
Referring now to FIG. 5, it shows a schematic structural diagram of an electronic device (for example, a terminal device or a server) 500 suitable for implementing the embodiments of the present disclosure. The terminal devices in the embodiments of the present disclosure may include mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and in-vehicle terminals (for example, in-vehicle navigation terminals), and fixed terminals such as digital televisions (TVs) and desktop computers. The electronic device shown in FIG. 5 is only an example.
As shown in FIG. 5, the electronic device 500 may include a processing device (for example, a central processing unit or a graphics processor) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices can be connected to the I/O interface 505: an input device 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and/or a gyroscope; an output device 507 including, for example, a liquid crystal display (LCD), a speaker, and/or a vibrator; a storage device 508 including, for example, a magnetic tape and/or a hard disk; and a communication device 509. The communication device 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data. Although FIG. 5 shows the electronic device 500 having various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
According to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 509, or installed from the storage device 508, or installed from the ROM 502. When the computer program is executed by the processing device 501, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
The above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. The computer-readable storage medium may include: an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted using any suitable medium, including a wire, an optical cable, radio frequency (RF), and the like, or any suitable combination of the above.
The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or it may exist alone without being assembled into the electronic device.
The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a basic pairing video matching a video co-production request; during shooting of a user pairing video, acquire at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, where the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video; and combine the basic pairing video with the user pairing video that has been shot to form a co-production video.
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet by using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of the methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams can represent a module, a program segment, or a part of code, which contains one or more executable instructions for realizing the specified logical functions. In some alternative implementations, the functions marked in the blocks may also occur in an order different from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved. Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules, units, and subunits involved in the described embodiments of the present disclosure may be implemented in software or hardware. The names of the modules, units, and subunits do not, under certain circumstances, constitute a limitation on the modules or units themselves. For example, the video acquisition module can also be described as "a module for acquiring a basic pairing video matching a video co-production request", the first action determining unit can also be described as "a unit for determining an interactive prompt action matching the target action according to the author remark information of the basic pairing video", and the subtitle prompt subunit can also be described as "a subunit for displaying, in the video shooting window, a prompt subtitle matching the interactive prompt action in the target interactive prompt information, so that the user performs the interactive prompt action according to the prompt subtitle".

Claims (12)

  1. A video shooting method, comprising:
    acquiring a basic pairing video matching a video co-production request;
    during shooting of a user pairing video, acquiring at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, wherein the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video; and
    combining the basic pairing video with the user pairing video that has been shot to form a co-production video.
  2. The method according to claim 1, wherein the interactive prompt information comprises an interactive prompt time and an interactive prompt action, and the interactive prompt action matches an action shown in a video image of the basic pairing video at the interactive prompt time;
    wherein acquiring, during shooting of the user pairing video, the at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user comprises:
    in a case where it is determined that a current shooting time of the user pairing video matches the interactive prompt time in target interactive prompt information, prompting the user with the interactive prompt action in the target interactive prompt information according to a preset interactive prompt mode.
  3. The method according to claim 2, before acquiring the basic pairing video matching the video co-production request, further comprising:
    performing action recognition on the basic pairing video through an action recognition model to obtain at least one target action;
    determining an interactive prompt action matching the at least one target action; and
    generating, according to time information of the at least one target action and the interactive prompt action, interactive prompt information matching the at least one target action.
  4. The method according to claim 3, wherein determining the interactive prompt action matching the at least one target action comprises:
    determining the interactive prompt action matching the at least one target action according to author remark information of the basic pairing video; or
    determining the interactive prompt action matching the at least one target action according to interactive recommendation information of the basic pairing video; or
    determining the interactive prompt action matching the at least one target action according to historical co-production videos on a video interaction platform.
  5. The method according to claim 2, wherein a type of the preset interactive prompt mode comprises at least one of the following: a subtitle prompt, a sticker prompt, and a voice prompt.
  6. The method according to claim 2, before prompting the user with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode, further comprising:
    determining the preset interactive prompt mode matching the user according to historical co-production information of the user; or
    determining the preset interactive prompt mode matching the user according to co-production setting information of the user.
  7. The method according to claim 2, wherein the type of the preset interactive prompt mode is a subtitle prompt;
    wherein prompting the user with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode comprises:
    displaying, in a video shooting window, a prompt subtitle matching the interactive prompt action in the target interactive prompt information.
  8. The method according to claim 2, wherein the type of the preset interactive prompt mode is a sticker prompt;
    wherein prompting the user with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode comprises:
    displaying, in a video shooting window, a prompt sticker matching the interactive prompt action in the target interactive prompt information.
  9. The method according to claim 2, wherein the type of the preset interactive prompt mode is a voice prompt;
    wherein prompting the user with the interactive prompt action in the target interactive prompt information according to the preset interactive prompt mode comprises:
    playing a prompt voice matching the interactive prompt action in the target interactive prompt information.
  10. A video shooting apparatus, comprising:
    a video acquisition module configured to acquire a basic pairing video matching a video co-production request;
    a user prompt module configured to acquire, during shooting of a user pairing video, at least one item of interactive prompt information matching the video content of the basic pairing video to prompt the user, wherein the interactive prompt information is used to guide the user captured in the user pairing video to perform an interactive action matching the basic pairing video; and
    a video combination module configured to combine the basic pairing video with the user pairing video that has been shot to form a co-production video.
  11. An electronic device, comprising:
    one or more processors; and
    a storage device configured to store one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 9.
  12. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 9.
PCT/CN2020/071136 2019-04-11 2020-01-09 Video shooting method and apparatus, electronic device, and storage medium WO2020207080A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910289648.8 2019-04-11
CN201910289648.8A CN109982130A (zh) 2019-04-11 2019-04-11 Video shooting method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020207080A1 true WO2020207080A1 (zh) 2020-10-15

Family

ID=67084129

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/071136 WO2020207080A1 (zh) 2019-04-11 2020-01-09 Video shooting method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN109982130A (zh)
WO (1) WO2020207080A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109982130A (zh) * 2019-04-11 2019-07-05 北京字节跳动网络技术有限公司 Video shooting method and apparatus, electronic device, and storage medium
CN111726536B (zh) * 2020-07-03 2024-01-05 腾讯科技(深圳)有限公司 Video generation method and apparatus, storage medium, and computer device
CN112114925B (zh) 2020-09-25 2021-09-21 北京字跳网络技术有限公司 Method, apparatus, device, and storage medium for user guidance
CN114915722B (zh) * 2021-02-09 2023-08-22 华为技术有限公司 Method and apparatus for processing video
CN113721807B (zh) * 2021-08-30 2023-08-22 北京字跳网络技术有限公司 Information display method and apparatus, electronic device, and storage medium
CN114125181B (zh) * 2021-11-22 2024-06-21 北京达佳互联信息技术有限公司 Video processing method and video processing apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080252856A1 (en) * 2007-04-13 2008-10-16 Raytac Corp. Wireless presentation multi-proportion scaling method
CN107566914A (zh) * 2017-10-23 2018-01-09 咪咕动漫有限公司 Bullet-screen comment display control method, electronic device, and storage medium
CN109005352A (zh) * 2018-09-05 2018-12-14 传线网络科技(上海)有限公司 Method and apparatus for co-shooting video
CN109982130A (zh) * 2019-04-11 2019-07-05 北京字节跳动网络技术有限公司 Video shooting method and apparatus, electronic device, and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI220238B (en) * 2003-07-15 2004-08-11 Inventec Corp Web multimedia real-time interactive teaching system and method thereof
CN104754419A (zh) * 2015-03-13 2015-07-01 腾讯科技(北京)有限公司 Video-based interaction method and apparatus
CN105307042A (zh) * 2015-10-28 2016-02-03 天脉聚源(北京)科技有限公司 Method and apparatus for setting interaction information in an interactive television system
CN106022707B (zh) * 2016-05-06 2022-04-26 北京小米移动软件有限公司 Information prompting method and apparatus
CN108632446A (zh) * 2018-03-13 2018-10-09 维沃移动通信有限公司 Information prompting method and mobile terminal
CN108377334B (zh) * 2018-04-03 2021-06-04 阿里巴巴(中国)有限公司 Short video shooting method and apparatus, and electronic terminal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080252856A1 (en) * 2007-04-13 2008-10-16 Raytac Corp. Wireless presentation multi-proportion scaling method
CN107566914A (zh) * 2017-10-23 2018-01-09 咪咕动漫有限公司 Bullet-screen comment display control method, electronic device, and storage medium
CN109005352A (zh) * 2018-09-05 2018-12-14 传线网络科技(上海)有限公司 Method and apparatus for co-shooting video
CN109982130A (zh) * 2019-04-11 2019-07-05 北京字节跳动网络技术有限公司 Video shooting method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN109982130A (zh) 2019-07-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20788587

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.02.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20788587

Country of ref document: EP

Kind code of ref document: A1