CN115767141A - Video playing method and device and electronic equipment - Google Patents

Video playing method and device and electronic equipment

Info

Publication number
CN115767141A
Authority
CN
China
Prior art keywords
video
special effect
target
input
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211038653.XA
Other languages
Chinese (zh)
Inventor
林君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211038653.XA priority Critical patent/CN115767141A/en
Publication of CN115767141A publication Critical patent/CN115767141A/en
Priority to PCT/CN2023/114196 priority patent/WO2024041514A1/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/238: Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N 21/2387: Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/458: Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules; time-related management operations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/482: End-user interface for program selection

Abstract

The application discloses a video playing method and device and electronic equipment, and belongs to the technical field of camera shooting. The method comprises: receiving a first input of a user on a target object in a playing interface of a target video, wherein the target video comprises at least one shooting object associated with at least one special effect video, the target video is an original video or a composite video, the composite video is obtained by synthesizing the original video with the special effect video associated with at least one shooting object in the original video, the target object is any one of the at least one shooting object, and special effect parameters of different special effect videos are different; and, in response to the first input, playing at least one target special effect video associated with the target object.

Description

Video playing method and device and electronic equipment
Technical Field
The application belongs to the technical field of camera shooting, and particularly relates to a video playing method and device and electronic equipment.
Background
With the development of communication technology, the functions of electronic devices are more and more abundant, for example, users can play videos through the electronic devices.
Specifically, after the user clicks a play control in the video playing interface, the electronic device plays the video with a fixed playing effect. As a result, the playback effect of videos played by the electronic device is monotonous.
Disclosure of Invention
The embodiments of the application aim to provide a video playing method, a video playing device and an electronic device, which can enrich the special effects of a video and thereby improve the diversity of video playback effects.
In a first aspect, an embodiment of the present application provides a video playing method, where the method includes: receiving a first input of a user to a target object in a playing interface of a target video, wherein at least one shooting object in the target video is associated with at least one special effect video, the target video comprises an original video or a synthesized video, the synthesized video is obtained by synthesizing the original video and the special effect video associated with the at least one shooting object in the original video, the target object is at least one shooting object selected by the user in the at least one shooting object, and special effect parameters of different special effect videos are different; in response to the first input, at least one target special effect video associated with the target object is played.
In a second aspect, an embodiment of the present application provides a video shooting method, where the method includes: displaying a video recording interface of an original video, where the video recording interface comprises a video recording window of at least one special effect video, each video recording window is associated with at least one shooting object in the video recording interface, and special effect parameters of different video recording windows are different; synchronously updating the preview image in each video recording window in the process of recording the original video; and outputting a target file when the recording of the original video and the at least one special effect video is completed, where the target file comprises the original video and the at least one special effect video, or the target file comprises a composite video synthesized from the original video and the at least one special effect video, or the target file comprises the original video, the at least one special effect video and the composite video.
In a third aspect, an embodiment of the present application provides a video playing apparatus, where the apparatus includes a receiving module and a playing module. The receiving module is used for receiving a first input of a user on a target object in a playing interface of a target video, where at least one shooting object in the target video is associated with at least one special effect video, the target video comprises an original video or a composite video, the composite video is obtained by synthesizing the original video with the special effect video associated with the at least one shooting object in the original video, the target object is at least one shooting object selected by the user from the at least one shooting object, and special effect parameters of different special effect videos are different. The playing module is used for playing, in response to the first input received by the receiving module, at least one target special effect video associated with the target object.
In a fourth aspect, an embodiment of the present application provides a video shooting apparatus, where the apparatus includes a display module, an updating module and a processing module. The display module is used for displaying a video recording interface of an original video, where the video recording interface comprises a video recording window of at least one special effect video, each video recording window is associated with at least one shooting object in the video recording interface, and special effect parameters of different video recording windows are different. The updating module is used for synchronously updating the preview image in each video recording window in the process of recording the original video. The processing module is used for outputting a target file when the recording of the original video and the at least one special effect video is completed; the target file comprises the original video and the at least one special effect video, or the target file comprises a composite video synthesized from the original video and the at least one special effect video, or the target file comprises the original video, the at least one special effect video and the composite video.
In a fifth aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, implement the steps of the method according to the first aspect or the second aspect.
In a sixth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first or second aspect.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect or the second aspect.
In an eighth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement a method according to the first or second aspect.
In the embodiments of the application, a first input of a user on a target object in a playing interface of a target video can be received, where at least one shooting object in the target video is associated with at least one special effect video, the target video comprises an original video or a composite video, the composite video is obtained by synthesizing the original video with the special effect video associated with the at least one shooting object in the original video, the target object is at least one shooting object selected by the user from the at least one shooting object, and special effect parameters of different special effect videos are different; in response to the first input, at least one target special effect video associated with the target object is played. According to this scheme, since at least one shooting object in the target video is associated with at least one special effect video, the at least one target special effect video associated with the target object can be played according to the user's input on the target object in the playing interface of the target video, instead of the target video being played with a fixed playing effect; the special effects of the video can therefore be enriched, and the diversity of video playback effects is improved.
Drawings
Fig. 1 is a schematic diagram of a video playing method according to some embodiments of the present application;
fig. 2 is a schematic interface diagram of an application of a video playing method according to some embodiments of the present application;
fig. 3 is a schematic interface diagram of an application of a video playing method according to some embodiments of the present application;
fig. 4 is a schematic interface diagram of an application of a video playing method according to some embodiments of the present application;
fig. 5 is a schematic interface diagram of an application of a video playing method according to some embodiments of the present application;
fig. 6 is a schematic interface diagram of an application of a video playing method according to some embodiments of the present application;
fig. 7 is a schematic interface diagram of an application of a video playing method according to some embodiments of the present application;
fig. 8 is a schematic diagram of a video capture method provided by some embodiments of the present application;
fig. 9 is a schematic interface diagram of an application of a video shooting method according to some embodiments of the present application;
FIG. 10 is a schematic interface diagram of a video capture method application provided by some embodiments of the present application;
fig. 11 is a schematic interface diagram of an application of a video shooting method according to some embodiments of the present application;
FIG. 12 is a schematic interface diagram of a video capture method application provided by some embodiments of the present application;
FIG. 13 is a schematic interface diagram of a video capture method application provided by some embodiments of the present application;
FIG. 14 is a schematic interface diagram of a video capture method application provided by some embodiments of the present application;
FIG. 15 is a schematic interface diagram of a video capture method application provided by some embodiments of the present application;
FIG. 16 is a schematic interface diagram of a video capture method application provided by some embodiments of the present application;
FIG. 17 is a schematic interface diagram of an application of a video capture method provided by some embodiments of the present application;
FIG. 18 is a schematic interface diagram of a video capture method application provided by some embodiments of the present application;
FIG. 19 is a schematic interface diagram of an application of a video capture method provided by some embodiments of the present application;
FIG. 20 is a schematic interface diagram of a video capture method application provided by some embodiments of the present application;
fig. 21 is a schematic diagram of a video playback device according to some embodiments of the present application;
FIG. 22 is a schematic view of a video capture device provided by some embodiments of the present application;
fig. 23 is a schematic diagram of an electronic device provided by some embodiments of the present application;
fig. 24 is a hardware schematic diagram of an electronic device provided in some embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used for distinguishing between similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second" and the like are generally of one kind, and their number is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
It should be noted that identifiers in the embodiments of the present application are used to indicate information such as words, symbols and images, and a control or other container may be used as a carrier for displaying the information; identifiers include, but are not limited to, word identifiers, symbol identifiers and image identifiers.
The video playing method, the video playing device and the electronic device provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The video playing method provided by the embodiment of the application can be applied to scenes for playing videos.
For example, when the user does not want the electronic device to play the target video with a fixed playing effect, the electronic device may receive a user input on a target object in the target video, since the target video includes at least one shooting object and the target object is associated with at least one special effect video. The electronic device can therefore play the at least one target special effect video associated with the target object according to the user's input on the target object, instead of playing the target video with a fixed playing effect, so that the special effects of the video can be enriched and the diversity of the played video is improved.
Embodiments of the present application provide a video playing method, and fig. 1 shows a flowchart of a video playing method provided in some embodiments of the present application. As shown in fig. 1, a video playing method provided by some embodiments of the present application may include steps 101 and 102 described below. The method is described below by taking a first electronic device as the execution subject.
Step 101, first electronic equipment receives a first input of a user to a target object in a playing interface of a target video.
Optionally, in some embodiments of the present application, each input of the user may be input by the user through a finger or a touch device such as a stylus.
The target video may include at least one shooting object associated with at least one special effect video, the target video may include an original video or a composite video, the composite video may be synthesized from the original video and the special effect video associated with at least one shooting object in the original video, the target object may be any one of the at least one shooting object, and special effect parameters of different special effect videos may be different.
Optionally, in some embodiments of the present application, one photographic subject may be associated with at least one special effect video.
In some embodiments of the present application, an original video is a video that has undergone only basic processing such as 3A (auto focus, auto exposure and auto white balance) adjustment, resolution and frame rate setting, and picture size cropping, without any additional special effect processing superimposed.
In some embodiments of the present application, the special effect video is a video obtained by rendering an original video by special effect parameters.
Optionally, in some embodiments of the present application, the special effect parameters include, but are not limited to, a filter parameter, a special effect parameter, a background music parameter, a video style parameter, and the like, and may be determined specifically according to actual usage requirements, and the embodiments of the present application are not limited.
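To make the relationship between shooting objects, special effect videos and special effect parameters concrete, the sketch below models it with hypothetical Kotlin data classes; all type and field names are illustrative assumptions rather than part of this application.

```kotlin
// Hypothetical data model: names and fields are illustrative assumptions.
data class SpecialEffectParams(
    val filter: String? = null,         // e.g. "oil painting"
    val effect: String? = null,         // e.g. "elongation", "lighting"
    val backgroundMusic: String? = null,
    val videoStyle: String? = null      // e.g. "classical", "cartoon"
)

data class SpecialEffectVideo(
    val videoId: String,
    val params: SpecialEffectParams,    // different effect videos differ in these parameters
    val durationMs: Long
)

data class ShootingObject(
    val objectId: String,                        // object identifier shown in the playing interface
    val effectVideos: List<SpecialEffectVideo>   // one object may be associated with several effect videos
)

data class TargetVideo(
    val isComposite: Boolean,           // original video vs. composite video
    val objects: List<ShootingObject>
)
```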
Optionally, in some embodiments of the present application, the target video may be a video captured by the first electronic device, or a video transmitted by another electronic device, for example, the target video may be a video transmitted by the second electronic device in the following embodiments.
Optionally, in some embodiments of the present application, the first electronic device may receive, without playing the target video, a first input of a user to a target object in the target video; or, the first electronic device may receive a first input of a user to a target object in the target video when the target video is being played, which may be determined according to actual usage requirements, and the embodiment of the present application is not limited.
Optionally, in some embodiments of the present application, the first input includes, but is not limited to, a touch input, a hover input, a preset gesture input, or a voice input, which may be determined according to actual use requirements. For example, the first input is a click input of the target object by the user.
Optionally, in some embodiments of the present application, the step 101 may be specifically implemented by the following step 101a or step 101b.
Step 101a, the first electronic device receives an input of a user to an area where the target object is located.
Step 101b, in the case that the playing interface includes an object identifier associated with the target object, the first electronic device receives an input of the user on the object identifier of the target object.
Wherein one object identification may indicate one photographic object.
Optionally, in some embodiments of the present application, the object identifier may be displayed in a preset area corresponding to the shooting object, or may be displayed in any area of the playing interface, which may be specifically determined according to actual use requirements, and the embodiments of the present application are not limited.
Illustratively, as shown in fig. 2, it is assumed that the target video is a video shot in an office scene, and the target video 20 includes 5 shooting objects: a book, a desk lamp, a table, a chair and a drawer. The book is associated with a special effect video comprising an oil painting filter, an elongation special effect, "night sky silence" music and a realistic style, with a video duration of 30 s; the desk lamp is associated with a special effect video comprising an ink black filter, a lighting special effect, "momentary perpetuation" music and a cartoon style, with a video duration of 30 s; the table is associated with a special effect video comprising a smoke and rain grey filter, an elongation special effect, "autumn thoughts" music and a classical style, with a video duration of 30 s; the chair is associated with a special effect video comprising a rain and shade filter, an elongation special effect, "journey of life" music and a fresh style, with a video duration of 30 s; and the drawer is associated with a special effect video comprising a high-level grey filter, a lighting special effect, "big fish" music and a classical style, with a video duration of 30 s. When the target object is the book, the first input may be an input of the user on the area 21 where the book is located, or an input on the identifier 22 of the book.
In some embodiments of the present application, since the user can input the area where the target object is located and can also input the object identifier indicated by the target object, the flexibility of the user operation can be improved.
Step 102, the first electronic device responds to the first input and plays at least one target special effect video associated with the target object.
Optionally, in some embodiments of the present application, the first electronic device may play a specific video according to the first input, or may play the associated videos in a preset order.
In the video playing method provided in some embodiments of the present application, at least one shooting object included in the target video is associated with at least one special effect video. Therefore, the at least one target special effect video associated with the target object can be played according to the user's input on the target object in the playing interface of the target video, instead of the target video being played with a fixed playing effect; the special effects of the video can thus be enriched, and the diversity of video playback effects is improved.
Optionally, in some embodiments of the present application, the target object may associate at least two special effect videos; the step 102 may be specifically realized by the step 102a or the step 102b described below.
102a, under the condition that each special effect video is associated with a preset input feature and the input feature of the first input is matched with the first preset input feature, the first electronic device plays the special effect video associated with the first preset input feature.
Wherein the preset input features may include at least one of: inputting a starting position, an input direction and an input track.
Optionally, in some embodiments of the present application, the input starting position may be the position of the target object in the target video, or the position of the object identifier of the target object.
Illustratively, the first input may be a sliding input in which the user draws a check-mark gesture on the screen; alternatively, the first input may be a sliding input in which the user draws a circle gesture on the screen.
In some embodiments of the present application, the matching of the input feature of the first input with the first preset input feature may be understood as: the degree of match between the preset features of the first preset input and the input features of the first input satisfies a preset threshold, for example 90%.
In some embodiments of the application, when the input feature of the first input matches the first preset input feature, the first electronic device may display a "√" prompt message in the playing interface indicating that the gesture for the target object exists, and start to play the special effect video associated with the first preset input feature; when the input feature of the first input does not match the first preset input feature, the first electronic device may display a "×" prompt message in the playing interface indicating that the gesture for the target object does not exist, and continue to play the target video.
Illustratively, assume that the first preset input feature corresponding to the book is a "cross"-shaped input track. As shown in fig. 3, when the input track 23 of the user's first input is also a "cross", that is, the input feature of the first input matches the first preset input feature of the target object book 21, then, as shown in fig. 4, the first electronic device may display a prompt message in the playing interface 20 indicating that the gesture for the target object exists, and start to play the special effect video associated with the first preset input feature. When the input track 24 of the user's first input is "Z"-shaped, as shown in fig. 5, that is, the input feature of the first input does not match the first preset input feature of the target object book 21, then, as shown in fig. 6, the first electronic device may display a prompt message in the playing interface 20 indicating that the gesture for the target object does not exist, and continue to play the target video.
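As a rough illustration of how the input feature of the first input could be compared with a preset input feature (using the 90% matching threshold mentioned above), consider the following Kotlin sketch; the types, the similarity measure and the function names are assumptions, not part of this application.

```kotlin
// Sketch of matching a first input against a preset input feature.
// The 90% threshold follows the example above; names are illustrative assumptions.
data class InputFeature(
    val startPosition: Pair<Float, Float>?,  // optional input starting position
    val direction: Float?,                   // optional input direction, e.g. angle in degrees
    val track: List<Pair<Float, Float>>      // sampled points of the sliding track
)

fun matches(input: InputFeature, preset: InputFeature, threshold: Double = 0.9): Boolean {
    // Placeholder similarity: fraction of input track points that fall close to the
    // preset track. A real implementation would use a proper gesture-recognition
    // distance such as dynamic time warping.
    if (input.track.isEmpty() || preset.track.isEmpty()) return false
    val near = input.track.count { p ->
        preset.track.any { q ->
            val dx = p.first - q.first
            val dy = p.second - q.second
            dx * dx + dy * dy < 0.01f
        }
    }
    return near.toDouble() / input.track.size >= threshold
}
```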
And 102b, under the condition that the input characteristic of the first input is a second preset input characteristic, the first electronic equipment plays at least two special effect videos related to the target object according to the playing sequence related to the second preset input characteristic.
Optionally, in some embodiments of the application, the first electronic device may play all videos associated with the target object, or only play part of videos associated with the target object, which may be determined according to actual usage requirements, and the embodiments of the application are not limited.
For example, the target object may be associated with 5 videos: video 1, video 2, video 3, video 4 and video 5. Video 1 is a special effect video comprising an oil painting filter, an elongation special effect, "night sky silence" music and a realistic style; video 2 is a special effect video comprising an ink black filter, a lighting special effect, "momentary perpetuation" music and a cartoon style; video 3 is a special effect video comprising a smoke and rain grey filter, an elongation special effect, "autumn thoughts" music and a classical style; video 4 is a special effect video comprising a rain and shade filter, a special effect, "journey of life" music and a fresh style; and video 5 is a special effect video comprising a high-level grey filter, a lighting special effect, "big fish" music and a classical style. In the case where the input feature of the first input is the second preset input feature, the first electronic device may play only 3 of the 5 videos associated with the target object.
In some embodiments of the application, in the case where the target object is associated with at least two special effect videos, the special effect video associated with a first preset input feature may be played when the input feature of the first input matches that first preset input feature; and when the input feature of the first input is a second preset input feature, the at least two special effect videos associated with the target object may be played in the playing order associated with the second preset input feature. That is, the playing of videos with different effects associated with the target object can be flexibly controlled through different inputs, so that the convenience and flexibility of operation can be improved.
Optionally, in some embodiments of the present application, the step 101 may be specifically implemented by a step 101c described below, and the step 102 may be specifically implemented by a step 102c described below.
Step 101c, the first electronic device receives an input of a target object in a first video frame displayed in a playing interface of the target video from a user.
Optionally, in some embodiments of the present application, the first video frame may be any video frame in the target video.
And 102c, the first electronic equipment starts to play the target special effect video from a second video frame of the target special effect video associated with the target object.
The second video frame may be a video frame of a frame number corresponding to the first video frame in the target special effect video, or the second video frame may be a starting video frame of the target special effect video.
Illustratively, the target video includes 100 video frames. When the first electronic device is displaying the 50th video frame of the target video, in which the desk lamp does not cast light on the table, that 50th video frame is the first video frame, and its picture content has not been rendered with special effect parameters. The first electronic device may receive a first input of the user on a target object in the first video frame and, in response to the first input, start to play a target special effect video associated with the target object from the 50th video frame of that target special effect video, in which the desk lamp casts light on the table; that is, this 50th video frame is the second video frame, whose picture content is the same as that of the first video frame but whose picture has been rendered with special effect parameters. Alternatively, the first electronic device may start to play the target special effect video from its 1st video frame, in which the desk lamp casts light on the table.
In some embodiments of the application, the target special effect video can be played from the video frame of the target special effect video, which corresponds to the frame number of the first video frame in the target video, or from the start frame of the target special effect video, so that the diversity and flexibility of playing the target special effect video can be improved.
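A minimal sketch of the two ways of choosing the second video frame described above; the mode names and the 1-based frame numbering are assumptions for illustration.

```kotlin
// Sketch of step 102c: choose the frame of the target special effect video
// from which playback starts. Names are illustrative assumptions.
enum class EffectStartMode { SAME_FRAME_NUMBER, FROM_BEGINNING }

fun effectStartFrame(firstVideoFrameNumber: Int, mode: EffectStartMode): Int =
    when (mode) {
        // second video frame = frame with the same frame number as the first video frame
        EffectStartMode.SAME_FRAME_NUMBER -> firstVideoFrameNumber
        // second video frame = starting video frame of the target special effect video
        EffectStartMode.FROM_BEGINNING -> 1
    }
```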
Optionally, in some embodiments of the present application, the user's requirement may be that, each time the target video and the at least one target special effect video associated with the target object are played, they are played in the same order as last time. In that case, the first electronic device may play the plurality of videos in the order in which the target video and the at least one target special effect video associated with the target object were last played. After step 102, the video playing method provided by the embodiment of the present application may further include steps 103 to 105 described below.
And 103, the first electronic equipment stores the frame number of the second video frame as historical playing information.
Optionally, in some embodiments of the present application, the first electronic device may record the historical playing information as a character string, where the recording format of the character string is [video ID: playback end frame number; video ID: playback end frame number; ...], and write the recorded historical playing information into a preset file in a cache region of the mobile phone.
Illustratively, assuming that the first electronic device receives the input of the user at the 35th frame displayed in the playing interface of the target video and playback ends at the 55th frame of the target special effect video, the historical playing information is [target video: 35; target special effect video: 55], and the recorded character string is written into the preset file in the cache region of the mobile phone.
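Assuming the string format described above, the historical playing information could be encoded and parsed as in the following sketch; the function names and the exact punctuation are illustrative assumptions.

```kotlin
// Sketch of recording/parsing historical playing information in the
// "[video ID: playback end frame; video ID: playback end frame]" format described above.
fun encodeHistory(entries: List<Pair<String, Int>>): String =
    entries.joinToString(separator = "; ", prefix = "[", postfix = "]") { (id, frame) -> "$id: $frame" }

fun decodeHistory(s: String): List<Pair<String, Int>> =
    s.trim().removePrefix("[").removeSuffix("]")
        .split(";")
        .filter { it.isNotBlank() }
        .map { entry ->
            val (id, frame) = entry.split(":").map { it.trim() }
            id to frame.toInt()
        }

// Example from the text: input received at frame 35 of the target video,
// playback of the target special effect video ends at frame 55.
// encodeHistory(listOf("target video" to 35, "target special effect video" to 55))
//   == "[target video: 35; target special effect video: 55]"
```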
And step 104, the first electronic equipment receives a second input of the user.
Optionally, in some embodiments of the present application, the second input includes, but is not limited to, a touch input, a hover input, a preset gesture input, or a voice input, which may be determined according to actual usage requirements, and the embodiments of the present application are not limited.
For example, the second input may be a user click input to a play control in the play interface.
Step 105, in response to the second input, the first electronic device plays the target video according to the pre-stored historical playing information, and, in the process of playing the target video, jumps to the second video frame of the target special effect video associated with the target object to continue playing when the first video frame is reached.
Optionally, in some embodiments of the present application, the first electronic device may find the corresponding file from the cache area, read out the corresponding character string, and perform parsing according to a recording manner of the character string to obtain the historical playing information.
Illustratively, if the historical playing information acquired by the first electronic device is [target video: 35; target special effect video: 55], the first electronic device first plays frames 1 to 35 of the target video, and then plays frames 35 to 55 of the target special effect video.
Optionally, in some embodiments of the application, when the user does not want the first electronic device to play the target video according to the pre-stored historical playing information, the user may click a target control in the playing interface to trigger the first electronic device to clear the historical playing information it has saved, and then play the target video according to the user's own requirements.
For example, as shown in fig. 7, the first electronic device may display a "clear" control in the playing interface 20, and the user may click the "clear" control, which triggers the first electronic device to clear the historical playing information saved by the first electronic device.
In some embodiments of the present application, since the frame number of the second video frame may be stored as the history playing information, and the target video and the target special effect video associated with the target object may be played according to the history playing information, the diversity of playing the target video and the target special effect video associated with the target object may be improved.
Optionally, in some embodiments of the present application, thumbnails of videos may be displayed on the playing interface. In the case where the playing interface includes special effect video thumbnails of at least one special effect video: if the at least one special effect video belongs to the same file, the special effect video thumbnails may be displayed in the order of the at least one special effect video within that file; if the at least one special effect video does not belong to the same file, the special effect video thumbnails may be displayed in the order in which the at least one special effect video was received, or in the order in which the special effect parameters corresponding to the at least one special effect video were set. In this case, the step 101 may be specifically implemented by the following step 101d, and the step 102 may be specifically implemented by the following step 102d.
Step 101d, the first electronic device receives a first input of a user to a target special effect video thumbnail of the at least one special effect video thumbnail.
Wherein, a special effect video thumbnail can be a thumbnail of any video frame of a special effect video.
Optionally, in some embodiments of the present application, the special effect video thumbnail may be displayed in a preset area in the play interface; for example, a special effect video thumbnail may be displayed below the playback interface.
In some embodiments of the present application, the target special effect video thumbnail may be any of the at least one special effect video thumbnail.
And 102d, the first electronic equipment plays the special effect video comprising the video frame corresponding to the target special effect video thumbnail.
In some embodiments of the application, in a case that the playback interface includes a special effect video thumbnail of at least one special effect video, since a first input by a user to a target special effect video thumbnail of the at least one special effect video thumbnail can be received, a special effect video including a video frame corresponding to the target special effect video thumbnail can be played. Therefore, the special-effect video can be played according to the actual requirements of the user.
Optionally, in some embodiments of the present application, the step 102 may be specifically implemented by the following step 102e or step 102f.
Step 102e, in the case that the target video is the original video, the first electronic device switches from the playing interface of the target video to the playing interface of the at least one target special effect video for playback.
In some embodiments of the present application, when the target video is the original video, the target video and the at least one target special effect video exist as separate videos, so this case switches playback between two independent videos.
Step 102f, in the case that the target video is the composite video, the first electronic device jumps from the video frame displayed on the playing interface to a target video frame for playback, where the target video frame is a video frame corresponding to the at least one target special effect video in the composite video.
In some embodiments of the present application, when the target video is a composite video, the target video and the at least one target special effect video are in the same video file; in this case, if a certain special effect video needs to be played, playback jumps to the frames of the corresponding special effect video, so that jumping between different video frames in the same video can be implemented.
In some embodiments of the application, under the condition that the target video is the original video, the playing interface of at least one target special-effect video can be switched from the playing interface of the target video to be played; under the condition that the target video is the composite video, jumping from a video frame displayed on a playing interface to the target video frame to play can be carried out, and the target video frame is a video frame corresponding to at least one target special effect video in the composite video. Thereby improving the flexibility of playing the special-effect video.
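A possible way to express these two playback paths in code, under the assumption of a hypothetical player interface; none of these names come from this application or from any real library.

```kotlin
// Sketch of steps 102e/102f: play the effect content differently depending on whether
// the target video is an original video (separate files) or a composite video
// (one file containing effect segments). The Player interface is a placeholder.
interface Player {
    fun open(videoId: String)      // switch to another video file
    fun seekToFrame(frame: Int)    // jump within the currently opened file
}

fun playTargetEffect(
    player: Player,
    isComposite: Boolean,
    effectVideoId: String,
    effectStartFrameInComposite: Int
) {
    if (!isComposite) {
        // original video: the effect video is an independent file, so switch playing interfaces/files
        player.open(effectVideoId)
    } else {
        // composite video: the effect content is a segment of the same file, so seek to it
        player.seekToFrame(effectStartFrameInComposite)
    }
}
```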
Optionally, in some embodiments of the present application, in a case that the playing interface includes at least one special effect video thumbnail of a special effect video, the video playing method provided in the embodiments of the present application may further include the following steps 106 and 107a; alternatively, step 106 and step 107b.
Step 106, the first electronic device receives a third input of a user to a target special effect video thumbnail in the at least one special effect video thumbnail.
Optionally, in some embodiments of the present application, the number of the target special effect video thumbnails may be one or multiple, which may be specifically determined according to actual use requirements, and the embodiments of the present application are not limited.
Optionally, in some embodiments of the present application, the third input includes, but is not limited to, a touch input, a hover input, a preset gesture input, or a voice input, which may be determined according to actual usage requirements, and the embodiments of the present application are not limited.
And step 107a, under the condition that the target video is the original video, the first electronic equipment responds to a third input and deletes the special effect video corresponding to the special effect video thumbnail.
In some embodiments of the application, since the target video is an original video, that is, the special effect video corresponding to the special effect video thumbnail exists separately from the target video, the first electronic device deletes only the special effect video corresponding to the special effect video thumbnail separately.
And step 107b, under the condition that the target video is the composite video, the first electronic equipment responds to a third input and deletes the video clip with the same content as the special effect video corresponding to the special effect video thumbnail in the composite video.
In some embodiments of the present application, since the target video is a composite video, i.e., the target video and the at least one special effect video are in the same video file, the first electronic device may delete the special effect video corresponding to the special effect video thumbnail from the video file.
In some embodiments of the present application, "the same" may be understood as: the picture content contained in the two videos is the same, and the special effect parameters of the two videos are the same.
In some embodiments of the present application, since in a case where the playback interface includes a special effect video thumbnail of the at least one special effect video, a third input of the user to a target special effect video thumbnail of the at least one special effect video thumbnail may be received; under the condition that the target video is the original video, responding to a third input, and deleting the special effect video corresponding to the special effect video thumbnail; in a case where the target video is a composite video, in response to a third input, a video clip of the composite video having the same content as the special effect video corresponding to the special effect video thumbnail is deleted. Therefore, the user can edit the video file again through the operation of the thumbnail.
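The deletion logic of steps 107a and 107b could look roughly like the following sketch; the file handling and the segment representation are assumptions for illustration only.

```kotlin
// Sketch of steps 107a/107b: delete an effect video selected via its thumbnail.
// For an original video the effect video is a separate file; for a composite
// video the matching segment is removed. All names are assumptions.
fun deleteEffect(
    isComposite: Boolean,
    effectFile: java.io.File,
    compositeSegments: MutableList<Pair<String, IntRange>>, // (effect video id, frame range)
    effectVideoId: String
) {
    if (!isComposite) {
        effectFile.delete()  // delete the standalone effect video file
    } else {
        // remove the segment whose content/effect parameters match the thumbnail's video;
        // re-encoding the composite video without that segment is omitted here
        compositeSegments.removeAll { it.first == effectVideoId }
    }
}
```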
The following describes in detail a video playing method provided in the embodiment of the present application with reference to specific examples.
In some embodiments of the present application, as shown in fig. 2, in the case where the first electronic device displays the playing interface of the target video, the user may perform a click input on the book in the playing interface, and the first electronic device then starts to play a special effect video associated with the book.
In some embodiments of the application, as shown in fig. 2, in the case where the first electronic device displays the playing interface of the target video, the user may, as shown in fig. 3, draw a "cross" gesture 23 on the book identifier in the playing interface; then, as shown in fig. 4, the first electronic device may output a prompt message indicating that the gesture for the target object exists, and start playing the special effect video associated with the gesture.
The embodiment of the application also provides a video shooting method. Fig. 8 illustrates a flow diagram of a video capture method provided by some embodiments of the present application. As shown in fig. 8, the video shooting method provided in the embodiment of the present application may include steps 201 to 203 described below. The method performed by the second electronic device is exemplarily described below.
Step 201, the second electronic device displays a video recording interface of the original video.
The video recording interface may include at least one video recording window of a special effect video associated with at least one photographic subject, the at least one video recording window of the special effect video may be the video recording window associated with the at least one photographic subject in the video recording interface, and special effect parameters of different video recording windows may be different.
Optionally, in some embodiments of the present application, the effect parameter may include: at least one of filters, special effects, music and style, etc.
Optionally, in some embodiments of the present application, the video recording window of the at least one special effect video may be located at a preset position in the video recording interface, or may be moved according to an input of a user during a recording process.
Illustratively, as shown in fig. 9, the second electronic device displays a video recording interface 30 of the original video. The shooting objects in the video recording interface 30 include a book, a desk and a drawer, and the video recording interface 30 further comprises a video recording window 31, a video recording window 32 and a video recording window 33, that is, video recording windows of at least one special effect video. The book corresponds to the video recording window 31, which shows the original video rendered with an oil painting filter, an elongation special effect, "night sky silence" music and a realistic style; the desk corresponds to the video recording window 32, which shows the original video rendered with a smoke and rain grey filter, an elongation special effect, "autumn thoughts" music and a classical style; and the drawer corresponds to the video recording window 33, which shows the original video rendered with a high-grade grey filter, a lighting special effect, "big fish" music and a classical style.
Step 202, in the process of recording the original video, the second electronic device synchronously updates the preview image in each video recording window.
It can be understood that the interface for recording the original video and the video recording window of each of the at least one special effect video display the same picture content, updated synchronously.
Step 203, when the recording of the original video and the at least one special effect video is completed, the second electronic device outputs a target file. The target file may include the original video and the at least one special effect video, or a composite video synthesized from the original video and the at least one special effect video, or the target file may include the original video, the at least one special effect video and the composite video.
Illustratively, after recording for 20 s, the user may trigger the second electronic device to stop recording the video through an input; in that case, the duration of the original video and the duration of the at least one special effect video are both 20 s.
In the video shooting method provided by the embodiments of the application, the video recording window of at least one special effect video can be displayed while the video recording interface of the original video is displayed, the special effect parameters of different video recording windows are different, and the video recording interface and the content in each video recording window are updated synchronously. On the one hand, this makes it convenient for the user to check the recording effect under different effect parameters; on the other hand, the operation of shooting a plurality of effect videos can be simplified.
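As a simple illustration of the possible compositions of the target file output in step 203, the following sketch uses assumed names and a simplified representation of recorded videos; it is not a definitive file format.

```kotlin
// Sketch of the target file of step 203: it may contain the original video,
// the effect videos, the composite video, or combinations of these.
data class RecordedVideo(val path: String, val durationMs: Long)

data class TargetFile(
    val original: RecordedVideo? = null,
    val effectVideos: List<RecordedVideo> = emptyList(),
    val composite: RecordedVideo? = null
)

fun buildTargetFile(
    original: RecordedVideo,
    effects: List<RecordedVideo>,
    composite: RecordedVideo?,
    keepOriginalAndEffects: Boolean
): TargetFile =
    if (composite != null && !keepOriginalAndEffects)
        TargetFile(composite = composite)  // composite video only
    else
        TargetFile(original = original, effectVideos = effects, composite = composite)
```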
Optionally, in some embodiments of the present application, before step 201 described above, the video shooting method provided in the embodiments of the present application may further include steps 204 to 208 described below.
Step 204, the second electronic device receives a first setting input of the target object in the video preview interface from the user.
Optionally, in some embodiments of the present application, the target object may be any object in the video preview interface.
Optionally, in some embodiments of the present application, the first setting input includes, but is not limited to, a touch input, a hover input, a preset gesture input, or a voice input, which may be determined specifically according to actual usage requirements, and the embodiments of the present application are not limited.
And step 205, the second electronic device responds to the first setting input and displays the special effect parameter setting options.
In some embodiments of the present application, the special effects parameter setting option may display a kind of the special effects parameter.
For example, the special effects parameter setting options include, but are not limited to: filter parameters, special effect parameters, background music parameters, video style parameters and the like.
Step 206, the second electronic device receives a second setting input of the special effect parameter setting option from the user.
Wherein the second setting input may be used to set at least one special effect parameter.
Optionally, in some embodiments of the present application, the second setting input includes, but is not limited to, a touch input, a hover input, a preset gesture input, or a voice input, which may be determined according to actual use requirements.
Optionally, in some embodiments of the application, the second setting input may comprise a plurality of sub-inputs, and different sub-inputs may be used to set the same or different special effects parameters.
And step 207, the second electronic device responds to the second setting input, displays the first video recording window, and updates the special effect preview image in the first video recording window according to at least one special effect parameter set by the second setting input.
Optionally, in some embodiments of the present application, the first video recording window may be located at a preset position in the video preview interface, or the first video recording window may be dragged to a position specified by a user according to a requirement of the user.
It is understood that the second electronic device may render the original video using the special effect parameters determined by the second setting input, thereby updating the special effect preview image in the first video recording window.
Illustratively, as shown in fig. 10, the user may perform an input on a book 35 in the video preview interface 34; as shown in fig. 11, the second electronic device then displays special effect parameter setting options 36, namely a "filter" option, a "special effect" option, a "music" option and a "style" option. The user may set the displayed special effect parameter setting options, and after the user has set the special effect parameters, as shown in fig. 12, the second electronic device may display a video recording window 37, that is, the first video recording window, in the video preview interface.
And step 208, the second electronic equipment establishes an association relation among the target object, the special effect parameter set by the second setting input and the first video recording window.
Optionally, in some embodiments of the present application, the second electronic device may establish an association relationship among the target object, the special effect parameter set by the second setting input, and the first video recording window by establishing a mapping relationship or establishing a mapping table.
Optionally, in some embodiments of the present application, the second electronic device may store an association relationship between the target object, the special effect parameter set by the second setting input, and the first video recording window in a data form of a character string or a character.
In some embodiments of the present application, since the special effect parameter of the target object may be set according to the input of the user, after the special effect parameter is set, the special effect under the special effect parameter is displayed in real time through the video recording window, and the association relationship between the target object, the special effect parameter set by the second setting input, and the first video recording window is established. Thereby improving the diversity of video recording effects.
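One way to hold the association relationships established in steps such as 208 is a simple in-memory mapping table, as in the hypothetical sketch below; all names are assumptions, and the input feature field anticipates the gesture associations described later.

```kotlin
// Sketch of an association table between a shooting object, the special effect
// parameters set for it, and its video recording window. Names are assumptions.
data class EffectAssociation(
    val objectId: String,
    val params: Map<String, String>,   // e.g. "filter" to "oil painting"
    val windowId: Int,                 // identifier of the video recording window
    val inputFeature: String? = null   // optional gesture feature associated with the object
)

class AssociationTable {
    private val entries = mutableListOf<EffectAssociation>()

    fun add(entry: EffectAssociation) { entries += entry }

    // All recording windows associated with one shooting object
    fun windowsFor(objectId: String): List<Int> =
        entries.filter { it.objectId == objectId }.map { it.windowId }
}
```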
Optionally, in some embodiments of the present application, after step 206 described above, the video capturing method provided in the embodiments of the present application may further include step 209 described below, and the video capturing method provided in the embodiments of the present application may further include steps 210 to 214.
And step 209, the second electronic equipment establishes an association relationship among the target object, the input feature of the first setting input and the first video recording window.
Wherein the input features may include at least one of: inputting a starting position, an input direction and an input track.
Optionally, in some embodiments of the present application, the second electronic device may establish an association relationship between the target object, the input feature of the first setting input, and the first video recording window by establishing a form of a mapping relationship or establishing a form of a mapping table.
Optionally, in some embodiments of the present application, the second electronic device may store the association relationship between the target object, the input feature of the first setting input, and the first video recording window in a data form of a character string or a character.
And step 210, the second electronic device receives a third setting input of the target object by the user.
Optionally, in some embodiments of the present application, the third setting input includes, but is not limited to, a touch input, a floating input, a preset gesture input, or a voice input, which may be determined specifically according to actual use requirements, and the embodiments of the present application are not limited.
And step 211, the second electronic device responds to the third setting input and displays the special effect parameter setting options.
In some embodiments of the present application, for the description of the special effect parameter setting option, reference may be made to the related description in the foregoing embodiments, and details are not repeated here to avoid repetition.
In some embodiments of the present application, at least one special effect parameter may be set for the target object.
Step 212, the second electronic device receives a fourth setting input of the special effect parameter setting option from the user.
Wherein the fourth setting input may be for setting at least one special effect parameter.
Optionally, in some embodiments of the present application, the fourth setting input includes, but is not limited to, a touch input, a floating input, a preset gesture input, or a voice input, which may be determined specifically according to actual use requirements, and the embodiments of the present application are not limited.
Step 213, the second electronic device displays the second video recording window in response to the fourth setting input, and updates the special effect preview image in the second video recording window according to at least one special effect parameter set by the fourth setting input.
In some embodiments of the present application, for the description of the second video recording window, reference may be made to the description of the first video recording window in the foregoing embodiments, and in order to avoid repetition, details are not repeated here.
Step 214, the second electronic device establishes an association relationship between the target object, the special effect parameter set by the second setting input, the input feature of the third setting input, and the first video recording window.
Optionally, in some embodiments of the present application, the second electronic device may establish an association relationship between the target object, the special effect parameter set by the second setting input, the input feature of the third setting input, and the first video recording window by establishing a form of a mapping relationship or establishing a form of a mapping table.
Optionally, in some embodiments of the present application, the second electronic device may store the association relationship between the target object, the special effect parameter set by the second setting input, the input feature of the third setting input, and the first video recording window in a data form of a character string or a character.
Optionally, in some embodiments of the present application, the target object may be associated with at least one special effect recording window through different gestures, that is, the same object may be associated with at least one special effect video.
In some embodiments of the present application, the special effect parameter may be set again for the target object, that is, the same object may be associated with a plurality of special effect videos, so that diversity of the target object associated with the special effect videos may be improved.
Optionally, in some embodiments of the present application, before step 201 described above, the video shooting method provided in the embodiments of the present application may further include steps 215 and 216 described below.
Step 215, the second electronic device receives a fifth setting input of the user.
Optionally, in some embodiments of the present application, the fifth setting input includes, but is not limited to, a touch input, a hover input, a preset gesture input, or a voice input, which may be determined according to actual usage requirements, and the embodiments of the present application are not limited.
Step 216, the second electronic device determines the playing sequence of the special effect video in response to the fifth setting input, and establishes an association relationship between the input feature of the fifth setting input and the playing sequence.
Optionally, in some embodiments of the present application, the second electronic device may establish an association relationship between the input feature of the fifth setting input and the playing order by establishing a form of a mapping relationship or establishing a form of a mapping table.
Optionally, in some embodiments of the present application, the second electronic device may store the association relationship between the input feature of the fifth setting input and the play order in a data form of a character string or a character.
Optionally, in some embodiments of the present application, the second electronic device may preset a playing order of the special effect videos and associate the input feature of the fifth setting input with the playing order, so that, when the videos need to be played, if the user's input matches the input feature of the fifth setting input, the second electronic device plays the videos in the preset playing order of the special effect videos.
In some embodiments of the application, since the association relationship between the fifth setting input and the determined playing sequence of the special-effect video can be established, a user can realize video playing control of different playing effects through specific input, and the flexibility of playing control of the special-effect video and the diversity of the video playing sequence are improved.
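For illustration only, the association between an input feature and a playing order described in step 216 could be kept in a small lookup structure such as the one below; PlayOrderTable and the matches predicate are hypothetical names, and the actual gesture-matching logic is not specified here.

```kotlin
// Hypothetical sketch: associating an input feature with a preset playing order and
// resolving it at playback time. F stands for whatever input-feature type the device uses.
class PlayOrderTable<F>(private val matches: (stored: F, input: F) -> Boolean) {
    private val orders = mutableListOf<Pair<F, List<String>>>()

    // Establish the association relationship between an input feature and a playing order.
    fun associate(feature: F, playOrder: List<String>) {
        orders.add(feature to playOrder)
    }

    // Return the preset playing order whose stored feature matches the user input,
    // or null if none matches (normal playback).
    fun resolve(input: F): List<String>? =
        orders.firstOrNull { matches(it.first, input) }?.second
}
```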
Optionally, in some embodiments of the present application, the video shooting method provided in the embodiments of the present application may further include steps 217 and 218 described below.
And step 217, the second electronic equipment performs object recognition on the preview image displayed on the video recording interface.
Optionally, in some embodiments of the present application, the second electronic device may perform object recognition on the preview image displayed on the video recording interface through an AI technology.
Step 218, the second electronic device displays the object identification of each identified object.
Optionally, in some embodiments of the present application, the second electronic device may mark the recognized objects in the form of numbers and text.
Illustratively, as shown in fig. 13, the second electronic device performs object recognition on the preview image displayed on the video recording interface, and then, as shown in fig. 14, the second electronic device displays an object identifier of each recognized object: the identifier corresponding to the book is "1 book", the identifier corresponding to the desk lamp is "2 desk lamp", the identifier corresponding to the desk is "3 desk", the identifier corresponding to the chair is "4 chair", and the identifier corresponding to the drawer is "5 drawer".
In some embodiments of the present application, since the objects in the preview image can be identified and marked, the user can conveniently set the object for which the special effect parameter needs to be set.
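As a hypothetical sketch of steps 217 and 218, the recognized objects could be turned into the numbered identifiers shown in fig. 14 as follows; the Recognizer interface stands in for whatever AI detection component the device actually uses and is not part of the disclosure.

```kotlin
// Hypothetical sketch: building object identifiers such as "1 book", "2 desk lamp".
interface Recognizer {
    fun recognizeObjects(previewFrame: ByteArray): List<String>  // e.g. ["book", "desk lamp", ...]
}

fun buildObjectIdentifiers(recognizer: Recognizer, frame: ByteArray): List<String> =
    recognizer.recognizeObjects(frame)
        .mapIndexed { index, name -> "${index + 1} $name" }      // numbers and text, in detection order
```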
Optionally, in some embodiments of the present application, the step 203 may be specifically implemented by the following step 203 a.
Step 203a, under the condition that the original video and the at least one special-effect video are recorded completely, the second electronic device performs video synthesis on the original video and the at least one special-effect video obtained by recording according to the special-effect parameter setting sequence of the at least one special-effect video before recording, and outputs a synthesized video.
Optionally, in some embodiments of the present application, the special effect parameter setting order may be an order in which the user sets the special effect parameter for the at least one photographic subject.
Optionally, in some embodiments of the application, the second electronic device may place the original video at the front of the composite video, or may place the original video at the back of the composite video.
In some embodiments of the application, the original video and the at least one special effect video obtained by recording can be subjected to video synthesis according to the special effect parameter setting sequence of the at least one special effect video before recording, so that the videos can be recorded according to the special effect parameter setting sequence set by a user, and further videos meeting the recording effect requirements of the user can be obtained.
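For illustration only, step 203a could be sketched as follows; Clip and concatenate are hypothetical placeholders, and the actual video composition pipeline is not specified by this sketch.

```kotlin
// Hypothetical sketch of step 203a: compose the clips in special-effect-parameter-setting order,
// with the original video placed either at the front or at the back of the composite video.
data class Clip(val path: String, val settingOrder: Int)

fun composeInSettingOrder(
    original: Clip,
    effectClips: List<Clip>,
    originalFirst: Boolean,
    concatenate: (List<Clip>) -> String   // placeholder for the actual composition step
): String {
    val ordered = effectClips.sortedBy { it.settingOrder }
    val sequence = if (originalFirst) listOf(original) + ordered else ordered + listOf(original)
    return concatenate(sequence)          // returns the path of the composite video
}
```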
Optionally, in some embodiments of the present application, the step 202 may be specifically implemented by the following step 202 a.
Step 202a, in the process of recording the original video, the second electronic device copies each frame of preview image displayed in the video recording interface to each video recording window frame by frame, and performs image processing on each copied frame of preview image according to the special effect parameters associated with each video recording window.
And under the condition that the number of the video recording windows is at least two, for each frame of preview image, the number of preview images copied each time is the same as the number of the video recording windows.
Optionally, in some embodiments of the application, the second electronic device may acquire memory space data of each frame of preview image, apply for n memory spaces of corresponding sizes according to the n video recording windows of the at least one special-effect video, read all data in the memory space of the original video frame, and copy and fill the read data into the corresponding n memory spaces one by one in a thread manner.
Illustratively, as shown in fig. 9, the video recording interface 30 includes 3 video recording windows, which are: a recording window corresponding to the book, a recording window corresponding to the desk, and a recording window corresponding to the drawer. The second electronic device can copy each frame of preview image displayed in the video recording interface twice, frame by frame, through a thread 1, and display the copies in the recording window corresponding to the book and the recording window corresponding to the desk respectively; and copy each frame of preview image displayed in the video recording interface once, frame by frame, through a thread 2, and display the copy in the recording window corresponding to the drawer. That is, the copying of the preview image by each thread can be executed simultaneously through multi-thread parallel copying, and a time difference within a certain range, such as 200-400 milliseconds, can be allowed, so that the copying efficiency of each frame of preview image can be ensured.
In some embodiments of the application, each frame of preview image displayed in the video recording interface can be copied frame by frame according to the number of video recording windows, so that resource waste can be avoided.
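A minimal sketch of step 202a, under the assumption that one copy of each preview frame is made per recording window and processed on a thread pool, might look like the following; FrameDistributor and applyEffect are hypothetical names, and the memory-space management described above is not reproduced here.

```kotlin
import java.util.concurrent.Executors

// Hypothetical sketch: copy each preview frame once per recording window and apply the
// special effect parameters associated with that window on a worker thread.
class FrameDistributor<P>(private val windowParams: Map<Int, P>) {
    private val pool = Executors.newFixedThreadPool(windowParams.size.coerceAtLeast(1))

    fun onPreviewFrame(frame: ByteArray, applyEffect: (windowId: Int, copy: ByteArray, params: P) -> Unit) {
        windowParams.forEach { (windowId, params) ->
            val copy = frame.copyOf()                             // one copy per video recording window
            pool.execute { applyEffect(windowId, copy, params) }  // copies processed in parallel
        }
    }

    fun release() = pool.shutdown()
}
```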
Optionally, in some embodiments of the present application, during the recording of the original video, the video shooting method provided in the embodiments of the present application may further include the following steps 219 and 220.
In step 219, the second electronic device receives a sixth setting input of the third video recording window from the user.
Optionally, in some embodiments of the present application, the third video recording window may be any one of the video recording windows of the at least one special-effect video, and the number of third video recording windows may be one or more.
Optionally, in some embodiments of the application, the sixth setting input includes, but is not limited to, a touch input, a hover input, a preset gesture input, or a voice input, which may be determined according to actual usage requirements, and the embodiments of the application are not limited.
Step 220, the second electronic device responds to the sixth setting input, updates the special effect parameter of the third video recording window, and updates the special effect preview image in the third video recording window according to the updated special effect parameter.
In some embodiments of the present application, the sixth setting input may specifically be to modify a special effect parameter corresponding to the third video recording window.
In some embodiments of the application, after the user modifies the special effect parameter corresponding to the third video recording window, the second electronic device may render the image displayed in the third video recording window according to the latest special effect parameter corresponding to the third video recording window.
Illustratively, before the sixth setting input, the special effect parameters corresponding to the third video recording window are: an oil painting filter, a lengthening special effect, a music parameter of "Silence in the Night Sky", and a writing style parameter. The user may modify the special effect parameters corresponding to the third video recording window, that is, perform the sixth setting input. In response to the user's input, the electronic device may update the special effect parameters corresponding to the third video recording window to: a "smoke, rain and ash" filter, a lengthening special effect, a music parameter of "Autumn Thoughts", and a classical style parameter, and update the special effect preview image in the third video recording window according to the updated special effect parameters. In some embodiments of the application, in the process of recording a video, the special effect parameters corresponding to the third video recording window may be modified and updated, and the special effect preview image in the third video recording window may be updated according to the updated special effect parameters, so that the flexibility of shooting a special effect video can be improved.
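For illustration only, the effect of step 220 can be sketched as a parameter table that the rendering path always reads at its latest value, so that frames copied to the third video recording window after the sixth setting input use the new parameters; LiveWindowParams is a hypothetical name.

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Hypothetical sketch: special effect parameters replaced mid-recording take effect on
// all subsequent frames rendered into the corresponding video recording window.
class LiveWindowParams<P : Any>(initial: Map<Int, P>) {
    private val params = ConcurrentHashMap(initial)

    fun update(windowId: Int, newParams: P) {
        params[windowId] = newParams                      // the sixth setting input updates the parameters
    }

    fun render(windowId: Int, frame: ByteArray, apply: (ByteArray, P) -> Unit) {
        params[windowId]?.let { apply(frame, it) }        // always renders with the most recent parameters
    }
}
```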
Optionally, in some embodiments of the present application, the video recording interface includes a first photographic subject and a second photographic subject, the first photographic subject is associated with the fourth video recording window, the second photographic subject is associated with the fifth video recording window, and the video shooting method provided in the embodiments of the present application may further include steps 221 and 222 described below.
Step 221, the second electronic device receives a seventh setting input of the first photographic subject and the second photographic subject by the user.
Wherein the seventh setting input may be for exchanging special effects recording windows associated with the first photographic subject and the second photographic subject.
Optionally, in some embodiments of the present application, the first photographic subject and the second photographic subject are respectively any one of the at least one photographic subject, and the first photographic subject and the second photographic subject are different photographic subjects.
Optionally, in some embodiments of the present application, the seventh setting input includes, but is not limited to, a touch input, a hover input, a preset gesture input, or a voice input, which may be determined according to actual usage requirements, and the embodiments of the present application are not limited.
Step 222, the second electronic device responds to the seventh setting input, establishes an association relationship between the first shot object and the fifth video recording window, and establishes an association relationship between the second shot object and the fourth video recording window.
For example, as shown in fig. 15, assuming that the first photographic subject book is associated with the fourth video recording window 38 and the second photographic subject table is associated with the fifth video recording window 39, the user may operate the "replace mark" control and then click on the fourth video recording window 38; then, as shown in fig. 16, the first photographic subject book is associated with the fifth video recording window 39, and the second photographic subject table is associated with the fourth video recording window 38.
Optionally, in some embodiments of the present application, the second electronic device may establish an association relationship between the first photographic object and the fifth video recording window and an association relationship between the second photographic object and the fourth video recording window by establishing a mapping relationship or establishing a mapping table.
Optionally, in some embodiments of the present application, the second electronic device may store the association relationship between the first photographic subject and the fifth video recording window and the association relationship between the second photographic subject and the fourth video recording window in a data form of a character string or a character.
In some embodiments of the present application, since the video recording windows associated between different objects can be exchanged by the input of the user, the flexibility between the objects and the video windows can be improved.
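As a hypothetical sketch of step 222, exchanging the recording windows associated with two photographic subjects amounts to swapping two entries in the association table; the function below is illustrative only.

```kotlin
// Hypothetical sketch: swap the recording windows associated with two photographic subjects,
// as in the fig. 15 / fig. 16 example (book <-> window 39, table <-> window 38 after the swap).
fun swapWindows(associations: MutableMap<String, Int>, firstSubject: String, secondSubject: String) {
    val first = associations[firstSubject]
    val second = associations[secondSubject]
    if (first != null && second != null) {
        associations[firstSubject] = second
        associations[secondSubject] = first
    }
}
```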
The following describes in detail a video shooting method provided in an embodiment of the present application with reference to specific examples.
In some embodiments of the present application, as shown in fig. 13, when the second electronic device displays the video preview interface, the photographic subjects in the video preview interface are recognized by an AI technique, and as shown in fig. 14, the second electronic device displays an object identifier of each recognized object, which are respectively: 1 book, 2 desk lamp, 3 desk, 4 chair and 5 drawer. The user can click on the photographic subject book, and as shown in fig. 11, special effect parameter setting options are displayed, which are: the "filter" option, the "special effect" option, the "music" option, and the "style" option; after the user sets the special effect parameters of the book, as shown in fig. 12, the second electronic device displays a video recording window corresponding to the book in the video preview interface.
In some embodiments of the application, as shown in fig. 13, when the second electronic device displays the video preview interface, the photographic subjects in the video preview interface are recognized through an AI technology, and as shown in fig. 14, the second electronic device displays an object identifier of each recognized object, which are respectively: 1 book, 2 desk lamp, 3 desk, 4 chair and 5 drawer. The user can click on the photographic subject book; as shown in fig. 17, the second electronic device displays a setting window 40, and the user can operate an "edit" control in the setting window 40; as shown in fig. 18, the user can set a gesture "W"; as shown in fig. 17, the user can click the "set" control to set a special effect parameter corresponding to the gesture, and after the setting is completed, the second electronic device can establish an association relationship among the photographic subject book, the gesture "W" and the set special effect parameter. Then, as shown in fig. 17, the user may click the "+" control in the setting window; as shown in fig. 19, a "1_gesture 2" entry is displayed in the setting window, and the user may operate the "edit" control corresponding to "1_gesture 2"; as shown in fig. 20, the user may set a gesture "√"; as shown in fig. 17, the user may click the "set" control to set a special effect parameter corresponding to the gesture, and after the setting is completed, the second electronic device may establish an association relationship among the photographic subject book, the gesture "√" and the set special effect parameter.
It should be noted that, in the video playing method provided in the embodiment of the present application, the execution subject may be a video playing device, or a control module in the video playing device for executing the video playing method. In the embodiment of the present application, a video playing device executing the video playing method is taken as an example to describe the video playing device provided in the embodiment of the present application.
With reference to fig. 21, an embodiment of the present application provides a video playback apparatus 400, where the video playback apparatus 400 may include: a receiving module 401 and a playing module 402; the receiving module 401 is configured to receive a first input of a user for a target object in a playing interface of a target video, where at least one shooting object in the target video is associated with at least one special effect video, the target video includes an original video or a synthesized video, the synthesized video is obtained by synthesizing the original video and the special effect video associated with the at least one shooting object in the original video, the target object is at least one shooting object selected by the user from the at least one shooting object, and special effect parameters of different special effect videos are different; a playing module 402, configured to play at least one target special effect video associated with the target object in response to the first input received by the receiving module 401.
Optionally, in this embodiment of the application, the receiving module 401 is specifically configured to receive an input of a user to an area where the target object is located; or, in the case that the playing interface includes object identifiers associated with the photographic objects, receive a user input on the object identifier of the target object, wherein one object identifier indicates one shooting object.
Optionally, in this embodiment of the present application, the target object is associated with at least two special effect videos; the playing module 402 is specifically configured to play the special effect video associated with the first preset input feature when each special effect video is associated with one preset input feature and the input feature of the first input is matched with the first preset input feature; wherein the preset input features include at least one of: inputting an initial position, an input direction and an input track; the playing module 402 is specifically configured to play the at least two special effect videos associated with the target object according to a playing sequence associated with the second preset input feature when the input feature of the first input is the second preset input feature.
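For illustration only, the dispatch performed by the playing module 402 on the first input could be sketched as below; the matches predicate and all names are hypothetical, and F stands for whatever input-feature type the device uses.

```kotlin
// Hypothetical sketch: if the first input matches the preset feature of a single special effect
// video, play that video; if it matches the second preset input feature, play the associated
// videos in the playing order associated with that feature.
fun <F> dispatchFirstInput(
    input: F,
    perVideoFeatures: Map<String, F>,            // effect video id -> its preset input feature
    orderedPlayback: Pair<F, List<String>>?,     // second preset input feature -> playing order
    matches: (stored: F, input: F) -> Boolean,
    play: (List<String>) -> Unit
) {
    val single = perVideoFeatures.entries.firstOrNull { matches(it.value, input) }
    if (single != null) {
        play(listOf(single.key))
        return
    }
    if (orderedPlayback != null && matches(orderedPlayback.first, input)) {
        play(orderedPlayback.second)
    }
}
```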
Optionally, in this embodiment of the present application, the receiving module 401 is specifically configured to receive an input of a user to a target object in a first video frame displayed in a playing interface of a target video; a playing module 402, specifically configured to play the target special effect video from a second video frame of one target special effect video associated with the target object; the second video frame is a video frame of a frame number corresponding to the first video frame in the target special effect video, or the second video frame is an initial video frame of the target special effect video.
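A brief, hypothetical sketch of how the second video frame could be chosen is given below; the frameAligned flag is an assumption used only to distinguish the two cases described above.

```kotlin
// Hypothetical sketch: start the target special effect video either from the frame whose number
// corresponds to the tapped first video frame, or from its initial video frame.
fun startFrame(tappedFrameNumber: Int, effectVideoFrameCount: Int, frameAligned: Boolean): Int =
    if (frameAligned && tappedFrameNumber < effectVideoFrameCount) tappedFrameNumber else 0
```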
Optionally, in this embodiment of the present application, the video playing apparatus 400 further includes: a storage module; a storage module, configured to store, as history play information, a frame number of a second video frame after the receiving module 401 receives a first input of a user to a target object in a play interface of a target video; the receiving module 401 is further configured to receive a second input of the user; the playing module 402 is further configured to, in response to the second input received by the receiving module 401, play the target video according to the pre-stored historical playing information, and in the process of playing the target video, jump to a second video frame of a target special effect video associated with the target object to continue playing under the condition of playing to the first video frame.
Optionally, in this embodiment of the application, in a case that the playing interface includes at least one special effect video thumbnail of a special effect video, the receiving module 401 is specifically configured to receive a first input of a target special effect video thumbnail in the at least one special effect video thumbnail from a user; the method comprises the following steps that a special effect video thumbnail is a thumbnail of any video frame of a special effect video; the playing module 402 is specifically configured to play a special effect video including a video frame corresponding to the target special effect video thumbnail.
Optionally, in this embodiment of the application, the playing module 402 is specifically configured to switch a playing interface of at least one target special-effect video from a playing interface of a target video to play the target video when the target video is an original video; the playing module 402 is specifically configured to skip from a video frame displayed on the playing interface to a target video frame for playing when the target video is a composite video, where the target video frame is a video frame corresponding to at least one target special-effect video in the composite video.
Optionally, in this embodiment of the present application, the video playing apparatus 400 further includes: a processing module; the receiving module 401 is further configured to receive a third input of a target special effect video thumbnail in the at least one special effect video thumbnail from the user, in a case that the playing interface includes at least one special effect video thumbnail of the special effect video; the processing module is used for responding to a third input and deleting the special effect video corresponding to the special effect video thumbnail under the condition that the target video is the original video; and the processing module is also used for responding to a third input and deleting a video segment with the same content as the special effect video corresponding to the special effect video thumbnail in the synthesized video under the condition that the target video is the synthesized video.
In the video playing device provided by the embodiment of the application, at least one shooting object included in the target video is associated with at least one special effect video, so that the at least one target special effect video associated with the target object can be played according to the input of the user to the target object in the playing interface of the target video, and the target video is not played with a fixed playing effect, so that the special effect of the video can be enriched, and the diversity of the video playing effect is improved.
It should be noted that, in the video shooting method provided in the embodiment of the present application, the execution subject may be a video shooting device, or a control module in the video shooting device for executing the video shooting method. In the embodiment of the present application, a video shooting device executing the video shooting method is taken as an example to describe the video shooting device provided in the embodiment of the present application.
With reference to fig. 22, an embodiment of the present application provides a video camera 500, where the video camera 500 may include: a display module 501, an update module 502 and a processing module 503; the display module 501 is configured to display a video recording interface of an original video, where the video recording interface includes at least one video recording window of at least one special-effect video associated with at least one shooting object, the at least one video recording window of the special-effect video is a video recording window associated with at least one shooting object in the video recording interface, and special-effect parameters of different video recording windows are different; an updating module 502, configured to update the preview image in each video recording window synchronously in the process of recording the original video; a processing module 503, configured to output a target file when the original video and the at least one special-effect video are recorded; the target file comprises an original video and at least one special effect video, or the target file comprises a composite video which is obtained by combining the original video and the at least one special effect video, or the target file comprises the original video, the at least one special effect video and the composite video.
Optionally, in this embodiment of the present application, the video capturing apparatus 500 further includes: a receiving module; a receiving module, configured to receive a first setting input of a target object in a video preview interface from a user before the display module 501 displays a video recording interface of an original video; a display module 501, further configured to display a special effect parameter setting option in response to the first setting input received by the receiving module; the receiving module is also used for receiving a second setting input of the user for the special effect parameter setting options, and the second setting input is used for setting at least one special effect parameter; the display module 501 is further configured to display a first video recording window in response to the second setting input received by the receiving module; the updating module 502 is further configured to update the special effect preview image in the first video recording window according to at least one special effect parameter set by the second setting input in response to the second setting input received by the receiving module; the processing module 503 is further configured to establish an association relationship between the target object, the special effect parameter set by the second setting input, and the first video recording window.
Optionally, in this embodiment of the present application, the processing module 503 is further configured to, after the receiving module receives a second setting input of the special effect parameter setting interface from the user, establish an association relationship between the target object, an input feature of the first setting input, and the first video recording window; wherein the input features include at least one of: inputting an initial position, an input direction and an input track; the receiving module is also used for receiving a third setting input of the target object by the user; the display module 501 is further configured to display a special effect parameter setting option in response to the third setting input received by the receiving module; the receiving module is also used for receiving a fourth setting input of the special effect parameter setting option by the user, and the fourth setting input is used for setting at least one special effect parameter; the display module 501 is further configured to display a second video recording window in response to a fourth setting input received by the receiving module; the updating module 502 is further configured to respond to a fourth setting input received by the receiving module, and update the special effect preview image in the second video recording window according to at least one special effect parameter set by the fourth setting input; the processing module 503 is further configured to establish an association relationship among the target object, the special effect parameter set by the second setting input, the input feature input by the third setting input, and the first video recording window.
Optionally, in this embodiment of the present application, the video capturing apparatus 500 further includes: a receiving module; a receiving module, configured to receive a fifth setting input of the user before the display module 501 displays the video recording interface of the original video; the processing module 503 is configured to determine a playing order of the special effect video in response to the fifth setting input received by the receiving module, and establish an association relationship between an input feature of the fifth setting input and the playing order.
Optionally, in this embodiment of the present application, the video capturing apparatus 500 further includes: an identification module; the identification module is used for carrying out object identification on the preview image displayed on the video recording interface; the display module 501 is further configured to display an object identifier of each identified object.
Optionally, in this embodiment of the application, the processing module 503 is specifically configured to, when the original video and the at least one special effect video are recorded completely, perform video synthesis on the recorded original video and the at least one special effect video according to a special effect parameter setting sequence of the at least one special effect video before recording, and output a synthesized video.
Optionally, in this embodiment of the application, the updating module 502 is specifically configured to copy each frame of preview image displayed in the video recording interface to each video recording window frame by frame, and perform image processing on each copied frame of preview image according to a special effect parameter associated with each video recording window; and under the condition that the number of the video recording windows is at least two, the number of the preview images copied each time is the same as the number of the video recording windows for each frame of preview image.
Optionally, in this embodiment of the present application, the video capturing apparatus 500 further includes: a receiving module; the receiving module is used for receiving the sixth setting input of the user to the third video recording window in the process of recording the original video; the updating module 502 is further configured to update the special effect parameter of the third video recording window in response to the sixth setting input received by the receiving module, and update the special effect preview image in the third video recording window according to the updated special effect parameter.
Optionally, in this embodiment of the present application, the video recording interface includes a first shooting object and a second shooting object, the first shooting object is associated with the fourth video recording window, and the second shooting object is associated with the fifth video recording window; the video camera 500 further includes: a receiving module; the receiving module is used for receiving seventh setting input of a user to the first shooting object and the second shooting object, and the seventh setting input is used for exchanging special effect recording windows related to the first shooting object and the second shooting object; the processing module 503 is further configured to establish an association relationship between the first photographic subject and the fifth video recording window, and establish an association relationship between the second photographic subject and the fourth video recording window, in response to the seventh setting input received by the receiving module.
In the video shooting device provided by the embodiment of the application, at least one video recording window of a special-effect video can be displayed while the video recording interface of an original video is displayed, the special-effect parameters of different video recording windows are different, and the video recording interface and the content in each video recording window are updated synchronously, so that a user can conveniently check the recording effect under different effect parameters on one hand, and the operation of shooting a plurality of effect videos can be simplified on the other hand.
The video playing device and the video shooting device in the embodiment of the application may be the same device or different devices.
The video playing device or the video shooting device in the embodiment of the present application may be an electronic device, and may also be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be, for example, a Mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not limited in particular.
The video playing device or the video shooting device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiment of the present application.
The video playing device or the video shooting device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 and fig. 8, and for avoiding repetition, details are not repeated here.
As shown in fig. 23, an electronic device 600 is further provided in the embodiment of the present application, and includes a processor 602, a memory 601, and a program or an instruction stored in the memory 601 and capable of being executed on the processor 602, where the program or the instruction, when executed by the processor 602, implements each process of the foregoing video playing method embodiment or video shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
As shown in FIG. 24, the electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power supply (e.g., a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 24 does not constitute a limitation to the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
In the case that the electronic device 1000 is the first electronic device described in the embodiment, the user input unit 1007 is configured to receive a first input of a user to a target object in a playing interface of a target video, where at least one shooting object in the target video is associated with at least one special effect video, the target video includes an original video or a composite video, the composite video is obtained by combining the original video and a special effect video associated with at least one shooting object in the original video, the target object is at least one shooting object selected by the user in the at least one shooting object, and special effect parameters of different special effect videos are different; a processor 1010, configured to play at least one target special effect video associated with a target object in response to a first input received by the user input unit 1007.
Optionally, in this embodiment of the application, the user input unit 1007 is specifically configured to receive an input of a user to an area where a target object is located; or, in the case that the playing interface includes object identifiers associated with the targets, receiving user input of object identifiers of the target objects, wherein one object identifier indicates one shooting object.
Optionally, in this embodiment of the present application, the target object is associated with at least two special effect videos; the processor 1010 is specifically configured to play the special effect video associated with the first preset input feature when each special effect video is associated with one preset input feature and the input feature of the first input is matched with the first preset input feature; wherein the preset input features include at least one of: inputting an initial position, an input direction and an input track; the processor 1010 is specifically configured to, when the input feature of the first input is a second preset input feature, play the at least two special effect videos associated with the target object according to the play sequence associated with the second preset input feature.
Optionally, in this embodiment of the present application, the user input unit 1007 is specifically configured to receive an input of a user to a target object in a first video frame displayed in a playing interface of a target video; a processor 1010, configured to start playing a target special effect video from a second video frame of the target special effect video associated with the target object; the second video frame is a video frame of a frame number corresponding to the first video frame in the target special effect video, or the second video frame is an initial video frame of the target special effect video.
Optionally, in this embodiment of the present application, the memory 1009 is configured to store, after the user input unit 1007 receives a first input of a user to a target object in a play interface of a target video, a frame number of a second video frame as history play information; a user input unit 1007, further configured to receive a second input by the user; the processor 1010 is further configured to, in response to a second input received by the user input unit 1007, play the target video according to the pre-stored historical play information, and in the process of playing the target video, jump to a second video frame of a target special effect video associated with the target object to continue playing in the case of playing to the first video frame.
Optionally, in this embodiment of the application, in a case that the playing interface includes at least one special effect video thumbnail of a special effect video, the user input unit 1007 is specifically configured to receive a first input of a target special effect video thumbnail in the at least one special effect video thumbnail from a user; wherein, one special effect video thumbnail is a thumbnail of any video frame of one special effect video; the processor 1010 is specifically configured to play a special effect video including a video frame corresponding to the target special effect video thumbnail.
Optionally, in this embodiment of the application, the processor 1010 is specifically configured to switch a playing interface of at least one target special-effect video from a playing interface of a target video to play the target video when the target video is an original video; the processor 1010 is specifically configured to jump from a video frame displayed on the play interface to a target video frame for playing when the target video is the composite video, where the target video frame is a video frame corresponding to at least one target special-effect video in the composite video.
Optionally, in this embodiment of the application, in a case that the playing interface includes at least one special effect video thumbnail of a special effect video, the user input unit 1007 is further configured to receive a third input of a target special effect video thumbnail in the at least one special effect video thumbnail from the user; the processor 1010 is further configured to delete the special effect video corresponding to the special effect video thumbnail in response to a third input in a case where the target video is an original video; the processor 1010 is further configured to delete a video clip of the same content as the special effect video corresponding to the special effect video thumbnail in the composite video in response to a third input in a case where the target video is the composite video.
In the electronic device provided by the embodiment of the application, at least one shooting object included in the target video is associated with at least one special effect video, so that the at least one target special effect video associated with the target object can be played according to the input of the user to the target object in the playing interface of the target video, and the target video is not played with a fixed playing effect, so that the special effect of the video can be enriched, and the diversity of video playing effects is improved.
In a case that the electronic device 1000 is a second electronic device described in the embodiments, the display unit 1006 is configured to display a video recording interface of an original video, where the video recording interface includes at least one video recording window of a special-effect video associated with at least one shooting object, the at least one video recording window of the special-effect video is a video recording window associated with at least one shooting object in the video recording interface, and special-effect parameters of different video recording windows are different; a processor 1010, configured to update the preview image in each video recording window synchronously during the process of recording the original video; the processor 1010 is further configured to output a target file when the recording of the original video and the at least one special-effect video is completed; the target file comprises an original video and at least one special effect video, or the target file comprises a composite video which is obtained by combining the original video and the at least one special effect video, or the target file comprises the original video, the at least one special effect video and the composite video.
Optionally, in this embodiment, the user input unit 1007 is configured to receive a first setting input of a target object in a video preview interface from a user before the display unit 1006 displays a video recording interface of an original video; a display unit 1006, further configured to display a special effect parameter setting option in response to a first setting input received by the user input unit 1007; a user input unit 1007, further configured to receive a second setting input for the special effect parameter setting option from the user, where the second setting input is used to set at least one special effect parameter; a display unit 1006, further configured to display a first video recording window in response to a second setting input received by the user input unit 1007; the processor 1010 is further configured to update the special effect preview image in the first video recording window according to at least one special effect parameter set by a second setting input in response to the second setting input received by the user input unit 1007; the processor 1010 is further configured to establish an association relationship between the target object, the special effect parameter set by the second setting input, and the first video recording window.
Optionally, in this embodiment of the application, the processor 1010 is further configured to establish an association relationship between the target object, an input feature of the first setting input, and the first video recording window after the user input unit 1007 receives a second setting input to the special effect parameter setting interface from the user; wherein the input features include at least one of: inputting an initial position, an input direction and an input track; a user input unit 1007 also used for receiving a third setting input of the target object by the user; a display unit 1006, further configured to display special effect parameter setting options in response to a third setting input received by the user input unit 1007; a user input unit 1007, further configured to receive a fourth setting input for setting options for special effect parameters by the user, where the fourth setting input is used to set at least one special effect parameter; a display unit 1006, further configured to display a second video recording window in response to a fourth setting input received by the user input unit 1007; the processor 1010 is further configured to update the special effect preview image in the second video recording window according to at least one special effect parameter set by a fourth setting input in response to the fourth setting input received by the user input unit 1007; the processor 1010 is further configured to establish an association relationship between the target object, the special effect parameter set by the second setting input, the input feature set by the third setting input, and the first video recording window.
Optionally, in this embodiment of the application, the user input unit 1007 is configured to receive a fifth setting input from the user before the display unit 1006 displays the video recording interface of the original video; a processor 1010, configured to determine a playing order of the special effect video in response to a fifth setting input received by the user input unit 1007, and establish an association between an input characteristic of the fifth setting input and the playing order.
Optionally, in this embodiment of the application, the processor 1010 is further configured to perform object identification on a preview image displayed on the video recording interface; the display unit 1006 is further configured to display an object identifier of each identified object.
Optionally, in this embodiment of the application, the processor 1010 is specifically configured to, when the original video and the at least one special-effect video are recorded completely, perform video synthesis on the recorded original video and the at least one special-effect video according to a special-effect parameter setting sequence of the at least one special-effect video before recording, and output a synthesized video.
Optionally, in this embodiment of the application, the processor 1010 is specifically configured to copy each frame of preview image displayed in the video recording interface to each video recording window frame by frame, and perform image processing on each copied frame of preview image according to a special effect parameter associated with each video recording window; and under the condition that the number of the video recording windows is at least two, for each frame of preview image, the number of the preview images copied each time is the same as the number of the video recording windows.
Optionally, in this embodiment of the present application, the user input unit 1007 is configured to receive a sixth setting input to the third video recording window by the user in the process of recording the original video; the processor 1010 is further configured to update the special effect parameter of the third video recording window in response to a sixth setting input received by the user input unit 1007, and update the special effect preview image in the third video recording window according to the updated special effect parameter.
Optionally, in this embodiment of the application, the video recording interface includes a first shooting object and a second shooting object, the first shooting object is associated with the fourth video recording window, and the second shooting object is associated with the fifth video recording window; the user input unit 1007 is configured to receive a seventh setting input of the first photographic subject and the second photographic subject from the user, where the seventh setting input is used to exchange special effect recording windows associated with the first photographic subject and the second photographic subject; the processor 1010 is further configured to establish an association relationship between the first photographic subject and the fifth video recording window, and establish an association relationship between the second photographic subject and the fourth video recording window, in response to a seventh setting input received by the user input unit 1007.
In the electronic device provided by the embodiment of the application, at least one video recording window of a special-effect video can be displayed while the video recording interface of an original video is displayed, the special-effect parameters of different video recording windows are different, and the video recording interface and the content in each video recording window are updated synchronously, so that on one hand, a user can conveniently check the recording effect under different effect parameters, and on the other hand, the operation of shooting a plurality of effect videos can be simplified.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
It should be understood that, in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, wherein the first storage area may store an operating system, an application program or an instruction (such as a sound playing function, an image playing function, and the like) required for at least one function, and the like. Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. The volatile Memory may be a Random Access Memory (RAM), a Static Random Access Memory (Static RAM, SRAM), a Dynamic Random Access Memory (Dynamic RAM, DRAM), a Synchronous Dynamic Random Access Memory (Synchronous DRAM, SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (Double Data Rate SDRAM, DDR SDRAM), an Enhanced Synchronous SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the embodiments of the video playing method or the video shooting method, and can achieve the same technical effects, and in order to avoid repetition, the detailed description is omitted here.
The processor is a processor in the electronic device in the embodiment. The readable storage medium includes computer readable storage medium, such as computer read only memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the video playing method or the video shooting method, and the same technical effect can be achieved, and is not described herein again to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip or a system-on-chip.
The embodiments of the present application provide a computer program product. The program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing video playing method or video shooting method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (21)

1. A video playing method, the method comprising:
receiving a first input of a user to a target object in a playing interface of a target video, wherein at least one shooting object in the target video is associated with at least one special effect video, the target video comprises an original video or a synthesized video, the synthesized video is obtained by synthesizing the original video and the special effect video associated with the at least one shooting object in the original video, the target object is at least one shooting object selected by the user from the at least one shooting object, and special effect parameters of different special effect videos are different;
in response to the first input, playing at least one target special effect video associated with the target object.
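By way of illustration only, the following Python sketch shows one way the association recited in claim 1 could be represented and consulted: each shooting object maps to its special effect videos, and the first input simply looks up the selected target object. All identifiers (person_A, original.mp4, the effect_params keys, and so on) are hypothetical assumptions, not details taken from the application.

```python
from dataclasses import dataclass, field

@dataclass
class SpecialEffectVideo:
    path: str
    effect_params: dict  # e.g. {"speed": 0.5}; parameter names are illustrative

@dataclass
class TargetVideo:
    path: str
    is_composite: bool
    # each shooting object (by a hypothetical object id) maps to its effect videos
    effects_by_object: dict = field(default_factory=dict)

def on_first_input(video: TargetVideo, target_object_id: str) -> list:
    """Return the special effect videos associated with the selected shooting object."""
    return video.effects_by_object.get(target_object_id, [])

if __name__ == "__main__":
    video = TargetVideo(
        path="original.mp4",
        is_composite=False,
        effects_by_object={
            "person_A": [SpecialEffectVideo("person_A_slowmo.mp4", {"speed": 0.5})],
            "person_B": [SpecialEffectVideo("person_B_sepia.mp4", {"filter": "sepia"})],
        },
    )
    for clip in on_first_input(video, "person_A"):
        print("would play:", clip.path, clip.effect_params)
```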
2. The method of claim 1, wherein receiving a first input from a user to a target object in a play interface of a target video comprises:
receiving an input of the user on an area where the target object is located;
or, under the condition that the playing interface comprises object identifiers associated with the shooting objects, receiving an input of the user on an object identifier of the target object, wherein one object identifier indicates one shooting object.
3. The method of claim 1, wherein the target object is associated with at least two special effect videos;
the playing of the at least one target special effect video associated with the target object comprises:
playing the special effect video associated with the first preset input feature under the condition that each special effect video is associated with one preset input feature and the input feature of the first input is matched with the first preset input feature; wherein the preset input features comprise at least one of: an input start position, an input direction, or an input track;
and under the condition that the input characteristic of the first input is a second preset input characteristic, playing at least two special effect videos related to the target object according to a playing sequence related to the second preset input characteristic.
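A minimal sketch of the dispatch described in claim 3, assuming gesture names stand in for the preset input features: a first input whose feature matches a first preset feature plays the single associated effect video, while the second preset feature plays all associated effect videos in a stored order. The feature names, file names, and play order below are illustrative assumptions.

```python
from typing import List

# Hypothetical preset input features; a real implementation would match gestures.
FIRST_PRESET_FEATURES = {
    "swipe_left": "effect_slowmo.mp4",   # one preset feature per special effect video
    "swipe_right": "effect_sepia.mp4",
}
SECOND_PRESET_FEATURE = "long_press"     # plays every associated effect in a stored order
PLAY_ORDER = ["effect_slowmo.mp4", "effect_sepia.mp4"]

def effects_for_input(input_feature: str) -> List[str]:
    """Map the first input's feature to the special effect video(s) to play."""
    if input_feature in FIRST_PRESET_FEATURES:       # matches a first preset input feature
        return [FIRST_PRESET_FEATURES[input_feature]]
    if input_feature == SECOND_PRESET_FEATURE:       # second preset feature: play in order
        return list(PLAY_ORDER)
    return []

print(effects_for_input("swipe_left"))   # ['effect_slowmo.mp4']
print(effects_for_input("long_press"))   # both effects, in the associated play order
```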
4. The method of claim 1, wherein receiving a first input from a user to a target object in a play interface of a target video comprises:
receiving input of a user on the target object in a first video frame displayed in a playing interface of the target video;
the playing of the at least one target special effect video associated with the target object comprises:
starting to play the target special effect video from a second video frame of one target special effect video associated with the target object;
the second video frame is a video frame of a frame number corresponding to the first video frame in the target special effect video, or the second video frame is a starting video frame of the target special effect video.
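The frame alignment in claim 4 can be pictured as a small helper that either mirrors the tapped frame number into the effect clip or falls back to its starting frame. This is only a sketch assuming both videos are indexed by integer frame numbers; the clamping behaviour for a shorter effect clip is an added assumption, not something the claim specifies.

```python
def start_frame_for_effect(tapped_frame_index: int,
                           effect_frame_count: int,
                           align_frames: bool = True) -> int:
    """Pick the frame of the special effect video at which playback starts.

    If align_frames is True, start at the frame number corresponding to the frame
    the user tapped in the target video (clamped to the effect clip's length);
    otherwise start from the effect video's first frame.
    """
    if not align_frames:
        return 0
    return min(tapped_frame_index, effect_frame_count - 1)

print(start_frame_for_effect(120, 300))         # 120: frame-aligned start
print(start_frame_for_effect(120, 90))          # 89: clamped for a shorter effect clip
print(start_frame_for_effect(120, 300, False))  # 0: start from the beginning
```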
5. The method of claim 4, wherein after receiving the first input of the target object in the playing interface of the target video from the user, the method further comprises:
storing the frame number of the second video frame as historical playing information;
the method further comprises the following steps:
receiving a second input of the user;
responding to the second input, playing the target video according to the pre-stored historical playing information, and, under the condition that the first video frame is reached in the process of playing the target video, jumping to the second video frame of the target special effect video associated with the target object to continue playing.
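A rough sketch of claim 5's history mechanism, assuming the historical playing information is a small JSON record holding the stored frame number: replaying the target video walks the original frames until the stored frame is reached, then continues inside the effect clip. The file format, field names, and frame-for-frame alignment are all illustrative assumptions.

```python
import json
import os
import tempfile

def save_history(path: str, object_id: str, resume_frame: int) -> None:
    """Persist the frame number of the second video frame as historical playing info."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"object_id": object_id, "resume_frame": resume_frame}, f)

def frames_to_play(path: str, original_frames: int, effect_frames: int):
    """Play the target video, then jump into the effect clip at the stored frame."""
    with open(path, encoding="utf-8") as f:
        history = json.load(f)
    jump_at = history["resume_frame"]
    for i in range(original_frames):
        if i == jump_at:
            for j in range(jump_at, effect_frames):
                yield ("effect", j)
            return
        yield ("original", i)

history_file = os.path.join(tempfile.gettempdir(), "play_history.json")
save_history(history_file, "person_A", resume_frame=3)
print(list(frames_to_play(history_file, original_frames=6, effect_frames=6)))
```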
6. The method of claim 1, wherein in the case that the playback interface includes at least one special effect video thumbnail of a special effect video, the receiving a first user input to a target object in the playback interface of a target video comprises:
receiving a first input of a user to a target special effect video thumbnail in at least one special effect video thumbnail;
wherein, one special effect video thumbnail is a thumbnail of any video frame of one special effect video;
the playing of the at least one target special effect video associated with the target object includes:
and playing the special effect video comprising the video frame corresponding to the target special effect video thumbnail.
7. The method of claim 1, wherein the playing at least one target special effects video associated with the target object comprises:
under the condition that the target video is an original video, switching from the playing interface of the target video to a playing interface of the at least one target special effect video for playing;
and under the condition that the target video is the composite video, skipping from the video frame displayed on the playing interface to the target video frame for playing, wherein the target video frame is a video frame corresponding to at least one target special-effect video in the composite video.
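Claim 7 distinguishes two presentation paths, which the following sketch reduces to a single decision: switch to the effect clip's own interface for an original video, or seek to the corresponding frame inside a composite video. The dictionary-based "actions" are purely illustrative; a real player would drive its own UI and seek APIs.

```python
def play_target_effect(target_video: dict, effect_clip: str,
                       composite_frame_index: int = 0) -> dict:
    """Decide how to present the target special effect video.

    For an original video, switch to the effect clip's own playing interface;
    for a composite video, seek to the frame that corresponds to the effect clip
    inside the composite. Keys and action names here are illustrative only.
    """
    if not target_video["is_composite"]:
        return {"action": "switch_interface", "play": effect_clip}
    return {"action": "seek", "frame": composite_frame_index}

print(play_target_effect({"is_composite": False}, "effect_slowmo.mp4"))
print(play_target_effect({"is_composite": True}, "effect_slowmo.mp4",
                         composite_frame_index=450))
```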
8. The method of claim 1, wherein if the playback interface includes at least one special effect video thumbnail for a special effect video, the method further comprises:
receiving a third input of a user to a target special effect video thumbnail of the at least one special effect video thumbnail;
deleting the special effect video corresponding to the special effect video thumbnail in response to the third input under the condition that the target video is an original video;
and in response to the third input, deleting a video clip of the same content as the special effect video corresponding to the special effect video thumbnail in the composite video in the case that the target video is the composite video.
9. A video shooting method, the method comprising:
displaying a video recording interface of an original video, wherein the video recording interface comprises at least one video recording window of at least one special-effect video associated with at least one shooting object, the video recording window of the at least one special-effect video is the video recording window associated with the at least one shooting object in the video recording interface, and special-effect parameters of different video recording windows are different;
in the process of recording the original video, synchronously updating the preview image in each video recording window;
under the condition that the original video and the at least one special-effect video are recorded completely, outputting a target file;
the target file comprises an original video and at least one special effect video, or the target file comprises a composite video synthesized by the original video and the at least one special effect video, or the target file comprises the original video, the at least one special effect video and the composite video.
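As a sketch of the recording loop implied by claim 9, assuming frames can be treated as plain values and "applying an effect" is an injectable function: every captured frame updates the original recording and every effect window synchronously, and the resulting target file groups the original with each effect video. All names and the string-based frames are stand-ins.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class RecordingWindow:
    effect_params: dict
    frames: List[str] = field(default_factory=list)

def record(frame_source: List[str],
           windows: Dict[str, RecordingWindow],
           apply_effect: Callable[[str, dict], str]) -> dict:
    """Record the original video while synchronously updating every effect window,
    then return a 'target file' grouping the original with each special effect video."""
    original: List[str] = []
    for frame in frame_source:
        original.append(frame)
        for window in windows.values():  # one processed copy of the frame per window
            window.frames.append(apply_effect(frame, window.effect_params))
    return {"original": original,
            "effects": {name: w.frames for name, w in windows.items()}}

# Toy stand-ins: frames are strings and "applying an effect" tags the frame label.
def fake_apply(frame: str, params: dict) -> str:
    return f"{frame}+{params['filter']}"

target_file = record(["f0", "f1"],
                     {"person_A": RecordingWindow({"filter": "sepia"})},
                     fake_apply)
print(target_file)
```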
10. The method of claim 9, wherein, prior to displaying the video recording interface of the original video, the method further comprises:
receiving a first setting input of a user on a target object in a video preview interface;
displaying a special effect parameter setting option in response to the first setting input;
receiving a second setting input of the user for the special effect parameter setting option, wherein the second setting input is used for setting at least one special effect parameter;
responding to the second setting input, displaying a first video recording window, and updating a special effect preview image in the first video recording window according to at least one special effect parameter set by the second setting input;
and establishing an association relationship among the target object, the special effect parameter set by the second setting input and the first video recording window.
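A compact sketch of the association set up in claim 10, under the assumption that an association is just a record linking the selected object, its chosen effect parameters, and the recording window created to preview them. Identifiers such as person_A and window_1 are invented for illustration.

```python
# An "association" here is just a record linking object, parameters, and window.
associations = []

def on_second_setting_input(target_object: str, effect_params: dict) -> dict:
    """Create a recording window for the chosen parameters and record the association."""
    window = {"id": f"window_{len(associations) + 1}", "preview_params": effect_params}
    associations.append({"object": target_object,
                         "params": effect_params,
                         "window": window["id"]})
    return window

window = on_second_setting_input("person_A", {"filter": "sepia", "speed": 1.0})
print(window)
print(associations)
```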
11. The method of claim 10, wherein, after receiving the second setting input of the user for the special effect parameter setting option, the method further comprises:
establishing an association relationship among the target object, the input features of the first setting input and the first video recording window;
wherein the input features include at least one of: an input start position, an input direction, or an input track;
the method further comprises the following steps:
receiving a third setting input of the target object by the user;
displaying a special effect parameter setting option in response to the third setting input;
receiving a fourth setting input of the special effect parameter setting option from a user, wherein the fourth setting input is used for setting at least one special effect parameter;
responding to the fourth setting input, displaying a second video recording window, and updating a special effect preview image in the second video recording window according to at least one special effect parameter set by the fourth setting input;
and establishing an association relationship among the target object, the special effect parameter set by the second setting input, the input characteristic set by the third setting input and the first video recording window.
12. The method of claim 9, wherein, prior to displaying the video recording interface of the original video, the method further comprises:
receiving a fifth setting input of the user;
and responding to the fifth setting input, determining the playing sequence of the special effect video, and establishing the association relationship between the input characteristics of the fifth setting input and the playing sequence.
13. The method of claim 9, further comprising:
performing object recognition on the preview image displayed on the video recording interface;
and displaying an object identifier of each recognized object.
14. The method of claim 9, wherein outputting the target file in the event that the recording of the original video and the at least one special effects video is complete comprises:
and under the condition that the original video and the at least one special-effect video are recorded, synthesizing the recorded original video and the at least one special-effect video according to the special-effect parameter setting sequence of the at least one special-effect video before recording, and outputting the synthesized video.
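Claim 14 orders the composition by when each effect's parameters were set before recording; the sketch below models that with plain lists of frame labels, concatenating effect clips after the original in the stored setting order. Real code would mux actual video streams; the clip names and ordering list are assumptions.

```python
def synthesize(original_frames, effect_clips, setting_order):
    """Concatenate effect clips after the original in the order their parameters were set."""
    composite = list(original_frames)
    for name in setting_order:  # parameter-setting order drives the composite layout
        composite.extend(effect_clips[name])
    return composite

clips = {"sepia": ["s0", "s1"], "slowmo": ["m0", "m1"]}
print(synthesize(["o0", "o1"], clips, setting_order=["slowmo", "sepia"]))
# ['o0', 'o1', 'm0', 'm1', 's0', 's1']
```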
15. The method of claim 9, wherein the synchronously updating the preview images in each video recording window comprises:
copying each frame of preview image displayed in the video recording interface into each video recording window frame by frame, and carrying out image processing on each copied frame of preview image according to special effect parameters associated with each video recording window;
and under the condition that the number of the video recording windows is at least two, for each frame of preview image, the number of preview image copies made each time is the same as the number of the video recording windows.
16. The method of claim 9, wherein during recording of the original video, the method further comprises:
receiving a sixth setting input of the user to the third video recording window;
and responding to the sixth setting input, updating the special effect parameter of the third video recording window, and updating the special effect preview image in the third video recording window according to the updated special effect parameter.
17. The method of claim 9, wherein the video recording interface comprises a first shooting object and a second shooting object, the first shooting object is associated with a fourth video recording window, and the second shooting object is associated with a fifth video recording window;
the method further comprises the following steps:
receiving a seventh setting input of the first shooting object and the second shooting object by a user, wherein the seventh setting input is used for exchanging special effect recording windows related to the first shooting object and the second shooting object;
and responding to the seventh setting input, establishing an association relationship between the first shooting object and the fifth video recording window, and establishing an association relationship between the second shooting object and the fourth video recording window.
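The swap in claim 17 amounts to exchanging two entries in an association table from shooting objects to recording windows, as in this illustrative sketch; the object and window identifiers are hypothetical.

```python
def swap_windows(associations: dict, first_object: str, second_object: str) -> dict:
    """Exchange the recording windows associated with two shooting objects."""
    swapped = dict(associations)
    swapped[first_object], swapped[second_object] = (
        swapped[second_object],
        swapped[first_object],
    )
    return swapped

table = {"object_1": "window_4", "object_2": "window_5"}
print(swap_windows(table, "object_1", "object_2"))
# {'object_1': 'window_5', 'object_2': 'window_4'}
```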
18. A video playing apparatus, the apparatus comprising: a receiving module and a playing module;
the receiving module is used for receiving a first input of a user to a target object in a playing interface of a target video, wherein at least one shooting object in the target video is associated with at least one special effect video, the target video comprises an original video or a synthesized video, the synthesized video is obtained by synthesizing the original video and the special effect video associated with the at least one shooting object in the original video, the target object is at least one shooting object selected by the user from the at least one shooting object, and special effect parameters of different special effect videos are different;
the playing module is configured to play at least one target special effect video associated with the target object in response to the first input received by the receiving module.
19. A video shooting apparatus, the apparatus comprising: a display module, an updating module and a processing module;
the display module is used for displaying a video recording interface of an original video, the video recording interface comprises at least one video recording window of at least one special-effect video related to at least one shooting object, the video recording window of the at least one special-effect video is a video recording window related to the at least one shooting object in the video recording interface, and special-effect parameters of different video recording windows are different;
the updating module is used for synchronously updating the preview image in each video recording window in the process of recording the original video;
the processing module is used for outputting a target file under the condition that the original video and the at least one special effect video are recorded completely;
the target file comprises an original video and at least one special effect video, or the target file comprises a composite video synthesized by the original video and the at least one special effect video, or the target file comprises the original video, the at least one special effect video and the composite video.
20. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the video playback method of any of claims 1 to 8.
21. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the video shooting method of any of claims 9 to 17.
CN202211038653.XA 2022-08-26 2022-08-26 Video playing method and device and electronic equipment Pending CN115767141A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211038653.XA CN115767141A (en) 2022-08-26 2022-08-26 Video playing method and device and electronic equipment
PCT/CN2023/114196 WO2024041514A1 (en) 2022-08-26 2023-08-22 Video playing method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211038653.XA CN115767141A (en) 2022-08-26 2022-08-26 Video playing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115767141A true CN115767141A (en) 2023-03-07

Family

ID=85349383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211038653.XA Pending CN115767141A (en) 2022-08-26 2022-08-26 Video playing method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN115767141A (en)
WO (1) WO2024041514A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024041514A1 (en) * 2022-08-26 2024-02-29 维沃移动通信有限公司 Video playing method and apparatus, and electronic device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394324B (en) * 2014-12-09 2018-01-09 成都理想境界科技有限公司 Special efficacy video generation method and device
CN104967900B (en) * 2015-05-04 2018-08-07 腾讯科技(深圳)有限公司 A kind of method and apparatus generating video
CN111050203B (en) * 2019-12-06 2022-06-14 腾讯科技(深圳)有限公司 Video processing method and device, video processing equipment and storage medium
CN112165632B (en) * 2020-09-27 2022-10-04 北京字跳网络技术有限公司 Video processing method, device and equipment
CN112672185B (en) * 2020-12-18 2023-07-07 脸萌有限公司 Augmented reality-based display method, device, equipment and storage medium
CN113542610A (en) * 2021-07-27 2021-10-22 上海传英信息技术有限公司 Shooting method, mobile terminal and storage medium
CN115767141A (en) * 2022-08-26 2023-03-07 维沃移动通信有限公司 Video playing method and device and electronic equipment

Also Published As

Publication number Publication date
WO2024041514A1 (en) 2024-02-29

Similar Documents

Publication Publication Date Title
CN110868631B (en) Video editing method, device, terminal and storage medium
US20080240683A1 (en) Method and system to reproduce contents, and recording medium including program to reproduce contents
KR20140143725A (en) Image correlation method and electronic device therof
CN110636365B (en) Video character adding method and device, electronic equipment and storage medium
CN103440304A (en) Method and device for storing picture
WO2024041514A1 (en) Video playing method and apparatus, and electronic device
CN110572717A (en) Video editing method and device
CN103699621A (en) Method for recording graphic and text information on materials recorded by mobile device
CN104350455A (en) Causing elements to be displayed
CN114666637A (en) Video editing method, audio editing method and electronic equipment
JP2008250700A (en) Information processor, window reproduction method and program
CN116017043A (en) Video generation method, device, electronic equipment and storage medium
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN110703973B (en) Image cropping method and device
CN115437736A (en) Method and device for recording notes
CN113840099B (en) Video processing method, device, equipment and computer readable storage medium
CN114302009A (en) Video processing method, video processing device, electronic equipment and medium
CN114025237A (en) Video generation method and device and electronic equipment
CN112307252A (en) File processing method and device and electronic equipment
CN114390205B (en) Shooting method and device and electronic equipment
CN110662104B (en) Video dragging bar generation method and device, electronic equipment and storage medium
CN114519859A (en) Text recognition method, text recognition device, electronic equipment and medium
CN116628244A (en) Picture display method and device
CN114286010A (en) Shooting method, shooting device, electronic equipment and medium
CN116074580A (en) Video processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination