WO2021031733A1 - Video special effect generation method and terminal - Google Patents

Video special effect generation method and terminal (视频特效生成方法及终端)

Info

Publication number
WO2021031733A1
WO2021031733A1 (PCT/CN2020/100840, CN2020100840W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
video segment
playback rate
target information
rate
Prior art date
Application number
PCT/CN2020/100840
Other languages
English (en)
French (fr)
Inventor
杜乐
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021031733A1
Priority to US17/674,918 (published as US20220174237A1)

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/78 Television signal recording using magnetic recording
    • H04N5/782 Television signal recording using magnetic recording on tape
    • H04N5/783 Adaptations for reproducing at a rate different from the recording rate
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/005 Reproducing at a different information rate from the information rate of recording
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47202 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting content on demand, e.g. video on demand
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2625 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect

Definitions

  • This application relates to the field of video processing technology, and in particular to a method and terminal for generating video special effects.
  • the slow-motion and time-lapse photography features currently provided on terminals offer only the corresponding capture functions.
  • to use them well, users need a certain degree of artistic skill, and ordinary users lack experience in controlling the rhythm of a video.
  • as a result, the filmed video has a single rhythm, lacks artistic expression, is not engaging to watch, and has no sharing value.
  • although a captured video can be re-edited to enrich its rhythm and make it more watchable, fast-motion and slow-motion special effects are not convenient to edit: users must have strong editing experience and artistic skill, or they cannot produce video works with rich rhythm and high sharing value. Therefore, how users who lack artistic skill can easily and quickly obtain video works with rich rhythm and high sharing value is a problem being studied by those skilled in the art.
  • the embodiment of the application discloses a method and terminal for generating video special effects, which can improve the efficiency of video editing, and obtain video works with rich rhythm and high sharing value simply and quickly.
  • an embodiment of the present application discloses a method for generating video special effects.
  • the method includes: a terminal obtains target information of a first video segment in a target video, the target information including one or more of the content features of the first video segment and the shooting parameters of the first video segment; the terminal determines a first playback rate of the first video segment according to the target information; and the terminal adjusts the playback rate of the first video segment to the first playback rate.
  • the embodiments of this application do not require users to have artistic skills or editing capabilities.
  • the device automatically determines the playback rate of a video based on the content of the captured video (such as the scene it presents) or on parameters used when shooting it (such as the focal length), and then intelligently adjusts the playback speed; in this way, video works with rich rhythm and high sharing value can be obtained easily and quickly, editing efficiency is greatly improved, and the method is suitable for more users.
  • adjusting the playback rate of the first video segment to the first playback rate includes: playing the first video segment according to the first playback rate.
  • "adjustment" here means changing the playback rate in some implementations and setting the playback rate in others.
  • the playback rate is embodied as the speed of video playback.
  • the adjustment of the playback rate is not limited to adjusting the value of "rate".
  • the first video segment refers to a part of the target video or the target video itself.
  • the method is executed when the target video is captured, or is executed when the target video is saved, or is executed before the target video or the first video segment is played.
  • the terminal determining the first playback rate of the first video segment according to the target information includes: the terminal determines a first video type of the first video segment according to the target information; the terminal then matches, from a preset special effect mapping relationship, the first playback rate corresponding to the first video type, where the special effect mapping relationship defines the correspondence between multiple video types and multiple playback rates.
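As an illustration, the preset special effect mapping relationship described above can be sketched as a simple lookup table. This is a hypothetical sketch: the video type names and rate values (rates below 1.0 meaning slow motion, above 1.0 meaning fast motion) are assumptions, not values taken from the application.

```python
# Hypothetical "special effect mapping relationship": video type -> playback rate.
# Rates < 1.0 denote slow-motion playback, rates > 1.0 denote fast motion.
EFFECT_MAPPING = {
    "stream": 0.5,           # slow motion: show more scene detail
    "rain_snow": 0.5,
    "animal_closeup": 0.5,
    "street": 2.0,           # fast motion: show the scene quickly
    "natural_scenery": 2.0,
}

def match_playback_rate(video_type: str, default: float = 1.0) -> float:
    """Look up the playback rate matched to a video type; fall back to normal speed."""
    return EFFECT_MAPPING.get(video_type, default)
```

Because the mapping is built in advance, determining the rate at editing time reduces to a single lookup once the video type has been identified.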
  • alternatively, the first playback rate may be calculated through a mathematical model, where the input of the model is one or more items of target information of the first video segment and the output is the first playback rate.
  • the target information of the first video segment includes the content features of the first video segment, and the content features include information about the picture scene in the first video segment.
  • the terminal matching the first playback rate corresponding to the first video type from the preset special effect mapping relationship includes: when the first video type is a stream, rain or snow weather, or animal close-up video type, the first playback rate matched from the preset special effect mapping relationship is a slow-motion playback rate; when the first video type is a street or natural scenery video type, the first playback rate matched from the preset special effect mapping relationship is a fast-motion playback rate.
  • each picture scene type in the special effect mapping relationship corresponds to a playback rate; the playback rate of a video segment is determined by analyzing its picture scene, which makes the segment more engaging to watch.
  • in addition, this embodiment matches a playback rate to each scene type in the special effect mapping relationship in advance, so that once the scene type is identified from the video segment, the corresponding playback rate can be looked up directly, which improves the editing efficiency of the video segment.
  • the target information of the first video segment includes shooting parameters of the first video segment
  • the shooting parameters of the first video segment include the shooting focal length of the first video segment
  • the terminal matching the first playback rate corresponding to the first video type from the preset special effect mapping relationship includes: when the first video type is a video type whose shooting focal length is within a first focal length range, the first playback rate matched from the preset special effect mapping relationship is a slow-motion playback rate; when the first video type is a video type whose shooting focal length is within a second focal length range, the first playback rate matched from the preset special effect mapping relationship is a fast-motion playback rate; here, any focal length in the first focal length range is greater than any focal length in the second focal length range.
  • the playback rate of a video segment shot in close-focus mode is matched to a slow-motion playback rate, so that more scene detail is shown when the segment is played; the playback rate of a video segment shot in far-focus or wide-angle mode is matched to a fast-motion playback rate, so that the global information of the scene is shown quickly, presenting a better viewing effect for the user.
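The focal-length rule above can be sketched as two range checks. The range boundaries (in millimetres) and rate values below are illustrative assumptions; the application only requires that every focal length in the first range exceed every focal length in the second range.

```python
# Hypothetical focal-length ranges (mm); boundaries are assumptions.
FIRST_FOCAL_RANGE = (70.0, 200.0)   # close-focus / telephoto shots
SECOND_FOCAL_RANGE = (10.0, 35.0)   # far-focus / wide-angle shots

def rate_from_focal_length(focal_mm: float) -> float:
    """Map a shooting focal length to a playback rate."""
    if FIRST_FOCAL_RANGE[0] <= focal_mm <= FIRST_FOCAL_RANGE[1]:
        return 0.5   # slow motion: surface more scene detail
    if SECOND_FOCAL_RANGE[0] <= focal_mm <= SECOND_FOCAL_RANGE[1]:
        return 2.0   # fast motion: show global scene information quickly
    return 1.0       # neither range matched: keep the normal rate
```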
  • the target information of the first video segment includes the content feature of the first video segment
  • the content feature of the first video segment includes the shooting duration of the first video segment
  • the terminal matching the first playback rate corresponding to the first video type from the preset special effect mapping relationship includes: when the first video type is a video type whose shooting duration is within a first preset duration range, the first playback rate matched from the preset special effect mapping relationship is a slow-motion playback rate; when the first video type is a video type whose shooting duration is within a second preset duration range, the first playback rate matched from the preset special effect mapping relationship is a fast-motion playback rate; here, any duration in the first preset duration range is less than any duration in the second preset duration range.
  • the playback rate of a video segment with a longer shooting duration is matched to a fast-motion playback rate, so that the whole process of the scene is shown quickly when the segment is played; the playback rate of a video segment with a shorter shooting duration is matched to a slow-motion playback rate, so that more scene detail is shown, presenting a distinctive viewing effect for the user.
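The duration rule can be sketched the same way. The thresholds (in seconds) and rates are illustrative assumptions; the text constrains only that every duration in the first range is less than every duration in the second.

```python
def rate_from_duration(duration_s: float) -> float:
    """Map a segment's shooting duration to a playback rate (thresholds assumed)."""
    if duration_s < 10.0:      # first preset duration range: short clips
        return 0.5             # slow motion: show more scene detail
    if duration_s > 60.0:      # second preset duration range: long clips
        return 4.0             # fast motion: show the whole process quickly
    return 1.0                 # in between: keep the normal rate
```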
  • the target information of the first video segment includes the content features of the first video segment, and the content features include the picture change situation in the first video segment.
  • the terminal matching the first playback rate corresponding to the first video type from the preset special effect mapping relationship includes: when the first video type is a video type whose picture change speed falls within a first change speed range, the first playback rate matched from the preset special effect mapping relationship is a slow-motion playback rate; when the first video type is a video type whose picture change speed falls within a second change speed range, the first playback rate matched from the preset special effect mapping relationship is a fast-motion playback rate; here, any speed in the first change speed range is greater than any speed in the second change speed range.
  • the playback rate of a video segment whose picture changes quickly is matched to a slow-motion playback rate, so that more scene detail is shown when the segment is played; the playback rate of a video segment whose picture changes slowly is matched to a fast-motion playback rate, so that the entire change process of the scene is shown quickly, presenting a video that meets the user's viewing needs.
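One plausible way to quantify the "picture change speed" used above is the mean absolute difference between consecutive frames. The sketch below makes that assumption (frames as 2-D brightness arrays) and uses assumed speed thresholds; the application does not fix a specific metric.

```python
def change_speed(frames):
    """Average per-pixel absolute difference between consecutive frames."""
    diffs = []
    for prev, cur in zip(frames, frames[1:]):
        total = sum(abs(a - b)
                    for row_p, row_c in zip(prev, cur)
                    for a, b in zip(row_p, row_c))
        diffs.append(total / (len(prev) * len(prev[0])))
    return sum(diffs) / len(diffs)

def rate_from_change_speed(speed: float) -> float:
    """Fast-changing pictures -> slow motion; slow-changing -> fast motion."""
    if speed > 20.0:   # first change speed range (assumed threshold)
        return 0.5
    if speed < 5.0:    # second change speed range (assumed threshold)
        return 2.0
    return 1.0
```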
  • the target information of the first video segment includes at least two of the following types of information: information about the picture scene in the first video segment, the shooting focal length of the first video segment, the shooting duration of the first video segment, and the picture change situation in the first video segment.
  • the terminal determining the first playback rate of the first video segment according to the target information includes: the terminal determines at least two playback rate results of the first video segment according to the at least two types of information, where each playback rate result is determined based on one of the at least two types of information; the terminal then determines the first playback rate of the first video segment based on the at least two playback rate results.
  • the embodiment of the present application determines the playback rate of the video segment in the special effect mapping relationship by integrating multiple kinds of information about the segment, which helps to further optimize its viewing effect and improve its sharing value.
  • the first playback rate is the playback rate that occurs most frequently among the at least two playback rate results.
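The most-frequent-rate rule can be sketched with a counter over the individual playback rate results:

```python
from collections import Counter

def combine_rates(rate_results):
    """Pick the playback rate that occurs most often among the results."""
    rate, _count = Counter(rate_results).most_common(1)[0]
    return rate
```

For example, if the scene and duration analyses both suggest slow motion while the focal length suggests fast motion, the slow-motion rate wins.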
  • the above method further includes: the terminal acquires target information of a second video segment in the target video, the target information including one or more of the content features of the second video segment and the shooting parameters of the second video segment; the terminal determines a second playback rate of the second video segment according to the target information; the terminal adjusts the playback rate of the second video segment to the second playback rate.
  • the embodiments of the present application show that the playback rates of multiple video segments included in one video can be adjusted separately, which helps to further enrich the playback rhythm of the video.
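Per-segment adjustment can be sketched as a loop over the segments of the target video. The segment representation and the rate-determination function here are hypothetical placeholders, not structures defined by the application.

```python
def apply_special_effects(segments, determine_rate):
    """Pair each video segment with its own playback rate.

    `segments` is any sequence of per-segment target information;
    `determine_rate` maps one segment's information to a rate.
    """
    return [(seg, determine_rate(seg)) for seg in segments]
```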
  • the terminal acquiring the target information of the first video segment in the target video includes: acquiring the target information of the first video segment while the terminal is shooting the target video.
  • the embodiment of the present application adjusts the playback rate of the video during the video shooting process, so that the user can see the effect immediately after the shooting is completed, which improves the editing efficiency while improving the user experience.
  • the first playback rate determination methods in the above situations can be used in combination, and the specific playback rate used can be determined by the user or selected by default.
  • an embodiment of the present application provides a terminal; the terminal includes a processor and a memory, the memory stores a computer program, and the processor is configured to call the computer program to perform the following operations: obtain target information of a first video segment in a target video, the target information including one or more of the content features of the first video segment and the shooting parameters of the first video segment; determine a first playback rate of the first video segment according to the target information; and adjust the playback rate of the first video segment to the first playback rate.
  • the embodiments of this application do not require users to have artistic skills or editing capabilities.
  • the device automatically determines the playback rate of a video based on the content of the captured video (such as the scene it presents) or on parameters used when shooting it (such as the focal length), and then intelligently adjusts the playback speed; in this way, video works with rich rhythm and high sharing value can be obtained easily and quickly, editing efficiency is greatly improved, and the method is suitable for more users.
  • the processor determining the first playback rate of the first video segment according to the target information is specifically: determining a first video type of the first video segment according to the target information, and matching, from a preset special effect mapping relationship, the first playback rate corresponding to the first video type, where the special effect mapping relationship defines the correspondence between multiple video types and multiple playback rates.
  • alternatively, the foregoing processor may calculate the first playback rate through a mathematical model, where the input of the model is one or more items of target information of the first video segment and the output is the first playback rate.
  • the target information of the first video segment includes the content feature of the first video segment
  • the content features of the first video segment include information about the picture scene in the first video segment
  • the processor matching the first playback rate corresponding to the first video type from the preset special effect mapping relationship is specifically: when the first video type is a stream, rain or snow weather, or animal close-up video type, the first playback rate matched from the preset special effect mapping relationship is a slow-motion playback rate; when the first video type is a street or natural scenery video type, the first playback rate matched from the preset special effect mapping relationship is a fast-motion playback rate.
  • the target information of the first video segment includes shooting parameters of the first video segment
  • the shooting parameters of the first video segment include the shooting focal length of the first video segment
  • the processor matching the first playback rate corresponding to the first video type from the preset special effect mapping relationship is specifically: when the first video type is a video type whose shooting focal length is within a first focal length range, the matched first playback rate is a slow-motion playback rate; when the first video type is a video type whose shooting focal length is within a second focal length range, the matched first playback rate is a fast-motion playback rate; here, any focal length in the first focal length range is greater than any focal length in the second focal length range.
  • the target information of the first video segment includes the content feature of the first video segment
  • the content feature of the first video segment includes the shooting duration of the first video segment
  • the processor matching the first playback rate corresponding to the first video type from the preset special effect mapping relationship is specifically: when the first video type is a video type whose shooting duration is within a first preset duration range, the matched first playback rate is a slow-motion playback rate; when the first video type is a video type whose shooting duration is within a second preset duration range, the matched first playback rate is a fast-motion playback rate; here, any duration in the first preset duration range is less than any duration in the second preset duration range.
  • the target information of the first video segment includes the content features of the first video segment, and the content features include the picture change situation in the first video segment.
  • the processor matching the first playback rate corresponding to the first video type from the preset special effect mapping relationship is specifically: when the first video type is a video type whose picture change speed falls within a first change speed range, the matched first playback rate is a slow-motion playback rate; when the first video type is a video type whose picture change speed falls within a second change speed range, the matched first playback rate is a fast-motion playback rate; here, any speed in the first change speed range is greater than any speed in the second change speed range.
  • the target information of the first video segment includes at least two of the following types of information: information about the picture scene in the first video segment, the shooting focal length of the first video segment, the shooting duration of the first video segment, and the picture change situation in the first video segment.
  • the processor determining the first playback rate of the first video segment according to the target information is specifically: determining at least two playback rate results of the first video segment according to the at least two types of information, where each playback rate result is determined based on one of the at least two types of information; and determining the first playback rate of the first video segment based on the at least two playback rate results.
  • the first playback rate is the playback rate that occurs most frequently among the at least two playback rate results.
  • the processor further performs the following operations: acquire target information of a second video segment in the target video, the target information including one or more of the content features of the second video segment and the shooting parameters of the second video segment; determine a second playback rate of the second video segment according to the target information; and adjust the playback rate of the second video segment to the second playback rate.
  • the embodiments of the present application show that the playback rates of multiple video segments included in one video can be adjusted separately, which helps to further enrich the playback rhythm of the video.
  • the processor obtaining the target information of the first video segment in the target video is specifically: acquiring the target information of the first video segment while the target video is being shot.
  • the embodiment of the present application adjusts the video playback rate during the video shooting process, so that the user can see the effect immediately after the shooting is completed, which improves the editing efficiency and the user experience at the same time.
  • an embodiment of the present application provides a terminal, and the terminal includes a unit for executing the method described in the first aspect or any possible implementation manner of the first aspect.
  • an embodiment of the present application provides a chip system that includes at least one processor, a memory, and an interface circuit.
  • the memory, the interface circuit, and the at least one processor are interconnected by wires, and the memory stores a computer program; when the computer program is executed by the processor, the method described in the first aspect or any possible implementation manner of the first aspect is implemented.
  • the memory may also be located outside the chip system, in which case the processor executes the computer program in the memory through the interface circuit.
  • embodiments of the present application provide a computer-readable storage medium in which a computer program is stored.
  • when the computer program is executed by a processor, the method described in the first aspect or any possible implementation manner of the first aspect is implemented.
  • the embodiments of the present application provide a computer program product; when the computer program product runs on a processor, the method described in the first aspect or any possible implementation manner of the first aspect is implemented.
  • the embodiments of this application do not require users to have artistic skills or editing capabilities.
  • based on the content of the captured video (such as the scene it presents) or on parameters used when shooting it (such as the focal length), the playback rate of the video is determined automatically and the playback speed is adjusted intelligently; video works with rich rhythm and high sharing value can thus be obtained easily and quickly, editing efficiency is greatly improved, and the method is suitable for more users.
  • FIG. 1 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an operating system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a logical structure of a terminal provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for generating video special effects according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of fluctuations in the similarity of screen content provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a trend of similarity of screen content provided by an embodiment of the present application.
  • Figures 7 to 16 are schematic diagrams of terminal user interfaces based on the video special effect generation method provided by the embodiments of the present application.
  • the terminals involved in the embodiments of this application may include handheld devices (for example, mobile phones, tablet computers, palmtop computers, etc.), vehicle-mounted devices (for example, in automobiles, bicycles, electric vehicles, airplanes, ships, etc.), wearable devices (for example, smart watches such as the iWatch, smart bracelets, pedometers, etc.), smart home equipment (for example, refrigerators, TVs, air conditioners, electricity meters, etc.), smart robots, workshop equipment, and various forms of user equipment (User Equipment, UE), mobile stations (Mobile Station, MS), terminal equipment (Terminal Equipment), etc.
  • the terminal usually supports multiple applications, such as camera applications, word processing applications, phone applications, email applications, instant messaging applications, photo management applications, web browsing applications, digital music player apps, and/or digital video player apps.
  • FIG. 1 is a schematic structural diagram of a terminal 100 applied in an embodiment of this application.
  • the terminal 100 includes a memory 180, a processor 150, and a display device 140.
  • the memory 180 stores a computer program.
  • the computer program includes an operating system program 182 and an application program 181, among which the application program 181 includes a browser program.
  • the processor 150 is used to read the computer program in the memory 180 and then execute the method defined by the computer program. For example, the processor 150 reads the operating system program 182 to run the operating system on the terminal 100 and implement various functions of the operating system, or reads one or more application programs 181 to run applications on the terminal, for example, reads the camera application program to run the camera application.
  • the processor 150 may include one or more processors, for example, the processor 150 may include one or more central processors. When the processor 150 includes multiple processors, the multiple processors may be integrated on the same chip, or each may be an independent chip.
  • a processor may include one or more processing cores. The following embodiments are all described by taking multiple cores as an example, but the video special effect generation method provided in the embodiments of the present application may also be applied to a single-core processor.
  • the memory 180 also stores other data 183 in addition to the computer program.
  • the other data 183 may include data generated after the operating system 182 or the application program 181 is run.
  • the data includes system data (such as operating system configuration parameters) and user data. For example, the target information of the target video obtained by the terminal (such as the scene information in the target video, the shooting duration, and other information) and the captured video data can be regarded as user data.
  • the memory 180 generally includes internal memory and external storage.
  • Internal memory can be random access memory (RAM), read-only memory (ROM), or cache (CACHE).
  • External storage can be a hard disk, optical disc, USB flash drive, floppy disk, or tape drive, etc.
  • Computer programs are usually stored on external storage, and the processor loads them from external storage into internal memory before executing them.
  • the videos in the embodiments of the application can be stored on external storage; when a video needs to be edited, the video to be edited can first be loaded into internal memory.
  • the operating system program 182 includes a computer program that can implement the video special effect generation method provided by the embodiments of the present application, so that after the processor 150 reads the operating system program 182 and runs the operating system, the operating system can provide the video special effect generation function of the embodiments of the present application. Further, the operating system can open the calling interface of the video special effect generation function to upper-layer applications; after the processor 150 reads an application 181 from the memory 180 and runs it, the application can call, through this interface, the video special effect generation function provided by the operating system, thereby enabling video editing.
  • the terminal 100 may also include an input device 130 for receiving inputted digital information, character information, or contact touch operations/non-contact gestures, and generating signal inputs related to user settings and function control of the terminal 100.
  • the input device 130 may include a touch panel 131.
  • the touch panel 131, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 131 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
  • the touch panel 131 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 150; it can also receive and execute commands sent by the processor 150.
  • for example, when the user clicks a virtual button with a finger on the touch panel 131, the touch detection device detects the signal brought about by the click and transmits the signal to the touch controller; the touch controller converts the signal into coordinates and sends them to the processor 150.
  • the processor 150 performs operations such as video selection and editing according to the coordinates and the type of the signal (single click or double click), and finally displays the editing result on the display panel 141.
  • the touch panel 131 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input device 130 may also include other input devices 132.
  • Other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), a trackball, a mouse, a joystick, and the like.
  • the terminal 100 may also include a display device 140, which includes a display panel 141 for displaying information input by the user, information provided to the user, the various menu interfaces of the terminal 100, etc. In the embodiments of the present application, it is mainly used to display the video of the embodiments and information such as the result of video editing.
  • the display device 140 may include a display panel 141.
  • the display panel 141 may be configured in the form of a liquid crystal display (English: Liquid Crystal Display, abbreviation: LCD) or an organic light-emitting diode (English: Organic Light-Emitting Diode, abbreviation: OLED) display, etc.
  • In some other embodiments, the touch panel 131 may cover the display panel 141 to form a touch screen.
  • the terminal 100 may also include a power supply 190 for supplying power to other modules, a camera 160 for taking photos or videos, a positioning module (such as GPS) 161 for obtaining the geographical position of the terminal, a gyroscope 162 for obtaining the placement posture of the terminal (such as its angle and azimuth), and a timer 163 for recording time; the video used in the editing process of the embodiments of the present application can be captured by the camera 160.
  • the terminal 100 may also include one or more sensors 120, such as acceleration sensors, light sensors, and so on.
  • the terminal 100 may also include a radio frequency (RF) circuit 110 for performing network communication with wireless network devices, and may also include a WiFi module 170 for performing WiFi communication with other devices.
  • the following takes the Android operating system as an example in conjunction with FIG. 2 to introduce the various components of the operating system involved in the implementation position of the video special effect generation method provided by the embodiment of the present application.
  • FIG. 2 is a schematic diagram of a system structure of a terminal 200 provided by an embodiment of the application.
  • the terminal 200 may be a device in an embodiment of the present application, for example, it may be the terminal 100 shown in FIG. 1.
  • the terminal 200 includes an application layer 210 and an operating system layer 250.
  • the operating system may be an Android operating system.
  • the operating system layer 250 is further divided into a framework layer 220, a core library layer 230, and a driver layer 240. Wherein, the operating system layer 250 in FIG. 2 may be considered as a specific implementation of the operating system 182 in FIG. 1, and the application layer 210 in FIG. 2 may be considered as a specific implementation of the application program 181 in FIG. 1.
  • the driver layer 240 includes a CPU driver 241, a GPU driver 242, a display controller driver 243, a positioning module driver 244, a gyroscope driver 245, a timer driver 246, and so on.
  • the core library layer 230 is the core part of the operating system, including input/output service 231, core service 232, media service 234, etc.
  • the media service 234 includes a JPEG-format picture library 1, a PNG-format picture library 2, and picture libraries in other formats.
  • the media service 234 also includes an algorithm library for storing algorithms related to video processing in this application, for example, an algorithm for selecting a video segment, an algorithm for determining a playback rate of a corresponding video segment according to target information, and so on.
  • the framework layer 220 may include a graphic service (Graphic Service) 224, a system service (System Service) 221, a web service (Web Service) 222, and a user service (Customer Service) 223, etc.; the graphic service 224 may include, for example, an image codec, a video codec, and an audio codec.
  • the application layer 210 may include a gallery 211, a media player (Media Player) 212, a browser (Browser) 213, and so on.
  • the terminal 200 further includes a hardware layer 260.
  • the hardware layer of the terminal 200 may include a central processing unit (CPU) 251 and a graphics processing unit (GPU) 252 (together equivalent to a specific implementation of the processor 150 in FIG. 1), may include a memory 253 (equivalent to the memory 180 in FIG. 1) comprising internal memory and external storage, may include a positioning module 254 (equivalent to the positioning module 161 in FIG. 1) and a gyroscope 255 (equivalent to the gyroscope 162 in FIG. 1), may include a timer 256 (equivalent to the timer 163 in FIG. 1), and may include one or more sensors (equivalent to the sensor 120 in FIG. 1).
  • the hardware layer 260 may also include the power supply, camera, RF circuit, and WiFi module shown in FIG. 1, and may also include other hardware modules not shown in FIG. 1, such as a memory controller and a display controller.
  • FIG. 3 exemplarily shows a schematic structural diagram of a terminal 300.
  • the modules in the terminal 300 are functional modules divided according to function. In a specific implementation, some functional modules may be subdivided into smaller functional modules, and some functional modules may be combined into one functional module; but regardless of whether these functional modules are subdivided or combined, the general process the terminal performs to generate the special-effect video is the same. Generally, each functional module corresponds to its own computer program, and when the computer program corresponding to a functional module runs on the processor, the functional module executes the corresponding process to realize the corresponding function.
  • the terminal 300 includes a processing module 301, a function module 302, a storage module 303 (which may correspond to the memory 180 in FIG. 1), and a display module 304 (which may correspond to the display device 140 in FIG. 1). among them:
  • the processing module 301 includes an extraction module, an analysis module, and an adjustment module.
  • the extraction module can be used to perform the operation of obtaining target information in the video special effect generation method, and the analysis module can be used to determine the playback rate according to the target information.
  • the adjustment module can be used for adjusting the playback rate of the video, etc.
  • the function module 302 may include a camera module, a library module, a map module, a contact list module, and so on.
  • the camera module can be used to perform picture or video shooting operations, for example, it can be used to perform the operation of shooting target videos in the video special effect generation method, etc.
  • the library module can be used to perform picture and video management and maintenance operations, for example, to perform the operation of managing and maintaining the target video in the video special effect generation method.
  • the storage module 303 can be used to store computer programs, system data (such as operating system configuration parameters) and user data.
  • for example, the target information of the target video obtained by the terminal (such as the scene information in the target video, the shooting duration, and other information), as well as the captured video data, can be regarded as user data.
  • the display module 304 can be used to display pictures or videos on the display screen. For example, the video whose playback rate is adjusted in the method for generating special effects of a video can be played on the display screen.
  • the processor 150 shown in FIG. 1 may call a computer program stored in the memory 180 to implement the functions of the processing module 301.
  • the processor 150 shown in FIG. 1 may also call a computer program stored in the memory 180 to implement the functions of the function module 302 described above.
  • the following describes a method for generating video special effects provided by an embodiment of the present application.
  • the method may be implemented based on the structure shown in FIG. 1, FIG. 2 and FIG. 3 or other structures.
  • the method includes but is not limited to the following steps:
  • S401: The terminal obtains target information of a first video segment in a target video.
  • in an optional solution, the target information includes the content characteristics of the first video clip. In another optional solution, the target information includes shooting parameters of the first video clip. In another optional solution, the target information includes both the content features of the first video segment and the shooting parameters of the first video segment.
  • the target information may include other information in addition to the above-exemplified information, and other information will not be given as examples here.
  • the content feature of the first video clip may include one or more of the information of the screen scene in the first video clip, the shooting duration of the first video clip, and the picture change in the first video clip.
  • the shooting parameters of the first video clip may include one or more of the shooting focal length, shutter speed, and aperture of the first video clip.
  • the aforementioned target video only includes one video segment, and in this case, the first video segment is this video segment.
  • the foregoing target video includes multiple video clips, and the first video clip may be a video clip that satisfies the playback rate adjustment condition among the multiple clips.
  • the foregoing target video includes multiple video clips, and the first video clip is any one of them; that is, each of the multiple video clips satisfies the playback rate adjustment condition.
  • the above-mentioned target video may be a video that has already been shot and saved in the above-mentioned terminal; that is, the target video is not edited immediately after being shot but can first be saved in the terminal's memory, and when editing is needed the target video is obtained from the memory and the corresponding target information is obtained from the video.
  • the target video may also be a video being shot by the terminal, that is, the target information of the first video segment in the target video is acquired during the process of shooting the target video.
  • the following describes the specific implementation process of the terminal acquiring the target information of the first video segment in the target video.
  • specifically, the terminal obtains the target video and extracts the shooting parameters of the target video; it then analyzes the target video and the extracted shooting parameters to divide the target video into segments and determine the target information included in each video segment.
  • Mode 1: the aforementioned terminal can acquire each frame of the target video frame by frame in chronological order, recognize the screen content in each acquired frame image through image recognition technology, and then divide the target video into segments according to the category to which the screen content belongs; that is, a video composed of continuous frame images whose screen content belongs to the same category is divided into one video segment, and the screen scene of that video segment is the scene corresponding to the screen content. For example, suppose the content of the first 100 frames of images in the target video is a street view and the content of the next 200 frames of images is snow falling from the sky.
  • the terminal can then divide the target video into two video segments according to the difference between the earlier and later screen content: a first video segment composed of the first 100 frames of images and a subsequent video segment composed of the following 200 frames of images. The screen scene of the video segment composed of the first 100 frames is the street view, and the screen scene of the video segment composed of the following 200 frames is the snow.
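  • the Mode 1 segmentation described above can be sketched as follows in Python; `classify_frame` is a hypothetical stand-in for any image-recognition model, not an API named in this application:

```python
from itertools import groupby

def segment_by_scene(frames, classify_frame):
    """Group consecutive frames whose screen content belongs to the same
    category into one video segment, as in Mode 1."""
    labels = [classify_frame(f) for f in frames]
    segments = []
    start = 0
    for label, group in groupby(labels):  # groups runs of equal labels
        length = len(list(group))
        segments.append({"scene": label, "start": start, "length": length})
        start += length
    return segments
```

  • for the street/snow example above, 100 frames classified as a street view followed by 200 frames classified as snow would yield exactly two segments.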
  • Mode 2: the screen content of these frame images can be analyzed and compared to obtain the picture change speed of the target video or of the video segments included in the target video.
  • the picture change speed of a video clip may appear in the following two situations:
  • Case 1: the picture change speed of the video clip falls within the first change speed range.
  • the aforementioned terminal may extract one frame image at a first preset frame interval in chronological order, and then compare the screen contents of the extracted frame images one by one. Specifically, the screen content of the extracted first frame image and the extracted second frame image are compared to obtain the first similarity, and the screen content of the extracted second frame image and the extracted third frame image are compared to obtain the second similarity, etc. . That is, the screen content of the extracted i-th frame image and the extracted (i+1)-th frame image are compared to obtain the i-th similarity, where i is an integer greater than or equal to 1 and less than the number of frames of the extracted image.
  • the first preset frame interval may be any integer frame interval between 0 frame and the number of frames of the image included in the target video.
  • if, among the similarities obtained for a sequence of continuous frame images, the proportion of similarities lower than a first preset similarity is greater than a first preset ratio, it means that the scene in the video composed of those continuous frame images is changing quickly; the terminal can divide the video composed of the continuous frame images into one video segment, and the picture change of the video segment is that the picture change speed falls within the first change speed range.
  • the aforementioned first preset ratio may be any ratio between 70% and 100%, for example.
  • the aforementioned first preset similarity degree may be any similarity degree between 30% and 70%, for example.
  • the specific first preset ratio and the specific first preset similarity may be determined according to specific scenarios, and there is no limitation here.
  • here, the continuous frame images are any one of the one or more sequences of continuous frame images included in the target video.
  • the following example illustrates this. Suppose the above-mentioned target video includes 100 frames of images, one frame is extracted every other frame, and 50 frames of images are finally extracted. The screen content of the extracted first frame image and the extracted second frame image is compared to obtain the first similarity, the screen content of the extracted second frame image and the extracted third frame image is compared to obtain the second similarity, and so on; that is, the screen content of the extracted i-th frame image and the extracted (i+1)-th frame image is compared to obtain the i-th similarity, where i is an integer greater than or equal to 1 and less than 50.
  • FIG. 5 is a fluctuation graph of these similarities. The ordinate is the similarity, and the abscissa is the number of the similarity obtained by comparing the extracted i-th frame image with the extracted (i+1)-th frame image in chronological order; the similarity corresponding to number 1 is the aforementioned first similarity, the similarity corresponding to number 2 is the aforementioned second similarity, and so on.
  • Case 2: the picture change speed of the video clip falls within the second change speed range.
  • the aforementioned terminal may extract one frame image every first preset frame interval in chronological order, and then compare the screen content of the extracted frame images one by one. Specifically, the screen content of the extracted first frame image and the extracted second frame image is compared to obtain the first similarity, the screen content of the extracted second frame image and the extracted third frame image is compared to obtain the second similarity, and so on; that is, the screen content of the extracted q-th frame image and the extracted (q+1)-th frame image is compared to obtain the q-th similarity, where q is an integer greater than or equal to 1 and less than the number of frames extracted.
  • the first preset frame interval may be any integer frame interval between 0 frame and the number of frames of the image included in the target video.
  • if, among the similarities obtained for a sequence of continuous frame images, the proportion of similarities greater than a second preset similarity is large enough (for example, greater than a preset ratio), it means that the scene in the video composed of those continuous frame images is changing slowly, and this change can be called a change in which the picture change speed falls within the second change speed range.
  • the terminal can divide the video composed of the continuous frame images into one video segment, and the picture change of the video segment is that the picture change speed falls within the second change speed range.
  • the above-mentioned first preset ratio may be, for example, any ratio between 70% and 100%.
  • the aforementioned second preset similarity degree may be any similarity degree between 70% and 100%, for example.
  • the specific second preset ratio and the specific second preset similarity may be determined according to specific scenarios, and there is no limitation here.
  • in another possible implementation, the above terminal may extract one frame image every second preset frame interval in chronological order, and then compare each extracted frame image with the screen content of the extracted first frame image one by one. For example, the extracted second frame image is compared with the extracted first frame image to obtain the first similarity, the extracted third frame image is compared with the extracted first frame image to obtain the second similarity, and so on; that is, the extracted (j+1)-th frame image is compared with the extracted first frame image to obtain the j-th similarity, where j is an integer greater than or equal to 1 and less than the number of frames extracted.
  • the terminal can divide the video composed of the continuous frame images into one video segment, and the picture change of the video segment is that the picture change speed falls within the second change speed range.
  • the continuous frame image is any one of one or more continuous frame images included in the target video.
  • the following example illustrates this. Suppose the above target video includes 100 frames of images, one frame is extracted every other frame, and 50 frames of images are finally extracted. The screen content of the extracted second frame image and the extracted first frame image is compared to obtain the first similarity, the extracted third frame image is compared with the extracted first frame image to obtain the second similarity, and so on; that is, the extracted (j+1)-th frame image is compared with the extracted first frame image to obtain the j-th similarity, where j is an integer greater than or equal to 1 and less than 50.
  • FIG. 6 is a trend graph of these similarities. The ordinate is the similarity, and the abscissa is the number of the similarity obtained by comparing each extracted frame image with the extracted first frame image in chronological order; the similarity corresponding to number 1 is the first similarity, the similarity corresponding to number 2 is the second similarity, and so on.
  • It can be seen in FIG. 6 that the scene in the video composed of the first 22 continuous frame images of the target video is changing slowly; that is, the picture change speed of the video composed of the first 22 continuous frame images falls within the second change speed range.
  • the terminal can divide the video composed of these 22 continuous frames of images into one video segment; the picture change of this video segment is that the scene changes slowly.
  • the scene corresponding to the video composed of the 22 consecutive frames of images may be, for example, a flower gradually opening or withering.
  • any speed in the first change speed range is greater than any speed in the second change speed range.
  • Mode 3: the foregoing terminal analyzes and processes the extracted shooting parameters of the target video to divide the target video into segments and determine the target information included in each video segment.
  • the shooting parameter of the target video extracted above may be the focal length used when shooting the video, and the used focal length may include one or more.
  • the terminal can divide the target video into one or more video clips according to differences in focal length. For example, a video composed of video images shot with a focal length of 1x or less can be divided into one video segment, and the target information of that video segment can be determined as a shooting focal length of 1x or less; and/or a video composed of video images shot with a focal length of 3x or more can be divided into one video segment, and the target information of that video segment can be determined as a shooting focal length of 3x or more.
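  • a sketch of the focal-length rule above; the thresholds follow the 1x / 3x values in the text, while the function name and string labels are illustrative assumptions:

```python
def focal_length_target_info(focal_length):
    """Map the shooting focal length (as a zoom multiple) to the target
    information used for segmentation, per the thresholds described above."""
    if focal_length >= 3.0:   # 3x zoom or more
        return "focal length 3x or more"
    if focal_length <= 1.0:   # 1x zoom or less
        return "focal length 1x or less"
    return "intermediate focal length"
```

  • consecutive frames sharing the same focal-length label would then be grouped into one video segment, analogous to the scene-based grouping of Mode 1.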
  • Mode 4: the foregoing terminal divides the foregoing target video into one or more video segments, where the segmentation may be performed in the same way as in the foregoing Mode 1, Mode 2, or Mode 3.
  • the aforementioned terminal separately analyzes the one or more video clips to obtain the respective shooting durations, that is, the video durations, corresponding to the one or more video clips.
  • the aforementioned terminal saves the shooting duration of each video clip in the memory when segmenting the video; when the video type or playback rate needs to be matched according to the shooting duration of a video clip, the shooting duration information of the corresponding video clip can be obtained from the memory.
  • S402: The aforementioned terminal determines a first playback rate of the first video segment according to the target information of the first video segment.
  • in a possible implementation, the playback rate of the first video clip can be matched in a preset special effect mapping relationship according to the target information. The special effect mapping relationship defines the correspondence between target information and multiple playback rates, and the first playback rate is the playback rate obtained by matching from the special effect mapping relationship according to the above target information.
  • Table 1 is a mapping relationship table between the target information of the first video segment, obtained in different situations, and the playback rate. As can be seen in Table 1:
  • for the target information of the first video clip obtained through Mode 1 in Table 1: if the target information represents scenes such as dew drops, trickling water, waterfalls, rain, snow, butterflies flying, or bees collecting nectar, then the playback rate corresponding to the first video clip is the slow motion playback rate; if the target information characterizes scenes such as busy traffic, changing wind and clouds, a starry sky, or auroral changes, then the playback rate corresponding to the first video clip is the fast motion playback rate.
  • the slow motion playback rate may be a rate at which the number of frame images played per unit time (for example, 1 second) is less than a first preset number of frames; the fast motion playback rate may be a rate at which the number of frame images played per unit time (for example, 1 second) is greater than a second preset number of frames.
  • the first preset frame number may be any frame number less than or equal to 24 frames, and the second preset frame number may be any frame number greater than or equal to 24 frames.
  • for the target information of the first video segment obtained through Mode 2 in Table 1: if the target information represents that the picture change speed in the first video segment falls within the first change speed range, then the playback rate corresponding to the first video segment is the slow motion playback rate; if the target information indicates that the picture change speed in the first video segment falls within the second change speed range, then the playback rate corresponding to the first video segment is the fast motion playback rate.
  • the target information of the first video clip obtained through the third method in Table 1. If the target information characterizes the situation that the shooting focal length of the first video clip is greater than or equal to 3 times the focal length, then the first video clip corresponds to The playback rate is a slow motion playback rate. If the target information characterizes the situation where the shooting focal length of the first video segment is less than or equal to 1 times the focal length, then the playback rate corresponding to the first video segment is the fast motion playback rate.
  • the target information of the first video clip obtained through method 4 in Table 1. If the target information characterizes the situation that the shooting time of the first video clip is less than 10 seconds, then the corresponding playback rate of the first video clip is The slow motion playback rate. If the target information indicates that the shooting duration of the first video clip is greater than 10 minutes, then the playback rate corresponding to the first video clip is the fast motion playback rate.
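The scene-to-rate branch of Table 1 can be sketched as a simple lookup. This is a minimal illustration, not the patent's implementation; the scene labels and rate names are assumed stand-ins:

```python
# Minimal sketch of the Table 1 scene-to-rate mapping (illustrative only).
SLOW_SCENES = {"dew drops", "trickling stream", "waterfall", "rain", "snow",
               "butterflies flying", "bees collecting nectar"}
FAST_SCENES = {"busy traffic", "changing clouds", "starry sky", "auroral changes"}

def rate_for_scene(scene: str) -> str:
    """Return the playback-rate class for a recognized scene label."""
    if scene in SLOW_SCENES:
        return "slow-motion"
    if scene in FAST_SCENES:
        return "fast-motion"
    return "normal"  # scenes outside Table 1 keep the original rate

print(rate_for_scene("waterfall"))  # slow-motion
```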
  • Based on the principle of machine learning, the target information may also be input into a machine learning model, and the machine learning model outputs the first playback rate corresponding to the target information.
  • The first playback rate may also be obtained by calculation through a mathematical model, where the input of the model is one or more pieces of target information of the first video segment and the output is the first playback rate.
  • If the input to the mathematical model or machine learning model is information characterizing scenes such as dew drops, trickling streams, waterfalls, rain, snow, butterflies flying, or bees collecting nectar, the corresponding playback rate output by the model is the slow-motion playback rate; if the input is information characterizing scenes such as busy traffic, changing clouds, a starry sky, or auroral changes, the corresponding playback rate output by the model is the fast-motion playback rate.
  • If the input to the above mathematical model or machine learning model is information characterizing that the picture change speed in the first video segment falls within the first change speed range, the corresponding playback rate output by the model is the slow-motion playback rate; if the input is information characterizing that the picture change speed falls within the second change speed range, the corresponding playback rate output by the model is the fast-motion playback rate.
  • If the input to the above mathematical model or machine learning model is information indicating that the shooting focal length of the first video segment is greater than or equal to 3x focal length, the corresponding playback rate output by the model is the slow-motion playback rate; if the input is information indicating that the shooting focal length is less than or equal to 1x focal length, the corresponding playback rate output by the model is the fast-motion playback rate.
  • If the input to the above mathematical model or machine learning model is information characterizing that the shooting duration of the first video segment is less than 10 seconds, the corresponding playback rate output by the model is the slow-motion playback rate; if the input is information characterizing that the shooting duration is greater than 10 minutes, the corresponding playback rate output by the model is the fast-motion playback rate.
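As one hedged illustration of the "mathematical model" branch, a thresholded decision on the measured picture-change speed could look as follows; the threshold values and units are hypothetical assumptions, not values given by the patent:

```python
# Illustrative "mathematical model": map a measured picture-change speed to a
# playback-rate class using two assumed thresholds (values are hypothetical).
FIRST_RANGE_MIN = 30.0   # px/frame; speeds at or above this fall in the first range
SECOND_RANGE_MAX = 5.0   # px/frame; speeds at or below this fall in the second range

def rate_from_change_speed(speed: float) -> str:
    """Fast-changing pictures get slow motion; slow-changing pictures get fast motion."""
    if speed >= FIRST_RANGE_MIN:      # first change speed range -> slow motion
        return "slow-motion"
    if speed <= SECOND_RANGE_MAX:     # second change speed range -> fast motion
        return "fast-motion"
    return "normal"                   # in between, keep the original rate
```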
  • In addition to the correspondence described in Table 1 and the foregoing use of mathematical models or machine learning models, the manner in which the terminal determines the first playback rate of the first video segment according to the target information of the first video segment may also include the following:
  • The terminal determines the first video type of the first video segment according to the target information of the first video segment, and then matches, from a preset special effect mapping relationship, the first playback rate corresponding to the first video type of the first video segment, where the special effect mapping relationship defines the correspondence between multiple video types and multiple playback rates.
  • Specifically, the terminal may determine the video type of the first video segment according to the target information, for example, by computing the corresponding video type from the feature marks corresponding to the target information, and then match the corresponding playback rate from the special effect mapping relationship according to the video type. The special effect mapping relationship defines the correspondence between multiple video types and multiple playback rates, and the first video type is the video type determined according to the target information of the first video segment. For details, see Table 2.
  • Table 2 is a mapping table between video types determined according to different target information and playback rates. As can be seen in Table 2:
  • If the video type of the first video segment determined according to the target information is water flow, the first playback rate obtained by matching the water flow video type in Table 2 is the slow-motion playback rate.
  • If the video type of the first video segment determined according to the target information is rain or snow weather, the first playback rate obtained by matching in Table 2 is the slow-motion playback rate.
  • If the video type of the first video segment determined according to the target information is an animal close-up, the first playback rate obtained by matching the animal close-up in Table 2 is the slow-motion playback rate.
  • If the video type of the first video segment determined according to the target information is street, the first playback rate obtained by matching the street video type in Table 2 is the fast-motion playback rate.
  • If the video type of the first video segment determined according to the target information is a natural scene, the first playback rate obtained by matching the natural scene in Table 2 is the fast-motion playback rate.
  • If the video type of the first video segment determined according to the target information is fast-changing picture content, the first playback rate obtained by matching this video type in Table 2 is the slow-motion playback rate.
  • If the video type of the first video segment determined according to the target information is slow-changing picture content, the first playback rate obtained by matching this video type in Table 2 is the fast-motion playback rate.
  • If the video type of the first video segment determined according to the target information is a near-focus close-up, the first playback rate obtained by matching this video type in Table 2 is the slow-motion playback rate.
  • If the video type of the first video segment determined according to the target information is telephoto or wide-angle, the first playback rate obtained by matching this video type in Table 2 is the fast-motion playback rate.
  • If the video type of the first video segment determined according to the target information is a short shooting duration (for example, less than 10 seconds), the first playback rate obtained by matching in Table 2 is the slow-motion playback rate.
  • If the video type of the first video segment determined according to the target information is a long shooting duration (for example, greater than 10 minutes), the first playback rate obtained by matching in Table 2 is the fast-motion playback rate.
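The two-stage lookup just described (target information → video type → playback rate from Table 2) can be sketched as below; the type labels summarize Table 2 as restated here, and the function name is illustrative:

```python
# Illustrative second stage of the lookup: video type -> playback rate (Table 2).
TYPE_TO_RATE = {
    "water flow": "slow-motion",
    "rain and snow weather": "slow-motion",
    "animal close-up": "slow-motion",
    "fast picture changes": "slow-motion",
    "near-focus close-up": "slow-motion",
    "short shooting duration": "slow-motion",
    "street": "fast-motion",
    "natural scene": "fast-motion",
    "slow picture changes": "fast-motion",
    "telephoto or wide-angle": "fast-motion",
    "long shooting duration": "fast-motion",
}

def first_playback_rate(video_type: str) -> str:
    # Types not listed in the mapping keep the normal playback rate.
    return TYPE_TO_RATE.get(video_type, "normal")
```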
  • The foregoing terminal adjusts the playback rate of the first video segment to the first playback rate.
  • After the terminal determines the target information or type of the first video segment, it matches the corresponding playback rate from the special effect mapping relationship according to the target information or type, and then adjusts the first video segment to the corresponding playback rate.
  • The terminal can subsequently play the video segment at the adjusted playback rate.
  • For example, if the playback rate of a video before the adjustment is 24 frames per second and the adjusted playback rate is 48 frames per second, the playback speed of the video segment is accelerated to twice the original speed, and the terminal can play it at a playback speed of 48 frames of images per second.
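The 24 → 48 frames-per-second example amounts to a 2x speed-up; a small sketch of the arithmetic (the frame counts and rates are the example's own numbers):

```python
# Doubling the playback rate halves the playback duration of a clip.
def playback_seconds(frame_count: int, frames_per_second: float) -> float:
    """Time needed to play `frame_count` frames at the given rate."""
    return frame_count / frames_per_second

frames = 240                              # a 10-second clip recorded at 24 fps
original = playback_seconds(frames, 24)   # 10.0 s at the original rate
adjusted = playback_seconds(frames, 48)   # 5.0 s after the fast-motion adjustment
speedup = original / adjusted             # 2.0, i.e. twice the original speed
```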
  • The embodiments of this application do not require users to have artistic skill or editing ability. The device automatically determines the playback rate of the video according to the content of the shot video (such as the scene presented in the video) or some parameters used when shooting the video (such as the focal length), and then intelligently adjusts the playback speed of the video. Video works with rich rhythm and high sharing value can thus be obtained simply and quickly, editing efficiency is greatly improved, and the method is suitable for more users.
  • Determining the first playback rate of the first video segment according to the target information of the first video segment includes: when the first video segment is a video type whose shooting focal length is within a first focal length range, determining that the first playback rate is the slow-motion playback rate; when the first video segment is a video type whose shooting focal length is within a second focal length range, determining that the first playback rate is the fast-motion playback rate; where any focal length in the first focal length range is greater than any focal length in the second focal length range.
  • The first focal length range may be, for example, greater than or equal to 3x focal length, and the second focal length range may be, for example, less than or equal to 1x focal length.
  • The specific focal length range can be determined according to the specific situation and is not limited here. For the specific implementation of this embodiment, refer to the description corresponding to Table 2, which is not repeated here.
  • Determining the first playback rate of the first video segment according to the target information of the first video segment includes: when the first video segment is a video type whose shooting duration is within a first preset duration range, determining that the first playback rate is the slow-motion playback rate; when the first video segment is a video type whose shooting duration is within a second preset duration range, determining that the first playback rate is the fast-motion playback rate; where any duration in the first preset duration range is less than any duration in the second preset duration range.
  • The first preset duration range may be, for example, a shooting duration of less than 10 seconds, and the second preset duration range may be, for example, a shooting duration of greater than 10 minutes.
  • The specific preset duration range can be determined according to specific conditions and is not limited here. For the specific implementation of this embodiment, refer to the description corresponding to Table 2, which is not repeated here.
  • The target information of the first video segment includes at least two of the following kinds of information: information about the picture scene in the first video segment, the shooting focal length of the first video segment, the shooting duration of the first video segment, and the picture change situation in the first video segment. Determining the first playback rate of the first video segment according to the target information of the first video segment includes: determining the first playback rate based on the at least two kinds of information.
  • After the terminal obtains at least two of the information about the picture scene in the first video segment, the shooting focal length of the first video segment, the shooting duration of the first video segment, and the picture change situation in the first video segment, one playback rate result for the first video segment is determined according to each kind of information, so that at least two playback rate results for the first video segment can be determined.
  • The at least two playback rate results are then analyzed comprehensively to determine the playback rate corresponding to one of the results as the first playback rate of the first video segment.
  • For example, the first playback rate may be the playback rate that occurs most frequently among the playback rates characterized by the at least two playback rate results.
  • For example, suppose the terminal obtains two kinds of information, the picture scene and the shooting focal length, and the playback rate matched in Table 1 or Table 2 according to the picture scene information is the slow-motion playback rate. Assuming that the acquired shooting focal length is greater than or equal to 3x focal length, the playback rate matched in Table 1 or Table 2 according to that information is also the slow-motion playback rate. Since the playback rates determined based on both kinds of information are the slow-motion playback rate, the comprehensive analysis determines that the first playback rate of the first video segment is the slow-motion playback rate.
  • As another example, suppose the terminal obtains three kinds of information: the shooting focal length of the first video segment, the shooting duration of the first video segment, and the picture change situation in the first video segment. According to the shooting focal length, the playback rate matched in Table 1 or Table 2 is the slow-motion playback rate; according to the shooting duration, the playback rate matched in Table 1 or Table 2 is the fast-motion playback rate; and according to the picture change situation, the playback rate matched in Table 1 or Table 2 is the fast-motion playback rate. Among the playback rate results determined from the three kinds of information, two results characterize the fast-motion playback rate and only one characterizes the slow-motion playback rate, so the first playback rate of the first video segment is finally determined to be the fast-motion playback rate.
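The majority rule for combining the per-information results (pick the rate that occurs most often) can be sketched with a counter; the label strings are illustrative:

```python
from collections import Counter

def combine_rate_results(results: list[str]) -> str:
    """Pick the playback rate that occurs most often among the per-info results."""
    return Counter(results).most_common(1)[0][0]

# Example from the text: focal length -> slow, duration -> fast, picture change -> fast.
print(combine_rate_results(["slow-motion", "fast-motion", "fast-motion"]))  # fast-motion
```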
  • The above video special effect generation method further includes: acquiring target information of a second video segment in the target video, where the target information includes one or more of the content features of the second video segment and the shooting parameters of the second video segment; determining a second playback rate of the second video segment according to the target information of the second video segment; and adjusting the playback rate of the second video segment to the second playback rate.
  • That is, the target video may include multiple video segments, and the terminal may determine the playback rate of each video segment according to its acquired target information and adjust the playback rate of that segment accordingly.
  • The playback rates of the multiple video segments included in one video can thus be adjusted separately, which helps further enrich the playback rhythm of the video.
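Per-segment adjustment could be sketched as a loop in which each segment's rate is decided independently; the data shapes and type labels here are illustrative assumptions:

```python
# Illustrative per-segment adjustment: each segment's rate is decided on its own.
def adjust_segments(segments: list[dict]) -> list[dict]:
    rate_by_type = {"water flow": "slow-motion", "street": "fast-motion"}
    for seg in segments:
        # Segments whose type is outside the mapping keep the normal rate.
        seg["playback_rate"] = rate_by_type.get(seg["video_type"], "normal")
    return segments

clips = adjust_segments([{"video_type": "water flow"}, {"video_type": "street"}])
# -> the first segment plays in slow motion, the second in fast motion
```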
  • A graphical user interface (GUI) is composed of interface elements. Controls can include icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, widgets, and other interface elements.
  • FIG. 7 exemplarily shows a user interface 71 on a mobile phone for displaying the applications installed on the mobile phone.
  • The user interface 71 may include: a status bar 701, a calendar indicator 702, a weather indicator 703, a tray 704 with commonly used application icons, a navigation bar 705, a location information indicator 706, and other application icons, where:
  • The status bar 701 may include: an operator name (for example, "China Mobile") 701A, one or more signal strength indicators 701B of wireless fidelity (Wi-Fi) signals, one or more signal strength indicators 701C of mobile communication signals (also called cellular signals), a battery status indicator 701D, and a time indicator 701E.
  • the calendar indicator 702 can be used to indicate the current time, such as date, day of the week, hour and minute information, and so on.
  • the weather indicator 703 can be used to indicate the type of weather, such as cloudy to clear, light rain, etc., and can also be used to indicate information such as temperature.
  • the tray 704 with icons of commonly used applications can display: a phone icon 704A, an address book icon 704B, a short message icon 704C, and a camera icon 704D.
  • the navigation bar 705 may include system navigation keys such as a return key 705A, a main display key 705B, and a multi-task key 705C.
  • the location information indicator 706 may be used to indicate information such as the current city and/or the area of the city.
  • Other application icons can be, for example: mailbox icon 707, mobile phone manager icon 708, settings icon 709, gallery icon 710, and so on.
  • The user interface 71 may also include a page indicator 711.
  • The icons of the other applications may be distributed across multiple pages, and the page indicator 711 may be used to indicate which page of applications the user is currently browsing. The user can slide left and right over the area of the other application icons to browse the application icons on other pages.
  • the user interface 71 exemplarily shown in FIG. 7 may be a home screen of a mobile phone.
  • the mobile phone may also include a physical main display key.
  • The main display key can be used to receive an instruction from the user and return the currently displayed UI to the home interface, so that the user can view the home screen at any time.
  • The above instruction may be an operation instruction of the user pressing the main display key once, an operation instruction of the user pressing the main display key twice within a short period, or an operation instruction of the user long-pressing the main display key for a predetermined time.
  • The main display key may also be integrated with a fingerprint recognizer, so that when the main display key is pressed, fingerprint collection and recognition are performed accordingly.
  • FIG. 7 only exemplarily shows the user interface on the mobile phone, and should not constitute a limitation to the embodiment of the present application.
  • The aforementioned mobile phone responds to a click or touch operation on the camera icon 704D in the user interface 71 by displaying a user interface for taking pictures. In that interface, the mobile phone can enter recording mode in response to a click or touch operation on the video control; the user interface of the recording mode can be as shown in FIG. 8.
  • The user interface in FIG. 8 includes a flash control 801, an aperture control 802, a front/rear camera switch control 803, a video image 804, a camera control 805, a video control 806, and a gallery control 807, where:
  • the flash control 801 can be used to control the turning on and off of the flash
  • the aperture control 802 can be used to control the opening and closing of the aperture
  • the front and rear camera conversion control 803 can be used to adjust whether the camera for taking pictures or recording is a front camera or a rear camera;
  • the video image 804 can be used to display the image content captured by the camera in real time;
  • the camera control 805 can be used to switch to the camera mode when in the video mode, and also used to start the camera for shooting when in the camera mode;
  • the video control 806 can be used to switch to the video mode when in the photo mode, and also used to start the camera for recording and stop the camera when in the video mode;
  • the gallery control 807 can be used to view the photos that have been taken and the recorded videos.
  • the above-mentioned mobile phone activates the camera to record in response to a click or touch operation on the recording control 806.
  • During the recording process, the mobile phone responds to another click or touch operation on the recording control 806 by stopping the recording, thereby completing the recording of a video.
  • After the recording is completed, the mobile phone may display the user interface shown in FIG. 9.
  • The user interface shown in FIG. 9 includes a small prompt window 901.
  • The small prompt window 901 is mainly used to prompt that the mobile phone has completed the intelligent adjustment of the playback rhythm of the recorded video and to ask the user whether to accept the result of the intelligent tuning. The small window 901 also includes an accept button 903 and a cancel button 902.
  • The mobile phone may save the intelligently tuned video in response to a click or touch operation on the accept button 903, and in addition may also save the recorded original video at the same time.
  • The mobile phone can also respond to a click or touch operation on the cancel button 902 by canceling the result of this intelligent tuning and saving only the recorded original video.
  • The user interface shown in FIG. 9 also includes a play control 904, and the mobile phone can play the intelligently tuned video in response to a click or touch operation on the play control.
  • the following introduces another implementation example of a user interface for adjusting the video playback rate.
  • FIG. 10 may be a user interface diagram displayed by the mobile phone in response to a click or touch operation on the gallery icon 710 in the interface shown in FIG. 7; this user interface includes a theme name 1001, which may be, for example, "Gallery".
  • The gallery can include thumbnails of videos and pictures.
  • A video thumbnail also includes a playback control 1003 for marking the thumbnail as a video thumbnail, while a picture thumbnail does not.
  • The mobile phone can display full-size pictures, or display a video playback interface, on the display screen in response to clicks or touches on these thumbnails.
  • In response to a click or touch operation on a video thumbnail, the mobile phone may display the playback interface of the video on the display screen, as shown in FIG. 11.
  • the mobile phone can play the video in response to a click or touch operation on the playback control 1106.
  • The interface shown in FIG. 11 may also include an edit control 1102, a delete control 1103, a favorite control 1104, and a sharing control 1105.
  • the edit control 1102 can be used to edit the video displayed on the interface
  • the delete control 1103 can be used to delete the video displayed on the interface
  • the favorite control 1104 can be used to favorite the video displayed on the interface
  • the sharing control 1105 can be used to share the video displayed on the interface.
  • In response to a click or touch operation on the edit control 1102, the mobile phone may display a video editing interface on the display screen, such as the interface shown in FIG. 12.
  • The interface shown in FIG. 12 includes a video playback speed bar 1201.
  • The video playback speed bar 1201 includes multiple playback speed adjustment points 12011, each corresponding to a playback rate; from slow playback to fast playback, these adjustment points gradually increase the playback rate.
  • The interface shown in FIG. 12 also includes a video segment frame selection area 1202 for intelligently dividing the video into video segments according to the corresponding method in the method embodiment of the above video special effect generation method.
  • In FIG. 12, the video is divided into two video segments, segment 1 and segment 2.
  • In practice, the video can be divided into one or more video segments and displayed in the video segment frame selection area 1202.
  • The specific division of video segments can be determined according to the specific situation, and the embodiments of this application do not limit it.
  • The mobile phone has intelligently adjusted the playback rates of the two video segments in the video segment frame selection area 1202. As shown in FIG. 12, when segment 2 is selected in the video segment frame selection area 1202, the video playback speed bar 1201 correspondingly marks and displays the intelligently adjusted playback rate 12012 of segment 2.
  • If segment 1 is selected, the mobile phone will likewise mark and display the intelligently adjusted playback rate of segment 1 on the display screen.
  • the interface shown in Figure 12 also includes a video preview area 1203.
  • In response to a selection operation, the mobile phone can play the selected video segment in the video preview area 1203 at the playback rate selected in the video playback speed bar 1201.
  • In response to a click or touch operation on the playback control 1204, the mobile phone can also play the complete intelligently tuned video in the video preview area 1203.
  • The user can also manually select a video segment; after the selection is made, the video playback speed bar 1201 correspondingly displays the intelligently tuned playback rate. For the selected video segment, the user can also manually adjust the video playback speed bar 1201 to change the playback rate of that segment; see FIG. 13 and FIG. 14 for example.
  • Referring to FIG. 13, the user can manually select a video segment in the video segment frame selection area 1202; after the selection, the video playback speed bar 1201 correspondingly displays the intelligently tuned playback rate. Then, referring to FIG. 14, the user can select a playback rate on the video playback speed bar 1201 as the playback rate of the selected video segment.
  • The embodiments of the present application may also be applied to a folding-screen mobile phone.
  • FIG. 15 shows a user interface for video editing on a folding screen that has not been unfolded; before the display of the folding-screen mobile phone is unfolded, the effect is the same as the editing mode of a normal mobile phone.
  • When the folding-screen mobile phone is unfolded, half of the display screen displays the picture content of the video, and the other half displays, for example, the video playback speed bar 1201 and the video segment frame selection area 1202 of FIG. 12; see FIG. 16.
  • In addition, each video segment can individually correspond to a video playback speed bar 1201, so that the playback rate corresponding to each video segment can be clearly displayed to improve the user experience.
  • An embodiment of the present application provides a chip system. The chip system includes at least one processor, a memory, and an interface circuit. The memory, the interface circuit, and the at least one processor are interconnected by wires, and the at least one memory stores a computer program; when the computer program is executed by the processor, the method embodiment shown in FIG. 4 and its possible implementations can be implemented.
  • An embodiment of the present application also provides a computer-readable storage medium that stores a computer program; when the computer program is run by a processor, the method embodiment shown in FIG. 4 and its possible implementations can be implemented.
  • An embodiment of the present application also provides a computer program product; when the computer program product is run on a processor, the method embodiment shown in FIG. 4 and its possible implementations can be implemented.
  • All or part of the processes of the foregoing method embodiments can be completed by a computer program instructing relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments.
  • The aforementioned storage media include: a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.


Abstract

Embodiments of this application provide a video special effect generation method and a terminal. The method includes: a terminal acquires target information of a first video segment in a target video, where the target information includes one or more of the content features of the first video segment and the shooting parameters of the first video segment; the terminal determines a first playback rate of the first video segment according to the target information of the first video segment; then, the terminal adjusts the playback rate of the first video segment to the first playback rate. The embodiments of this application can improve video editing efficiency.

Description

Video special effect generation method and terminal
This application claims priority to Chinese Patent Application No. 201910769084.8, filed with the Chinese Patent Office on August 20, 2019 and entitled "Video special effect generation method and terminal", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of video processing technologies, and in particular, to a video special effect generation method and a terminal.
Background
As information becomes increasingly fragmented, short videos have become, as a new media form, the best carrier for quickly accessing information, and with the maturity of 5G technology, the application scenarios of short videos have been greatly enriched. Existing video rhythm adjustment techniques already include slow motion and time-lapse photography. When the slow-motion mode is selected for shooting, the captured content is automatically presented at a slower speed; shooting fast-moving objects or sports scenes in slow motion can present a special viewing effect. When time-lapse photography is selected, a process in which the captured content changes slowly over a long period is compressed into a short time, presenting wondrous scenes that cannot usually be perceived by the naked eye.
However, the slow motion and time-lapse photography currently provided on terminals merely offer the corresponding functions. To shoot an artistic video, the user still needs a certain artistic foundation. Ordinary users lack sufficient experience in controlling video rhythm; the videos they shoot have a monotonous rhythm, lack artistic expressiveness, are not worth watching, and have no sharing value. Although a recorded video can be re-edited to enrich its rhythm and make it more watchable, fast/slow-motion special effects are inconvenient to edit and require the user to have strong editing experience and artistic skill; otherwise, video works with rich rhythm and high sharing value cannot be produced. Therefore, how to obtain, simply and quickly, video works with rich rhythm and high sharing value when the user lacks an artistic foundation is a problem being studied by those skilled in the art.
Summary
The embodiments of this application disclose a video special effect generation method and a terminal, which can improve video editing efficiency and make it possible to obtain, simply and quickly, video works with rich rhythm and high sharing value.
According to a first aspect, an embodiment of this application discloses a video special effect generation method. The method includes: a terminal acquires target information of a first video segment in a target video, where the target information includes one or more of the content features of the first video segment and the shooting parameters of the first video segment; the terminal determines a first playback rate of the first video segment according to the target information of the first video segment; and the terminal adjusts the playback rate of the first video segment to the first playback rate.
Compared with the prior art, in which the playback rate of a video must be edited manually and strict requirements are imposed on artistic skill and editing ability, the embodiments of this application do not require the user to have artistic skill or editing ability. The device automatically determines the playback rate of the video according to the content of the shot video (such as the scene presented in the video) or some parameters used when shooting the video (such as the focal length), and then intelligently adjusts the playback speed of the video. Video works with rich rhythm and high sharing value can thus be obtained simply and quickly, editing efficiency is greatly improved, and the method is suitable for more users.
In some implementations, adjusting the playback rate of the first video segment to the first playback rate includes: playing the first video segment at the first playback rate. "Adjusting" here means "changing" the playback rate in some implementations, and "setting" the playback rate in others.
In this application, the playback rate is reflected in how fast or slow the video plays. Adjusting the playback rate is not limited to adjusting the value of the "rate" itself; the change in rate may also be implemented by adjusting other related parameters. For example, since playback rate = length of the played video / playback time, the playback rate may be adjusted by adjusting the playback time.
In some implementations, the first video segment refers to a part of the target video or to the target video itself.
In some implementations, the method is performed when the target video is shot, when the target video is saved, or before the target video or the first video segment is played.
In one possible implementation, the terminal determining the first playback rate of the first video segment according to the target information of the first video segment includes: the terminal determines a first video type of the first video segment according to the target information of the first video segment; and the terminal matches, from a preset special effect mapping relationship, the first playback rate corresponding to the first video type of the first video segment, where the special effect mapping relationship defines the correspondence between multiple video types and multiple playback rates. In another possible implementation, the first playback rate may be obtained by calculation through a mathematical model, where the input of the model is one or more pieces of target information of the first video segment and the output is the first playback rate.
It should be noted that, as far as determining the video type is concerned, a specific implementation may have a parameter corresponding to the video type that represents it, or may have no such parameter and instead directly play the video segment at the corresponding playback rate according to the various cases below. In the latter implementation, the notion of video classification exists but there is no parameter or code for a "video type". Since there are many possible code implementations, they are not enumerated here.
In one possible implementation, the target information of the first video segment includes the content features of the first video segment, and the content features of the first video segment include information about the picture scene in the first video segment.
In one possible implementation, the terminal matching, from the preset special effect mapping relationship, the first playback rate corresponding to the first video type of the first video segment includes: when the first video type is a video type of water flow, rain or snow weather, or an animal close-up, the terminal matches, from the preset special effect mapping relationship, the first playback rate corresponding to the first video type as the slow-motion playback rate; when the first video type is a video type of a street or a natural scene, the terminal matches, from the preset special effect mapping relationship, the first playback rate corresponding to the first video type as the fast-motion playback rate. These two cases may also be implemented through the aforementioned mathematical model; the various cases below are similar and are not described again.
In the embodiments of this application, each picture scene type in the special effect mapping relationship is matched with a playback rate. Determining the playback rate of a video segment by analyzing its picture scene can make the video segment more enjoyable to watch. In addition, by matching the playback rate corresponding to each picture scene type in the special effect mapping relationship in advance, the corresponding playback rate can be determined in the special effect mapping relationship as soon as the picture scene type is analyzed from the video segment, which improves the editing efficiency of the video segment.
In one possible implementation, the target information of the first video segment includes the shooting parameters of the first video segment, and the shooting parameters of the first video segment include the shooting focal length of the first video segment.
In one possible implementation, the terminal matching, from the preset special effect mapping relationship, the first playback rate corresponding to the first video type of the first video segment includes: when the first video type is a video type whose shooting focal length is within a first focal length range, the terminal matches, from the preset special effect mapping relationship, the first playback rate corresponding to the first video type as the slow-motion playback rate; when the first video type is a video type whose shooting focal length is within a second focal length range, the terminal matches, from the preset special effect mapping relationship, the first playback rate corresponding to the first video type as the fast-motion playback rate; where any focal length in the first focal length range is greater than any focal length in the second focal length range.
It should be noted that when a user shoots a video in near-focus mode, this indicates that the user is concerned with the details of the scene in the video, and when the user shoots a video with a far-focus or wide-angle setting, this indicates that the user is concerned with the global information of the scene in the video. Therefore, in the embodiments of this application, the playback rate of a video segment shot in near-focus mode is matched to the slow-motion playback rate, so that more details of the scene can be shown when the video segment is played; the playback rate of a video segment shot in far-focus or wide-angle mode is matched to the fast-motion playback rate, so that the global information of the scene can be shown quickly when the video segment is played, thereby presenting a better viewing effect for the user.
在其中一种可能的实现方式中,上述第一视频片段的目标信息包括上述第一视频片段的内容特征,上述第一视频片段的内容特征包含上述第一视频片段的拍摄时长。
在其中一种可能的实现方式中,上述终端从预设的特效映射关系中匹配上述第一视频片段的第一视频类型对应的第一播放速率,包括:上述终端在上述第一视频类型为拍摄时长在第一预设时长范围内的视频类型的情况下,从预设的特效映射关系中匹配上述第一视频类型对应的第一播放速率为慢动作的播放速率;上述终端在上述第一视频类型为拍摄时长在第二预设时长范围内的视频类型的情况下,从预设的特效映射关系中匹配上述第一视频类型对应的第一播放速率为快动作的播放速率;其中,上述第一预设时长范围内的任意一个时长小于上述第二预设时长范围内的任意一个时长。
需要说明的是,当用户花费较长的时间拍摄一个视频时表明用户关注视频中场景画面展现的整个过程的情况,当用户拍摄一个时长较短的视频时表明用户更关注视频中场景画面的具体细节。因此,在本申请实施例,将拍摄时长较长的视频片段的播放速率匹配为快动作的播放速率,以便于在播放该视频片段时可以快速展现场景画面的整个过程的情况;将拍摄时长较短的视频片段的播放速率匹配为慢动作的播放速率,以便于在播放该视频片段时可以展现出更多的场景画面的细节,从而为用户呈现出特殊的观看效果。
在其中一种可能的实现方式中,上述第一视频片段的目标信息包括上述第一视频片段的内容特征,上述第一视频片段的内容特征包含上述第一视频片段中的画面变化情况。
在其中一种可能的实现方式中,上述终端从预设的特效映射关系中匹配上述第一视频片段的第一视频类型对应的第一播放速率,包括:在所述第一视频类型为画面变化速度落入第一变化速度范围内的视频类型的情况下,从预设的特效映射关系中匹配所述第一视频类型对应的第一播放速率为慢动作的播放速率;在所述第一视频类型为画面变化速度落入第二变化速度范围内的视频类型的情况下,从预设的特效映射关系中匹配所述第一视频类型对应的第一播放速率为快动作的播放速率;其中,所述第一变化速度范围中的任意一个速度大于所述第二变化速度范围内的任意一个速度。
需要说明的是,当用户拍摄的视频中画面变化很快的时候,用户可能会关注视频中场景画面的细节,当用户拍摄的视频中画面变化很慢的时候,用户可能会关注视频中场景画面的整个变化的过程。因此,在本申请实施例,将画面变化很快的视频片段的播放速率匹配为慢动作的播放速率,以便于在播放该视频片段时可以展现出更多的场景画面的细节;将画面变化很慢的视频片段的播放速率匹配为快动作的播放速率,以便于在播放该视频片段时可以快速展现场景画面的整个变化过程,从而为用户呈现出符合观看需要的视频。
在其中一种可能的实现方式中,上述第一视频片段的目标信息包括如下信息中的至少两种信息:上述第一视频片段中画面场景的信息、上述第一视频片段的拍摄焦距、上述第一视频片段的拍摄时长和上述第一视频片段中的画面变化情况;上述终端根据上述第一视频片段的目标信息确定上述第一视频片段的第一播放速率,包括:上述终端根据上述至少两种信息确定上述第一视频片段的至少两个播放速率结果,其中,每个播放速率结果为基于上述至少两种信息中的一种信息确定得到;上述终端根据上述至少两个播放速率结果确定上述第一视频片段的第一播放速率。
本申请实施例通过视频片段的多个信息综合在特效映射关系中确定出该视频片段的播放速率,有利于进一步的优化该视频片段的观看效果,提高视频片段的分享价值。
在其中一种可能的实现方式中,上述第一播放速率为上述至少两个播放速率结果表征的播放速率中出现次数最多的播放速率。
在其中一种可能的实现方式中,上述方法还包括:上述终端获取目标视频中的第二视频片段的目标信息,上述目标信息包括上述第二视频片段的内容特征和上述第二视频片段的拍摄参数中的一项或多项;上述终端根据上述第二视频片段的目标信息确定上述第二视频片段的第二播放速率;上述终端将上述第二视频片段的播放速率调整为上述第二播放速率。
本申请实施例表明一个视频包括的多个视频片段可以分别调整其播放速率,有利于进一步丰富视频的播放节奏。
在其中一种可能的实现方式中,上述终端获取目标视频中的第一视频片段的目标信息,包括:上述终端在拍摄上述目标视频的过程中获取上述目标视频中的第一视频片段的目标信息。
本申请实施例在视频拍摄的过程中就对视频进行播放速率的调整,这样用户在拍摄完成后马上就可以看到效果,在提高编辑效率的同时提升了用户体验。
以上各种情况下的第一播放速率确定可以结合使用,具体选择哪一个播放速率可以由用户确定或默认选定。
第二方面,本申请实施例提供一种终端,该终端包括处理器和存储器,上述存储器存储有计算机程序,上述处理器用于调用上述计算机程序执行如下操作:获取目标视频中的第一视频片段的目标信息,上述目标信息包括上述第一视频片段的内容特征和上述第一视频片段的拍摄参数中的一项或多项;根据上述第一视频片段的目标信息确定上述第一视频片段的第一播放速率;将上述第一视频片段的播放速率调整为上述第一播放速率。
相比于现有技术需要人为编辑视频的播放速率,对艺术功底和编辑能力有严格的要求,本申请实施例不需要用户具备艺术功底和编辑能力,设备根据拍摄的视频中的内容(如视频呈现出的场景),或者拍摄视频时的一些参数(如焦距),自动确定视频的播放速率,然后智能调整视频的播放速度,可以简单快速地获得节奏丰富、分享价值高的视频作品,编辑效率大大提高,同时适用于更多的用户。
在其中一种可能的实现方式中,上述处理器根据上述第一视频片段的目标信息确定上述第一视频片段的第一播放速率,具体为:根据上述第一视频片段的目标信息确定上述第一视频片段的第一视频类型;从预设的特效映射关系中匹配上述第一视频片段的第一视频类型对应的第一播放速率,其中,上述特效映射关系定义了多个视频类型与多个播放速率的对应关系。在另一种可能的实现方式中,上述处理器可以通过数学模型计算获得第一播放速率,该模型的输入为所述第一视频片段的一种或多种目标信息,输出为第一播放速率。
在其中一种可能的实现方式中,上述第一视频片段的目标信息包括上述第一视频片段的内容特征,上述第一视频片段的内容特征包含上述第一视频片段中画面场景的信息。
在其中一种可能的实现方式中,上述处理器从预设的特效映射关系中匹配上述第一视频片段的第一视频类型对应的第一播放速率,具体为:在上述第一视频类型为水流、雨雪天气或动物特写的视频类型的情况下,从预设的特效映射关系中匹配上述第一视频类型对应的第一播放速率为慢动作的播放速率;在上述第一视频类型为街道或自然景象的视频类型的情况下,从预设的特效映射关系中匹配上述第一视频类型对应的第一播放速率为快动作的播放速率。这两种情况也可以通过前述数学模型实现,下面的多种情况类似,不再赘述。
在其中一种可能的实现方式中,所述第一视频片段的目标信息包括所述第一视频片段的拍摄参数,所述第一视频片段的拍摄参数包含所述第一视频片段的拍摄焦距。
在其中一种可能的实现方式中,上述处理器从预设的特效映射关系中匹配上述第一视频片段的第一视频类型对应的第一播放速率,具体为:在上述第一视频类型为拍摄焦距在第一焦距范围内的视频类型的情况下,从预设的特效映射关系中匹配上述第一视频类型对应的第一播放速率为慢动作的播放速率;在上述第一视频类型为拍摄焦距在第二焦距范围内的视频类型的情况下,从预设的特效映射关系中匹配上述第一视频类型对应的第一播放速率为快动作的播放速率;其中,上述第一焦距范围内的任意焦距大于上述第二焦距范围内的任意焦距。
在其中一种可能的实现方式中,上述第一视频片段的目标信息包括上述第一视频片段的内容特征,上述第一视频片段的内容特征包含上述第一视频片段的拍摄时长。
在其中一种可能的实现方式中,上述处理器从预设的特效映射关系中匹配上述第一视频片段的第一视频类型对应的第一播放速率,具体为:在上述第一视频类型为拍摄时长在第一预设时长范围内的视频类型的情况下,从预设的特效映射关系中匹配上述第一视频类型对应的第一播放速率为慢动作的播放速率;在上述第一视频类型为拍摄时长在第二预设时长范围内的视频类型的情况下,从预设的特效映射关系中匹配上述第一视频类型对应的第一播放速率为快动作的播放速率;其中,上述第一预设时长范围内的任意一个时长小于上述第二预设时长范围内的任意一个时长。
在其中一种可能的实现方式中,上述第一视频片段的目标信息包括上述第一视频片段的内容特征,上述第一视频片段的内容特征包含上述第一视频片段中的画面变化情况。
在其中一种可能的实现方式中,上述处理器从预设的特效映射关系中匹配上述第一视频片段的第一视频类型对应的第一播放速率,具体为:在所述第一视频类型为画面变化速度落入第一变化速度范围内的视频类型的情况下,从预设的特效映射关系中匹配所述第一视频类型对应的第一播放速率为慢动作的播放速率;在所述第一视频类型为画面变化速度落入第二变化速度范围内的视频类型的情况下,从预设的特效映射关系中匹配所述第一视频类型对应的第一播放速率为快动作的播放速率;其中,所述第一变化速度范围中的任意一个速度大于所述第二变化速度范围内的任意一个速度。
在其中一种可能的实现方式中,上述第一视频片段的目标信息包括如下信息中的至少两种信息:上述第一视频片段中画面场景的信息、上述第一视频片段的拍摄焦距、上述第一视频片段的拍摄时长和上述第一视频片段中的画面变化情况;上述处理器根据上述第一视频片段的目标信息确定上述第一视频片段的第一播放速率,具体为:根据上述至少两种信息确定上述第一视频片段的至少两个播放速率结果,其中,每个播放速率结果为基于上述至少两种信息中的一种信息确定得到;根据上述至少两个播放速率结果确定上述第一视频片段的第一播放速率。
在其中一种可能的实现方式中,上述第一播放速率为上述至少两个播放速率结果表征的播放速率中出现次数最多的播放速率。
在其中一种可能的实现方式中,上述处理器还执行如下操作:获取目标视频中的第二视频片段的目标信息,上述目标信息包括上述第二视频片段的内容特征和上述第二视频片段的拍摄参数中的一项或多项;根据上述第二视频片段的目标信息确定上述第二视频片段的第二播放速率;将上述第二视频片段的播放速率调整为上述第二播放速率。
本申请实施例表明一个视频包括的多个视频片段可以分别调整其播放速率,有利于进一步丰富视频的播放节奏。
在其中一种可能的实现方式中,上述处理器获取目标视频中的第一视频片段的目标信息,具体为:在拍摄上述目标视频的过程中获取上述目标视频中的第一视频片段的目标信息。
本申请实施例在视频拍摄的过程中就对视频进行播放速率的调整,这样用户在拍摄完成后马上就可以看到效果,在提高编辑效率的同时提高了用户体验。
第三方面,本申请实施例提供一种终端,该终端包括用于执行第一方面或者第一方面的任一可能的实现方式所描述的方法的单元。
第四方面,本申请实施例提供一种芯片系统,该芯片系统包括至少一个处理器、存储器和接口电路,该存储器、该接口电路和该至少一个处理器通过线路互联,该存储器中存储有计算机程序;该计算机程序被该处理器执行时,实现第一方面或者第一方面的任一可能的实现方式所描述的方法。所述存储器也可以设置在芯片系统之外,此时所述处理器通过所述接口电路执行所述存储器中的计算机程序。
第五方面,本申请实施例提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机程序,当该计算机程序被处理器执行时,实现第一方面或者第一方面的任一可能的实现方式所描述的方法。
第六方面,本申请实施例提供一种计算机程序产品,当该计算机程序产品在处理器上运行时,实现第一方面或者第一方面的任一可能的实现方式所描述的方法。
综上所述,相比于现有技术需要人为编辑视频的播放速率,对艺术功底和编辑能力有严格的要求,本申请实施例不需要用户具备艺术功底和编辑能力,设备根据拍摄的视频中的内容(如视频呈现出的场景),或者拍摄视频时的一些参数(如焦距),自动确定视频的播放速率,然后智能调整视频的播放速度,可以简单快速地获得节奏丰富、分享价值高的视频作品,编辑效率大大提高,同时适用于更多的用户。
附图说明
图1是本申请实施例提供的一种终端的结构示意图;
图2是本申请实施例提供的一种操作系统的结构示意图;
图3是本申请实施例提供的一种终端的逻辑结构示意图;
图4是本申请实施例提供的一种视频特效生成方法的流程示意图;
图5是本申请实施例提供的一种画面内容相似度波动示意图;
图6是本申请实施例提供的一种画面内容相似度趋势示意图;
图7至图16是基于本申请实施例提供的视频特效生成方法的终端用户界面示意图。
具体实施方式
下面结合本申请实施例中的附图对本申请实施例进行描述。
本申请实施例所涉及到的终端可以包括手持设备(例如,手机、平板电脑、掌上电脑等)、车载设备(例如,汽车、自行车、电动车、飞机、船舶等)、可穿戴设备(例如智能手表(如iWatch等)、智能手环、计步器等)、智能家居设备(例如,冰箱、电视、空调、电表等)、智能机器人、车间设备,以及各种形式的用户设备(User Equipment,UE)、移动台(Mobile station,MS)、终端设备(Terminal Equipment),等等。可选的,终端通常支持多种应用程序,如相机应用程序、文字处理应用程序、电话应用程序、电子邮件应用程序、即时消息应用程序、照片管理应用程序、网络浏览应用程序、数字音乐播放器应用程序和/或数字视频播放器应用程序等等。
请参见图1,图1为本申请实施例应用的终端100的结构示意图。该终端100包括存储器180、处理器150以及显示设备140。存储器180存储计算机程序,计算机程序包括操作系统程序182和应用程序181等,其中,应用程序181包括浏览器程序。处理器150用于读取存储器180中的计算机程序,然后执行计算机程序定义的方法,例如处理器150读取操作系统程序182从而在该终端100上运行操作系统以及实现操作系统的各种功能,或读取一种或多种应用程序181,从而在该终端上运行应用,例如,读取相机应用程序来运行相机。
处理器150可以包括一个或多个处理器,例如,处理器150可以包括一个或多个中央处理器。当处理器150包括多个处理器时,这多个处理器可以集成在同一块芯片上,也可以各自为独立的芯片。一个处理器可以包括一个或多个处理核,以下实施例均以多核为例来介绍,但是本申请实施例提供的视频特效生成方法也可以应用于单核处理器。
另外,存储器180还存储有除计算机程序之外的其他数据183,其他数据183可包括操作系统182或应用程序181被运行后产生的数据,该数据包括系统数据(例如操作系统的配置参数)和用户数据,例如终端获取目标视频的目标信息(例如,目标视频中的画面场景信息、拍摄时长等信息),另外,还有拍摄的视频数据等都可看作是用户数据。
存储器180一般包括内存和外存。内存可以为随机存储器(RAM),只读存储器(ROM),以及高速缓存(CACHE)等。外存可以为硬盘、光盘、USB盘、软盘或磁带机等。计算机程序通常被存储在外存上,处理器在执行处理前会将计算机程序从外存加载到内存。本申请实施例中的视频可以存储在外存上,当需要对该视频编辑时,可以将该需要编辑的视频先加载到内存。
操作系统程序182中包含了可实现本申请实施例提供的视频特效生成方法的计算机程序,从而使得处理器150读取到该操作系统程序182并运行该操作系统后,该操作系统可具备本申请实施例提供的视频特效生成功能。进一步的,该操作系统可以向上层的应用开放该视频特效生成功能的调用接口,处理器150从存储器180中读取应用程序181并运行该应用后,该应用就可以通过该调用接口调用操作系统中提供的视频特效生成功能,从而实现对视频的编辑。
终端100还可以包括输入设备130,用于接收输入的数字信息、字符信息或接触式触摸操作/非接触式手势,以及产生与终端100的用户设置以及功能控制有关的信号输入等。具体地,本申请实施例中,该输入设备130可以包括触控面板131。触控面板131,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板131上或在触控面板131附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板131可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给该处理器150,并能接收处理器150发来的命令并加以执行。例如,用户在触控面板131上用手指单击某个虚拟按钮,触摸检测装置检测到此次单击带来的这个信号,然后将该信号传送给触摸控制器,触摸控制器再将这个信号转换成坐标发送给处理器150,处理器150根据该坐标和该信号的类型(单击或双击)执行视频的选择、编辑等操作,最后将编辑结果显示在显示面板141上。
触控面板131可以采用电阻式、电容式、红外线以及表面声波等多种类型实现。除了触控面板131,输入设备130还可以包括其他输入设备132,其他输入设备132可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
终端100还可以包括显示设备140,显示设备140包括显示面板141,用于显示由用户输入的信息或提供给用户的信息以及终端100的各种菜单界面等,在本申请实施例中主要用于显示视频编辑后的结果、显示本申请实施例的视频等信息。可选的,可以采用液晶显示器(英文:Liquid Crystal Display,简称:LCD)或有机发光二极管(英文:Organic Light-Emitting Diode,简称:OLED)等形式来配置显示面板141。在其他一些实施例中,触控面板131可覆盖在显示面板141上,形成触摸显示屏。
除以上之外,终端100还可以包括用于给其他模块供电的电源190以及用于拍摄照片或视频的摄像头160、获取终端的地理位置的定位模块(如GPS)161、获取终端的摆放姿态(如角度、方位等)的陀螺仪162、记录时间的定时器163;其中,本申请实施例编辑过程中用到的视频可以为通过该摄像头160拍摄得到。终端100还可以包括一个或多个传感器120,例如加速度传感器、光传感器等。终端100还可以包括无线射频(Radio Frequency,RF)电路110,用于与无线网络设备进行网络通信,还可以包括WiFi模块170,用于与其他设备进行WiFi通信。
基于上述介绍的本申请实施例应用的终端100的结构,下面结合图2以Android操作系统为例,介绍本申请实施例提供的视频特效生成方法的实现位置所涉及的操作系统的各个部件。
图2为本申请实施例提供的终端200的系统结构示意图。该终端200可以是本申请实施例的设备,例如可以是图1所示的终端100。该终端200包括应用层210和操作系统层250,该操作系统可以为Android操作系统。操作系统层250又分为框架层220、核心库层230和驱动层240。其中,图2中的操作系统层250可以认为是图1中操作系统182的一种具体实现,图2中的应用层210可以认为是图1中应用程序181的一种具体实现。驱动层240包括CPU驱动241、GPU驱动242、显示控制器驱动243、定位模块驱动244、陀螺仪驱动245和定时器驱动246等。核心库层230是操作系统的核心部分,包括输入/输出服务231、核心服务232、媒体服务234等,该媒体服务234中包含JPEG格式的图片库1、PNG格式的图片库2以及其他格式的图片库,该媒体服务234还包括算法库,该算法库用于存储本申请中与视频处理相关的算法,例如,选择视频片段的算法,根据目标信息确定对应视频片段的播放速率的算法等。框架层220可包括图形服务(Graphic Service)224、系统服务(System service)221、网页服务(Web Service)222和用户服务(Customer Service)223等;图形服务224中,可包括如图像编码Codec、视频编码Codec以及音频编码Codec等。应用层210可包括图库211、媒体播放器(Media Player)212以及浏览器(Browser)213等。
另外,在驱动层240之下,该终端200还包括硬件层260。该终端200的硬件层可以包括中央处理器(Central Processing Unit,CPU)251和图形处理器(Graphic Processing Unit,GPU)252(相当于图1中的处理器150的一种具体实现),还可以包括存储器253(相当于图1中的存储器180),包括内存和外存,还可以包括定位模块254(相当于图1中的定位模块161)、陀螺仪255(相当于图1中的陀螺仪162),还可以包括定时器256(相当于图1中的定时器163),还可以包括一个或多个传感器(相当于图1中的传感器120)。当然除此之外,硬件层260还可以包括图1中示出的电源、摄像头、RF电路和WiFi模块,还可以包括图1中也没有示出的其他硬件模块,例如内存控制器和显示控制器等。
图3示例性给出了一种终端300的结构示意图,该终端中的模块为根据功能划分出的模块,可以理解的是,上述各个模块是根据功能划分出的功能模块,在具体实现中其中部分功能块可能被细分为更多细小的功能模块,部分功能模块也可能组合成一个功能模块,但无论这些功能模块是进行了细分还是组合,终端将视频生成特效视频过程中所执行的大致流程是相同的。通常,每个功能模块都对应有各自的计算机程序,这些功能模块各自对应的计算机程序在处理器上运行时,使得功能模块执行相应的流程从而实现相应功能。
所述终端300包括处理模块301、功能模块302、存储模块303(可以对应为图1中的存储器180)和显示模块304(可以对应为图1中的显示设备140)。其中:
处理模块301包括提取模块、分析模块和调整模块,提取模块可以用于执行视频特效生成方法中获取目标信息的操作等,分析模块可以用于根据目标信息确定播放速率的操作等,调整模块可以用于调整视频的播放速率的操作等。
功能模块302可以包括相机模块、图库模块、地图模块、通信录模块等等。其中,相机模块可以用于执行图片或视频的拍摄操作,例如可以用于执行视频特效生成方法中拍摄目标视频的操作等,图库模块可以用于执行图片和视频的管理维护操作,例如可以用于执行视频特效生成方法中管理维护目标视频的操作等。
存储模块303可以用于存储计算机程序、系统数据(例如操作系统的配置参数)和用户数据,例如终端获取目标视频的目标信息(例如,目标视频中的画面场景信息、拍摄时长等信息),另外,还有拍摄的视频数据等都可看作是用户数据。
显示模块304可以用于将图片或视频显示在显示屏上,例如可以将视频特效生成方法中调整了播放速率的视频在显示屏中播放等。
可选的,上述图1所示的处理器150可以调用存储器180中存储的计算机程序来实现上述处理模块301的功能。上述图1所示的处理器150还可以调用存储器180中存储的计算机程序来实现上述功能模块302的功能。
参见图4,下面介绍本申请实施例提供的一种视频特效生成方法,该方法可以基于图1、图2和图3所示的结构或者其他结构来实现,该方法包括但不限于如下步骤:
S401、终端获取目标视频中的第一视频片段的目标信息。
一种可选方案中,上述目标信息包括上述第一视频片段的内容特征,又一种可选方案中,上述目标信息包括上述第一视频片段的拍摄参数,又一种可选的方案上述目标信息包括上述第一视频片段的内容特征和上述第一视频片段的拍摄参数。可选的,该目标信息除了包括以上例举的信息之外,还可以包括其他信息,其他信息此处不一一举例。
其中,上述第一视频片段的内容特征可以包含上述第一视频片段中画面场景的信息、上述第一视频片段的拍摄时长、上述第一视频片段中的画面变化情况等信息中的一项或多项;上述第一视频片段的拍摄参数可以包含上述第一视频片段的拍摄焦距、快门速度、光圈等信息中的一项或多项。
本申请实施例中的第一视频片段存在很多可能的情况,例如,上述目标视频仅包含一个视频片段,这种情况下该第一视频片段就是这个视频片段。再如,上述目标视频包含多个视频片段,该第一视频片段可以是该多个片段中满足播放速率调整条件的视频片段。再如,上述目标视频包含多个视频片段,该第一视频片段是该多个片段中任意一个视频片段,即该多个视频片段中每个视频片段均满足播放速率调整的特征。
此外,上述目标视频可以是拍摄好保存在上述终端中的视频,即该目标视频被拍摄完成之后不立即编辑,可以先保存到终端存储器中,当需要编辑的时候再从存储器获取该目标视频然后根据该视频获取对应的目标信息。或者,上述目标视频也可以是上述终端正在拍摄的视频,即在拍摄该目标视频的过程中获取该目标视频中的第一视频片段的目标信息。
下面介绍终端获取目标视频中的第一视频片段的目标信息的具体实现过程。
在具体实施例中,上述终端获取上述目标视频以及提取该目标视频的拍摄参数;然后,对提取到的上述目标视频和提取到的该目标视频的拍摄参数进行分析处理以对该目标视频进行分段,并明确每一个视频片段包括的目标信息。
下面举例四种确定目标视频中的视频片段,以及确定视频片段中的目标信息的方式。
方式一:
上述终端可以按时间的先后顺序逐帧获取该目标视频包括的每一帧图像,然后通过图像识别技术识别获取到的帧图像中的画面内容,然后根据画面内容所属的类别对上述目标视频进行分段,即画面内容所属的类别相同的连续帧图像组成的视频分为一个视频片段,则分段得到的视频片段的画面场景即为其画面内容所对应的场景。例如,假如该目标视频中开始的100帧图像的画面内容是街景,后面的200帧图像的画面内容是天空中飘落着雪花。那么终端可以根据该目标视频前后画面内容的不同将该目标视频分为两个视频片段:开始的100帧图像组成的视频片段和后面的200帧图像组成的视频片段。则该开始的100帧图像组成的视频片段的画面场景即为街景,该后面的200帧图像组成的视频片段的画面场景即为下雪。
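方式一的分段逻辑可以概括为"把画面内容类别相同的连续帧归并为一个片段"。下面用 Python 的 itertools.groupby 给出一个示意性草图(逐帧类别序列作为假设输入,图像识别本身不在本例范围内):

```python
from itertools import groupby


def split_by_scene(frame_labels):
    """将逐帧识别出的画面类别序列按连续相同类别归并为片段,
    返回 (类别, 起始帧号, 结束帧号) 列表,帧号从 0 开始。"""
    segments = []
    start = 0
    for label, group in groupby(frame_labels):
        length = len(list(group))
        segments.append((label, start, start + length - 1))
        start += length
    return segments


# 文中示例:前 100 帧画面内容为街景,后 200 帧为下雪,
# 按类别的不同分为两个视频片段。
labels = ["street"] * 100 + ["snow"] * 200
print(split_by_scene(labels))  # [('street', 0, 99), ('snow', 100, 299)]
```

分段得到的每个片段的画面场景,即为其帧类别所对应的场景(如街景、下雪)。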
方式二:
上述终端识别出上述目标视频中每一帧图像的画面内容后,可以分析比较这些帧图像的画面内容以得到该目标视频或者该目标视频包括的视频片段的画面变化速度。
视频片段的画面变化速度可能出现如下两种情况:
第一种情况、视频片段的画面变化速度落入第一变化速度范围内。
在第一种可能的实现方式中,上述终端可以按时间先后顺序每隔第一预设帧间隔抽取一个帧图像,然后先后逐一比较抽取的帧图像的画面内容。具体的,抽取的第一帧图像与抽取的第二帧图像的画面内容比较得到第一相似度、抽取的第二帧图像与抽取的第三帧图像的画面内容比较得到第二相似度等等。即抽取的第i帧图像与抽取的第(i+1)帧图像的画面内容比较得到第i相似度,其中,i为大于等于1且小于抽取到的图像的帧数的整数。上述第一预设帧间隔可以是0帧至上述目标视频包括的图像的帧数之间的任意一个整数帧间隔。
如果上述比较得到的相似度的部分或全部相似度中有大于或等于第一预设比例的相似度小于第一预设相似度,则表明即该部分或全部相似度对应的抽取帧图像之前的连续帧图像所组成的视频中的景物在连续明显地变化,可以称这种变化为画面变化速度落入第一变化速度范围内的变化。那么终端可以将该连续帧图像组成的视频分割为一个视频片段,那么该视频片段的画面变化情况即为画面变化速度落入第一变化速度范围内。上述第一预设比例例如可以是70%至100%之间的任意一个比例。上述第一预设相似度例如可以是30%至70%之间的任意一个相似度。具体的第一预设比例和具体的第一预设相似度可以根据具体场景确定,此处不做限制。
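第一种实现方式中"大于或等于第一预设比例的相似度小于第一预设相似度"的判断,可以用如下 Python 草图表示(其中 70% 的比例门限与 0.5 的相似度门限,取自文中给出的范围内的示例值,属本文假设):

```python
def is_fast_changing(similarities, ratio_threshold=0.7, sim_threshold=0.5):
    """若相邻抽取帧相似度序列中,低于 sim_threshold(第一预设相似度)
    的相似度所占比例达到 ratio_threshold(第一预设比例),
    则认为画面变化速度落入第一变化速度范围内(景物在连续明显地变化)。"""
    if not similarities:
        return False
    low = sum(1 for s in similarities if s < sim_threshold)
    return low / len(similarities) >= ratio_threshold


print(is_fast_changing([0.2, 0.3, 0.4, 0.8]))   # True:4 个相似度中有 3 个低于 0.5
print(is_fast_changing([0.9, 0.8, 0.3, 0.85]))  # False:仅 1 个低于 0.5
```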
在第二种可能的实现方式中,上述终端可以按时间先后顺序每隔第一预设帧间隔抽取一个帧图像,然后先后逐一比较抽取的帧图像的画面内容,具体的,抽取的第一帧图像与抽取的第二帧图像的画面内容比较得到第一相似度、抽取的第二帧图像与抽取的第三帧图像的画面内容比较得到第二相似度,即抽取的第i帧图像与抽取的第(i+1)帧图像的画面内容比较得到第i相似度,其中,i为大于等于1且小于抽取到的图像的帧数的整数。
如果上述得到的第一相似度、第二相似度,…,第i相似度中某些连续编号(例如第一相似度、第二相似度、第三相似度、第四相似度等等)的相似度呈波动变化,那么表明该某些连续编号的相似度对应的抽取帧图像之前的连续帧图像所组成的视频中的景物在连续明显地变化,可以称这种变化为画面变化速度落入第一变化速度范围内的变化。那么终端可以将该连续帧图像组成的视频分割为一个视频片段,那么该视频片段的画面变化情况即为画面变化速度落入第一变化速度范围内。其中,该连续帧图像为上述目标视频中包括的一个或多个连续帧图像中的任意一个,上述第一预设帧间隔可以是0帧至上述目标视频包括的图像的帧数之间的任意一个整数帧间隔。
为了便于理解,下面举例说明。假设上述目标视频包括100帧图像,然后在这100帧图像中每隔1个帧图像就抽取一个帧图像,最后抽取得到50帧图像,然后将抽取的第一帧图像与抽取的第二帧图像的画面内容比较得到第一相似度、将抽取的第二帧图像与抽取的第三帧图像的画面内容比较得到第二相似度,…。即将抽取的第i帧图像与抽取的第i加1帧图像的画面内容比较得到第i相似度,其中,i为大于等于1且小于50的整数。通过分析发现第一相似度至第十相似度这十个相似度呈波动状态,例如参见图5,图5为该十个相似度的波动图。在图5中,纵坐标为相似度,横坐标为按时间先后顺序比较抽取的第i帧图像与抽取的第i加1帧图像的画面内容得到的相似度的编号,例如编号1对应的相似度为上述第一相似度,编号2对应的相似度为上述第二相似度等等。在图5中可以看到,这些相似度呈波动状态,这表明该第一相似度至第十相似度这十个相似度对应的抽取帧图像之前的连续帧图像,即上述目标视频的前22帧连续图像所组成的视频中的景物在连续明显地变化,即该前22帧连续图像所组成的视频的画面变化速度落入第一变化速度范围内。那么终端可以将该连续22帧图像组成的视频分割为一个视频片段,那么该视频片段的画面变化情况即为景物在连续明显地变化。该连续22帧的图像组成的视频对应的场景例如可以是某一个武打动作的场景。
第二种情况、视频片段的画面变化速度落入第二变化速度范围内。
在第一种可能的实现方式之中,上述终端可以按时间先后顺序每隔第一预设帧间隔抽取一个帧图像,然后先后逐一比较抽取的帧图像的画面内容,具体的,抽取的第一帧图像与抽取的第二帧图像的画面内容比较得到第一相似度、抽取的第二帧图像与抽取的第三帧图像的画面内容比较得到第二相似度,…。即抽取的第q帧图像与抽取的第(q+1)帧图像的画面内容比较得到第q相似度,其中,q为大于等于1且小于抽取到的图像的帧数的整数。上述第一预设帧间隔可以是0帧至上述目标视频包括的图像的帧数之间的任意一个整数帧间隔。
如果上述比较得到的相似度的部分或全部相似度中有大于或等于第二预设比例的相似度大于第二预设相似度,则表明即该部分或全部相似度对应的抽取帧图像之前的连续帧图像所组成的视频中的景物在缓慢地变化,可以称这种变化为画面变化速度落入第二变化速度范围内的变化。那么终端可以将该连续帧图像组成的视频分割为一个视频片段,那么该视频片段的画面变化情况即为画面变化速度落入第二变化速度范围内。上述第一预设比例例如可以为70%至100%之间的任意一个比例。上述第二预设相似度例如可以为70%至100%之间的任意一个相似度。具体的第二预设比例和具体的第二预设相似度可以根据具体场景确定,此处不做限制。
在第二种可能的实现方式中,上述终端可以按时间先后顺序每隔第二预设帧间隔抽取一个帧图像,然后将抽取的帧图像逐一与抽取的第一个帧图像的画面内容进行比较,例如将抽取的第二帧图像与抽取的第一帧图像的画面内容比较得到第一个相似度、将抽取的第三帧图像与抽取的第一帧图像的画面内容比较得到第二个相似度等等。即抽取的第j帧图像与抽取的第一帧图像的画面内容比较得到第j个相似度,其中,j为大于等于1且小于等于抽取到的图像的帧数的整数。
如果上述得到的第一个相似度、第二个相似度,…,第j个相似度中某些连续编号(例如第一个相似度、第二个相似度、第三个相似度、第四个相似度等等)的相似度逐渐变小,那么表明该某些连续编号的相似度对应的抽取帧图像之前的连续帧图像所组成的视频中的景物在缓慢地变化,可以称这种变化为画面变化速度落入第二变化速度范围内的变化。那么终端可以将该连续帧图像组成的视频分割为一个视频片段,那么该视频片段的画面变化情况即为画面变化速度落入第二变化速度范围内。其中,该连续帧图像为上述目标视频中包括的一个或多个连续帧图像中的任意一个。
为了便于理解,下面举例说明。还是假设上述目标视频包括100帧图像,然后在这100帧图像中每隔1个帧图像就抽取一个帧图像,最后抽取得到50帧图像,然后将抽取的第二帧图像与抽取的第一帧图像的画面内容比较得到第一个相似度、将抽取的第三帧图像与抽取的第一帧图像的画面内容比较得到第二个相似度等等。即抽取的第j帧图像与抽取的第一帧图像的画面内容比较得到第j个相似度,其中,j为大于等于1且小于等于50。通过分析发现第一个相似度至第十个相似度这十个相似度呈逐渐变小的趋势,例如参见图6,图6为该十个相似度的趋势图。在图6中,纵坐标为相似度,横坐标为按时间先后顺序比较抽取的第i帧图像与抽取的第一帧图像的画面内容得到的相似度的编号,例如编号1对应的相似度为上述第一个相似度,编号2对应的相似度为上述第二个相似度等等。在图6中可以看到,这些相似度呈逐渐变小的趋势,这表明该第一个相似度至第十个相似度这十个相似度对应的抽取帧图像之前的连续帧图像,即上述目标视频的前22帧连续图像所组成的视频中的景物在缓慢地变化,即该前22帧连续图像所组成的视频的画面变化速度落入第二变化速度范围内。那么终端可以将该连续22帧图像组成的视频分割为一个视频片段,那么该视频片段的画面变化情况即为景物在缓慢地变化。该连续22帧的图像组成的视频对应的场景例如可以是一朵花逐渐开放或枯萎的场景。
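第二种实现方式中"相似度逐渐变小"的趋势判断可示意如下(这里以严格单调递减作为"逐渐变小"的判据,是否允许小幅抖动等细节文中未规定,属本文假设):

```python
def is_slow_changing(similarities):
    """各抽取帧与第一帧的相似度序列若逐渐变小(此处取严格单调递减),
    说明景物在缓慢地持续变化,即画面变化速度落入第二变化速度范围内。"""
    return len(similarities) >= 2 and all(
        a > b for a, b in zip(similarities, similarities[1:])
    )


# 类似图6所示的趋势:相似度持续下降,对应花朵逐渐开放等缓慢变化的场景
print(is_slow_changing([0.95, 0.9, 0.82, 0.7, 0.6]))  # True
print(is_slow_changing([0.95, 0.9, 0.93]))            # False:中途回升,不是持续下降
```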
在本申请实施例中,上述第一变化速度范围中的任意一个速度大于上述第二变化速度范围内的任意一个速度。
方式三:
上述终端对提取到的该目标视频的拍摄参数进行分析处理,以对该目标视频进行分段,并明确每一个视频片段包括的所述目标信息。
具体的,上述提取到的目标视频的拍摄参数可以是拍摄视频时使用的焦距,该使用的焦距可以包括一个或多个。终端可以根据焦距的不同来将目标视频划分为一个或多个视频片段。例如,可以将使用1倍及以下焦距拍摄的视频画面组成的视频分割为一个视频片段,则该视频片段的目标信息可以确定为拍摄焦距使用的是1倍及以下焦距;和/或可以将使用3倍及以上焦距拍摄的视频画面组成的视频分割为一个视频片段,则该视频片段的目标信息可以确定为拍摄焦距使用的是3倍及以上焦距。
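方式三按焦距分段的逻辑可示意如下(1 倍、3 倍两个门限取自文中示例,1 倍与 3 倍之间的中间档文中未规定处理方式,属本文假设):

```python
from itertools import groupby


def zoom_category(zoom: float) -> str:
    """按拍摄焦距把帧归入三类:小于等于 1 倍(远焦/广角)、
    大于等于 3 倍(近焦特写)、其余为中间档。"""
    if zoom <= 1.0:
        return "wide"
    if zoom >= 3.0:
        return "close_up"
    return "middle"


def split_by_zoom(frame_zooms):
    """把焦距类别相同的连续帧归并为一个片段,返回 (类别, 帧数) 列表。"""
    return [(c, len(list(g)))
            for c, g in groupby(zoom_category(z) for z in frame_zooms)]


# 前 3 帧使用 1 倍及以下焦距,后 2 帧使用 3 倍及以上焦距,分为两个片段
print(split_by_zoom([0.8, 1.0, 1.0, 3.5, 4.0]))  # [('wide', 3), ('close_up', 2)]
```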
方式四:
上述终端将上述目标视频分为一个或多个视频片段,视频片段的分段方式可以与上述方式一、方式二或方式三中的分段方式相同。分段之后,上述终端分别分析该一个或多个视频片段得到该一个或多个视频片段各自对应的拍摄时长即视频时长。或者,上述终端在将视频分段时即将每个视频片段的拍摄时长保存到存储器中,在需要根据视频片段的拍摄时长信息匹配视频类型或播放速率时,可以从存储器中获取对应视频片段的拍摄时长信息。
S402、上述终端根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率。
具体地,可以根据该目标信息在预设的特效映射关系中匹配出该第一视频片段的播放速率,该特效映射关系定义了目标信息与多个播放速率的对应关系,该第一播放速率为根据上述目标信息从特效映射关系中匹配得到的播放速率。
具体的,可以参见表1,表1为根据不同的情况得到第一视频片段的目标信息与播放速率的映射关系表。在表1中可以看到:
通过表1中方式一获取到的该第一视频片段的目标信息,如果该目标信息表征的是露珠滴落、涓涓细流、瀑布、下雨、下雪、蝴蝶飞舞、蜜蜂采蜜等场景,那么该第一视频片段对应的播放速率为慢动作的播放速率,如果该目标信息表征的是车水马龙、风云变幻、星空、极光变化等场景,那么该第一视频片段对应的播放速率为快动作的播放速率。
具体的,慢动作的播放速率可以是单位时间内(例如可以是1秒)播放的帧图像的数量小于第一预设帧数的速率;快动作的播放速率可以是单位时间内(例如可以是1秒)播放的帧图像的数量大于第二预设帧数的速率。可选的,第一预设帧数可以是小于或等于24帧的任意一个帧数,第二预设帧数可以是大于或等于24帧的任意一个帧数。
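按上述定义,可以用单位时间播放的帧数与预设帧数的比较来区分快/慢动作,示意如下(两个门限均取文中示例的 24 帧,等于门限时视为正常速度,属本文假设):

```python
def motion_kind(fps: float, slow_max: float = 24.0, fast_min: float = 24.0) -> str:
    """单位时间播放帧数小于第一预设帧数为慢动作,
    大于第二预设帧数为快动作,等于门限时视为正常速度。"""
    if fps < slow_max:
        return "slow"
    if fps > fast_min:
        return "fast"
    return "normal"


print(motion_kind(12))  # slow:每秒仅播放 12 帧
print(motion_kind(48))  # fast:每秒播放 48 帧
```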
通过表1中方式二获取到的该第一视频片段的目标信息,如果该目标信息表征的是该第一视频片段中的画面变化速度落入第一变化速度范围内,那么该第一视频片段对应的播放速率为慢动作对应的播放速率,如果该目标信息表征的是该第一视频片段中的画面变化速度落入第二变化速度范围内,那么该第一视频片段对应的播放速率为快动作的播放速率。
通过表1中方式三获取到的该第一视频片段的目标信息,如果该目标信息表征的是该第一视频片段的拍摄焦距大于或等于3倍焦距的情况,那么该第一视频片段对应的播放速率为慢动作的播放速率,如果该目标信息表征的是该第一视频片段的拍摄焦距小于或等于1倍焦距的情况,那么该第一视频片段对应的播放速率为快动作的播放速率。
通过表1中方式四获取到的该第一视频片段的目标信息,如果该目标信息表征的是该第一视频片段的拍摄时长小于10秒的情况,那么该第一视频片段对应的播放速率为慢动作的播放速率,如果该目标信息表征的是该第一视频片段的拍摄时长大于10分钟的情况,那么该第一视频片段对应的播放速率为快动作的播放速率。
表1
方式一(画面场景):露珠滴落、涓涓细流、瀑布、下雨、下雪、蝴蝶飞舞、蜜蜂采蜜等场景对应慢动作的播放速率;车水马龙、风云变幻、星空、极光变化等场景对应快动作的播放速率。
方式二(画面变化情况):画面变化速度落入第一变化速度范围内对应慢动作的播放速率;画面变化速度落入第二变化速度范围内对应快动作的播放速率。
方式三(拍摄焦距):大于或等于3倍焦距对应慢动作的播放速率;小于或等于1倍焦距对应快动作的播放速率。
方式四(拍摄时长):小于10秒对应慢动作的播放速率;大于10分钟对应快动作的播放速率。
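表1所述的特效映射关系,本质上是"目标信息对应播放速率"的查表操作。下面是一个示意性的 Python 实现(场景关键词取自文中示例,速率数值及未命中时保持原速的处理为本文假设):

```python
SCENE_RATE = {
    # 画面场景 → 播放速率(小于 1 为慢动作,大于 1 为快动作)
    "露珠滴落": 0.5, "涓涓细流": 0.5, "瀑布": 0.5, "下雨": 0.5,
    "下雪": 0.5, "蝴蝶飞舞": 0.5, "蜜蜂采蜜": 0.5,
    "车水马龙": 2.0, "风云变幻": 2.0, "星空": 2.0, "极光变化": 2.0,
}


def match_rate(scene: str, default: float = 1.0) -> float:
    """从预设的特效映射关系中匹配场景对应的播放速率,
    未命中映射关系时保持原速播放。"""
    return SCENE_RATE.get(scene, default)


print(match_rate("瀑布"))      # 0.5,慢动作
print(match_rate("车水马龙"))  # 2.0,快动作
```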
当然,也可以基于机器学习的原理将目标信息输入到机器学习模型中,由机器学习模型输出该目标信息对应的第一播放速率。
或者,也可以通过数学模型计算获得第一播放速率,该模型的输入为所述第一视频片段的一种或多种目标信息,输出为第一播放速率。
例如,如果输入到上述数学模型中或机器学习模型中的是表征露珠滴落、涓涓细流、瀑布、下雨、下雪、蝴蝶飞舞、蜜蜂采蜜等场景的信息,那么该数学模型或机器学习模型匹配输出的对应的播放速率为慢动作的播放速率;如果输入到上述数学模型中或机器学习模型中的是表征车水马龙、风云变幻、星空、极光变化等场景的信息,那么该数学模型或机器学习模型匹配输出的对应的播放速率为快动作的播放速率。
例如,如果输入到上述数学模型中或机器学习模型中的是表征该第一视频片段中的画面变化速度落入第一变化速度范围内的信息,那么该数学模型或机器学习模型匹配输出的对应的播放速率为慢动作的播放速率;如果输入到上述数学模型中或机器学习模型中的是表征该第一视频片段中的画面变化速度落入第二变化速度范围内的信息,那么该数学模型或机器学习模型匹配输出的对应的播放速率为快动作的播放速率。
例如,如果输入到上述数学模型中或机器学习模型中的是表征该第一视频片段的拍摄焦距大于或等于3倍焦距的信息,那么该数学模型或机器学习模型匹配输出的对应的播放速率为慢动作的播放速率;如果输入到上述数学模型中或机器学习模型中的是表征该第一视频片段的拍摄焦距小于或等于1倍焦距的信息,那么该数学模型或机器学习模型匹配输出的对应的播放速率为快动作的播放速率。
例如,如果输入到上述数学模型中或机器学习模型中的是表征该第一视频片段的拍摄时长小于10秒的信息,那么该数学模型或机器学习模型匹配输出的对应的播放速率为慢动作的播放速率;如果输入到上述数学模型中或机器学习模型中的是表征该第一视频片段的拍摄时长大于10分钟的信息,那么该数学模型或机器学习模型匹配输出的对应的播放速率为快动作的播放速率。
在其中一种可能的实施方式中,上述终端根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率除了上述表1中对应描述的方式以及上述利用数学模型或机器学习模型的方式之外,还可以包括如下的方式:
上述终端根据所述第一视频片段的目标信息确定所述第一视频片段的第一视频类型;然后,从预设的特效映射关系中匹配所述第一视频片段的第一视频类型对应的第一播放速率,其中,所述特效映射关系定义了多个视频类型与多个播放速率的对应关系。
在具体的实施例中,在上述步骤401中获取到该第一视频片段的目标信息之后,该终端可以根据该目标信息确定第一视频片段的视频类型,例如可以根据这些目标信息对应的特征标记出对应的视频类型,然后再从特效映射关系中根据视频类型匹配出对应的播放速率,该特效映射关系定义了多个视频类型与多个播放速率的对应关系,上述第一视频类型为根据上述第一视频片段的目标信息确定出来的对应的视频类型。具体的,可以参见表2,表2为根据不同的目标信息确定出的视频类型与播放速率的映射关系表。在表2中可以看到:
如果该目标信息表征的是露珠滴落、涓涓细流、瀑布等场景,则根据该目标信息确定的第一视频片段的视频类型为水流,根据水流这一视频类型在表2中匹配得到的第一播放速率为慢动作的播放速率。
如果该目标信息表征的是下雨、下雪等场景,则根据该目标信息确定的第一视频片段的视频类型为雨雪天气,根据雨雪天气这一视频类型在表2中匹配得到的第一播放速率为慢动作的播放速率。
如果该目标信息表征的是蝴蝶飞舞、蜜蜂采蜜等场景,则根据该目标信息确定的第一视频片段的视频类型为动物特写,根据动物特写这一视频类型在表2中匹配得到的第一播放速率为慢动作的播放速率。
如果该目标信息表征的是车水马龙等场景,则根据该目标信息确定的第一视频片段的视频类型为街道,根据街道这一视频类型在表2中匹配得到的第一播放速率为快动作的播放速率。
如果该目标信息表征的是风云变幻、星空、极光变化等场景,则根据该目标信息确定的第一视频片段的视频类型为自然景象,根据自然景象这一视频类型在表2中匹配得到的第一播放速率为快动作的播放速率。
如果该目标信息表征的是该第一视频片段中的画面变化速度落入第一变化速度范围内,则根据该目标信息确定的第一视频片段的视频类型为画面内容变化快,根据画面内容变化快这一视频类型在表2中匹配得到的第一播放速率为慢动作的播放速率。
表2
水流(露珠滴落、涓涓细流、瀑布)对应慢动作的播放速率。
雨雪天气(下雨、下雪)对应慢动作的播放速率。
动物特写(蝴蝶飞舞、蜜蜂采蜜)对应慢动作的播放速率。
街道(车水马龙)对应快动作的播放速率。
自然景象(风云变幻、星空、极光变化)对应快动作的播放速率。
画面内容变化快(画面变化速度落入第一变化速度范围内)对应慢动作的播放速率。
画面内容变化慢(画面变化速度落入第二变化速度范围内)对应快动作的播放速率。
近焦特写(拍摄焦距大于或等于3倍焦距)对应慢动作的播放速率。
远焦或广角(拍摄焦距小于或等于1倍焦距)对应快动作的播放速率。
拍摄时长短(拍摄时长小于10秒)对应慢动作的播放速率。
拍摄时长长(拍摄时长大于10分钟)对应快动作的播放速率。
如果该目标信息表征的是该第一视频片段中的画面变化速度落入第二变化速度范围内,则根据该目标信息确定的第一视频片段的视频类型为画面内容变化慢,根据画面内容变化慢这一视频类型在表2中匹配得到的第一播放速率为快动作的播放速率。
如果该目标信息表征的是该第一视频片段的拍摄焦距大于或等于3倍焦距的情况,则根据该目标信息确定的第一视频片段的视频类型为近焦特写,根据近焦特写这一视频类型在表2中匹配得到的第一播放速率为慢动作的播放速率;
如果该目标信息表征的是该第一视频片段的拍摄焦距小于或等于1倍焦距的情况,则根据该目标信息确定的第一视频片段的视频类型为远焦或广角,根据远焦或广角这一视频类型在表2中匹配得到的第一播放速率为快动作的播放速率。
如果该目标信息表征的是该第一视频片段的拍摄时长小于10秒钟的情况,则根据该目标信息确定的第一视频片段的视频类型为拍摄时长短,根据拍摄时长短这一视频类型在表2中匹配得到的第一播放速率为慢动作的播放速率。
如果该目标信息表征的是该第一视频片段的拍摄时长大于10分钟的情况,则根据该目标信息确定的第一视频片段的视频类型为拍摄时长长,根据拍摄时长长这一视频类型在表2中匹配得到的第一播放速率为快动作的播放速率。
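表2描述的是"目标信息先映射为视频类型、视频类型再映射为播放速率"的两级匹配,可用 Python 草图示意如下(类型名取自表2,场景关键词与速率标记为示例假设):

```python
SCENE_TO_TYPE = {
    "露珠滴落": "水流", "涓涓细流": "水流", "瀑布": "水流",
    "下雨": "雨雪天气", "下雪": "雨雪天气",
    "蝴蝶飞舞": "动物特写", "蜜蜂采蜜": "动物特写",
    "车水马龙": "街道",
    "风云变幻": "自然景象", "星空": "自然景象", "极光变化": "自然景象",
}
TYPE_TO_RATE = {
    "水流": "slow", "雨雪天气": "slow", "动物特写": "slow",
    "街道": "fast", "自然景象": "fast",
}


def rate_for_scene(scene: str):
    """先根据目标信息确定第一视频类型,
    再从特效映射关系中匹配该视频类型对应的第一播放速率。"""
    video_type = SCENE_TO_TYPE.get(scene)
    return TYPE_TO_RATE.get(video_type)


print(rate_for_scene("下雪"))  # slow:下雪属于雨雪天气,匹配慢动作
print(rate_for_scene("星空"))  # fast:星空属于自然景象,匹配快动作
```

与表1的单级查表相比,两级映射把"分类"与"定速"解耦,新增场景时只需补充第一级映射。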
S403、上述终端将所述第一视频片段的播放速率调整为所述第一播放速率。
具体的,上述终端确定出上述第一视频片段的目标信息或类型后,根据该目标信息或类型从特效映射关系中匹配出对应的播放速率,然后将该第一视频片段调整为该对应的播放速率。后续终端就可以根据调整后的播放速度来进行该视频片段的播放,例如,一段视频调整前的播放速率是每秒钟播放24帧图像,调整后的播放速率是每秒钟播放48帧图像,即将该视频片段的播放速度加快为原来的2倍。那么当用户需要播放该视频片段时,终端可以按照每秒钟播放48帧图像的播放速度来进行播放。
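文中"每秒24帧提速为每秒48帧"的例子,可以换算为播放时长的变化,示意如下(函数名为本文假设):

```python
def adjusted_play_seconds(frame_count: int, base_fps: float, speed: float) -> float:
    """以 speed 倍速播放时,实际播放帧率为 base_fps * speed,
    播放时长相应缩短为原来的 1/speed。"""
    return frame_count / (base_fps * speed)


# 240 帧素材按每秒 24 帧正常播放需 10 秒;
# 调整为 2 倍速(每秒 48 帧)后只需 5 秒。
print(adjusted_play_seconds(240, 24, 1.0))  # 10.0
print(adjusted_play_seconds(240, 24, 2.0))  # 5.0
```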
综上所述,相比于现有技术需要人为编辑视频的播放速率,对艺术功底和编辑能力有严格的要求,本申请实施例不需要用户具备艺术功底和编辑能力,设备根据拍摄的视频中的内容(如视频呈现出的场景),或者拍摄视频时的一些参数(如焦距),自动确定视频的播放速率,然后智能调整视频的播放速度,可以简单快速地获得节奏丰富、分享价值高的视频作品,编辑效率大大提高,同时适用于更多的用户。
在其中一种可能的实施方式之中,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,包括:当所述第一视频片段为拍摄焦距在第一焦距范围内的视频类型时,确定所述第一播放速率为慢动作的播放速率;当所述第一视频片段为拍摄焦距在第二焦距范围内的视频类型时,确定所述第一播放速率为快动作的播放速率;其中,所述第一焦距范围内的任意焦距大于所述第二焦距范围内的任意焦距。
具体的,上述第一焦距范围例如可以是大于或等于3倍焦距,上述第二焦距范围例如可以是小于或等于1倍焦距。具体的焦距范围可以根据具体的情况确定,此处不做限制。本实施例的具体实现可以参见表2对应的描述,此处不再赘述。
在其中一种可能的实施方式中,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,包括:当所述第一视频片段为拍摄时长在第一预设时长范围内的视频类型时,确定所述第一播放速率为慢动作的播放速率;当所述第一视频片段为拍摄时长在第二预设时长范围内的视频类型时,确定所述第一播放速率为快动作的播放速率;其中,所述第一预设时长范围内的任意一个时长小于所述第二预设时长范围内的任意一个时长。
具体的,上述第一预设时长范围例如可以是视频片段的拍摄时长小于10秒,所述上述第二预设时长范围例如可以是视频片段的拍摄时长大于10分钟。具体的预设时长范围可以根据具体的情况确定,此处不做限制。本实施例的具体实现可以参见表2对应的描述,此处不再赘述。
在其中一种可能的实施方式中,上述第一视频片段的目标信息包括如下信息中的至少两种信息:上述第一视频片段中画面场景的信息、上述第一视频片段的拍摄焦距、上述第一视频片段的拍摄时长和上述第一视频片段中的画面变化情况;上述根据上述第一视频片段的目标信息确定上述第一视频片段的第一播放速率,包括:根据上述至少两种信息确定上述第一视频片段的至少两个播放速率结果,其中,每个播放速率结果为基于上述至少两种信息中的一种信息确定得到;根据上述至少两个播放速率结果确定上述第一视频片段的第一播放速率。
在具体的实施例中,上述终端获取到上述第一视频片段中画面场景的信息、上述第一视频片段的拍摄焦距、上述第一视频片段的拍摄时长和上述第一视频片段中的画面变化情况这四种信息中的至少两种信息之后,分别根据每一种信息确定出一种该第一视频片段的播放速率结果,即可以确定出至少两种该第一视频片段的播放速率结果,然后综合分析该至少两个播放速率结果以确定出一个播放速率结果对应的播放速率作为上述第一视频片段的第一播放速率。
在其中一种可能的实施方式中,所述第一播放速率为所述至少两个播放速率结果表征的播放速率中出现次数最多的播放速率。
为了便于理解本申请实施例,下面举例说明。
例如,假设上述终端获取到的是上述第一视频片段中画面场景的信息和上述第一视频片段的拍摄焦距这两种信息,假设获取的该画面场景的信息表征的是涓涓细流的场景,那么根据该场景在表1或表2中可以匹配出的播放速率为慢动作的播放速率,假设获取的拍摄焦距为大于或等于3倍焦距,那么根据该信息在表1或表2中可以匹配出的播放速率为慢动作的播放速率,由于根据两种信息确定出的播放速率都是慢动作的播放速率,那么综合分析可以确定出该第一视频片段的第一播放速率为慢动作的播放速率。
例如,假设上述终端获取到的是上述第一视频片段的拍摄焦距、上述第一视频片段的拍摄时长和上述第一视频片段中的画面变化情况这三种信息。假设获取的上述第一视频片段的拍摄焦距为大于或等于3倍焦距,那么根据该信息在表1或表2中可以匹配出的播放速率为慢动作的播放速率。假设获取的上述第一视频片段的拍摄时长表征的是该第一视频片段的拍摄时长大于10分钟的情况,那么根据该信息在表1或表2中可以匹配出的播放速率为快动作的播放速率。假设获取的上述第一视频片段中的画面变化情况表征的是该第一视频片段中的景物在缓慢地变化的情况,那么根据该信息在表1或表2中可以匹配出的播放速率为快动作的播放速率。综合分析,三种信息分别确定出的播放速率结果,有两个结果表征的是快动作的播放速率,只有一个结果表征的是慢动作的播放速率,那么最终可以确定出该第一视频片段的第一播放速率为快动作的播放速率。
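"出现次数最多的播放速率"即对多路播放速率结果做多数表决,可用 Python 的 collections.Counter 示意(平票时 most_common 按先出现者优先,文中未规定平票的处理方式,属本文假设):

```python
from collections import Counter


def vote_rate(results):
    """在基于多种目标信息分别确定得到的播放速率结果中,
    选出现次数最多的一个作为第一播放速率。"""
    return Counter(results).most_common(1)[0][0]


# 上文示例:焦距匹配为慢动作,拍摄时长与画面变化均匹配为快动作,
# 快动作出现两次,故最终取快动作
print(vote_rate(["slow", "fast", "fast"]))  # fast
```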
上述举例只是示例性地介绍说明,还存在其它可能的实施例,此处不做限制。
在其中一种可能的实施方式中,上述视频特效生成方法还包括:获取目标视频中的第二视频片段的目标信息,所述目标信息包括所述第二视频片段的内容特征和所述第二视频片段的拍摄参数中的一项或多项;根据所述第二视频片段的目标信息确定所述第二视频片段的第二播放速率;将所述第二视频片段的播放速率调整为所述第二播放速率。
在具体实施例中,上述目标视频可以包括多个视频片段,上述终端可以根据获取的每一个视频片段的目标信息确定对应视频片段的播放速率以对应地调整视频片段的播放速率。本申请实施例具体的实现可以参见上述图4所述的方法及其可能实现的实施方式中的对应的描述,此处不再赘述。本申请实施例表明一个视频包括的多个视频片段可以分别调整其播放速率,有利于进一步丰富视频的播放节奏。
下面以上述终端为手机为例,示例性地介绍应用上述方法实现视频特效过程中的手机用户界面(user interface,UI)示意图,以方便更好地理解本申请实施例的技术方案。
本申请的说明书和权利要求书及附图中的术语“用户界面”,是应用程序或操作系统与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。用户界面常用的表现形式是图形用户界面(graphic user interface,GUI),是指采用图形方式显示的与计算机操作相关的用户界面。它可以是在手机的显示屏中显示的一个图标、窗口、控件等界面元素,其中控件可以包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏、导航栏、Widget等可视的界面元素。
图7示例性示出了手机上的用于展示手机安装的应用程序的示例性用户界面71。
用户界面71可包括:状态栏701,日历指示符702,天气指示符703,具有常用应用程序图标的托盘704,导航栏705,位置信息指示符706以及其他应用程序图标。其中:
状态栏701可包括:运营商名称(例如“中国移动”)701A、无线高保真(wireless fidelity,Wi-Fi)信号的一个或多个信号强度指示符701B、移动通信信号(又可称为蜂窝信号)的一个或多个信号强度指示符701C、电池状态指示符701D、时间指示符701E。
日历指示符702可用于指示当前时间,例如日期、星期几、时分信息等。
天气指示符703可用于指示天气类型,例如多云转晴、小雨等,还可以用于指示气温等信息。
具有常用应用程序图标的托盘704可展示:电话图标704A、通讯录图标704B、短信图标704C、相机图标704D。
导航栏705可包括:返回键705A、主显示屏键705B、多任务键705C等系统导航键。当检测到用户点击返回键705A时,手机可显示当前页面的上一个页面。当检测到用户点击主显示屏键705B时,手机可显示主界面。当检测到用户点击多任务键705C时,手机可显示用户最近打开的任务。各导航键的命名还可以为其他,本申请对此不做限制。不限于虚拟按键,导航栏705中的各导航键也可以实现为物理按键。
位置信息指示符706可用于指示当前所在的城市和/或所在城市的区域等信息。
其他应用程序图标可例如:邮箱的图标707、手机管家的图标708、设置的图标709、图库的图标710等等。
用户界面71还可包括页面指示符711。其他应用程序图标可分布在多个页面,页面指示符711可用于指示用户当前浏览的是哪一个页面中的应用程序。用户可以左右滑动其他应用程序图标的区域,来浏览其他页面中的应用程序图标。
在一些实施例中,图7示例性所示的用户界面71可以为手机的主界面(Home screen)。
在其他一些实施例中,手机还可以包括实体的主显示屏键。该主显示屏键可用于接收用户的指令,将当前显示的UI返回到主界面,这样可以方便用户随时查看主显示屏。上述指令具体可以是用户单次按下主显示屏键的操作指令,也可以是用户在短时间内连续两次按下主显示屏键的操作指令,还可以是用户在预定时间内长按主显示屏键的操作指令。在本申请其他一些实施例中,主显示屏键还可以集成指纹识别器,以便用于在按下主显示屏键的时候,随之进行指纹采集和识别。
可以理解的是,图7仅仅示例性示出了手机上的用户界面,不应构成对本申请实施例的限定。
上述手机响应于对用户界面71中的相机图标704D的点击或触摸操作,该手机的用户界面显示为拍照的用户界面,然后,在该用户界面,手机可以响应于对录像控件的点击或触摸操作进入到录像模式,录像模式的用户界面可以如图8所示。
图8中的用户界面包括闪光灯控件801、光圈控件802、前后摄像头转换控件803、录像画面804、相机控件805、录像控件806、图库控件807,其中:
闪光灯控件801可以用于控制闪光灯的开启和关闭;
光圈控件802可以用于控制光圈的开启和关闭;
前后摄像头转换控件803可以用于调整拍照或录像的摄像头为前置摄像头还是后置摄像头;
录像画面804可以用于显示摄像头即时拍到的画面内容;
相机控件805可以用于在录像模式的时候转换为拍照模式,还用于在拍照模式的时候启动摄像头进行拍摄;
录像控件806可以用于在拍照模式的时候转换为录像模式,还用于在录像模式的时候启动摄像头进行录像以及停止摄像头的摄像;
图库控件807可以用于查看已经拍摄到的照片和录制好的视频。
在图8所示的用户界面中,上述手机响应于对录像控件806的点击或触摸操作启动摄像头进行录像。
然后上述手机在录像的过程中再次响应于对录像控件806的点击或触摸操作停止录像操作,从而完成了一个视频的录制。
在视频录制完成后,上述手机的用户界面可以显示如图9所示的用户界面。在图9所示的用户界面中包括一个提示小窗口901,该提示小窗口901主要用于提示手机已经完成对该录制好的视频的播放节奏进行智能调优,并询问用户是否接受该智能调优的结果,此外小窗口901中还包括一个接受按钮903和一个取消按钮902。
手机可以响应于对该接受按钮903的点击或触摸操作保存该智能调优后的视频,此外,还可以同时保存智能调优前的视频。手机也可以响应于对取消按钮902的点击或触摸操作,取消本次智能调优的结果,仅保存录制好的原始的视频。
在图9所示的用户界面中还包括一个播放控件904,手机可以相应于对该播放控件的点击或触摸操作播放智能调优后的视频。
下面介绍另一种调节视频播放速率的用户界面实现的实施例。
参见图10,图10可以是在图7所示的界面中,手机响应于对图库图标710的点击或触摸操作后显示的用户界面图,该用户界面图包括主题名称1001,该主题名称例如可以是图库,该图库中可以包括视频和图片的缩略图,具体的,视频缩略图上还包括播放控件1003用于标记该缩略图为视频缩略图,而图片的缩略图则没有。手机可以响应于对这些缩略图的点击或触摸操作在显示屏中显示真实大小的图片或者显示视频的播放界面。
例如,手机可以响应于对视频缩略图1002的点击或触摸操作,在显示屏中显示该视频的播放界面,如图11所示。在图11所示的界面中,手机可以响应于对播放控件1106的点击或触摸操作播放该视频。
此外,图11所示的界面中还可以包括编辑控件1102、删除控件1103、收藏控件1104和分享控件1105。其中,编辑控件1102可以用于编辑界面显示的视频,删除控件1103可以用于删除界面显示的视频,收藏控件1104可以用于收藏界面显示的视频,分享控件1105可以用于分享界面显示的视频。
手机响应于对编辑控件1102的点击或触摸操作,可以在显示屏中显示视频编辑的界面,例如图12所示的界面。
在图12所示的界面中包括视频播放速度条1201,该视频播放速度条1201包括多个播放速度调节点12011,每一个调节点对应一个播放速率,这些调节点从慢速度播放到快速度播放逐渐增加播放速率。
图12所示的界面中还包括视频片段框选区域1202,其中的视频片段是根据上述视频特效生成方法的方法实施例中对应的方法智能分割得到的。在该视频片段框选区域1202可以看到,手机将视频分为了两个视频片段,分别为片段1和片段2。当然,在具体的实施例中,视频可以被分为一个或多个视频片段显示在视频片段框选区域1202,具体的视频片段划分可以根据具体的情况来决定,本申请实施例不做限制。
在图12所示的界面中,手机已经对视频片段框选区域1202中的两个视频片段智能调节好其播放速率,可以参见图12,视频片段框选区域1202中的片段2被选中,视频播放速度条1201则对应地标记显示该片段2的智能调整好的播放速率12012。当然,响应于对视频片段框选区域1202中的片段1的点击或触摸操作,手机也会在显示屏中标记显示该片段1的智能调整好的播放速率。
此外,图12所示界面还包括视频预览区域1203,响应于对播放控件1204的点击或触摸操作,手机可以在视频预览区域1203中根据视频播放速度条1201中被选中的播放速率播放被选中的视频片段。当然,响应于对播放控件1204的点击或触摸操作,手机也可以在视频预览区域1203中播放智能调优后的完整的视频。
当然,在图12所示的界面中,用户也可以手动选择对应的视频片段,用户选择完视频片段之后,视频播放速度条1201上会对应显示一个智能调优后的播放速率,用户也可以对选择的视频片段手动调整视频播放速度条1201以调整选择的视频片段的播放速率。例如可以参见图13和图14。在图13中,用户可以手动选择视频片段框选区域1202中的视频片段,选择好之后,可以看到视频播放速度条1201上会对应显示一个智能调优后的播放速率。然后可以参见图14,用户也可以在视频播放速度条1201上选择一个播放速率作为已选择的视频片段的播放速率。
在其中一种可能的实施方式中,本申请实施例还可以应用于折叠屏手机。例如可以参见图15,图15为在未展开的折叠屏上进行视频编辑的用户界面,在折叠屏手机的显示屏未展开时,其编辑模式与普通手机的效果一样。
但是,当展开折叠屏手机后,一半显示屏展示视频的画面内容,另一半显示屏则显示图12中的视频播放速度条1201和视频片段框选区域1202,例如可以参见图16。
在图16中,每个视频片段可以单独对应一个视频播放速度条1201,这样可以清楚显示每个视频片段对应的播放速率,以提高用户体验。
本申请实施例的具体操作可以参见图12至图14所述的具体操作描述,此处不再赘述。
本申请实施例提供一种芯片系统,该芯片系统包括至少一个处理器、存储器和接口电路,该存储器、该接口电路和该至少一个处理器通过线路互联,该存储器中存储有计算机程序;该计算机程序被该处理器执行时,能够实现图4所示方法实施例及其可能实现的方法实施例。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机程序,当该计算机程序由处理器运行时,能够实现图4所示方法实施例及其可能实现的方法实施例。
本申请实施例还提供一种计算机程序产品,当该计算机程序产品在处理器上运行时,能够实现图4所示方法实施例及其可能实现的方法实施例。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。

Claims (31)

  1. 一种视频特效生成方法,其特征在于,包括:
    获取目标视频中的第一视频片段的目标信息,所述目标信息包括所述第一视频片段的内容特征和所述第一视频片段的拍摄参数中的一项或多项;
    根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率;
    将所述第一视频片段的播放速率调整为所述第一播放速率。
  2. 根据权利要求1所述方法,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,包括:
    根据所述第一视频片段的目标信息确定所述第一视频片段的第一视频类型;
    从预设的特效映射关系中匹配所述第一视频片段的第一视频类型对应的第一播放速率,其中,所述特效映射关系定义了多个视频类型与多个播放速率的对应关系。
  3. 根据权利要求1或2所述方法,其特征在于,所述第一视频片段的目标信息包括所述第一视频片段的内容特征,所述第一视频片段的内容特征包含所述第一视频片段中画面场景的信息。
  4. 根据权利要求1-3任意一项所述方法,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,包括:
    当所述第一视频片段为水流、雨雪天气或动物特写的视频类型时,确定所述第一播放速率为慢动作的播放速率;
    当所述第一视频片段为街道或自然景象的视频类型时,确定所述第一播放速率为快动作的播放速率。
  5. 根据权利要求1或2所述方法,其特征在于,所述第一视频片段的目标信息包括所述第一视频片段的拍摄参数,所述第一视频片段的拍摄参数包含所述第一视频片段的拍摄焦距。
  6. 根据权利要求1-5任意一项所述方法,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,包括:
    当所述第一视频片段为拍摄焦距在第一焦距范围内的视频类型时,确定所述第一播放速率为慢动作的播放速率;
    当所述第一视频片段为拍摄焦距在第二焦距范围内的视频类型时,确定所述第一播放速率为快动作的播放速率。
  7. 根据权利要求1或2所述方法,其特征在于,所述第一视频片段的目标信息包括所述第一视频片段的内容特征,所述第一视频片段的内容特征包含所述第一视频片段的拍摄时长。
  8. 根据权利要求1-7任意一项所述方法,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,包括:
    当所述第一视频片段为拍摄时长在第一预设时长范围内的视频类型时,确定所述第一播放速率为慢动作的播放速率;
    当所述第一视频片段为拍摄时长在第二预设时长范围内的视频类型时,确定所述第一播放速率为快动作的播放速率。
  9. 根据权利要求1或2所述方法,其特征在于,所述第一视频片段的目标信息包括所述第一视频片段的内容特征,所述第一视频片段的内容特征包含所述第一视频片段中的画面变化情况。
  10. 根据权利要求1-9任意一项所述方法,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,包括:
    当所述第一视频片段为画面变化速度落入第一变化速度范围内的视频类型时,确定所述第一播放速率为慢动作的播放速率;
    当所述第一视频片段为画面变化速度落入第二变化速度范围内的视频类型时,确定所述第一播放速率为快动作的播放速率。
  11. 根据权利要求1所述方法,其特征在于,所述第一视频片段的目标信息包括如下信息中的至少两种信息:所述第一视频片段中画面场景的信息、所述第一视频片段的拍摄焦距、所述第一视频片段的拍摄时长和所述第一视频片段中的画面变化情况;
    所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,包括:
    根据所述至少两种信息确定所述第一视频片段的至少两个播放速率结果,其中,每个播放速率结果为基于所述至少两种信息中的一种信息确定得到;
    根据所述至少两个播放速率结果确定所述第一视频片段的第一播放速率。
  12. 根据权利要求11所述方法,其特征在于,所述第一播放速率为所述至少两个播放速率结果表征的播放速率中出现次数最多的播放速率。
  13. 根据权利要求1至12任一项所述方法,其特征在于,所述方法还包括:
    获取目标视频中的第二视频片段的目标信息,所述目标信息包括所述第二视频片段的内容特征和所述第二视频片段的拍摄参数中的一项或多项;
    根据所述第二视频片段的目标信息确定所述第二视频片段的第二播放速率;
    将所述第二视频片段的播放速率调整为所述第二播放速率。
  14. 根据权利要求1至13任一项所述方法,其特征在于,所述获取目标视频中的第一视频片段的目标信息,包括:
    在拍摄所述目标视频的过程中获取所述目标视频中的第一视频片段的目标信息。
  15. 一种终端,其特征在于,包括处理器和存储器,所述存储器存储有计算机程序,所述处理器用于调用所述计算机程序来执行如下操作:
    获取目标视频中的第一视频片段的目标信息,所述目标信息包括所述第一视频片段的内容特征和所述第一视频片段的拍摄参数中的一项或多项;
    根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率;
    将所述第一视频片段的播放速率调整为所述第一播放速率。
  16. 根据权利要求15所述终端,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,具体为:
    根据所述第一视频片段的目标信息确定所述第一视频片段的第一视频类型;
    从预设的特效映射关系中匹配所述第一视频片段的第一视频类型对应的第一播放速率,其中,所述特效映射关系定义了多个视频类型与多个播放速率的对应关系。
  17. 根据权利要求15或16所述终端,其特征在于,所述第一视频片段的目标信息包括所述第一视频片段的内容特征,所述第一视频片段的内容特征包含所述第一视频片段中画面场景的信息。
  18. 根据权利要求15-17任意一项所述终端,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,具体为:
    当所述第一视频片段为水流、雨雪天气或动物特写的视频类型时,确定所述第一播放速率为慢动作的播放速率;
    当所述第一视频片段为街道或自然景象的视频类型时,确定所述第一播放速率为快动作的播放速率。
  19. 根据权利要求15或16所述终端,其特征在于,所述第一视频片段的目标信息包括所述第一视频片段的拍摄参数,所述第一视频片段的拍摄参数包含所述第一视频片段的拍摄焦距。
  20. 根据权利要求15-19任意一项所述终端,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,具体为:
    当所述第一视频片段为拍摄焦距在第一焦距范围内的视频类型时,确定所述第一播放速率为慢动作的播放速率;
    当所述第一视频片段为拍摄焦距在第二焦距范围内的视频类型时,确定所述第一播放速率为快动作的播放速率。
  21. 根据权利要求15或16所述终端,其特征在于,所述第一视频片段的目标信息包括所述第一视频片段的内容特征,所述第一视频片段的内容特征包含所述第一视频片段的拍摄时长。
  22. 根据权利要求15-21任意一项所述终端,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,具体为:
    当所述第一视频片段为拍摄时长在第一预设时长范围内的视频类型时,确定所述第一播放速率为慢动作的播放速率;
    当所述第一视频片段为拍摄时长在第二预设时长范围内的视频类型时,确定所述第一播放速率为快动作的播放速率。
  23. 根据权利要求15或16所述终端,其特征在于,所述第一视频片段的目标信息包括所述第一视频片段的内容特征,所述第一视频片段的内容特征包含所述第一视频片段中的画面变化情况。
  24. 根据权利要求15-23任意一项所述终端,其特征在于,所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,包括:
    当所述第一视频片段为画面变化速度落入第一变化速度范围内的视频类型时,确定所述第一播放速率为慢动作的播放速率;
    当所述第一视频片段为画面变化速度落入第二变化速度范围内的视频类型时,确定所述第一播放速率为快动作的播放速率。
  25. 根据权利要求15所述终端,其特征在于,所述第一视频片段的目标信息包括如下信息中的至少两种信息:所述第一视频片段中画面场景的信息、所述第一视频片段的拍摄焦距、所述第一视频片段的拍摄时长和所述第一视频片段中的画面变化情况;
    所述根据所述第一视频片段的目标信息确定所述第一视频片段的第一播放速率,具体为:
    根据所述至少两种信息确定所述第一视频片段的至少两个播放速率结果,其中,每个播放速率结果为基于所述至少两种信息中的一种信息确定得到;
    根据所述至少两个播放速率结果确定所述第一视频片段的第一播放速率。
  26. 根据权利要求25所述终端,其特征在于,所述第一播放速率为所述至少两个播放速率结果表征的播放速率中出现次数最多的播放速率。
  27. 根据权利要求15至26任一项所述终端,其特征在于,所述处理器还用于执行如下操作:
    获取目标视频中的第二视频片段的目标信息,所述目标信息包括所述第二视频片段的内容特征和所述第二视频片段的拍摄参数中的一项或多项;
    根据所述第二视频片段的目标信息确定所述第二视频片段的第二播放速率;
    将所述第二视频片段的播放速率调整为所述第二播放速率。
  28. 根据权利要求15至27任一项所述终端,其特征在于,所述处理器获取目标视频中的第一视频片段的目标信息,具体为:
    在拍摄所述目标视频的过程中获取所述目标视频中的第一视频片段的目标信息。
  29. 一种终端,其特征在于,包括用于执行权利要求1至14任一项所述的方法的单元。
  30. 一种芯片系统,其特征在于,所述芯片系统包括至少一个处理器和接口电路,所述处理器用于通过所述接口电路执行存储器中存储的计算机程序实现权利要求1至14任一项所述的方法。
  31. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机程序,当所述程序被处理器执行时,实现权利要求1至14任一项所述的方法。
PCT/CN2020/100840 2019-08-20 2020-07-08 视频特效生成方法及终端 WO2021031733A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910769084.8A CN112422804B (zh) 2019-08-20 2019-08-20 视频特效生成方法及终端
CN201910769084.8 2019-08-20


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114187169A (zh) * 2021-12-10 2022-03-15 北京字节跳动网络技术有限公司 视频特效包的生成方法、装置、设备及存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086554A (zh) * 2019-08-20 2022-09-20 华为技术有限公司 视频特效生成方法及终端
CN113347475B (zh) * 2021-05-31 2023-02-28 北京达佳互联信息技术有限公司 多媒体信息的播放倍速调节方法和装置
CN113395545B (zh) * 2021-06-10 2023-02-28 北京字节跳动网络技术有限公司 视频处理、视频播放方法、装置、计算机设备及存储介质
CN115037872B (zh) * 2021-11-30 2024-03-19 荣耀终端有限公司 视频处理方法和相关装置
CN114938427B (zh) * 2022-05-12 2024-03-12 北京字跳网络技术有限公司 媒体内容的拍摄方法、装置、设备、存储介质和程序产品

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101600107A (zh) * 2009-07-08 2009-12-09 杭州华三通信技术有限公司 调整视频录像播放速度的方法、系统及装置
US20140123195A1 (en) * 2012-10-30 2014-05-01 Kt Corporation Control video content play speed
CN104735385A (zh) * 2015-03-31 2015-06-24 小米科技有限责任公司 播放控制方法及装置、电子设备
CN104811798A (zh) * 2015-04-17 2015-07-29 广东欧珀移动通信有限公司 一种调整视频播放速度的方法及装置
US20160365113A1 (en) * 2015-06-12 2016-12-15 JVC Kenwood Corporation Playback device, playback method, and non-transitory recording medium
CN106559635A (zh) * 2015-09-30 2017-04-05 杭州萤石网络有限公司 一种多媒体文件的播放方法及装置
CN107105314A (zh) * 2017-05-12 2017-08-29 北京小米移动软件有限公司 视频播放方法及装置
CN108401193A (zh) * 2018-03-21 2018-08-14 北京奇艺世纪科技有限公司 一种视频播放方法、装置和电子设备

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6856757B2 (en) * 2001-03-22 2005-02-15 Koninklijke Philips Electronics N.V. Apparatus and method for detecting sports highlights in a video program
US8472791B2 (en) * 2004-03-17 2013-06-25 Hewlett-Packard Development Company, L.P. Variable speed video playback
US7664558B2 (en) * 2005-04-01 2010-02-16 Apple Inc. Efficient techniques for modifying audio playback rates
CN101018324B (zh) * 2007-02-08 2011-02-16 Huawei Technologies Co., Ltd. Video surveillance controller, and video surveillance method and system
US8295687B1 (en) * 2007-04-16 2012-10-23 Adobe Systems Incorporated Indicating different video playback rates
CN102117638A (zh) * 2009-12-30 2011-07-06 北京华旗随身数码股份有限公司 Method and playback apparatus for music-rhythm-controlled video output
JP5551308B2 (ja) * 2010-05-26 2014-07-16 Qualcomm Incorporated Camera-parameter-assisted video frame rate up-conversion
JP5537390B2 (ja) * 2010-11-15 2014-07-02 Japan Broadcasting Corporation (NHK) Video signal processing apparatus and camera apparatus
US8849948B2 (en) * 2011-07-29 2014-09-30 Comcast Cable Communications, Llc Variable speed playback
US8732579B2 (en) * 2011-09-23 2014-05-20 Klip, Inc. Rapid preview of remote video content
US20130129308A1 (en) * 2011-11-18 2013-05-23 Keith Stoll Karn Display device with adaptive fast navigation mode
US9253229B1 (en) * 2013-01-07 2016-02-02 Cox Communications, Inc. Correlating video quality with video playback view
JP6093289B2 (ja) * 2013-12-10 2017-03-08 株式会社フレイ・スリー Video processing apparatus, video processing method, and program
CN105100692B (zh) * 2014-05-14 2018-10-26 Hangzhou Hikvision System Technology Co., Ltd. Video playback method and apparatus
WO2016006946A1 (ko) * 2014-07-09 2016-01-14 정지연 System for creating and playing augmented reality content and method using the same
CN104270565B (zh) * 2014-08-29 2018-02-02 Xiaomi Inc. Image shooting method, apparatus, and device
US9679605B2 (en) * 2015-01-29 2017-06-13 Gopro, Inc. Variable playback speed template for video editing application
CN104702919B (zh) * 2015-03-31 2019-08-06 Xiaomi Inc. Playback control method and apparatus, and electronic device
CN104869430B (zh) * 2015-05-18 2017-12-05 北京中熙正保远程教育技术有限公司 Video variable-speed playback method and apparatus
CN104980794A (zh) * 2015-06-30 2015-10-14 Beijing Kingsoft Security Software Co., Ltd. Video splicing method and apparatus
CN105072328B (zh) * 2015-07-16 2020-03-24 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Video shooting method, apparatus, and terminal
CN105554399A (zh) * 2016-02-24 2016-05-04 Beijing Xiaomi Mobile Software Co., Ltd. Shooting method, shooting apparatus, and terminal device
CA2928401A1 (en) * 2016-04-28 2017-10-28 Clearwater Clinical Limited A computer-implemented method for making rapid periodic movements visible to the human eye
CN105959717A (zh) * 2016-05-27 2016-09-21 天脉聚源(北京)传媒科技有限公司 Mobile-terminal-based live streaming method and apparatus
CN106027897A (zh) * 2016-06-21 2016-10-12 Beijing Xiaomi Mobile Software Co., Ltd. Method and apparatus for setting shooting parameters
CN106534938A (zh) * 2016-09-30 2017-03-22 LeTV Holding (Beijing) Co., Ltd. Video playback method and apparatus
WO2018085982A1 (zh) * 2016-11-08 2018-05-17 SZ DJI Technology Co., Ltd. Video recording method and apparatus, and shooting device
CN108235123B (zh) * 2016-12-15 2020-09-22 Alibaba (China) Co., Ltd. Video playback method and apparatus
CN106791408A (zh) * 2016-12-27 2017-05-31 Nubia Technology Co., Ltd. Shooting preview apparatus, terminal, and method
CN108665518B (zh) * 2017-04-01 2021-10-22 TCL Technology Group Corporation Control method and system for adjusting animation speed
CN107360365A (zh) * 2017-06-30 2017-11-17 盯盯拍(深圳)技术股份有限公司 Shooting method, shooting apparatus, terminal, and computer-readable storage medium
CN107197349A (zh) * 2017-06-30 2017-09-22 Beijing Kingsoft Security Software Co., Ltd. Video processing method and apparatus, electronic device, and storage medium
CN107770595B (zh) * 2017-09-19 2019-11-22 浙江科澜信息技术有限公司 Method for embedding a real scene in a virtual scene
CN107682742B (zh) * 2017-10-10 2021-03-23 成都德尚视云科技有限公司 Transcoding-free video condensation playback method
CN108881765A (zh) * 2018-05-25 2018-11-23 讯飞幻境(北京)科技有限公司 Lightweight recording and broadcasting method, apparatus, and system
CN109218810A (zh) * 2018-08-29 2019-01-15 Nubia Technology Co., Ltd. Video recording parameter adjustment method, device, and computer-readable storage medium
CN109587560A (zh) * 2018-11-27 2019-04-05 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Video processing method and apparatus, electronic device, and storage medium
CN109819161A (zh) * 2019-01-21 2019-05-28 北京中竞鸽体育文化发展有限公司 Frame rate adjustment method, apparatus, terminal, and readable storage medium
CN110139160B (zh) * 2019-05-10 2022-07-22 Beijing QIYI Century Science & Technology Co., Ltd. Prediction system and method
CN115086554A (zh) * 2019-08-20 2022-09-20 Huawei Technologies Co., Ltd. Video special effect generation method and terminal


Also Published As

Publication number Publication date
CN112422804B (zh) 2022-06-14
CN115086554A (zh) 2022-09-20
CN112422804A (zh) 2021-02-26
US20220174237A1 (en) 2022-06-02

Similar Documents

Publication Publication Date Title
WO2021031733A1 (zh) Video special effect generation method and terminal
KR102161230B1 (ko) User interface method and apparatus for searching multimedia content
EP3226537B1 (en) Mobile terminal and method for controlling the same
US20160364103A1 (en) Method and apparatus for using gestures during video playback
CN106575361B (zh) Method for providing visual sound image and electronic device implementing same
US7987423B2 (en) Personalized slide show generation
US10282061B2 (en) Electronic device for playing-playing contents and method thereof
US20160232696A1 Method and apparatus for generating a text color for a group of images
US20180132006A1 (en) Highlight-based movie navigation, editing and sharing
US9781355B2 (en) Mobile terminal and control method thereof for displaying image cluster differently in an image gallery mode
KR102376700B1 (ko) Video content generation method and apparatus
WO2023151611A1 (zh) Video recording method and apparatus, and electronic device
CN105556947A (zh) Method and apparatus for color detection to generate text color
CN109257649B (zh) Multimedia file generation method and terminal device
US10261749B1 (en) Audio output for panoramic images
CN113918522A (zh) File generation method and apparatus, and electronic device
CN110830704A (zh) Rotating image generation method and apparatus
US10460196B2 (en) Salient video frame establishment
WO2016200692A1 (en) Editing, sharing, and viewing video
KR20170098113A (ko) Method for creating an image group in an electronic device, and the electronic device
US20140153836A1 (en) Electronic device and image processing method
CN114245174A (zh) Video preview method and related device
KR20200017466A (ko) Apparatus and associated method for providing video items

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20855629

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20855629

Country of ref document: EP

Kind code of ref document: A1