WO2018072652A1 - Video processing method, video processing device and storage medium - Google Patents

Video processing method, video processing device and storage medium

Info

Publication number
WO2018072652A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
target image
effect
image frame
effect element
Prior art date
Application number
PCT/CN2017/106102
Other languages
English (en)
French (fr)
Inventor
汪倩怡
高雨
王志斌
李恩
李榕
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2018072652A1 publication Critical patent/WO2018072652A1/zh
Priority to US16/231,873 priority Critical patent/US11012740B2/en
Priority to US17/234,741 priority patent/US11412292B2/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/001Texturing; Colouring; Generation of texture or colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • G06T11/203Drawing of straight lines or curves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Definitions

  • the present invention relates to video technologies, and in particular, to a video processing method, a video processing device, and a storage medium.
  • a user can capture a video through a mobile terminal such as a mobile phone and share the video on the social platform that the user is logged in to (or share a pre-recorded video), and logged-in users or visiting users of the social platform can watch the videos shared by the user and use the interactive features of the social platform to comment, communicate, and share again, further enhancing the effect of video sharing.
  • embodiments of the present invention are directed to providing a video processing method, a video processing apparatus, and a storage medium, which are capable of enhancing the recognizability of a clip or a photographed subject in a video.
  • an embodiment of the present invention provides a video processing method, including:
  • determining a target image frame corresponding to the dynamic effect to be added in the video; determining an attribute of the effect element corresponding to the dynamic effect in each of the target image frames, and a coordinate of the effect element; rendering the effect element in a drawing interface based on the attribute of the effect element and the coordinate of the effect element; filling the drawing interface with the target image frame as a background of the drawing interface to form a drawing interface frame having the dynamic special effect; and outputting a drawing interface frame corresponding to each of the target image frames.
  • an embodiment of the present invention provides a video processing apparatus, including:
  • a first determining portion configured to determine a target image frame corresponding to the dynamic effect to be added in the video
  • a second determining portion configured to determine an attribute of the effect element corresponding to the dynamic effect in each of the target image frames, and a coordinate of the effect element
  • a rendering portion configured to render the effect element based on an attribute of the effect element and a coordinate of the effect element in a drawing interface
  • a compositing portion configured to fill the drawing interface with the target image frame as a background of the drawing interface to form a drawing interface frame having a dynamic special effect
  • an output part configured to output a drawing interface frame corresponding to each of the target image frames.
  • an embodiment of the present invention provides a video processing apparatus, including: a memory and a processor, where the memory stores executable instructions for causing the processor to perform the following operations:
  • determining a target image frame corresponding to the dynamic effect to be added in the video; determining an attribute of the effect element corresponding to the dynamic effect in each of the target image frames, and a coordinate of the effect element; rendering the effect element in a drawing interface based on the attribute of the effect element and the coordinate of the effect element; filling the drawing interface with the target image frame as a background of the drawing interface to form a drawing interface frame having the dynamic special effect; and outputting a drawing interface frame corresponding to each of the target image frames.
  • an embodiment of the present invention provides a storage medium, where executable instructions are stored for performing a video processing method provided by an embodiment of the present invention.
  • an embodiment of the present invention provides a video processing method, where the method is performed by a terminal, the terminal includes one or more processors, a memory, and one or more programs, the one or more programs are stored in the memory, each program may include one or more units each corresponding to a set of instructions, and the one or more processors are configured to execute the instructions; the method includes:
  • determining a target image frame corresponding to the dynamic effect to be added in the video; determining an attribute of the effect element corresponding to the dynamic effect in each of the target image frames, and a coordinate of the effect element; rendering the effect element in a drawing interface based on the attribute of the effect element and the coordinate of the effect element; filling the drawing interface with the target image frame as a background of the drawing interface to form a drawing interface frame having the dynamic special effect; and outputting a drawing interface frame corresponding to each of the target image frames.
  • FIG. 1 is a schematic diagram of an optional implementation of forming a special effect in a video in the related art;
  • FIG. 2 is a schematic flow chart of an optional video processing method in an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of an optional video processing method performed on a user side and a network side in an embodiment of the present invention
  • FIG. 4 is a schematic structural diagram of an optional software and hardware of a video processing apparatus according to an embodiment of the present invention.
  • FIG. 5A is a schematic flowchart of an optional video processing method according to an embodiment of the present invention.
  • FIG. 5B is a schematic diagram of mapping different stages of a dynamic effect to image frames of a video in an embodiment of the present invention;
  • FIG. 5C is a schematic diagram of drawing effect elements for a target image frame according to the mapped corresponding stage of the dynamic effect in an embodiment of the present invention;
  • FIG. 5D is a schematic diagram of synthesizing a target image frame and a drawing interface for drawing an effect element in an embodiment of the present invention
  • FIG. 6 is a schematic flow chart of an optional video processing method in an embodiment of the present invention.
  • FIG. 7 is a schematic flow chart of an optional video processing method in an embodiment of the present invention.
  • FIG. 8A is a schematic diagram of an optional display of the light painting effect in an embodiment of the present invention;
  • FIG. 8B is a schematic diagram of an optional display of the light painting effect in an embodiment of the present invention;
  • FIG. 8C is a schematic diagram of an optional display of the light painting effect in an embodiment of the present invention;
  • FIG. 8D is a schematic diagram of an optional display of the light painting effect in an embodiment of the present invention;
  • FIG. 8E is a schematic diagram of an optional display of the light painting effect in an embodiment of the present invention;
  • FIG. 8F is a schematic diagram of an optional display of the light painting effect in an embodiment of the present invention;
  • FIG. 8G is a schematic diagram of an optional display of the light painting effect in an embodiment of the present invention;
  • FIG. 8H is a schematic diagram of an optional display of the light painting effect in an embodiment of the present invention;
  • FIG. 8I is a schematic diagram of an optional process for drawing a special effect element based on a particle system in an embodiment of the present invention.
  • FIG. 8J is an optional schematic flowchart of forming a light drawing effect in an input video according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention.
  • the terms "comprising", "including", or any other variations thereof are intended to cover a non-exclusive inclusion, such that a method or apparatus that includes a series of elements includes not only the elements that are explicitly described, but also other elements that are not explicitly listed, or elements inherent to the method or apparatus.
  • unless otherwise restricted, an element defined by the phrase "comprising a ..." does not exclude the presence of additional related elements (e.g., steps in the method or portions of the apparatus) in the method or apparatus that includes the element.
  • the portions mentioned here may be parts of circuits, parts of processors, parts of programs or software, and the like; they may also be units, which may be modular or non-modular.
  • the video processing method provided by the embodiment of the present invention includes a series of steps, but the video processing method provided by the embodiment of the present invention is not limited to the described steps.
  • the video processing device provided by the embodiment of the present invention is not limited to including the portion explicitly described, and may also include a portion that is required to be set when acquiring related information or processing based on the information.
  • Dynamic effects which are dynamic visual effects added in the video.
  • the dynamic effect may be a light painting effect, that is, a dynamic visual effect with specific attributes formed by a moving light source in the video, such as the trajectory along which the light source moves, or various patterns formed by the light source.
  • Attributes which describe the way in which effect elements are constructed to form dynamic effects, such as describing dynamic effects in terms of size, color, and quantity; for example, describing dynamic effects in terms of speed, acceleration, and life cycle.
  • Image frames, which constitute the basic unit of a video; one image frame is a static image, and a set of continuously captured image frames, when presented in sequence, produces a dynamic effect.
  • Visual effects: when a dynamic effect is added to the video, it is carried by a series of image frames in the video (also called target image frames), and the dynamic effect can be decomposed into a series of static visual effects for the target image frames.
  • a series of continuous static visual effects constitutes the dynamic special effect.
  • each visual effect corresponds to a target image frame, and the visual effect of a target image frame can be further decomposed into the positions and attributes of the effect elements in the corresponding target image frame.
  • a drawing interface also known as a canvas, for rendering and dynamically displaying special effects elements and graphical elements in image frames.
  • rendering and display operations of graphical elements in the drawing interface are done using a scripting language (usually JavaScript).
  • FIG. 2 is an optional schematic flowchart of a video processing method according to an embodiment of the present invention, including: step 101, determining a target image frame corresponding to the dynamic effect to be added in a video; step 102, determining the attribute of the corresponding effect element and the coordinates of the effect element in each target image frame; step 103, rendering the effect element in the drawing interface based on the attribute of the effect element and the coordinates of the effect element; step 104, filling the drawing interface with the target image frame as the background of the drawing interface to form a drawing interface frame with the dynamic effect; and step 105, outputting the drawing interface frame corresponding to each target image frame.
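The following is a minimal sketch of steps 101-105 as a single pass over the decoded frames. It is not the patent's own code: `DynamicEffect`, `EffectElementSpec`, and the three declared helpers are hypothetical placeholders for logic detailed later in this document.

```typescript
// A minimal sketch of steps 101-105; all names are illustrative assumptions.
interface DynamicEffect { durationMs: number }   // description of the effect to add
interface EffectElementSpec { x: number; y: number; size: number; color: string }

declare function computeEffectElements(effect: DynamicEffect, frameIndex: number): EffectElementSpec[];
declare function renderElementsToCanvas(elements: EffectElementSpec[], width: number, height: number): HTMLCanvasElement;
declare function fillBackground(canvas: HTMLCanvasElement, frame: HTMLCanvasElement): HTMLCanvasElement;

function processVideo(
  frames: HTMLCanvasElement[],   // decoded image frames of the video
  targetIndices: Set<number>,    // indices of the target image frames
  effect: DynamicEffect
): HTMLCanvasElement[] {
  return frames.map((frame, i) => {
    if (!targetIndices.has(i)) return frame;              // non-target frames pass through (step 101)
    const elements = computeEffectElements(effect, i);    // attributes + coordinates (step 102)
    const canvas = renderElementsToCanvas(elements, frame.width, frame.height); // render elements (step 103)
    return fillBackground(canvas, frame);                 // fill frame as background and output (steps 104-105)
  });
}
```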
  • the embodiment of the present invention further provides a video processing apparatus for performing the above video processing method, and the video processing apparatus can be implemented in various manners, which are exemplarily described below.
  • the video processing device may be implemented together based on hardware resources in a terminal on the user side (for example, a smart phone, a tablet, etc.) and a server on the network side.
  • a terminal on the user side for example, a smart phone, a tablet, etc.
  • the video processing apparatus is distributed across the user-side terminal and the network-side server, and the user-side terminal and the network-side server can communicate in various manners, for example, based on cellular communication systems such as Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA) and their evolutions, or, for example, based on Wireless Fidelity (WiFi) communication.
  • the user-side terminal and the network-side server perform data interaction over the established communication to cooperatively complete steps 101 to 105 exemplarily shown in FIG. 2; the steps performed on the terminal and on the server in the embodiment of the present invention can be flexibly adjusted in actual applications according to needs.
  • the terminal on the user side performs video collection.
  • the video processing device may be implemented based on hardware resources of the terminal on the user side, that is, the video processing device is implemented in the terminal on the user side, and the terminal on the user side performs steps 101 to 105 exemplarily shown in FIG. 2 .
  • the video processing device may be implemented based on hardware resources of a server on the network side, that is, the video processing device is implemented in a server on the network side, and the server on the network side performs steps 101 to 105 exemplarily shown in FIG. 2 .
  • the hardware resources of the video processing device include computing resources such as a processor and a memory, and communication resources such as a network interface; at the software level, the video processing device can be implemented as executable instructions, including computer-executable instructions such as programs and modules, stored in a storage medium.
  • the video processing device when the video processing device is implemented based on the hardware resources of the user-side terminal, refer to an optional hardware and software structure diagram of the video processing device 10 shown in FIG. 4, where the video processing device 10 includes a hardware layer, a driver layer, and an operating system layer. And the application layer.
  • the structure of the video processing device 10 illustrated in FIG. 4 is merely an example and does not constitute a limitation on the structure of the video processing device 10.
  • the video processing device 10 may set more components than FIG. 4 according to implementation requirements, or omit setting partial components according to implementation needs.
  • the hardware layer of video processing device 10 includes a processor 11, an input/output interface 13, a storage medium 14, and a network interface 12, which can communicate via a system bus connection.
  • the processor 11 can be implemented by a central processing unit (CPU), a microcontroller unit (MCU), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
  • the input/output interface 13 can be implemented using input/output devices such as a display screen, a touch screen, and a speaker.
  • the storage medium 14 may be implemented by using a non-volatile storage medium such as a flash memory, a hard disk, or an optical disc, or may be implemented by using a volatile storage medium such as double data rate (DDR) memory, and stores executable instructions for performing the above video processing method.
  • storage medium 14 may be centrally located with other components of video processing device 10, or may be distributed relative to other components in video processing device 10.
  • the network interface 12 provides the processor 11 with access to external data, such as a storage medium 14 located at a different place.
  • the network interface 12 may implement short-range communication based on Near Field Communication (NFC) technology, Bluetooth technology, or ZigBee technology, and may also implement communication systems such as CDMA and WCDMA.
  • the driver layer includes middleware 15 for the operating system 16 to identify and communicate with the hardware layer components, such as a collection of drivers for the various components of the hardware layer.
  • the operating system 16 is configured to provide a user-oriented graphical interface, including, by way of example, a plug-in icon, a desktop background, and an application icon, and the operating system 16 supports user control of the device via a graphical interface.
  • the operating system type and version are not limited. For example, it may be a Linux operating system, a UNIX operating system, or another operating system.
  • the application layer includes an application running on the user side terminal. As described above, when it is required to implement the function of sharing the captured video on the social platform, the social application 17 is run in the application layer.
  • the video processing method exemplarily shown in FIG. 2 is applied to a scenario in which a user sets the time period during which a dynamic effect is formed in a video (for example, the dynamic effect is displayed when the video plays from the 5th minute to the 6th minute) and the position in the picture (image frame) of the video where the dynamic effect is formed (for example, the dynamic effect is displayed at the center of the video picture).
  • the video processing method provided by the embodiment of the present invention may be used to process the video captured by the terminal in real time. Accordingly, refer to step 201a:
  • step 201a the user-side terminal performs video collection, and sequentially displays the collected image frames on the graphical interface.
  • for example, when the user-side terminal runs the client of the social platform and the shooting-and-sharing function is triggered, the rear camera of the terminal is selected in the graphical interface to capture the environment, and the captured image frames are presented sequentially in the graphical interface of the terminal screen.
  • for another example, when the user-side terminal runs the client of the social platform and the self-portrait-and-sharing function is triggered, the front camera of the terminal is used to capture the user and the environment where the user is located, and the captured image frames are presented sequentially in the graphical interface of the screen.
  • the video processing method provided by the embodiment of the present invention may also be used to add a dynamic special effect to a video (file) pre-stored locally in the terminal, for example, to process a video pre-captured by the terminal or a video received from the network side or from another terminal; correspondingly, see step 201b:
  • step 201b the user side terminal decodes the video, and sequentially presents the image frame in the video on the graphic interface.
  • step 201a and step 201b are alternative steps performed according to the type of the video (a video captured in real time or a pre-stored video).
  • Step 202a determining a dynamic effect (also referred to as an effect to be added) that the user sets to be added in the video.
  • for example, during the presentation of the image frames of the video in the graphical interface, the user-side terminal may present virtual identifiers of the candidate dynamic effects that can be added to the current video, such as the serial number, name, or thumbnail of each candidate dynamic effect, and determine the dynamic effect that needs to be added according to the virtual identifier triggered by the user.
  • for another example, during the presentation of the image frames of the video in the graphical interface, the user-side terminal may present a virtual switch of a dynamic effect that can be formed in the current video, such as the name or thumbnail of the dynamic effect presented in the graphical interface.
  • Step 202b determining the location of the dynamic effect to be added set by the user in the video.
  • the location set by the user in the currently presented image frame (screen) is identified, and the identified location is taken as the location where dynamic effects need to be added.
  • for example, a specific touch gesture (such as a triple tap or a double tap) in the image frame is preset as the triggering operation for presenting a dynamic effect; when the corresponding operation is detected, its location is used as the location where the dynamic effect needs to be added, for example, when the area of the dynamic effect is large, as the center position of the dynamic effect, or as the position where the dynamic effect first appears, or as the last position before the dynamic effect disappears.
  • for another example, the virtual identifier of the dynamic effect dragged by the user in the graphical interface and the position where it is released in the image frame are recognized, and the identified position is used as the position where the dynamic effect needs to be added to the video, for example, as the center position of the formed dynamic effect, or as its start position or end position.
  • for yet another example, the trajectory of the user's operation in the video is tracked and recognized, and the position through which the trajectory of the user's operation passes in each image frame is determined, so as to add the dynamic effect at the corresponding position of the corresponding image frame.
  • for example, the contact between the user's fingertip and the graphical interface is detected, and the movement trajectory of the contact is recognized; assume that the contact is at position 1 in image frame 1, at position 2 in image frame 2, and so on, until the contact is at position n in image frame n, so that position 1 to position n form a continuous movement trajectory; then position i of image frame i (1 ≤ i ≤ n) is the position for adding the dynamic effect.
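As an illustration of binding contact positions to the image frames being presented, the following is a minimal sketch; the names and the touch-event shape are assumptions, not taken from the patent.

```typescript
// Each recorded contact position is bound to the image frame shown at that moment.
interface TrackedPoint {
  frameIndex: number; // image frame i in which the contact was observed
  x: number;          // contact position within the frame, in pixels
  y: number;
}

const trajectory: TrackedPoint[] = [];

function onTouchMove(touchX: number, touchY: number, currentFrameIndex: number): void {
  // Position i recorded for image frame i later becomes the position at which
  // the dynamic effect is added in that frame (1 <= i <= n).
  trajectory.push({ frameIndex: currentFrameIndex, x: touchX, y: touchY });
}
```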
  • Step 202c Determine a time period set by the user to add a dynamic effect in the video.
  • the time period of the dynamic effect refers to the time period corresponding to the life cycle of the dynamic effect on the video time axis when the dynamic effect is added to the video.
  • for example, the dynamic effect set by the user may have a predetermined life cycle (e.g., 10 seconds, that is, the dynamic effect disappears after being presented for 10 seconds); accordingly, when the user triggers the operation of forming the dynamic effect, timing is started, and the period until the life cycle expires is used as the time period during which the dynamic effect needs to be added to the video.
  • the dynamic effect set by the user is added at the same position of the image frames within this time period (that is, the position of the dynamic effect to be added determined in step 202b).
  • for another example, the trajectory of the user's operation in the video is tracked and recognized, and the dynamic effect is formed around the trajectory of the user operation; accordingly, the time period for adding the dynamic effect is the period from when the user operation is detected during video playback until the user operation is released.
  • for example, the contact between the user's fingertip and the graphical interface is detected, and the time when the contact is first recognized (assume the video has played to the 5th minute) and the time when the release of the contact is recognized (assume the video has played to the 6th minute) are determined; the playback period from the 5th to the 6th minute of the video is then the time period during which the dynamic effect needs to be added.
  • the image frames located within this time period can be determined as the target image frames for carrying the dynamic effect based on the time axis of the video; for example, following the foregoing example, the image frames corresponding to the 5th to 6th minute of the video are used to carry the dynamic effect.
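The following is a minimal sketch, assuming a constant frame rate, of mapping the user-set time period onto indices of target image frames on the video time axis; the frame-rate assumption and the names are illustrative only.

```typescript
// Map a time period (e.g. the 5th to 6th minute) to a range of frame indices.
function targetFrameRange(startSeconds: number, endSeconds: number, fps: number) {
  return {
    first: Math.floor(startSeconds * fps),  // e.g. 5 * 60 * 30 = 9000
    last: Math.ceil(endSeconds * fps) - 1,  // e.g. 6 * 60 * 30 - 1 = 10799
  };
}
// Frames whose index lies in [first, last] are target image frames carrying the
// dynamic effect; all other frames are non-target image frames.
```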
  • Step 203 Determine the attributes of the effect elements that need to be formed in each target image frame and the coordinates of the effect elements that need to be formed when the dynamic effects set by the user are added to the video.
  • for each target image frame, the visual effect that needs to be carried by that target image frame is determined; the visual effect is a "snapshot" of the dynamic special effect at a different stage, described by attributes of the effect elements such as the number of effect elements, initial speed, gravitational acceleration, centripetal force, centrifugal force, tangential acceleration, rotation speed, rotation acceleration, initial size, end size, initial color, end color, color mixing mode, and life cycle.
  • the attributes and positions of the effect elements that need to be formed in each target image frame are calculated by mapping the visual effects of the different stages of the dynamic effect (stage 1 to stage n) to the target image frames (target image frame 1 to target image frame n).
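As one way of picturing this mapping, the sketch below interpolates a few numeric attributes between the first and last stages across the target frames. The linear interpolation is an assumption made here for illustration; the patent does not prescribe a particular mapping.

```typescript
// Map stage 1..n of the dynamic effect onto target image frame 1..n.
interface StageAttributes {
  count: number;                    // number of effect elements
  size: number;                     // element size
  color: [number, number, number];  // RGB, 0..255
}

function attributesForFrame(
  start: StageAttributes,  // attributes at the first stage of the effect
  end: StageAttributes,    // attributes at the last stage of the effect
  frameOffset: number,     // 0-based offset of the target frame within the target range
  frameCount: number       // total number of target image frames
): StageAttributes {
  const t = frameCount > 1 ? frameOffset / (frameCount - 1) : 0;
  const lerp = (a: number, b: number) => a + (b - a) * t;
  return {
    count: Math.round(lerp(start.count, end.count)),
    size: lerp(start.size, end.size),
    color: [
      Math.round(lerp(start.color[0], end.color[0])),
      Math.round(lerp(start.color[1], end.color[1])),
      Math.round(lerp(start.color[2], end.color[2])),
    ],
  };
}
```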
  • Step 204: Render the effect element in the drawing interface based on the attribute of the effect element and the coordinates of the effect element.
  • for example, stage 1 of the dynamic effect is mapped to target image frame 1; according to the attributes and positions of the effect elements that need to be formed in target image frame 1, calculated from stage 1, effect elements with the corresponding attributes are rendered at the corresponding positions in an empty drawing interface, thereby obtaining a drawing interface that can be merged with target image frame 1.
  • when rendering the effect elements of target image frame 2, the drawing interface first needs to be emptied; then, at the positions of the effect elements of target image frame 2 in the empty drawing interface, effect elements with the corresponding attributes are rendered according to the calculated attribute of the effect element at each position, thereby obtaining a drawing interface that can be merged with target image frame 2.
  • in an optional embodiment, the coordinates of the effect element are normalized according to the size of the target image frame to form normalized coordinates, and the effect element with the corresponding attribute is rendered at the position of the normalized coordinates in an empty drawing interface with a black background; this avoids the problem that the position of the rendered effect element is inconsistent with the position of the dynamic effect set by the user in the video, and ensures the accuracy of the position of the dynamic effect formed in the video.
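A minimal sketch of this normalization, with illustrative names: a position set in the video picture is divided by the size of the target image frame, and the normalized coordinates are later scaled to the drawing interface's own size, so the rendered element lands where the user set the effect even if the drawing interface and the frame differ in resolution.

```typescript
// Normalize a position against the target image frame's size.
function normalizeToFrame(x: number, y: number, frameWidth: number, frameHeight: number) {
  return { nx: x / frameWidth, ny: y / frameHeight };     // both in the range 0..1
}

// Scale normalized coordinates to the drawing interface (canvas) size.
function denormalizeToCanvas(nx: number, ny: number, canvasWidth: number, canvasHeight: number) {
  return { cx: nx * canvasWidth, cy: ny * canvasHeight };
}
```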
  • Step 205 Fill the drawing interface with the target image frame as the background of the drawing interface to form a drawing interface frame with dynamic special effects.
  • Step 206 Output a drawing interface frame corresponding to each target image frame.
  • for example, target image frame 1 is used as the background and is filled into the drawing interface rendered for target image frame 1 (containing the effect elements rendered for target image frame 1), forming drawing interface frame 1; in the same way, drawing interface frame 2 to drawing interface frame n can be formed.
  • in an optional embodiment, the user-side terminal outputs the drawing interface frames in real time, so that the user can view in a timely manner the dynamic effect set in the video (for example, a pre-stored video, or the video currently being captured by the user-side terminal in the current collection environment).
  • Step 207 Video-encode the non-target image frame in the video and the output drawing interface frames in a chronological order to form a video file with dynamic special effects.
  • for example, the non-target image frames, together with drawing interface frame 1 to drawing interface frame n, are video-encoded in chronological order to form a video file carrying the user-set dynamic effect for local decoding and playback by the user-side terminal.
  • when the video file is played, the dynamic effect set by the user can be viewed at the specific position of the specific video segment, so that the viewer can quickly learn that this video clip (and the position of the dynamic effect in the video) is the part emphasized by the video publisher; this achieves the effect that, when the video is shared, the viewer pays attention to the specific video clip or the specific position in the video.
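The sketch below illustrates step 207: interleaving non-target image frames with the drawing interface frames in chronological order before handing them to an encoder. `encodeFrame` stands in for whatever encoder the terminal actually uses and, like the other names here, is purely illustrative.

```typescript
interface TimedFrame {
  timestampMs: number;   // position of the frame on the video time axis
  pixels: ImageData;     // any frame representation the encoder accepts
}

declare function encodeFrame(frame: TimedFrame): void; // hypothetical encoder hook

function encodeWithEffects(nonTargetFrames: TimedFrame[], drawingInterfaceFrames: TimedFrame[]): void {
  const all = [...nonTargetFrames, ...drawingInterfaceFrames]
    .sort((a, b) => a.timestampMs - b.timestampMs);      // chronological order
  for (const frame of all) {
    encodeFrame(frame); // the encoded stream forms the video file with the dynamic effect
  }
}
```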
  • the video processing method exemplarily shown in FIG. 2 may also be applied to the following scenario: the user performs a specific action in the video, the trajectory of the specific action is automatically tracked in the video, and a dynamic effect is added following the corresponding trajectory.
  • the video processing method provided by the embodiment of the present invention may be used to process a video captured by a terminal in real time, and correspondingly, refer to step 301a:
  • step 301a the user-side terminal performs video collection, and sequentially displays the collected image frames on the graphical interface.
  • it can be applied to a scene of a user's shooting environment, and for example, can be applied to a scene of a user's self-timer.
  • the video processing method provided by the embodiment of the present invention may also be used to process a video (file) pre-stored locally in the terminal, for example, a video pre-captured by the terminal or a video received from the network side or from another terminal; correspondingly, see step 301b:
  • step 301b the user side terminal decodes the video, and sequentially presents the image frames in the video on the graphical interface.
  • step 301a and step 301b are alternative steps performed according to the type of the video (a video captured in real time or a pre-stored video).
  • Step 302 Perform feature recognition on each image frame of the video, and determine the identified image frame with the feature of the specific action as the target image frame.
  • for example, features are extracted from the image frames presented in the graphical interface and compared with preset action features (such as facial movements or finger movements), and image frames having the preset action features are determined as the target image frames to which the dynamic effect needs to be added.
  • Step 303: When adding a dynamic effect following the trajectory of the specific action in the video, determine the attribute of the effect element corresponding to each target image frame, and determine the coordinates of the effect element drawn in the drawing interface based on the position of the specific action in each target image frame.
  • Step 304 Rendering an effect element in the drawing interface based on the attribute of the effect element and the coordinates of the effect element.
  • step 305 the target image frame is filled into the drawing interface as a background of the drawing interface to form a drawing interface frame with dynamic effects.
  • Step 306 outputting a drawing interface frame corresponding to each target image frame.
  • for details of the implementation of steps 303 to 306, reference may be made to the description of the foregoing steps 204 to 206, which are not further described herein.
  • Step 307: Video-encode the non-target image frames in the video and the output drawing interface frames in chronological order to form a video file with the dynamic effect.
  • for example, the non-target image frames, together with drawing interface frame 1 to drawing interface frame n, are video-encoded in chronological order to form a video file carrying the user-set dynamic effect for local decoding and playback by the user-side terminal, or for sharing to the social platform for other users to access.
  • since the dynamic effect has been carried in drawing interface frame 1 to drawing interface frame n, the dynamic effect corresponding to the trajectory of the specific action can be viewed when the video file is played, so that the viewer can quickly learn that the specific action in the video is the part emphasized by the video publisher; this achieves the effect that the user wants the viewer to pay attention to the specific action when sharing the video.
  • the video processing method exemplarily shown in FIG. 2 is applied to a scenario in which a dynamic effect of tracking the contour of a specific object is added to the video.
  • the video processing method provided by the embodiment of the present invention may be used to process a video captured by a terminal in real time, and correspondingly, refer to step 401a:
  • step 401a the user side terminal performs video collection, and presents the collected image frame on the graphic interface.
  • it can be applied to a scene of a user's shooting environment, and for example, can be applied to a scene of a user's self-timer.
  • the video processing method provided by the embodiment of the present invention may also be used to process a video (file) pre-stored locally by the terminal, for example, processing a video pre-acquired by the terminal, a video received from the network side or from another terminal, and correspondingly , see step 401b:
  • step 401b the user side terminal decodes the video and presents the image frame in the video on the graphical interface.
  • step 401a and step 401b are corresponding steps performed according to the type of video (either a real-time captured video or a pre-stored video).
  • Step 402: Perform object recognition on each image frame of the video, and determine the image frames in which the specific object is identified as the target image frames.
  • Step 403: When adding a dynamic effect following the contour of the specific object in the video, determine the attribute of the effect element corresponding to each target image frame, and determine the coordinates of the effect element drawn in the drawing interface based on the position of the specific object in each target image frame.
  • Step 404 rendering an effect element in the drawing interface based on the attribute of the effect element and the coordinates of the effect element.
  • Step 405: Fill the drawing interface with the target image frame as the background of the drawing interface to form a drawing interface frame with the dynamic special effect.
  • Step 406 Output a drawing interface frame corresponding to each target image frame.
  • in an optional embodiment, the user-side terminal outputs the drawing interface frames in real time, so that the user can view in a timely manner the dynamic effect set in the video (for example, a pre-stored video, or the video currently being captured by the user-side terminal in the current collection environment).
  • Step 407 Video-encode the non-target image frame in the video and the output drawing interface frames in a chronological order to form a video file with dynamic special effects.
  • for example, the non-target image frames, together with drawing interface frame 1 to drawing interface frame n, are video-encoded in chronological order to form a video file carrying the user-set dynamic effect for local decoding and playback by the user-side terminal, or for sharing to the social platform for other users to access.
  • since the dynamic effect has been carried in drawing interface frame 1 to drawing interface frame n, the dynamic effect tracking the outer contour of the specific object can be viewed when the video file is played, so that the viewer can quickly learn that the object with the dynamic effect and its motion in the picture are the part emphasized by the video publisher; this enables the viewer to focus on the specific object when the video is shared.
  • the basic unit of the particle system tool for constructing a light painting effect is also called a particle.
  • the user can set the trajectory of the drawn light painting effect at a certain position in the image frame corresponding to a certain point in time in the video; that is, the trajectory of the light painting effect drawn by the user is bound to a specific position in the video and is also bound to the image frame corresponding to that point in time of the video.
  • the effects include:
  • Scenario 3: a combination of the light painting effect and recognition of specific actions in the video.
  • for example, the position of the user's hand as it moves can be automatically recognized in each image frame of the video, and the positions of the hand in the image frames form a light painting effect that characterizes the hand's motion trajectory.
  • for another example, the trajectory of the light painting effect added by the user is combined with an object in the current image frame of the video; then, if the object moves in subsequent image frames of the video, the trajectory of the light painting effect also moves correspondingly with the object in the view.
  • the implementation of the light painting effect mainly includes four parts: template protocol parsing, input processing, particle effect rendering, and video synthesis, which are described below with reference to FIG. 8I and FIG. 8J.
  • the light painting effect is essentially composed of a large number of repeated textures of individual particle elements.
  • Each particle element is drawn at different positions on the screen in different sizes, colors, and directions of rotation to form an overall light painting effect.
  • the attributes supported by a single particle element are as follows: launch angle; initial velocity (in the x- and y-axis directions); gravitational acceleration (in the x- and y-axis directions); centripetal force/centrifugal force; tangential acceleration; rotation speed; rotation acceleration; initial size; initial color; end color; color mixing method; life cycle; and maximum particle number.
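For reference, the attribute list above can be written out as a data structure; this is only a sketch of how such a particle template could be represented, and the field names are illustrative, not a published schema.

```typescript
interface ParticleTemplate {
  launchAngle: number;
  initialVelocity: { x: number; y: number };
  gravity: { x: number; y: number };
  radialAcceleration: number;      // centripetal (negative) or centrifugal (positive) force
  tangentialAcceleration: number;
  rotationSpeed: number;
  rotationAcceleration: number;
  initialSize: number;
  initialColor: [number, number, number, number];   // RGBA
  endColor: [number, number, number, number];
  blendMode: string;               // color mixing method, e.g. "lighter"
  lifeCycleMs: number;
  maxParticles: number;
}
```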
  • when the light painting effect is formed in each image frame, the attributes (such as size, color, rotation direction, and coordinates) and positions of the particles (elements) that need to be drawn in an empty canvas to form the light painting effect are calculated, and the particles corresponding to successive image frames are presented in the corresponding image frames to achieve the effect of particle motion.
  • the principle of the light painting effect implemented in the video is that the particle emitter provided by the particle system (a tool provided by the particle system for the user to draw particles) follows the movement of the input coordinates, and the trajectory of the light painting effect is formed as the particle emitter's own coordinates move accordingly.
  • the input here may come from several different input sources:
  • a position set by the user in the video, for example, coordinates detected as the finger slides on the display interface presenting the video.
  • the position of the particle emitter in the canvas is adjusted according to the canvas coordinates, and the particles are drawn in the canvas through the particle emitter.
  • rendering mainly uses the OpenGL image manipulation interface provided by the particle system; for example, if the video is divided into 30 frames per second, each frame is processed as follows: empty the canvas; pass in the particle element texture; pass in the calculated particle element vertex coordinates and color-related attributes; set the color mixing mode specified by the template; and call the drawing interface to draw the particles in the canvas.
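A minimal per-frame rendering sketch following the sequence listed above (empty the canvas, pass the per-particle data, set the blend mode from the template, draw). The patent describes this with the particle system's OpenGL interface; the 2D canvas API is used here only to illustrate the same sequence, and the names are assumptions.

```typescript
interface DrawnParticle {
  x: number; y: number;   // computed vertex position in canvas pixels
  size: number;
  color: string;          // color computed for this frame from the template
}

function renderParticleFrame(
  ctx: CanvasRenderingContext2D,
  particles: DrawnParticle[],
  blendMode: GlobalCompositeOperation   // color mixing mode specified by the template
): void {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);  // 1. empty the canvas
  ctx.globalCompositeOperation = blendMode;                  // 2. set the blend mode
  for (const p of particles) {                               // 3. draw each particle element
    ctx.fillStyle = p.color;
    ctx.beginPath();
    ctx.arc(p.x, p.y, p.size, 0, Math.PI * 2);
    ctx.fill();
  }
}
```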
  • the corresponding image frame is filled as a canvas background into a canvas that has been drawn with particles for the corresponding image frame, thereby realizing the effect of combining the particles of the light painting effect with the corresponding image frame.
  • the composite canvas frame is encoded and saved as a video file.
  • Subsequent playback can be performed on the local terminal on the user side, or uploaded to the social platform for sharing to other users.
  • referring to FIG. 9, an optional functional structure of the video processing device 20 includes:
  • the first determining portion 21 is configured to determine a target image frame corresponding to the dynamic effect to be added in the video
  • the second determining portion 22 is configured to determine an attribute of the corresponding effect element in the target image frame of the dynamic effect, and a coordinate of the effect element;
  • the rendering portion 23 is configured to render the effect element in the drawing interface based on the attribute of the effect element and the coordinates of the effect element;
  • the compositing portion 24 is configured to fill the drawing interface with the target image frame as the background of the drawing interface to form a drawing interface frame with the dynamic special effect;
  • the output portion 25 is configured to output a drawing interface frame corresponding to each target image frame.
  • the first determining portion 21 is further configured to determine that the image frame of the corresponding time period in the video is the target image frame based on a time period corresponding to the user operation on the time axis of the video.
  • the first determining portion 21 is further configured to perform feature recognition on each image frame of the video, and determine the identified image frame having the feature of the specific action as the target image frame.
  • the first determining portion 21 is further configured to perform object recognition in each image frame of the video, and determine the identified image frame having the specific object as the target image frame.
  • the second determining portion 22 is further configured to: when a dynamic special effect following the trajectory of the user's touch operation is added to the video, determine the attribute of the effect element corresponding to each target image frame, and determine the coordinates of the effect element drawn in the drawing interface based on the position of the user's touch operation in the video.
  • the second determining portion 22 is further configured to: when a dynamic effect following the trajectory of a specific action is added to the video, determine the attribute of the effect element corresponding to each target image frame, and determine the coordinates of the effect element drawn in the drawing interface based on the position of the specific action in each target image frame.
  • the second determining portion 22 is further configured to: when a dynamic effect following the contour of a specific object is added to the video, determine the attribute of the effect element corresponding to each target image frame, and determine the coordinates of the effect element drawn in the drawing interface based on the position of the specific object in each target image frame.
  • the second determining portion 22 is further configured to determine the visual effect corresponding to each target image frame based on the dynamic special effect, and determine the effect elements and corresponding attributes that need to be rendered in the corresponding target image frame when the corresponding visual effect is formed in each target image frame.
  • the rendering portion 23 is further configured to normalize the coordinates of the effect element based on the size of the target image frame to form normalized coordinates, and render the effect element with the corresponding attribute at the position of the normalized coordinates in an empty drawing interface with a black background.
  • the compositing portion 24 is further configured to video encode the non-target image frames in the video and the output drawing interface frames in chronological order to form a video file having dynamic effects.
  • the embodiment of the present invention further provides a video processing device, including a memory storing a computer program and a processor, where the processor is configured to perform the following operations when executing the computer program:
  • determining a target image frame corresponding to the dynamic effect to be added in the video; determining an attribute of the effect element corresponding to the dynamic effect in each of the target image frames, and a coordinate of the effect element; rendering the effect element in a drawing interface based on the attribute of the effect element and the coordinate of the effect element; filling the drawing interface with the target image frame as a background of the drawing interface to form a drawing interface frame having the dynamic special effect; and outputting a drawing interface frame corresponding to each of the target image frames.
  • the processor is further configured to perform, when executing the computer program: determining, based on a time period corresponding to a user operation on the time axis of the video, that the image frames in the corresponding time period of the video are the target image frames.
  • the processor is further configured to perform, when executing the computer program: performing feature recognition on each image frame of the video, and determining the identified image frames having the feature of the specific action as the target image frames.
  • the processor is further configured to perform, when executing the computer program: performing object recognition on each image frame of the video, and determining the identified image frames having the specific object as the target image frames.
  • the processor is further configured to perform, when executing the computer program: when a dynamic special effect following the trajectory of the user's touch operation is added to the video, determining the attribute of the effect element corresponding to each target image frame, and determining the coordinates of the effect element drawn in the drawing interface based on the position of the touch operation in the video.
  • the processor is further configured to perform, when executing the computer program: when a dynamic effect following the trajectory of a specific action is added to the video, determining the attribute of the effect element corresponding to each target image frame, and determining the coordinates of the effect element drawn in the drawing interface based on the position of the specific action in each target image frame.
  • the processor is further configured to perform, when executing the computer program: when a dynamic effect following the contour of a specific object is added to the video, determining the attribute of the effect element corresponding to each target image frame, and determining the coordinates of the effect element drawn in the drawing interface based on the position of the specific object in each target image frame.
  • the processor is further configured to perform, when executing the computer program: determining the visual effect corresponding to each of the target image frames based on the dynamic special effect, and determining the effect elements and corresponding attributes that need to be rendered in the corresponding target image frames when the corresponding visual effect is formed in each of the target image frames.
  • the processor is further configured to perform, when executing the computer program: normalizing the coordinates of the effect element based on the size of the target image frame to form normalized coordinates, and rendering the effect element with the corresponding attribute at the position of the normalized coordinates in an empty drawing interface with a black background.
  • the processor is further configured to perform, when executing the computer program: video-encoding the non-target image frames in the video and the outputted drawing interface frames in chronological order to form a video file having the dynamic effects.
  • an embodiment of the present invention further provides a storage medium storing a computer program, where the computer program, when run by a processor, performs: determining a target image frame corresponding to the dynamic effect to be added in the video; determining an attribute of the effect element corresponding to the dynamic effect in each of the target image frames, and a coordinate of the effect element; rendering the effect element in a drawing interface based on the attribute of the effect element and the coordinate of the effect element; filling the drawing interface with the target image frame as a background of the drawing interface to form a drawing interface frame having the dynamic special effect; and outputting a drawing interface frame corresponding to each of the target image frames.
  • the computer program is further configured to, when run by the processor, perform: determining, based on a time period corresponding to a user operation on the time axis of the video, that the image frames in the corresponding time period of the video are the target image frames.
  • the computer program is further configured to, when run by the processor, perform: performing feature recognition on each image frame of the video, and determining the identified image frames having the feature of the specific action as the target image frames.
  • the computer program is further configured to, when run by the processor, perform: performing object recognition on each image frame of the video, and determining the identified image frames having the specific object as the target image frames.
  • the computer program is further configured to, when run by the processor, perform: when a dynamic special effect following the trajectory of the user's touch operation is added to the video, determining the attribute of the effect element corresponding to each target image frame, and determining the coordinates of the effect element drawn in the drawing interface based on the position of the touch operation in the video.
  • the computer program is further configured to, when run by the processor, perform: when a dynamic effect following the trajectory of a specific action is added to the video, determining the attribute of the effect element corresponding to each target image frame, and determining the coordinates of the effect element drawn in the drawing interface based on the position of the specific action in each target image frame.
  • the computer program is further configured to, when run by the processor, perform: when a dynamic effect following the contour of a specific object is added to the video, determining the attribute of the effect element corresponding to each target image frame, and determining the coordinates of the effect element drawn in the drawing interface based on the position of the specific object in each target image frame.
  • the computer program is further configured to, when run by the processor, perform: determining the visual effect corresponding to each of the target image frames based on the dynamic special effect, and determining the effect elements and corresponding attributes that need to be rendered in the corresponding target image frames when the corresponding visual effect is formed in each of the target image frames.
  • the computer program is further configured to, when run by the processor, perform: normalizing the coordinates of the effect element based on the size of the target image frame to form normalized coordinates, and rendering the effect element with the corresponding attribute at the position of the normalized coordinates in an empty drawing interface with a black background.
  • the computer program is further configured to, when run by the processor, perform: video-encoding the non-target image frames in the video and the outputted drawing interface frames in chronological order to form a video file having the dynamic effects.
  • the foregoing storage medium includes: a mobile storage device, a random access memory (RAM), a read-only memory (ROM), a magnetic disk, an optical disc, or any other medium that can store program code.
  • the above-described integrated portions of the present invention may also be stored in a computer readable storage medium if implemented in the form of a software functional module and sold or used as a standalone product.
  • based on this understanding, the technical solutions of the embodiments of the present invention may, in essence or in the part contributing to the related art, be embodied in the form of a software product, which is stored in a storage medium and includes a plurality of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a RAM, a ROM, a magnetic disk, or an optical disk.
  • in summary, the embodiments of the present invention determine the target image frames corresponding to the dynamic effect to be added in the video; determine the attributes of the corresponding effect elements in the target image frames and the coordinates of the effect elements; render the effect elements in the drawing interface based on the attributes and coordinates of the effect elements; fill the drawing interface with the target image frames as the background of the drawing interface to form drawing interface frames with the dynamic effect; and output the drawing interface frame corresponding to each target image frame.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

本发明实施例公开了一种视频处理方法、视频处理装置及存储介质;方法包括:确定待添加的动态特效在视频中对应的目标图像帧;确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标;基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素;将所述目标图像帧作为所述绘图界面的背景的方式填充到所述绘图界面中,形成具有动态特效的绘图界面帧;输出针对各所述目标图像帧对应形成的绘图界面帧。实施本发明,能够以简洁高效的方式为视频中的片段或者被拍摄对象添加动态特效以增强显著程度。

Description

视频处理方法、视频处理装置及存储介质
相关申请的交叉引用
本申请基于申请号为201610903697.2、申请日为2016年10月17日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此以引入方式并入本申请。
技术领域
本发明涉及视频技术,尤其涉及一种视频处理方法、视频处理装置及存储介质。
背景技术
随着互联网行业特别是移动互联网的发展,通过互联网进行视频分享成为信息传播的新形态而得到普遍应用。
例如,在一个典型的应用场景中,用户可以通过手机等移动终端拍摄视频,将视频在所登录的社交平台进行分享(或者也可以分享预先拍摄的视频),社交平台的登录用户或者访问用户可以观看用户分享的视频,并使用社交平台的互动功能进行评论、交流和再次分享等,进一步增强视频分享的效果。
在上述视频分享的场景中,存在难以突出视频中的某一片段或者视频中某一对象的显著程度以引起观看者注意的问题,导致无法达到视频分享的预期效果。
例如,在用户录制视频或者需要在互联网中分享视频时,为了强调或者表现视频中的某一片段或者拍摄的某一个对象,往往需要针对视频中的相应片段或者相应对象添加特效,以达到引起观看者的注意的效果。但是, 相关技术仅仅支持用户在视频的各帧图像上绘制简单的图层,以实现在视频中“涂鸦”的效果,相关技术在视频的一帧图像上绘制文字涂鸦的一个显示效果示意图如图1所示,在视频的一帧或者连续的至少两帧图像中可以实现使用文字“啦啦”进行涂鸦的效果。
不难看出,对于在视频的至少两帧图像中绘制图层的方式,如果用户没有从头开始观看视频,例如在出现文字“啦啦”涂鸦时才观看视频,则无从得知视频拍摄者期望突出的视频的片段或对象。
发明内容
为解决上述技术问题,本发明实施例期望提供一种视频处理方法、视频处理装置及存储介质,能够增强视频中的片段或者被拍摄对象的辨识度。
本发明实施例的技术方案是这样实现的:
第一方面,本发明实施例提供一种视频处理方法,包括:
确定待添加的动态特效在视频中对应的目标图像帧;
确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标;
基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素;
将所述目标图像帧作为所述绘图界面的背景的方式填充到所述绘图界面中,形成具有动态特效的绘图界面帧;
输出针对各所述目标图像帧对应形成的绘图界面帧。
第二方面,本发明实施例提供一种视频处理装置,包括:
第一确定部分,配置为确定待添加的动态特效在视频中对应的目标图像帧;
第二确定部分,配置为确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标;
渲染部分,配置为基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素;
合成部分,配置为将所述目标图像帧作为所述绘图界面的背景的方式填充到所述绘图界面中,形成具有动态特效的绘图界面帧;
输出部分,配置为输出针对各所述目标图像帧对应形成的绘图界面帧。
第三方面,本发明实施例提供一种视频处理装置,包括:存储器和处理器,存储器中存储有可执行指令,用于引起处理器执行以下的操作:
确定待添加的动态特效在视频中对应的目标图像帧;
确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标;
基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素;
将所述目标图像帧作为所述绘图界面的背景的方式填充到所述绘图界面中,形成具有动态特效的绘图界面帧;
输出针对各所述目标图像帧对应形成的绘图界面帧。
第四方面,本发明实施例提供一种存储介质,存储有可执行指令,用于执行本发明实施例提供的视频处理方法。
第五方面,本发明实施例提供一种视频处理方法,所述方法由终端执行,所述终端包括有一个或多个处理器以及存储器,以及一个或一个以上的程序,其中,所述一个或一个以上的程序存储于存储器中,所述程序可以包括一个或一个以上的每一个对应于一组指令的单元,所述一个或多个处理器被配置为执行指令;所述方法包括:
确定待添加的动态特效在视频中对应的目标图像帧;
确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标;
基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素;
将所述目标图像帧作为所述绘图界面的背景的方式填充到所述绘图界面中,形成具有动态特效的绘图界面帧;
输出针对各所述目标图像帧对应形成的绘图界面帧。
本发明实施例具有以下有益效果:
1)提供通过在视频中确定需要形成动态特效的目标视频帧的方式,可以轻易地在视频中确定与视频中的一个片段或与某个特定对象对应的视频帧设定为目标视频帧;实现了在视频中根据需求定制化动态特效的技术效果;
2)对于视频的受众来说,不论是从视频的哪个时间点开始观看,由于动态特效的醒目程度远高于在视频中绘制图层所形成的静态效果,增强了辨识度,因而能够快速了解视频的发布者在视频中所需要突出的片段或者对象,保证视频分享的预期效果。
附图说明
图1是相关技术在视频中形成特效的一个可选的实现示意图;
图2是本发明实施例中视频处理方法一个可选的流程示意图;
图3是本发明实施例中在用户侧和网络侧协同实施视频处理方法一个可选的流程示意图;
图4是本发明实施例中视频处理装置的可选的软硬件结构示意图;
图5A是本发明实施例中视频处理方法一个可选的流程示意图;
图5B是本发明实施例中动态特效的不同阶段映射到视频的图像帧的示意图;
图5C是本发明实施例中针对目标图像帧与所映射的相应阶段的动态特效绘制特效元素的示意图;
图5D是本发明实施例中将目标图像帧与绘制有特效元素的绘制界面进行合成的示意图;
图6是本发明实施例中视频处理方法一个可选的流程示意图;
图7是本发明实施例中视频处理方法一个可选的流程示意图;
图8A是本发明实施例中光绘特效的一个可选的显示示意图;
图8B是本发明实施例中光绘特效的一个可选的显示示意图;
图8C是本发明实施例中光绘特效的一个可选的显示示意图;
图8D是本发明实施例中光绘特效的一个可选的显示示意图;
图8E是本发明实施例中光绘特效的一个可选的显示示意图;
图8F是本发明实施例中光绘特效的一个可选的显示示意图;
图8G是本发明实施例中光绘特效的一个可选的显示示意图;
图8H是本发明实施例中光绘特效的一个可选的显示示意图;
图8I是本发明实施例中基于粒子系统绘制特效元素的一个可选的处理示意图;
图8J是本发明实施例中在输入的视频中形成光绘特效的一个可选的流程示意图;
图9是本发明实施例中视频处理装置的一个可选的结构示意图。
具体实施方式
以下结合附图及实施例,对本发明进行进一步详细说明。应当理解,此处所提供的实施例仅仅用以解释本发明,并不用于限定本发明。另外,以下所提供的实施例是用于实施本发明的部分实施例,而非提供实施本发明的全部实施例,在本领域技术人员不付出创造性劳动的前提下,对以下实施例的技术方案进行重组所得的实施例、以及基于对发明所实施的其他实施例均属于本发明的保护范围。
需要说明的是,在本发明实施例中,术语“包括”、“包含”或者其任 何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的方法或者装置不仅包括所明确记载的要素,而且还包括没有明确列出的其他要素,或者是还包括为实施方法或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的方法或者装置中还存在另外的相关要素(例如方法中的步骤或者装置中的部分,这里的部分可以是部分电路、部分处理器、部分程序或软件等等;也可以是单元,可以是模块化的或是非模块化的)。
例如,本发明实施例提供的视频处理方法包含了一系列的步骤,但是本发明实施例提供的视频处理方法不限于所记载的步骤,同样地,本发明实施例提供的视频处理装置包括了一系列部分,但是本发明实施例提供的视频处理装置不限于包括所明确记载的部分,还可以包括为获取相关信息、或基于信息进行处理时所需要设置的部分。
对本发明实施例进行进一步详细说明之前,对本发明实施例中涉及的名词和术语进行说明,本发明实施例中涉及的名词和术语适用于如下的解释。
1)动态特效,是指在视频中添加的动态的视觉效果,下文中,动态特效可以为光绘特效,即在视频中以光源移动的方式绘制形成的动态的、具有特定属性的视觉效果,如光源移动的轨迹或者光源形成的各种图案。
2)特效元素,构成动态特效的基本视觉单位。以光绘特效为例,在采用粒子系统如MAYA粒子系统或3D MAX粒子系统来形成光绘特效时,构成光绘特效的基本单位也称为粒子。
3)属性,用于描述特效元素构造形成动态特效的方式,例如,从尺寸、颜色和数量等方面描述动态特效;又例如,从速度、加速度和生命周期等方面描述动态特效。
4)图像帧,构成视频的基本单位,一个图像帧是静态的图像,连续采集的图像帧在渲染时形成动态的效果。
5)视觉效果,动态特效添加到视频中时,由视频中一系列的图像帧(也称为目标图像帧)承载,动态特效可以针对目标图像帧而分解为一系列的静态的视觉效果,一系列连续的静态的视觉效果构成特效的动态变化的效果。视觉效果与目标图像帧对应,目标图像帧具有的视觉效果可以继续分解为特效元素在相应目标图像帧的位置以及属性。
6)绘图界面,也称为画布,用于呈现和动态显示特效元素以及图像帧中的图形元素,一般地,图形元素在绘图界面的呈现和显示的操作使用脚本语言(通常是JavaScript)完成。
参见图2,图2示出的本发明实施例提供的视频处理方法的一个可选的架构示意图,包括:步骤101,确定待添加的动态特效在视频中对应的目标图像帧;步骤102,确定动态特效在各目标图像帧中对应的特效元素的属性、以及特效元素的坐标;步骤103,基于特效元素的属性、以及特效元素的坐标在绘图界面中渲染形成特效元素;步骤104,将目标图像帧作为绘图界面的背景的方式填充到绘图界面中,形成具有动态特效的绘图界面帧;步骤105,输出针对各目标图像帧对应形成的绘图界面帧。
本发明实施例还提供用于执行上述视频处理方法的视频处理装置,视频处理装置可以采用多种方式实施,以下示例性地说明。
例如,视频处理装置可以基于用户侧的终端(例如,智能手机、平板电脑等)以及网络侧的服务器中的硬件资源共同实现,参见图3示出的视频处理方法的可选的流程示意图,在图3中,视频处理装置分布实施在用户侧的终端和网络侧的服务器中,用户侧的终端与网络侧的服务器可以通过各种方式通信,示例性地,如基于码分多址(CDMA,Code Division Multiple Access)、宽带码分多址(WCDMA,Wideband Code Division Multiple Access)等通信制式及其演进制式的蜂窝通信,又例如,基于无线相容性认 证(WiFi,WirelessFidelity)的通信。
在图3中,用户侧的终端和网络侧的服务器通过建立的通信进行数据交互,以协同完成图2示例性示出的步骤101至步骤105,本发明实施例中对终端和服务器执行的步骤不做限定,实际应用中可以根据需求灵活调整。另外,一般地,用户侧的终端进行视频采集。
再例如,视频处理装置可以基于用户侧的终端的硬件资源实现,也即是视频处理装置实施在用户侧的终端中,用户侧的终端执行图2示例性示出的步骤101至步骤105。
又例如,视频处理装置可以基于网络侧的服务器的硬件资源实现,也即是视频处理装置实施在网络侧的服务器中,网络侧的服务器执行图2示例性示出的步骤101至步骤105。
在硬件层面上,与前述视频处理装置的实现方式对应,实现视频处理装置的硬件资源包括如处理器和内存的计算资源、如网络接口的通信资源实现;在软件层面上,视频处理装置可以实施为存储于存储介质中的可执行指令(包括诸如程序、模块之类的计算机可执行指令)。
如上,以视频处理装置基于用户侧终端的硬件资源实现时,参见图4示出的视频处理装置10的一个可选的软硬件结构示意图,视频处理装置10包括硬件层、驱动层、操作系统层和应用层。然而,本领域的技术人员应当理解,图4示出的视频处理装置10的结构仅为示例,并不构成对视频处理装置10结构的限定。例如,视频处理装置10可以根据实施需要设置较图4更多的组件,或者根据实施需要省略设置部分组件。
视频处理装置10的硬件层包括处理器11、输入/输出接口13,存储介质14以及网络接口12,组件可以经系统总线连接通信。
处理器11可以采用中央处理器(CPU,Central Processing Unit)、微处理器(MCU,Microcontroller Unit)、专用集成电路(ASIC,Application Specific  Integrated Circuit)或逻辑可编程门阵列(FPGA,Field-Programmable Gate Array)实现。
输入/输出接口13可以采用如显示屏、触摸屏、扬声器等输入/输出器件实现。
存储介质14可以采用闪存、硬盘、光盘等非易失性存储介质实现,也可以采用双倍率(DDR,Double Data Rate)动态缓存等易失性存储介质实现,其中存储有用以执行上述视频处理方法的可执行指令。
示例性地,存储介质14可以与视频处理装置10的其他组件集中设置,也可以相对于视频处理装置10中的其他组件分布设置。网络接口12向处理器11提供外部数据如异地设置的存储介质14的访问能力,示例性地,网络接口12可以基于近场通信(NFC,Near Field Communication)技术、蓝牙(Bluetooth)技术、紫蜂(ZigBee)技术进行的近距离通信,另外,还可以实现如CDMA、WCDMA等通信制式及其演进制式的通信。
驱动层包括用于供操作系统16识别硬件层并与硬件层各组件通信的中间件15,例如可以为针对硬件层的各组件的驱动程序的集合。
操作系统16,配置为提供面向用户的图形界面,示例性地,包括插件图标、桌面背景和应用图标,操作系统16支持用户经由图形界面对设备的控制。本发明实施例对上述设备的软件环境如操作系统类型、版本不做限定,例如可以是Linux操作系统、UNIX操作系统或其他操作系统。
应用层包括用户侧终端运行的应用,如前所述,当需要实现对拍摄的视频在社交平台分享的功能时,应用层中运行有社交应用17。
下面,以视频处理装置实施在用户侧终端为例,对图2示例性示出的视频处理方法应用于如下的场景进行说明:用户在视频中设定形成动态特效的时间段(例如视频从第5分钟播放至第6分钟时显示动态特效)、以及在视频的画面(图像帧)中设定形成动态特效的位置(例如,设置在视频画面的中心位置呈现动态特效)。
参见图5A示出的视频处理方法的一个可选的流程示意图,本发明实施例提供的视频处理方法可以用于对终端实时采集的视频进行处理,相应地,参见步骤201a:
步骤201a,用户侧终端进行视频采集,并在图形界面顺序呈现采集到的图像帧。
例如,可以适用于用户拍摄环境的场景,用户侧终端运行社交平台的客户端时,触发拍摄并分享的功能,在图形界面中选择使用终端的后置摄像头拍摄环境,在终端屏幕的图形界面中顺序呈现采集到的图像帧。
又例如,可以适用于用户自拍的场景,用户侧终端运行社交平台的客户端时,触发自拍并分享的功能,在图形界面中选择使用终端的前置摄像头拍摄用户以及用户所处环境,在终端屏幕的图形界面中顺序呈现采集到的图像帧。
本发明实施例提供的视频处理方法还可以用于对终端本地预先存储的视频(文件)添加动态特效,例如,对终端预先采集的视频、从网络侧或从其他终端接收的视频进行处理,相应地,参见步骤201b:
步骤201b,用户侧终端解码视频,在图形界面顺序呈现视频中的图像帧。
可以理解地,步骤201a和步骤201b是根据视频的类型(是实时采集的视频还是预先存储的视频)而对应执行的步骤。
步骤202a,确定用户设定的期望在视频中添加的动态特效(也称为待添加特效)。
在一个实施例中,用户侧的终端在图形界面呈现视频中的视频帧的过程中,可以呈现在当前视频中能够添加的候选动态特效的虚拟标识,例如在图形界面中呈现候选的动态特效的序号、名称或者缩略图等,根据用户触发的虚拟标识,确定需要添加的动态特效。
在另一个实施例中,用户侧的终端在图形界面呈现视频中的视频帧的过程中,可以呈现在当前视频中能够形成的动态特效的虚拟开关,例如在图形界面中呈现动态特效的名称或者缩略图等。当用户触发该虚拟开关时,将预先设定的默认动态特效确定为需要添加的动态特效。
步骤202b,确定用户在视频中设定的待添加的动态特效的位置。
在一个实施例中,识别出用户在当前呈现的图像帧(画面)中设定的位置,将识别出的位置作为需要添加动态特效的位置。
例如,预设在帧图像中以特定触控手势(如三点触控、双击等)作为呈现动态特效的触发操作,当在图形界面呈现的帧图像中检测到用户实施的该操作时,即将检测到相应操作的位置作为需要添加的动态特效的位置。例如,当动态特效的面积较大时可以作为动态特效的中心位置,又例如作为动态特效的开始出现时最初出现的位置,或者作为动态特效消失时最后出现的位置。
再例如,在图形界面显示最新采集的图像帧时,或者显示从视频中最新解码出的图像帧时,识别出用户拖动图形界面中动态特效的虚拟标识并在图像帧中释放的位置,将识别出的位置作为需要在视频中添加的动态特效的位置。例如,作为形成动态特效的中心位置,又例如作为形成的动态特效的开始位置或结束位置。
在另一个实施例中,跟踪识别用户在视频中操作的轨迹,并确定用户操作的轨迹在各图像帧经过的位置为在相应图像帧添加动态特效的位置。
例如,在图形界面显示采集的图像帧时,或者显示从视频中解码出的图像帧时,检测出用户指尖与图形界面的触点,并识别出触点的移动轨迹,假设在图像帧1中触点位于位置1,在图像帧2中触点位于位置2,以此类推,在图像帧n中触点位于位置n,则位置1至位置n形成连续的移动轨迹, 并且,图像帧i(1≤i≤n)的位置i为用于添加动态特效的位置。
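As a concrete illustration of the trajectory tracking described above, the sketch below (Python, not part of the patent; the class and method names are this example's own) records one touch coordinate per displayed frame, so that frame i later receives its effect element at position i:

```python
from typing import Dict, Optional, Tuple

class TouchTrajectory:
    """Accumulates the user's touch positions, keyed by the frame shown at that moment."""

    def __init__(self) -> None:
        self.positions: Dict[int, Tuple[float, float]] = {}

    def on_touch(self, frame_index: int, x: float, y: float) -> None:
        # Position i is recorded while image frame i is displayed.
        self.positions[frame_index] = (x, y)

    def effect_position(self, frame_index: int) -> Optional[Tuple[float, float]]:
        # Frames without a recorded touch stay non-target frames (no effect drawn).
        return self.positions.get(frame_index)

trajectory = TouchTrajectory()
trajectory.on_touch(1, 120.0, 340.0)   # touch point while frame 1 is on screen
trajectory.on_touch(2, 135.0, 352.0)   # touch point while frame 2 is on screen
print(trajectory.effect_position(2))   # (135.0, 352.0)
```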
步骤202c,确定用户设定的在视频中待添加动态特效的时间段。
动态特效的时间段是指动态特效被添加到视频中时,动态特效的生命周期在视频时间轴上所对应的时间段。
在一个实施例中,用户设定的动态特效可以具有预定的生命周期(如10秒,即动态特效在呈现10秒后消失),相应地,检测到用户触发形成动态特效的操作时即开始计时,将直至生命周期到达的时间段作为需要在视频中添加动态特效的时间段。
例如,检测到前述的特定触控手势、或拖动动态特效的虚拟标识并在图像帧的某一位置释放的操作时,即对动态特效的生命周期开始计时,直至生命周期到达的时间段内,在该时段内的图像帧的同一位置(也即步骤202b中确定的待添加的动态特效的位置)添加用户所设定的动态特效。
在另一个实施例中,跟踪识别用户在视频中操作的轨迹,并围绕用户操作的轨迹形成动态特效,相应地,添加动态特效的时间段为在视频的播放过程中从检测到用户操作开始至用户操作释放的时间段。
例如,在图形界面显示采集的图像帧时,或者显示从视频中解码出的图像帧时,检测出用户指尖与图形界面的触点,并确定开始识别出触点(假设此时视频播放至第5分钟)以及识别出触点释放时对应的时刻(假设此时视频播放至第6分钟),则视频中第5-6分钟的播放时段为需要添加动态特效的时间段。
需要指出的是,动态特效的时间段确定时,即可基于视频的时间轴确定位于时间段的图像帧为用于承载动态特效的目标图像帧,例如,接续前述示例,视频中第5-6分钟对应的图像帧用于承载动态特效。
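A minimal sketch of that timeline mapping, assuming a constant frame rate (the 30 fps figure is only an assumption for the example):

```python
def target_frame_indices(start_s: float, end_s: float, fps: float) -> range:
    """Indices of the image frames whose timestamps fall inside [start_s, end_s]."""
    first = int(start_s * fps)
    last = int(end_s * fps)
    return range(first, last + 1)

# Frames covering playback minutes 5-6 of the video at an assumed 30 frames per second.
frames = target_frame_indices(5 * 60, 6 * 60, fps=30)
print(frames.start, frames.stop - 1)   # 9000 10800
```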
可以理解地,步骤202a至步骤202c所记载的确定待添加的动态特效、以及对应的位置和时间段的处理不分先后,上述步骤的记载顺序不应视为对步骤202a至步骤202c的执行顺序的限定。
步骤203,确定当在视频中添加用户所设定的动态特效时,各目标图像帧中需要形成的特效元素的属性、以及需要形成的特效元素的坐标。
在一个实施例中,根据视频中待添加的动态特效,确定由目标图像帧承载时各目标图像帧需要具有的视觉效果,视觉效果是对构成动态特效的属性(诸如特效元素的数量、初始速度、重力加速度、向心力、离心力、切向加速度、自转速度、自转加速度、初始大小、结束大小、初始颜色、结束颜色、颜色混合方法、生命周期和特效元素数量等)在不同阶段的“快照”,然后基于视觉效果进行分析,继续确定在各所述目标图像帧形成相应的视觉效果时,在相应的目标图像帧中需要渲染形成的特效元素的静态的属性(如特效元素的数量、大小、颜色)以及特效元素在目标图像帧中的位置。
例如,如图5B所示,通过将动态特效的不同阶段(包括阶段1至阶段n)的视觉效果属性映射到各目标图像帧(目标图像帧1至目标图像帧n)的方式,计算各目标图像帧中形成相应的视觉效果时需要渲染形成的特效元素的属性以及位置。
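One simple way to realise this stage-to-frame mapping is linear interpolation between the configured start and end attributes of the effect, producing a static "snapshot" per target frame; this is only an illustrative sketch, and the attribute names are this example's own rather than the patent's:

```python
def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def effect_snapshot(frame_k: int, n_frames: int,
                    start_size: float, end_size: float,
                    start_color: tuple, end_color: tuple) -> dict:
    """Static attributes of the effect element to render in the k-th target frame (0-based)."""
    t = frame_k / max(n_frames - 1, 1)          # progress of the effect through its stages, 0..1
    return {
        "size": lerp(start_size, end_size, t),
        "color": tuple(lerp(c0, c1, t) for c0, c1 in zip(start_color, end_color)),
    }

# Attributes for target frame 3 of 10, for an element that grows and shifts colour.
print(effect_snapshot(3, 10, start_size=4.0, end_size=16.0,
                      start_color=(255, 255, 255), end_color=(255, 128, 0)))
```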
步骤204,基于特效元素的属性、以及特效元素的坐标在绘图界面中渲染形成特效元素。
在一个实施例中,对于各目标图像帧,基于针对相应目标图像帧计算出的特效元素的位置以及相应的属性,在黑色背景的空绘图界面(其中绘图界面的尺寸与图像帧的尺寸一致)中需要具有特效元素的坐标的位置,根据计算得到的该位置的特效元素具有的属性渲染形成具有相应属性的特效元素。
例如,如图5C所示,动态特效的阶段1映射到目标图像帧1,根据阶段1计算出的在目标图像帧1中需要形成的特效元素的属性以及位置,在 空绘图界面对应特效元素的位置,根据计算得到的各位置的特效元素具有的属性在相应位置渲染形成具有相应属性的特效元素,从而得到可用于与目标图像帧1合并的绘图界面。
可以理解地,由于需要针对各目标图像帧在空绘图界面中渲染相应目标图像帧中应当具有的特效元素,因此,在空绘图界面中渲染一目标图像帧对应的特效元素之后,如还需渲染另一图像帧对应的特效元素,需要首先对绘图界面进行清空。
例如,仍如图5C所示,当在绘图界面中渲染形成对应目标图像帧1的特效元素之后,如果还需要渲染形成对应目标图像帧2的特效元素,则需要对绘图界面进行清空,然后在空绘图界面对应目标图像帧2的特效元素的位置,根据计算得到的各位置的特效元素具有的属性在相应位置渲染形成具有相应属性的特效元素,从而得到可用于与目标图像帧2合并的绘图界面。
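The clear-then-redraw cycle just described can be sketched as follows; numpy stands in for the real rendering backend, and "rendering" an effect element is reduced to stamping a coloured square, which is an assumption of this example rather than the actual particle renderer:

```python
import numpy as np

def render_canvas(height: int, width: int, particles: list) -> np.ndarray:
    """particles: iterable of dicts with 'x', 'y', 'size' and 'color' (RGB 0-255)."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)   # fresh, cleared black background
    for p in particles:
        half = max(int(p["size"]) // 2, 1)
        x, y = int(p["x"]), int(p["y"])
        y0, y1 = max(y - half, 0), min(y + half, height)
        x0, x1 = max(x - half, 0), min(x + half, width)
        canvas[y0:y1, x0:x1] = p["color"]                    # draw one effect element
    return canvas

# A new canvas is produced per target frame, so frame 2's particles never
# inherit residue from frame 1.
canvas_1 = render_canvas(720, 1280, [{"x": 640, "y": 360, "size": 8, "color": (255, 200, 0)}])
canvas_2 = render_canvas(720, 1280, [{"x": 660, "y": 350, "size": 10, "color": (255, 200, 0)}])
```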
在另一个实施例中,当目标图像帧与绘图界面的大小不一致时,需要基于目标图像帧的尺寸对特效元素的坐标进行归一化处理形成归一化坐标,在黑色背景的空绘图界面中对应特效元素的归一化坐标的位置,渲染形成具有相应属性的特效元素,从而避免渲染形成的特效元素的位置与用户在视频中设定的形成动态特效的位置不一致的问题,确保在视频中形成的动态特效的位置的精确度。
步骤205,将目标图像帧作为绘图界面的背景的方式填充到绘图界面中,形成具有动态特效的绘图界面帧。
步骤206,输出针对各目标图像帧对应形成的绘图界面帧。
在一个实施例中,如图5D所示,将目标图像帧1作为背景,填充到针对目标图像帧1渲染形成的绘图界面(包括有针对目标图像帧1渲染形成的特效元素)中形成绘图界面帧1,利用同样的方式可以形成绘图界面帧2至绘图界面帧n。
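Because the canvas background is black and the effect elements are bright, one simple compositing rule, assumed here purely for illustration, is to keep the brighter of the two layers per pixel when the target frame is filled in as background:

```python
import numpy as np

def composite(frame: np.ndarray, canvas: np.ndarray) -> np.ndarray:
    """Drawing-interface frame: the video frame as background, effect elements on top."""
    assert frame.shape == canvas.shape, "the canvas is sized to match the image frame"
    return np.maximum(frame, canvas)

# drawing_interface_frame_1 = composite(target_frame_1, canvas_1)
```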
在一个典型的应用场景中,用户侧终端实时输出绘图界面帧,从而用户可以及时查看在视频(例如,预先存储的视频,或者,用户侧终端当前正在采集环境所实时形成的视频)中设定的动态特效。
在一个实施例中,还可以执行以下步骤:
步骤207,按照时间先后顺序将视频中的非目标图像帧、以及输出的各绘图界面帧进行视频编码形成具有动态特效的视频文件。
仍然以图5D为例,将目标图像帧1、绘图界面帧1至绘图界面帧2、以及非目标图像帧2依次进行视频编码,形成具有用户设定的动态特效的视频文件,供用户侧终端本地解码播放,或者,分享到社交平台供其他用户访问,由于动态特效已经承载于绘图界面帧1至绘图界面帧n,因此播放视频文件时可以观看到用户在特定的视频片段的特定位置设定的动态特效,从而可以迅速了解到该视频片段(以及视频中呈现动态特效的位置)是视频发布者着重强调的部分,实现用户分享视频时希望观看者关注特定视频片段或视频中特定位置的效果。
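A sketch of that re-encoding step, writing frames back out in their original order and substituting each target frame with its drawing interface frame; OpenCV's VideoWriter is used only as an example encoder, and the file name, codec and frame rate are assumptions:

```python
import cv2

def encode_with_effect(frames, composited, path="output_with_effect.mp4", fps=30.0):
    """frames: list of BGR ndarrays in order; composited: dict frame_index -> ndarray."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for i, frame in enumerate(frames):
        writer.write(composited.get(i, frame))   # non-target frames pass through unchanged
    writer.release()
```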
下面,以视频处理装置实施在用户侧终端为例,对图2示例性示出的视频处理方法应用于如下的场景进行说明:用户在视频中实施特定动作,跟踪该特定动作的轨迹自动在视频中添加对应轨迹的动态特效。
参见图6示出的视频处理方法的一个可选的流程示意图,本发明实施例提供的视频处理方法可以用于对终端实时采集的视频进行处理,相应地,参见步骤301a:
步骤301a,用户侧终端进行视频采集,并在图形界面顺序呈现采集到的图像帧。
例如,可以适用于用户拍摄环境的场景,又例如,可以适用于用户自拍的场景。
本发明实施例提供的视频处理方法还可以用于对终端本地预先存储的视频(文件)进行处理,例如,对终端预先采集的视频、从网络侧或从其他终端接收的视频进行处理,相应地,参见步骤301b:
步骤301b,用户侧终端解码视频,在图形界面顺序呈现视频中的图像帧。
可以理解地,步骤301a和步骤301b是根据视频的类型(是实时采集的视频还是预先存储的视频)而对应执行的步骤。
步骤302,对视频的各图像帧中进行特征识别,将识别出的具有特定动作的特征的图像帧确定为目标图像帧。
在一个实施例中,在图形界面播放采集的图像帧,或者在图形界面播放解码视频得到的图像帧时,从在图形界面呈现的图像帧中进行特征提取,将所提取的特征与预设的动作特征(如面部运动、手指运动)比对,确定具有预设动作特征的图像帧作为需要添加动态特效的目标图像帧。
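The patent does not prescribe a particular recogniser; as a stand-in, the sketch below flags a frame as a target frame when plain frame differencing suggests enough motion. A real implementation would match extracted features against the preset action features (face movement, finger movement, and so on), and the threshold here is an arbitrary assumption:

```python
import cv2
import numpy as np

def is_target_frame(prev_bgr: np.ndarray, cur_bgr: np.ndarray, threshold: float = 8.0) -> bool:
    """Crude 'specific motion' proxy: mean absolute pixel change between consecutive frames."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_bgr, cv2.COLOR_BGR2GRAY)
    return float(cv2.absdiff(cur_gray, prev_gray).mean()) > threshold
```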
步骤303,确定在视频中添加跟随特定动作的轨迹的动态特效时,各目标图像帧对应的特效元素的属性,以及,基于特定动作在各目标图像帧中的位置确定在绘图界面中绘制特效元素的坐标。
步骤304,基于特效元素的属性、以及特效元素的坐标在绘图界面中渲染形成特效元素。
步骤305,将目标图像帧作为绘图界面的背景的方式填充到绘图界面中,形成具有动态特效的绘图界面帧。
步骤306,输出针对各目标图像帧对应形成的绘图界面帧。
步骤303至步骤306的实施细节可以参照前述步骤204至步骤206的记载,这里不再另外说明。
在一个实施例中,还可以执行以下步骤:
步骤307,按照时间先后顺序将视频中的非目标图像帧、以及输出的各绘图界面帧进行视频编码形成具有动态特效的视频文件。
仍然以图5D为例,将目标图像帧1、绘图界面帧1至绘图界面帧2、以及非目标图像帧2依次进行视频编码,形成具有用户设定的动态特效的视频文件,供用户侧终端本地解码播放,或者,分享到社交平台供其他用户访问。
由于动态特效已经承载于绘图界面帧1至绘图界面帧n,因此播放视频文件时可以观看到针对特定动作的轨迹对应的动态特效,从而可以迅速了解到该视频中的特定动作是视频发布者着重强调的部分,实现用户分享视频时希望观看者关注特定动作的效果。
下面,以视频处理装置实施在用户侧终端为例,对图2示例性示出的视频处理方法应用于如下的场景进行说明:在视频中添加跟踪特定对象的轮廓的动态特效。
参见图7示出的视频处理方法的一个可选的流程示意图,本发明实施例提供的视频处理方法可以用于对终端实时采集的视频进行处理,相应地,参见步骤401a:
步骤401a,用户侧终端进行视频采集,并在图形界面呈现采集到的图像帧。
例如,可以适用于用户拍摄环境的场景,又例如,可以适用于用户自拍的场景。
本发明实施例提供的视频处理方法还可以用于对终端本地预先存储的视频(文件)进行处理,例如,对终端预先采集的视频、从网络侧或从其他终端接收的视频进行处理,相应地,参见步骤401b:
步骤401b,用户侧终端解码视频,在图形界面呈现视频中的图像帧。
可以理解地,步骤401a和步骤401b是根据视频的类型(是实时采集的视频还是预先存储的视频)而对应执行的步骤。
步骤402,对视频的各图像帧中进行对象识别,将识别出的具有特定对象的图像帧确定为目标图像帧。
步骤403,确定在视频中添加跟随视频中特定对象的轮廓的动态特效时,各目标图像帧对应的特效元素的属性,以及,基于特定对象在各目标图像帧中的位置确定在绘图界面中绘制特效元素的坐标。
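To make step 403 concrete, the sketch below derives a per-frame coordinate from the tracked position of the chosen object. Template matching is used only as a simplistic stand-in for whatever object tracker the implementation actually employs; the returned point can then be normalised as described in the input-processing section further below:

```python
import cv2
import numpy as np

def object_position(frame_bgr: np.ndarray, template_bgr: np.ndarray):
    """(x, y) centre of the best match of the object template inside this frame."""
    scores = cv2.matchTemplate(frame_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(scores)
    th, tw = template_bgr.shape[:2]
    return top_left[0] + tw // 2, top_left[1] + th // 2
```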
步骤404,基于特效元素的属性、以及特效元素的坐标在绘图界面中渲染形成特效元素。
步骤405,将目标图像帧作为绘图界面的背景的方式填充到绘图界面中,形成具有动态特效的绘图界面帧。
步骤406,输出针对各目标图像帧对应形成的绘图界面帧。
在一个典型的应用场景中,用户侧终端实时输出绘图界面帧,从而用户可以及时查看在视频(例如,预先存储的视频,或者,用户侧终端当前正在采集环境所实时形成的视频)中设定的动态特效。
在一个实施例中,还可以执行以下步骤:
步骤407,按照时间先后顺序将视频中的非目标图像帧、以及输出的各绘图界面帧进行视频编码形成具有动态特效的视频文件。
仍然以图5D为例,将目标图像帧1、绘图界面帧1至绘图界面帧2、以及非目标图像帧2依次进行视频编码,形成具有用户设定的动态特效的视频文件,供用户侧终端本地解码播放,或者,分享到社交平台供其他用户访问。
由于动态特效已经承载于绘图界面帧1至绘图界面帧n,因此播放视频文件时可以观看到跟踪特定对象的外部轮廓的动态特效,从而可以迅速了解到该视频具有动态特效的对象及其运动的画面是视频发布者着重强调的部分,实现用户分享视频时希望观看者关注特定对象的效果。
下面,结合采用粒子系统工具构建光绘特效的示例进行说明,粒子系统工具构建光绘特效的基本单位也称为粒子。
场景1)如图8A和图8B所示,用户可以在视频中的某个时间点对应 的图像帧中的某个位置设定绘制的光绘特效的轨迹,即用户绘制的光绘特效的轨迹与视频中的特定位置绑定,也和视频的时间点对应的图像帧绑定。
场景2)用户可以选择光绘特效不同的轨迹的效果,示例性地,效果包括:
A)如图8D所示,光绘特效的轨迹的视觉外观不同。
B)光绘特效轨迹的消失时间长度不同。
C)光绘特效的粒子消失动画不同,示例性地,如:
I)逐渐消失;
II)变成圈消失;
III)化作水蒸气消失。
IV)如图8C和图8E所示,光绘特效的轨迹自身的延展。
D)如图8F所示,多粒子间的动画,如:单个粒子上升,单个粒子爆炸后消失,以实现手滑动的效果。
场景3)光绘特效和视频中特定动作的识别结合。
i)可以自动识别用户手部在视频的各个图像帧中运动的位置,并在各图像帧中手部的位置形成表征手部运动轨迹的光绘特效。
ii)如图8G所示,如用户准备在空中用手画圈,则自动识别出各图像帧中用户手部的位置,并根据各图像帧中用户手臂的位置以及用户手部的起点位置,形成对应手部的运动轨迹的光绘特效。
场景4)光绘特效与对特定对象的追踪识别结合。
I)用户添加的光绘特效的轨迹和视频的当前视频帧中的对象结合,之后若对象在视频的后续图像帧中移动,光绘特效的轨迹也根据对象在视频中对应移动。
II)如图8H所示,如用户在视频的当前图像帧中选定了一只小狗,之后采集的视频中,如果小狗移动,则后续的图像帧中光绘特效的轨迹会跟随小狗一起移动。
对前述光绘特效的实现进行说明,光绘特效的实现主要包含模版协议解析、输入处理、粒子效果渲染和视频合成4部分,下面结合图8I及图8J分别进行说明。
一、模版协议解析
光绘特效本质上是由单个粒子元素大量重复贴图构成。
每个粒子元素以不同的大小、颜色、旋转方向绘制在屏幕的不同位置,构成整体的光绘特效。
单个粒子元素支持的属性如下:发射角度;初始速度(x、y轴方向);重力加速度(x、y轴方向);向心力/离心力;切向加速度;自转速度;自转加速度;初始大小;结束大小;初始颜色;结束颜色;颜色混合方法;生命周期;最大粒子数量。
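Written out as a data structure, the attribute list above looks roughly like the sketch below; the field names are this example's own, since the patent only enumerates the concepts:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ParticleTemplate:
    emission_angle: float                    # 发射角度
    initial_velocity: Tuple[float, float]    # 初始速度(x、y轴方向)
    gravity: Tuple[float, float]             # 重力加速度(x、y轴方向)
    radial_accel: float                      # 向心力/离心力(正负号区分方向)
    tangential_accel: float                  # 切向加速度
    spin_speed: float                        # 自转速度
    spin_accel: float                        # 自转加速度
    start_size: float                        # 初始大小
    end_size: float                          # 结束大小
    start_color: Tuple[int, int, int, int]   # 初始颜色(RGBA)
    end_color: Tuple[int, int, int, int]     # 结束颜色(RGBA)
    blend_mode: str                          # 颜色混合方法,例如 "additive"
    lifetime_s: float                        # 生命周期(秒)
    max_particles: int                       # 最大粒子数量
```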
如图8I所示,对于视频中需要形成光绘特效的图像帧,会计算每个图像帧中形成光绘特效时,需要在空画布中绘制的用于构成光绘特效的粒子(元素)的属性如大小、颜色、旋转方向,并计算出粒子的坐标位置。各个图像帧对应的粒子在相应的图像帧中被连续呈现时,实现粒子运动的效果。
二、输入处理
光绘特效在视频中实现的原理是:通过粒子系统提供的粒子发射器(粒子系统提供的用户绘制粒子的工具)跟随输入坐标的移动而移动,随着粒子发射器自身坐标的跟随移动形成光绘特效的轨迹。
如前所述,这里的输入可能来自几种不同的输入源:
1)用户在视频中设定的位置,例如手指在呈现视频的显示界面上的滑动而检测到坐标。
2)输入视频中某个人物的特定动作识别,例如,识别视频的各图像帧中手部动作而得到坐标。
3)输入视频中某个对象的位置跟踪,例如对视频中小动物的追踪而得到的坐标。
无论输入源是哪种,最终都会转换为输入坐标(x,y),根据输入坐标(x,y)和输入图像的大小(w,h)对坐标进行归一化处理:
x=x/w;
y=y/h;
再将归一化的坐标放入画布系统转换为画布坐标:
x=x*canvasWidth;y=y*canvasHeight。
最终根据画布坐标调整粒子发射器在画布中的位置,通过粒子发射器在画布中绘制出粒子。
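A direct transcription of the two-step coordinate conversion above; whatever the input source, the input coordinate is first normalised by the input image size and then scaled to the canvas size before the particle emitter is moved there:

```python
def to_canvas_coords(x: float, y: float, w: int, h: int,
                     canvas_width: int, canvas_height: int):
    """Convert an input coordinate (x, y) on a w*h input image into canvas (emitter) coordinates."""
    nx, ny = x / w, y / h                          # normalised coordinates in 0..1
    return nx * canvas_width, ny * canvas_height   # canvas coordinates

# A touch at (540, 960) on a 1080x1920 input maps to the centre of a 720x1280 canvas.
print(to_canvas_coords(540, 960, 1080, 1920, 720, 1280))   # (360.0, 640.0)
```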
三、粒子效果渲染
渲染主要使用粒子系统提供的OpenGL图像操作接口,例如,假设将每秒划分成30帧,每帧处理会经过如下步骤:清空画布;传入粒子元素纹理,以及计算好的粒子元素顶点坐标、颜色等相关属性;设置模版指定的颜色混合模式;调用绘制接口在画布中绘制粒子。
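The per-frame steps above are OpenGL operations; the sketch below only mirrors their order with numpy so the clear / upload / blend / draw sequence is visible, and it assumes additive blending as the template's colour mixing mode:

```python
import numpy as np

def draw_frame(canvas_size: tuple, sprite: np.ndarray, particles: list) -> np.ndarray:
    """canvas_size: (height, width); sprite: small RGB texture used for every particle."""
    h, w = canvas_size
    canvas = np.zeros((h, w, 3), dtype=np.float32)             # 1. clear the canvas
    sh, sw = sprite.shape[:2]
    for p in particles:                                         # 2. per-particle position/colour
        x, y = int(p["x"]) - sw // 2, int(p["y"]) - sh // 2
        if 0 <= x and x + sw <= w and 0 <= y and y + sh <= h:   # skip sprites that would clip
            tinted = sprite.astype(np.float32) * (np.asarray(p["color"], np.float32) / 255.0)
            canvas[y:y + sh, x:x + sw] += tinted                # 3+4. additive blend + draw
    return np.clip(canvas, 0, 255).astype(np.uint8)
```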
四、视频合成
对于视频中需要形成光绘特效的图像帧,将相应的图像帧作为画布背景填充到已经针对相应图像帧绘制有粒子的画布中,实现光绘特效的粒子与相应的图像帧合成的效果,对于合成的画布帧进行编码并保存为视频文件。
后续可以在用户侧本地的终端播放,或者,上传到社交平台分享给其他用户播放。
再对前述视频处理装置的功能结构进行说明,参见图9示出的视频处理装置20的一个可选的功能结构示意图,包括:
第一确定部分21,配置为确定待添加的动态特效在视频中对应的目标图像帧;
第二确定部分22,配置为确定动态特效在各目标图像帧中对应的特效元素的属性、以及特效元素的坐标;
渲染部分23,配置为基于特效元素的属性、以及特效元素的坐标在绘图界面中渲染形成特效元素;
合成部分24,配置为将目标图像帧作为绘图界面的背景的方式填充到绘图界面中,形成具有动态特效的绘图界面帧;
输出部分25,配置为输出针对各目标图像帧对应形成的绘图界面帧。
在一个实施例中,第一确定部分21,还配置为基于用户操作在视频的时间轴上对应的时间段,确定视频中对应时间段的图像帧为目标图像帧。
在一个实施例中,第一确定部分21,还配置为对视频的各图像帧中进行特征识别,将识别出的具有特定动作的特征的图像帧确定为目标图像帧。
在一个实施例中,第一确定部分21,还配置为对视频的各图像帧中进行对象识别,将识别出的具有特定对象的图像帧确定为目标图像帧。
在一个实施例中,第二确定部分22,还配置为确定在视频中添加跟随用户触控操作轨迹的动态特效时,确定各目标图像帧对应的特效元素的属性,以及,基于用户在视频中触控操作的位置确定在绘图界面中绘制特效元素的坐标。
在一个实施例中,第二确定部分22,还配置为确定在视频中添加跟随特定动作的轨迹的动态特效时,确定各目标图像帧对应的特效元素的属性,以及,基于特定动作在各目标图像帧中的位置确定在绘图界面中绘制特效元素的坐标。
在一个实施例中,第二确定部分22,还配置为确定在视频中添加跟随 特定对象的轮廓的动态特效时,确定各目标图像帧对应的特效元素的属性,以及,基于特定对象在各目标图像帧中的位置确定在绘图界面中绘制特效元素的坐标。
在一个实施例中,第二确定部分22,还配置为基于动态特效确定在各目标图像帧对应形成的视觉效果,确定当在各目标图像帧形成相应的视觉效果时,在相应的目标图像帧中需要渲染形成的特效元素、以及相应的属性。
在一个实施例中,渲染部分23,还配置为基于目标图像帧的尺寸对特效元素的坐标进行归一化处理形成归一化坐标,在黑色背景的空绘图界面中对应特效元素的归一化坐标的位置,渲染形成具有相应属性的特效元素。
在一个实施例中,合成部分24,还配置为按照时间先后顺序将视频中的非目标图像帧、以及输出的各绘图界面帧进行视频编码形成具有动态特效的视频文件。
本发明实施例还提供了一种视频处理装置,包括:
处理器和用于存储能够在处理器上运行的计算机程序的存储器;其中,
所述处理器用于运行所述计算机程序时,执行:
确定待添加的动态特效在视频中对应的目标图像帧;
确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标;
基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素;
将所述目标图像帧作为所述绘图界面的背景的方式填充到所述绘图界面中,形成具有动态特效的绘图界面帧;
输出针对各所述目标图像帧对应形成的绘图界面帧。
所述处理器,还配置为运行所述计算机程序时,执行:
基于用户操作在所述视频的时间轴上对应的时间段,确定所述视频中对应所述时间段的图像帧为所述目标图像帧。
所述处理器,还配置为运行所述计算机程序时,执行:
对所述视频的各图像帧中进行特征识别,将识别出的具有特定动作的特征的图像帧确定为所述目标图像帧。
所述处理器,还配置为运行所述计算机程序时,执行:
对所述视频的各图像帧中进行对象识别,将识别出的具有特定对象的图像帧确定为所述目标图像帧。
所述处理器,还配置为运行所述计算机程序时,执行:
确定在所述视频中添加跟随所述用户触控操作轨迹的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述用户在所述视频中触控操作的位置确定在所述绘图界面中绘制所述特效元素的坐标。
所述处理器,还配置为运行所述计算机程序时,执行:
确定在所述视频中添加跟随所述特定动作的轨迹的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述特定动作在各所述目标图像帧中的位置确定在所述绘图界面中绘制所述特效元素的坐标。
所述处理器,还配置为运行所述计算机程序时,执行:
确定在所述视频中添加跟随所述特定对象的轮廓的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述特定对象在各所述目标图像帧中的位置确定在所述绘图界面中绘制所述特效元素的坐标。
所述处理器,还配置为运行所述计算机程序时,执行:
基于所述动态特效确定在各所述目标图像帧对应形成的视觉效果,确定当在各所述目标图像帧形成相应的视觉效果时,在相应的目标图像帧中需要渲染形成的特效元素、以及相应的属性。
所述处理器,还配置为运行所述计算机程序时,执行:
基于所述目标图像帧的尺寸对所述特效元素的坐标进行归一化处理形成归一化坐标,在黑色背景的空绘图界面中对应所述特效元素的归一化坐标的位置,渲染形成具有相应属性的所述特效元素。
所述处理器,还配置为运行所述计算机程序时,执行:
按照时间先后顺序将所述视频中的非目标图像帧、以及输出的各所述绘图界面帧进行视频编码形成具有所述动态特效的视频文件。
本发明实施例还提供了一种计算机存储介质,所述计算机存储介质存储有计算机程序,该计算机程序配置为被处理器运行时执行:
确定待添加的动态特效在视频中对应的目标图像帧;
确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标;
基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素;
将所述目标图像帧作为所述绘图界面的背景的方式填充到所述绘图界面中,形成具有动态特效的绘图界面帧;
输出针对各所述目标图像帧对应形成的绘图界面帧。
所述计算机程序,还配置为被处理器运行时执行:
基于用户操作在所述视频的时间轴上对应的时间段,确定所述视频中对应所述时间段的图像帧为所述目标图像帧。
所述计算机程序,还配置为被处理器运行时执行:
对所述视频的各图像帧中进行特征识别,将识别出的具有特定动作的特征的图像帧确定为所述目标图像帧。
所述计算机程序,还配置为被处理器运行时执行:
对所述视频的各图像帧中进行对象识别,将识别出的具有特定对象的图像帧确定为所述目标图像帧。
所述计算机程序,还配置为被处理器运行时执行:
确定在所述视频中添加跟随所述用户触控操作轨迹的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述用户在所述视频中触控操作的位置确定在所述绘图界面中绘制所述特效元素的坐标。
所述计算机程序,还配置为被处理器运行时执行:
确定在所述视频中添加跟随所述特定动作的轨迹的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述特定动作在各所述目标图像帧中的位置确定在所述绘图界面中绘制所述特效元素的坐标。
所述计算机程序,还配置为被处理器运行时执行:
确定在所述视频中添加跟随所述特定对象的轮廓的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述特定对象在各所述目标图像帧中的位置确定在所述绘图界面中绘制所述特效元素的坐标。
所述计算机程序,还配置为被处理器运行时执行:
基于所述动态特效确定在各所述目标图像帧对应形成的视觉效果,确定当在各所述目标图像帧形成相应的视觉效果时,在相应的目标图像帧中需要渲染形成的特效元素、以及相应的属性。
所述计算机程序,还配置为被处理器运行时执行:
基于所述目标图像帧的尺寸对所述特效元素的坐标进行归一化处理形成归一化坐标,在黑色背景的空绘图界面中对应所述特效元素的归一化坐标的位置,渲染形成具有相应属性的所述特效元素。
所述计算机程序,还配置为被处理器运行时执行:
按照时间先后顺序将所述视频中的非目标图像帧、以及输出的各所述绘图界面帧进行视频编码形成具有所述动态特效的视频文件。
综上所述,本发明实施例实现以下有益效果:
1)实现视频中的时间、位置和动态特效如光绘特效结合的效果,在视频播放至特定片段时在视频片段的各图像帧的特定位置实现动态特效,使视频的观看者能够关注视频的特定片段以及视频片段中的特定位置。
2)实现了动作识别和动态特效如光绘特效结合的效果,跟随特定动作的轨迹形成动态特效,使视频的观看者能够关注视频中的特定动作。
3)实现对象(如物体)追踪和动态特效如光绘特效结合的效果,跟随特定对象的运动形成与特定对象的外部轮廓对应的动态特效,使视频的观看者能关注视频中的特定对象。
本领域的技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储装置、随机存取存储器(RAM,Random Access Memory)、只读存储器(ROM,Read-Only Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本发明上述集成的部分如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机装置(可以是个人计算机、服务器、或者网络装置等)执行本发明各个实施例所述方法的全部或 部分。而前述的存储介质包括:移动存储装置、RAM、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。
工业实用性
本发明实施例确定待添加的动态特效在视频中对应的目标图像帧;确定动态特效在各目标图像帧中对应的特效元素的属性、以及特效元素的坐标;基于特效元素的属性、以及特效元素的坐标在绘图界面中渲染形成特效元素;将目标图像帧作为绘图界面的背景的方式填充到绘图界面中,形成具有动态特效的绘图界面帧;输出针对各目标图像帧对应形成的绘图界面帧。如此,提供通过在视频中确定需要形成动态特效的目标视频帧的方式,可以轻易地在视频中确定与视频中的一个片段或与某个特定对象对应的视频帧设定为目标视频帧;实现了在视频中根据需求定制化动态特效的技术效果;对于视频的受众来说,不论是从视频的哪个时间点开始观看,由于动态特效的醒目程度远高于在视频中绘制图层所形成的静态效果,因而能够快速了解视频的发布者在视频中所需要突出的片段或者对象,保证视频分享的预期效果。

Claims (22)

  1. 一种视频处理方法,包括:
    确定待添加的动态特效在视频中对应的目标图像帧;
    确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标;
    基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素;
    将所述目标图像帧作为所述绘图界面的背景的方式填充到所述绘图界面中,形成具有动态特效的绘图界面帧;
    输出针对各所述目标图像帧对应形成的绘图界面帧。
  2. 如权利要求1所述的方法,其中,所述确定待添加的动态特效在视频中对应的目标图像帧,包括:
    基于操作在所述视频的时间轴上对应的时间段,确定所述视频中对应所述时间段的图像帧为所述目标图像帧。
  3. 如权利要求1所述的方法,其中,所述确定待添加的动态特效在视频中对应的目标图像帧,包括:
    对所述视频的各图像帧中进行特征识别,将识别出的具有特定动作的特征的图像帧确定为所述目标图像帧。
  4. 如权利要求1所述的方法,其中,所述确定待添加的动态特效在视频中对应的目标图像帧,包括:
    对所述视频的各图像帧中进行对象识别,将识别出的具有特定对象的图像帧确定为所述目标图像帧。
  5. 如权利要求1所述的方法,其中,所述确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标,包 括:
    确定在所述视频中添加跟随触控操作轨迹的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于在所述视频中触控操作的位置确定在所述绘图界面中绘制所述特效元素的坐标。
  6. 如权利要求1所述的方法,其中,所述确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标,包括:
    确定在所述视频中添加跟随特定动作的轨迹的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述特定动作在各所述目标图像帧中的位置确定在所述绘图界面中绘制所述特效元素的坐标。
  7. 如权利要求1所述的方法,其中,所述确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标,包括:
    确定在所述视频中添加跟随特定对象的轮廓的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述特定对象在各所述目标图像帧中的位置确定在所述绘图界面中绘制所述特效元素的坐标。
  8. 如权利要求1所述的方法,其中,所述确定所述动态特效在各所述目标图像帧中对应的特效元素的属性,包括:
    基于所述动态特效确定在各所述目标图像帧对应形成的视觉效果,确定当在各所述目标图像帧形成相应的视觉效果时,在相应的目标图像帧中需要渲染形成的特效元素、以及相应的属性。
  9. 如权利要求1所述的方法,其中,所述基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素,包括:
    基于所述目标图像帧的尺寸对所述特效元素的坐标进行归一化处理形成归一化坐标;
    在黑色背景的空绘图界面中对应所述特效元素的归一化坐标的位置,渲染形成具有相应属性的所述特效元素。
  10. 如权利要求1所述的方法,其中,还包括:
    按照时间先后顺序将所述视频中的非目标图像帧、以及输出的各所述绘图界面帧进行视频编码形成具有所述动态特效的视频文件。
  11. 一种视频处理装置,包括:
    第一确定部分,配置为确定待添加的动态特效在视频中对应的目标图像帧;
    第二确定部分,配置为确定所述动态特效在各所述目标图像帧中对应的特效元素的属性、以及所述特效元素的坐标;
    渲染部分,配置为基于所述特效元素的属性、以及所述特效元素的坐标在绘图界面中渲染形成所述特效元素;
    合成部分,将所述目标图像帧作为所述绘图界面的背景的方式填充到所述绘图界面中,形成具有动态特效的绘图界面帧;
    输出部分,配置为输出针对各所述目标图像帧对应形成的绘图界面帧。
  12. 如权利要求11所述的视频处理装置,其中,
    所述第一确定部分,还配置为基于操作在所述视频的时间轴上对应的时间段,确定所述视频中对应所述时间段的图像帧为所述目标图像帧。
  13. 如权利要求11所述的视频处理装置,其中,
    所述第一确定部分,还配置为对所述视频的各图像帧中进行特征识别,将识别出的具有特定动作的特征的图像帧确定为所述目标图像帧。
  14. 如权利要求11所述的视频处理装置,其中,
    所述第一确定部分,还配置为对所述视频的各图像帧中进行对象识别,将识别出的具有特定对象的图像帧确定为所述目标图像帧。
  15. 如权利要求11所述的视频处理装置,其中,
    所述第二确定部分,还配置为确定在所述视频中添加跟随触控操作轨迹的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述用户在所述视频中触控操作的位置确定在所述绘图界面中绘制所述特效元素的坐标。
  16. 如权利要求11所述的视频处理装置,其中,
    所述第二确定部分,还配置为确定在所述视频中添加跟随特定动作的轨迹的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述特定动作在各所述目标图像帧中的位置确定在所述绘图界面中绘制所述特效元素的坐标。
  17. 如权利要求11所述的视频处理装置,其中,
    所述第二确定部分,还配置为确定在所述视频中添加跟随特定对象的轮廓的动态特效时,各所述目标图像帧对应的特效元素的属性,以及,基于所述特定对象在各所述目标图像帧中的位置确定在所述绘图界面中绘制所述特效元素的坐标。
  18. 如权利要求11所述的视频处理装置,其中,
    所述第二确定部分,还配置为基于所述动态特效确定在各所述目标图像帧对应形成的视觉效果,确定当在各所述目标图像帧形成相应的视觉效果时,在相应的目标图像帧中需要渲染形成的特效元素、以及相应的属性。
  19. 如权利要求11所述的视频处理装置,其中,
    所述渲染部分,还配置为基于所述目标图像帧的尺寸对所述特效元素的坐标进行归一化处理形成归一化坐标;
    在黑色背景的空绘图界面中对应所述特效元素的归一化坐标的位置,渲染形成具有相应属性的所述特效元素。
  20. 如权利要求11所述的视频处理装置,其中,
    所述合成部分,还配置为按照时间先后顺序将所述视频中的非目标图像帧、以及输出的各所述绘图界面帧进行视频编码形成具有所述动态特效的视频文件。
  21. 一种视频处理装置,包括:
    处理器和用于存储能够在处理器上运行的计算机程序的存储器;其中,
    所述处理器用于运行所述计算机程序时,执行权利要求1至10任一项所述的视频处理方法。
  22. 一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,该计算机可执行指令用于执行权利要求1至10任一项所述的视频处理方法。
PCT/CN2017/106102 2016-10-17 2017-10-13 视频处理方法、视频处理装置及存储介质 WO2018072652A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/231,873 US11012740B2 (en) 2016-10-17 2018-12-24 Method, device, and storage medium for displaying a dynamic special effect
US17/234,741 US11412292B2 (en) 2016-10-17 2021-04-19 Video processing method, video processing device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610903697.2 2016-10-17
CN201610903697.2A CN106385591B (zh) 2016-10-17 2016-10-17 视频处理方法及视频处理装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/231,873 Continuation US11012740B2 (en) 2016-10-17 2018-12-24 Method, device, and storage medium for displaying a dynamic special effect

Publications (1)

Publication Number Publication Date
WO2018072652A1 true WO2018072652A1 (zh) 2018-04-26

Family

ID=57957966

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/106102 WO2018072652A1 (zh) 2016-10-17 2017-10-13 视频处理方法、视频处理装置及存储介质

Country Status (3)

Country Link
US (2) US11012740B2 (zh)
CN (1) CN106385591B (zh)
WO (1) WO2018072652A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111491205A (zh) * 2020-04-17 2020-08-04 维沃移动通信有限公司 视频处理方法、装置及电子设备
WO2020173199A1 (zh) * 2019-02-27 2020-09-03 北京市商汤科技开发有限公司 显示方法及装置、电子设备及存储介质
CN112347301A (zh) * 2019-08-09 2021-02-09 北京字节跳动网络技术有限公司 图像特效处理方法、装置、电子设备和计算机可读存储介质
CN113824990A (zh) * 2021-08-18 2021-12-21 北京达佳互联信息技术有限公司 视频生成方法、装置及存储介质
US11308655B2 (en) 2018-08-24 2022-04-19 Beijing Microlive Vision Technology Co., Ltd Image synthesis method and apparatus

Families Citing this family (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106385591B (zh) 2016-10-17 2020-05-15 腾讯科技(上海)有限公司 视频处理方法及视频处理装置
CN108076373A (zh) * 2017-02-14 2018-05-25 北京市商汤科技开发有限公司 视频图像的处理方法、装置和电子设备
CN108737903B (zh) * 2017-04-25 2020-12-25 腾讯科技(深圳)有限公司 一种多媒体处理系统及多媒体处理方法
CN107124658B (zh) * 2017-05-02 2019-10-11 北京小米移动软件有限公司 视频直播方法及装置
CN107197341B (zh) * 2017-06-02 2020-12-25 福建星网视易信息系统有限公司 一种基于gpu的炫屏显示方法、装置及一种存储设备
CN108012090A (zh) * 2017-10-25 2018-05-08 北京川上科技有限公司 一种视频处理方法、装置、移动终端及存储介质
CN107888845B (zh) 2017-11-14 2022-10-21 腾讯数码(天津)有限公司 一种视频图像处理方法、装置及终端
CN108024071B (zh) * 2017-11-24 2022-03-08 腾讯数码(天津)有限公司 视频内容生成方法、视频内容生成装置及存储介质
CN108076375A (zh) * 2017-11-28 2018-05-25 北京川上科技有限公司 一种视频的手指移动轨迹特效实现方法和装置
CN108022279B (zh) * 2017-11-30 2021-07-06 广州市百果园信息技术有限公司 视频特效添加方法、装置及智能移动终端
CN107948667B (zh) * 2017-12-05 2020-06-30 广州酷狗计算机科技有限公司 在直播视频中添加显示特效的方法和装置
CN107911614B (zh) * 2017-12-25 2019-09-27 腾讯数码(天津)有限公司 一种基于手势的图像拍摄方法、装置和存储介质
CN108079579B (zh) * 2017-12-28 2021-09-28 珠海豹好玩科技有限公司 一种图像处理方法、装置以及终端
CN108307127A (zh) * 2018-01-12 2018-07-20 广州市百果园信息技术有限公司 视频处理方法及计算机存储介质、终端
CN108234825A (zh) * 2018-01-12 2018-06-29 广州市百果园信息技术有限公司 视频处理方法及计算机存储介质、终端
CN110062269A (zh) * 2018-01-18 2019-07-26 腾讯科技(深圳)有限公司 附加对象显示方法、装置及计算机设备
CN108495058A (zh) * 2018-01-30 2018-09-04 光锐恒宇(北京)科技有限公司 图像处理方法、装置和计算机可读存储介质
CN108234903B (zh) * 2018-01-30 2020-05-19 广州市百果园信息技术有限公司 互动特效视频的处理方法、介质和终端设备
CN108540824B (zh) * 2018-05-15 2021-01-19 北京奇虎科技有限公司 一种视频渲染方法和装置
CN108521578A (zh) * 2018-05-15 2018-09-11 北京奇虎科技有限公司 一种检测视频中可贴图区域、实现在视频中贴图的方法
CN108632660A (zh) * 2018-05-28 2018-10-09 深圳Tcl新技术有限公司 电视机的图像显示方法、电视机及存储介质
CN108933895A (zh) * 2018-07-27 2018-12-04 北京微播视界科技有限公司 三维粒子特效生成方法、装置和电子设备
CN110795177B (zh) * 2018-08-03 2021-08-31 浙江宇视科技有限公司 图形绘制方法及装置
CN109120980B (zh) * 2018-08-27 2021-04-06 深圳市青木文化传播有限公司 推介视频的特效添加方法及相关产品
CN109242814A (zh) * 2018-09-18 2019-01-18 北京奇虎科技有限公司 商品图像处理方法、装置及电子设备
CN109492577B (zh) * 2018-11-08 2020-09-18 北京奇艺世纪科技有限公司 一种手势识别方法、装置及电子设备
CN109756672A (zh) * 2018-11-13 2019-05-14 深圳艺达文化传媒有限公司 短视频动物模型叠加方法及相关产品
CN109451248B (zh) * 2018-11-23 2020-12-22 广州酷狗计算机科技有限公司 视频数据的处理方法、装置、终端及存储介质
CN109462776B (zh) * 2018-11-29 2021-08-20 北京字节跳动网络技术有限公司 一种视频特效添加方法、装置、终端设备及存储介质
CN109660784A (zh) * 2018-11-30 2019-04-19 深圳市灼华互娱科技有限公司 特效数据处理方法和装置
CN111258413A (zh) * 2018-11-30 2020-06-09 北京字节跳动网络技术有限公司 虚拟对象的控制方法和装置
CN109698914B (zh) * 2018-12-04 2022-03-01 广州方硅信息技术有限公司 一种闪电特效渲染方法、装置、设备及存储介质
CN109618211A (zh) * 2018-12-04 2019-04-12 深圳市子瑜杰恩科技有限公司 短视频道具编辑方法及相关产品
CN109889893A (zh) * 2019-04-16 2019-06-14 北京字节跳动网络技术有限公司 视频处理方法、装置及设备
CN110111279B (zh) * 2019-05-05 2021-04-30 腾讯科技(深圳)有限公司 一种图像处理方法、装置及终端设备
CN110415326A (zh) * 2019-07-18 2019-11-05 成都品果科技有限公司 一种粒子效果的实现方法及装置
CN110414596B (zh) * 2019-07-25 2023-09-26 腾讯科技(深圳)有限公司 视频处理、模型训练方法和装置、存储介质及电子装置
US11295504B1 (en) * 2019-08-01 2022-04-05 Meta Platforms, Inc. Systems and methods for dynamic digital animation
CN111796818B (zh) * 2019-10-16 2022-11-29 厦门雅基软件有限公司 多媒体文件的制作方法、装置、电子设备及可读存储介质
US11158028B1 (en) * 2019-10-28 2021-10-26 Snap Inc. Mirrored selfie
CN110868634B (zh) * 2019-11-27 2023-08-22 维沃移动通信有限公司 一种视频处理方法及电子设备
CN112887631B (zh) * 2019-11-29 2022-08-12 北京字节跳动网络技术有限公司 在视频中显示对象的方法、装置、电子设备及计算机可读存储介质
CN110930487A (zh) * 2019-11-29 2020-03-27 珠海豹趣科技有限公司 一种动画实现方法及装置
CN111111177A (zh) * 2019-12-23 2020-05-08 北京像素软件科技股份有限公司 游戏特效扰动背景的方法、装置和电子设备
CN111080751A (zh) * 2019-12-30 2020-04-28 北京金山安全软件有限公司 碰撞渲染方法和装置
CN113535282B (zh) * 2020-04-14 2024-04-30 北京字节跳动网络技术有限公司 特效数据处理方法及装置
CN113709389A (zh) * 2020-05-21 2021-11-26 北京达佳互联信息技术有限公司 一种视频渲染方法、装置、电子设备及存储介质
CN111654755B (zh) * 2020-05-21 2023-04-18 维沃移动通信有限公司 一种视频编辑方法及电子设备
CN113810783B (zh) * 2020-06-15 2023-08-25 腾讯科技(深圳)有限公司 一种富媒体文件处理方法、装置、计算机设备及存储介质
CN111899192B (zh) * 2020-07-23 2022-02-01 北京字节跳动网络技术有限公司 交互方法、装置、电子设备及计算机可读存储介质
CN111866587A (zh) * 2020-07-30 2020-10-30 口碑(上海)信息技术有限公司 短视频的生成方法及装置
CN112118397B (zh) * 2020-09-23 2021-06-22 腾讯科技(深圳)有限公司 一种视频合成的方法、相关装置、设备以及存储介质
CN112235516B (zh) * 2020-09-24 2022-10-04 北京达佳互联信息技术有限公司 视频生成方法、装置、服务器及存储介质
CN116437034A (zh) * 2020-09-25 2023-07-14 荣耀终端有限公司 视频特效添加方法、装置及终端设备
CN112188260A (zh) * 2020-10-26 2021-01-05 咪咕文化科技有限公司 视频的分享方法、电子设备及可读存储介质
CN112637517B (zh) * 2020-11-16 2022-10-28 北京字节跳动网络技术有限公司 视频处理方法、装置、电子设备及存储介质
CN112738624B (zh) * 2020-12-23 2022-10-25 北京达佳互联信息技术有限公司 用于视频的特效渲染的方法和装置
CN112700518B (zh) * 2020-12-28 2023-04-07 北京字跳网络技术有限公司 拖尾视觉效果的生成方法、视频的生成方法、电子设备
CN113068072A (zh) * 2021-03-30 2021-07-02 北京达佳互联信息技术有限公司 视频的播放方法、装置及设备
CN114501041B (zh) * 2021-04-06 2023-07-14 抖音视界有限公司 特效显示方法、装置、设备及存储介质
CN113207038B (zh) * 2021-04-21 2023-04-28 维沃移动通信(杭州)有限公司 视频处理方法、视频处理装置和电子设备
CN113254677A (zh) * 2021-07-06 2021-08-13 北京达佳互联信息技术有限公司 多媒体信息处理方法、装置、电子设备及存储介质
CN113542855B (zh) * 2021-07-21 2023-08-22 Oppo广东移动通信有限公司 视频处理方法、装置、电子设备和可读存储介质
CN113518256B (zh) * 2021-07-23 2023-08-08 腾讯科技(深圳)有限公司 视频处理方法、装置、电子设备及计算机可读存储介质
CN113556481B (zh) * 2021-07-30 2023-05-23 北京达佳互联信息技术有限公司 视频特效的生成方法、装置、电子设备及存储介质
CN113596564B (zh) * 2021-09-29 2021-12-28 卡莱特云科技股份有限公司 一种画面播放方法及装置
CN114143398B (zh) * 2021-11-17 2023-08-25 西安维沃软件技术有限公司 视频播放方法、装置
CN114007121B (zh) * 2021-12-29 2022-04-15 卡莱特云科技股份有限公司 一种视频播放特效变换方法、装置及系统
CN114567805A (zh) * 2022-02-24 2022-05-31 北京字跳网络技术有限公司 确定特效视频的方法、装置、电子设备及存储介质
CN114782579A (zh) * 2022-04-26 2022-07-22 北京沃东天骏信息技术有限公司 一种图像渲染方法及装置、存储介质
CN116095412B (zh) * 2022-05-30 2023-11-14 荣耀终端有限公司 视频处理方法及电子设备
CN116580131B (zh) * 2023-04-28 2024-02-13 杭州慧跃网络科技有限公司 一种静态图像渲染方法、装置、系统及存储介质
CN116450057B (zh) * 2023-06-19 2023-08-15 成都赛力斯科技有限公司 基于客户端的车辆功能图片生成方法、装置及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101212640A (zh) * 2006-12-29 2008-07-02 英华达股份有限公司 视频通话方法
CN103220490A (zh) * 2013-03-15 2013-07-24 广东欧珀移动通信有限公司 一种在视频通信中实现特效的方法及视频用户端
CN104394313A (zh) * 2014-10-27 2015-03-04 成都理想境界科技有限公司 特效视频生成方法及装置
KR101528312B1 (ko) * 2014-02-14 2015-06-11 주식회사 케이티 영상 편집 방법 및 이를 위한 장치
CN104967865A (zh) * 2015-03-24 2015-10-07 腾讯科技(北京)有限公司 视频预览方法和装置
CN106385591A (zh) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 视频处理方法及视频处理装置

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001351116A (ja) * 2000-06-07 2001-12-21 Sony Corp 電子アニメコミック提供システム、電子情報作成装置、情報処理装置、記録媒体及び電子アニメコミック提供方法
JP5771329B2 (ja) * 2011-07-20 2015-08-26 ゼットティーイー コーポレイション 動的壁紙の生成方法及び生成装置
JP2014068274A (ja) * 2012-09-26 2014-04-17 Olympus Imaging Corp 画像編集装置、画像編集方法、およびプログラム
KR102109054B1 (ko) * 2013-04-26 2020-05-28 삼성전자주식회사 애니메이션 효과를 제공하는 사용자 단말 장치 및 그 디스플레이 방법
CN103455968A (zh) * 2013-08-07 2013-12-18 厦门美图网科技有限公司 一种具有粒子元素的实时影像渲染方法
JPWO2015049899A1 (ja) * 2013-10-01 2017-03-09 オリンパス株式会社 画像表示装置および画像表示方法
CN103853562B (zh) * 2014-03-26 2017-02-15 北京奇艺世纪科技有限公司 一种视频帧渲染方法及装置
CN104394324B (zh) * 2014-12-09 2018-01-09 成都理想境界科技有限公司 特效视频生成方法及装置
CN104618797B (zh) * 2015-02-06 2018-02-13 腾讯科技(北京)有限公司 信息处理方法、装置及客户端
CN104780458A (zh) * 2015-04-16 2015-07-15 美国掌赢信息科技有限公司 一种即时视频中的特效加载方法和电子设备
US10074014B2 (en) * 2015-04-22 2018-09-11 Battelle Memorial Institute Feature identification or classification using task-specific metadata
CN106303491A (zh) * 2015-05-27 2017-01-04 深圳超多维光电子有限公司 图像处理方法及装置
US10474877B2 (en) * 2015-09-22 2019-11-12 Google Llc Automated effects generation for animated content
CN105635806B (zh) * 2015-12-28 2018-12-28 北京像素软件科技股份有限公司 群体运动场景的渲染方法
CN105812866A (zh) * 2016-03-15 2016-07-27 深圳创维-Rgb电子有限公司 智能终端的控制方法及装置
CN105975273B (zh) * 2016-05-04 2019-04-02 腾讯科技(深圳)有限公司 粒子动画的实现及优化工具的净化过程展示方法和系统
CN106028052A (zh) * 2016-05-30 2016-10-12 徐文波 即时视频中连续发送特效的方法和装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101212640A (zh) * 2006-12-29 2008-07-02 英华达股份有限公司 视频通话方法
CN103220490A (zh) * 2013-03-15 2013-07-24 广东欧珀移动通信有限公司 一种在视频通信中实现特效的方法及视频用户端
KR101528312B1 (ko) * 2014-02-14 2015-06-11 주식회사 케이티 영상 편집 방법 및 이를 위한 장치
CN104394313A (zh) * 2014-10-27 2015-03-04 成都理想境界科技有限公司 特效视频生成方法及装置
CN104967865A (zh) * 2015-03-24 2015-10-07 腾讯科技(北京)有限公司 视频预览方法和装置
CN106385591A (zh) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 视频处理方法及视频处理装置

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11308655B2 (en) 2018-08-24 2022-04-19 Beijing Microlive Vision Technology Co., Ltd Image synthesis method and apparatus
WO2020173199A1 (zh) * 2019-02-27 2020-09-03 北京市商汤科技开发有限公司 显示方法及装置、电子设备及存储介质
US11687209B2 (en) 2019-02-27 2023-06-27 Beijing Sensetime Technology Development Co., Ltd. Display method and apparatus for displaying display effects
CN112347301A (zh) * 2019-08-09 2021-02-09 北京字节跳动网络技术有限公司 图像特效处理方法、装置、电子设备和计算机可读存储介质
CN111491205A (zh) * 2020-04-17 2020-08-04 维沃移动通信有限公司 视频处理方法、装置及电子设备
CN111491205B (zh) * 2020-04-17 2023-04-25 维沃移动通信有限公司 视频处理方法、装置及电子设备
CN113824990A (zh) * 2021-08-18 2021-12-21 北京达佳互联信息技术有限公司 视频生成方法、装置及存储介质

Also Published As

Publication number Publication date
US11012740B2 (en) 2021-05-18
CN106385591B (zh) 2020-05-15
US20210243493A1 (en) 2021-08-05
US20190132642A1 (en) 2019-05-02
CN106385591A (zh) 2017-02-08
US11412292B2 (en) 2022-08-09

Similar Documents

Publication Publication Date Title
WO2018072652A1 (zh) 视频处理方法、视频处理装置及存储介质
CN112738408B (zh) 图像修改器的选择性识别和排序
KR102653793B1 (ko) 비디오 클립 객체 추적
CN109688451B (zh) 摄像机效应的提供方法及系统
KR101773018B1 (ko) 이미지화된 오브젝트 특성들에 기초한 증강 현실
US20150185825A1 (en) Assigning a virtual user interface to a physical object
US20140248950A1 (en) System and method of interaction for mobile devices
US20160217590A1 (en) Real time texture mapping for augmented reality system
TW201539305A (zh) 使用姿勢控制基於計算的裝置
CN112148189A (zh) 一种ar场景下的交互方法、装置、电子设备及存储介质
CN106464773B (zh) 增强现实的装置及方法
WO2023045207A1 (zh) 任务处理方法及装置、电子设备、存储介质和计算机程序
CN109448050B (zh) 一种目标点的位置的确定方法及终端
US10620807B2 (en) Association of objects in a three-dimensional model with time-related metadata
KR20220101683A (ko) 세계-공간 세그먼테이션
WO2023030176A1 (zh) 视频处理方法、装置、计算机可读存储介质及计算机设备
CN106536004B (zh) 增强的游戏平台
US11836847B2 (en) Systems and methods for creating and displaying interactive 3D representations of real objects
CN112333498A (zh) 一种展示控制方法、装置、计算机设备及存储介质
KR20170120299A (ko) 립모션을 이용한 실감형 콘텐츠 서비스 시스템
KR102635477B1 (ko) 증강현실 기반의 공연 콘텐츠 제공장치 및 방법
KR102276789B1 (ko) 동영상 편집 방법 및 장치
US20230344953A1 (en) Camera settings and effects shortcuts
US20180365268A1 (en) Data structure, system and method for interactive media
US20170287189A1 (en) Displaying a document as mixed reality content

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17862154; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 17862154; Country of ref document: EP; Kind code of ref document: A1)