WO2019141100A1 - Additional object display method, apparatus, computer device and storage medium - Google Patents

Additional object display method, apparatus, computer device and storage medium

Info

Publication number: WO2019141100A1
Authority
WO
WIPO (PCT)
Prior art keywords: display, video, picture frame, target, additional
Application number: PCT/CN2019/070616
Other languages: English (en), French (fr)
Inventor
肖仙敏
张中宝
蒋辉
王文涛
肖鹏
黎雄志
张元昊
林锋
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司
Priority to JP2020539223A (JP7109553B2)
Priority to EP19740905.5A (EP3742743A4)
Publication of WO2019141100A1
Priority to US15/930,124 (US11640235B2)

Classifications

    • G06F3/0486 Drag-and-drop
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G06F3/04845 Interaction techniques based on GUIs for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/04883 Input of data by handwriting using a touch-screen or digitiser, e.g. gesture or text
    • G11B27/036 Insert-editing (electronic editing of digitised analogue information signals, e.g. audio or video signals)
    • G11B27/34 Indicating arrangements
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4318 Generation of visual interfaces by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • H04N21/443 OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4438 Window management, e.g. event handling following interaction with the user interface
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content
    • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/485 End-user interface for client configuration

Definitions

  • the present application relates to the field of computer application technologies, and in particular, to an additional object display method, apparatus, computer device, and storage medium.
  • the target location and the additional object to be added may be specified in the video playing interface, and when the terminal plays the video, the additional object is continuously displayed at the target location.
  • However, the display position of the additional object in the video is fixed, which causes a mismatch between the additional object and the video picture during subsequent playback.
  • the embodiment of the present application provides an additional object display method, apparatus, computer device, and storage medium, which can match an additional object in a video with a video playback screen.
  • the technical solution is as follows:
  • In one aspect, an additional object display method is provided, the method being performed by a terminal and comprising: displaying a trigger control in a video play interface, where the video play interface is used to play a video; in response to an activation operation on the trigger control, pausing playing the video and displaying a reference picture frame in the video play interface, where the reference picture frame is the picture frame corresponding to the pause time point in the video; acquiring a target object in response to a drag operation on the trigger control, where the target object is the display object corresponding to the end position of the drag operation in the reference picture frame; and, when the video is played, displaying an additional object corresponding to the trigger control in the picture frames of the video, corresponding to the target object.
  • In another aspect, an additional object display device is provided for use in a terminal, the device including: a control display module configured to display a trigger control in a video play interface, where the video play interface is used to play a video;
  • a pause module configured to pause playing the video in response to an activation operation on the trigger control, and display a reference picture frame in the video play interface, where the reference picture frame is the picture frame corresponding to the pause time point in the video;
  • An object obtaining module configured to acquire a target object in response to a drag operation on the trigger control; the target object is a display object corresponding to an end position of the drag operation in the reference picture frame;
  • an object display module configured to display, when the video is played, an additional object corresponding to the trigger control in the picture frames of the video, corresponding to the target object.
  • Optionally, the device further includes: a tracking module configured to track the target object in each picture frame in the video, before the object display module displays the additional object corresponding to the trigger control, to obtain first display information, where the first display information is used to indicate at least one of a display position, a display size, and a display posture of the target object in the respective picture frames;
  • an information generating module configured to generate second display information according to the first display information, where the second display information is used to indicate at least one of a display position, a display size, and a display posture of the additional object in the respective picture frames;
  • the object display module is configured to display the additional object in each of the picture frames according to the second display information when the video is played.
  • Optionally, the first display information includes pixel coordinates of the target point in the target object in the respective picture frames, where the target point is the position point in the target object corresponding to the end position of the drag operation.
  • Optionally, the information generating module is specifically configured to: acquire pixel coordinates of the additional object in the respective picture frames according to the pixel coordinates of the target point in each picture frame and the relative position information between the additional object and the target point; and generate the second display information including the pixel coordinates of the additional object in the respective picture frames.
  • Optionally, the device further includes: a preview image display module configured to display a preview image of the additional object in the video play interface;
  • a display position obtaining module configured to acquire the display position of the preview image in the reference picture frame;
  • a relative position obtaining module configured to acquire relative position information between the additional object and the target point according to the display position of the preview image in the reference picture frame and the end position of the drag operation in the reference picture frame.
  • Optionally, the device further includes: a moving module configured to move the position of the preview image of the additional object in the video play interface in response to a drag operation on the preview image of the additional object.
  • Optionally, the first display information includes the display size of the target object in each picture frame, and the information generating module is specifically configured to: calculate a zoom ratio of the additional object in each picture frame according to the display size of the target object in that picture frame and the original size of the target object, where the original size of the target object is its display size in the reference picture frame; obtain the display size of the additional object in each picture frame according to the original size of the additional object and the zoom ratio; and generate the second display information including the display size of the additional object in the respective picture frames.
  • Optionally, the first display information includes the display position and the display posture of the target object in the respective picture frames, and the information generating module is specifically configured to: acquire the display position and display posture of the additional object in the respective picture frames according to the display position and display posture of the target object in each picture frame and the relative position information between the additional object and the target point; and generate the second display information including the display position and display posture of the additional object in the respective picture frames.
  • Optionally, the tracking module is configured to track the target object frame by frame in the respective picture frames, starting from the reference picture frame, in play-time order and/or in reverse play-time order, to obtain the first display information.
  • Optionally, the tracking module is specifically configured to track the target object in each picture frame in the video by using the Clustering of Static-Adaptive Correspondences for Deformable Object Tracking (CMT) algorithm, to obtain the first display information.
  • the device further includes: a switch control display module, configured to display a switch control corresponding to the trigger control in the video play interface;
  • a selection interface display module configured to display, in response to an activation operation on the switching control, an additional object selection interface that includes at least two candidate objects;
  • an additional object obtaining module configured to acquire, in response to a selection operation in the additional object selection interface, the candidate object corresponding to the selection operation as a new additional object corresponding to the trigger control.
  • the additional object is a static display object, or the additional object is a dynamic display object.
  • In another aspect, a computer device is provided, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the additional object display method described above.
  • In another aspect, a computer readable storage medium is provided, having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the additional object display method described above.
  • By the above solution, a trigger control is displayed in the video play interface in advance, and when the user activates the trigger control, video playback is paused.
  • When a drag operation on the trigger control is received, the display object corresponding to the end position of the drag operation is determined as the target object, and the additional object is displayed corresponding to the same target object in each picture frame during subsequent playback, so that the additional object stays matched with the video picture while the video plays.
  • FIG. 1 is a basic flow chart of an additional object display provided by an exemplary embodiment of the present application.
  • FIG. 2 is a diagram showing a trigger control display interface according to the embodiment shown in FIG. 1;
  • FIG. 3 is a flowchart of an additional object display method provided by an exemplary embodiment of the present application.
  • FIG. 4 is a display interface diagram of a trigger control and an additional object preview diagram according to the embodiment shown in FIG. 3;
  • FIG. 5 is a schematic diagram of additional object switching according to the embodiment shown in FIG. 3;
  • FIG. 6 is a schematic diagram of zooming an additional object according to the embodiment shown in FIG. 3;
  • FIG. 7 is a display interface diagram of a trigger control and an additional object preview image according to the embodiment shown in FIG. 3;
  • FIG. 8 is a flowchart of the initialization process of the CMT algorithm according to the embodiment shown in FIG. 3;
  • FIG. 9 is a flowchart of the per-picture-frame processing of the CMT algorithm according to the embodiment shown in FIG. 3;
  • FIG. 10 is a flow chart of object tracking according to the embodiment shown in FIG. 3;
  • FIG. 11 is a schematic diagram of an operation of the close-following mode according to an exemplary embodiment of the present application.
  • FIG. 12 is a schematic diagram of another operation of the close-following mode provided by an exemplary embodiment of the present application.
  • FIG. 13 is a schematic diagram of an operation of the kite mode according to an exemplary embodiment of the present application.
  • FIG. 14 is a schematic diagram of another operation of the kite mode provided by an exemplary embodiment of the present application.
  • FIG. 15 is a structural block diagram of an additional object display device according to an exemplary embodiment of the present application.
  • FIG. 16 is a structural block diagram of a computer device according to an exemplary embodiment of the present application.
  • the terminal may be a mobile terminal such as a smart phone, a tablet computer or an e-book reader, or the terminal may also be a personal computer device such as a desktop computer or a notebook computer.
  • An additional object, also referred to as a sticker or a video sticker, is a text or image element attached afterwards to the upper layer of an existing video for display.
  • Additional objects include dynamic objects (also called dynamic stickers) and static objects (also called static stickers).
  • Additional objects can be transparent, translucent, or non-transparent display objects.
  • the picture frame in this application refers to image data that is played according to a time stamp during video playback.
  • Object tracking in the present application means that, given a specified display object among the display objects contained in one picture frame of a video's frame sequence, the specified display object is found in the other picture frames of the sequence, and information such as the position, size, and posture (usually a rotation angle) of the specified display object in each of those other picture frames is obtained.
  • The display object in a picture frame refers to a visual element in the picture frame; for example, a person, a face, a table, a stone, a house, a cloud, or the sky in a picture frame can all be display objects.
  • The Clustering of Static-Adaptive Correspondences for Deformable Object Tracking (CMT) algorithm is an object tracking algorithm that can be applied to track display objects (such as people or objects) in a video scene.
  • the CMT algorithm is a feature-based tracking method that uses the classical optical flow method as part of the algorithm.
  • A dictionary is a collection for storing data having a mapping relationship, and can be regarded as a container for key-value pairs, where each key-value pair can be regarded as an entry.
  • The dictionary accesses elements by key; keys cannot repeat, and each value is an object.
  • The key-value pairs can be stored in the dictionary in an ordered or unordered manner.
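  • As a purely illustrative sketch (not part of the disclosed method), the tracking results described later in this document could be held in such a dictionary, keyed by frame timestamp; the field names below are hypothetical:

```python
# Illustrative sketch only: a dictionary keyed by the frame timestamp
# (in milliseconds), whose value records whether the target was tracked
# and, if so, the sticker's center point and zoom value. Field names
# ("visible", "center", "scale") are hypothetical, not from the patent.
tracking_data = {
    0:  {"visible": True,  "center": (320, 180), "scale": 1.00},
    33: {"visible": True,  "center": (324, 182), "scale": 1.02},
    66: {"visible": False, "center": None,       "scale": None},
}

# Keys cannot repeat: writing to an existing key replaces its entry.
tracking_data[66] = {"visible": True, "center": (330, 185), "scale": 1.05}
```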
  • FIG. 1 shows a basic flowchart of an additional object display provided by an exemplary embodiment of the present application, which may be implemented by a terminal. As shown in Figure 1, the process can include the following steps:
  • Step 11 Display a trigger control in the video play interface, and the video play interface is used to play the video.
  • When playing a video, the terminal may display a trigger control in the video play interface; the trigger control floats on the upper layer of the video play interface and can accept user operations, such as cursor or touch operations.
  • FIG. 2 illustrates a trigger control display interface diagram according to an embodiment of the present application.
  • In FIG. 2, the terminal plays a video picture 21 in the video play interface 20, and the video picture 21 includes a trigger control 22, where the trigger control 22 is not a display object originally included in the video picture 21, but an object additionally superimposed by the terminal on the upper layer of the video picture 21.
  • Step 12 When the video is played, in response to the activation operation on the trigger control, the video is paused, and a reference picture frame is displayed in the video play interface, where the reference picture frame is the picture frame corresponding to the pause time point in the video.
  • the user can perform an activation operation on the trigger control (such as a touch operation), and at this time, the terminal can pause the video playback so that the user selects the target object to which the additional object needs to be added.
  • For example, in FIG. 2, the trigger control 22 can accept the user's click or touch operation; after the activation operation, the terminal pauses playing the video, the trigger control 22 enters the active state, and the picture frame displayed at the pause is the above reference picture frame.
  • Step 13 Acquire a target object in response to the drag operation on the trigger control, where the target object is a display object corresponding to an end position of the drag operation in the reference picture frame.
  • In a possible implementation, the trigger control may accept the user's drag operation, and when the drag operation ends, the terminal may acquire the target object in the reference picture frame according to the end position of the drag operation.
  • the reference picture frame includes a plurality of display objects such as a person object 23, a house 24, a hill 25, and the like.
  • The user can drag the trigger control 22 by a touch-and-slide operation, and when the drag operation ends, the display object at the position of the trigger control 22 (i.e., the person object 23 in FIG. 2) is determined as the target object.
  • In a possible implementation, the terminal may track the target object in each picture frame in the video to obtain first display information, where the first display information is used to indicate at least one of a display position, a display size, and a display posture of the target object in the respective picture frames; the terminal generates second display information according to the first display information, where the second display information is used to indicate at least one of a display position, a display size, and a display posture of the additional object in the respective picture frames.
  • Step 14 When the video is played, an additional object corresponding to the trigger control is displayed in the picture frames of the video, corresponding to the target object.
  • the additional object corresponding to the trigger control may be displayed in the video play interface corresponding to the target object.
  • Displaying the additional object corresponding to the target object may mean displaying the additional object overlaid on the upper layer of the target object, displaying the additional object around the target object, or displaying the additional object at a position relative to the target object.
  • the terminal may display the additional object in each picture frame according to the second display information when playing the video.
  • the additional object 24 is displayed corresponding to the character object 23.
  • In summary, the terminal displays a trigger control in the video play interface in advance; when the user activates the trigger control, video playback is paused, and a drag operation on the trigger control is then received.
  • The display object corresponding to the end position of the drag operation is determined as the target object, and the additional object is displayed corresponding to the same target object in each picture frame during subsequent playback, maintaining the effect that the additional object matches the video picture throughout playback.
  • When the additional object corresponding to the trigger control is displayed in the video, the terminal may display the additional object according to a preset positional relationship between the additional object and the target object; or the terminal may display the additional object according to the positional relationship between the additional object and the target object set by the user when selecting the target object.
  • FIG. 3 illustrates a flowchart of an additional object display method provided by an exemplary embodiment of the present application, which may be performed by a terminal.
  • the method can include the following steps:
  • Step 301 A trigger control is displayed in the video play interface.
  • Step 302 When playing a video, pause the playing of the video in response to the activation operation on the trigger control, and display a reference picture frame in the video play interface, where the reference picture frame is the picture frame corresponding to the pause time point in the video.
  • Step 303 Acquire a target object in response to the drag operation on the trigger control, where the target object is a display object corresponding to the end position of the drag operation in the reference picture frame.
  • In a possible implementation, the terminal also displays a preview image of the additional object in the video play interface.
  • the preview image of the additional object may move in the video play interface along with the trigger control.
  • FIG. 4 shows a display interface diagram of a trigger control and an additional object preview image according to an embodiment of the present application.
  • the terminal plays a video screen 41 in the video playing interface 40.
  • The video picture 41 includes a trigger control 42 and a preview image 43 of the additional object.
  • When the user drags the trigger control 42, the trigger control 42 and the preview image 43 of the additional object move together to a new location in the video picture 41.
  • Optionally, the operation point of the above drag operation may fall in the area corresponding to either the trigger control or the preview image of the additional object; that is to say, in the embodiment of the present application, a drag operation performed by the user on the trigger control or on the preview image of the additional object may both be regarded as a drag operation on the trigger control.
  • the position of the trigger control 42 and the preview image 43 of the attached object in the video screen 41 can be moved.
  • Optionally, the terminal displays a switching control corresponding to the trigger control; in response to an activation operation on the switching control, the terminal displays an additional object selection interface that includes at least two candidate objects; and in response to a selection operation in the additional object selection interface, the terminal acquires the candidate object corresponding to the selection operation as the new additional object corresponding to the trigger control.
  • the user can also freely switch the additional object corresponding to the trigger control.
  • FIG. 5 illustrates a schematic diagram of an additional object handover according to an embodiment of the present application.
  • the terminal plays a video screen 51 in the video playing interface 50.
  • The video picture 51 includes a trigger control 52, a preview image 53 of the additional object, and a switching control 54.
  • Upon receiving the user's click operation on the switching control 54, the terminal displays an additional object selection interface 55 containing a candidate object 55a and a candidate object 55b, where the candidate object 55a corresponds to the current additional object.
  • When the user selects the candidate object 55b, the terminal switches the content displayed in the preview image 53 of the additional object to the preview image of the candidate object 55b.
  • Optionally, a zoom control is also displayed on the preview image of the additional object; the terminal scales the preview image of the additional object in response to a zoom operation on the zoom control, and at the same time acquires the size of the scaled preview image as the new display size of the additional object.
  • the terminal may record the size of the preview image of the additional object in the video playing interface as the display size of the additional object, and the user may freely scale the display size of the additional object.
  • FIG. 6 illustrates a zooming diagram of an additional object involved in an embodiment of the present application.
  • the terminal plays a video screen 61 in the video playing interface 60.
  • The video picture 61 includes a trigger control 62, a preview image 63 of the additional object, and a zoom control 66.
  • Upon receiving the user's drag on the zoom control 66, the terminal adjusts the size of the preview image 63 of the additional object, and acquires the adjusted size of the preview image 63 as the new display size of the additional object.
  • Optionally, the terminal acquires the display position of the preview image in the reference picture frame, and obtains the relative position information between the additional object and the target point according to the display position of the preview image in the reference picture frame and the end position of the drag operation in the reference picture frame.
  • In a possible implementation, the preview image of the additional object does not move along with the trigger control. After receiving the drag operation on the trigger control, the terminal may acquire the relative position information between the additional object and the target point according to the position of the preview image in the reference picture frame and the end position of the drag operation.
  • For example, assume the trigger control is initially at the lower left corner of the preview image; if, when the drag operation ends, the trigger control is 200 pixels from the lower left corner of the preview image in a direction 30 degrees to its lower left, the terminal can obtain the relative position information between the additional object and the target point as: the target point is 200 pixels from the lower left corner of the preview image, 30 degrees to its lower left.
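  • The following is a minimal sketch of how such an angle-and-distance description could be converted into a pixel offset, assuming screen coordinates that grow rightward (x) and downward (y); the coordinate convention and function name are illustrative assumptions, not part of the disclosure:

```python
import math

def relative_offset(angle_deg: float, distance_px: float):
    """Convert "angle_deg to the lower left, distance_px away" into an
    (dx, dy) pixel offset, with the angle measured from the leftward
    direction toward downward (screen y grows downward)."""
    rad = math.radians(angle_deg)
    dx = -distance_px * math.cos(rad)   # leftward component
    dy = distance_px * math.sin(rad)    # downward component
    return dx, dy

# Target point 200 px from the preview's lower-left corner, 30 degrees
# to its lower left; the corner coordinates are hypothetical.
dx, dy = relative_offset(30, 200)
corner = (150, 400)
target_point = (corner[0] + dx, corner[1] + dy)
```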
  • the terminal moves the position of the preview image of the additional object in the video playing interface in response to the drag operation of the preview image of the attached object.
  • the user can separately perform positional movement on the trigger control and the preview image of the attached object.
  • FIG. 7 shows a display interface diagram of a trigger control and an additional object preview image according to an embodiment of the present application.
  • the terminal plays a video screen 71 in the video playing interface 70.
  • The video picture 71 includes a trigger control 72 and a preview image 73 of the additional object.
  • When the user drags the trigger control 72, the trigger control 72 moves to the new position while the position of the preview image 73 of the additional object does not change; correspondingly, when the user drags the preview image 73, the preview image 73 moves to the new position while the position of the trigger control 72 stays unchanged.
  • Step 304 Track the target object in each picture frame in the video to obtain first display information, where the first display information is used to indicate at least one of a display position, a display size, and a display posture of the target object in the respective picture frames.
  • Each of the picture frames may be all picture frames in the video, or each of the picture frames may be a partial picture frame in the video.
  • each of the picture frames may be all picture frames in the video after the reference picture frame, or each picture frame may be in front of the reference picture frame in the video. All of the picture frames, or the above picture frames may also be picture frames in the video for a period of time before or after the reference picture frame.
  • the terminal tracks the target object in each picture frame in the video by using a CMT algorithm, to obtain the first display information.
  • the tracking algorithm can adopt the CMT algorithm for the operating systems of different terminals, that is, the solution shown in this application can support multiple platforms.
  • When tracking the target object in each picture frame by the CMT algorithm, the terminal first performs initialization processing on the algorithm, the image to be tracked, and the tracking area, and then performs matching processing on each subsequent picture frame.
  • Reference may be made to FIG. 8 and FIG. 9, where FIG. 8 shows the initialization process of the CMT algorithm according to an embodiment of the present application, and FIG. 9 shows the processing flow of the CMT algorithm for each picture frame (matching the area in the picture frame).
  • Optionally, the feature detection algorithm used to initialize the image may be the Features from Accelerated Segment Test (FAST) algorithm, and the feature point description algorithm may be the Binary Robust Invariant Scalable Keypoints (BRISK) algorithm.
  • The terminal detects the feature points of the first frame image, including foreground points of interest (points inside the target selection box) and background feature points (points outside the target selection box), and constructs a potential database (database_potential), a data collection containing the foreground and background points.
  • the purpose of the consistency calculation is to estimate the points of the target area according to the rotation and the scaling.
  • Local matching refers to comparing the Euclidean distance between each key point detected in the current frame and all foreground key points of the first frame after rotation and scaling; if the Euclidean distance is less than a threshold (for example, preset to 20), the foreground key point is likely to match, and these possible foreground key points are built into the feature description library database_potential. The feature descriptor of each key point in the current frame is then matched against database_potential using knnMatch, with each feature descriptor finding the best several (for example, 2) matching results in database_potential.
  • the strategy for excluding unstable key points is similar to global matching.
  • Through the above process, the terminal can track information such as the position, size, and posture of the target object in each picture frame of the video (i.e., the first display information).
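  • As an illustration, a heavily reduced sketch of these initialization and matching steps using OpenCV's FAST detector, BRISK descriptor, and knnMatch is given below; it omits the consistency calculation and unstable-point exclusion, and is not the patent's full implementation:

```python
import cv2

detector = cv2.FastFeatureDetector_create()
brisk = cv2.BRISK_create()

def init_tracking(first_frame_gray, box):
    """Detect keypoints in the first frame and split them into foreground
    points (inside the target selection box) and background points;
    together they form the potential database (database_potential)."""
    x, y, w, h = box
    kps = detector.detect(first_frame_gray)
    kps, desc = brisk.compute(first_frame_gray, kps)
    fg = [i for i, kp in enumerate(kps)
          if x <= kp.pt[0] <= x + w and y <= kp.pt[1] <= y + h]
    return kps, desc, fg

def match_frame(frame_gray, database_potential_desc):
    """Describe keypoints in the current frame and find, for each
    descriptor, its best 2 matches in database_potential (knnMatch)."""
    kps = detector.detect(frame_gray)
    kps, desc = brisk.compute(frame_gray, kps)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    return kps, matcher.knnMatch(desc, database_potential_desc, k=2)
```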
  • Step 305 Generate second display information according to the first display information, where the second display information is used to indicate at least one of a display position, a display size, and a display posture of the additional object in the respective picture frames.
  • Optionally, the first display information includes the pixel coordinates of the target point in the target object in the respective picture frames. The terminal acquires the pixel coordinates of the additional object in the respective picture frames according to the pixel coordinates of the target point in each picture frame and the relative position information between the additional object and the target point, and generates the second display information including the pixel coordinates of the additional object in the respective picture frames.
  • Optionally, the location information indicated by the first display information may be the pixel coordinates of the target point in the target object in the respective picture frames, where the target point may be the position point in the target object corresponding to the end position of the user's drag operation on the trigger control.
  • For example, assuming the target object is a character A and the end position of the drag operation corresponds to the nose of character A, the target point in each picture frame containing character A is the position point of character A's nose in that picture frame, and the generated first display information includes the pixel coordinates of that position point in each picture frame.
  • The terminal subsequently acquires the pixel coordinates of the additional object in the respective picture frames according to the pixel coordinates of the nose position point in each picture frame and the relative position information between the additional object and the target point, as sketched below.
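  • A minimal sketch of this step follows: the sticker's per-frame pixel coordinates (the second display information) are derived from the tracked target point's coordinates (the first display information) plus the fixed offset captured at drag time; names and data shapes are illustrative assumptions:

```python
# Hypothetical sketch: first_display_info maps each frame timestamp to
# the target point's (x, y) pixel coordinates; offset is the relative
# position information between the additional object and the target point.
def second_display_info(first_display_info, offset):
    dx, dy = offset
    return {ts: (x + dx, y + dy)
            for ts, (x, y) in first_display_info.items()}

first = {0: (400, 220), 33: (404, 223)}   # nose position per timestamp
sticker_coords = second_display_info(first, offset=(-60, -80))
```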
  • As described above, the terminal acquires the display position of the preview image in the reference picture frame, and obtains the relative position information between the additional object and the target point according to that display position and the end position of the drag operation in the reference picture frame.
  • For example, if the lower left corner of the preview image's display position in the reference picture frame coincides with the target point (i.e., the position point of the trigger control after the drag), the relative position information between the additional object and the target point may be: the target point coincides with the lower left corner of the additional object.
  • Optionally, the first display information includes the display sizes of the target object in the respective picture frames, which may differ from frame to frame. When generating the second display information according to the first display information, the terminal calculates the zoom ratio of the additional object in each picture frame according to the display size of the target object in that picture frame and the original size of the target object (i.e., the multiple relationship between the two sizes, the above-mentioned zoom ratio), where the original size of the target object is its display size in the reference picture frame. The terminal then obtains the display size of the additional object in each picture frame according to the original size of the additional object and the zoom ratio, and generates the second display information including the display size of the additional object in the respective picture frames. In this way, the additional object is zoomed along with the size change of the target object in different picture frames, as sketched below.
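  • The size calculation can be sketched as follows, assuming uniform scaling and (width, height) sizes in pixels; the function name and data shapes are illustrative, not from the disclosure:

```python
# Sketch of the size calculation above: the zoom ratio in each picture
# frame is the target object's tracked display width divided by its
# original width in the reference frame, and the sticker's display size
# scales by the same ratio (uniform scaling is assumed).
def sticker_sizes(target_sizes, target_original_w, sticker_original):
    sw, sh = sticker_original
    result = {}
    for ts, (tw, _th) in target_sizes.items():
        ratio = tw / target_original_w        # zoom ratio for this frame
        result[ts] = (sw * ratio, sh * ratio)
    return result
```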
  • Optionally, the first display information includes the display position and the display posture of the target object in the respective picture frames, which may also differ from frame to frame. When generating the second display information, the terminal acquires the display position and display posture of the additional object in each picture frame according to the display position and display posture of the target object in that picture frame and the relative position information between the additional object and the target point, and generates the second display information including the display position and display posture of the additional object in the respective picture frames. In this way, the position and posture of the additional object change (for example, deflect) along with the position and posture of the target object in different picture frames, as sketched below.
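  • A sketch of this placement step follows, assuming the posture is a single rotation angle and that the sticker's offset from the target point rotates by the same angle; these assumptions are illustrative:

```python
import math

# Place and rotate the sticker using the target's tracked position and
# posture (rotation angle). The offset between sticker and target point
# is rotated by the same angle, so the sticker deflects with the target.
def place_sticker(target_pos, target_angle_deg, offset):
    tx, ty = target_pos
    dx, dy = offset
    a = math.radians(target_angle_deg)
    rx = dx * math.cos(a) - dy * math.sin(a)
    ry = dx * math.sin(a) + dy * math.cos(a)
    return (tx + rx, ty + ry), target_angle_deg   # position, posture
```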
  • Optionally, when tracking the target object in each picture frame in the video to obtain the first display information, the terminal tracks the target object frame by frame, starting from the reference picture frame, in play-time order and/or in reverse play-time order.
  • That is, the terminal may track the target object only in the picture frames after the reference picture frame, in play-time order; or, starting from the reference picture frame, track the target object only in the picture frames before the reference picture frame, in reverse play-time order; or, starting from the reference picture frame, track the target object in the picture frames both before and after it, in reverse play-time order and in play-time order respectively.
  • Optionally, the target object may be tracked only within a certain range before and/or after the reference picture frame; for example, the terminal may determine the tracked range from the play time point corresponding to the reference picture frame according to a preset tracking duration (for example, 5 s).
  • Optionally, the terminal may also track the target object frame by frame in play-time order starting from the reference picture frame: if the target object is tracked in a picture frame, tracking continues in the next picture frame; if the target object is not tracked in a certain picture frame, the terminal stops tracking, as sketched below.
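  • A sketch of this stop-on-loss strategy follows, where the hypothetical track(frame) stands in for one tracking update that returns None when the target is lost:

```python
# Track frame by frame in play-time order from the reference frame and
# stop as soon as the target is not found in some frame.
def track_until_lost(frames_after_reference, track):
    first_display_info = {}
    for timestamp, frame in frames_after_reference:
        result = track(frame)             # position/size/posture or None
        if result is None:
            break                         # target lost: stop tracking
        first_display_info[timestamp] = result
    return first_display_info
```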
  • Step 306 When playing the video, display the additional object in the respective picture frames according to the second display information.
  • In the embodiment of the present application, the object tracking (i.e., the process of generating the first display information and the second display information) and the displaying of the additional object in the respective picture frames may be two parallel processes.
  • FIG. 10 illustrates an object tracking flowchart according to an embodiment of the present application. As shown in FIG. 10 , the method of the object tracking process may be described as follows:
  • Step 101 After the target object corresponding to the sticker is obtained and the operation of starting tracking is detected (for example, the user clicks an area of the video play interface other than the sticker), tracking starts: the terminal acquires the frame image A displayed while the video is still and the area B tracked by the sticker, initializes the tracking objects according to image A and area B (two tracking objects are created so that tracking can run in both directions), and acquires the current time stamp C of the video and the video duration D.
  • Step 102 Two threads are opened, and the video is decoded forward and backward from the time stamp C: one thread decodes from the time stamp C back to time 0, and the other thread decodes from the time stamp C to the video duration D.
  • Step 103 The two threads decode the video to obtain each frame image and its corresponding time stamp, and the terminal hands each frame image to the tracking object for tracking processing.
  • Step 104 The tracking processing yields one of two results. If the target object is tracked in the image, the center point and the zoom value are obtained, tagged with the time stamp corresponding to the frame image, marked as "sticker displayed", and saved to the dictionary.
  • Step 105 If the target object is not tracked, the result is likewise time-stamped, marked as "sticker not displayed", and saved to the dictionary.
  • After the tracking is completed, the terminal obtains a dictionary with the time stamp as the key and the tracking data as the value. The tracking data includes information indicating whether the target object was tracked (used to control whether the sticker is displayed at the corresponding time stamp), as well as the center point and the zoom value (used to control the position and size of the sticker).
  • After that, the video preview interface continues to play. During playback, each rendered frame has a timestamp; the terminal looks up the tracking data corresponding to the timestamp and, if it exists, changes the sticker's properties according to the tracking data, while a dynamic sticker also changes its displayed image based on the timestamp.
  • During preview, the sticker and the video are two separate views, so the sticker can be processed continuously; the video image and the sticker are synthesized when the final video is generated and displayed. Each frame has a time stamp, and according to this timestamp the terminal obtains the sticker's information (position, size, and sticker image), generates the changed sticker, and then merges it with the video frame, as sketched below.
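  • A sketch of the per-frame rendering lookup follows, reusing the hypothetical dictionary fields from the earlier sketch; draw_sticker is an assumed compositing helper, not an API from the patent:

```python
# For each video frame's timestamp, look up the tracking data and, if
# the sticker is visible, draw it onto the frame at the recorded center
# with the recorded scale.
def render_frame(frame, timestamp, tracking_data, sticker_image, draw_sticker):
    entry = tracking_data.get(timestamp)
    if entry is None or not entry["visible"]:
        return frame                      # no tracking data: frame unchanged
    return draw_sticker(frame, sticker_image,
                        center=entry["center"], scale=entry["scale"])
```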
  • In summary, the terminal displays a trigger control in the video play interface in advance; when the user performs the activation operation on the trigger control, video playback is paused, a subsequent drag operation on the trigger control is received, and the display object corresponding to the end position of the drag operation is determined as the target object.
  • The additional object is then displayed corresponding to the same target object in each picture frame during subsequent playback, maintaining the effect that the additional object matches the video picture while the video plays.
  • The user can add an additional object through activation and drag operations in the current interface, improving the efficiency of adding and displaying additional objects.
  • The technical solution mainly designs the functions and interactions of the stickers (that is, the additional display objects), and maintains good accuracy in object tracking.
  • The solution supports the tracking of both static stickers and dynamic stickers, and can track the target through the entire video. In the interaction, the user can click the nail (i.e., the touch component corresponding to the additional display object) to freeze the picture, drag the sticker to calibrate the target position, and click any other area to start the tracking processing; playback continues after the processing is completed, and the attributes of the sticker change according to the tracking data during playback.
  • The dynamic sticker resource may use a file in the Animated Portable Network Graphics (APNG) format. After a dynamic sticker is added, the terminal decodes the APNG file and then renders the corresponding image frame according to the timestamp; that is, the timestamp used during video rendering is used to find the corresponding APNG image. APNG is a bitmap animation extension of PNG that enables animated picture effects in the PNG format.
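  • The mapping from a render timestamp to an APNG frame can be sketched as follows. This is a hedged illustration that assumes the APNG has already been decoded into a list of (image, duration) pairs and that the animation loops over the video timeline; the patent does not specify these details.

```python
def apng_frame_at(frames, timestamp_ms):
    """Pick the APNG image to display at a given video render timestamp.

    frames: list of (image, duration_ms) pairs obtained by decoding the APNG file.
    """
    total = sum(duration for _, duration in frames)   # length of one animation cycle
    t = timestamp_ms % total                          # loop the animation over the video
    elapsed = 0
    for image, duration in frames:
        elapsed += duration
        if t < elapsed:
            return image
    return frames[-1][0]                              # fallback for rounding at the boundary
```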
  • The solution provided by the embodiment shown in FIG. 3 above can be applied to a scenario in which sticker effects are added after a short video is captured. That is, static or dynamic stickers can be added after shooting, a tracking target can be selected as needed, and tracking is then performed in the close-following mode.
  • The above close-following mode refers to a mode in which the sticker (the additional object) is displayed tightly attached to the target object in each picture frame.
  • Taking the case in which the additional object is a sticker as an example, FIG. 11 illustrates an operation diagram of a close-following mode provided by an exemplary embodiment of the present application. As shown in FIG. 11, the main steps for using the sticker's close-following mode are as follows:
  • 1. After a video is obtained (for example, captured), the terminal automatically enters the video editing interface, and a sticker (static or dynamic) is added in the editing bar.
  • 2. The user taps the sticker to be edited to enter the editing state, and an edit box appears.
  • 3. During video playback, if there is a tracking target to be selected, the user can tap the nail button at the lower left of the edit box (that is, the trigger control); the video preview interface stops playing, and a transparent gray mask appears below the sticker so that the tracking operation is displayed more intuitively.
  • 4. The user drags the sticker for precise positioning and then taps an area other than the sticker.
  • 5. The preview interface enters a loading state and performs the tracking processing; after the processing is completed, the video resumes playback, and the selected sticker changes its position and size according to the tracking data.
  • In actual use, the sticker's close-following mode can be used to select a face, thereby blocking the face and producing an entertaining video picture with a dynamic sticker. Besides face occlusion, it can also be used in other scenes, such as occluding objects and so on.
  • FIG. 12 illustrates an operation diagram of another close-following mode provided by an exemplary embodiment of the present application. As shown in FIG. 12, during video playback, an icon of a nail (that is, the trigger control) and a sticker are displayed on the upper layer of the picture. When the user taps the nail, the video pauses and the picture is stilled. On the still picture, the user can drag the sticker to select the target that the sticker follows (that is, the head of the character in FIG. 12). After the user taps an area of the still picture other than the sticker and the nail, the video resumes playing, and the sticker starts to follow the selected target (the character's head).
  • The solution provided by the embodiment shown in FIG. 3 above can also be applied to a scenario in which sticker effects are added after a short video is captured. That is, static or dynamic stickers can be added after shooting, a tracking target can be selected as needed, and tracking is then performed in the kite mode.
  • The kite mode refers to a mode in which the sticker moves and changes along with the target object but does not block the target object. Strictly speaking, the close-following mode can be regarded as a special case of the kite mode in which the tracked target area is the area where the sticker itself is located, whereas in the kite mode the selected area can be dragged at will, and the sticker only makes appropriate position and size changes according to its relative position information with respect to the selected area.
  • FIG. 13 shows an operation diagram of a kite mode provided by an exemplary embodiment of the present application. As shown in FIG. 13, the kite mode works as follows:
  • 1. After a video is obtained (for example, captured), the terminal automatically enters the video editing interface, and a sticker (static or dynamic) is added in the editing bar.
  • 2. The user taps the sticker to be edited to enter the editing state, and an edit box appears.
  • 3. During video playback, if there is a tracking target to be selected, the user can tap the nail button at the lower left of the edit box (that is, the trigger control); the video preview interface stops playing, and a transparent gray mask appears below the sticker so that the tracking operation is displayed more intuitively.
  • 4. The user drags the nail; in the selected state, the nail can be dragged out independently of the sticker and dropped onto the target to be tracked, while the sticker is dragged for precise positioning. The terminal records the relative position of the sticker and the nail, and the user then taps an area other than the sticker.
  • 5. The preview interface enters a loading state and performs the tracking processing; after the processing is completed, the video resumes playback, and the selected sticker changes its position and size according to the tracking data.
  • In terms of usage scenarios, the kite mode allows a dynamic sticker to follow a tracked target. For example, when a building is tracked, the sticker can be displayed as a geographic label used to mark the building and the like.
  • FIG. 14 illustrates an operation diagram of another kite mode according to an exemplary embodiment of the present application.
  • As shown in FIG. 14, during video playback, an icon of a nail (that is, the trigger control) and a sticker are displayed on the upper layer of the picture. Unlike in FIG. 12, in the mode shown in FIG. 14, when the picture is stilled, the user can press and hold the nail to drag it off the sticker. While the user drags the nail to select the target corresponding to the sticker, a line can be displayed between the nail and the sticker to indicate the relative positional relationship between them. After the user finishes dragging and releases the finger, the nail is displayed with a flashing animation. After the user taps an area of the still picture other than the sticker and the nail, the video resumes playing, and the sticker starts to follow the selected target in a kite-like form of movement.
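  • The kite-mode behavior, in which the sticker keeps its offset from the tracked area instead of covering it, reduces to a small amount of arithmetic. The sketch below shows one reasonable way to apply the stored offset; it is an assumption rather than the patent's formula. Here offset is the vector from the nail (the tracked point) to the sticker recorded when the user pinned it, scaled with the tracked target so the sticker trails it like a kite.

```python
def kite_position(tracked_center, zoom, offset):
    """Place the sticker at a scaled offset from the tracked point.

    tracked_center: (x, y) of the nail's target in the current frame
    zoom:           tracked scale relative to the reference frame
    offset:         (dx, dy) from the nail to the sticker in the reference frame
    """
    return (tracked_center[0] + offset[0] * zoom,
            tracked_center[1] + offset[1] * zoom)

# Example: the sticker was pinned 120 px left of and 40 px above the nail.
print(kite_position((400, 300), 1.5, (-120, -40)))  # -> (220.0, 240.0)
```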
  • The solution proposed in the present application aims to solve the problem of low efficiency in adding and displaying additional objects during the video editing stage. It provides the user with two tracking modes (the close-following mode and the kite mode) along with a friendly interaction for accurately selecting the tracking target, and it tracks the entire short video no matter at which timestamp of the short video the tracking target is selected. The solution supports tracking of dynamic stickers, which adds entertainment value, while also ensuring tracking accuracy.
  • FIG. 15 is a structural block diagram of an additional object display apparatus according to an exemplary embodiment. The apparatus can be used in a terminal to perform all or part of the steps of the method shown in any of the embodiments of FIG. 1 or FIG. 3. The additional object display apparatus may include:
  • a control display module 1501, configured to display a trigger control in a video play interface, where the video play interface is used to play a video;
  • a pause module 1502, configured to pause playing the video in response to an activation operation on the trigger control, and to display a reference picture frame in the video play interface, where the reference picture frame is the picture frame in the video corresponding to the pause time point;
  • an object obtaining module 1503, configured to acquire a target object in response to a drag operation on the trigger control, the target object being a display object in the reference picture frame corresponding to an end position of the drag operation; and
  • an object display module 1504, configured to display, when the video is played, an additional object corresponding to the trigger control, corresponding to the target object, in picture frames of the video.
  • Optionally, the apparatus further includes: a tracking module, configured to, before the object display module 1504 displays the additional object corresponding to the trigger control corresponding to the target object in the video, track the target object in each picture frame of the video to obtain first display information, where the first display information is used to indicate at least one of a display position, a display size, and a display posture of the target object in the respective picture frames;
  • an information generating module, configured to generate second display information according to the first display information, where the second display information is used to indicate at least one of a display position, a display size, and a display posture of the additional object in the respective picture frames;
  • where the object display module 1504 is specifically configured to display, when the video is played, the additional object in the respective picture frames according to the second display information.
  • Optionally, the first display information includes pixel coordinates of a target point in the target object in the respective picture frames, where the target point is a position point in the target object corresponding to the end position of the drag operation. The information generating module is specifically configured to: acquire pixel coordinates of the additional object in the respective picture frames according to the pixel coordinates of the target point in the respective picture frames and relative position information between the additional object and the target point; and generate the second display information containing the pixel coordinates of the additional object in the respective picture frames.
  • Optionally, the apparatus further includes: a preview image display module, configured to display a preview image of the additional object in the video play interface;
  • a display position obtaining module, configured to acquire a display position of the preview image in the reference picture frame; and
  • a relative position obtaining module, configured to acquire the relative position information between the additional object and the target point according to the display position of the preview image in the reference picture frame and the end position of the drag operation in the reference picture frame.
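  • Read together, these modules record at pin time where the sticker preview sits relative to the drag end position, and later re-apply that offset to the tracked target point in every frame. A hedged Python sketch (the function names are illustrative, not from the patent):

```python
def relative_position(preview_pos, drag_end_pos):
    """Offset of the sticker preview from the target point, measured in the reference frame."""
    return (preview_pos[0] - drag_end_pos[0],
            preview_pos[1] - drag_end_pos[1])

def sticker_pixel_coords(target_point_per_frame, rel_pos):
    """First display info (target point per frame) -> second display info (sticker per frame)."""
    return {ts: (x + rel_pos[0], y + rel_pos[1])
            for ts, (x, y) in target_point_per_frame.items()}
```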
  • Optionally, the apparatus further includes: a moving module, configured to move a position of the preview image of the additional object in the video play interface in response to a drag operation on the preview image of the additional object.
  • Optionally, the first display information includes a display size of the target object in the respective picture frames, and the information generating module is specifically configured to: calculate a zoom ratio of the additional object in the respective picture frames according to the display size of the target object in the respective picture frames and an original size of the target object, the original size of the target object being the display size of the target object in the reference picture frame; acquire a display size of the additional object in the respective picture frames according to an original size of the additional object and the zoom ratio; and generate the second display information containing the display size of the additional object in the respective picture frames.
  • Optionally, the first display information includes a display position and a display posture of the target object in the respective picture frames, and the information generating module is specifically configured to: acquire a display position and a display posture of the additional object in the respective picture frames according to the display position and the display posture of the target object in the respective picture frames and the relative position information between the additional object and the target point; and generate the second display information containing the display position and the display posture of the additional object in the respective picture frames.
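  • The size computation above is a plain ratio. As a sketch, assuming sizes are (width, height) pixel pairs and the ratio is taken from the width (the patent does not specify these details):

```python
def sticker_size_per_frame(target_sizes, target_original, sticker_original):
    """Scale the sticker by the same zoom ratio as the tracked target.

    target_sizes:     {timestamp: (w, h)} display size of the target in each frame
    target_original:  (w, h) of the target in the reference picture frame
    sticker_original: (w, h) of the sticker as placed by the user
    """
    sizes = {}
    for ts, (w, h) in target_sizes.items():
        ratio = w / target_original[0]           # zoom ratio relative to the reference frame
        sizes[ts] = (int(sticker_original[0] * ratio),
                     int(sticker_original[1] * ratio))
    return sizes
```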
  • Optionally, the tracking module is specifically configured to track the target display object frame by frame in the respective picture frames, starting from the reference picture frame, in play time order and/or in reverse play time order, to obtain the first display information.
  • Optionally, the tracking module is specifically configured to track the target object in the respective picture frames of the video by using a Clustering of Static-Adaptive Correspondences for Deformable Object Tracking (CMT) algorithm, to obtain the first display information.
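  • CMT combines keypoint matching against the first frame with optical-flow prediction from the previous frame. A full implementation is beyond a short example, but the skeleton of one tracking step can be sketched with OpenCV. This is a simplified stand-in, not the patent's code: it assumes the reference frame and target box are available, and it omits CMT's rotation/scale estimation and consensus clustering, averaging the fused points instead.

```python
import cv2
import numpy as np

detector = cv2.FastFeatureDetector_create()   # FAST keypoint detection
descriptor = cv2.BRISK_create()               # BRISK keypoint description
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def init(first_gray, box):
    """Describe the foreground keypoints inside the target box of the reference frame."""
    x, y, w, h = box
    kps = detector.detect(first_gray)
    fg = [k for k in kps if x <= k.pt[0] <= x + w and y <= k.pt[1] <= y + h]
    return descriptor.compute(first_gray, fg)  # -> (keypoints, descriptors)

def track_step(prev_gray, gray, prev_pts, fg_desc):
    """One frame: global matching fused with optical-flow tracking."""
    # 1. Globally match the current frame's keypoints against the reference foreground.
    kps = detector.detect(gray)
    kps, desc = descriptor.compute(gray, kps)
    matches = matcher.match(desc, fg_desc)
    matched = np.float32([kps[m.queryIdx].pt for m in matches]).reshape(-1, 2)
    # 2. Predict last frame's points in this frame with Lucas-Kanade optical flow.
    flow, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    flowed = flow[status.ravel() == 1].reshape(-1, 2)
    # 3. Fuse both point sets and estimate the new center (CMT proper votes for
    #    a consensus center and also estimates rotation and scale here).
    pts = np.vstack([matched, flowed]) if len(matched) else flowed
    center = pts.mean(axis=0)
    return center, pts.reshape(-1, 1, 2).astype(np.float32)
```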
  • Optionally, the apparatus further includes: a switching control display module, configured to display, in the video play interface, a switching control corresponding to the trigger control;
  • a selection interface display module, configured to display an additional object selection interface in response to an activation operation on the switching control, where the additional object selection interface includes at least two candidate objects; and
  • an additional object obtaining module configured to acquire, as a new additional object corresponding to the trigger control, the candidate object corresponding to the selection operation in response to the selecting operation in the additional object selection interface.
  • the additional object is a static display object, or the additional object is a dynamic display object.
  • FIG. 16 is a block diagram showing the structure of a terminal 1600 provided by an exemplary embodiment of the present application.
  • The terminal 1600 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The terminal 1600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
  • the terminal 1600 includes a processor 1601 and a memory 1602.
  • the processor 1601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • The processor 1601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1601 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), while the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
  • Memory 1602 can include one or more computer readable storage media, which can be non-transitory.
  • The memory 1602 may also include high-speed random access memory, as well as non-volatile memory such as one or more magnetic disk storage devices and flash storage devices. In some embodiments, the non-transitory computer readable storage medium in the memory 1602 is configured to store at least one instruction, which is executed by the processor 1601 to implement the additional object display method provided by the method embodiments of the present application.
  • the terminal 1600 optionally further includes: a peripheral device interface 1603 and at least one peripheral device.
  • the processor 1601, the memory 1602, and the peripheral device interface 1603 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1603 via a bus, signal line or circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 1604, a touch display screen 1605, a camera 1606, an audio circuit 1607, a positioning component 1608, and a power source 1609.
  • Peripheral device interface 1603 can be used to connect at least one peripheral device associated with I/O (Input/Output) to processor 1601 and memory 1602.
  • In some embodiments, the processor 1601, the memory 1602, and the peripheral device interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602, and the peripheral device interface 1603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the RF circuit 1604 is configured to receive and transmit an RF (Radio Frequency) signal, also referred to as an electromagnetic signal.
  • Radio frequency circuit 1604 communicates with the communication network and other communication devices via electromagnetic signals.
  • the RF circuit 1604 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 1604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 1604 can communicate with other terminals via at least one wireless communication protocol.
  • the wireless communication protocols include, but are not limited to, the World Wide Web, a metropolitan area network, an intranet, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks.
  • the RF circuit 1604 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
  • the display 1605 is used to display a UI (User Interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • When the display 1605 is a touch display screen, it also has the ability to capture touch signals on or above its surface. Such a touch signal can be input to the processor 1601 as a control signal for processing.
  • the display 1605 can also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards.
  • In some embodiments, there may be one display screen 1605, disposed on the front panel of the terminal 1600; in other embodiments, there may be at least two display screens 1605, respectively disposed on different surfaces of the terminal 1600 or in a folded design; in still other embodiments, the display screen 1605 may be a flexible display screen disposed on a curved surface or a folded surface of the terminal 1600. The display screen 1605 may even be set in a non-rectangular irregular shape, that is, a special-shaped screen.
  • the display 1605 can be made of a material such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
  • Camera component 1606 is used to capture images or video.
  • camera assembly 1606 includes a front camera and a rear camera.
  • the front camera is placed on the front panel of the terminal, and the rear camera is placed on the back of the terminal.
  • In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement a background blur function by fusing the main camera with the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting by fusing the main camera with the wide-angle camera, or other fused shooting functions.
  • camera assembly 1606 can also include a flash.
  • The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
  • the audio circuit 1607 can include a microphone and a speaker.
  • The microphone is configured to collect sound waves from the user and the environment and convert them into electrical signals to be input to the processor 1601 for processing, or to be input to the radio frequency circuit 1604 to implement voice communication. For stereo collection or noise reduction, there may be multiple microphones, respectively disposed at different parts of the terminal 1600.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is then used to convert electrical signals from the processor 1601 or the RF circuit 1604 into sound waves.
  • the speaker can be a conventional film speaker or a piezoelectric ceramic speaker.
  • the audio circuit 1607 can also include a headphone jack.
  • the location component 1608 is used to locate the current geographic location of the terminal 1600 to implement navigation or LBS (Location Based Service).
  • the positioning component 1608 can be a positioning component based on a US-based GPS (Global Positioning System), a Chinese Beidou system, or a Russian Galileo system.
  • a power supply 1609 is used to power various components in the terminal 1600.
  • the power source 1609 can be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • A wired rechargeable battery is a battery charged through a wired line, and a wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • terminal 1600 also includes one or more sensors 1610.
  • the one or more sensors 1610 include, but are not limited to, an acceleration sensor 1611, a gyro sensor 1612, a pressure sensor 1613, a fingerprint sensor 1614, an optical sensor 1615, and a proximity sensor 1616.
  • the acceleration sensor 1611 can detect the magnitude of the acceleration on the three coordinate axes of the coordinate system established by the terminal 1600.
  • the acceleration sensor 1611 can be used to detect components of gravity acceleration on three coordinate axes.
  • the processor 1601 can control the touch display screen 1605 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 1611.
  • the acceleration sensor 1611 can also be used for the acquisition of game or user motion data.
  • The gyro sensor 1612 can detect the body direction and rotation angle of the terminal 1600, and can cooperate with the acceleration sensor 1611 to collect the user's 3D actions on the terminal 1600. Based on the data collected by the gyro sensor 1612, the processor 1601 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1613 can be disposed on a side border of the terminal 1600 and/or a lower layer of the touch display screen 1605.
  • When the pressure sensor 1613 is disposed on the side frame of the terminal 1600, it can detect the user's holding signal on the terminal 1600, and the processor 1601 performs left/right hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 1613. When the pressure sensor 1613 is disposed at the lower layer of the touch display screen 1605, the processor 1601 controls the operability controls on the UI according to the user's pressure operation on the touch display screen 1605.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • The fingerprint sensor 1614 is configured to collect the user's fingerprint, and the processor 1601 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1614, or the fingerprint sensor 1614 itself identifies the user's identity according to the collected fingerprint. Upon identifying the user's identity as a trusted identity, the processor 1601 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1614 may be disposed on the front, back, or side of the terminal 1600. When a physical button or a manufacturer logo is disposed on the terminal 1600, the fingerprint sensor 1614 may be integrated with the physical button or the manufacturer logo.
  • Optical sensor 1615 is used to collect ambient light intensity.
  • In one embodiment, the processor 1601 can control the display brightness of the touch display screen 1605 according to the ambient light intensity collected by the optical sensor 1615: when the ambient light intensity is high, the display brightness of the touch display screen 1605 is increased; when the ambient light intensity is low, the display brightness is decreased.
  • the processor 1601 can also dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity acquired by the optical sensor 1615.
  • The proximity sensor 1616, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1600 and is configured to collect the distance between the user and the front of the terminal 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front of the terminal 1600 gradually decreases, the processor 1601 controls the touch display screen 1605 to switch from the screen-on state to the screen-off state; when the proximity sensor 1616 detects that the distance gradually increases, the processor 1601 controls the touch display screen 1605 to switch from the screen-off state to the screen-on state.
  • Those skilled in the art will understand that the structure shown in FIG. 16 does not constitute a limitation on the terminal 1600, which may include more or fewer components than illustrated, combine certain components, or employ a different component arrangement.
  • In an exemplary embodiment, a non-transitory computer readable storage medium including instructions is further provided, for example, a memory including at least one instruction, at least one program, a code set, or an instruction set. The at least one instruction, the at least one program, the code set, or the instruction set may be executed by a processor to perform all or part of the steps of the method shown in any of the embodiments of FIG. 1 or FIG. 3 above. For example, the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

The present application relates to an additional object display method, apparatus, and computer device, and pertains to the field of computer application technologies. The method includes: displaying a trigger control in a video play interface; in response to an activation operation on the trigger control, pausing playback of the video and displaying a reference picture frame; in response to a drag operation on the trigger control, acquiring a target object in the reference picture frame; and, when the video is played, displaying an additional object corresponding to the trigger control, corresponding to the target object, in the picture frames of the video, thereby achieving the effect of keeping the additional object matched with the video playback picture during video playback.


Claims (15)

  1. An additional object display method, the method being performed by a terminal and comprising:
    displaying a trigger control in a video play interface, the video play interface being used to play a video;
    in response to an activation operation on the trigger control, pausing playback of the video and displaying a reference picture frame in the video play interface, the reference picture frame being a picture frame in the video corresponding to the pause time point;
    in response to a drag operation on the trigger control, acquiring a target object, the target object being a display object in the reference picture frame corresponding to an end position of the drag operation; and
    when playing the video, displaying an additional object corresponding to the trigger control, corresponding to the target object, in picture frames of the video.
  2. The method according to claim 1, wherein before displaying the additional object corresponding to the trigger control corresponding to the target object in the video, the method further comprises:
    tracking the target object in respective picture frames of the video to obtain first display information, the first display information being used to indicate at least one of a display position, a display size, and a display posture of the target object in the respective picture frames; and
    generating second display information according to the first display information, the second display information being used to indicate at least one of a display position, a display size, and a display posture of the additional object in the respective picture frames;
    wherein displaying the additional object corresponding to the trigger control corresponding to the target object in the picture frames of the video when playing the video comprises:
    when playing the video, displaying the additional object in the respective picture frames according to the second display information.
  3. The method according to claim 2, wherein the first display information comprises pixel coordinates of a target point in the target object in the respective picture frames, the target point being a position point in the target object corresponding to the end position of the drag operation; and
    generating the second display information according to the first display information comprises:
    acquiring pixel coordinates of the additional object in the respective picture frames according to the pixel coordinates of the target point in the target object in the respective picture frames and relative position information between the additional object and the target point; and
    generating the second display information containing the pixel coordinates of the additional object in the respective picture frames.
  4. The method according to claim 3, further comprising:
    displaying a preview image of the additional object in the video play interface;
    acquiring a display position of the preview image in the reference picture frame; and
    acquiring the relative position information between the additional object and the target point according to the display position of the preview image in the reference picture frame and the end position of the drag operation in the reference picture frame.
  5. The method according to claim 4, further comprising:
    in response to a drag operation on the preview image of the additional object, moving a position of the preview image of the additional object in the video play interface.
  6. The method according to claim 2, wherein the first display information comprises a display size of the target object in the respective picture frames; and
    generating the second display information according to the first display information comprises:
    calculating a zoom ratio of the additional object in the respective picture frames according to the display size of the target object in the respective picture frames and an original size of the target object, the original size of the target object being the display size of the target object in the reference picture frame;
    acquiring a display size of the additional object in the respective picture frames according to an original size of the additional object and the zoom ratio; and
    generating the second display information containing the display size of the additional object in the respective picture frames.
  7. The method according to claim 2, wherein the first display information comprises a display position and a display posture of the target object in the respective picture frames; and
    generating the second display information according to the first display information comprises:
    acquiring a display position and a display posture of the additional object in the respective picture frames according to the display position and the display posture of the target object in the respective picture frames and the relative position information between the additional object and the target point; and
    generating the second display information containing the display position and the display posture of the additional object in the respective picture frames.
  8. The method according to claim 2, wherein tracking the target object in the respective picture frames of the video to obtain the first display information comprises:
    starting from the reference picture frame, tracking the target display object frame by frame in the respective picture frames in play time order and/or in reverse play time order, to obtain the first display information.
  9. The method according to claim 2, wherein tracking the target object in the respective picture frames of the video to obtain the first display information comprises:
    tracking the target object in the respective picture frames of the video by using a Clustering of Static-Adaptive Correspondences for Deformable Object Tracking (CMT) algorithm, to obtain the first display information.
  10. The method according to any one of claims 1 to 9, further comprising:
    displaying, in the video play interface, a switching control corresponding to the trigger control;
    in response to an activation operation on the switching control, displaying an additional object selection interface, the additional object selection interface containing at least two candidate objects; and
    in response to a selection operation in the additional object selection interface, acquiring the candidate object corresponding to the selection operation as a new additional object corresponding to the trigger control.
  11. The method according to any one of claims 1 to 9, wherein the additional object is a static display object, or the additional object is a dynamic display object.
  12. An additional object display apparatus, the apparatus being used in a terminal and comprising:
    a control display module, configured to display a trigger control in a video play interface, the video play interface being used to play a video;
    a pause module, configured to pause playback of the video in response to an activation operation on the trigger control, and display a reference picture frame in the video play interface, the reference picture frame being a picture frame in the video corresponding to the pause time point;
    an object acquiring module, configured to acquire a target object in response to a drag operation on the trigger control, the target object being a display object in the reference picture frame corresponding to an end position of the drag operation; and
    an object display module, configured to display, when the video is played, an additional object corresponding to the trigger control, corresponding to the target object, in picture frames of the video.
  13. The apparatus according to claim 12, further comprising:
    a tracking module, configured to, before the object display module displays the additional object corresponding to the trigger control corresponding to the target object in the video, track the target object in respective picture frames of the video to obtain first display information, the first display information being used to indicate at least one of a display position, a display size, and a display posture of the target object in the respective picture frames; and
    an information generating module, configured to generate second display information according to the first display information, the second display information being used to indicate at least one of a display position, a display size, and a display posture of the additional object in the respective picture frames;
    wherein the object display module is specifically configured to display, when the video is played, the additional object in the respective picture frames according to the second display information.
  14. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the processor to implement the additional object display method according to any one of claims 1 to 11.
  15. A computer readable storage medium, storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the additional object display method according to any one of claims 1 to 11.
PCT/CN2019/070616 2018-01-18 2019-01-07 附加对象显示方法、装置、计算机设备及存储介质 WO2019141100A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020539223A JP7109553B2 (ja) 2018-01-18 2019-01-07 追加オブジェクトの表示方法及びその、装置、コンピュータ装置並びに記憶媒体
EP19740905.5A EP3742743A4 (en) 2018-01-18 2019-01-07 METHOD AND DEVICE FOR DISPLAYING AN ADDITIONAL OBJECT, COMPUTER DEVICE AND STORAGE MEDIA
US15/930,124 US11640235B2 (en) 2018-01-18 2020-05-12 Additional object display method and apparatus, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810050497.6A CN110062269A (zh) 2018-01-18 2018-01-18 附加对象显示方法、装置及计算机设备
CN201810050497.6 2018-01-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/930,124 Continuation US11640235B2 (en) 2018-01-18 2020-05-12 Additional object display method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2019141100A1 true WO2019141100A1 (zh) 2019-07-25

Family

ID=67302011

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/070616 WO2019141100A1 (zh) 2018-01-18 2019-01-07 附加对象显示方法、装置、计算机设备及存储介质

Country Status (5)

Country Link
US (1) US11640235B2 (zh)
EP (1) EP3742743A4 (zh)
JP (1) JP7109553B2 (zh)
CN (1) CN110062269A (zh)
WO (1) WO2019141100A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765015A (zh) * 2019-10-24 2020-02-07 北京云聚智慧科技有限公司 一种对被测应用进行测试的方法和电子设备
CN111901662A (zh) * 2020-08-05 2020-11-06 腾讯科技(深圳)有限公司 视频的扩展信息处理方法、设备和存储介质
CN112218136A (zh) * 2020-10-10 2021-01-12 腾讯科技(深圳)有限公司 视频处理方法、装置、计算机设备及存储介质
CN113158621A (zh) * 2021-05-18 2021-07-23 掌阅科技股份有限公司 书架页面的显示方法、计算设备及计算机存储介质
CN113613067A (zh) * 2021-08-03 2021-11-05 北京字跳网络技术有限公司 视频处理方法、装置、设备及存储介质
EP4068794A4 (en) * 2019-12-30 2022-12-28 Beijing Bytedance Network Technology Co., Ltd. IMAGE PROCESSING METHOD AND APPARATUS
WO2023045825A1 (zh) * 2021-09-27 2023-03-30 北京有竹居网络技术有限公司 基于视频的信息展示方法及装置、电子设备和存储介质
KR102717916B1 (ko) 2019-12-30 2024-10-16 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 이미징 프로세싱 방법 및 장치

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11605137B2 (en) 2019-09-11 2023-03-14 Oracle International Corporation Expense report submission interface
US20210073921A1 (en) * 2019-09-11 2021-03-11 Oracle International Corporation Expense report reviewing interface
CN110782510B (zh) * 2019-10-25 2024-06-11 北京达佳互联信息技术有限公司 一种贴纸生成方法及装置
CN111726701B (zh) * 2020-06-30 2022-03-04 腾讯科技(深圳)有限公司 信息植入方法、视频播放方法、装置和计算机设备
CN111954075B (zh) * 2020-08-20 2021-07-09 腾讯科技(深圳)有限公司 视频处理模型状态调整方法、装置、电子设备及存储介质
CN112100437A (zh) * 2020-09-10 2020-12-18 北京三快在线科技有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN114265534B (zh) * 2020-09-16 2024-09-10 北京小米移动软件有限公司 锁屏界面的显示方法及装置、电子设备及存储介质
CN112181556B (zh) * 2020-09-21 2024-04-19 北京字跳网络技术有限公司 终端控件的处理方法、装置、电子设备及存储介质
CN112153475B (zh) * 2020-09-25 2022-08-05 北京字跳网络技术有限公司 用于生成文字模式的视频的方法、装置、设备和介质
CN112181572B (zh) * 2020-09-28 2024-06-07 北京达佳互联信息技术有限公司 互动特效展示方法、装置、终端及存储介质
CN114546228B (zh) * 2020-11-12 2023-08-25 腾讯科技(深圳)有限公司 表情图像发送方法、装置、设备及介质
CN114584824A (zh) * 2020-12-01 2022-06-03 阿里巴巴集团控股有限公司 数据处理方法、系统、电子设备、服务端及客户端设备
CN112822544B (zh) * 2020-12-31 2023-10-20 广州酷狗计算机科技有限公司 视频素材文件生成方法、视频合成方法、设备及介质
CN113709545A (zh) * 2021-04-13 2021-11-26 腾讯科技(深圳)有限公司 视频的处理方法、装置、计算机设备和存储介质
CN113676765B (zh) * 2021-08-20 2024-03-01 上海哔哩哔哩科技有限公司 交互动画展示方法及装置
CN113873294A (zh) * 2021-10-19 2021-12-31 深圳追一科技有限公司 视频处理方法、装置、计算机存储介质及电子设备
CN114125555B (zh) * 2021-11-12 2024-02-09 深圳麦风科技有限公司 编辑数据的预览方法、终端和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11306354A (ja) * 1998-04-21 1999-11-05 Matsushita Electric Ind Co Ltd 画像認識方法及び画像認識装置
CN1777916A (zh) * 2003-04-21 2006-05-24 日本电气株式会社 识别视频图像对象的设备和方法、应用视频图像注释的设备和方法及识别视频图像对象的程序
CN104346095A (zh) * 2013-08-09 2015-02-11 联想(北京)有限公司 一种信息处理方法及电子设备
CN105955641A (zh) * 2015-03-08 2016-09-21 苹果公司 用于与对象交互的设备、方法和图形用户界面

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6144972A (en) * 1996-01-31 2000-11-07 Mitsubishi Denki Kabushiki Kaisha Moving image anchoring apparatus which estimates the movement of an anchor based on the movement of the object with which the anchor is associated utilizing a pattern matching technique
US10003781B2 (en) * 2006-08-04 2018-06-19 Gula Consulting Limited Liability Company Displaying tags associated with items in a video playback
US8640030B2 (en) * 2007-10-07 2014-01-28 Fall Front Wireless Ny, Llc User interface for creating tags synchronized with a video playback
US9390169B2 (en) * 2008-06-28 2016-07-12 Apple Inc. Annotation of movies
US8522144B2 (en) * 2009-04-30 2013-08-27 Apple Inc. Media editing application with candidate clip management
US20110261258A1 (en) * 2009-09-14 2011-10-27 Kumar Ramachandran Systems and methods for updating video content with linked tagging information
US8910046B2 (en) * 2010-07-15 2014-12-09 Apple Inc. Media-editing application with anchored timeline
US20120272171A1 (en) * 2011-04-21 2012-10-25 Panasonic Corporation Apparatus, Method and Computer-Implemented Program for Editable Categorization
JP2013115691A (ja) * 2011-11-30 2013-06-10 Jvc Kenwood Corp 撮像装置及び撮像装置に用いる制御プログラム
JP2013115692A (ja) * 2011-11-30 2013-06-10 Jvc Kenwood Corp 撮像装置及び撮像装置に用いる制御プログラム
JP5838791B2 (ja) * 2011-12-22 2016-01-06 富士通株式会社 プログラム、画像処理装置及び画像処理方法
CN103797783B (zh) * 2012-07-17 2017-09-29 松下知识产权经营株式会社 评论信息生成装置及评论信息生成方法
JP6179889B2 (ja) * 2013-05-16 2017-08-16 パナソニックIpマネジメント株式会社 コメント情報生成装置およびコメント表示装置
JP5671671B1 (ja) * 2013-12-09 2015-02-18 株式会社Pumo 視聴者用インタフェース装置及びコンピュータプログラム
US20150277686A1 (en) * 2014-03-25 2015-10-01 ScStan, LLC Systems and Methods for the Real-Time Modification of Videos and Images Within a Social Network Format
CN104394313A (zh) * 2014-10-27 2015-03-04 成都理想境界科技有限公司 特效视频生成方法及装置
CN104469179B (zh) * 2014-12-22 2017-08-04 杭州短趣网络传媒技术有限公司 一种将动态图片结合到手机视频中的方法
US20160196584A1 (en) * 2015-01-06 2016-07-07 Facebook, Inc. Techniques for context sensitive overlays
WO2016121329A1 (ja) * 2015-01-29 2016-08-04 パナソニックIpマネジメント株式会社 Image processing apparatus, stylus, and image processing method
CN106034256B (zh) * 2015-03-10 2019-11-05 腾讯科技(北京)有限公司 Video social networking method and apparatus
US20170018289A1 (en) * 2015-07-15 2017-01-19 String Theory, Inc. Emoji as facetracking video masks
CN105068748A (zh) * 2015-08-12 2015-11-18 上海影随网络科技有限公司 User interface interaction method in the real-time camera view of a touchscreen smart device
CN106611412A (zh) * 2015-10-20 2017-05-03 成都理想境界科技有限公司 Sticker video generation method and apparatus
WO2017077751A1 (ja) * 2015-11-04 2017-05-11 ソニー株式会社 Information processing apparatus, information processing method, and program
US20190096439A1 (en) * 2016-05-23 2019-03-28 Robert Brouwer Video tagging and annotation
CN106385591B (zh) * 2016-10-17 2020-05-15 腾讯科技(上海)有限公司 Video processing method and video processing apparatus
US20190246165A1 (en) * 2016-10-18 2019-08-08 Robert Brouwer Messaging and commenting for videos
CN106683120B (zh) * 2016-12-28 2019-12-13 杭州趣维科技有限公司 Image processing method for tracking and overlaying dynamic stickers
CN107274431A (zh) * 2017-03-07 2017-10-20 阿里巴巴集团控股有限公司 Video content enhancement method and apparatus
JP2017169222A (ja) * 2017-05-10 2017-09-21 合同会社IP Bridge1号 Link destination designation interface device, viewer interface device, and computer program
US10613725B2 (en) * 2017-10-13 2020-04-07 Adobe Inc. Fixing spaced relationships between graphic objects
US10592091B2 (en) * 2017-10-17 2020-03-17 Microsoft Technology Licensing, Llc Drag and drop of objects to create new composites

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3742743A4 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765015A (zh) * 2019-10-24 2020-02-07 北京云聚智慧科技有限公司 Method for testing an application under test and electronic device
CN110765015B (zh) * 2019-10-24 2023-06-16 北京云聚智慧科技有限公司 Method for testing an application under test and electronic device
EP4068794A4 (en) * 2019-12-30 2022-12-28 Beijing Bytedance Network Technology Co., Ltd. IMAGE PROCESSING METHOD AND APPARATUS
US11798596B2 (en) 2019-12-30 2023-10-24 Beijing Bytedance Network Technology Co., Ltd. Image processing method and apparatus
KR102717916B1 (ko) 2019-12-30 2024-10-16 Beijing Bytedance Network Technology Co., Ltd. Image processing method and apparatus
CN111901662A (zh) * 2020-08-05 2020-11-06 腾讯科技(深圳)有限公司 Video extended information processing method, device, and storage medium
CN112218136A (zh) * 2020-10-10 2021-01-12 腾讯科技(深圳)有限公司 Video processing method, apparatus, computer device, and storage medium
CN113158621A (zh) * 2021-05-18 2021-07-23 掌阅科技股份有限公司 Bookshelf page display method, computing device, and computer storage medium
CN113613067A (zh) * 2021-08-03 2021-11-05 北京字跳网络技术有限公司 Video processing method, apparatus, device, and storage medium
CN113613067B (zh) * 2021-08-03 2023-08-22 北京字跳网络技术有限公司 Video processing method, apparatus, device, and storage medium
WO2023045825A1 (zh) * 2021-09-27 2023-03-30 北京有竹居网络技术有限公司 Video-based information display method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
US11640235B2 (en) 2023-05-02
EP3742743A4 (en) 2021-07-28
EP3742743A1 (en) 2020-11-25
US20200272309A1 (en) 2020-08-27
CN110062269A (zh) 2019-07-26
JP2021511728A (ja) 2021-05-06
JP7109553B2 (ja) 2022-07-29

Similar Documents

Publication Publication Date Title
WO2019141100A1 (zh) Additional object display method and apparatus, computer device, and storage medium
US11678734B2 (en) Method for processing images and electronic device
US11205282B2 (en) Relocalization method and apparatus in camera pose tracking process and storage medium
WO2019153824A1 (zh) Virtual object control method and apparatus, computer device, and storage medium
US20190243598A1 (en) Head mounted display apparatus and method for displaying a content
CN110213638B (zh) Animation display method, apparatus, terminal, and storage medium
CN111701238A (zh) Virtual scroll painting display method, apparatus, device, and storage medium
CN108694073B (zh) Virtual scene control method, apparatus, device, and storage medium
CN108737897B (zh) Video playback method, apparatus, device, and storage medium
CN111541907B (zh) Item display method, apparatus, device, and storage medium
WO2019101185A1 (zh) Method and apparatus for playing audio data
JP2021524957A (ja) Image processing method and apparatus, terminal, and computer program
CN112044065B (zh) Virtual resource display method, apparatus, device, and storage medium
CN111083526B (zh) Video transition method, apparatus, computer device, and storage medium
US20150063785A1 (en) Method of overlappingly displaying visual object on video, storage medium, and electronic device
CN110290426B (zh) Method, apparatus, device, and storage medium for displaying resources
CN110796083B (zh) Image display method, apparatus, terminal, and storage medium
US11886673B2 (en) Trackpad on back portion of a device
WO2019192061A1 (zh) Graphic code recognition and generation method, apparatus, and computer-readable storage medium
CN111437600A (zh) Storyline display method, apparatus, device, and storage medium
CN110853124B (zh) Method, apparatus, electronic device, and medium for generating GIF animations
JP7483056B2 (ja) Selection target determination method, apparatus, device, and computer program
WO2022083257A1 (zh) Multimedia resource generation method and terminal
CN113160031B (zh) Image processing method, apparatus, electronic device, and storage medium
CN111986700B (zh) Contactless operation triggering method, apparatus, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19740905
    Country of ref document: EP
    Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2020539223
    Country of ref document: JP
    Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2019740905
    Country of ref document: EP
    Effective date: 20200818