CN113422980A - Video data processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113422980A
CN113422980A (application CN202110687991.5A)
Authority
CN
China
Prior art keywords
time point
video frame
frame sequence
played
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110687991.5A
Other languages
Chinese (zh)
Other versions
CN113422980B (en)
Inventor
李成会 (Li Chenghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd filed Critical Guangzhou Boguan Information Technology Co Ltd
Priority to CN202110687991.5A priority Critical patent/CN113422980B/en
Publication of CN113422980A publication Critical patent/CN113422980A/en
Application granted granted Critical
Publication of CN113422980B publication Critical patent/CN113422980B/en
Legal status: Active (granted)

Classifications

    • H04N21/23418 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/235 — Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2387 — Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • H04N21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/435 — Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a video data processing method and apparatus, an electronic device, and a storage medium, relating to the technical field of data processing. The video data processing method includes the following steps: determining a video frame sequence to be played, and calculating key point position data of a target object in the video frame sequence to be played; acquiring a currently playing video frame sequence and frame rate data, and calculating a playing time point of the video frame sequence to be played according to the currently playing video frame sequence and the frame rate data; and sending the key point position data and the playing time point to a sticker special effect playing process, so that the video frame sequence to be played and the sticker special effect corresponding to it are played in linkage through the video playing process and the sticker special effect playing process. The technical solution of the embodiments of the disclosure enables real-time playing of the video frame sequence and the sticker special effect, and improves the display effect of the sticker special effect.

Description

Video data processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a video data processing method, a video data processing apparatus, an electronic device, and a computer-readable storage medium.
Background
As internet technology has matured, live webcasting has become an increasingly popular form of entertainment. During a real-time live broadcast, viewers can give virtual gifts that are shown as special effects, which increases interactivity in the live broadcast.
However, related live broadcast interaction methods either render the video frames and the virtual gift special effect to the interactive interface synchronously within a single process, by adding the virtual gift special effect at a target position, or play the video stream and the virtual gift special effect through multiple processes. The first method cannot display the virtual gift special effect once the effect moves beyond the video area. The second method suffers from poor real-time synchronization between the video frames and the virtual gift special effect, owing to the time consumed by inter-process communication and the like; visually, the position change of the virtual gift special effect lags behind the position change of the object in the video frames.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a video data processing method, a video data processing apparatus, an electronic device, and a computer-readable storage medium, so as to overcome, at least to some extent, the problems that a sticker special effect cannot be played when it exceeds the video area, and that the video frame sequence and the sticker special effect are played with poor real-time synchronization.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the embodiments of the present disclosure, there is provided a video data processing method applied to a video playing process, including: determining a video frame sequence to be played, and calculating key point position data of a target object in the video frame sequence to be played; acquiring a currently playing video frame sequence and frame rate data, and calculating a playing time point of the video frame sequence to be played according to the currently playing video frame sequence and the frame rate data; and sending the key point position data and the playing time point to a sticker special effect playing process, so that the video frame sequence to be played and the sticker special effect corresponding to it are played in linkage through the sticker special effect playing process and the video playing process.
In some example embodiments of the present disclosure, based on the foregoing scheme, the keypoint location data of the target object in the video frame sequence to be played is obtained by calling an image detection interface.
In some example embodiments of the present disclosure, based on the foregoing solution, the calculating of a playing time point of the video frame sequence to be played according to the currently playing video frame sequence and the frame rate data includes: determining delay data of a unit video frame sequence according to the frame rate data; determining an intermediate playing video frame sequence, and the number of video frames to be played corresponding to the intermediate playing video frame sequence, based on the video frame sequence to be played and the currently playing video frame sequence; calculating target delay data of the video frame sequence to be played according to the delay data and the number of the video frames to be played; and acquiring a current system time point, and calculating the playing time point of the video frame sequence to be played based on the current system time point and the target delay data.
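The play-time calculation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, indexing scheme, and millisecond units are assumptions introduced here.

```python
import time

def play_time_point(current_index, target_index, fps, now_ms=None):
    """Estimate when the to-be-played frame sequence will start playing.

    current_index / target_index: positions of the currently playing and
    to-be-played frame sequences in the stream (hypothetical indexing).
    fps: frame rate data (frames displayed per second).
    Returns an absolute time point in milliseconds.
    """
    if now_ms is None:
        now_ms = time.time() * 1000.0              # current system time point
    delay_per_frame_ms = 1000.0 / fps              # delay data of a unit video frame
    frames_pending = target_index - current_index  # number of video frames to be played
    target_delay_ms = frames_pending * delay_per_frame_ms  # target delay data
    return now_ms + target_delay_ms
```

For example, at 25 fps a frame sequence 10 positions ahead of the currently playing one would be scheduled 400 ms after the current system time point.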
According to a second aspect of the embodiments of the present disclosure, there is provided a video data processing apparatus, applied to a video playing process, including: a video frame sequence determining module, configured to determine a video frame sequence to be played and calculate key point position data of a target object in the video frame sequence to be played; a time point calculating module, configured to acquire a currently playing video frame sequence and frame rate data and calculate a playing time point of the video frame sequence to be played according to the currently playing video frame sequence and the frame rate data; and a data sending module, configured to send the key point position data and the playing time point to a sticker special effect playing process, so that the video frame sequence to be played and the sticker special effect corresponding to it are played in linkage through the sticker special effect playing process and the video playing process.
In some example embodiments of the present disclosure, based on the foregoing scheme, the video frame sequence determining module further includes a key point position data calculating unit, configured to calculate key point position data of a target object in the video frame sequence to be played by invoking an image detection interface.
In some example embodiments of the present disclosure, based on the foregoing scheme, the time point calculating module further includes a time point calculating unit, configured to determine delay data of a unit video frame sequence according to the frame rate data; determining an intermediate playing video frame sequence and the number of video frames to be played corresponding to the intermediate playing video frame sequence based on the video frame sequence to be played and the current playing video frame sequence; calculating target delay data of the video frame sequence to be played according to the delay data and the number of the video frames to be played; and acquiring a current system time point, and calculating the playing time point of the video frame sequence to be played based on the current system time point and the target delay data.
According to a third aspect of the embodiments of the present disclosure, there is provided a video data processing method applied to a sticker special effect playing process, including: receiving key point position data of a target object in a video frame sequence to be played and a playing time point of the video frame sequence to be played, and storing the key point position data and the playing time point into a cache queue; acquiring a current system time point, and determining target key point position data and a target playing time point from the cache queue according to the current system time point and the playing time point; and when the sticker special effect is detected at the target playing time point, displaying the sticker special effect at the position indicated by the target key point position data.
In some example embodiments of the present disclosure, based on the foregoing solution, the determining of target key point position data and a target playing time point from the cache queue according to the current system time point and the playing time point includes: screening out, from the cache queue, a playing time point that is equal to or later than the current system time point, and taking that playing time point as the target playing time point; and acquiring the key point position data corresponding to the target playing time point, and taking that key point position data as the target key point position data.
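The screening step above can be sketched as a scan of the cache queue. This is an illustrative sketch only; the entry layout (a `(play_time, keypoints)` tuple) and function name are assumptions, not taken from the patent.

```python
from collections import deque

def pick_target(cache_queue, now_ms):
    """Screen out the first queue entry whose playing time point is equal
    to or later than the current system time point.

    cache_queue: deque of (play_time_ms, keypoint_position_data) entries,
    stored in arrival order. Returns the matching entry, or None if every
    cached playing time point has already passed.
    """
    for play_time, keypoints in cache_queue:
        if play_time >= now_ms:
            return play_time, keypoints
    return None
```

The entry found this way supplies both the target playing time point and the target key point position data at which the sticker special effect is then displayed.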
In some example embodiments of the present disclosure, based on the foregoing, the method further includes: determining display duration data of the sticker special effect, and determining a display end time point of the sticker special effect based on the display duration data and the target playing time point; and hiding the sticker special effect at the display end time point, and deleting the target playing time point and the target key point position data from the cache queue.
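The hide-and-evict step can be sketched as below. The display end time point is simply the target playing time point plus the display duration; the function name and queue entry layout are illustrative assumptions.

```python
from collections import deque

def hide_at_deadline(cache_queue, target_play_time, display_duration_ms, now_ms):
    """Hide the sticker special effect once its display end time point passes.

    display_end = target playing time point + display duration data.
    Returns True when the effect should now be hidden, after deleting the
    entry for target_play_time (time point and key point data) from the
    cache queue; returns False while the effect should stay visible.
    """
    display_end = target_play_time + display_duration_ms
    if now_ms >= display_end:
        remaining = deque(e for e in cache_queue if e[0] != target_play_time)
        cache_queue.clear()
        cache_queue.extend(remaining)
        return True
    return False
```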
In some example embodiments of the present disclosure, based on the foregoing, the method further includes: presetting static display duration data of the sticker special effect; if the sticker special effect is not detected at the target playing time point, determining an initial time point at which the sticker special effect is detected; obtaining delay data of a unit video frame sequence, and calculating a playing time point of the next video frame sequence based on the delay data and the target playing time point; if that playing time point is detected to be equal to or later than the initial time point, determining a position replacement time point of the sticker special effect based on the initial time point and the static display duration data; and acquiring the key point position data of the next video frame sequence, and displaying the sticker special effect at that key point position data at the position replacement time point.
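One plausible reading of the late-detection logic above is sketched here: the next frame sequence's playing time point is the target playing time point plus the per-frame delay, and, when that time point is not earlier than the initial detection time point, the position replacement time point is taken as the initial detection time point plus the static display duration. The decomposition and names are assumptions made for illustration, not the patent's definitive formula.

```python
def position_replacement_time(target_play_time_ms, frame_delay_ms,
                              initial_detect_ms, static_duration_ms):
    """Decide when to move the sticker to the next frame's key point position
    after the effect was first detected later than the target playing time.

    Returns the position replacement time point, or None when the next frame
    sequence would still play before the sticker special effect was detected.
    """
    # playing time point of the next video frame sequence
    next_play_time = target_play_time_ms + frame_delay_ms
    if next_play_time >= initial_detect_ms:
        # hold the effect statically, then replace its position
        return initial_detect_ms + static_duration_ms
    return None
```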
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video data processing apparatus, applied to a sticker special effect playing process, including: a data receiving module, configured to receive key point position data of a target object in a video frame sequence to be played and a playing time point of the video frame sequence to be played, and store the key point position data and the playing time point into a cache queue; a target video frame sequence determining module, configured to acquire a current system time point and determine target key point position data and a target playing time point from the cache queue according to the current system time point and the playing time point; and a sticker special effect display module, configured to display the sticker special effect at the position indicated by the target key point position data when the sticker special effect is detected at the target playing time point.
In some example embodiments of the present disclosure, based on the foregoing solution, the target video frame sequence determining module further includes a target video frame sequence determining unit, configured to screen out, from the cache queue, a playing time point equal to or later than the current system time point and take it as the target playing time point, and to acquire the key point position data corresponding to the target playing time point and take it as the target key point position data.
In some example embodiments of the present disclosure, based on the foregoing solution, the sticker special effect display module further includes a display end time calculating unit, configured to determine display duration data of the sticker special effect, determine a display end time point of the sticker special effect based on the display duration data and the target playing time point, hide the sticker special effect at the display end time point, and delete the target playing time point and the target key point position data from the cache queue.
In some example embodiments of the present disclosure, based on the foregoing solution, the sticker special effect display module further includes a sticker special effect position data replacing unit, configured to preset static display duration data of the sticker special effect; if the sticker special effect is not detected at the target playing time point, determine an initial time point at which the sticker special effect is detected; obtain delay data of a unit video frame sequence, and calculate a playing time point of the next video frame sequence based on the delay data and the playing time point corresponding to the target video frame sequence; if that playing time point is detected to be equal to or later than the initial time point, determine a position replacement time point of the sticker special effect based on the initial time point and the static display duration data; and acquire the key point position data of the next video frame sequence, and display the sticker special effect at that key point position data at the position replacement time point.
According to a fifth aspect of the embodiments of the present disclosure, there is provided an electronic device, including: a processor; and a memory having computer-readable instructions stored thereon which, when executed by the processor, implement any one of the video data processing methods described above.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a video data processing method according to any one of the above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the video data processing method in the example embodiments of the present disclosure determines a video frame sequence to be played and calculates key point position data of a target object in it; acquires a currently playing video frame sequence and frame rate data, and calculates a playing time point of the video frame sequence to be played from them; and sends the key point position data and the playing time point to a sticker special effect playing process, so that the video frame sequence to be played and its corresponding sticker special effect are played in linkage through the sticker special effect playing process and the video playing process. On the one hand, by calculating the key point position data of the target object in the video frame sequence to be played in advance, calculating the playing time point of that sequence from the currently playing video frame sequence and the frame rate data, and sending both to the sticker special effect playing process, the sticker special effect process can display the corresponding sticker special effect at the key point position of the target object exactly at the playing time point, while the video playing process plays the video frame sequence at that same playing time point; this improves the real-time synchronization between the video frame sequence and its sticker special effect. On the other hand, because the video frame sequence and the sticker special effect are played by the video playing process and the sticker special effect playing process respectively, the sticker special effect can still be played when it exceeds the video area, which improves the playing effect of the sticker special effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 schematically illustrates a schematic diagram of a video data processing method flow according to some embodiments of the present disclosure;
fig. 2 schematically illustrates a schematic diagram of a play time point calculation method flow according to some embodiments of the present disclosure;
fig. 3 schematically illustrates a schematic diagram of a video data processing method flow according to some embodiments of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a sticker special effect display method flow, according to some embodiments of the present disclosure;
FIG. 5 schematically illustrates a diagram of a sticker special effect position update method flow, according to some embodiments of the present disclosure;
fig. 6 schematically shows a schematic diagram of a video data processing apparatus according to some embodiments of the present disclosure;
fig. 7 schematically illustrates a schematic diagram of another video data processing apparatus according to some embodiments of the present disclosure;
FIG. 8 schematically illustrates a structural schematic of a computer system of an electronic device, in accordance with some embodiments of the present disclosure;
fig. 9 schematically illustrates a schematic diagram of a computer-readable storage medium, according to some embodiments of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
Furthermore, the drawings are merely schematic illustrations and are not necessarily drawn to scale. The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
In the present exemplary embodiment, first, a video data processing method is provided, and the video data processing method can be applied to terminal devices, such as electronic devices like mobile phones and computers. Fig. 1 schematically illustrates a schematic diagram of a video data processing method flow, according to some embodiments of the present disclosure. Referring to fig. 1, the video data processing method may include the steps of:
step S110, determining a video frame sequence to be played, and calculating the position data of a key point of a target object in the video frame sequence to be played;
step S120, obtaining a current playing video frame sequence and frame rate data, and calculating the playing time point of the video frame sequence to be played according to the current playing video frame sequence and the frame rate data;
step S130, sending the key point position data and the playing time point to a sticker special effect playing process, so that the video frame sequence to be played and the sticker special effect corresponding to it are played in linkage through the sticker special effect playing process and the video playing process.
According to the video data processing method in the present exemplary embodiment, on the one hand, the key point position data of the target object in the video frame sequence to be played is calculated in advance, the playing time point of that sequence is calculated from the currently playing video frame sequence and the frame rate data, and both are sent to the sticker special effect playing process; the sticker special effect process can therefore display the corresponding sticker special effect at the key point position of the target object at the playing time point, while the video playing process plays the video frame sequence at that same playing time point, which improves the real-time synchronization between the video frame sequence and its sticker special effect. On the other hand, because the video frame sequence and the sticker special effect are played by the video playing process and the sticker special effect playing process respectively, the sticker special effect can still be played when it exceeds the video area, which improves the playing effect of the sticker special effect.
Next, a video data processing method in the present exemplary embodiment will be further described.
In step S110, a video frame sequence to be played is determined, and the position data of the key point of the target object in the video frame sequence to be played is calculated.
In an example embodiment of the present disclosure, the video frame sequence to be played may refer to a video frame sequence, extracted from the video stream delivered by the server, that has not yet been played; the target object may refer to the object in the video frame sequence to be played that the sticker special effect matches. For example, the video frame sequence to be played may show an anchor singing a song, the sticker special effect may be red lips, and the target object may be the anchor's mouth, which the sticker special effect matches.
The key point position data may refer to reference position data for displaying the sticker special effect, determined from the position data of the target object. Continuing the example, if the sticker special effect is red lips and the target object is the anchor's mouth, the key point position data may be the position data corresponding to the anchor's lip bead, left mouth corner, right mouth corner, highest point of the upper lip, and lowest point of the lower lip. Of course, the key point position data may also be other reference position data for displaying a special effect, which is not particularly limited in this embodiment.
Optionally, the images of the video frame sequence to be played may be identified by calling an image detection interface, and the key point position data of the target object in the video frame sequence to be played calculated from the result; alternatively, an image recognition model may be trained and used to identify the images in the video frame sequence to be played and calculate the key point position data of the target object in the identified images, which is not particularly limited in this embodiment.
Preferably, the identification of the images in the video frame sequence to be played and the calculation of the key point position data of the target object can be completed at the server. The video playing process can then acquire the key point position data of the target object in real time when receiving the video stream to be played sent by the server, instead of identifying the images and calculating the key point position data itself by calling an image detection interface or an image recognition model, which improves the convenience of acquiring the key point position data of the video frame sequence to be played.
In step S120, a currently played video frame sequence and frame rate data are obtained, and a playing time point of the video frame sequence to be played is calculated according to the currently played video frame sequence and the frame rate data.
In an example embodiment of the present disclosure, a currently playing video frame sequence may refer to a video frame sequence being played by a video playing process; frame rate data may refer to the number of frames of a sequence of video frames displayed per second; the playing time point may refer to a time point when the video playing process starts playing the video frame sequence to be played.
Specifically, the intermediate video frame sequence between the currently played video frame sequence and the video frame sequence to be played can be determined according to the two sequences; the playing delay data of a unit video frame sequence can be calculated from the frame rate data; and the playing time point of the video frame sequence to be played can then be calculated according to the intermediate video frame sequence and the playing delay data of the unit video frame sequence.
In step S130, the key point position data and the playing time point are sent to a sticker special effect playing process, so that the to-be-played video frame sequence and the sticker special effect corresponding to the to-be-played video frame sequence are played in a linkage manner through the sticker special effect playing process and the video playing process.
In an exemplary embodiment of the disclosure, linkage play of the video frame sequence to be played and its corresponding sticker special effect can be realized as follows: calculate the playing time point of the video frame sequence to be played and the key point position data of the target object in it; bind the key point position data and the playing time point of the same video frame sequence; and synchronously send them to the sticker special effect playing process in an agreed format, such as the json format. The video playing process then plays the video frame sequence to be played at its playing time point, and the sticker special effect playing process displays the sticker special effect at the key point position data of the target object in that sequence. This avoids problems such as the sticker special effect exceeding the video area and failing to play, or poor real-time performance of the sticker special effect caused by time-consuming inter-process communication, improves the display effect of the sticker special effect, and also improves the interactivity among users.
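The text mentions an agreed json format binding the key point position data of one video frame sequence to its playing time point. A hedged sketch of such a message follows; every field name and value here is an illustrative assumption, since the original does not specify the schema.

```python
import json

# One inter-process message: key point position data bound to the
# playing time point of the same video frame sequence (field names assumed).
message = {
    "frame_seq": 5,
    "play_time_ms": 20,                 # playing time point, ms offset
    "keypoints": {
        "lip_bead": [312.0, 418.5],
        "left_mouth_corner": [280.4, 430.2],
        "right_mouth_corner": [344.1, 431.0],
    },
}

payload = json.dumps(message)           # video playing process: serialize and send
received = json.loads(payload)          # sticker special effect process: parse
```

The sticker special effect playing process would store each parsed message in its cache queue keyed by the playing time point.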
In an example embodiment of the present disclosure, images of a sequence of video frames to be played may be identified by invoking an image detection interface, and keypoint location data of a target object in the sequence of video frames to be played may be calculated.
The image detection interface can be used to identify the images of the video frame sequence to be played, calculate the key point position data or position coordinates of all objects in those images, and send them to the sticker special effect playing process; the sticker special effect playing process stores the received key point position data together with the corresponding playing time points in a cache queue. Furthermore, when an operation of a virtual gift sent by a user is detected while the sticker is playing, the virtual gift special effect is identified and the key point position data or position coordinates are extracted from the cache queue, so that the virtual gift special effect is displayed at the key point position data or position coordinates of the target object, improving the display effect of the virtual gift special effect.
Fig. 2 schematically illustrates a schematic diagram of a play time point calculation method flow according to some embodiments of the present disclosure. Referring to fig. 2, the play time point calculating method may include the steps of:
in step S210, determining delay data of a unit video frame sequence according to the frame rate data;
in step S220, based on the video frame sequence to be played and the currently played video frame sequence, determining an intermediate played video frame sequence and a number of video frames to be played corresponding to the intermediate played video frame sequence;
in step S230, calculating target delay data of the sequence of video frames to be played according to the delay data and the number of the video frames to be played;
in step S240, a current system time point is obtained, and a playing time point of the video frame sequence to be played is calculated based on the current system time point and the target delay data.
The delay data may be playing time delay data of a unit video frame sequence. For example, the delay data may be determined by the time consumption data of inter-process communication, or determined jointly by the time consumption data of inter-process communication and the time consumption data of processing the images in the unit video frame sequence; of course, the delay data may also be determined by other factors, which is not particularly limited in this embodiment.
The intermediate playing video frame sequence may refer to a video frame sequence to be played between a currently playing video frame sequence and a video frame sequence to be played; the number of video frames to be played can refer to the number of video frames corresponding to the sequence of video frames to be played in the middle; the target delay data may refer to duration data that the sequence of video frames to be played needs to wait, i.e. the target delay data may be time interval data between playing the current sequence of video frames and playing the sequence of video frames to be played. The current system time point may refer to a display time point of the terminal device.
For example, the sequence number of the currently played video frame sequence may be 3, the sequence number of the video frame sequence to be played may be 5, and the delay data of a unit video frame sequence may be 10 ms (milliseconds). The intermediate played video frame sequences are then the video frame sequence with sequence number 4 and the video frame sequence with sequence number 5, that is, the number of video frames to be played between the currently played video frame sequence and the video frame sequence to be played is 2, so the target delay data of the video frame sequence with play sequence number 5 can be calculated to be 20 milliseconds. If the current system time is 11:20:00.000, the playing time point of the video frame sequence to be played with play sequence number 5 is 11:20:00.020.
Without loss of generality, the target delay data of the video frame sequence to be played can be calculated by formula (1), and the playing time point of the video frame sequence to be played can also be calculated by formula (2).
C = (N - M)nT (1)

T_C = T_0 + C (2)

Wherein, M may represent the sequence number of the currently played video frame sequence, N may represent the sequence number of the video frame sequence to be played, f may represent the video frame rate data, T may represent the reciprocal of the video frame rate data (T = 1/f), nT may represent the delay data of a unit video frame sequence, n may represent the coefficient relating the delay data of a unit video frame sequence to the reciprocal of the video frame rate data, C may represent the target delay data of the video frame sequence to be played, T_0 may represent the current system time point, and T_C may represent the playing time point of the video frame sequence to be played.
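Formulas (1) and (2) can be sketched directly in code. The function names and the choice of milliseconds are assumptions for illustration; the worked example reproduces the numbers given in the text (current sequence number 3, sequence number to be played 5, 10 ms per unit frame sequence).

```python
def target_delay(m: int, n: int, unit_delay_ms: float) -> float:
    """Formula (1): C = (N - M) * nT, where nT is the playing delay of a
    unit video frame sequence in milliseconds."""
    return (n - m) * unit_delay_ms

def play_time_point(t0_ms: float, c_ms: float) -> float:
    """Formula (2): T_C = T_0 + C."""
    return t0_ms + c_ms

# Worked example from the text: frames 3 -> 5 at 10 ms each gives a
# 20 ms target delay, so the frame plays 20 ms after the current time.
c = target_delay(m=3, n=5, unit_delay_ms=10.0)
t_c = play_time_point(t0_ms=0.0, c_ms=c)
```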
Fig. 3 schematically illustrates a schematic diagram of a video data processing method flow, according to some embodiments of the present disclosure. Referring to fig. 3, the video data processing method may include the steps of:
in step S310, receiving key point position data of a target object in a video frame sequence to be played and a playing time point of the video frame sequence to be played, and storing the key point position data and the playing time point in a buffer queue;
in step S320, obtaining a current system time point, and determining target key point position data and a target play time point from the cache queue according to the current system time point and the play time point;
in step S330, when a sticker special effect is detected at the target playing time point, the sticker special effect is displayed at the target key point position data.
The target key point position data may refer to the key point position data of the target object in a target video frame sequence, and the target video frame sequence may refer to the video frame sequence whose playing time point is closest to the current system time. For example, the target video frame sequence may be the video frame sequence, among the video frame sequences to be played, whose playing time point is the same as the current system time point, or whose playing time point is slightly greater than the current system time point; of course, the target video frame sequence may also be another video frame sequence among the video frame sequences to be played, such as a video frame sequence specifically identified by the server, which is not particularly limited in this embodiment. The target playing time point may refer to the playing time point of the target video frame sequence.
The sticker special effect playing process can receive the playing time point of the video frame sequence to be played and the key point position data of the target object, which are synchronously sent by the video playing process, and store them in the buffer queue. It can then determine the target playing time point and the target key point position data of the target video frame sequence from the buffer queue according to the current system time point. If the sticker special effect of a virtual gift is detected at the playing time point of the target video frame sequence, the sticker special effect is displayed at the key point position data of the target object in that sequence. If no sticker special effect is detected at the target playing time point, the time point at which the sticker special effect is eventually detected is determined, and it is checked whether that time point is greater than the playing time point of the video frame sequence following the target video frame sequence; if the time point at which the sticker special effect is detected is equal to or slightly greater than the playing time point of that next video frame sequence, the sticker special effect is displayed at the key point position data corresponding to the next video frame sequence, so that the display position of the sticker special effect is updated in real time and its display effect is improved.
In an example embodiment of the present disclosure, a playing time point equal to or greater than a current system time point may be screened out from a buffer queue storing key point position data and playing time points of a video frame sequence to be played, and the playing time point may be taken as a target playing time point; and acquiring the position data of the key point corresponding to the target playing time point, and taking the position data of the key point as the position data of the target key point.
If a playing time point equal to the current system time point exists in the cache queue, that playing time point is used as the target playing time point, and the key point position data corresponding to it is obtained and used as the target key point position data. If no playing time point equal to the current system time is detected in the buffer queue, a playing time point slightly greater than the current system time point may be used as the target playing time point, i.e., the playing time point that lies to the right of the current system time point on the time axis and is closest to it.
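The screening rule above (take the playing time point equal to the current system time, otherwise the nearest one to its right on the time axis) can be sketched with a binary search over the sorted buffer queue; the function name and millisecond units are assumptions.

```python
import bisect

def pick_target_play_time(play_times_ms, now_ms):
    """Return the first playing time point in the sorted buffer queue that
    is equal to or greater than the current system time, or None if every
    buffered time point is already in the past."""
    i = bisect.bisect_left(play_times_ms, now_ms)
    return play_times_ms[i] if i < len(play_times_ms) else None

queue = [100, 120, 140, 160]   # sorted playing time points (ms)
```

For example, `pick_target_play_time(queue, 120)` returns the exact match 120, while `pick_target_play_time(queue, 125)` returns 140, the point slightly greater than the current system time.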
The target playing time point of the pre-played target video frame sequence corresponding to the current system time point can be determined according to the current system time point and the playing time point of the video frame sequence to be played, and the target key point position data corresponding to the target playing time point is determined, so that the paster special effect is displayed at the target key point position data at the target playing time point, and the display effect of the paster special effect is improved.
FIG. 4 schematically illustrates a schematic diagram of a sticker special effect display method flow, according to some embodiments of the present disclosure. Referring to fig. 4, the method for displaying the special effect of the sticker may include the steps of:
in step S410, display duration data of the sticker special effect is determined, and a display deadline time point of the sticker special effect is determined based on the display duration data and the play time point;
in step S420, hide the sticker special effect at the display deadline time point, and delete the target playing time point and the target key point position data from the buffer queue.
The display duration data may refer to preset playing duration data of the sticker special effect; the display deadline time point may refer to the time point at which the display of the sticker special effect is closed. For example, the display deadline time point may be determined from the display duration data of the sticker special effect and the time point at which the sticker special effect is detected: if the sticker special effect is detected at 11:20:00 and its display duration data is 10 seconds, the display deadline time point of the sticker special effect is 11:20:10. Of course, the display deadline time point may also be determined from other display duration data and detection time points. If the sticker special effect is detected at the target playing time point, the display deadline time point can be determined from the target playing time point and the display duration data of the sticker special effect; if the sticker special effect is not detected at the target playing time point, the display deadline time point can be determined from the time point at which the sticker special effect is detected and the display duration data of the sticker special effect, which is not particularly limited in this embodiment.
Optionally, a timer for triggering the hiding of the sticker special effect may be set: the timer is started when the sticker special effect is detected; when the display deadline time point of the sticker special effect arrives, the sticker special effect is hidden to close it, the target playing time point and the target key point position data are deleted from the buffer queue, and the reception of further key point position data and playing data sent by the video playing process is stopped. By clearing the expired playing time point and key point position data from the cache and stopping the reception of further key point position data and playing data, memory space is saved and the video data processing capacity of the system is improved.
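The timer described above can be sketched as a one-shot timer whose callback hides the sticker special effect and purges the expired cache entry; the function signature and the use of callables for the two actions are assumptions for illustration.

```python
import threading

def schedule_hide(display_duration_s, hide_effect, purge_cache):
    """One-shot timer: when the display deadline arrives, hide the sticker
    special effect and purge the expired entry from the buffer queue."""
    def on_deadline():
        hide_effect()   # close the sticker special effect display
        purge_cache()   # delete expired play time point / key point data
    timer = threading.Timer(display_duration_s, on_deadline)
    timer.start()
    return timer
```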
Fig. 5 schematically illustrates a schematic diagram of a sticker special effect position update method flow, according to some embodiments of the present disclosure. Referring to fig. 5, the sticker special effect position update method may include the following steps:
in step S510, the static display duration data of the sticker special effect is preset;
in step S520, if no special effect of sticker is detected at the target playing time point, determining an initial time point at which the special effect of sticker is detected;
in step S530, delay data of a unit video frame sequence is obtained, and a playing time point of a next video frame sequence is calculated based on the delay data and the target playing time point;
in step S540, if it is detected that the playing time point is equal to or greater than the initial time point, determining a position replacement time point of the sticker effect based on the initial time point and the static display duration data;
in step S550, key point position data of the next video frame sequence is obtained, and the sticker special effect is displayed at that key point position data at the position replacement time point.
Wherein the static display duration data may refer to the duration for which the sticker special effect is shown at the target key point position data. For example, a first video frame sequence is played by the video playing process at a first playing time point, and before the playing of the first video frame sequence is finished, e.g. at time point T_1, the sticker special effect playing process receives a virtual gift special effect; the static display duration data may then be the duration between time point T_1 and the playing time point of a second video frame sequence. That is, during the period between T_1 and the playing time point of the second video frame sequence, the sticker special effect is still displayed at the key point position data of the first video frame sequence.
The position replacement time point may refer to the time point at which the presentation position of the sticker special effect is replaced. For example, the position update time point may be determined from the initial time point at which the sticker special effect is detected and the static display duration data: the video playing process starts playing the first video frame sequence at the first playing time point, and before the playing of the first video frame sequence is finished, e.g. at 11:20:00.000, the sticker special effect playing process receives a virtual gift special effect, i.e., the initial time point is 11:20:00.000, and the sticker special effect is displayed at the key point position data of the first video frame sequence; if the static display duration data of the sticker special effect is 10 milliseconds, the position update time point of the sticker is 11:20:00.010. Of course, the position update time point may also be determined from other initial time points and static display duration data, which is not limited in this embodiment.
Without loss of generality, the position update time point of the sticker special effect can be calculated by formula (3).

T_g = T_m + a (3)

Wherein, T_g may represent the position update time point of the sticker special effect, T_m may represent the initial time point at which the sticker special effect is detected, and a may represent the static display duration data of the sticker special effect.
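Formula (3) can be sketched as follows; the millisecond representation of the time points is an assumption, and the worked example reproduces the 10 ms static display duration from the text (11:20:00.000 expressed as milliseconds within the hour).

```python
def position_update_time(t_m_ms: float, a_ms: float) -> float:
    """Formula (3): T_g = T_m + a, i.e. the sticker special effect
    position is replaced one static display duration after detection."""
    return t_m_ms + a_ms

# 11:20:00.000 is 1,200,000 ms into the hour; 10 ms static display
# duration gives a position update time of 11:20:00.010.
t_g = position_update_time(t_m_ms=1_200_000.0, a_ms=10.0)
```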
Preferably, when the sticker special effect of a virtual gift is detected at the target playing time point of the target video frame sequence, the sticker special effect can be displayed at the target key point position data of the target video frame sequence by the sticker special effect playing process for the period corresponding to the preset display duration data, while the target video frame sequence is played by the video playing process. This realizes linkage playing of the target video frame sequence and the sticker special effect, so that visually the sticker special effect appears at the key point position data of the target object in the video frame sequence, and the real-time performance of playing the video frame sequence and the sticker special effect is improved.
If the sticker special effect of the virtual gift is not detected at the target playing time point of the target video frame sequence, a timer for triggering the update of the sticker special effect position can be set: when the sticker special effect is detected, it is displayed at the key point position data of the currently playing video frame sequence, and the key point position data and playing time point of the next video frame sequence are obtained; when the position update time point of the sticker special effect arrives, the sticker special effect is displayed at the key point position of the next video frame sequence. This realizes real-time updating of the sticker special effect position, avoids the situation where the position change of the sticker special effect visually lags behind the position change of the object in the video frame, and improves the display effect of the sticker special effect.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In addition, in the present exemplary embodiment, a video data processing apparatus is also provided, which is applied to a video playing process. Referring to fig. 6, the video data processing apparatus 600 includes a video frame sequence determining module 610, a time point calculating module 620, and a data transmitting module 630. The video frame sequence determining module 610 is configured to determine a video frame sequence to be played, and calculate key point position data of a target object in the video frame sequence to be played; the time point calculating module 620 is configured to obtain a currently played video frame sequence and frame rate data, and calculate a playing time point of the video frame sequence to be played according to the currently played video frame sequence and the frame rate data; the data sending module 630 is configured to send the key point position data and the playing time point to a sticker special effect playing process, so as to play the to-be-played video frame sequence and a sticker special effect corresponding to the to-be-played video frame sequence in a linkage manner through the sticker special effect playing process and the video playing process.
In some example embodiments of the present disclosure, based on the foregoing scheme, the video frame sequence determining module 610 further includes a key point position data calculating unit, configured to calculate key point position data of a target object in the video frame sequence to be played by invoking an image detection interface.
In some example embodiments of the present disclosure, based on the foregoing scheme, the time point calculating module 620 further includes a time point calculating unit, configured to determine delay data of a unit video frame sequence according to the frame rate data; determining an intermediate playing video frame sequence and the number of video frames to be played corresponding to the intermediate playing video frame sequence based on the video frame sequence to be played and the current playing video frame sequence; calculating target delay data of the video frame sequence to be played according to the delay data and the number of the video frames to be played; and acquiring a current system time point, and calculating the playing time point of the video frame sequence to be played based on the current system time point and the target delay data.
Meanwhile, in the present exemplary embodiment, a video data processing apparatus is further provided, which is applied to a sticker special effect playing process. Referring to fig. 7, the video data processing apparatus 700 includes a data receiving module 710, a target video frame sequence determining module 720, and a sticker special effect display module 730. The data receiving module 710 is configured to receive key point position data of a target object in a video frame sequence to be played and a playing time point of the video frame sequence to be played, and store the key point position data and the playing time point in a buffer queue; the target video frame sequence determining module 720 is configured to obtain a current system time point, and determine target key point position data and a target playing time point from the cache queue according to the current system time point and the playing time point; and the sticker special effect display module 730 is configured to display the sticker special effect at the target key point position data when the sticker special effect is detected at the target playing time point.
In some example embodiments of the present disclosure, based on the foregoing solution, the target video frame sequence determining module 720 further includes a target video frame sequence determining unit, configured to screen out a play time point equal to or greater than the current system time point from the buffer queue, and use the play time point as a target play time point; and acquiring the position data of the key point corresponding to the target playing time point, and taking the position data of the key point as the position data of the target key point.
In some example embodiments of the present disclosure, based on the foregoing solution, the sticker special effect display module 730 further includes a display deadline calculating unit, where the display deadline calculating unit is configured to determine display duration data of the sticker special effect and determine a display deadline time point of the sticker special effect based on the display duration data and the target playing time point; and hide the sticker special effect at the display deadline time point, and delete the target playing time point and the target key point position data from the cache queue.
In some example embodiments of the present disclosure, based on the foregoing solution, the sticker special effect display module 730 further includes a sticker special effect position data replacing unit, where the sticker special effect position data replacing unit is configured to preset static display duration data of the sticker special effect; if the sticker special effect is not detected at the target playing time point, determine the initial time point at which the sticker special effect is detected; obtain delay data of a unit video frame sequence, and calculate the playing time point of the next video frame sequence based on the delay data and the playing time point corresponding to the target video frame sequence; if the playing time point is detected to be equal to or greater than the initial time point, determine the position replacement time point of the sticker special effect based on the initial time point and the static display duration data; and acquire the key point position data of the next video frame sequence, and display the sticker special effect at that key point position data at the position replacement time point.
The specific details of each module of the video data processing apparatus have been described in detail in the corresponding video data processing method, and therefore are not described herein again.
It should be noted that although in the above detailed description several modules or units of the video data processing apparatus are mentioned, this division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above video data processing method is also provided.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 800 according to such an embodiment of the disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 8, electronic device 800 is in the form of a general purpose computing device. The components of the electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one memory unit 820, a bus 830 connecting different system components (including the memory unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code executable by the processing unit 810, so that the processing unit 810 performs the steps according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section above of this specification. For example, the processing unit 810 may execute step S110 shown in fig. 1: determining a video frame sequence to be played, and calculating key point position data of a target object in the video frame sequence to be played; step S120: acquiring a currently playing video frame sequence and frame rate data, and calculating a playing time point of the video frame sequence to be played according to the currently playing video frame sequence and the frame rate data; and step S130: sending the key point position data and the playing time point to a paster special effect playing process, so that the video frame sequence to be played and the paster special effect corresponding to the video frame sequence to be played are played in linkage through the paster special effect playing process and the video playing process. Likewise, the processing unit 810 may execute step S310 shown in fig. 3: receiving the key point position data of the target object in the video frame sequence to be played and the playing time point of the video frame sequence to be played, and storing the key point position data and the playing time point into a cache queue; step S320: acquiring a current system time point, and determining target key point position data and a target playing time point from the cache queue according to the current system time point and the playing time point; and step S330: when the paster special effect is detected at the target playing time point, displaying the paster special effect at the target key point position data.
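Read as an algorithm, steps S110-S130 amount to: detect key points in each pending frame, stamp the frame with a future play time, and hand both to the paster special effect playing process. A minimal Python sketch under that reading, where `detect_keypoints` (standing in for the image detection interface) and `send_to_sticker_process` (standing in for the inter-process channel) are hypothetical names, not from the disclosure:

```python
import time

def video_play_process_step(frames_to_play, frame_rate,
                            detect_keypoints, send_to_sticker_process):
    """Per-frame work of steps S110-S130 in the video playing process.

    `detect_keypoints` and `send_to_sticker_process` are hypothetical
    stand-ins for the image detection interface and the channel to the
    paster special effect playing process.
    """
    for offset, frame in enumerate(frames_to_play, start=1):
        # S110: key point position data of the target object in this frame
        keypoints = detect_keypoints(frame)
        # S120: play time point = current system time + offset * unit frame delay
        play_time_point = time.time() + offset / frame_rate
        # S130: hand both off so the two processes can play in linkage
        send_to_sticker_process((keypoints, play_time_point))

sent = []
video_play_process_step(["f1", "f2"], 25.0, str.upper, sent.append)
```

Each later frame receives a strictly later play time point, which is what lets the receiving process schedule the paster special effect against wall-clock time.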
The storage unit 820 may include readable media in the form of volatile storage units, such as a random access storage unit (RAM) 821 and/or a cache storage unit 822, and may further include a read-only storage unit (ROM) 823.
Storage unit 820 may also include a program/utility 824 having a set (at least one) of program modules 825, such program modules 825 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 830 may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 870 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 800 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 850. Also, the electronic device 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 via the bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the present disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device.
Referring to fig. 9, a program product 900 for implementing the above-described video data processing method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. A video data processing method is applied to a video playing process, and comprises the following steps:
determining a video frame sequence to be played, and calculating the position data of a key point of a target object in the video frame sequence to be played;
acquiring a current playing video frame sequence and frame rate data, and calculating the playing time point of the video frame sequence to be played according to the current playing video frame sequence and the frame rate data;
and sending the key point position data and the playing time point to a paster special effect playing process so as to play the video frame sequence to be played and the paster special effect corresponding to the video frame sequence to be played in a linkage manner through the paster special effect playing process and the video playing process.
2. The method according to claim 1, wherein the key point position data of the target object in the sequence of video frames to be played is obtained by calling an image detection interface.
3. The method of claim 1, wherein the calculating the playing time point of the video frame sequence to be played according to the current playing video frame sequence and the frame rate data comprises:
determining delay data of a unit video frame sequence according to the frame rate data;
determining an intermediate playing video frame sequence and the number of video frames to be played corresponding to the intermediate playing video frame sequence based on the video frame sequence to be played and the current playing video frame sequence;
calculating target delay data of the video frame sequence to be played according to the delay data and the number of the video frames to be played;
and acquiring a current system time point, and calculating the playing time point of the video frame sequence to be played based on the current system time point and the target delay data.
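The timing arithmetic of claim 3 reduces to: unit delay = 1 / frame rate, target delay = (number of video frames to be played) × unit delay, playing time point = current system time point + target delay. A sketch under those assumptions, where the frame-index arguments and the function name are illustrative rather than taken from the claim:

```python
import time

def compute_play_time_point(frame_rate, current_frame_index, to_play_frame_index):
    """Claim 3 arithmetic; the frame-index arguments are illustrative."""
    unit_delay = 1.0 / frame_rate                     # delay data of a unit video frame sequence
    frames_between = to_play_frame_index - current_frame_index  # number of video frames to be played
    target_delay = frames_between * unit_delay        # target delay data
    return time.time() + target_delay                 # current system time point + target delay
```

For example, at 25 fps a frame 50 positions ahead of the currently playing one is due roughly 2 seconds from now.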
4. A video data processing method is applied to a paster special effect playing process, and comprises the following steps:
receiving key point position data of a target object in a video frame sequence to be played and a playing time point of the video frame sequence to be played, and storing the key point position data and the playing time point into a cache queue;
acquiring a current system time point, and determining target key point position data and a target playing time point from the cache queue according to the current system time point and the playing time point;
and when the paster special effect is detected at the target playing time point, displaying the paster special effect at the target key point position data.
5. The method of claim 4, wherein determining target keypoint location data and a target play time point from the buffer queue according to the current system time point and the play time point comprises:
screening a playing time point which is equal to or larger than the current system time point from the cache queue, and taking the playing time point as the target playing time point;
and acquiring the position data of the key point corresponding to the target playing time point, and taking the position data of the key point as the position data of the target key point.
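The screening step in claim 5 is a simple scan: the first buffered playing time point that is equal to or greater than the current system time becomes the target, and its paired key point data becomes the target key point position data. A hedged sketch, assuming (this is an illustration, not the patent's data layout) the cache queue stores (playing time point, key point position data) tuples in play order:

```python
import time
from collections import deque

def determine_target(cache_queue, current_system_time_point=None):
    """Claim 5 lookup: first entry whose playing time point >= now.

    The (time, keypoints) tuple layout is an assumption for illustration.
    """
    now = time.time() if current_system_time_point is None else current_system_time_point
    for playing_time_point, key_point_position_data in cache_queue:
        if playing_time_point >= now:
            return playing_time_point, key_point_position_data
    return None  # every buffered entry is already stale

queue = deque([(1.0, "kp-a"), (2.0, "kp-b"), (3.0, "kp-c")])
print(determine_target(queue, current_system_time_point=1.5))  # → (2.0, 'kp-b')
```

Entries earlier than the current system time are skipped, which is what keeps the effect aligned with frames that have not yet been rendered.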
6. The video data processing method of claim 5, wherein the method further comprises:
determining display duration data of the paster special effect, and determining a display ending time point of the paster special effect based on the display duration data and the target playing time point;
and hiding the paster special effect at the display ending time point, and deleting the target key point position data and the target playing time point from the cache queue.
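The bookkeeping of claim 6 can be sketched as follows; the `sticker_state` dict and the `on_tick` driver are illustrative assumptions, not part of the claimed method:

```python
from collections import deque

def display_ending_time_point(target_playing_time_point, display_duration):
    # Claim 6: end of display = target playing time point + display duration data
    return target_playing_time_point + display_duration

def on_tick(now, sticker_state):
    """Hide the effect and purge the consumed cache entry once the end
    time passes; `sticker_state` is an illustrative stand-in for process state."""
    if sticker_state["visible"] and now >= sticker_state["end_time"]:
        sticker_state["visible"] = False
        sticker_state["cache_queue"].popleft()  # delete the consumed target entry
    return sticker_state

state = {"visible": True,
         "end_time": display_ending_time_point(10.0, 2.0),
         "cache_queue": deque([(10.0, "kp")])}
on_tick(12.5, state)  # past the display ending time point: hides and deletes
```

Deleting the consumed entry is what keeps the cache queue from growing without bound as effects expire.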
7. The video data processing method of claim 4, wherein the method further comprises:
presetting static display duration data of the paster special effect;
if the paster special effect is not detected at the target playing time point, determining an initial time point at which the paster special effect is detected;
acquiring delay data of a unit video frame sequence, and calculating the playing time point of the next video frame sequence based on the delay data and the target playing time point;
if it is detected that the playing time point is equal to or greater than the initial time point, determining a position replacement time point of the paster special effect based on the initial time point and the static display duration data;
and acquiring the key point position data of the next video frame sequence, and displaying the paster special effect at the key point position data at the position replacement time point.
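The two time points introduced by claim 7 are simple offsets from earlier quantities; a sketch with illustrative function names:

```python
def next_play_time_point(target_playing_time_point, frame_rate):
    # Claim 7: playing time point of the next video frame sequence =
    # target playing time point + delay data of a unit video frame sequence
    return target_playing_time_point + 1.0 / frame_rate

def position_replacement_time_point(initial_time_point, static_display_duration):
    # Re-anchor the effect to fresh key point data once the preset static
    # display window has elapsed after the effect was first detected
    return initial_time_point + static_display_duration
```

In effect, the paster stays at its last position for the static display window, then jumps to the key points of the next frame sequence.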
8. A video data processing apparatus, comprising:
the video frame sequence determining module is used for determining a video frame sequence to be played and calculating the position data of key points of a target object in the video frame sequence to be played;
the time point calculating module is used for acquiring a current playing video frame sequence and frame rate data and calculating the playing time point of the video frame sequence to be played according to the current playing video frame sequence and the frame rate data;
and the data sending module is used for sending the key point position data and the playing time point to a paster special effect playing process so as to play the video frame sequence to be played and the paster special effect corresponding to the video frame sequence to be played in a linkage manner through the paster special effect playing process and the video playing process.
9. A video data processing apparatus, comprising:
the data receiving module is used for receiving the key point position data of a target object in a video frame sequence to be played and the playing time point of the video frame sequence to be played, and storing the key point position data and the playing time point into a cache queue;
the target video frame sequence determining module is used for acquiring a current system time point and determining target key point position data and a target playing time point from the cache queue according to the current system time point and the playing time point;
and the paster special effect display module is used for displaying the paster special effect at the target key point position data when the paster special effect is detected at the target playing time point.
10. An electronic device, comprising:
a processor; and
a memory having stored thereon computer readable instructions which, when executed by the processor, implement the video data processing method of any of claims 1 to 7.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the video data processing method according to any one of claims 1 to 7.
CN202110687991.5A 2021-06-21 2021-06-21 Video data processing method and device, electronic equipment and storage medium Active CN113422980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110687991.5A CN113422980B (en) 2021-06-21 2021-06-21 Video data processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113422980A true CN113422980A (en) 2021-09-21
CN113422980B CN113422980B (en) 2023-04-14

Family

ID=77789567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110687991.5A Active CN113422980B (en) 2021-06-21 2021-06-21 Video data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113422980B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096062A (en) * 2016-07-15 2016-11-09 乐视控股(北京)有限公司 video interactive method and device
CN107613313A (en) * 2017-10-09 2018-01-19 武汉斗鱼网络科技有限公司 A kind of player method and device of multiple live videos
WO2018232795A1 (en) * 2017-06-19 2018-12-27 网宿科技股份有限公司 Video player client, system, and method for live broadcast video synchronization
CN109495791A (en) * 2018-11-30 2019-03-19 北京字节跳动网络技术有限公司 A kind of adding method, device, electronic equipment and the readable medium of video paster
CN110475150A (en) * 2019-09-11 2019-11-19 广州华多网络科技有限公司 The rendering method and device of virtual present special efficacy, live broadcast system
CN111654690A (en) * 2020-05-06 2020-09-11 北京百度网讯科技有限公司 Live video delay time determination method and device and electronic equipment
CN112218107A (en) * 2020-09-18 2021-01-12 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN112929740A (en) * 2021-01-20 2021-06-08 广州虎牙科技有限公司 Method, device, storage medium and equipment for rendering video stream
CN112929683A (en) * 2021-01-21 2021-06-08 广州虎牙科技有限公司 Video processing method and device, electronic equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494534A (en) * 2022-01-25 2022-05-13 成都工业学院 Frame animation self-adaptive display method and system based on motion point capture analysis
CN114494534B (en) * 2022-01-25 2022-09-27 成都工业学院 Frame animation self-adaptive display method and system based on motion point capture analysis
CN117115313A (en) * 2023-10-23 2023-11-24 成都工业学院 Animation frame image display time optimization method, system, terminal and medium
CN117115313B (en) * 2023-10-23 2024-02-02 成都工业学院 Animation frame image display time optimization method, system, terminal and medium

Also Published As

Publication number Publication date
CN113422980B (en) 2023-04-14

Similar Documents

Publication Publication Date Title
US11178448B2 (en) Method, apparatus for processing video, electronic device and computer-readable storage medium
CN111131851B (en) Game live broadcast control method and device, computer storage medium and electronic equipment
CN110418151B (en) Bullet screen information sending and processing method, device, equipment and medium in live game
US10271105B2 (en) Method for playing video, client, and computer storage medium
CN113422980B (en) Video data processing method and device, electronic equipment and storage medium
CN111773667A (en) Live game interaction method and device, computer readable medium and electronic equipment
CN103024561A (en) Method and device for displaying dragging progress bar
CN112135160A (en) Virtual object control method and device in live broadcast, storage medium and electronic equipment
CN111097168B (en) Display control method and device in game live broadcast, storage medium and electronic equipment
EP4333440A1 (en) Video interaction method and apparatus, electronic device, and storage medium
CN111324252B (en) Display control method and device in live broadcast platform, storage medium and electronic equipment
CN109462779B (en) Video preview information playing control method, application client and electronic equipment
CN111726688A (en) Method and device for self-adapting screen projection picture in network teaching
CN109618216B (en) Method, device and equipment for displaying video loading state identification and storage medium
CN111760272B (en) Game information display method and device, computer storage medium and electronic equipment
CN112546621A (en) Voting method and device for live game, computer storage medium and electronic equipment
US11750879B2 (en) Video content display method, client, and storage medium
CN112616064A (en) Live broadcast room information processing method and device, computer storage medium and electronic equipment
CN111177167A (en) Augmented reality map updating method, device, system, storage and equipment
CN113676761B (en) Multimedia resource playing method and device and main control equipment
CN107592561B (en) Screen saver display method, device, intelligent remote controller and computer readable storage medium
CN113825022B (en) Method and device for detecting play control state, storage medium and electronic equipment
CN115562532A (en) Mouse input data processing method and device, storage medium and electronic equipment
CN112714331B (en) Information prompting method and device, storage medium and electronic equipment
CN114259734A (en) Game trial playing method and device, computer readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant