CN111666793A - Video processing method, video processing device and electronic equipment

Video processing method, video processing device and electronic equipment

Info

Publication number
CN111666793A
Authority
CN
China
Prior art keywords
video
expression
user
processing method
video processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910174970.6A
Other languages
Chinese (zh)
Inventor
吴峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910174970.6A
Publication of CN111666793A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition

Abstract

A video processing method, a video processing apparatus and an electronic device are disclosed. The video processing method comprises the following steps: acquiring a first video; obtaining expression feature data in a second video through face recognition, wherein the second video comprises a specific expression of a user watching the first video; determining, based on the acquired expression feature data, whether the specific expression occurs, and determining expression data of the specific expression; and calculating an expression score corresponding to the second video based on the expression data. In this way, the interaction value of the second video is increased and the user's interactive experience is improved.

Description

Video processing method, video processing device and electronic equipment
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video processing method, a video processing apparatus, and an electronic device.
Background
Funny videos make up a major share of video content, and on every large video platform a great deal of user demand in rudimentary forms similar to the following scenarios can be seen:
1. Enthusiastic netizens/video fans spontaneously clip high-energy, hilarious video segments and upload them to the network for other netizens to watch, or post them to forums and communities, inviting viewers to comment on whether they are funny and to test how long viewers can hold back their laughter.
2. Video practitioners/operators clip video highlights and aggregate them in topic/special-feature form to attract interaction or viewing retention.
Clearly, the chain of publishing a funny content video → spontaneous user interaction → sharing triggered by the interaction → new users continuing to join the interaction reflects a long-standing user demand. However, such interaction is currently inconvenient and lacks support from efficient interactive product forms.
Specifically, at the present stage users mostly just watch the screen, or interact mainly through text and bullet-screen comments, so interaction efficiency is low and the forms are not rich enough.
Accordingly, it is desirable to provide improved video processing schemes that can enhance interactivity.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. The embodiment of the application provides a video processing method, a video processing device and electronic equipment, wherein expression characteristic data of a second video of a user watching a first video is acquired through face recognition, and expression scores corresponding to the second video are further calculated, so that the interaction value of the second video is improved, and the interaction experience of the user is improved.
According to an aspect of the present application, there is provided a video processing method including: acquiring a first video; obtaining expression feature data in a second video through face recognition, wherein the second video comprises a specific expression of a user watching the first video; determining, based on the acquired expression feature data, whether the specific expression occurs, and determining expression data of the specific expression; and calculating an expression score corresponding to the second video based on the expression data.
In the above video processing method, acquiring the first video includes: obtaining the selected first video from a video list corresponding to the specific expression.
In the video processing method, the acquiring of the expression feature data in the second video through face recognition includes: synchronously recording the second video in the process of playing the first video; and acquiring expression feature data in the second video through face recognition in the process of recording the second video.
In the above video processing method, before obtaining the expression feature data in the second video through face recognition, the method includes: performing face pre-recognition on the user; and acquiring the expression feature data in the second video through face recognition in the process of recording the second video includes: starting to record the second video in response to success of the face pre-recognition.
In the above video processing method, determining whether the specific expression occurs based on the acquired expression feature data includes: matching the expression feature data with an expression feature model; and determining that the specific expression occurs in response to the similarity of the matching being greater than a predetermined threshold.
In the above video processing method, determining whether the specific expression occurs based on the acquired expression feature data includes: determining whether the expression feature data meets a predetermined condition; and determining that the specific expression occurs in response to the expression feature data satisfying a predetermined condition.
In the above video processing method, determining whether the specific expression occurs based on the acquired expression feature data further includes: in response to occurrence of the specific expression, presenting an expression effect corresponding to the specific expression to the user.
In the above video processing method, determining the expression data of the specific expression based on the obtained expression feature data includes: determining at least one of a first occurrence time, a single expression time length, and an expression occurrence number of the specific expression based on the identified expression features.
In the above video processing method, calculating an expression score corresponding to the second video based on the expression data includes: and calculating at least one of expression frequency, expression endurance and expression control force of the user based on the expression data.
In the above video processing method, after calculating the expression score corresponding to the second video based on the expression data, the method further includes: and presenting the expression scores to the user through a result presentation page.
In the video processing method, presenting the expression score to the user through a result presentation page includes: synthesizing the first video and the second video to generate a third video, wherein the first video is played in a picture-in-picture form in the third video.
In the above video processing method, the result presentation page includes at least one of: a thumbnail of the third video; a first option for the user to select other videos corresponding to the particular expression; a second option for the user to generate a presentation image of the expression score; and a third option for the user to take a video corresponding to the particular expression.
The above video processing method further includes: playing the third video in response to a selection instruction on the thumbnail of the third video.
According to another aspect of the present application, there is provided a video processing apparatus including: an acquisition unit configured to acquire a first video; an identification unit configured to acquire expression feature data in a second video through face recognition, wherein the second video comprises a specific expression of a user watching the first video; a determination unit configured to determine, based on the acquired expression feature data, whether the specific expression occurs and to determine expression data of the specific expression; and a calculation unit configured to calculate an expression score corresponding to the second video based on the expression data.
In the above video processing apparatus, the acquisition unit is configured to: obtaining the selected first video from a video list corresponding to the specific expression.
In the above video processing apparatus, the identifying unit includes: the recording subunit is used for synchronously recording the second video in the process of playing the first video; and the identification subunit is used for acquiring expression characteristic data in the second video through face identification in the process of recording the second video.
In the above video processing apparatus, the video processing apparatus further includes a pre-recognition unit configured to perform face pre-recognition on the user before obtaining expression feature data in the second video through face recognition; and the recording subunit is configured to: and responding to the success of the face pre-recognition, and starting to record the second video.
In the above video processing apparatus, the determining unit includes: the matching subunit is used for matching the expression feature data with the expression feature model; and a first determining subunit, configured to determine that the specific expression occurs in response to the similarity of the matching being greater than a predetermined threshold.
In the above video processing apparatus, the determining unit includes: a second determining subunit, configured to determine whether the expression feature data satisfies a predetermined condition; and a third determining subunit, configured to determine that the specific expression occurs in response to the expression feature data satisfying a predetermined condition.
In the above video processing apparatus, the determining unit is configured to: and responding to the specific expression, and presenting an expression effect corresponding to the specific expression to the user.
In the above video processing apparatus, the determining unit is configured to: determining at least one of a first occurrence time, a single expression time length, and an expression occurrence number of the specific expression based on the identified expression features.
In the above video processing apparatus, the calculation unit is configured to: and calculating at least one of expression frequency, expression endurance and expression control force of the user based on the expression data.
In the above video processing apparatus, the presentation unit is further configured to present, after calculating an expression score corresponding to the second video based on the expression data, the expression score to the user through a result presentation page.
In the above video processing apparatus, the presentation unit is configured to: synthesizing the first video and the second video to generate a third video, wherein the first video is played in a picture-in-picture form in the third video.
In the above video processing apparatus, the result presentation page includes at least one of: a thumbnail of the third video; a first option for the user to select other videos corresponding to the particular expression; a second option for the user to generate a presentation image of the expression score; and a third option for the user to take a video corresponding to the particular expression.
In the above video processing apparatus, further comprising a playing unit configured to play the third video in response to a selection instruction of a thumbnail of the third video.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory in which are stored computer program instructions which, when executed by the processor, cause the processor to perform the video processing method as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform a video processing method as described above.
The video processing method, the video processing device and the electronic equipment can acquire expression feature data of a second video of a user watching a first video through face recognition, and further calculate an expression score corresponding to the second video.
Therefore, the expression score of the second video can reflect the user expression in the second video by an objective score value, so that the interaction value of the second video is improved, the objective score value is convenient for comparison, evaluation and the like among users, and the interaction experience of the users is also improved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates a flow diagram of a video processing method according to an embodiment of the application.
FIG. 2 illustrates a schematic diagram of a results presentation page according to an embodiment of the present application.
Fig. 3 is a schematic diagram illustrating an application example of a video processing method according to an embodiment of the present application.
Fig. 4 illustrates a block diagram of a video processing apparatus according to an embodiment of the present application.
FIG. 5 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
Fig. 6 illustrates an exemplary cloud architecture according to an embodiment of the present application.
Fig. 7 illustrates a schematic diagram of the abstraction functional layers of a cloud architecture according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, the current interaction around funny videos is mainly text-based, for example through bullet-screen comments, which is inefficient and not rich enough in form. By contrast, the user's facial expressions and reactions while watching the video are more vivid and concrete and can trigger a new round of interaction.
In particular, interaction forms in which a video replies to a video have appeared in some community-type products, such as picture-in-picture lip-sync interaction or co-shot video interaction. In picture-in-picture lip-sync interaction, a material video to be imitated or spoofed is first selected and loaded into a small window at the lower left corner for playback, while the video performed by the user is recorded and displayed in the large main window, and the whole is synthesized into a single video file. In co-shot video interaction, the material video is loaded into a window on the left or right side, while the user's performance picture is recorded in the window on the other side, and the two are synthesized into a short video. These two types of video interaction can relate the videos of both interacting parties, but their interactivity and sociability are still not strong enough.
For the above types of video interaction, the applicant of the present application finds that, first, the interaction effect and expressive force of the video interaction are limited, so interactivity and spreading power are limited; and second, users lack the motivation to share after participating in the video interaction, mainly because current interactive videos lack a unified objective evaluation mechanism.
In view of the above technical problem, the basic idea of the present application is to obtain expression feature data of a second video of a user watching a first video through face recognition, and further calculate an expression score corresponding to the second video based on the expression feature data, thereby obtaining an objective score of the second video containing an expression of the user.
Specifically, the video processing method, the video processing apparatus and the electronic device of the present application first acquire a first video, then acquire expression feature data in a second video through face recognition, the second video including a specific expression of a user watching the first video, then determine, based on the acquired expression feature data, whether the specific expression occurs and determine expression data of the specific expression, and finally calculate an expression score corresponding to the second video based on the expression data.
In this way, the specific expression of the user and the expression data thereof are determined by the expression feature data obtained by face recognition, so that whether the specific expression occurs and the degree of the specific expression can be accurately determined, and the calculated expression score corresponding to the second video can reflect the user expression in the second video by an objective score value.
Therefore, the second video avoids the lack of a unified objective evaluation mechanism that arises when only the video itself is used for interaction; a new evaluation dimension in terms of expression can be added to different videos, and the interaction value of the second video is improved.
In addition, the expression score can objectively reflect the degree of the user's specific expression in the second video, so the user can compare and evaluate expressions against other users and other videos based on the score value and is more motivated to share the second video with other users, which enhances the user's interactive experience.
Moreover, because the expression scores are objective and intuitive, users are more motivated to record expression videos, which improves the richness of available video material.
It is to be noted that, in the video processing method, the video processing apparatus, and the electronic device provided in the present application, the first video is not limited to the smiling-like video, and accordingly, the specific expression of the user is not limited to the smiling expression. For example, the first video may be a tragedy-like video, and accordingly, the specific expression of the user is a sad expression or a crying expression. For another example, the first video may be a horror video, and accordingly, the specific expression of the user is a panic expression.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary method
Fig. 1 illustrates a flow diagram of a video processing method according to an embodiment of the application.
As shown in fig. 1, a video processing method according to an embodiment of the present application includes: S110, acquiring a first video; S120, obtaining expression feature data in a second video through face recognition, wherein the second video comprises a specific expression of a user watching the first video; S130, determining, based on the acquired expression feature data, whether the specific expression occurs and determining expression data of the specific expression; and S140, calculating an expression score corresponding to the second video based on the expression data.
In step S110, a first video is acquired. Here, the first video is a video viewed by the user, and may also be referred to as a material video in the present application. As described above, the material video may be some type of video, such as a fun-type video, a horror-type video, and the like.
In an embodiment of the present application, the sources of the material video may include: videos provided by the video service provider, videos provided by participating users, and videos provided by third parties through business cooperation. For example, the material provided by the video service provider may include re-edited clips of high-energy moments in various funny programs, and the material provided by third parties may include commercial placements of a funny nature, funny brand-produced videos, and so on.
Also, the first video may be presented to the user in the form of a page, for example a list of videos corresponding to a certain expression, such as a list of funny videos. In the page, to prevent the user from seeing the first video in advance, only an overview of the first video, such as a cover image and title text, may be presented, without providing a function for previewing the content of the material video before the user watches it. Accordingly, by accessing the page, a user can select a specific material video to start watching and engage in video interaction.
Therefore, in the video processing method according to the embodiment of the present application, acquiring the first video includes: obtaining the selected first video from a video list corresponding to the specific expression.
In step S120, expression feature data in a second video including a specific expression of a user viewing the first video is obtained through face recognition.
That is, the second video is a video recorded by the user while watching the first video, and accordingly, a specific expression, such as a smile expression or the like, of the user while watching the first video is recorded in the second video. In this way, expressive feature data of a specific expression of the user in the second video, for example, smile feature data including data of 108 feature points of a face of a person at the time of smiling, can be acquired by face recognition.
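As an illustration of this step, the following is a minimal sketch of per-frame expression feature extraction in Python, assuming OpenCV and dlib with a publicly available 68-point landmark model as a stand-in for the 108 feature points mentioned above; the model file path and helper names are illustrative, not part of the application.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# 68-point model used here as a stand-in; the application describes 108 feature points.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_expression_features(frame):
    """Return an (N, 2) array of facial landmark coordinates, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)

def features_for_video(path):
    """Walk through the second video and collect (timestamp, features) pairs per frame."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    samples = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        samples.append((idx / fps, extract_expression_features(frame)))
        idx += 1
    cap.release()
    return samples
```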
In this embodiment of the application, the second video may be a video that is pre-recorded and stored by a user, and then the expression feature data is acquired through face recognition. In addition, the user can also record the second video and synchronously acquire expression feature data through face recognition.
That is, in the video processing method according to the embodiment of the present application, acquiring expression feature data in the second video by face recognition includes: synchronously recording the second video in the process of playing the first video; and acquiring expression feature data in the second video through face recognition in the process of recording the second video.
In addition, during the process of playing the first video, the recorded second video can also be synchronously presented in a split-screen mode. For example, the recorded second video may be presented in a small window on the screen in a picture-in-picture format.
In addition, in the embodiment of the application, to improve the accuracy of face recognition, face pre-recognition may be performed on the user before expression feature data are acquired through face recognition. For example, a face-shape guide box may be presented to guide the user through face recognition; specifically, the 108 facial points described above may be scanned to determine whether these point-location feature data can be extracted. That is, on the obtained face image, feature points of the mouth, cheeks and so on need to be located and tracked in order to extract expression features. Face pre-recognition makes it possible to judge whether extraction of the user's face feature vector can be completed, i.e., whether face recognition of the user succeeds, and avoids situations in which factors such as the position of the user's face degrade recognition accuracy.
It is noted that during the pre-recognition of the face, there is no need to determine the specific expression of the user, e.g., only the facial expression feature data of the user need be obtained, and no further determination is needed as to whether the specific expression is present.
Therefore, in the video processing method according to the embodiment of the present application, before acquiring the expression feature data in the second video through face recognition, the method includes: performing face pre-recognition on the user; and acquiring the expression feature data in the second video through face recognition in the process of recording the second video includes: starting to record the second video in response to success of the face pre-recognition.
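A hedged sketch of this pre-recognition gate follows; it reuses the illustrative extract_expression_features helper from the sketch above and simply waits until a full landmark set can be extracted before recording of the second video starts. The frame limit and required point count are assumptions.

```python
def face_pre_recognition(cap, required_points=68, max_frames=150):
    """Return True once a frame yields a complete landmark set, False if the limit is reached."""
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        pts = extract_expression_features(frame)
        if pts is not None and len(pts) >= required_points:
            return True   # pre-recognition succeeded; recording of the second video may start
    return False          # keep showing the face-shape guide box to the user
```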
In step S130, it is determined, based on the acquired expression feature data, whether the specific expression occurs, and expression data of the specific expression are determined. Here, in the embodiment of the present application, this determination can be made in various ways based on the acquired expression feature data.
Specifically, in one example, whether the specific expression occurs may be determined by matching the extracted point-location feature data with a feature model in a database. For example, the extracted point-location feature data are matched with a smile feature model in the database, and if the similarity between the two is greater than a predetermined threshold, the match is considered passed and a smile expression is determined to have occurred. Of course, those skilled in the art will understand that the extracted point-location feature data may also be matched with feature models of other expressions in the database, such as a panic expression, to determine whether a panic expression occurs.
Further, in another example, it may be determined whether the extracted point-location feature data satisfy a predetermined condition. For example, an angle formed by key points around the mouth and other facial key points is calculated from the extracted point-location feature data, and when the corresponding angle is greater than a predetermined threshold, a smile expression is determined to have occurred. Of course, those skilled in the art will understand that the extracted point-location feature data may also be checked against predetermined conditions corresponding to other expressions, such as an eye-widening or mouth-opening condition corresponding to a panic expression, or a facial-contraction condition corresponding to a crying expression.
Therefore, in the video processing method according to the embodiment of the present application, determining whether the specific expression occurs based on the acquired expression feature data includes: matching the expression feature data with an expression feature model; and determining that the specific expression occurs in response to the similarity of the matching being greater than a predetermined threshold.
Also, in the video processing method according to an embodiment of the present application, determining whether the specific expression occurs based on the acquired expression feature data includes: determining whether the expression feature data meets a predetermined condition; and determining that the specific expression occurs in response to the expression feature data satisfying a predetermined condition.
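The two determination strategies above can be sketched as follows; this is a minimal sketch assuming the expression feature model is stored as a reference landmark array, with the similarity threshold, angle threshold and mouth landmark indices (68-point numbering from the earlier sketch) being illustrative choices rather than values taken from the application.

```python
import numpy as np

def _normalize(points):
    """Remove translation and scale so that only the facial shape is compared."""
    centered = points - points.mean(axis=0)
    return centered / (np.linalg.norm(centered) + 1e-8)

def matches_expression_model(features, model_features, threshold=0.9):
    """Strategy 1: match extracted point-location features against a stored expression feature model."""
    a, b = _normalize(features).ravel(), _normalize(model_features).ravel()
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return similarity > threshold

def satisfies_smile_condition(features, angle_threshold_deg=10.0):
    """Strategy 2: a geometric predetermined condition, e.g. how far the mouth corners are raised."""
    left, right = features[48], features[54]      # mouth corners in the 68-point numbering
    lip_center = (features[51] + features[57]) / 2.0
    def corner_angle(corner):
        dx = abs(corner[0] - lip_center[0]) + 1e-8
        dy = lip_center[1] - corner[1]            # image y grows downward, so raised corners give dy > 0
        return np.degrees(np.arctan2(dy, dx))
    return min(corner_angle(left), corner_angle(right)) > angle_threshold_deg
```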
In the embodiment of the application, so that the user can know more intuitively whether his or her specific expression has been recognized, an expression effect corresponding to the specific expression may be presented to the user when the specific expression is determined to have occurred. For example, once it is determined from the expression feature data that a smiling expression has occurred (regardless of the degree of smiling, such as a slight smile or a laugh), a preset smile value may be decremented, e.g., the smile value shown in the visualization bar is shortened. In addition, a visual animation with a particle effect may be presented, bullet-screen comments may be shown, a special-effect background sound may be played, and so on.
Moreover, in the embodiment of the application, when the second video is recorded, even if it is determined that the specific expression does not occur, a bullet screen can be played at predetermined time intervals, for example, 10 seconds, so as to enhance the interaction feeling of the user.
Therefore, in the video processing method according to the embodiment of the present application, determining whether the specific expression occurs based on the acquired expression feature data further includes: in response to occurrence of the specific expression, presenting an expression effect corresponding to the specific expression to the user.
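The real-time feedback described above can be sketched as a simple loop; the UI callbacks (update_smile_bar, show_bullet_comment, play_effect), the penalty per detected expression and the 10-second bullet-comment interval are hypothetical placeholders, not details fixed by the application.

```python
def feedback_loop(frames, fps, detect_expression,
                  update_smile_bar, show_bullet_comment, play_effect,
                  start_value=100, penalty=10, bullet_interval_s=10.0):
    """Decrement a preset smile value once per detected expression occurrence and keep the
    interaction alive with periodic bullet-screen comments when nothing is detected."""
    smile_value = start_value
    last_bullet = 0.0
    was_detected = False
    for idx, frame in enumerate(frames):
        t = idx / fps
        detected = detect_expression(frame)
        if detected and not was_detected:          # a new occurrence of the specific expression
            smile_value = max(smile_value - penalty, 0)
            update_smile_bar(smile_value)          # shorten the visualization bar
            play_effect("particle_animation")      # e.g. particle animation / special background sound
        elif not detected and t - last_bullet >= bullet_interval_s:
            show_bullet_comment(t)                 # bullet-screen comment at a fixed interval
            last_bullet = t
        was_detected = detected
    return smile_value
```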
In the embodiment of the application, based on the expression feature data, in the case that it is determined that a specific expression occurs, expression data of the specific expression is further determined. Here, the expression data is data for reflecting the expression degree of the user in the second video, and specifically, may include a first occurrence time, a single expression time length, an expression occurrence number, and the like of the specific expression.
Taking smiles as an example, the point in time at which the user first smiles while watching the first video, the duration of each smile, and the number of times the user smiles can be determined. For the number of smiles, it may be stipulated that a continuous smile counts as one smile; if the user returns to a normal expression partway through and then smiles again, that counts as another smile regardless of the interval.
Of course, those skilled in the art will understand that the expression data may also reflect the intensity of a single expression, such as a slight smile, a laugh, or crying.
Therefore, in the video processing method according to the embodiment of the present application, determining the expression data of the specific expression based on the acquired expression feature data includes: determining at least one of a first occurrence time, a single expression time length, and an expression occurrence number of the specific expression based on the identified expression features.
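A minimal sketch of deriving this expression data from per-frame detection results follows, assuming one boolean flag per frame; the counting rule matches the description above (a continuous expression counts once, and a reappearance after returning to normal counts again).

```python
def summarize_expression(frame_flags, fps):
    """frame_flags: one boolean per frame, True when the specific expression is detected."""
    first_time = None
    durations = []          # one entry per continuous expression segment, in seconds
    run = 0
    for idx, flag in enumerate(frame_flags):
        if flag:
            if first_time is None:
                first_time = idx / fps      # first occurrence time of the specific expression
            run += 1
        elif run:
            durations.append(run / fps)     # a continuous segment just ended: one occurrence
            run = 0
    if run:
        durations.append(run / fps)
    return {
        "first_occurrence_s": first_time,
        "single_durations_s": durations,    # single expression time lengths
        "occurrence_count": len(durations), # expression occurrence number
    }
```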
In step S140, an expression score corresponding to the second video is calculated based on the expression data.
That is, based on the expression data described above, the expression score may be further calculated. Specifically, the expression control force may be calculated based on the first occurrence time of the specific expression; for example, the later the specific expression first occurs, the higher the expression control force. In addition, the expression endurance may be calculated based on the single expression time length of the specific expression; for example, the longer a single occurrence of the specific expression lasts, the higher the expression endurance. Furthermore, the expression frequency may be calculated based on the number of times the specific expression occurs; for example, the more often the specific expression occurs, the higher the expression frequency.
Therefore, in the video processing method according to the embodiment of the present application, calculating the expression score corresponding to the second video based on the expression data includes: and calculating at least one of expression frequency, expression endurance and expression control force of the user based on the expression data.
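Since the text names the score dimensions but not their exact formulas, the following sketch shows one possible mapping under stated assumptions: control force grows with a later first occurrence, endurance with the longest single expression, and frequency with occurrences per minute. The scaling constants are illustrative only.

```python
def expression_scores(summary, video_length_s):
    """summary: the dict produced by summarize_expression(); returns scores on a 0-100 scale."""
    first = summary["first_occurrence_s"]
    durations = summary["single_durations_s"]
    count = summary["occurrence_count"]
    # Expression control force: the later the specific expression first appears, the higher the score.
    control = 100.0 if first is None else 100.0 * min(first / video_length_s, 1.0)
    # Expression endurance: driven by the longest single continuous expression.
    longest = max(durations) if durations else 0.0
    endurance = 100.0 * min(longest / video_length_s, 1.0)
    # Expression frequency: more occurrences per minute give a higher score, capped at 100.
    frequency = min(count / (video_length_s / 60.0 + 1e-8) * 10.0, 100.0)
    return {"control_force": control, "endurance": endurance, "frequency": frequency}
```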
Therefore, the obtained expression scores can objectively and intuitively evaluate the user's expression in the second video recorded while watching the first video, solving the problem that the second video alone is insufficiently objective and intuitive.
In order to facilitate the user to know the expression score of the user, the expression score may be presented to the user after the expression score is calculated, and specifically, in this embodiment of the application, the expression score may be presented to the user through a result presentation page.
That is, in the video processing method according to the embodiment of the present application, after calculating the expression score corresponding to the second video based on the expression data, the method further includes: and presenting the expression scores to the user through a result presentation page.
FIG. 2 illustrates a schematic diagram of a results presentation page according to an embodiment of the present application. As shown in fig. 2, in addition to expression scores such as expression frequency 210, expression endurance 220 and expression control force 230, the results presentation page 200 may include the second video of the user watching the first video, so that the user can review his or her own expressions.
Here, to make it convenient for the user to review his or her expression alongside the material, the first video and the second video may be synthesized into a third video, so that the user can review, in a single video, his or her expression while watching the material video. For example, the first video may be displayed within the second video in picture-in-picture form. Of course, those skilled in the art will understand that the second video may instead be displayed as a picture-in-picture within the first video. Alternatively, only the first video or only the second video may be presented in the result presentation page.
Therefore, in the video processing method according to the embodiment of the application, presenting the expression score to the user through the result presentation page includes: synthesizing the first video and the second video to generate a third video, wherein the first video is played in a picture-in-picture form in the third video.
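A minimal sketch of this picture-in-picture synthesis using OpenCV follows; the codec, window scale and bottom-left placement are assumptions, and audio handling is omitted for brevity.

```python
import cv2

def compose_picture_in_picture(second_path, first_path, out_path, scale=0.3):
    """Composite the first (material) video as a small window over the second (reaction) video."""
    main = cv2.VideoCapture(second_path)   # the user's reaction video fills the frame
    pip = cv2.VideoCapture(first_path)     # the material video plays in a small window
    fps = main.get(cv2.CAP_PROP_FPS) or 25.0
    w = int(main.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(main.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    pw, ph = int(w * scale), int(h * scale)
    while True:
        ok_main, frame = main.read()
        if not ok_main:
            break
        ok_pip, small = pip.read()
        if ok_pip:
            small = cv2.resize(small, (pw, ph))
            frame[h - ph - 10:h - 10, 10:10 + pw] = small   # bottom-left corner with a 10 px margin
        writer.write(frame)
    main.release()
    pip.release()
    writer.release()
```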
As shown in fig. 2, in the result presentation page, a thumbnail 240 of the third video, for example, a cover map of the third video, may be included. Of course, if only the first video or the second video is presented in the result presentation page, a thumbnail of the first video or the second video may also be included. And, if the user selects, for example, clicks on a thumbnail of the third video, the third video may be played. In addition, in the embodiment of the present application, the user may also save the third video by selecting a thumbnail of the third video.
Further, as shown in FIG. 2, the results presentation page may include other functional options. Specifically, a first option 250 for the user to select other videos corresponding to the particular expression may be included, such that the user may select to view other material videos. In addition, a second option 260 for the user to generate a presentation image of the expression score may be included, so that the user may share his/her own performance of watching the first video to other users in an intuitive and objective manner through the presentation image, thereby improving user interactivity. Further, a third option 270 for the user to photograph a video corresponding to the specific expression may be included, that is, the user may photograph a material video corresponding to the specific expression by himself.
Therefore, in the video processing method according to the embodiment of the present application, the result presentation page includes at least one of: a thumbnail of the third video; a first option for the user to select other videos corresponding to the particular expression; a second option for the user to generate a presentation image of the expression score; and a third option for the user to take a video corresponding to the particular expression.
Further, in the above video processing method, the method further includes: and responding to a selection instruction of the thumbnail of the third video, and playing the third video.
Next, a process in which the user captures the material video by himself through the third option 270 as described above will be described.
First, after the user selects the third option, the shooting function module can be invoked on the client to shoot a short video. For the shot video, the user can add background music and special effects such as slow motion and repetition, and can preview the shot video content in real time.
After the video is shot, the user synthesizes a video file on the client and uploads it to the video material library for publication; during uploading, the upload progress and result can be shown in a popup layer. Specifically, the client can actively monitor the upload result, so that once the upload succeeds, the user is notified of the success via a popup, and a function guiding the user to share the shot video can also be provided. For example, the user initiates sharing of the shot video by tapping the popup layer. In addition, videos uploaded to the video material library by users may go through operational screening and algorithmic distribution so that they can be recommended and shown to other users.
Therefore, the problem that the number of the material videos provided by the video server is limited can be solved by recording the material videos by the User, uploading the material videos to the video material library and sharing the material videos with other users, and a UGC (User Generated Content) closed loop for supplying the material videos and participating in interactive consumption is formed.
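As a rough illustration of the upload step in this UGC loop, the sketch below posts a recorded material video to a hypothetical material-library endpoint; the URL, form field and success check are assumptions, not an API defined by the application.

```python
import requests

def upload_material_video(path, upload_url="https://example.com/api/material/upload"):
    """Upload a shot material video; on success the client can show the popup and sharing guide."""
    with open(path, "rb") as f:
        resp = requests.post(upload_url, files={"video": f}, timeout=60)
    return resp.ok
```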
Application example
Fig. 3 is a schematic diagram illustrating an application example of a video processing method according to an embodiment of the present application.
As described above, there are a large number of funny videos, and movies and TV series also contain a great deal of laugh-inducing content. With the video processing method described above, interaction and social engagement based on short videos can be realized, and the value of funny videos can be effectively amplified.
By using AI technologies such as face recognition, a product with a strong sense of technology can be offered to users, which is inherently attractive and spreads on its own. When the video processing method is applied to smile expressions, smile recognition can serve as the core; combined with the production capability and presentation form of picture-in-picture videos, an interactive and viral gameplay can be built in a game-like product form, providing marketing leverage for the continued fermentation of funny videos.
Specifically, as shown in fig. 3, the laugh material is first presented on the client in the form of an activity page. As described above, only the cover image and title text of the laugh material are shown on the activity page; previewing the material video content before the interactive challenge is not supported.
Then, the user accesses the activity page on the client, selects a specific piece of laugh material to start a challenge, and enters the interaction phase. Specifically, the user first selects a specific piece of laugh material as the challenge object and issues a request for a laugh challenge. The system then starts the challenge phase, and the client loads and displays a challenge preparation page. Meanwhile, the server receives the request and delivers data such as the cover image of the corresponding material. The client loads the material cover image, for example expanding it to full screen and then shrinking it, with an animation, into the picture-in-picture small window at the lower left corner (only the cover is shown; preview playback is not started). While the material cover is displayed in the small window, the client shows the face-shape guide box to guide the user through face recognition.
If face recognition completes extraction of the face feature vector, i.e., face recognition succeeds, a 3-second countdown starts automatically. During the countdown, smiling-face recognition is not performed (or it may be performed, but no determination is made from it). Also during the countdown, the client finishes loading the material video content from the server, and the laugh material moves from the picture-in-picture small window to full-screen display in the main window. The picture-in-picture small window then turns on the front camera for framing and shows the real-time viewfinder picture.
Meanwhile, the top of the interactive interface displays a visualization bar representing the user's smile points, with an initial default smile point value of, for example, 100. When the countdown ends, the laugh material video starts playing automatically in the main window, the small window starts real-time video recording, and the front camera records the picture of the user watching the laugh material. At the same time, the client recognizes and judges changes in the facial point-location features in real time to detect smiles.
Here, the interaction time is determined by the duration of the material video; for example, the maximum duration is controlled not to exceed 90 seconds. When the laugh material finishes playing, recording of the small-window picture content ends synchronously. The client then synthesizes a video based on the data of the interaction process. Specifically, the video is in picture-in-picture form, where the main window shows the front-camera content recorded in real time and the small window carries the laugh material video. In addition, the smile point values, the visualization bar, and the bullet-screen comments, visual effects and so on that appeared in real time are rendered into the main-window picture according to their time points of occurrence. Special-effect background sounds are likewise synthesized with the video content according to their time points.
Once the interactive video is successfully synthesized, the client displays a challenge-end page. The functions carried by the challenge-end page may include: guided sharing, an entry back to the activity page to select other material challenges, a UGC entry for publishing laugh videos, watching the playback video, and so on. The guided-sharing information may include: a visual display of the interaction participation data, title/caption stimulation, guiding copy, and a share button. The interaction participation data include smile points, smile frequency, endurance and control force; for example, only the figures may be presented, without units. The user can tap the "challenge other laugh material" function button to return directly to the activity page and select other laugh material to continue the interactive challenge. The user can tap the "create my own laugh material" function area to invoke the shooting function module and shoot a short video. The user can tap the cover image of the challenge video on the challenge-end page to watch a playback of the interaction process (the content is the synthesized video), and can tap the save button to keep the locally generated video in the phone's photo album.
After the challenge ends, the user can publish a laugh video, upload it to the material library, and initiate sharing. Moreover, laugh material videos uploaded to the material library by users can be recommended and displayed on the activity page after operational screening and algorithmic distribution.
Exemplary devices
Fig. 4 illustrates a block diagram of a video processing apparatus according to an embodiment of the present application.
As shown in fig. 4, the video processing apparatus 300 includes: an acquisition unit 310 configured to acquire a first video; an identification unit 320 configured to acquire expression feature data in a second video through face recognition, the second video including a specific expression of a user watching the first video acquired by the acquisition unit 310; a determination unit 330 configured to determine, based on the expression feature data acquired by the identification unit 320, whether the specific expression occurs and to determine expression data of the specific expression; and a calculation unit 340 configured to calculate an expression score corresponding to the second video based on the expression data determined by the determination unit 330.
In an example, in the above video processing apparatus 300, the obtaining unit 310 is configured to: obtaining the selected first video from a video list corresponding to the specific expression.
In one example, in the above video processing apparatus 300, the identifying unit 320 includes: a recording subunit, configured to record the second video synchronously in the process of playing the first video acquired by the acquiring unit 310; and the identification subunit is used for acquiring expression characteristic data in the second video through face identification in the process of recording the second video by the recording subunit.
In an example, in the above video processing apparatus 300, a pre-recognition unit is further included, configured to perform face pre-recognition on the user before the recognition unit 320 acquires the expression feature data in the second video through face recognition; and the recording subunit is configured to: and responding to the success of the face pre-recognition, and starting to record the second video.
In one example, in the above-described video processing apparatus 300, the determining unit 330 includes: a matching subunit, configured to match the expression feature data acquired by the identifying unit 320 with an expression feature model; and a first determining subunit configured to determine that the specific expression occurs in response to the similarity of the matching by the matching subunit being greater than a predetermined threshold.
In one example, in the above-described video processing apparatus 300, the determining unit 330 includes: a second determining subunit configured to determine whether the expression feature data acquired by the identifying unit 320 satisfies a predetermined condition; and a third determining subunit operable to determine that the specific expression occurs in response to the second determining subunit determining that the expression feature data satisfies a predetermined condition.
In an example, in the above video processing apparatus 300, the determining unit 330 is configured to: and responding to the specific expression, and presenting an expression effect corresponding to the specific expression to the user.
In an example, in the above video processing apparatus 300, the determining unit 330 is configured to: at least one of the first occurrence time, the single expression time length, and the expression occurrence number of the specific expression is determined based on the expression feature data acquired by the recognition unit 320.
In an example, in the above video processing apparatus 300, the computing unit 340 is configured to: at least one of expression frequency, expression persistence force, and expression control force of the user is calculated based on the expression data determined by the determination unit 330.
In one example, in the above video processing apparatus 300, a presentation unit is further included, configured to present the expression score to the user through a result presentation page after the calculation unit 340 calculates the expression score corresponding to the second video based on the expression data.
In one example, in the above video processing apparatus 300, the presenting unit is configured to: synthesizing the first video and the second video to generate a third video, wherein the first video is played in a picture-in-picture form in the third video.
In one example, in the above video processing apparatus 300, the result presentation page includes at least one of: a thumbnail of the third video; a first option for the user to select other videos corresponding to the particular expression; a second option for the user to generate a presentation image of the expression score; and a third option for the user to take a video corresponding to the particular expression.
In one example, in the above-described video processing apparatus 300, a playing unit is further included for playing the third video in response to a selection instruction of a thumbnail of the third video.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the above-described video processing apparatus 300 have been described in detail in the above description of the video processing method with reference to fig. 1 and 2, and thus, a repetitive description thereof will be omitted.
As described above, the video processing apparatus 300 according to the embodiment of the present application can be implemented in various terminal devices, such as a smartphone of a user. In one example, the video processing apparatus 300 according to the embodiment of the present application may be integrated into a terminal device as one software module and/or hardware module. For example, the video processing apparatus 300 may be a software module in an operating system of the terminal device, or may be an application developed for the terminal device; of course, the video processing apparatus 300 may also be one of many hardware modules of the terminal device.
Alternatively, in another example, the video processing apparatus 300 and the terminal device may be separate devices, and the video processing apparatus 300 may be connected to the terminal device through a wired and/or wireless network and transmit the interactive information according to an agreed data format.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 5.
FIG. 5 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 5, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer readable storage medium and executed by the processor 11 to implement the video processing methods of the various embodiments of the present application described above and/or other desired functions. Various contents such as expression scores may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 13 may include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various information including expression scores corresponding to the second video and the like to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 5, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the video processing method according to various embodiments of the present application described in the "exemplary methods" section of this specification, supra.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also take the form of a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps of the video processing method according to the various embodiments of the present application described above in the "exemplary methods" section of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Exemplary cloud architecture
It is to be noted that the video processing method according to the embodiments of the present application may adopt a system architecture based on a cloud computing environment, referred to as a cloud architecture for short. Those skilled in the art will appreciate that cloud computing is a service provisioning model that enables on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processors, memory, storage media, applications, virtual machines, and services). The shared resource pool can be provisioned and released rapidly with minimal management effort or interaction with the service provider.
Fig. 6 illustrates an exemplary cloud architecture according to an embodiment of the present application. As shown in fig. 6, an exemplary cloud architecture 20 includes a series of cloud computing nodes 21. Through these cloud computing nodes 21, local computing devices such as an in-vehicle computer 22A, a smartphone 22B, a personal digital assistant 22C, and a tablet computer 22D can communicate over the Internet. The cloud computing nodes 21 may communicate with each other and may be grouped, virtually or physically, into networks of nodes such as private, public, community, or hybrid clouds, thereby providing cloud users with cloud services, such as infrastructure, software, or platforms, that do not require resources to be maintained on the local computing devices. Those skilled in the art will appreciate that the computing devices illustrated in fig. 6 are merely examples, that the cloud computing environment may be interconnected with any other computing device, directly or indirectly, via a network, and that this application is not limited in this respect.
Fig. 7 illustrates a schematic diagram of the abstraction functional layers of a cloud architecture according to an embodiment of the present application.
As shown in FIG. 7, a set of abstraction functional layers provided by cloud architecture 20 includes hardware and software layers, a virtualization layer, a management layer, and a working layer. Those skilled in the art will appreciate that the components, layers, and functions illustrated in fig. 7 are merely examples to illustrate features of cloud architecture 20 and are not intended to limit the present application in any way.
The hardware and software layers include a range of hardware and software, where the hardware includes, but is not limited to, hosts, RISC (Reduced Instruction Set Computer) architecture servers, blade servers, storage devices, and networks and network components, and the software includes web application server software, database software, and the like.
The virtualization layer includes a series of virtual entities, including but not limited to virtual servers, virtual storage, virtual networks, virtual private networks, virtual applications and operating systems, and virtual clients.
The management layer implements the following functions: first, a resource provisioning function that provides dynamic procurement of the computing and other resources needed to perform tasks within the cloud architecture; second, a metering and pricing function that tracks the cost of resource use within the cloud architecture and bills or prices resource consumption; third, a security protection function that authenticates cloud users and tasks and protects data and other resources; fourth, a user portal function that provides cloud users and system administrators with access to the cloud architecture; fifth, a service management function that allocates and manages cloud computing resources to meet the requirements of the required services; and sixth, a Service Level Agreement (SLA) planning and fulfillment function that pre-arranges and procures the cloud computing resources required to meet an SLA.
The working layer provides functional examples that can be implemented by the cloud architecture, for example, the video processing method according to the embodiment of the present application as described above.
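Purely as an illustration of how the working layer might expose such a function as a cloud service, the following sketch wraps a scoring step behind an HTTP endpoint using Flask. The route name, request fields, and scoring rule are assumptions made for this sketch and are not part of this application.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/expression-score", methods=["POST"])
def expression_score_endpoint():
    # The request body is assumed to carry expression data already extracted
    # on the local computing device; the field names are hypothetical.
    data = request.get_json(force=True)
    score = max(0.0, 100.0 - 5.0 * float(data.get("occurrence_count", 0)))
    return jsonify({"expression_score": score})

if __name__ == "__main__":
    # In a cloud deployment, such a service would typically sit behind the
    # management layer's user portal and service management functions.
    app.run(host="0.0.0.0", port=8080)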
The foregoing describes the general principles of the present application in conjunction with specific embodiments; however, it should be noted that the advantages, effects, and the like mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is provided for purposes of illustration and description only; it is not intended to be exhaustive or to limit the present application to the precise details disclosed.
The block diagrams of the devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (15)

1. A video processing method, comprising:
acquiring a first video;
obtaining expression feature data in a second video through face recognition, wherein the second video comprises a specific expression of a user watching the first video;
determining, based on the obtained expression feature data, whether the specific expression occurs and expression data of the specific expression; and
calculating an expression score corresponding to the second video based on the expression data.
2. The video processing method of claim 1, wherein acquiring the first video comprises:
acquiring the selected first video from a video list corresponding to the specific expression.
3. The video processing method of claim 1, wherein the obtaining of the expression feature data in the second video through face recognition comprises:
synchronously recording the second video in the process of playing the first video; and
acquiring expression feature data in the second video through face recognition in the process of recording the second video.
4. The video processing method according to claim 3, wherein before the obtaining of the expression feature data in the second video through face recognition, the method further comprises:
carrying out face pre-recognition on the user; and
the obtaining of the expression feature data in the second video through face recognition in the process of recording the second video comprises:
starting to record the second video in response to the face pre-recognition succeeding.
5. The video processing method of claim 1, wherein determining whether the specific expression occurs based on the obtained expression feature data comprises:
matching the expression feature data with an expression feature model; and
determining that the specific expression occurs in response to the similarity of the match being greater than a predetermined threshold.
6. The video processing method of claim 1, wherein determining whether the specific expression occurs based on the obtained expression feature data comprises:
determining whether the expression feature data meets a predetermined condition; and
determining that the specific expression occurs in response to the expression feature data meeting the predetermined condition.
7. The video processing method of claim 1, wherein determining whether the specific expression occurs based on the obtained expression feature data further comprises:
presenting, to the user, an expression effect corresponding to the specific expression in response to the specific expression occurring.
8. The video processing method of claim 1, wherein determining the expression data of the specific expression based on the obtained expression feature data comprises:
determining at least one of a first occurrence time, a single-expression duration, and a number of expression occurrences of the specific expression based on the obtained expression feature data.
9. The video processing method of claim 1, wherein calculating the expression score corresponding to the second video based on the expression data comprises:
calculating at least one of an expression frequency, an expression endurance, and an expression control force of the user based on the expression data.
10. The video processing method of claim 1, wherein after calculating the expression score corresponding to the second video based on the expression data, the method further comprises:
presenting the expression score to the user through a result presentation page.
11. The video processing method of claim 10, wherein presenting the expression score to the user through the result presentation page comprises:
synthesizing the first video and the second video to generate a third video, wherein the first video is played in a picture-in-picture form in the third video.
12. The video processing method of claim 11, wherein the result presentation page comprises at least one of:
a thumbnail of the third video;
a first option for the user to select other videos corresponding to the particular expression;
a second option for the user to generate a presentation image of the expression score; and
a third option for the user to take a video corresponding to the specific expression.
13. The video processing method of claim 12, further comprising:
playing the third video in response to a selection instruction on the thumbnail of the third video.
14. A video processing apparatus, comprising:
an acquisition unit configured to acquire a first video;
an identification unit configured to acquire expression feature data in a second video through face recognition, wherein the second video comprises a specific expression of a user watching the first video;
a determination unit configured to determine, based on the obtained expression feature data, whether the specific expression occurs and expression data of the specific expression; and
a calculating unit configured to calculate an expression score corresponding to the second video based on the expression data.
15. An electronic device, comprising:
a processor; and
a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the video processing method of any of claims 1-13.
CN201910174970.6A 2019-03-08 2019-03-08 Video processing method, video processing device and electronic equipment Pending CN111666793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910174970.6A CN111666793A (en) 2019-03-08 2019-03-08 Video processing method, video processing device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910174970.6A CN111666793A (en) 2019-03-08 2019-03-08 Video processing method, video processing device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111666793A true CN111666793A (en) 2020-09-15

Family

ID=72382008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910174970.6A Pending CN111666793A (en) 2019-03-08 2019-03-08 Video processing method, video processing device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111666793A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180027307A1 (en) * 2016-07-25 2018-01-25 Yahoo!, Inc. Emotional reaction sharing
CN106792170A (en) * 2016-12-14 2017-05-31 合网络技术(北京)有限公司 Method for processing video frequency and device
CN106878809A (en) * 2017-02-15 2017-06-20 腾讯科技(深圳)有限公司 A kind of video collection method, player method, device, terminal and system
CN108337563A (en) * 2018-03-16 2018-07-27 深圳创维数字技术有限公司 Video evaluation method, apparatus, equipment and storage medium
CN109040842A (en) * 2018-08-16 2018-12-18 上海哔哩哔哩科技有限公司 Video spectators' emotional information capturing analysis method, device, system and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
洪庆; 王思尧; 赵钦佩; 李江峰; 饶卫雄: "Classification of video user groups based on danmaku sentiment analysis and clustering algorithms" (基于弹幕情感分析和聚类算法的视频用户群体分类), Computer Engineering & Science (计算机工程与科学), no. 06, 15 June 2018 (2018-06-15) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114187177A (en) * 2021-11-30 2022-03-15 北京字节跳动网络技术有限公司 Method, device and equipment for generating special effect video and storage medium

Similar Documents

Publication Publication Date Title
CN106658200B (en) Live video sharing and acquiring method and device and terminal equipment thereof
CN111050222B (en) Virtual article issuing method, device and storage medium
WO2018228037A1 (en) Media data processing method and device and storage medium
CN113965811A (en) Play control method and device, storage medium and electronic device
CN111263170B (en) Video playing method, device and equipment and readable storage medium
CN111314204A (en) Interaction method, device, terminal and storage medium
CN112188267B (en) Video playing method, device and equipment and computer storage medium
CN108737903B (en) Multimedia processing system and multimedia processing method
CN112822560B (en) Virtual gift giving method, system, computer device and storage medium
CN113573092B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN111723237A (en) Media content access control method
EP3272127B1 (en) Video-based social interaction system
CN113014934A (en) Product display method, product display device, computer equipment and storage medium
CN115362474A (en) Scoods and hairstyles in modifiable video for custom multimedia messaging applications
CN107172178B (en) A kind of content delivery method and device
US20140012792A1 (en) Systems and methods for building a virtual social network
CN104901939B (en) Method for broadcasting multimedia file and terminal and server
CN111666793A (en) Video processing method, video processing device and electronic equipment
CN113515336B (en) Live room joining method, creation method, device, equipment and storage medium
CN113301362B (en) Video element display method and device
CN114666643A (en) Information display method and device, electronic equipment and storage medium
CN114125552A (en) Video data generation method and device, storage medium and electronic device
JP5728141B1 (en) Server, program and method for distributing content
CN114466208B (en) Live broadcast record processing method and device, storage medium and computer equipment
CN111585865A (en) Data processing method, data processing device, computer readable storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination