CN110611776A - Special effect processing method, computer device and computer storage medium


Info

Publication number
CN110611776A
Authority
CN
China
Prior art keywords
special effect
displayed
picture
type
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810523816.0A
Other languages
Chinese (zh)
Other versions
CN110611776B (en)
Inventor
许阳双 (Xu Yangshuang)
邹放 (Zou Fang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology (Shenzhen) Co., Ltd.
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810523816.0A
Publication of CN110611776A
Application granted
Publication of CN110611776B
Current legal status: Active


Classifications

    • G06T 15/00: 3D [Three Dimensional] image rendering; G06T 15/005: General purpose rendering architectures
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 40/161: Human faces, e.g. facial parts, sketches or expressions; detection, localisation, normalisation
    • G06V 40/176: Facial expression recognition; dynamic expression
    • G06V 40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • H04N 23/611: Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
    • H04N 23/80: Camera processing pipelines; components thereof
    • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/30196: Human being; person

Abstract

A special effects processing method, computer device, and computer storage medium. The method of one embodiment comprises: monitoring for a special effect trigger event while a current video shooting picture is displayed; when a special effect trigger event is monitored, determining the event type of the trigger event based on the video shooting picture corresponding to the moment the event is monitored; obtaining a special effect to be displayed that matches the event type; and displaying the obtained special effect. With this scheme, display of a matching special effect is triggered by the trigger event itself while the video picture is playing, which improves the efficiency of adding and displaying special effects.

Description

Special effect processing method, computer device and computer storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a special effect processing method, a computer device, and a storage medium.
Background
With the development of computer and video technologies, video special effect technologies for adding special effects to videos have gradually appeared. In the current way of adding special effects to a micro video, a special effect file is designed in advance; when a special effect is to be used, the user opens a special effect menu, selects the desired special effect from the menu, and then starts video shooting. During shooting, the selected special effect is superimposed on the captured video image for display, yielding a video in which the special effect has been added. However, this way of adding special effects is inefficient, because the special effect to be used must be selected through the special effect menu every time.
Disclosure of Invention
In view of the above, it is necessary to provide a special effect processing method, a computer device, and a computer storage medium that address the above technical problem.
A special effects processing method, the method comprising:
monitoring a special effect triggering event when a current video shooting picture is displayed;
when a special effect trigger event is monitored, determining the event type of the special effect trigger event based on the video shooting picture corresponding to the moment the special effect trigger event is monitored;
obtaining a special effect to be displayed matched with the event type;
and displaying the obtained special effect to be displayed.
A computer device comprising a memory storing a computer program and a processor that implements the steps of the special effects processing method described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the special effects processing method as described above.
According to the special effect processing method, computer device, and computer storage medium of the above embodiment, when a special effect trigger event is monitored while the current video shooting picture is displayed, the event type of the trigger event is determined based on the video shooting picture corresponding to the monitored event; a special effect to be displayed that matches the event type is then obtained and displayed. In this way, display of a matching special effect can be triggered by the trigger event itself while the video picture is playing, which improves the efficiency of adding and displaying special effects.
Drawings
FIG. 1 is a diagram of an application environment of a special effects processing method in one embodiment;
FIG. 2 is a flow diagram that illustrates a special effects processing method, according to one embodiment;
FIG. 3 is a schematic diagram of a display interface in an application example;
FIG. 4 is a schematic diagram of a display interface in another application example;
FIG. 5 is a schematic diagram of a display interface in another application example;
FIG. 6 is a schematic diagram of a display interface in another application example;
FIG. 7 is a schematic diagram of a display interface in another application example;
FIG. 8 is a schematic diagram of a display interface in another application example;
FIG. 9 is a schematic diagram of a display interface in another application example;
FIG. 10 is a schematic diagram of a display interface in another application example;
FIG. 11 is a schematic diagram of a display interface in another application example;
FIG. 12 is a schematic diagram of a display interface in another application example;
FIG. 13 is a schematic diagram of a display interface in another application example;
FIG. 14 is a schematic diagram of a display interface in another application example;
FIG. 15 is a schematic diagram of a display interface in another application example;
FIG. 16 is a schematic diagram of a display interface in another application example;
FIG. 17 is a schematic diagram of a display interface in another application example;
FIG. 18 is a schematic diagram of a display interface in another application example;
FIG. 19 is a schematic diagram of a display interface in another application example;
FIG. 20 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The special effect processing method provided by the present application can be applied in the application environment shown in FIG. 1. The special effect file corresponding to a special effect to be added to the video may be stored locally on the terminal 101, or the terminal 101 may acquire it from the server 100 or from another terminal 102. The special effect file may be downloaded from the server 100 or the terminal 102 when needed, for example when the special effect is about to be used, when the network is idle (for example, when network traffic is below a preset traffic threshold), or when the terminal is in a predetermined type of network connection state (for example, connected through WiFi (Wireless Fidelity)). The terminals 101 and 102 may be, but are not limited to, personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices, as long as they can record video and superimpose special effects on it; the server 100 may be an independent server or a server cluster composed of multiple servers.
Referring to FIG. 2, the special effect processing method in one embodiment includes the following steps S21 to S24, and the method can be executed on the terminal 101 shown in FIG. 1.
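The four steps can be previewed with a minimal sketch. All of the helper functions below are hypothetical placeholders standing in for steps S21 to S24 (the patent does not name any functions); the ways each step can actually be realized are described in the remainder of this section.

```python
from typing import Iterable, Optional

def detect_trigger(frame) -> Optional[str]:
    """S21: monitor the current video shooting picture for a special
    effect trigger event; return an event or None. Stub for illustration."""
    return None

def classify_event(event: str, frame) -> str:
    """S22: determine the event type from the event and the corresponding
    video shooting picture. Stub for illustration."""
    return event

def select_effect(event_type: str) -> str:
    """S23: obtain a special effect to be displayed that matches the
    event type. Stub for illustration."""
    return "effect_for_" + event_type

def render_effect(frame, effect: str) -> None:
    """S24: display the obtained special effect. Stub for illustration."""
    print("displaying", effect)

def process_video(frames: Iterable) -> None:
    for frame in frames:                  # each current video shooting picture
        event = detect_trigger(frame)     # S21
        if event is not None:
            event_type = classify_event(event, frame)        # S22
            render_effect(frame, select_effect(event_type))  # S23 + S24
```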
Step S21: and monitoring a special effect triggering event when the current video shooting picture is displayed.
The current video shooting picture refers to a video image acquired by a camera device (such as a camera) of the terminal, and the acquired video image can be displayed on a terminal screen.
The special effect triggering event refers to an event that triggers addition of a special effect to a video, and the specific mode of the special effect triggering event can be set in various ways.
In one embodiment, it may be determined that a special effect trigger event is monitored when a predetermined gesture feature is recognized in the current video shooting picture. Here, a predetermined gesture feature refers to a feature determined based on the user's hand recognized in the current video shooting picture. The specific type of the predetermined gesture feature may be set in various ways.
In one embodiment, the predetermined gesture feature may include a predetermined user gesture state. A predetermined user gesture state refers to the form presented by the user's hand as recognized in the current video shooting picture, for example whether the recognized hand presents as a fist or an open palm, or a form determined based on the positional relationship between the fingers of the hand. In one implementation, after the user's hand is recognized in the current video shooting picture, image analysis may be performed on the picture content corresponding to the hand to determine the user's gesture form; the image recognition and image analysis may be carried out in any feasible manner, such as the counting heuristic sketched below.
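As one illustration of how a gesture form such as "fist" or "palm" might be derived from the positional relationship between fingers, the sketch below counts extended fingers from per-finger tip and knuckle coordinates. The landmark input is assumed to come from an upstream hand detector; the upright-hand heuristic and the thresholds are illustrative assumptions, not part of the patent.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def classify_gesture_state(finger_tips: List[Point],
                           finger_knuckles: List[Point]) -> str:
    """Crude sketch: with an upright hand, a finger is treated as extended
    when its tip lies above its middle knuckle (y grows downward in image
    coordinates). Inputs are one (x, y) point per finger."""
    extended = sum(1 for (_, ty), (_, ky) in zip(finger_tips, finger_knuckles)
                   if ty < ky)
    if extended == 0:
        return "fist"
    if extended >= 4:
        return "palm"
    return f"{extended}_fingers"   # e.g. a two-finger gesture state
```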
In one embodiment, the predetermined gesture feature may include a predetermined hand motion trajectory, which refers to a motion trajectory of the user's hand identified in the current video shooting picture. The predetermined hand motion trajectory may be any feasible motion trajectory, such as a top-to-bottom, bottom-to-top, left-to-right, or right-to-left motion, or a compound trajectory combining these directions (for example, moving from the bottom left to the top, or from the top to the bottom right). It is to be understood that up, down, left, and right here are positions relative to the current video shooting picture. In one embodiment, after the user's hand is identified in the current video shooting picture, the predetermined hand motion trajectory may be identified by tracking the identified hand, as sketched below.
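A minimal trajectory classifier might compare the first and last tracked positions of the hand center, as below. The coordinate convention (origin at the top left, y growing downward) and the displacement threshold are assumptions for illustration.

```python
from typing import List, Optional, Tuple

def classify_trajectory(points: List[Tuple[float, float]],
                        min_dist: float = 80.0) -> Optional[str]:
    """Sketch: classify a tracked hand-center trajectory (oldest point
    first, in picture coordinates) into one of the predetermined hand
    motion trajectories, or None if the motion is too small."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_dist:
        return None                    # no significant motion
    if abs(dx) >= abs(dy):
        return "left_to_right" if dx > 0 else "right_to_left"
    return "top_to_bottom" if dy > 0 else "bottom_to_top"
```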
In one embodiment, the predetermined gesture feature may include a predetermined gesture motion, that is, a hand motion of the user's hand identified in the current video shooting picture. Common gesture motions include, for example, snapping the fingers, flicking a finger, or waving the hand. In one embodiment, after hand features are recognized in the current video shooting picture, the hand form and the change in the hand features are analyzed to determine the gesture type of the recognized predetermined gesture motion; the gesture type may be finger snapping, finger flicking, hand waving, and so on, as described above.
In an embodiment, when a screen touch event is detected while the current video shooting picture is displayed, and the picture content corresponding to the touch position of the screen touch event is a predetermined type of picture content, it may be determined that a special effect trigger event is monitored. In this way, adding a special effect to the video can be triggered simply by the user touching specified content in the current video shooting picture. The predetermined type of picture content may be set according to actual needs, such as a human face or individual facial features (such as eyes, nose, eyebrows, mouth, or ears), and the predetermined type of picture content may be identified by any method of image analysis.
In one embodiment, it may be determined that a special effect trigger event is monitored when the current video shooting picture is displayed and a predetermined type of picture content is monitored in it. The predetermined type of picture content is as described above. In this way, a special effect can be triggered and added to the video purely by analyzing the current video shooting picture, without the user touching the screen.
In an embodiment, when a current video shooting picture is displayed and a predetermined type of picture content is monitored from the current video shooting picture, determining that a special effect trigger event is monitored may specifically include:
when the current video shooting picture is displayed, monitoring the predetermined type of picture content in it, and judging that a special effect trigger event is monitored when the number of monitored items of the predetermined type of picture content is greater than or equal to a predetermined number. The predetermined number may be set according to actual requirements. For example, it may be set to 1, in which case a special effect trigger event is deemed monitored as soon as any predetermined type of picture content is monitored. As another example, it may be set to 2, in which case a special effect trigger event is deemed monitored only when two or more items of the predetermined type of picture content are monitored, so as to distinguish this case from one in which only a single item is present. Taking face images as the predetermined type of picture content, setting the predetermined number to 2 or more makes it possible to distinguish this case from a single-person self-portrait that does not require a special effect, as in the sketch below.
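Taking face images as the predetermined type of picture content, the count-based trigger check might look like the following sketch. OpenCV's bundled Haar cascade is used here only as one convenient face detector; the patent does not prescribe a detection algorithm, and the detector parameters are illustrative.

```python
import cv2

# One of the face detectors shipped with OpenCV; any detector would do.
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_count_trigger(frame_bgr, predetermined_number: int = 2) -> bool:
    """Return True when at least `predetermined_number` faces (the
    predetermined type of picture content) are monitored in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) >= predetermined_number
```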
In an embodiment, when a current video shooting picture is displayed and a predetermined type of picture content is monitored from the current video shooting picture, determining that a special effect trigger event is monitored may specifically include:
when the current video shooting picture is displayed, monitoring the predetermined type of picture content in it, and judging that a special effect trigger event is monitored when a voice special effect control instruction is received. Audio information may be collected by an audio acquisition device of the terminal and recognized and converted into text information, so as to judge whether a voice special effect control instruction has been received, as sketched below.
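The text-matching half of this path can be sketched as follows; the audio capture and speech-to-text conversion are assumed to be handled by the platform's recognizer, and the command phrases are hypothetical examples, not phrases specified by the patent.

```python
from typing import Optional

# Hypothetical voice special effect control instructions.
EFFECT_COMMANDS = ("add effect", "switch effect", "clear effect")

def parse_voice_command(recognized_text: str) -> Optional[str]:
    """Return the matched voice special effect control instruction,
    or None when the recognized text contains no known command."""
    text = recognized_text.strip().lower()
    for command in EFFECT_COMMANDS:
        if command in text:
            return command
    return None
```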
In an embodiment, when a current video shooting picture is displayed and a predetermined type of picture content is monitored from the current video shooting picture, determining that a special effect trigger event is monitored may specifically include:
when the current video shooting picture is displayed, monitoring the predetermined type of picture content in it, and judging that a special effect trigger event is monitored when a predetermined mouth shape feature is identified in the current video shooting picture. The mouth shape feature can be obtained by video analysis of the current video shooting picture, and the predetermined mouth shape feature can be recognized by analyzing changes in the mouth shape feature.
Step S22: when a special effect trigger event is monitored, determining the event type of the special effect trigger event based on a video shooting picture corresponding to the monitored special effect trigger event.
The event type of the special effect trigger event is determined so that the special effect to be added and displayed can be conveniently obtained; the event type is related both to the trigger event itself and to the video shooting picture at the time the event is monitored.
In one embodiment, for example when it is determined that a special effect trigger event is monitored upon recognition of a predetermined gesture feature, the event type of the trigger event may be determined based on the gesture type of the recognized predetermined gesture feature. The gesture type can be determined by analyzing the current video display picture in which the predetermined gesture feature was recognized. When the predetermined gesture feature is a predetermined user gesture state, the recognized gesture type and the corresponding event type may be a gesture form type; when it is a predetermined hand motion trajectory, they may be a hand trajectory type; when it is a predetermined gesture motion, they may be a gesture action type.
In an embodiment, the special effect trigger event may be related to a predetermined type of picture content, as described above: either a screen touch event is detected while the current video shooting picture is displayed and the picture content at the touch position is of the predetermined type, or the predetermined type of picture content is monitored directly in the current video shooting picture. In both cases the event type of the special effect trigger event is a special effect type for specific picture content, and that special effect type is related to the identified predetermined type of picture content.
Step S23: and acquiring a special effect to be displayed matched with the event type.
The special effect to be displayed is the special effect used for display. It may be a specific special effect file, or special effect information within a special effect file; the special effect information may contain special effect elements and the presentation form of those elements, and the special effect elements may include any feasible elements such as pictures, text, backgrounds, shading, and the like.
Any method may be adopted to obtain the special effect to be displayed that matches the event type.
In one embodiment, for example when a predetermined gesture feature is recognized and a special effect trigger event is thereby determined to be monitored, obtaining the special effect to be displayed that matches the event type includes: selecting a special effect to be displayed from the special effects matched with the gesture type, based on the recognized gesture type of the predetermined gesture feature.
For any gesture type, there may be only one matching special effect, or two or more. When two or more special effects match the gesture type, the selection can follow a rule, for example selecting randomly from the matched special effects, or selecting a special effect that has not yet been displayed.
In an embodiment, selecting a special effect to be displayed from the special effects matched with the gesture type may include: selecting the special effect to be displayed from the special effects that match the gesture type and are not historically displayed special effects. In one example, the historically displayed special effects may be the special effects displayed, during the current display of the video shooting picture, when a predetermined gesture feature of this gesture type was recognized within the most recent preset time period. In another example, the historically displayed special effects may be the preset number of most recently displayed special effects during the current display of the video shooting picture when a predetermined gesture feature of this gesture type was recognized.
In one example, when selecting a special effect to be displayed from the special effects that match the gesture type and are not historically displayed, if only one such special effect exists, it can be used directly as the special effect to be displayed. If two or more such special effects exist, the selection may be made randomly or by other selection rules. If no special effect matches the gesture type outside the historically displayed ones, a selection may be made directly from all special effects matched with the gesture type, as in the sketch below.
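The selection rule just described might be realized as below; the random fallback mirrors the "random selection or other selection rules" wording, and the data types are assumptions.

```python
import random
from typing import List, Optional, Sequence

def select_effect(matched_effects: Sequence[str],
                  history: Sequence[str]) -> Optional[str]:
    """Sketch: prefer a matched effect outside the historically displayed
    ones; if exactly one remains use it, if several remain pick randomly,
    and if none remain fall back to the full matched set."""
    shown = set(history)
    candidates: List[str] = [e for e in matched_effects if e not in shown]
    pool = candidates if candidates else list(matched_effects)
    return random.choice(pool) if pool else None
```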
In an embodiment, selecting a special effect to be displayed from the special effects matched with the gesture type may include: selecting, according to user parameter information of the device displaying the current video shooting picture, a special effect to be displayed from the special effects matched with both the gesture type and the user parameter information. In one example, the user parameter information includes at least one of the following: personal information of the user associated with the account logged in on the device, the geographical position of the device, weather information corresponding to that geographical position, the altitude of the device, the current time, the user's facial expression, and mood information. The mood information can be determined by identifying the user's facial expression in the current video shooting picture, and the user's facial expression may be the expression information of the face identified in the current video shooting picture. A scoring sketch follows.
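One way to combine several such parameters is to score each candidate effect against the current context, as in this sketch. The tag scheme and the scoring rule are illustrative assumptions only, not part of the patent.

```python
from typing import Dict, List, Optional

def select_by_user_params(candidates: List[Dict],
                          user_params: Dict[str, str]) -> Optional[Dict]:
    """Sketch: each candidate is a dict like
    {"name": "rain_hat", "tags": {"weather": "rain", "mood": "happy"}};
    user_params is e.g. {"weather": "rain", "time": "night"}. The effect
    whose tags agree with the most parameters wins (ties: first found)."""
    def score(effect: Dict) -> int:
        tags = effect.get("tags", {})
        return sum(1 for k, v in user_params.items() if tags.get(k) == v)
    return max(candidates, key=score) if candidates else None
```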
In one embodiment, selecting a special effect to be displayed from the special effects matched with the gesture type may include: selecting a special effect to be displayed from the special effects matched with both the gesture type and a positional relationship, based on the positional relationship between the picture content bearing the predetermined gesture feature and the user's face. The user's face position may be the face position in the current video shooting picture.
In an embodiment, for example when the special effect trigger event is related to a predetermined type of picture content, obtaining the special effect to be displayed that matches the event type may include: performing predetermined picture content processing on the predetermined type of picture content to obtain the special effect to be displayed. The predetermined picture content processing may be any feasible processing, such as enlarging, reducing, or graphically transforming the predetermined type of picture content; an enlargement sketch follows.
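For example, the enlargement case might be sketched with OpenCV as below: the detected region is cropped, scaled up, and pasted back centered on its original position. The scale factor and the in-place compositing are illustrative assumptions.

```python
import cv2

def enlarge_region(frame, box, factor: float = 1.5):
    """Sketch of one predetermined picture content processing: enlarge a
    detected region (e.g. a face), clipped to the frame. box = (x, y, w, h)."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    big = cv2.resize(roi, None, fx=factor, fy=factor,
                     interpolation=cv2.INTER_LINEAR)
    bh, bw = big.shape[:2]
    cx, cy = x + w // 2, y + h // 2          # center of the original box
    x0, y0 = max(cx - bw // 2, 0), max(cy - bh // 2, 0)
    x1 = min(x0 + bw, frame.shape[1])
    y1 = min(y0 + bh, frame.shape[0])
    frame[y0:y1, x0:x1] = big[:y1 - y0, :x1 - x0]
    return frame
```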
In an embodiment, for example when the special effect trigger event is related to a predetermined type of picture content, obtaining the special effect to be displayed that matches the event type may include: determining, from the monitored items of the predetermined type of picture content, the items to be processed, and performing the predetermined picture content processing on them to obtain the special effect to be displayed. The items to be processed may be selected randomly from the monitored items of the predetermined type, or may be those items that meet a preset condition.
In an example, when the number of monitored items of the predetermined type of picture content is greater than 1, the items to be processed may be those monitored items that do not currently have a displayed special effect. In this case, while or after the predetermined picture content processing is performed on the items to be processed to obtain the special effect to be displayed, the special effects of the items that do currently have a displayed special effect may additionally be removed.
In an embodiment, when a predetermined type of picture content is monitored in the current video shooting picture and a voice special effect control instruction is received, and a special effect trigger event is thereby determined to be monitored, obtaining the special effect to be displayed that matches the event type may be implemented in various ways, several of which are illustrated below.
In one example, obtaining the special effect to be displayed that matches the event type includes: when the number of monitored items of the predetermined type of picture content is 1, if no special effect has been added to that picture content when the voice special effect control instruction is received, performing the predetermined picture content processing on it to obtain the special effect to be displayed. The predetermined picture content processing here may be any feasible processing, such as enlargement, reduction, or graphic transformation.
In one example, obtaining the special effect to be displayed that matches the event type includes: when the number of monitored items of the predetermined type of picture content is 1, if that picture content has a currently displayed special effect when the voice special effect control instruction is received, clearing the currently displayed special effect. In this way, the special effect on the predetermined type of picture content can be both enabled and cleared through voice control. In another example, in the same situation, the picture content may instead be subjected to predetermined picture content processing to obtain a special effect to be displayed that differs from the currently displayed one, so that the special effect added to the predetermined type of picture content can be switched through the voice special effect control instruction.
In an embodiment, when a predetermined type of picture content is monitored in the current video shooting picture and a predetermined mouth shape feature is identified in it, and a special effect trigger event is thereby determined to be monitored, obtaining the special effect to be displayed that matches the event type may be implemented in various ways, several of which are illustrated below.
In one example, obtaining the special effect to be displayed that matches the event type includes: when the number of monitored items of the predetermined type of picture content is 1, if no special effect has been added to that picture content when the predetermined mouth shape feature is identified, performing the predetermined picture content processing on it to obtain the special effect to be displayed. The predetermined picture content processing here may likewise be enlargement, reduction, or graphic transformation.
In one example, obtaining the special effect to be displayed that matches the event type includes: when the number of monitored items of the predetermined type of picture content is 1, if that picture content has a currently displayed special effect when the predetermined mouth shape feature is identified, clearing the currently displayed special effect. In this way, the special effect on the predetermined type of picture content can be enabled and cleared through the predetermined mouth shape feature. In another example, in the same situation, the picture content may instead be subjected to predetermined picture content processing to obtain a special effect to be displayed that differs from the currently displayed one, so that the special effect added to the predetermined type of picture content can be switched through the predetermined mouth shape feature.
In one example, obtaining the special effect to be displayed that matches the event type includes: when the number of monitored items of the predetermined type of picture content is greater than 1, determining the items to be processed and performing the predetermined picture content processing on them to obtain the special effect to be displayed. In one example, the items to be processed are the items of the predetermined type of picture content corresponding to the identified predetermined mouth shape feature; in another example, they are the items other than those corresponding to the identified predetermined mouth shape feature.
In one example, when the number of monitored items of the predetermined type of picture content is greater than 1, the items to be processed are determined and subjected to the predetermined picture content processing to obtain the special effect to be displayed that matches the event type, and the special effects of the items that currently have a displayed special effect are removed.
Step S24: and displaying the obtained special effect to be displayed.
When the obtained special effect to be displayed is displayed, it can be displayed in a corresponding manner based on the determined event type and the type of the obtained special effect.
In one embodiment, for example when it is determined that a special effect trigger event is monitored upon recognition of a predetermined gesture feature, displaying the obtained special effect to be displayed includes: displaying it at a special effect display position in the current video shooting picture. The special effect display position corresponds to the gesture feature position, that is, the position in the current video shooting picture of the picture content in which the predetermined gesture feature was recognized.
In one embodiment, for example when it is determined that a special effect trigger event is monitored upon recognition of a predetermined gesture feature, displaying the obtained special effect to be displayed includes: identifying the user's face position in the current video shooting picture, and displaying the special effect to be displayed based on that face position. The special effect may be displayed based on the user's face position in any feasible way, several examples of which follow.
In one example, displaying the special effect to be displayed based on the user face position may include: displaying the special effect based on the positional relationship between the gesture feature position and the user face position, where the gesture feature position is the position in the current video shooting picture of the picture content in which the predetermined gesture feature was recognized. The user face position includes at least one of: a crown position, an eyebrow position, an eye position, a nose position, a mouth position, an ear position, a face contour, and a chin position.
For example, when the user face position includes a crown position, a crown special effect among the special effects to be displayed may be displayed at the identified crown position in the current video shooting picture; when it includes an eyebrow position, the eyebrow special effect may be displayed at the identified eyebrow position; when it includes an eye position, the eye special effect may be displayed at the identified eye position; when it includes a nose position, the nose special effect may be displayed at the identified nose position; when it includes a mouth position, the mouth special effect may be displayed at the identified mouth position; when it includes an ear position, the ear special effect may be displayed at the identified ear position; when it includes a face contour, the face contour special effect may be displayed along the identified face contour; and when it includes a chin position, the chin special effect may be displayed at the identified chin position. A minimal overlay sketch follows.
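A minimal sketch of anchoring an effect element at a recognized face position is given below: an RGBA effect image is alpha-blended onto the frame, centered on the landmark coordinate. The landmark coordinates are assumed to come from an upstream face detector, and the centering convention is an assumption.

```python
import numpy as np

def overlay_effect(frame, effect_rgba, anchor_xy):
    """Sketch: alpha-blend an RGBA effect image (h x w x 4, uint8) onto a
    BGR frame (H x W x 3, uint8), centered on anchor_xy = (x, y), e.g. a
    recognized crown, eye, nose, mouth, ear, or chin position."""
    h, w = effect_rgba.shape[:2]
    x0, y0 = int(anchor_xy[0] - w / 2), int(anchor_xy[1] - h / 2)
    x1, y1 = max(x0, 0), max(y0, 0)
    x2, y2 = min(x0 + w, frame.shape[1]), min(y0 + h, frame.shape[0])
    if x2 <= x1 or y2 <= y1:
        return frame                      # anchor falls outside the frame
    patch = effect_rgba[y1 - y0:y2 - y0, x1 - x0:x2 - x0]
    alpha = patch[..., 3:4].astype(np.float32) / 255.0
    roi = frame[y1:y2, x1:x2].astype(np.float32)
    frame[y1:y2, x1:x2] = (alpha * patch[..., :3]
                           + (1.0 - alpha) * roi).astype(np.uint8)
    return frame
```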
In one embodiment, displaying the obtained special effect to be displayed may include: replacing the current video shooting picture with the special effect to be displayed. In this case, only the obtained special effect is displayed, without superimposing it on the current video shooting picture.
In one embodiment, displaying the obtained special effect to be displayed may include: superimposing the special effect to be displayed on the user's face picture and displaying the result. In one example, the user face picture may be the user face image recognized from the current video shooting picture in which the predetermined gesture motion was recognized. In another example, it may be the user face image recognized from a video shooting picture captured after the predetermined gesture motion was recognized.
In one embodiment, displaying the obtained special effect to be displayed may further include: acquiring a background to be used, and replacing the picture background of the current video shooting picture with it, where the picture background is the picture content outside the figure outline in the current video shooting picture. In this way, when the obtained special effect is displayed, the picture background can also be replaced, producing a more interactive special effect display result; see the sketch below.
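Assuming a person-segmentation step has already produced a binary mask of the figure outline, the background replacement reduces to a per-pixel composite, as in this sketch.

```python
import cv2
import numpy as np

def replace_background(frame, person_mask, background):
    """Sketch: keep the pixels inside the figure outline (person_mask,
    an H x W binary array assumed to come from a segmentation step) and
    take every other pixel from the background to be used."""
    bg = cv2.resize(background, (frame.shape[1], frame.shape[0]))
    mask = person_mask.astype(bool)[..., None]   # H x W x 1, broadcasts
    return np.where(mask, frame, bg)
```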
In one embodiment, after the event type of the special effect trigger event is determined in step S22 and before the obtained special effect to be displayed is displayed in step S24, the method may further include: acquiring and displaying a special effect of a predetermined type. In this way, before the obtained special effect is displayed, the predetermined type of special effect can be displayed first as a transition special effect, further improving the interactivity of the special effect display.
In one embodiment, before monitoring for a special effect trigger event, the method may further include: superimposing the selected initial special effect on the current video shooting picture and displaying the result. Displaying the current video shooting picture together with the selected initial special effect before a trigger event is monitored further improves the interactivity of the special effect display.
In one example, displaying the current video shooting picture superimposed with the selected initial special effect may include: identifying the figure outline in the current video shooting picture, extracting a foreground image based on the figure outline, and displaying the foreground image superimposed on the selected initial special effect.
In one embodiment, when a selected initial special effect is present, obtaining the special effect to be displayed that matches the event type includes: selecting a special effect to be displayed from the special effects matched with both the type of the selected initial special effect and the gesture type of the predetermined gesture motion.
In an embodiment, displaying the obtained special effect to be displayed may further include: playing audio information matched with the special effect to be displayed, and displaying, superimposed, an additional special effect corresponding to it.
According to the special effect processing method, computer device, and computer storage medium of the above embodiments, when a special effect trigger event is monitored while the current video shooting picture is displayed, the event type of the trigger event is determined based on the video shooting picture corresponding to the monitored event; a special effect to be displayed that matches the event type is then obtained and displayed. In this way, display of a matching special effect can be triggered by the trigger event itself while the video picture is playing, which improves the efficiency of adding and displaying special effects.
Based on the scheme of the embodiments described above, several application examples are given below.
While using the terminal, the user directly opens the terminal's camera or opens an APP (application) installed on the terminal and starts recording video through the relevant operations in the APP; the current video shooting picture obtained by recording can be displayed on the terminal. During recording, the terminal user may make various gestures, actions, expressions, and the like. While the current video shooting picture is displayed, the displayed picture is analyzed to judge whether a special effect trigger event occurs. It can be understood that this analysis may take place after the video picture is collected by the terminal camera and before it is displayed on the terminal screen, or synchronously while the video picture is displayed on the terminal screen.
FIG. 3 shows a schematic diagram of a current video shooting picture displayed by a terminal in one embodiment. By analyzing the picture shown in FIG. 3, a predetermined user gesture state of extending two fingers can be recognized, so it can be judged that a special effect trigger event is monitored; the corresponding event type can be determined to be "extending two fingers" or some other corresponding type name, a special effect to be displayed is selected from the special effects corresponding to that event type, and the special effect is displayed. The special effect to be displayed may be selected randomly from the special effects corresponding to the event type, or selected by other rules, such as differing from the last displayed special effect, or based on the priority and usage rate of each special effect. After the special effect to be displayed is selected, it can be obtained locally from the terminal or downloaded from the server. When displayed, the obtained special effect can be superimposed on the current video shooting picture.
When the obtained special effect is superimposed on the current video shooting picture for display, the figure outline and the user face position in the picture can be identified, and the special effect is then displayed with reference to them. While the obtained special effect is being displayed, a background to be used can also be acquired and used to replace the picture background of the current video shooting picture, the picture background being the picture content outside the figure outline. It can be understood that when the special effect is superimposed on the current video shooting picture, it can be displayed by tracking the figure outline, the user face position, and so on. In one example, after the special effect is displayed, the current video shooting picture shown in FIG. 3 yields the display interface shown in 4-A of FIG. 4.
While the current video shooting picture is displayed, if a predetermined user gesture state of the same gesture type is analyzed again, display of a new special effect can be triggered. When the gesture type shown in FIG. 3 is analyzed again and a new special effect to be displayed is obtained, the resulting display interface is shown in 4-B of FIG. 4. If different gesture types of predetermined user gesture states are analyzed later while the current video shooting picture is displayed, special effects corresponding to those gesture types are obtained and displayed, such as the terminal display interface corresponding to the gesture type shown in 4-C of FIG. 4 and the terminal display interface corresponding to the palm-holding-face gesture type shown in 4-D of FIG. 4.
FIG. 5 shows a schematic diagram of a current video shooting picture displayed by a terminal in one embodiment. By analyzing the picture shown in FIG. 5, a hand motion trajectory of the palm moving from left to right can be identified, so it can be judged that a special effect trigger event is monitored; the corresponding event type can be determined to be "palm moving from left to right" or some other corresponding type name, a special effect to be displayed is selected from the special effects corresponding to that event type, and the special effect is displayed. Before the obtained special effect is displayed, a special effect of a predetermined type can be obtained and displayed as a transition special effect. The predetermined type of special effect used for the transition may be determined in various feasible ways: for example, a fixed special effect such as a smoke effect may be set as the transition, or it may be determined based on user parameter information of the device displaying the current video shooting picture. FIG. 6 shows a display diagram of a transition special effect displayed by the terminal in one example.
Correspondingly, the obtained special effect to be displayed may be determined by any feasible rule: for example, one of the special effects corresponding to the event type (the trajectory of the palm moving from left to right) may be chosen at random, or chosen by another rule (for example, based on the click rate, sharing rate, or other data of the corresponding special effects), or determined from user parameter information of the device displaying the current video shooting picture. The obtained special effect may directly replace the current video shooting picture for display; a display interface showing the special effect in one example is shown in 7-A of FIG. 7.
If the hand motion trajectory is analyzed again, display of a new special effect can be triggered. When the trajectory of the palm moving from left to right is analyzed again and a new special effect to be displayed is obtained, the resulting display interface is shown in 7-B or 7-C of FIG. 7.
FIG. 8 shows a schematic diagram of a current video shooting picture displayed by a terminal in one embodiment. By analyzing the picture shown in FIG. 8, a predetermined gesture motion of touching the top of the head can be identified, so it can be determined that a special effect trigger event is monitored. The corresponding event type is determined to be "head touch" or some other corresponding type name, a special effect to be displayed is selected from the special effects corresponding to that event type, and the special effect is displayed. The special effect may be selected randomly from the special effects corresponding to the event type (the head touch motion), or selected by other rules, such as differing from the last displayed special effect, or determined based on the priority or usage rate of each special effect, or based on user parameter information of the device displaying the current video shooting picture. After the special effect is selected, it can be obtained locally from the terminal or downloaded from the server. When displayed, the obtained special effect can be superimposed on the current video shooting picture.
When the obtained special effect is superimposed on the current video shooting picture for display, the figure outline and the user face position in the picture can be identified, and the special effect is then displayed with reference to them. In this example, after the user face position is recognized and the predetermined gesture motion is recognized, the display position of the special effect to be displayed may be determined from the boundary position of the picture content corresponding to the gesture motion (the lowermost part of the hand position shown in FIG. 8), together with the figure outline and the user face position in the current video shooting picture. As shown in FIG. 8, for a special effect worn on the head, the lowermost position of the special effect may be aligned with the lowermost position of the hand in the current video shooting picture before it is displayed.
While the obtained special effect is being displayed, a background to be used can also be acquired and used to replace the picture background of the current video shooting picture, the picture background being the picture content outside the figure outline. It can be understood that when the special effect is superimposed on the current video shooting picture, it can be displayed by tracking the figure outline, the user face position, and so on. In one example, after the lowermost position of the special effect is aligned with the lowermost position of the hand, the current video shooting picture shown in FIG. 8 yields the display interface shown in 9-A of FIG. 9.
While the current video shooting picture is displayed, if the predetermined gesture motion is analyzed again, display of a new special effect can be triggered. When the gesture type shown in FIG. 8 is analyzed again and a new special effect to be displayed is obtained, the resulting display interface is shown in 9-B of FIG. 9. If different gesture types of predetermined gesture motions are analyzed later while the current video shooting picture is displayed, special effects corresponding to those gesture types are obtained and displayed.
In one embodiment, if the terminal user makes certain gesture motions and facial expressions while recording video, a predetermined gesture motion can be identified in the current video shooting picture through analysis of the displayed picture, so it can be determined that a special effect trigger event is monitored. After that judgment, the current video shooting picture can be analyzed further to determine the user's facial expression in it. In some embodiments, the predetermined gesture motion and the user's facial expression may instead be analyzed simultaneously, and the special effect trigger event is deemed monitored only after both the predetermined gesture motion and a predetermined user facial expression are recognized in the current video shooting picture.
After the predetermined gesture motion and the user facial expression appearing in the current video shooting picture are determined, the special effect to be displayed can be selected and obtained from the special effects corresponding to the predetermined gesture motion and the user facial expression, and the special effect to be displayed is displayed. The special effect to be displayed may be randomly selected from the special effects corresponding to the event type (the predetermined gesture motion and the user facial expression), or may be selected based on other rules, such as being different from the last displayed special effect, being determined based on the priority or usage rate of each special effect, or being determined based on user parameter information of the device displaying the current video shooting picture. After the special effect to be displayed is selected, it can be obtained locally from the terminal or downloaded from the server. When the special effect to be displayed is displayed, the obtained special effect to be displayed can be superimposed on the current video shooting picture for displaying.
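The selection rules named above can be sketched as follows, assuming candidates is the list of special effects registered for the event type and that each effect carries hypothetical "id" and "priority" fields:

import random

def pick_effect(candidates, last_shown_id=None, by_priority=False):
    # Exclude the last displayed special effect when another choice exists.
    pool = [c for c in candidates if c["id"] != last_shown_id] or candidates
    if by_priority:
        # e.g. prefer the effect with the highest stored priority / usage rate
        return max(pool, key=lambda c: c.get("priority", 0))
    return random.choice(pool)               # default: random selection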
When the obtained special effect to be displayed is superimposed on the current video shooting picture for displaying, the figure outline and the user face position in the current video shooting picture can be identified, and the obtained special effect to be displayed can then be displayed in combination with the identified figure outline, the user face position and the predetermined gesture motion. It can be understood that when the obtained special effect to be displayed is superimposed on the current video shooting picture for displaying, the special effect to be displayed can be displayed by tracking the position of the picture content (such as a hand image) corresponding to the predetermined gesture motion in the current video shooting picture. In one example, when the terminal user makes a tapping motion (i.e., the recognized predetermined gesture motion is tapping) and the recognized user facial expression is angry, the display interfaces after the current video shooting picture displays the special effect to be displayed may be sequentially shown, in time order, as 10-A, 10-B, and 10-C in fig. 10.
In one embodiment, a predetermined gesture motion (such as hand shaking, finger flicking, finger rubbing, and the like) occurring in the current video shooting picture can be identified by analyzing the displayed current video shooting picture, so that it can be determined that a special effect trigger event is monitored. The event type (such as hand shaking, finger flicking or finger rubbing) corresponding to the predetermined gesture motion is determined, the special effect to be displayed is selected and obtained from the special effects corresponding to the event type, and the special effect to be displayed is displayed. The special effect to be displayed may be randomly selected from the special effects corresponding to the event type, or may be selected based on other rules, such as being different from the last displayed special effect, being determined based on the priority or usage rate of each special effect, or being determined based on user parameter information of the device displaying the current video shooting picture. After the special effect to be displayed is selected, it can be obtained locally from the terminal or downloaded from the server. When the special effect to be displayed is displayed, the obtained special effect to be displayed can be superimposed on the current video shooting picture for displaying.
When the obtained special effect to be displayed is superimposed on the current video shooting picture for displaying, the position of the picture content (such as a hand image) corresponding to the predetermined gesture motion in the current video shooting picture can be tracked for displaying. For example, a specified position of the special effect to be displayed may be displayed in correspondence with a picture position (such as the edge or center of the hand) determined from the picture content corresponding to the predetermined gesture motion, as in the sketch below.
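A per-frame sketch of this tracking behavior: the anchor point is re-computed from the hand box on every frame so the effect follows the hand. detect_hand and overlay_at are hypothetical helpers (overlay_at could be a compositor such as place_effect above).

def render_loop(frames, effect_rgba, detect_hand, overlay_at):
    for frame in frames:
        hand_box = detect_hand(frame)         # (x, y, w, h) or None
        if hand_box is None:
            yield frame                       # hand lost: show the plain frame
            continue
        x, y, w, h = hand_box
        anchor = (x + w // 2, y + h)          # e.g. the centre of the hand's lower edge
        yield overlay_at(frame, effect_rgba, anchor)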
Accordingly, in one embodiment, a display schematic diagram of the current video shooting picture displayed by the terminal is shown as 11-A in fig. 11; a predetermined gesture motion of hand shaking appearing in the current video shooting picture can be recognized by analyzing the current video shooting picture shown in fig. 11-A, and the display interfaces after the special effect to be displayed is displayed on the current video shooting picture may be sequentially shown, in time order, as 11-B, 11-C and 11-D in fig. 11. In one embodiment, a display schematic diagram of the current video shooting picture displayed by the terminal is shown as 12-A in fig. 12; a predetermined gesture motion of finger flicking appearing in the current video shooting picture can be recognized by analyzing the current video shooting picture shown in fig. 12-A, and the display interface after the special effect to be displayed is displayed on the current video shooting picture may be as shown in 12-B of fig. 12, displaying a special effect of a flame being flicked out. In one embodiment, a display schematic diagram of the current video shooting picture displayed by the terminal is shown as 13-A in fig. 13; a predetermined gesture motion of finger rubbing appearing in the current video shooting picture can be recognized by analyzing the current video shooting picture shown in fig. 13-A, and the display interface after the special effect to be displayed is displayed on the current video shooting picture may be as shown in 13-B of fig. 13, displaying a special effect of a flame produced by the rubbing.
A display schematic diagram of the current video shooting picture displayed by the terminal in one embodiment is shown as 14-A in fig. 14. When the current video shooting picture is displayed in this embodiment, it is displayed in combination with a selected initial special effect, that is, the current video shooting picture and the selected initial special effect are superimposed and displayed, so that the initial special effect display effect is presented. When the current video shooting picture and the selected initial special effect are displayed in a superimposed manner, the portrait outline in the current video shooting picture can be extracted, and the extracted portrait outline and the selected initial special effect are then displayed in a superimposed manner, as shown in fig. 14-A.
If the terminal user makes a predetermined gesture motion or motion track within the camera range, the video shooting picture obtained by the camera can be analyzed to obtain the predetermined gesture motion or motion track. It can be understood that the video shooting picture obtained by the camera for analysis does not necessarily need to be displayed on the display interface; only the picture data collected by the camera is needed. If the predetermined gesture motion or motion track is found through analysis to appear in the current video shooting picture, it is determined that a special effect trigger event is monitored. Referring to fig. 14-A and 14-B, if the recognized motion track moves from right to left, it is determined that a special effect trigger event is monitored; at this time, where the initially selected initial special effect has a relevant setting condition, the initially selected initial special effect may be adjusted, for example, the picture position of the initial special effect is moved from the right side to the left side of the screen between fig. 14-A and 14-B.
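One simple way to recognize such a right-to-left motion track is sketched below, assuming the x coordinates of the tracked hand centroid over recent frames are available; the pixel threshold is an assumption.

def moved_right_to_left(centroid_xs, min_shift_px=80):
    """centroid_xs: x coordinates of the tracked hand, oldest first."""
    return len(centroid_xs) >= 2 and centroid_xs[0] - centroid_xs[-1] >= min_shift_px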
After it is determined that the special effect trigger event is monitored, the special effect to be displayed can be selected and obtained from the special effects corresponding to the special effect trigger event, and the special effect to be displayed is displayed. Before the obtained special effect to be displayed is displayed, a predetermined type of special effect can be obtained and displayed, so that a transition special effect is realized. The predetermined type of special effect for realizing the transition may be determined in various possible manners; for example, a fixed special effect may be set as the transition special effect, such as a smoke special effect, or it may be determined based on user parameter information of the device displaying the current video shooting picture. A display diagram of the transition special effect displayed by the terminal in one example is shown as 15-A in fig. 15.
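A minimal sketch of this sequencing, assuming a hypothetical render(effect) callback that draws one effect over the current frame; the 0.6 s duration and the function names are assumptions.

import time

def play_with_transition(render, transition_effect, target_effect, transition_s=0.6):
    start = time.monotonic()
    while time.monotonic() - start < transition_s:
        render(transition_effect)     # e.g. a fixed smoke special effect
    render(target_effect)             # then the special effect to be displayed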
Correspondingly, the obtained special effect to be displayed may be determined based on any possible rule; for example, one of the special effects corresponding to the event type (e.g., the motion track of the palm moving left and right) may be randomly selected, the special effect to be displayed may be determined based on another rule (e.g., based on the click rate, the sharing rate, or other data of the corresponding special effects), or the special effect to be displayed may be determined based on user parameter information of the device displaying the current video shooting picture. The obtained special effect to be displayed may directly replace the face in the current video shooting picture (face changing), or may be displayed directly after being superimposed with the extracted portrait edge; the display interfaces after the special effect to be displayed is displayed in one example may be sequentially shown as 15-B and 15-C in fig. 15.
If the special effect trigger event is monitored again, a new special effect to be displayed can be triggered for displaying; for example, after the new special effect to be displayed is obtained by triggering, the display interface after the special effect to be displayed is shown may be as 16-B, 16-C or 16-D in fig. 16. It can be understood that the eye, nose, mouth and other features in fig. 16-B, 16-C or 16-D may be features of the five sense organs in the user face image identified from the current video shooting picture and need not be included in the special effect to be displayed. In other embodiments, special effects may also be applied to the facial features of the recognized user face image.
A display schematic diagram of the current video shooting picture displayed by the terminal in one embodiment is shown as 17-A in fig. 17. In this embodiment, when the current video shooting picture is displayed, if the terminal user clicks related picture content in the current video shooting picture (for example, the user head in fig. 17-B), a screen touch event can be detected; if the picture content corresponding to the touch position of the screen touch event is the predetermined type of picture content, it is determined that a special effect trigger event is monitored. In the example shown in fig. 17, assuming that the predetermined type of picture content is a user head image, in the process of displaying the current video shooting picture, if a screen touch event is detected and the picture content corresponding to the touch position of the screen touch event is a user head image, it is determined that a special effect trigger event is monitored. Predetermined picture content processing (such as enlargement processing, reduction processing, expansion processing, deformation processing or other processing) is performed on the user head image to obtain a matched special effect to be displayed, and the special effect to be displayed is displayed with the current video shooting picture. In one embodiment, taking the predetermined picture content processing as enlargement processing as an example, the display interface after the obtained special effect to be displayed is superimposed on the current video shooting picture is shown as 17-C in fig. 17.
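A sketch of this touch-triggered enlargement, assuming the head boxes come from an upstream detector; the hit test and the 1.5x factor are illustrative assumptions, and cv2.resize performs the enlargement.

import cv2

def enlarge_touched_head(frame, touch_xy, head_boxes, factor=1.5):
    tx, ty = touch_xy
    for (x, y, w, h) in head_boxes:
        if x <= tx < x + w and y <= ty < y + h:       # the touch hits this head
            head = frame[y:y + h, x:x + w]
            big = cv2.resize(head, (int(w * factor), int(h * factor)))
            bh, bw = big.shape[:2]
            cx, cy = x + w // 2, y + h // 2           # keep the head centred
            x0, y0 = max(0, cx - bw // 2), max(0, cy - bh // 2)
            out = frame.copy()
            roi = out[y0:y0 + bh, x0:x0 + bw]         # may be clipped at borders
            roi[:] = big[:roi.shape[0], :roi.shape[1]]
            return out
    return frame                                      # no head image was touched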
A display schematic diagram of the current video shooting picture displayed by the terminal in one embodiment is shown as 18-A in fig. 18. In this embodiment, when the current video shooting picture is displayed, the current video shooting picture can be directly analyzed to determine whether the predetermined type of picture content exists, and if so, it can be determined that a special effect trigger event is monitored. In the example shown in fig. 18, taking the predetermined type of picture content as a user head image as an example, it may be determined that the special effect trigger event is monitored when the number of monitored items of the predetermined type of picture content is greater than or equal to a predetermined number (e.g., greater than 1).
In this case, the predetermined type of picture content to be processed may be selected from the identified predetermined type of picture content in any possible manner, such as random selection, or determining the predetermined type of picture content closest to the camera as the predetermined type of picture content to be processed. The distance between the predetermined type of picture content and the camera may be determined using any image analysis method. As shown in fig. 18, the user head image on the right side of the screen in 18-A may be selected as the predetermined type of picture content to be processed, and the predetermined picture content processing is performed on it. Taking the predetermined picture content processing as enlargement processing as an example, the display interface after the special effect to be displayed determined in this way is superimposed on the current video shooting picture is shown as 18-B in fig. 18.
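One common image-analysis proxy for "closest to the camera" when no depth data is available is the area of the detected head box, as sketched below; using box area this way is an assumption for illustration, not a rule prescribed by the patent.

def closest_head(head_boxes):
    # Largest bounding-box area ~ closest to the camera (no depth sensor needed).
    return max(head_boxes, key=lambda b: b[2] * b[3])   # b = (x, y, w, h)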
Referring to fig. 18, in the process of displaying the current video shooting picture, the relationship between the items of the predetermined type of picture content in the current video shooting picture may change; for example, the distances between the items of the predetermined type of picture content and the camera may change, or the predetermined type of picture content to be processed can no longer be clearly identified (for example, it is blocked or cannot be recognized). In that case, the special effect on the original predetermined type of picture content to be processed may be removed, the predetermined type of picture content to be processed may be determined again, and the predetermined picture content processing may be performed on the newly determined predetermined type of picture content, so as to generate a new special effect processing result. Referring to fig. 18, if the user head image on the right side of the screen can no longer be clearly recognized, the predetermined type of picture content to be processed is determined again, and after the predetermined picture content processing is performed accordingly, the displayed display interface may be as shown in 18-C of fig. 18.
A display schematic diagram of the current video shooting picture displayed by the terminal in one embodiment is shown as 19-A in fig. 19. In this embodiment, when the current video shooting picture is displayed, in addition to analyzing the current video shooting picture to determine whether the predetermined type of picture content exists, a voice special effect control instruction may be monitored at the same time, or the current video shooting picture may be analyzed to determine whether a predetermined mouth shape feature exists. If the predetermined type of picture content exists and, at the same time, a voice special effect control instruction is received or a predetermined mouth shape feature exists, it is determined that a special effect trigger event is monitored.
In the example shown in fig. 19, taking the predetermined type of picture content as a user head image and the monitored condition as a predetermined mouth shape feature as an example, when the current video shooting picture is displayed, if the predetermined mouth shape feature exists, the predetermined type of picture content to be processed is selected from the identified predetermined type of picture content. The selection may be performed in any possible manner, for example, the predetermined type of picture content in which the predetermined mouth shape feature exists is determined as the predetermined type of picture content to be processed. As shown in fig. 19, the user head image on the left side of the screen has the predetermined mouth shape feature, so the user head image on the left side of the screen is determined as the predetermined type of picture content to be processed and is subjected to the predetermined picture content processing. Taking the predetermined picture content processing as enlargement processing as an example, the display interface after the special effect to be displayed determined in this way is superimposed on the current video shooting picture is shown as 19-B in fig. 19. As further shown in fig. 19, when the predetermined mouth shape feature later exists in the user head image on the right side of the screen and no longer exists in the user head image on the left side of the screen, the user head image on the right side of the screen may be determined as the predetermined type of picture content to be processed and subjected to the predetermined picture content processing, and the special effect existing on the user head image on the left side of the screen may be cleared. Taking the predetermined picture content processing as enlargement processing as an example, the corresponding display interface is shown as 19-C in fig. 19.
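A sketch of this mouth-shape rule: the heads whose mouths match the predetermined mouth shape become the content to be processed, and effects on the other heads are cleared. has_mouth_feature is a hypothetical per-head classifier, and active_effects is a hypothetical dict keyed by head box.

def update_targets(head_boxes, has_mouth_feature, active_effects):
    targets = [h for h in head_boxes if has_mouth_feature(h)]
    for h in head_boxes:
        if h not in targets:
            active_effects.pop(h, None)   # clear the special effect on this head
    return targets                        # heads to receive the processing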
It should be understood that, although the foregoing embodiments have been described with reference to several specific examples of application display interfaces, in an actual technical implementation, the special effect trigger conditions, the manner of determining the special effect to be displayed, and the manner of displaying the special effect to be displayed may be combined arbitrarily and are not limited thereto.
One embodiment provides a computer device, which may be a terminal, and whose internal structure diagram may be as shown in fig. 20. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a special effect processing method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 20 is merely a block diagram of part of the structure associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
Accordingly, in an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method in any of the embodiments as described above when executing the computer program.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described, but any such combination should be considered to be within the scope of this specification as long as there is no contradiction in it.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. A method of adding special effects, the method comprising:
monitoring a special effect triggering event when a current video shooting picture is displayed;
when a special effect trigger event is monitored, determining the event type of the special effect trigger event based on a corresponding video shooting picture when the special effect trigger event is monitored;
obtaining a special effect to be displayed matched with the event type;
and displaying the obtained special effect to be displayed.
2. The method of claim 1,
and when the preset gesture features are recognized from the current video shooting picture, judging that a special effect triggering event is monitored.
3. The method of claim 2, wherein obtaining the special effect to be displayed that matches the event type comprises:
and selecting a special effect to be displayed from all special effects matched with the gesture type based on the recognized gesture type of the preset gesture characteristics.
4. The method according to claim 3, wherein selecting a special effect to be displayed from the special effects matched with the gesture type comprises any one of the following:
the first item: selecting a special effect to be displayed from various special effects which are matched with the gesture type and are not historically displayed, wherein the historically displayed special effect is a special effect displayed when a preset gesture feature of the gesture type is recognized in a latest preset time period when a current video shooting picture is displayed at this time, or the historically displayed special effect is a preset number of special effects which are displayed most recently when the current video shooting picture is displayed at this time and the preset gesture feature of the gesture type is recognized;
the second term is: selecting a special effect to be displayed from all special effects matched with the gesture type and the user parameter information according to the user parameter information of the equipment for displaying the current video shooting picture, wherein the user parameter information comprises at least one of the following parameters: the method comprises the steps that personal information of a user, the geographical position of the device, weather information corresponding to the geographical position of the device, the altitude of the device, the current time, facial expressions of the user and mood information which are related to a user account logged in by the device are identified, wherein the mood information is determined by identifying the facial expressions of the user in a current video shooting picture, and the facial expressions of the user are expression information of faces in the current video shooting picture;
the third item: and selecting special effects to be displayed from the special effects matched with the gesture types and the position relation based on the position relation between the picture content with the preset gesture characteristics and the user face, wherein the user face is the face position in the current video shooting picture.
5. The method of claim 2, wherein displaying the obtained special effect to be displayed comprises at least one of:
the first item:
displaying the special effect to be displayed at a special effect display position in the current video shooting picture, wherein the special effect display position corresponds to a gesture characteristic position, and the gesture characteristic position is a position of the picture content with the preset gesture characteristic recognized in the current video shooting picture;
the second term is:
identifying the face position of a user in the current video shooting picture;
displaying the special effect to be displayed based on the face position of the user;
the third item:
and replacing the current video shooting picture with the special effect to be displayed for displaying.
6. The method of claim 5, wherein displaying the special effect to be displayed based on the user face position comprises:
and displaying the special effect to be displayed based on the position relation between the gesture characteristic position and the face position of the user, wherein the gesture characteristic position is the position of the picture content with the preset gesture characteristic recognized in the current video shooting picture.
7. The method according to claim 5 or 6, wherein the obtained special effect to be displayed is displayed, further comprising:
acquiring a background to be used;
and replacing the picture background in the current video shooting picture by using the background to be used, wherein the picture background is picture content except the figure outline in the current video shooting picture.
8. The method according to claim 2, wherein after determining the event type of the special effect triggering event and before displaying the acquired special effect to be displayed, the method further comprises:
and acquiring a special effect of a preset type and displaying the special effect.
9. The method according to claim 2, further comprising, before monitoring for a special effect triggering event, the steps of:
and overlapping and displaying the current video shooting picture and the selected initial special effect.
10. The method of claim 9, wherein obtaining the special effect to be displayed that matches the event type comprises:
and selecting a special effect to be displayed from the special effects matched with the type of the selected initial special effect and the gesture type of the preset gesture motion.
11. The method of claim 1, comprising any one of:
the first item: when a current video shooting picture is displayed, when a screen touch event is detected, and picture content corresponding to the touch position of the screen touch event is preset type picture content, judging that a special effect trigger event is monitored, wherein the event type of the special effect trigger event is a special effect type of specific picture content;
the second term is: when a current video shooting picture is displayed, monitoring preset type picture contents from the current video shooting picture, and judging that a special effect triggering event is monitored when the number of the monitored preset type picture contents is greater than or equal to a preset number;
the third item: when a current video shooting picture is displayed, monitoring preset type picture content from the current video shooting picture, and judging that a special effect triggering event is monitored when a voice special effect control instruction is received;
the fourth item: when a current video shooting picture is displayed, monitoring preset type picture content from the current video shooting picture, and judging that a special effect triggering event is monitored when preset mouth shape characteristics are identified from the current video shooting picture.
12. The method of claim 11, wherein obtaining the special effect to be displayed that matches the event type comprises at least one of:
processing the preset type of picture content to obtain a special effect to be displayed, wherein the special effect is matched with the event type;
determining preset type image contents to be processed from the monitored preset type image contents, wherein the preset type image contents to be processed are randomly selected from the preset type image contents or are the preset type image contents meeting preset conditions in the preset type image contents; and carrying out preset image content processing on the image content of the preset type to be processed to obtain a special effect to be displayed, which is matched with the event type.
13. The method of claim 11, wherein obtaining the special effect to be displayed that matches the event type comprises at least one of:
when the number of the monitored preset type of picture contents is 1, if no special effect is added to the preset picture contents when the voice special effect control instruction is received, carrying out preset picture content processing on the preset type of picture contents to be processed to obtain a special effect to be displayed, wherein the special effect is matched with the event type;
when the number of the monitored preset type of picture contents is 1, if the preset picture contents have a current display special effect when the voice special effect control instruction is received, removing the current display special effect, or performing preset picture content processing on the preset type of picture contents to obtain a special effect to be displayed, which is matched with the event type, wherein the special effect to be displayed is different from the current display special effect;
when the number of the monitored preset type picture contents is more than 1, determining preset type picture contents to be processed, wherein the preset type picture contents to be processed are the preset type picture contents which do not have the current special effect to be displayed in the monitored preset type picture contents, and performing preset picture content processing on the preset type picture contents to be processed to obtain the special effect to be displayed matched with the event type;
when the number of the monitored preset type picture contents is more than 1, determining preset type picture contents to be processed, wherein the preset type picture contents to be processed are preset type picture contents without current display special effects in the monitored preset type picture contents, performing preset picture content processing on the preset type picture contents to be processed, obtaining special effects to be displayed, which are matched with the event types, and removing the special effects corresponding to the preset type picture contents with the current display special effects;
when the number of the monitored preset type of picture contents is 1, if no special effect is added to the preset picture contents when the preset mouth shape characteristics are identified, carrying out preset picture content processing on the preset type of picture contents to be processed to obtain a special effect to be displayed, wherein the special effect is matched with the event type;
when the number of the monitored preset type of picture contents is 1, if the preset picture contents have a current display special effect when the preset mouth shape characteristics are identified, removing the current display special effect, or performing preset picture content processing on the preset type of picture contents to obtain a special effect to be displayed matched with the event type, wherein the special effect to be displayed is different from the current display special effect;
when the number of the monitored preset type of picture contents is more than 1, determining preset type of picture contents to be processed, and performing preset picture content processing on the preset type of picture contents to be processed to obtain a special effect to be displayed, wherein the special effect is matched with the event type; the image content of the predetermined type to be processed is the image content of the predetermined type corresponding to the identified feature of the predetermined mouth shape, or the image content of the predetermined type to be processed is the image content of the predetermined type other than the image content of the predetermined type corresponding to the identified feature of the predetermined mouth shape.
14. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 13 when executing the computer program.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 13.
CN201810523816.0A 2018-05-28 2018-05-28 Special effect processing method, computer device and computer storage medium Active CN110611776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810523816.0A CN110611776B (en) 2018-05-28 2018-05-28 Special effect processing method, computer device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810523816.0A CN110611776B (en) 2018-05-28 2018-05-28 Special effect processing method, computer device and computer storage medium

Publications (2)

Publication Number Publication Date
CN110611776A true CN110611776A (en) 2019-12-24
CN110611776B CN110611776B (en) 2022-05-24

Family

ID=68887493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810523816.0A Active CN110611776B (en) 2018-05-28 2018-05-28 Special effect processing method, computer device and computer storage medium

Country Status (1)

Country Link
CN (1) CN110611776B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578056A (en) * 2016-01-27 2016-05-11 努比亚技术有限公司 Photographing terminal and method
US9773524B1 (en) * 2016-06-03 2017-09-26 Maverick Co., Ltd. Video editing using mobile terminal and remote computer
CN106231415A (en) * 2016-08-18 2016-12-14 北京奇虎科技有限公司 A kind of interactive method and device adding face's specially good effect in net cast
CN107786549A (en) * 2017-10-16 2018-03-09 北京旷视科技有限公司 Adding method, device, system and the computer-readable medium of audio file

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021052130A1 (en) * 2019-09-17 2021-03-25 西安中兴新软件有限责任公司 Video processing method, apparatus and device, and computer-readable storage medium
WO2021213191A1 (en) * 2020-04-23 2021-10-28 中兴通讯股份有限公司 Video processing method, terminal, and computer readable storage medium
WO2021233378A1 (en) * 2020-05-21 2021-11-25 北京字节跳动网络技术有限公司 Method and apparatus for configuring video special effect, device, and storage medium
US11962929B2 (en) 2020-05-21 2024-04-16 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus, and device for configuring video special effect, and storage medium
CN113709573A (en) * 2020-05-21 2021-11-26 北京字节跳动网络技术有限公司 Method, device and equipment for configuring video special effects and storage medium
CN113709383A (en) * 2020-05-21 2021-11-26 北京字节跳动网络技术有限公司 Method, device and equipment for configuring video special effects and storage medium
CN113709383B (en) * 2020-05-21 2024-05-03 抖音视界有限公司 Method, device, equipment and storage medium for configuring video special effects
CN111639613B (en) * 2020-06-04 2024-04-16 上海商汤智能科技有限公司 Augmented reality AR special effect generation method and device and electronic equipment
CN111639613A (en) * 2020-06-04 2020-09-08 上海商汤智能科技有限公司 Augmented reality AR special effect generation method and device and electronic equipment
CN113301358A (en) * 2020-07-27 2021-08-24 阿里巴巴集团控股有限公司 Content providing and displaying method and device, electronic equipment and storage medium
CN113301358B (en) * 2020-07-27 2023-08-29 阿里巴巴集团控股有限公司 Content providing and displaying method and device, electronic equipment and storage medium
CN114079817A (en) * 2020-08-20 2022-02-22 北京达佳互联信息技术有限公司 Video special effect control method and device, electronic equipment and storage medium
CN112148188A (en) * 2020-09-23 2020-12-29 北京市商汤科技开发有限公司 Interaction method and device in augmented reality scene, electronic equipment and storage medium
CN112511750A (en) * 2020-11-30 2021-03-16 维沃移动通信有限公司 Video shooting method, device, equipment and medium
CN112511750B (en) * 2020-11-30 2022-11-29 维沃移动通信有限公司 Video shooting method, device, equipment and medium
WO2022116604A1 (en) * 2020-12-04 2022-06-09 北京达佳互联信息技术有限公司 Image captured image processing method and electronic device
CN112672036A (en) * 2020-12-04 2021-04-16 北京达佳互联信息技术有限公司 Shot image processing method and device and electronic equipment
CN112584215A (en) * 2020-12-10 2021-03-30 深圳创维-Rgb电子有限公司 Video transmission method and device, smart television and storage medium
CN112804466A (en) * 2020-12-31 2021-05-14 重庆电子工程职业学院 Real-time interactive special effect camera shooting and photographing method
CN112866562A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Picture processing method and device, electronic equipment and storage medium
CN112766214A (en) * 2021-01-29 2021-05-07 北京字跳网络技术有限公司 Face image processing method, device, equipment and storage medium
CN112995694A (en) * 2021-04-09 2021-06-18 北京字节跳动网络技术有限公司 Video display method and device, electronic equipment and storage medium
EP4152741A4 (en) * 2021-07-31 2023-12-06 Honor Device Co., Ltd. Image processing method and electronic device
WO2023040633A1 (en) * 2021-09-14 2023-03-23 北京字跳网络技术有限公司 Video generation method and apparatus, and terminal device and storage medium
CN114489404A (en) * 2022-01-27 2022-05-13 北京字跳网络技术有限公司 Page interaction method, device, equipment and storage medium
CN114173155A (en) * 2022-02-09 2022-03-11 檀沐信息科技(深圳)有限公司 Virtual live image processing method
CN114245021A (en) * 2022-02-14 2022-03-25 北京火山引擎科技有限公司 Interactive shooting method, electronic device, storage medium and computer program product
CN114245021B (en) * 2022-02-14 2023-08-08 北京火山引擎科技有限公司 Interactive shooting method, electronic equipment, storage medium and computer program product
WO2023224548A3 (en) * 2022-05-17 2024-01-25 脸萌有限公司 Special effects video determination method and apparatus, electronic device and storage medium
WO2024061274A1 (en) * 2022-09-20 2024-03-28 成都光合信号科技有限公司 Method for filming and generating video, and related device

Also Published As

Publication number Publication date
CN110611776B (en) 2022-05-24

Similar Documents

Publication Publication Date Title
CN110611776B (en) Special effect processing method, computer device and computer storage medium
US10565437B2 (en) Image processing device and method for moving gesture recognition using difference images
CN108932053B (en) Drawing method and device based on gestures, storage medium and computer equipment
US10599914B2 (en) Method and apparatus for human face image processing
US10742900B2 (en) Method and system for providing camera effect
JP6355829B2 (en) Gesture recognition device, gesture recognition method, and information processing device
CN108762505B (en) Gesture-based virtual object control method and device, storage medium and equipment
CN107483834B (en) Image processing method, continuous shooting method and device and related medium product
JP5423525B2 (en) Handwriting input device, handwriting input method, and handwriting input program
CN111857356A (en) Method, device, equipment and storage medium for recognizing interaction gesture
CN113099298B (en) Method and device for changing virtual image and terminal equipment
KR20140002007A (en) Information processing device, information processing method, and recording medium
CN112527115B (en) User image generation method, related device and computer program product
CN108875667B (en) Target identification method and device, terminal equipment and storage medium
CN107273869B (en) Gesture recognition control method and electronic equipment
CN108897589B (en) Human-computer interaction method and device in display equipment, computer equipment and storage medium
CN103106388B (en) Method and system of image recognition
CN110348193A (en) Verification method, device, equipment and storage medium
CN112274909A (en) Application operation control method and device, electronic equipment and storage medium
CN111880660B (en) Display screen control method and device, computer equipment and storage medium
CN112749357B (en) Interaction method and device based on shared content and computer equipment
CN110164444A (en) Voice input starting method, apparatus and computer equipment
JP7173535B2 (en) Motion extraction device, motion extraction method, and program
CN111405175A (en) Camera control method and device, computer equipment and storage medium
CN108874115B (en) Session scene display method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant