CN111880709A - Display method and device, computer equipment and storage medium

Info

Publication number
CN111880709A
Authority
CN
China
Prior art keywords
video picture
target object
display
target
determining
Prior art date
Legal status
Pending
Application number
CN202010758534.6A
Other languages
Chinese (zh)
Inventor
于霄
张春
赵代平
薛永娇
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010758534.6A priority Critical patent/CN111880709A/en
Publication of CN111880709A publication Critical patent/CN111880709A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a display method, an apparatus, a computer device and a storage medium, wherein the method comprises: displaying an adjustment interface of a video picture, the adjustment interface comprising an adjustment option for adjusting a display special effect of the video picture; detecting a trigger operation on at least one adjustment option in the adjustment interface; in response to the trigger operation, determining the display special effect of the video picture corresponding to the triggered at least one adjustment option; and displaying the display special effect in the video picture.

Description

Display method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a display method, an apparatus, a computer device, and a storage medium.
Background
With the development of internet technology, live video services have gradually emerged and are widely used in industries such as e-commerce and education. In a live video scene, an anchor can use live broadcast equipment to stream anytime and anywhere, but what the live video displays is usually the real environment the anchor is in and the anchor's real appearance. This display effect has certain limitations, so the live video effect is poor.
Disclosure of Invention
The embodiment of the disclosure at least provides a display method, a display device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a display method, including:
displaying an adjusting interface of a video picture, wherein the adjusting interface comprises an adjusting option for adjusting a display special effect of the video picture;
detecting a trigger operation of at least one adjusting option in the adjusting interface;
in response to the triggering operation, determining a display special effect of the video picture corresponding to the triggered at least one adjusting option;
and displaying the display special effect in the video picture.
In the embodiment of the disclosure, various adjustment options related to the display special effect can be set for the video picture, so that a user can trigger the required adjustment options based on the display requirements in the live broadcasting process, and the setting of the display special effect corresponding to the triggered adjustment options is completed. Specifically, the terminal device can respectively determine the display special effect of the video picture corresponding to each adjustment option by responding to the triggering operation of the user on at least one adjustment option, and then can present the corresponding display special effect in the video picture.
In some embodiments, the at least one of the adjustment options that is triggered comprises a background beautification option;
the determining, in response to the triggering operation, a special display effect of the video picture corresponding to the triggered at least one of the adjustment options includes:
responding to the trigger operation, and acquiring a scene image of a target theme indicated by the trigger operation;
detecting a first image area where the target object is located and a second image area except the first image area from the video picture;
and replacing a second image area in the video picture by using the scene image of the target theme to obtain the video picture conforming to the target theme.
In the embodiment of the disclosure, a background beautification option can be set for the video picture, so that a user can trigger the background beautification option based on the display requirement in the live broadcast process, and personalized setting of the background of the video picture is realized. Specifically, the terminal device may respond to the trigger operation to replace an image area other than the image area where the target object is located with a video picture conforming to the target theme, so that the background area of the displayed video picture can conform to the theme required by the user, and various video live broadcast scene requirements can be met.
In some embodiments, the at least one of the adjustment options that is triggered comprises an adjustment option of a target object;
the determining, in response to the triggering operation, a special display effect of the video picture corresponding to the triggered at least one of the adjustment options includes:
responding to the trigger operation, acquiring key point information of a target object appearing in the video picture, and acquiring a target material indicated by the trigger operation;
and determining display parameters of the target material in the video picture based on the key point information of the target object, and adding the target material on the video picture by using the display parameters.
In the embodiment of the disclosure, an adjustment option for a target object can be set for a video picture, so that a user can trigger a required adjustment option based on a display requirement in a live broadcasting process, and personalized setting of the target object (such as the user) of the video picture is realized. Specifically, the terminal device may respond to the trigger operation, and determine the display parameters of the target material in the video frame based on the detection result of the target object, so that the target material required by the user may be superimposed in the video frame to meet the requirements of various video live scenes.
In some embodiments, the target object comprises a face, the target material comprising beauty material and/or makeup material of at least one facial organ;
the determining, based on the key point information of the target object, display parameters of the target material in the video picture, and adding the target material on the video picture by using the display parameters includes:
determining a face contour position and/or a position of at least one facial organ based on the positions of the key points of the target object;
determining display parameters of the beauty material based on the face contour position, and/or determining display parameters of the makeup material based on the position of the at least one facial organ; the display parameters comprise a display position and a display size;
and adding the beauty material and/or the makeup material which accord with the display parameters on the video picture.
In the embodiment of the disclosure, a beauty and/or makeup material may be provided, and display parameters of the beauty and makeup material may be determined based on a face contour or a position of a facial organ, so that a target object subjected to beauty or makeup processing is displayed in a video frame, and an effect of beautifying the target object (such as a user) of the video frame is achieved.
In some embodiments, the target object includes a face and/or a limb, the target material includes effect sticker material;
the determining, based on the key point information of the target object, display parameters of the target material in the video picture, and adding the target material on the video picture by using the display parameters includes:
identifying an expression category of the face and/or an action category of the limb appearing in the video frame;
and under the condition that the expression category and/or the action category accord with a set triggering display condition, determining the display position of the special effect sticker material based on the positions of the key points of the target object, and adding the sticker material to the display position of the video picture, wherein the display position of the sticker material changes along with the change of the positions of the key points of the target object.
In the embodiment of the disclosure, the special effect sticker material can be provided, and the display of the special effect sticker material in the video picture can be triggered through the expression or the action of the target object, so that the interactivity and the interestingness in the video live broadcast process are increased, and the video watching experience is improved.
In some embodiments, the at least one of the adjustment options that is triggered comprises an adjustment option of a target object;
the determining, in response to the triggering operation, a special display effect of the video picture corresponding to the triggered at least one of the adjustment options includes:
responding to the trigger operation, and acquiring a part to be deformed and a deformation degree of the target object indicated by the trigger operation; acquiring key point information of a target object appearing in the video picture;
determining a to-be-deformed area corresponding to at least one to-be-deformed part in the video picture based on the key point information of the target object;
and according to the deformation degree, carrying out deformation processing on the part to be deformed on the at least one area to be deformed to obtain a video picture after deformation processing.
In this embodiment of the disclosure, the adjustment options for the target object may further include an adjustment option for performing deformation processing on the target object, and based on the deformation portion and the deformation degree indicated by the trigger operation, the deformation of the deformation portion of the target object may be performed to a corresponding degree, for example, adjustment operations such as face thinning may be performed by deformation, so as to achieve an effect of beautifying the target object in the video picture.
In some embodiments, the method further comprises:
determining deflection parameters of the part to be deformed based on the key point information of the target object;
according to the deformation degree, the deformation processing is carried out on the part to be deformed on the at least one area to be deformed to obtain a video picture after the deformation processing, and the method comprises the following steps:
determining the deformation direction of the at least one region to be deformed based on the key point information and the deflection parameters;
and according to the deformation degree and the deformation direction, carrying out deformation processing on the part to be deformed on the at least one area to be deformed to obtain a video picture after the deformation processing.
In the embodiment of the disclosure, deflection parameters such as the orientation of the target object can be taken into consideration when the target object is deformed, so as to achieve a better deformation effect.
In some embodiments, the at least one of the adjustment options that is triggered comprises an adjustment option of a target object;
the determining, in response to the triggering operation, a special display effect of the video picture corresponding to the triggered at least one of the adjustment options includes:
responding to the trigger operation, and acquiring an avatar of the target object indicated by the trigger operation; acquiring key point information of the target object;
acquiring target bone parameters matched with the target object according to the key point information;
creating a three-dimensional virtual model corresponding to the target object according to the target skeleton parameters and a preset standard three-dimensional virtual model;
rendering and generating an avatar of the target object based on the three-dimensional virtual model, and replacing the target object appearing in the video picture with the avatar of the target object.
In the embodiment of the disclosure, the adjustment options for the target object may further include an adjustment option for generating an avatar of the target object, and after the avatar indicated by the triggering operation is received, the avatar matched with the target object may be remodeled through key point information of the target object, so that the avatar corresponding to a display area of the target object in a video picture is displayed, interestingness in a live video process is increased, and privacy of the target object can be guaranteed.
In a second aspect, the present disclosure provides a display device, the device comprising:
the first display module is used for displaying an adjustment interface of a video picture, and the adjustment interface comprises an adjustment option for adjusting a display special effect of the video picture;
the detection module is used for detecting the triggering operation of at least one adjusting option in the adjusting interface;
the determining module is used for responding to the triggering operation and determining the display special effect of the video picture corresponding to the triggered at least one adjusting option;
and the second display module is used for displaying the display special effect in the video picture.
In some embodiments, the at least one of the adjustment options that is triggered comprises a background beautification option;
the determining module, when determining, in response to the triggering operation, a display special effect of the video image corresponding to the triggered at least one of the adjustment options, is specifically configured to:
responding to the trigger operation, and acquiring a scene image of a target theme indicated by the trigger operation;
detecting a first image area where the target object is located and a second image area except the first image area from the video picture;
and replacing a second image area in the video picture by using the scene image of the target theme to obtain the video picture conforming to the target theme.
In some embodiments, the at least one of the adjustment options that is triggered comprises an adjustment option of a target object;
the determining module, when determining, in response to the triggering operation, a display special effect of the video image corresponding to the triggered at least one of the adjustment options, is specifically configured to:
responding to the trigger operation, acquiring key point information of a target object appearing in the video picture, and acquiring a target material indicated by the trigger operation;
and determining display parameters of the target material in the video picture based on the key point information of the target object, and adding the target material on the video picture by using the display parameters.
In some embodiments, the target object comprises a face, the target material comprising beauty material and/or makeup material of at least one facial organ;
the determining module, when determining the display parameter of the target material in the video picture based on the key point information of the target object and adding the target material on the video picture by using the display parameter, is specifically configured to:
determining a face contour position and/or a position of at least one facial organ based on the positions of the key points of the target object;
determining display parameters of the beauty material based on the face contour position, and/or determining display parameters of the makeup material based on the position of the at least one facial organ; the display parameters comprise a display position and a display size;
and adding the beauty material and/or the makeup material which accord with the display parameters on the video picture.
In some embodiments, the target object includes a face and/or a limb, the target material includes effect sticker material;
the determining module, when determining the display parameter of the target material in the video picture based on the key point information of the target object and adding the target material on the video picture by using the display parameter, is specifically configured to:
identifying an expression category of the face and/or an action category of the limb appearing in the video frame;
and under the condition that the expression category and/or the action category accord with a set triggering display condition, determining the display position of the special effect sticker material based on the positions of the key points of the target object, and adding the sticker material to the display position of the video picture, wherein the display position of the sticker material changes along with the change of the positions of the key points of the target object.
In some embodiments, the at least one of the adjustment options that is triggered comprises an adjustment option of a target object;
the determining module, when determining, in response to the triggering operation, a display special effect of the video image corresponding to the triggered at least one of the adjustment options, is specifically configured to:
responding to the trigger operation, and acquiring a part to be deformed and a deformation degree of the target object indicated by the trigger operation; acquiring key point information of a target object appearing in the video picture;
determining a to-be-deformed area corresponding to at least one to-be-deformed part in the video picture based on the key point information of the target object;
and according to the deformation degree, carrying out deformation processing on the part to be deformed on the at least one area to be deformed to obtain a video picture after deformation processing.
In some embodiments, the determining module is further configured to:
determining deflection parameters of the part to be deformed based on the key point information of the target object;
the determining module, when performing deformation processing on the to-be-deformed portion on the at least one to-be-deformed region according to the deformation degree to obtain a video picture after deformation processing, is specifically configured to:
determining the deformation direction of the at least one region to be deformed based on the key point information and the deflection parameters;
and according to the deformation degree and the deformation direction, carrying out deformation processing on the part to be deformed on the at least one area to be deformed to obtain a video picture after the deformation processing.
In some embodiments, the at least one of the adjustment options that is triggered comprises an adjustment option of a target object;
the determining module, when determining, in response to the triggering operation, a display special effect of the video image corresponding to the triggered at least one of the adjustment options, is specifically configured to:
responding to the trigger operation, and acquiring an avatar of the target object indicated by the trigger operation; acquiring key point information of the target object;
acquiring target bone parameters matched with the target object according to the key point information;
creating a three-dimensional virtual model corresponding to the target object according to the target skeleton parameters and a preset standard three-dimensional virtual model;
rendering and generating an avatar of the target object based on the three-dimensional virtual model, and replacing the target object appearing in the video picture with the avatar of the target object.
In a third aspect, the present disclosure provides a computer device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is running, the machine-readable instructions when executed by the processor performing the steps of the method as set forth above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as set forth above.
For the effect description of the above display apparatus, computer device, and computer-readable storage medium, reference is made to the description of the above display method, which is not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here are incorporated in and form a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, and that those skilled in the art can derive other related drawings from them without any inventive effort.
FIG. 1 shows a flow chart of a display method provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating an embodiment of a display method provided by an embodiment of the present disclosure;
FIG. 3 illustrates a schematic view of an adjustment interface provided by embodiments of the present disclosure;
FIG. 4 is a flowchart illustrating a specific implementation of a display method according to an embodiment of the present disclosure;
FIG. 5 illustrates a schematic view of an adjustment interface provided by embodiments of the present disclosure;
FIG. 6 is a flow chart illustrating an embodiment of a display method provided by an embodiment of the present disclosure;
FIG. 7 illustrates a schematic view of an adjustment interface provided by embodiments of the present disclosure;
FIG. 8 is a flow chart illustrating an embodiment of a display method provided by an embodiment of the present disclosure;
FIG. 9 illustrates a schematic view of an adjustment interface provided by embodiments of the present disclosure;
FIG. 10 shows a schematic view of a display device provided by an embodiment of the present disclosure;
fig. 11 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
With the development of internet technology, live video services have gradually emerged and are widely used in industries such as e-commerce and education. In a live video scene, an anchor can use live broadcast equipment to stream anytime and anywhere, but what the live video displays is usually the real environment the anchor is in and the anchor's real appearance. This display effect has certain limitations, so the live video effect is poor.
For example, in a classroom-like live scene, a teacher can teach at home through an online live client, students can also watch the teacher's lesson content at home through the online live broadcast, and live pictures of the teacher and/or the students can be displayed in the online live client. When a user participates in the live broadcast at home, the home environment may be captured and shown in the live picture; this display effect may affect the user's experience of participating in the live broadcast, so the live broadcast effect is not good.
For another example, in a live video scene, if the broadcasting party simply talks alone, only that person's picture is presented in the video picture; on one hand, the presented content is monotonous, and on the other hand, the display effect of the person on camera is also poor.
The above drawbacks were identified by the inventors after practical and careful study; therefore, the discovery of the above problems and the solutions to them proposed by the present disclosure should be regarded as the inventors' contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In view of the above drawbacks in a live video scene, the present disclosure provides a display method, an apparatus, a computer device, and a storage medium, which can set various adjustment options related to a display special effect for a video picture, so that a user (for example, a party participating in live broadcasting) can trigger a required adjustment option based on a display requirement in a live broadcasting process, and complete setting of the display special effect corresponding to the triggered adjustment option. Specifically, the terminal device can respectively determine the display special effect of the video picture corresponding to each adjustment option by responding to the triggering operation of the user on at least one adjustment option, and then can present the corresponding display special effect in the video picture.
An execution subject of the display method provided by the embodiment of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device or other processing devices, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a handheld device, a computing device, a vehicle-mounted device, a smart television, a wearable device, or the like. The computer equipment can be configured with a display screen or externally connected with the display screen, and the display screen can display video pictures, adjusted video pictures with special display effects and the like. In some possible implementations, the display method may be implemented by a processor calling computer readable instructions stored in a memory.
The following describes the display method provided by the embodiments of the present disclosure by taking a terminal device as the execution subject. In the embodiments of the present disclosure, a software tool supporting adjustment of live video pictures can be installed in the terminal device. The software tool can be embedded in the live video client and can be started before the user uses the live video client for live video, so as to adjust the display effect of the video pictures. For example, the software tool serves as a live video assistant and may also be understood as a virtual camera: the video picture presented after processing by the virtual camera is the video picture obtained by processing the original video picture with the software tool. The adjustment functions of the video picture that the software tool can support may include, but are not limited to, at least one of beauty, micro-shaping, makeup, filter, sticker, Avatar, and virtual background. These adjustment functions and the processing flow for adjusting the original video picture with them are described in detail below.
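As a rough illustration of how such a virtual-camera tool might chain the enabled adjustment functions over captured frames, the Python sketch below uses OpenCV; the handler names, the dictionary layout, and the overall structure are assumptions for illustration only, not part of the disclosure:

```python
import cv2

# Hypothetical effect handlers: each takes a BGR frame plus its settings and
# returns the processed frame. The names and placeholders are illustrative.
EFFECT_HANDLERS = {
    "virtual_background": lambda frame, cfg: frame,
    "beauty":             lambda frame, cfg: frame,
    "makeup":             lambda frame, cfg: frame,
    "sticker":            lambda frame, cfg: frame,
    "deform":             lambda frame, cfg: frame,
    "avatar":             lambda frame, cfg: frame,
}

def run_virtual_camera(enabled_effects):
    """Capture frames, apply the enabled display special effects in order,
    and show the adjusted video picture."""
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            for name, cfg in enabled_effects.items():
                frame = EFFECT_HANDLERS[name](frame, cfg)
            cv2.imshow("adjusted video picture", frame)
            if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```

A caller would, for example, enable `{"virtual_background": {"theme": "classroom"}}` after the corresponding adjustment option is triggered.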
Referring to fig. 1, a flowchart of a display method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S104, where:
S101: displaying an adjusting interface of the video picture, wherein the adjusting interface comprises an adjusting option for adjusting the display special effect of the video picture.
S102, detecting the trigger operation of at least one adjusting option in the adjusting interface.
And S103, responding to the triggering operation, and determining the display special effect of the video picture corresponding to the triggered at least one adjusting option.
And S104, displaying the display special effect in the video picture.
In some embodiments, an adjustment interface of the video picture may be displayed on the terminal device; a first area of the adjustment interface may display the adjustment options for adjusting the display special effect of the video picture, and a second area of the adjustment interface may display the video picture acquired in real time by the camera. Before the video picture is adjusted, the video picture displayed in the second area is the originally acquired video picture; once an adjustment option is triggered, the adjusted video picture can be displayed in the second area in real time for the user's reference.
The display special effects of the video picture can comprise a display special effect of a foreground area and a display special effect of a background area. The display special effect of the foreground area may specifically include a display special effect of a target object, where the target object includes but is not limited to a person, a displayed item, and the like appearing in the video picture; display special effects for a person are, for example, beauty, makeup, micro-shaping, avatar, sticker, and the like. The display special effect of the background area is, for example, a background beautification special effect. Each type of display special effect can correspond to an adjustment option, and the display parameters of that type of display special effect can be adjusted by triggering the adjustment option.
For example, the triggering operation may include a series of operations performed on the triggered at least one adjustment option, and by detecting operation parameters of the series of operations, the video picture may be adjusted based on the operation parameters to obtain an adjusted video picture.
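To make the idea of operation parameters concrete, the sketch below maps a detected trigger operation (an option id plus its parameters) to a display-effect configuration, roughly corresponding to step S103; the data structures and field names are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class TriggerOperation:
    """A detected trigger operation: which adjustment option was triggered,
    plus its operation parameters (slider values, selected material id, ...)."""
    option_id: str
    params: dict = field(default_factory=dict)

@dataclass
class DisplayEffect:
    kind: str
    settings: dict

def determine_display_effect(op: TriggerOperation) -> DisplayEffect:
    """Map a trigger operation to the display special effect to render."""
    if op.option_id == "background":
        return DisplayEffect("virtual_background",
                             {"theme": op.params.get("theme", "classroom")})
    if op.option_id == "deform":
        return DisplayEffect("deform", {"part": op.params.get("part", "face"),
                                        "degree": float(op.params.get("degree", 0.5))})
    # beauty, makeup, sticker and avatar options would be handled analogously
    return DisplayEffect(op.option_id, dict(op.params))
```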
To facilitate understanding of the adjustment options provided by the present disclosure and the process of adjusting a video picture, the following description will be made with reference to specific examples.
When the triggered at least one adjustment option includes a background beautification option, a specific implementation flow of the display method provided by the present disclosure may be shown in fig. 2, specifically including S201 to S206, where:
S201: displaying an adjusting interface of the video picture, wherein the adjusting interface comprises a background beautification option for adjusting the display special effect of the background picture.
S202, detecting the trigger operation of the background beautification option in the adjustment interface.
S203, responding to the trigger operation, and acquiring the scene image of the target theme indicated by the trigger operation.
S204, a first image area where the target object is located and a second image area except the first image area are detected from the video picture.
S205, replacing the second image area in the video picture by using the scene image of the target theme to obtain the video picture conforming to the target theme.
And S206, displaying the video picture conforming to the target theme.
For example, a background segmentation technology may be used to identify the first image area and the second image area in the video picture, determine a background area to be processed, that is, the second image area, based on the identification result, and replace the background area of the original video picture based on a scene image of a target subject selected by a user. The scene images of the target subject can be configured in advance by combining with specific application scene requirements, such as scene images conforming to a classroom background, or scene images conforming to a shopping platform background.
For example, the replacement of the background area may be implemented by image fusion. After the second image area is identified, the pixel values of the second image area may be set to a fixed value, for example an all-white or all-black effect; the adjusted video picture is then fused with a scene image of the same size, so that the display effect of the target object in the first image area is retained while the second image area is replaced by the image content at the corresponding position in the scene image, yielding the video picture conforming to the target theme. To optimize the display effect of the replacement, some image optimization operations, such as smoothing the edge transition region between the two areas, may also be added to the fusion process.
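A minimal Python/OpenCV sketch of this fusion step is given below; it assumes a hypothetical `segment_person` helper that returns a soft foreground mask in [0, 1] (the disclosure does not prescribe a specific segmentation model), and blurs the mask to soften the edge transition between the two regions:

```python
import cv2
import numpy as np

def replace_background(frame_bgr, scene_bgr, segment_person):
    """Keep the first image area (where the target object is) and replace the
    second image area with the scene image of the target theme."""
    h, w = frame_bgr.shape[:2]
    scene = cv2.resize(scene_bgr, (w, h))
    # Hypothetical segmenter: float mask in [0, 1], 1.0 where the person is.
    mask = segment_person(frame_bgr).astype(np.float32)
    # Soften the edge transition region between the two areas.
    mask = cv2.GaussianBlur(mask, (11, 11), 0)
    mask3 = mask[..., None]
    fused = mask3 * frame_bgr.astype(np.float32) + (1.0 - mask3) * scene.astype(np.float32)
    return fused.astype(np.uint8)
```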
For example, an adjustment interface that can be presented after the background beautification option is triggered is shown in fig. 3. The first area in fig. 3 shows the background beautification option and the selectable background options: after triggering the displayed "virtual scene", the user is shown selectable scene images of different themes, such as "virtual scene 1" and "virtual scene 2". After the user selects a scene image of a certain target theme, the above process can be used to complete the replacement of the background, so that the background of the gray area in the second area of fig. 3 is replaced with the scene image of the target theme selected by the user.
In the above embodiment, the background beautification option can be set for the video picture, so that the user can trigger the background beautification option based on the display requirement in the live broadcast process, and the personalized setting of the background of the video picture is realized. Specifically, the terminal device may respond to the trigger operation to replace an image area other than the image area where the target object is located with a video picture conforming to the target theme, so that the background area of the displayed video picture can conform to the theme required by the user, and various video live broadcast scene requirements can be met.
When the at least one triggered adjustment option includes an adjustment option of the target object, a specific implementation flow of the presentation method provided by the present disclosure may be shown in fig. 4, specifically including S401 to S406, where:
S401, displaying an adjusting interface of the video picture, wherein the adjusting interface comprises an adjusting option for adjusting the display special effect of the target object.
S402, detecting the trigger operation of the adjusting options of the target object in the adjusting interface.
And S403, acquiring key point information of the target object appearing in the video picture in response to the trigger operation.
And S404, acquiring the target material indicated by the trigger operation.
S405, determining display parameters of the target material in the video picture based on the key point information of the target object, and adding the target material on the video picture by using the display parameters.
And S406, displaying the video picture added with the target material.
In some embodiments, the target object may include a face (e.g., a human face), and accordingly, the target material may include beauty material and/or makeup material of at least one facial organ. The beauty material may include various types of skin-smoothing effect material, and the makeup material of a facial organ may include, for example, lipstick effect material for the lips, and eye shadow effect material and eyeball effect material for the eyes, which the present disclosure does not limit.
For example, the key points of the target object may be identified by a key point detection algorithm, and include facial key points, limb key points, and the like. The identified key point information may include the position and index of each key point, and so on. Specifically, the position of the face contour and/or the position of at least one facial organ may be determined based on the positions of the key points of the target object; the display parameters of the beauty material may then be determined based on the position of the face contour, or the display parameters of the makeup material may be determined based on the position of the at least one facial organ, so as to add the beauty material and/or the makeup material conforming to the display parameters to the video picture. The display parameters include, but are not limited to, a display position, a display size, and the like.
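As an illustration of deriving the display position and display size from key points and then overlaying a material, the sketch below places a makeup material with an alpha channel over a facial organ located by its key points; the function name and the assumption that the organ lies fully inside the frame are illustrative only:

```python
import cv2
import numpy as np

def overlay_makeup(frame_bgr, material_bgra, organ_points):
    """Place a makeup material (BGRA image with an alpha channel) over the
    facial organ located by its key points.

    organ_points: (N, 2) pixel coordinates of the organ's key points; the
    organ is assumed to lie fully inside the frame."""
    pts = np.asarray(organ_points, dtype=np.float32)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    # Display parameters derived from the key points: position and size.
    x, y = int(x_min), int(y_min)
    w = max(int(x_max - x_min), 1)
    h = max(int(y_max - y_min), 1)
    material = cv2.resize(material_bgra, (w, h))
    alpha = material[..., 3:4].astype(np.float32) / 255.0
    roi = frame_bgr[y:y + h, x:x + w].astype(np.float32)
    frame_bgr[y:y + h, x:x + w] = (alpha * material[..., :3] +
                                   (1.0 - alpha) * roi).astype(np.uint8)
    return frame_bgr
```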
For example, taking a makeup option as an example, an adjustment interface that can be presented after the makeup option is triggered is shown in fig. 5, a first area in fig. 5 shows the makeup option and alternative makeup sticker material options, such as makeup materials of different types of lipstick stickers, blush stickers, and the like, and after a user selects a certain makeup material, the user can adopt the above process to fuse the selected makeup material to a corresponding position of a face in a video frame to present a makeup effect.
In the above embodiment, the display parameters of the beauty and makeup materials are determined based on the face contour or the position of the facial organ, so that the target object subjected to the beauty or makeup processing is displayed in the video frame, and the effect of beautifying the target object (such as a user) of the video frame is achieved.
In some embodiments, the target object may also include a face and/or a limb, and the target material may also include effect sticker material. The special effect sticker material is, for example, a particle special effect material, a flash special effect material, a raindrop special effect material, and the like, and may be specifically configured according to an actual scene requirement, which is not limited in this disclosure.
For example, an expression category of a face and/or an action category of a limb appearing in the video screen may be recognized, and in the case where the expression category and/or the action category meets a set trigger display condition, a display position of the special effect sticker material is determined based on a position of a key point of a target object (a face, a limb, or the like), and the sticker material is added to the display position of the video screen, wherein the display position of the sticker material changes following a change in the position of the key point of the target object.
For example, facial expression categories may include, but are not limited to, smiling, neutral, laughing, pouting, etc.; limb action categories include, but are not limited to, gesture categories (such as giving a thumbs-up or making an "OK" gesture), torso action categories, and the like. Expression categories can be identified from the recognition result of the facial key points, or predicted by a pre-trained neural network for expression classification. Likewise, limb action categories can be identified from the recognition result of the limb key points, or predicted by a pre-trained neural network for limb action recognition.
For example, several target expression types and/or target action categories may be configured in advance, and in a case that it is detected that the identified expression category is the target expression, and/or the identified limb action category is the target action, it may be determined that the expression category and/or the action category meets the set trigger presentation condition. Or determining that the expression category and/or the action category meet the set trigger display condition under the condition that the identified expression category is the target expression and the retention time of the target expression exceeds a first set time length and/or the identified limb action category is the target action and the retention time of the target action exceeds a second set time length.
For example, the display position of the effect sticker material may be determined by the position of a key point of the target object (face or limb, etc.), for example, the display position of the effect sticker material corresponding to the target gesture may be determined by detecting the position of a key point of the hand, so that the corresponding effect sticker material may move along with the movement of the hand in the process that the user makes the target gesture and moves the hand. For another example, the display position of the special effect sticker material corresponding to the shaking motion is determined by detecting the position of the head key point, so that the corresponding special effect sticker material can move along with the movement of the head when the user shakes the head.
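The following sketch illustrates one way such a trigger display condition with a hold duration could be checked, and how the sticker's display position could then follow a key point; the class, threshold, and label names are assumptions for illustration:

```python
import time

class StickerTrigger:
    """Show a special-effect sticker once a target expression/action has been
    held for a set duration; the sticker position then follows the key point."""

    def __init__(self, target_label, hold_seconds=1.0):
        self.target_label = target_label
        self.hold_seconds = hold_seconds
        self._since = None

    def update(self, label, keypoint_xy):
        """label: recognized expression/action for the current frame.
        keypoint_xy: anchor key point (e.g. hand or head) for the sticker.
        Returns the sticker display position, or None if not triggered."""
        now = time.monotonic()
        if label == self.target_label:
            if self._since is None:
                self._since = now
            if now - self._since >= self.hold_seconds:
                return keypoint_xy  # sticker tracks this key point
        else:
            self._since = None
        return None
```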
The embodiment provides the special effect sticker material, and the special effect sticker material can be displayed in the video picture through the expression or the action of the target object, so that the interactivity and the interestingness in the video live broadcast process are increased, and the video watching experience is improved.
In the embodiment of the disclosure, adjustment options of a target object can be set for a video picture, so that a user can trigger a required adjustment option based on a display requirement in a live broadcasting process, and personalized setting of the target object (such as the user) of the video picture is realized. Specifically, the terminal device may respond to the trigger operation, and determine the display parameters of the target material in the video frame based on the detection result of the target object, so that the target material required by the user may be superimposed in the video frame to meet the requirements of various video live scenes.
In the embodiment of the present disclosure, in addition to adjustment options such as beauty, makeup and special effect stickers, adjustment options for deformation effects can also be provided for the target object. The deformation effects include, for example, deformation effects on the face or on the body, such as face thinning, leg slimming, and the like.
For the operation process and the display effect executed by the adjustment option, reference may be made to a specific implementation flow of the display method provided by the present disclosure shown in fig. 6, which includes S601 to S607, where:
S601, displaying an adjusting interface of the video picture, wherein the adjusting interface comprises an adjusting option for adjusting the deformation effect of the target object.
S602, detecting the trigger operation of the adjusting option for the deformation effect of the target object in the adjusting interface.
And S603, responding to the trigger operation, and acquiring the part to be deformed and the deformation degree of the target object indicated by the trigger operation.
And S604, acquiring key point information of the target object appearing in the video picture.
S605, determining a to-be-deformed area corresponding to at least one to-be-deformed part in the video picture based on the key point information of the target object.
And S606, according to the deformation degree, deforming the part to be deformed on the at least one area to be deformed to obtain a video picture after deformation processing.
And S607, displaying the video picture after the deformation processing.
The part to be deformed and the deformation degree of the target object can be selected and edited by the user. The adjustment interface can provide a plurality of deformation options for the face or limb parts. For example, referring to the editing operation area shown on the right side of the adjustment interface in fig. 7, at least one part to be deformed and its deformation degree can be determined by detecting the user's operations in the editing operation area, and the part to be deformed is then deformed to the corresponding degree. The deformation processing may be, for example, mesh-based warping, which is not limited by the present disclosure.
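The sketch below shows a deliberately simplified local warp controlled by a deformation degree (a production effect would typically use a mesh-based warp over the region located by the key points); the radial pinch used here is an assumption for illustration:

```python
import cv2
import numpy as np

def pinch_region(frame_bgr, center, radius, degree):
    """Toy local warp: pixels inside a circular region to be deformed are
    pulled towards its center; `degree` in [0, 1] controls the strength."""
    h, w = frame_bgr.shape[:2]
    cx, cy = float(center[0]), float(center[1])
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    dist = np.sqrt(dx * dx + dy * dy)
    falloff = np.clip(1.0 - dist / float(radius), 0.0, 1.0)  # 1 at center, 0 outside
    scale = 1.0 + degree * falloff   # sample further out, so content moves inwards
    map_x = cx + dx * scale
    map_y = cy + dy * scale
    return cv2.remap(frame_bgr, map_x, map_y, cv2.INTER_LINEAR)
```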
In the foregoing embodiment, the adjustment options for the target object may further include an adjustment option for performing deformation processing on the target object, and based on the deformation portion and the deformation degree indicated by the trigger operation, the deformation of the deformation portion of the target object may be performed to a corresponding degree, for example, adjustment operations such as face thinning may be performed by deformation, so as to achieve an effect of beautifying the target object in the video picture.
In some embodiments, during the deformation process, the deflection parameter of the portion to be deformed may also be determined based on the key point information of the target object. The deflection parameters include, for example, the deflection angle of the face, or the deflection angle of the limbs, and the like. In the process of carrying out deformation processing on the part to be deformed on the at least one area to be deformed according to the deformation degree, the deformation direction of the at least one area to be deformed can be determined based on the key point information and the deflection parameters, and then the part to be deformed on the at least one area to be deformed is subjected to deformation processing according to the deformation degree and the deformation direction, so that a video picture after deformation processing is obtained. In this way, deflection parameters such as the orientation of the target object can be taken into consideration when the target object is subjected to deformation processing, so that a better deformation effect can be achieved.
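As a hedged illustration of taking the deflection parameter into account, the snippet below derives a deformation axis and per-side weights from an estimated yaw angle and two contour key points; both the weighting heuristic and the parameter names are assumptions, not the method prescribed by the disclosure:

```python
import numpy as np

def deformation_direction(left_cheek, right_cheek, yaw_deg):
    """Derive a deformation axis and per-side weights from the face's
    deflection (yaw). Heuristic: the side turned away from the camera is
    deformed slightly more, the near side slightly less."""
    axis = np.asarray(right_cheek, np.float32) - np.asarray(left_cheek, np.float32)
    axis /= (np.linalg.norm(axis) + 1e-6)   # unit vector across the face
    yaw = np.deg2rad(yaw_deg)
    left_weight = 1.0 - 0.5 * np.sin(yaw)
    right_weight = 1.0 + 0.5 * np.sin(yaw)
    return axis, left_weight, right_weight
```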
In the embodiment of the disclosure, besides adjustment options such as beauty/makeup/special effect sticker/deformation processing and the like, adjustment options of the virtual image can be provided for the target object. The avatar may be, for example, an avatar such as a cartoon character avatar.
For the operation process and the display effect executed by the adjustment option, reference may be made to a specific implementation flow of the display method provided by the present disclosure shown in fig. 8, which includes S801 to S807, where:
S801, displaying an adjusting interface of the video picture, wherein the adjusting interface comprises an adjusting option for adjusting the virtual image of the target object.
S802, detecting the trigger operation of the adjusting option of the virtual image in the adjusting interface.
And S803, responding to the trigger operation, and acquiring the virtual image of the target object indicated by the trigger operation.
And S804, obtaining key point information of the target object.
And S805, acquiring target skeleton parameters matched with the target object according to the key point information, and creating a three-dimensional virtual model corresponding to the target object according to the target skeleton parameters and a preset standard three-dimensional virtual model.
And S806, rendering and generating an avatar of the target object based on the three-dimensional virtual model, and replacing the target object appearing in the video picture with the avatar of the target object.
S807, a video frame including the avatar of the target object is displayed.
For example, in the case that the target object includes a face, the target bone parameters may be bone parameters describing the facial contour and facial details (such as the bridge of the nose, eye sockets, ears, etc.) of the target object, and may include three-dimensional key point parameters of the head, plane feature parameters formed by connecting three-dimensional coordinate points, and the like. The preset standard three-dimensional virtual model can be a three-dimensional virtual model modeled in advance from a standard facial image. The preset standard three-dimensional virtual model is adjusted with the target bone parameters matched with the target object, so that the adjusted three-dimensional virtual model fits the facial contour and facial details of the target object more closely; the adjusted three-dimensional virtual model can be used as the three-dimensional virtual model corresponding to the target object, and an avatar conforming to the facial contour and facial details of the target object can be rendered from it. The avatar can be superimposed on the area where the face of the target object is located by image fusion or layer superposition, so that the original face of the target object is occluded, giving the display special effect of the target object having an avatar. For example, referring to the adjustment interface shown in fig. 9, the original face in the video picture displayed on the left side of the adjustment interface is covered by the selected avatar, so that the user can present the live video with the avatar, and the avatar moves in real time following the user's face.
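The sketch below illustrates the two steps in a heavily simplified form: deforming a preset standard model with bone parameters (assumed here to act as a linear displacement basis) and compositing an off-screen rendered avatar over the face region; all shapes, the linear model, and the mask-based compositing are assumptions for illustration:

```python
import numpy as np

def build_avatar_model(standard_vertices, bone_offsets, bone_weights):
    """Deform a preset standard three-dimensional virtual model with the target
    bone parameters estimated from the key points.

    standard_vertices: (V, 3) vertices of the standard model
    bone_offsets:      (B, V, 3) per-bone vertex displacement basis (assumed linear)
    bone_weights:      (B,) target bone parameters"""
    return standard_vertices + np.tensordot(bone_weights, bone_offsets, axes=1)

def composite_avatar(frame_bgr, avatar_bgra, face_mask):
    """Cover the target object's face region with an avatar image rendered
    off-screen (same size as the frame); face_mask is a float mask in [0, 1]."""
    alpha = (avatar_bgra[..., 3:4].astype(np.float32) / 255.0) * face_mask[..., None]
    out = alpha * avatar_bgra[..., :3] + (1.0 - alpha) * frame_bgr
    return out.astype(np.uint8)
```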
In the embodiment of the disclosure, the adjustment options for the target object may further include an adjustment option for generating an avatar of the target object, and after the avatar indicated by the triggering operation is received, the avatar matched with the target object may be remodeled through key point information of the target object, so that the avatar corresponding to a display area of the target object in a video picture is displayed, interestingness in a live video process is increased, and privacy of the target object can be guaranteed.
In the embodiment of the disclosure, various adjustment options related to the display special effect can be set for the video picture, so that a user can trigger the required adjustment options based on the display requirements in the live broadcasting process, and the setting of the display special effect corresponding to the triggered adjustment options is completed. Specifically, the terminal device can respectively determine the display special effect of the video picture corresponding to each adjustment option by responding to the triggering operation of the user on at least one adjustment option, and then can present the corresponding display special effect in the video picture.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, a display device corresponding to the display method is further provided in the embodiment of the present disclosure, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to the display method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 10, there is shown a schematic structural diagram of a display apparatus according to an embodiment of the present disclosure, the apparatus includes:
a first display module 1001 configured to display an adjustment interface of a video image, where the adjustment interface includes an adjustment option for adjusting a display special effect of the video image;
a detecting module 1002, configured to detect a trigger operation on at least one of the adjustment options in the adjustment interface;
a determining module 1003, configured to determine, in response to the trigger operation, a display special effect of the video picture corresponding to the at least one triggered adjustment option;
a second display module 1004, configured to display the display special effect in the video picture.
In some embodiments, the at least one triggered adjustment option comprises a background beautification option;
the determining module 1003, when determining, in response to the trigger operation, the display special effect of the video picture corresponding to the at least one triggered adjustment option, is specifically configured to:
responding to the trigger operation, and acquiring a scene image of a target theme indicated by the trigger operation;
detecting, from the video picture, a first image area where a target object is located and a second image area other than the first image area;
and replacing the second image area in the video picture by using the scene image of the target theme to obtain the video picture conforming to the target theme.
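As a non-limiting sketch of the background beautification step above, and assuming a segmentation model has already produced a mask of the first image area, the second image area can be replaced by the theme scene with a single masked copy; the names used are illustrative only.

```python
import numpy as np

def replace_background(frame: np.ndarray, scene: np.ndarray, person_mask: np.ndarray) -> np.ndarray:
    """Keep the first image area (target object) and replace the second image area
    (everything else) with the scene image of the target theme.

    frame:       H x W x 3 video picture
    scene:       H x W x 3 scene image of the target theme, already resized to the frame
    person_mask: H x W mask, non-zero where the target object was detected
    """
    mask = person_mask.astype(bool)[..., None]   # H x W x 1, broadcasts over the colour channels
    return np.where(mask, frame, scene)
```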
In some embodiments, the at least one triggered adjustment option comprises an adjustment option for a target object;
the determining module 1003, when determining, in response to the trigger operation, the display special effect of the video picture corresponding to the at least one triggered adjustment option, is specifically configured to:
responding to the trigger operation, acquiring key point information of a target object appearing in the video picture, and acquiring a target material indicated by the trigger operation;
and determining display parameters of the target material in the video picture based on the key point information of the target object, and adding the target material on the video picture by using the display parameters.
In some embodiments, the target object comprises a face, and the target material comprises beauty material and/or makeup material of at least one facial organ;
the determining module 1003, when determining display parameters of the target material in the video picture based on the key point information of the target object and adding the target material on the video picture by using the display parameters, is specifically configured to:
determining a face contour position and/or a position of at least one facial organ based on the positions of the key points of the target object;
determining display parameters of the beauty material based on the face contour position, and/or determining display parameters of the makeup material based on the position of the at least one facial organ; the display parameters comprise a display position and a display size;
And adding the beauty material and/or the makeup material which accord with the display parameters on the video picture.
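As a non-limiting sketch of how display parameters might be derived from key points, the display position can be taken as the key-point centre and the display size as a padded bounding box; the `scale` padding factor is an assumed tuning parameter, not something specified by the disclosure.

```python
import numpy as np

def material_display_params(keypoints, scale: float = 1.2):
    """Derive display parameters (position, size) for a piece of beauty/makeup material
    from the key points of the face contour or of one facial organ.

    keypoints: N x 2 array of (x, y) key-point positions.
    Returns (center_x, center_y, width, height): the material can later be resized to
    (width, height) and drawn centred on (center_x, center_y).
    """
    pts = np.asarray(keypoints, dtype=np.float32)
    cx, cy = pts.mean(axis=0)                        # display position = key-point centre
    w = (pts[:, 0].max() - pts[:, 0].min()) * scale  # display size = padded bounding box
    h = (pts[:, 1].max() - pts[:, 1].min()) * scale
    return int(cx), int(cy), int(w), int(h)
```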
In some embodiments, the target object includes a face and/or a limb, and the target material includes special effect sticker material;
the determining module 1003, when determining display parameters of the target material in the video picture based on the key point information of the target object and adding the target material on the video picture by using the display parameters, is specifically configured to:
identifying an expression category of the face and/or an action category of the limb appearing in the video picture;
and under the condition that the expression category and/or the action category accord with a set triggering display condition, determining the display position of the special effect sticker material based on the positions of the key points of the target object, and adding the sticker material to the display position of the video picture, wherein the display position of the sticker material changes along with the change of the positions of the key points of the target object.
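As a non-limiting sketch, the trigger display condition and the key-point-following display position could be combined as below; the category names and the anchoring rule (placing the sticker above the highest key point) are illustrative assumptions.

```python
import numpy as np

# Hypothetical trigger table: which recognised category enables which sticker.
TRIGGER_CONDITIONS = {"smile": "rainbow_sticker", "hands_heart": "heart_sticker"}

def sticker_placement(category: str, keypoints: np.ndarray):
    """Decide whether a special effect sticker should be shown for this frame and where.

    category:  expression/action category recognised in the current video picture
    keypoints: N x 2 key points of the target object in the current frame
    Returns (sticker_id, (x, y)) or None when the trigger display condition is not met.
    The position is recomputed every frame, so the sticker follows the key points.
    """
    sticker_id = TRIGGER_CONDITIONS.get(category)
    if sticker_id is None:
        return None
    top = keypoints[np.argmin(keypoints[:, 1])]      # highest key point of the target object
    return sticker_id, (int(top[0]), int(top[1]) - 40)
```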
In some embodiments, the at least one triggered adjustment option comprises an adjustment option for a target object;
the determining module 1003, when determining, in response to the trigger operation, the display special effect of the video picture corresponding to the at least one triggered adjustment option, is specifically configured to:
responding to the trigger operation, and acquiring a part to be deformed and a deformation degree of the target object indicated by the trigger operation; acquiring key point information of a target object appearing in the video picture;
determining a to-be-deformed area corresponding to at least one to-be-deformed part in the video picture based on the key point information of the target object;
and according to the deformation degree, performing deformation processing on the to-be-deformed part in the at least one to-be-deformed area to obtain a video picture after deformation processing.
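As a non-limiting sketch, the to-be-deformed area for a selected part could be derived as the padded bounding box of that part's key points; the key-point index layout shown is a hypothetical example and depends on the landmark model actually used.

```python
import numpy as np

# Hypothetical mapping from a selectable "part to be deformed" to the indices of the
# key points that outline it; the indices depend on the face/body key-point model used.
PART_KEYPOINT_INDICES = {"face_contour": list(range(0, 17)), "left_cheek": [2, 3, 4, 5]}

def to_be_deformed_area(part: str, keypoints: np.ndarray, margin: float = 0.15):
    """Return the rectangular to-be-deformed area (x0, y0, x1, y1) for one part,
    computed as the padded bounding box of that part's key points."""
    pts = keypoints[PART_KEYPOINT_INDICES[part]]
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    pad_x, pad_y = (x1 - x0) * margin, (y1 - y0) * margin
    return int(x0 - pad_x), int(y0 - pad_y), int(x1 + pad_x), int(y1 + pad_y)
```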
In some embodiments, the determining module 1003 is further configured to:
determining deflection parameters of the to-be-deformed part based on the key point information of the target object;
the determining module, when performing deformation processing on the to-be-deformed part in the at least one to-be-deformed area according to the deformation degree to obtain a video picture after deformation processing, is specifically configured to:
determining a deformation direction of the at least one to-be-deformed area based on the key point information and the deflection parameters;
and according to the deformation degree and the deformation direction, performing deformation processing on the to-be-deformed part in the at least one to-be-deformed area to obtain the video picture after the deformation processing.
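As a non-limiting sketch, one generic way to apply a deformation with a given degree and direction is a backward-mapping warp (here via OpenCV's remap), where the displacement fades from the centre of the to-be-deformed area to its border; the falloff and strength formulas are illustrative assumptions, not the specific deformation algorithm of the disclosure. The direction vector could, for example, be derived from the face's yaw deflection.

```python
import cv2
import numpy as np

def deform_area(frame, area, direction, degree):
    """Warp the to-be-deformed area of the video picture.

    area:      (x0, y0, x1, y1) to-be-deformed area
    direction: unit 2-D vector (dx, dy), e.g. derived from the deflection parameters
    degree:    deformation degree in [0, 1] chosen on the adjustment interface
    Pixels near the area centre are pulled along `direction`; the effect fades towards
    the area border so the warp blends into the rest of the frame.
    """
    x0, y0, x1, y1 = area
    h, w = frame.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))

    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    rx, ry = max((x1 - x0) / 2.0, 1.0), max((y1 - y0) / 2.0, 1.0)
    # Normalised distance from the area centre; weight is 1 at the centre, 0 outside the area.
    dist = np.sqrt(((grid_x - cx) / rx) ** 2 + ((grid_y - cy) / ry) ** 2)
    weight = np.clip(1.0 - dist, 0.0, 1.0)

    strength = degree * 0.3 * min(rx, ry)            # maximum pixel displacement (assumed scale)
    map_x = (grid_x - direction[0] * strength * weight).astype(np.float32)
    map_y = (grid_y - direction[1] * strength * weight).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```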
In some embodiments, the at least one triggered adjustment option comprises an adjustment option for a target object;
the determining module 1003, when determining, in response to the trigger operation, the display special effect of the video picture corresponding to the at least one triggered adjustment option, is specifically configured to:
responding to the trigger operation, and acquiring an avatar of the target object indicated by the trigger operation; acquiring key point information of the target object;
acquiring target bone parameters matched with the target object according to the key point information;
creating a three-dimensional virtual model corresponding to the target object according to the target bone parameters and a preset standard three-dimensional virtual model;
rendering and generating an avatar of the target object based on the three-dimensional virtual model, and replacing the target object appearing in the video picture with the avatar of the target object.
Based on the same technical concept, an embodiment of the present disclosure further provides a computer device. Referring to fig. 11, a schematic structural diagram of a computer device 1100 provided in an embodiment of the present disclosure is shown; the computer device includes a processor 1101, a memory 1102, and a bus 1103. When the computer device 1100 is running, the processor 1101 communicates with the memory 1102 via the bus 1103, so that the processor 1101 executes the following instructions:
displaying an adjustment interface of a video picture, wherein the adjustment interface comprises an adjustment option for adjusting a display special effect of the video picture;
detecting a trigger operation on at least one adjustment option in the adjustment interface;
in response to the trigger operation, determining a display special effect of the video picture corresponding to the at least one triggered adjustment option;
and displaying the display special effect in the video picture.
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the display method described in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the display method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where instructions included in the program code may be used to execute the steps of the display method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK) or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that, within the technical scope disclosed herein, any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by the protection scope thereof. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A display method, the method comprising:
displaying an adjustment interface of a video picture, wherein the adjustment interface comprises an adjustment option for adjusting a display special effect of the video picture;
detecting a trigger operation on at least one adjustment option in the adjustment interface;
in response to the trigger operation, determining a display special effect of the video picture corresponding to the at least one triggered adjustment option;
and displaying the display special effect in the video picture.
2. The method of claim 1, wherein the at least one triggered adjustment option comprises a background beautification option;
the determining, in response to the trigger operation, the display special effect of the video picture corresponding to the at least one triggered adjustment option includes:
responding to the trigger operation, and acquiring a scene image of a target theme indicated by the trigger operation;
detecting, from the video picture, a first image area where a target object is located and a second image area other than the first image area;
and replacing the second image area in the video picture by using the scene image of the target theme to obtain the video picture conforming to the target theme.
3. The method of claim 1, wherein the at least one triggered adjustment option comprises an adjustment option for a target object;
the determining, in response to the trigger operation, the display special effect of the video picture corresponding to the at least one triggered adjustment option includes:
responding to the trigger operation, acquiring key point information of a target object appearing in the video picture, and acquiring a target material indicated by the trigger operation;
and determining display parameters of the target material in the video picture based on the key point information of the target object, and adding the target material on the video picture by using the display parameters.
4. The method of claim 3, wherein the target object comprises a face, and the target material comprises beauty material and/or makeup material of at least one facial organ;
the determining, based on the key point information of the target object, display parameters of the target material in the video picture, and adding the target material on the video picture by using the display parameters includes:
determining a face contour position and/or a position of at least one facial organ based on the positions of the key points of the target object;
determining display parameters of the beauty material based on the face contour position, and/or determining display parameters of the makeup material based on the position of the at least one facial organ; the display parameters comprise a display position and a display size;
and adding the beauty material and/or the makeup material which accord with the display parameters on the video picture.
5. The method of claim 3, wherein the target object comprises a face and/or a limb, and the target material comprises special effect sticker material;
the determining, based on the key point information of the target object, display parameters of the target material in the video picture, and adding the target material on the video picture by using the display parameters includes:
identifying an expression category of the face and/or an action category of the limb appearing in the video picture;
and under the condition that the expression category and/or the action category accord with a set triggering display condition, determining the display position of the special effect sticker material based on the positions of the key points of the target object, and adding the sticker material to the display position of the video picture, wherein the display position of the sticker material changes along with the change of the positions of the key points of the target object.
6. The method of claim 1, wherein the at least one triggered adjustment option comprises an adjustment option for a target object;
the determining, in response to the trigger operation, the display special effect of the video picture corresponding to the at least one triggered adjustment option includes:
responding to the trigger operation, and acquiring a part to be deformed and a deformation degree of the target object indicated by the trigger operation; acquiring key point information of a target object appearing in the video picture;
determining a to-be-deformed area corresponding to at least one to-be-deformed part in the video picture based on the key point information of the target object;
and according to the deformation degree, carrying out deformation processing on the part to be deformed in the at least one area to be deformed to obtain a video picture after deformation processing.
7. The method of claim 6, further comprising:
determining deflection parameters of the part to be deformed based on the key point information of the target object;
the carrying out, according to the deformation degree, deformation processing on the part to be deformed in the at least one area to be deformed to obtain a video picture after the deformation processing comprises:
determining a deformation direction of the at least one area to be deformed based on the key point information and the deflection parameters;
and according to the deformation degree and the deformation direction, carrying out deformation processing on the part to be deformed in the at least one area to be deformed to obtain the video picture after the deformation processing.
8. The method of claim 1, wherein the at least one triggered adjustment option comprises an adjustment option for a target object;
the determining, in response to the trigger operation, the display special effect of the video picture corresponding to the at least one triggered adjustment option includes:
responding to the trigger operation, and acquiring an avatar of the target object indicated by the trigger operation; acquiring key point information of the target object;
acquiring target bone parameters matched with the target object according to the key point information;
creating a three-dimensional virtual model corresponding to the target object according to the target bone parameters and a preset standard three-dimensional virtual model;
rendering and generating an avatar of the target object based on the three-dimensional virtual model, and replacing the target object appearing in the video picture with the avatar of the target object.
9. A display device, the device comprising:
a first display module, configured to display an adjustment interface of a video picture, wherein the adjustment interface comprises an adjustment option for adjusting a display special effect of the video picture;
a detection module, configured to detect a trigger operation on at least one adjustment option in the adjustment interface;
a determining module, configured to determine, in response to the trigger operation, a display special effect of the video picture corresponding to the at least one triggered adjustment option;
and a second display module, configured to display the display special effect in the video picture.
10. A computer device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor communicates with the memory via the bus when the computer device is running, and the machine-readable instructions, when executed by the processor, perform the steps of the display method as claimed in any one of claims 1 to 8.
11. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the display method as claimed in any one of claims 1 to 8.
CN202010758534.6A 2020-07-31 2020-07-31 Display method and device, computer equipment and storage medium Pending CN111880709A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010758534.6A CN111880709A (en) 2020-07-31 2020-07-31 Display method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111880709A true CN111880709A (en) 2020-11-03

Family

ID=73204886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010758534.6A Pending CN111880709A (en) 2020-07-31 2020-07-31 Display method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111880709A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130129145A1 (en) * 2011-11-22 2013-05-23 Cywee Group Limited Orientation correction method for electronic device used to perform facial recognition and electronic device thereof
CN108124109A (en) * 2017-11-22 2018-06-05 上海掌门科技有限公司 A kind of method for processing video frequency, equipment and computer readable storage medium
CN107948667A (en) * 2017-12-05 2018-04-20 广州酷狗计算机科技有限公司 The method and apparatus that special display effect is added in live video
CN108171716A (en) * 2017-12-25 2018-06-15 北京奇虎科技有限公司 Video personage based on the segmentation of adaptive tracing frame dresss up method and device
CN109788190A (en) * 2018-12-10 2019-05-21 北京奇艺世纪科技有限公司 A kind of image processing method, device, mobile terminal and storage medium
CN110766777A (en) * 2019-10-31 2020-02-07 北京字节跳动网络技术有限公司 Virtual image generation method and device, electronic equipment and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112929683A (en) * 2021-01-21 2021-06-08 广州虎牙科技有限公司 Video processing method and device, electronic equipment and storage medium
CN113132641A (en) * 2021-04-23 2021-07-16 北京达佳互联信息技术有限公司 Shooting control method and device, electronic equipment and storage medium
CN113240777A (en) * 2021-04-25 2021-08-10 北京达佳互联信息技术有限公司 Special effect material processing method and device, electronic equipment and storage medium
WO2023284410A1 (en) * 2021-07-15 2023-01-19 北京字跳网络技术有限公司 Method and apparatus for adding video effect, and device and storage medium
CN115909825A (en) * 2021-08-12 2023-04-04 广州视源电子科技股份有限公司 System, method and teaching end for realizing remote education
CN113850746A (en) * 2021-09-29 2021-12-28 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114339393A (en) * 2021-11-17 2022-04-12 广州方硅信息技术有限公司 Display processing method, server, device, system and medium for live broadcast picture

Similar Documents

Publication Publication Date Title
CN111880709A (en) Display method and device, computer equipment and storage medium
US11727596B1 (en) Controllable video characters with natural motions extracted from real-world videos
CN108305312B (en) Method and device for generating 3D virtual image
CN110363867B (en) Virtual decorating system, method, device and medium
US9030486B2 (en) System and method for low bandwidth image transmission
CN112150638A (en) Virtual object image synthesis method and device, electronic equipment and storage medium
CN113287118A (en) System and method for face reproduction
CN111640197A (en) Augmented reality AR special effect control method, device and equipment
CN107705240B (en) Virtual makeup trial method and device and electronic equipment
CN113298858A (en) Method, device, terminal and storage medium for generating action of virtual image
CN111667588A (en) Person image processing method, person image processing device, AR device and storage medium
CN110928411B (en) AR-based interaction method and device, storage medium and electronic equipment
CN114821675B (en) Object processing method and system and processor
KR20200000106A (en) Method and apparatus for reconstructing three dimensional model of object
CN111640192A (en) Scene image processing method and device, AR device and storage medium
CN111639613B (en) Augmented reality AR special effect generation method and device and electronic equipment
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
KR20190015332A (en) Devices affecting virtual objects in Augmented Reality
CN113657357A (en) Image processing method, image processing device, electronic equipment and storage medium
CN108537162A (en) The determination method and apparatus of human body attitude
CN111028318A (en) Virtual face synthesis method, system, device and storage medium
CN111105489A (en) Data synthesis method and apparatus, storage medium, and electronic apparatus
CN111586428A (en) Cosmetic live broadcast system and method with virtual character makeup function
CN111383313A (en) Virtual model rendering method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20201103