CN117425050A - Video-based interaction method, device, equipment and storage medium - Google Patents

Video-based interaction method, device, equipment and storage medium

Info

Publication number
CN117425050A
CN117425050A CN202311415712.5A
Authority
CN
China
Prior art keywords
head portrait
effect
target
page
generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311415712.5A
Other languages
Chinese (zh)
Inventor
张艺萌
李沛沣
汤波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202311415712.5A priority Critical patent/CN117425050A/en
Publication of CN117425050A publication Critical patent/CN117425050A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a video-based interaction method, apparatus, device, and storage medium. The method includes: first, displaying a preset head portrait generation entry on a play page of a first target video, where the first target video is used for displaying a first artificial intelligence (AI) effect head portrait; then, displaying a head portrait generation page in response to a triggering operation on the preset head portrait generation entry; and, in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for a target image material and a target AI effect resource, obtaining a second AI effect head portrait based on the target image material and the target AI effect resource, where the second AI effect head portrait is used for setting a user head portrait. In this way, the embodiments of the present disclosure can support triggering generation of an AI effect head portrait through a preset head portrait generation entry displayed on a video play page, which enriches video-related interactive functions and improves user experience.

Description

Video-based interaction method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing, and in particular, to a video-based interaction method, apparatus, device, and storage medium.
Background
With the continuous development of computer technology, video-related interactive functions keep emerging, and users' demands for such functions are becoming increasingly diversified.
To meet users' increasingly diversified demands for video-related interactive functions, how to enrich these functions is a technical problem that currently needs to be solved.
Disclosure of Invention
In order to solve the technical problems, an embodiment of the present disclosure provides a video-based interaction method.
In a first aspect, the present disclosure provides a video-based interaction method, the method comprising:
displaying a preset head portrait generation entry on a playing page of a first target video; the first target video is used for displaying a first artificial intelligence (AI) effect head portrait;
displaying a head portrait generation page in response to a triggering operation on the preset head portrait generation entry;
in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for a target image material and a target AI effect resource, obtaining a second AI effect head portrait based on the target image material and the target AI effect resource; the second AI effect head portrait is used for setting a user head portrait.
In an optional embodiment, the avatar generation page is provided with an avatar setting control, and the method further includes:
and setting the second AI effect head portrait as a user head portrait of a target user in response to a trigger operation of the head portrait setting control on the head portrait generation page.
In an optional implementation manner, the head portrait generation page is provided with a material uploading control, and after the head portrait generation page is displayed in response to a triggering operation of the preset head portrait generation entry, the method further includes:
responding to the triggering operation of the material uploading control on the head portrait generation page, and displaying a user material page;
in response to a selected operation for any image material on the user material page, the image material is determined to be a target image material.
In an optional embodiment, the avatar generation page is provided with a video generation control, and the method further includes:
generating a second target video based on the second AI-effect avatar in response to a trigger operation for the video generation control on the avatar generation page; wherein the second target video is used for displaying the second AI effect head portrait.
In an alternative embodiment, the obtaining a second AI effect head portrait based on the target image material and the target AI effect resource in response to a head portrait generation operation triggered on the head portrait generation page for the target image material and the target AI effect resource includes:
in response to the head portrait generation operation acting on the head portrait generation page that is triggered for the target image material and the target AI effect resource, switching the display from the target image material to a second AI effect head portrait according to a preset switching effect; the second AI effect head portrait is obtained based on the target image material and the target AI effect resource.
In an alternative embodiment, the first target video belongs to a gyroscope video, and the first target video includes a plurality of layers, where the plurality of layers include the layer where the first AI effect head portrait is located, and the method further includes:
acquiring a deflection angle corresponding to a playing page of the first target video;
and controlling, according to the deflection angle, the first AI effect head portrait in the first target video to shift within a shift range corresponding to the layer.
In an alternative embodiment, the method further comprises:
displaying a target page; the target page is provided with a current user head portrait of a target user, the current user head portrait is provided with AI effect prompt information, and the AI effect prompt information is used for prompting to trigger generation of the AI effect head portrait.
In an alternative embodiment, the method further comprises:
in response to an AI effect head portrait generation trigger operation acting on the target page, obtaining a third AI effect head portrait based on the current user head portrait of the target user; the third AI effect head portrait is used for setting the user head portrait of the target user upon triggering.
In a second aspect, the present disclosure provides a video-based interaction device, the device comprising:
the first display module is used for displaying a preset head portrait generation entry on a playing page of the first target video; the first target video is used for displaying a first artificial intelligence (AI) effect head portrait;
the second display module is used for displaying a head portrait generation page in response to a triggering operation on the preset head portrait generation entry;
a first obtaining module, configured to obtain a second AI effect head portrait based on a target image material and a target AI effect resource in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for the target image material and the target AI effect resource; the second AI effect head portrait is used for setting a user head portrait.
In a third aspect, the present disclosure provides a computer readable storage medium having instructions stored therein, which when run on a terminal device, cause the terminal device to implement the above-described method.
In a fourth aspect, the present disclosure provides a video-based interactive apparatus comprising: the computer program comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the method when executing the computer program.
In a fifth aspect, the present disclosure provides a computer program product comprising computer programs/instructions which when executed by a processor implement the above-described method.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has at least the following advantages:
The embodiments of the present disclosure provide a video-based interaction method. First, a preset head portrait generation entry is displayed on a play page of a first target video, where the first target video is used for displaying a first artificial intelligence (AI) effect head portrait. A head portrait generation page is displayed in response to a triggering operation on the preset head portrait generation entry, and a second AI effect head portrait is obtained based on a target image material and a target AI effect resource in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for the target image material and the target AI effect resource, where the second AI effect head portrait is used for setting a user head portrait. In this way, the embodiments of the present disclosure can support triggering generation of an AI effect head portrait through a preset head portrait generation entry displayed on a video play page, which enriches video-related interactive functions and improves user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a video-based interaction method provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a playing page of a first target video according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of an avatar generation page provided in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another avatar generation page provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of another avatar generation page provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another avatar generation page provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of another avatar generation page provided by an embodiment of the present disclosure;
FIG. 8 is a schematic illustration of a user's personal home page provided in an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a video-based interaction device according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an interactive device based on video according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
With the continuous development of computer technology, video-related interactive functions keep emerging, and users' demands for such functions are becoming increasingly diversified.
To meet users' increasingly diversified demands for video-related interactive functions, how to enrich these functions is a technical problem that currently needs to be solved.
To this end, the embodiments of the present disclosure provide a video-based interaction method. First, a preset head portrait generation entry is displayed on a play page of a first target video, where the first target video is used for displaying a first artificial intelligence (AI) effect head portrait. A head portrait generation page is displayed in response to a triggering operation on the preset head portrait generation entry, and a second AI effect head portrait is obtained based on a target image material and a target AI effect resource in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for the target image material and the target AI effect resource, where the second AI effect head portrait is used for setting a user head portrait. In this way, the embodiments of the present disclosure can support triggering generation of an AI effect head portrait through a preset head portrait generation entry displayed on a video play page, which enriches video-related interactive functions and improves user experience.
Based on this, an embodiment of the present disclosure provides a video-based interaction method. Referring to fig. 1, which is a flowchart of the video-based interaction method provided by an embodiment of the present disclosure, the method specifically includes:
S101: displaying a preset head portrait generation entry on a playing page of the first target video.
The first target video is used for displaying a first artificial intelligence AI effect head portrait.
The video-based interaction method provided by the embodiments of the present disclosure can be applied to a client; for example, the client can be deployed on a mobile terminal such as a smartphone, on a personal computer (PC), and the like.
In the embodiments of the present disclosure, the first target video displays a first artificial intelligence (AI) effect head portrait; that is, the first target video is a video related to the first AI effect head portrait. Specifically, a video creation user may create and publish a video containing the first AI effect head portrait.
The playing page of the first target video can be used for playing the first target video based on a media information stream, and supports a function of switching the playing video based on the media information stream, where the media information stream can include a recommended video stream and the like.
A preset head portrait generation entry is displayed on the playing page of the first target video, and a page related to AI effect head portrait generation can be triggered and displayed by triggering the preset head portrait generation entry.
In the embodiment of the disclosure, the first artificial intelligence AI effect head portrait refers to a head portrait with an AI effect, for example, a head portrait generated based on any one or more AI effect resources.
In the embodiment of the present disclosure, a preset avatar generation entry is displayed on a play page of a first target video, where the preset avatar generation entry may be displayed in any area on the play page of the first target video. The display style of the preset avatar generation portal may be an anchor point form, a preset control form, etc. on the playing page, which is not limited in the embodiments of the present disclosure.
Fig. 2 is a schematic diagram of a playing page of a first target video according to an embodiment of the present disclosure, where the first target video is being displayed on the playing page, and the first target video may be in a playing state, a pause playing state, or the like. The first target video is a video related to the first artificial intelligence AI effect head portrait, i.e. the first target video displays the first AI effect head portrait. In addition, a preset avatar generation entry 201 is displayed on the play page of the first target video.
S102: displaying a head portrait generation page in response to the triggering operation on the preset head portrait generation entry.
The triggering operation on the preset head portrait generation entry may include an interaction operation, such as a single-click operation, a double-click operation, or a long-press operation, triggered on the preset head portrait generation entry on the playing page of the first target video, and is used to trigger switching the display from the playing page of the first target video to the head portrait generation page.
In the embodiments of the present disclosure, after the display is switched from the playing page of the first target video to the head portrait generation page, the target image material and the target AI effect resource are determined based on the head portrait generation page.
In practical application, the target image material used for generating the second AI effect head portrait is user material authorized by the user, namely, the generation of the second AI effect head portrait based on the target image material and the target AI effect resource can be triggered on the premise of user authorization.
In an alternative embodiment, before displaying the head portrait generation page, an authorization popup may be displayed, after obtaining the user authorization based on the authorization popup, the head portrait generation page is displayed, and then a subsequent head portrait generation operation is triggered based on the head portrait generation page.
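A minimal TypeScript sketch of how a client might wire the flow of S101 and S102: tapping the preset head portrait generation entry optionally shows an authorization popup and then switches to the head portrait generation page. All names here (showAuthorizationPopup, navigateTo, PageNavigator) are illustrative assumptions, not part of the disclosure:

```typescript
// Hypothetical client-side wiring for S101–S102 (names are illustrative).
interface PageNavigator {
  navigateTo(page: "avatarGenerationPage" | "playPage"): void;
}

// Assumed helper: resolves true if the user grants authorization in a popup.
async function showAuthorizationPopup(): Promise<boolean> {
  // In a real client this would render a dialog; here we simply grant.
  return true;
}

// Called when the preset head portrait generation entry on the play page is tapped.
async function onAvatarEntryTriggered(nav: PageNavigator): Promise<void> {
  const authorized = await showAuthorizationPopup(); // optional step per the embodiment above
  if (!authorized) {
    return; // stay on the play page if the user declines
  }
  nav.navigateTo("avatarGenerationPage"); // switch display to the head portrait generation page
}
```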
S103: in response to an avatar generation operation acting on the avatar generation page that is triggered for a target image material and a target AI-effect resource, a second AI-effect avatar is obtained based on the target image material and the target AI-effect resource.
The second AI effect head portrait is used for setting a user head portrait.
In the embodiment of the disclosure, the target image material may include a personal head portrait set by the current user, a picture material in the album of the current user, and a picture material obtained by the current user based on a photographing page.
The head portrait generation page includes the target image material and AI effect resources. The target AI effect resource can be any one of the AI effect resources; specifically, it can be the AI effect resource arranged at the first position among the AI effect resources, or it can be determined based on a selection operation of the user.
When the target AI effect resource is determined based on the selection operation of the user, switching of the target AI effect resource can also be realized through the selection operation.
In an alternative embodiment, when a trigger operation for the preset head portrait generation entry is received, the head portrait generation page is displayed; when an upload operation for the target image material and a selection operation for the target AI effect resource on the head portrait generation page are received, execution progress information of the head portrait generation operation is displayed on the head portrait generation page.
For example, fig. 3 is a schematic diagram of a head portrait generation page according to an embodiment of the present disclosure. The head portrait generation page displays an AI effect resource in a selected state, i.e., the target AI effect resource 301, as well as execution progress information, as shown at 302. After the head portrait generation operation is triggered, the effect of switching the display from the target image material to the second AI effect head portrait is shown on the head portrait generation page.
In practical application, in order to demonstrate the effect contrast before and after the head portrait generation to further promote user experience, in the embodiment of the present disclosure, the target image material is switched and displayed as the second AI effect head portrait based on the preset switching effect, where the preset switching effect may be set based on the requirement.
Specifically, after the generation of the AI avatar is completed based on the target image material and the target AI effect resource, the target image material may be switched and displayed as the generated second AI effect avatar according to the preset switching effect, where the second AI effect avatar is generated based on the target image material and the target AI effect resource.
As shown in fig. 4, which is a schematic diagram of another head portrait generation page provided in an embodiment of the present disclosure, the head portrait generation page displays the process of switching the display from the target image material to the second AI effect head portrait: the second AI effect head portrait 402 gradually replaces the target image material 401 until the complete second AI effect head portrait is displayed.
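One way the preset switching effect in fig. 4 could be realized is a timed crossfade in which the generated head portrait gradually replaces the target image material. The sketch below is only an assumed implementation; the ImageLayer type, its opacity field, and the frame-timing strategy are illustrative:

```typescript
// Hypothetical layer model: two stacked images whose opacity we animate.
interface ImageLayer {
  url: string;
  opacity: number; // 0..1
}

// Crossfade the target image material (below) into the second AI effect head portrait (above).
function crossfade(
  material: ImageLayer,
  aiAvatar: ImageLayer,
  durationMs: number,
  onFrame: (material: ImageLayer, aiAvatar: ImageLayer) => void,
): void {
  const start = Date.now();
  const tick = () => {
    const t = Math.min((Date.now() - start) / durationMs, 1);
    aiAvatar.opacity = t;     // generated head portrait fades in
    material.opacity = 1 - t; // target image material fades out
    onFrame(material, aiAvatar);
    if (t < 1) setTimeout(tick, 16); // ~60 fps until the full AI head portrait is shown
  };
  tick();
}
```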
In addition, the processing procedure of generating the second AI effect head portrait based on the target image material and the target AI effect resource may be implemented at the client or at the server, which is not limited here.
In an alternative embodiment, when an avatar generation operation triggered by the target image material and the target AI effect resource acting on the avatar generation page is received, a process of generating the second AI effect avatar based on the target image material and the target AI effect resource may be implemented at the client, for example, may be implemented by using an AI avatar generation module deployed by the client.
In another alternative embodiment, when receiving an avatar generation operation triggered by a target image material and a target AI effect resource acting on an avatar generation page, the client sends the target image material and the target AI effect resource to the target server, the target server generates a second AI effect avatar based on the target image material and the target AI effect resource, and the client acquires the second AI effect avatar from the target server.
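A hedged sketch of the server-assisted variant: the client uploads the target image material together with an identifier of the selected AI effect resource and receives the generated head portrait back. The endpoint path, field names, and response shape are assumptions for illustration only:

```typescript
// Hypothetical request to a generation service (endpoint and fields are illustrative).
interface GenerateAvatarResponse {
  avatarUrl: string; // URL of the second AI effect head portrait
}

async function generateAvatarOnServer(
  imageMaterial: Blob,
  effectResourceId: string,
): Promise<GenerateAvatarResponse> {
  const form = new FormData();
  form.append("image", imageMaterial);       // target image material
  form.append("effectId", effectResourceId); // target AI effect resource
  const resp = await fetch("/api/ai-avatar/generate", { method: "POST", body: form });
  if (!resp.ok) throw new Error(`generation failed: ${resp.status}`);
  return (await resp.json()) as GenerateAvatarResponse;
}
```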
In practical application, in order to support setting the generated AI effect head portrait, namely the second AI effect head portrait as the user head portrait of the target user so as to meet the user requirement and promote the user experience, in the embodiment of the disclosure, a head portrait setting control is set on the head portrait generation page, and the setting of the second AI effect head portrait as the user head portrait can be triggered through the head portrait setting control.
In an alternative embodiment, when a trigger operation for the avatar setting control on the avatar generation page is received, the second AI-effect avatar displayed on the avatar generation page is set as the user avatar of the target user.
In addition, in order to remind the user that the head portrait setting operation is completed, a notification message of successful head portrait setting can be displayed on the head portrait generation page, so that user experience is further improved.
On the basis of the above embodiment, a head portrait save control can be further provided on the head portrait generation page, and saving of the second AI effect head portrait can be triggered through the head portrait save control.
For example, as shown in fig. 5, which is a schematic diagram of a head portrait generation page provided in an embodiment of the present disclosure, the head portrait generation page displays a second AI effect head portrait 501, a head portrait setting control 502, and a head portrait save control 503. When a trigger operation for the head portrait setting control on the head portrait generation page is received, the second AI effect head portrait displayed on the head portrait generation page is set as the user head portrait of the target user, and at this time a head portrait setting success notification message 504 is displayed on the head portrait generation page.
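A small sketch, under assumed client APIs, of how the head portrait setting control and head portrait save control in fig. 5 could be handled: setting the generated head portrait as the user head portrait shows a success notification, and the save control stores it locally. ProfileService, Toast, and saveToAlbum are hypothetical names:

```typescript
// Hypothetical services; a real client would provide concrete implementations.
interface ProfileService {
  setUserAvatar(userId: string, avatarUrl: string): Promise<void>;
}
interface Toast {
  show(message: string): void;
}

// Head portrait setting control (502): set the second AI effect head portrait as the user head portrait.
async function onSetAvatarClicked(
  profile: ProfileService,
  toast: Toast,
  userId: string,
  secondAiAvatarUrl: string,
): Promise<void> {
  await profile.setUserAvatar(userId, secondAiAvatarUrl);
  toast.show("Avatar set successfully"); // notification message 504
}

// Head portrait save control (503): persist the generated head portrait, e.g. to the local album.
async function onSaveAvatarClicked(
  saveToAlbum: (url: string) => Promise<void>,
  secondAiAvatarUrl: string,
): Promise<void> {
  await saveToAlbum(secondAiAvatarUrl);
}
```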
In the video-based interaction method provided by the embodiments of the present disclosure, first, a preset head portrait generation entry is displayed on a play page of a first target video, where the first target video is used for displaying a first artificial intelligence (AI) effect head portrait. A head portrait generation page is displayed in response to a triggering operation on the preset head portrait generation entry, and a second AI effect head portrait is obtained based on a target image material and a target AI effect resource in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for the target image material and the target AI effect resource, where the second AI effect head portrait is used for setting a user head portrait. In this way, the embodiments of the present disclosure can support triggering generation of an AI effect head portrait through a preset head portrait generation entry displayed on a video play page, which enriches video-related interactive functions and improves user experience.
In an alternative embodiment, the first target video may be a gyroscope video, where the first target video includes a plurality of layers, and the plurality of layers include the layer where the first AI effect head portrait is located.
In the process of playing the first target video, a deflection angle corresponding to a playing page of the first target video can be obtained through a gyroscope in the device, and then, the first AI effect head portrait in the first target video is controlled to deflect in a deflection range corresponding to a layer where the first AI effect head portrait is located according to the deflection angle.
If the device adjusts the display angle, the gyroscope in the device can detect the change of the deflection angle corresponding to the playing page of the first target video, and then the first AI effect head portrait in the first target video is controlled to deflect in the deflection range corresponding to the layer according to the change of the deflection angle, so that the video playing experience of three-dimensional stereoscopic impression is brought to the user.
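The gyroscope-driven parallax can be approximated by mapping the deflection angle of the play page to a horizontal and vertical offset of the layer containing the first AI effect head portrait, clamped to that layer's allowed offset range. The mapping factor and clamp bounds below are illustrative assumptions:

```typescript
// Clamp a value into [min, max].
const clamp = (v: number, min: number, max: number) => Math.max(min, Math.min(max, v));

interface LayerOffsetRange {
  maxOffsetX: number; // maximum horizontal shift of the avatar layer, in pixels
  maxOffsetY: number; // maximum vertical shift of the avatar layer, in pixels
}

// Map device deflection angles (degrees) to an offset of the first AI effect head portrait layer.
function deflectionToLayerOffset(
  yawDeg: number,
  pitchDeg: number,
  range: LayerOffsetRange,
  pixelsPerDegree = 2, // assumed sensitivity
): { x: number; y: number } {
  return {
    x: clamp(yawDeg * pixelsPerDegree, -range.maxOffsetX, range.maxOffsetX),
    y: clamp(pitchDeg * pixelsPerDegree, -range.maxOffsetY, range.maxOffsetY),
  };
}
```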
In practical application, in order to support the user to re-upload the target image material, in the embodiment of the present disclosure, after the head portrait generation page is displayed when receiving the triggering operation for the preset head portrait generation entry, a material uploading control may be further set on the head portrait generation page, and the re-determination of the target image material may be triggered by the material uploading control.
Specifically, when a trigger operation for a material uploading control on the head portrait generation page is received, a user material page is displayed, and when a selection operation for any image material on the user material page is received, the image material is determined to be a target image material.
Before the user triggers the head portrait generation operation, the material upload control can be triggered to upload the target image material again; after the user triggers the head portrait generation operation, the material upload control can also be triggered to upload the target image material again, and the AI effect head portrait is then regenerated based on the re-uploaded target image material.
As shown in fig. 5, the avatar generation page has a material upload control 505 displayed thereon, and when a trigger operation for the material upload control 505 is received, a user material page is displayed, and when a selection operation for any image material on the user material page is received, the image material is determined as a target image material. The user material page is used for displaying all materials in the user album.
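A sketch, with assumed types, of how the material upload control (505) might open the user material page and let any selected image become the new target image material, whether or not a generation has already run:

```typescript
// Hypothetical user material page: lists album images and resolves with the chosen one.
interface UserMaterialPage {
  pickImage(): Promise<{ id: string; url: string } | null>; // null if the user cancels
}

interface AvatarGenerationState {
  targetImageMaterial: { id: string; url: string } | null;
}

// Material upload control: re-select the target image material from the user's album.
async function onUploadMaterialClicked(
  page: UserMaterialPage,
  state: AvatarGenerationState,
): Promise<void> {
  const picked = await page.pickImage();
  if (picked) {
    state.targetImageMaterial = picked; // the selected image becomes the target image material
  }
}
```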
In practical application, to avoid situations in which no target image material has been uploaded on the head portrait generation page or the target image material does not contain a portrait, in the embodiments of the present disclosure, before the head portrait generation operation is triggered, operations such as image recognition are performed on the target image material, and a notification message is displayed when no target image material has been uploaded or the target image material does not contain a portrait. An image recognition module may be deployed for the image recognition operation based on requirements, which is not limited here.
In an optional implementation manner, after the head portrait generation page is displayed, image recognition is performed on the target image material, if the recognition result includes a result that no portrait is recognized, a notification message for uploading the portrait picture is displayed on the head portrait generation page, and the target image material can be uploaded again by triggering a material uploading control on the head portrait generation page. The notification message of uploading the portrait pictures is used for reminding a user to upload the target image materials again.
As shown in fig. 6, when image recognition is performed on the target image material and no portrait is found in the recognition result, a notification message for uploading a portrait picture is displayed on the head portrait generation page, as shown at 601.
In another alternative embodiment, when it is detected that the head portrait generation page does not contain the target image material, a notification message for uploading a portrait picture is displayed on the head portrait generation page.
As shown in fig. 7, when it is detected that the head portrait generation page does not contain the target image material, a notification message 701 for uploading a portrait picture is displayed on the head portrait generation page.
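Before generation is triggered, the client could run the checks described above: if no target image material is present, or if a portrait-recognition step finds no face, a notification asking the user to upload a portrait picture is shown instead of starting generation. detectPortrait below is a placeholder for whatever recognition module is actually deployed:

```typescript
// Placeholder for an image-recognition module; returns true if a portrait is found.
type PortraitDetector = (imageUrl: string) => Promise<boolean>;

async function validateBeforeGeneration(
  targetImageMaterialUrl: string | null,
  detectPortrait: PortraitDetector,
  notify: (message: string) => void,
): Promise<boolean> {
  if (!targetImageMaterialUrl) {
    notify("Please upload a portrait picture"); // no target image material on the page (fig. 7)
    return false;
  }
  if (!(await detectPortrait(targetImageMaterialUrl))) {
    notify("Please upload a portrait picture"); // no portrait recognized in the material (fig. 6)
    return false;
  }
  return true; // safe to trigger the head portrait generation operation
}
```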
On the basis of the embodiment, a video generation control can be further set on the avatar generation page to support generation of a second target video based on the generated second AI effect avatar, so that user experience is further improved. In addition, a preview effect control can be further arranged on the head portrait generation page, so that a user can preview and view the generated second target video.
Specifically, when a trigger operation for a video generation control on the avatar generation page is received, a second target video is generated based on the second AI-effect avatar. The second target video is used for displaying a second AI effect head portrait.
In an alternative implementation, a second AI effect head portrait is displayed on the head portrait generation page. When a trigger operation on the video generation control is received, a second target video is generated based on the second AI effect head portrait; when a trigger operation on the preview effect control is received, a preview page is displayed and the second target video is displayed on the preview page; and when a trigger operation on a release control is received, the second target video is published as a work on the user's personal homepage.
In another alternative implementation, when the head portrait generation operation is triggered, the video generation control is selected by default. After the second AI effect head portrait is generated, the user may trigger the preview effect control to display a preview page, which displays the second target video generated based on the second AI effect head portrait, and the current user can then be triggered to publish the second target video by triggering the release control.
For example, the video generation control 507 in the selected state and the preview effect control 508 are displayed on the head portrait generation page shown in fig. 5. When a trigger operation for the preview effect control is received, a preview page is displayed, through which the second target video generated based on the second AI effect head portrait can be viewed; further, the current user is triggered to publish the second target video by triggering the release control on the preview page.
By setting the video generation control on the head portrait generation page, the function of generating the second target video based on the head portrait with the second AI effect is realized, and the related interaction function of the video is enriched.
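A hedged end-to-end sketch of this video branch: once the second AI effect head portrait exists, the video generation control produces a second target video, the preview effect control shows it on a preview page, and the release control publishes it to the user's personal homepage. Every function and interface here is an assumed client or service API, not the disclosed implementation:

```typescript
interface VideoService {
  generateVideoFromAvatar(avatarUrl: string): Promise<{ videoId: string; videoUrl: string }>;
  publishVideo(videoId: string): Promise<void>; // publish as a work on the user's personal homepage
}
interface PreviewPage {
  show(videoUrl: string): void;
}

async function onGenerateVideoClicked(video: VideoService, secondAvatarUrl: string) {
  // Video generation control: build the second target video around the second AI effect head portrait.
  return video.generateVideoFromAvatar(secondAvatarUrl);
}

function onPreviewClicked(preview: PreviewPage, videoUrl: string) {
  preview.show(videoUrl); // preview effect control: view the second target video
}

async function onReleaseClicked(video: VideoService, videoId: string) {
  await video.publishVideo(videoId); // release control: publish the second target video
}
```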
In practical application, in order to prompt the user to set the currently set personal head portrait as the AI effect head portrait, AI effect prompt information can be displayed on a target page to prompt to trigger generation of the AI effect head portrait, wherein the target page contains the personal head portrait set by the current user, and the personal head portrait is in an editable state, for example, the target page can be a personal homepage of the user and an edited data page.
Specifically, a target page is displayed, wherein a current user head portrait of a target user is displayed on the target page, and AI effect prompt information is displayed on the current user head portrait and used for prompting to trigger generation of the AI effect head portrait.
And when the AI effect head portrait generating triggering operation acted on the target page is received, obtaining a third AI effect head portrait based on the current user head portrait of the target user, wherein the third AI effect head portrait is used for triggering and setting the user head portrait of the target user.
The AI effect prompt information can be set based on requirements. Specifically, it can be an animation effect or the like, for example, an animation effect that displays a halo.
In an alternative embodiment, a user personal homepage is displayed, and the user personal homepage displays the current user head portrait of the target user together with the AI effect prompt information. When a trigger operation acting on the user head portrait on the target page is received, a head portrait setting panel is pulled up on the user personal homepage, and an AI head portrait setting control is displayed on the head portrait setting panel. When a trigger operation on the AI head portrait setting control on the head portrait setting panel is received, a head portrait generation page is displayed, and a third AI effect head portrait is then obtained based on the current user head portrait of the target user. For the specific process of generating the third AI effect head portrait, reference may be made to the discussion above.
As shown in fig. 8, which is a schematic diagram of a user personal homepage provided in an embodiment of the present disclosure, the user personal homepage displays the current user head portrait 801 of the target user together with the AI effect prompt information. When a trigger operation acting on the user head portrait 801 on the target page is received, a head portrait setting panel 802 is pulled up on the user personal homepage, and an AI head portrait setting control 803 is displayed on the head portrait setting panel. When a trigger operation acting on the AI head portrait setting control 803 on the head portrait setting panel is received, a head portrait generation page is displayed, and a third AI effect head portrait is then obtained based on the current user head portrait of the target user. The third AI effect head portrait is used for setting the user head portrait of the target user upon triggering.
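Finally, a sketch of the personal homepage flow in fig. 8, again under assumed APIs: the current user head portrait carries an AI effect prompt (e.g. an animated halo); tapping it pulls up a head portrait setting panel whose AI head portrait control reuses the current head portrait as the material for a third AI effect head portrait:

```typescript
interface AvatarSettingPanel {
  show(options: { onAiAvatarClicked: () => void }): void; // panel 802 with AI head portrait control 803
}

// Hypothetical generator: reuses the current user head portrait as the image material.
type AvatarGenerator = (currentAvatarUrl: string) => Promise<string>; // resolves with the third AI effect head portrait URL

function onCurrentAvatarTapped(
  panel: AvatarSettingPanel,
  generate: AvatarGenerator,
  currentAvatarUrl: string,
  onGenerated: (thirdAvatarUrl: string) => void,
): void {
  panel.show({
    onAiAvatarClicked: async () => {
      const thirdAvatarUrl = await generate(currentAvatarUrl);
      onGenerated(thirdAvatarUrl); // may then be set as the user head portrait upon confirmation
    },
  });
}
```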
Based on the above method embodiments, the present disclosure further provides a video-based interaction device, and referring to fig. 9, a schematic structural diagram of the video-based interaction device provided by the embodiments of the present disclosure is provided, where the device includes:
the first display module 901 is configured to display a preset head portrait generation entry on a playing page of a first target video; the first target video is used for displaying a first artificial intelligence (AI) effect head portrait;
a second display module 902, configured to display a head portrait generation page in response to a triggering operation on the preset head portrait generation entry;
a first obtaining module 903, configured to obtain a second AI effect head portrait based on a target image material and a target AI effect resource in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for the target image material and the target AI effect resource; the second AI effect head portrait is used for setting a user head portrait.
In an optional embodiment, the avatar generation page is provided with an avatar setting control, and the apparatus further includes:
and the setting module is used for responding to the triggering operation of the head portrait setting control on the head portrait generating page, and setting the second AI effect head portrait as the user head portrait of the target user.
In an optional implementation manner, the head portrait generation page is provided with a material uploading control, and the device further comprises:
the third display module is used for responding to the triggering operation of the material uploading control on the head portrait generation page and displaying a user material page;
and the determining module is used for determining the image material as a target image material in response to a selected operation on any image material on the user material page.
In an optional embodiment, the avatar generation page is provided with a video generation control, and the apparatus further includes:
a first generation module for generating a second target video based on the second AI-effect avatar in response to a trigger operation for the video generation control on the avatar generation page; wherein the second target video is used for displaying the second AI effect head portrait.
In an alternative embodiment, the first obtaining module is specifically configured to:
in response to the head portrait generation operation acting on the head portrait generation page that is triggered for the target image material and the target AI effect resource, switching the display from the target image material to a second AI effect head portrait according to a preset switching effect; the second AI effect head portrait is obtained based on the target image material and the target AI effect resource.
In an alternative embodiment, the first target video belongs to a gyroscope video, and the first target video includes a plurality of layers, where the plurality of layers include the layer where the first AI effect head portrait is located, and the apparatus further includes:
the acquisition module is used for acquiring a deflection angle corresponding to a playing page of the first target video;
and the control module is used for controlling the first AI effect head portrait in the first target video to shift in a shift range corresponding to the layer according to the deflection angle.
In an alternative embodiment, the apparatus further comprises:
the fourth display module is used for displaying the target page; the target page is provided with a current user head portrait of a target user, the current user head portrait is provided with AI effect prompt information, and the AI effect prompt information is used for prompting to trigger generation of the AI effect head portrait.
In an alternative embodiment, the apparatus further comprises:
the second obtaining module is used for responding to the AI effect head portrait acted on the target page to generate a trigger operation, and obtaining a third AI effect head portrait based on the current user head portrait of the target user; and the third AI effect head portrait is used for triggering and setting the user head portrait of the target user.
In the video-based interaction device provided by the embodiments of the present disclosure, first, a preset head portrait generation entry is displayed on a play page of a first target video, where the first target video is used for displaying a first artificial intelligence (AI) effect head portrait. A head portrait generation page is displayed in response to a triggering operation on the preset head portrait generation entry, and a second AI effect head portrait is obtained based on a target image material and a target AI effect resource in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for the target image material and the target AI effect resource, where the second AI effect head portrait is used for setting a user head portrait. In this way, the embodiments of the present disclosure can support triggering generation of an AI effect head portrait through a preset head portrait generation entry displayed on a video play page, which enriches video-related interactive functions and improves user experience.
In addition to the above methods and apparatuses, the embodiments of the present disclosure further provide a computer readable storage medium, where instructions are stored, when the instructions are executed on a terminal device, cause the terminal device to implement the video-based interaction method according to the embodiments of the present disclosure.
The disclosed embodiments also provide a computer program product comprising computer programs/instructions which, when executed by a processor, implement the video-based interaction method of the disclosed embodiments.
In addition, the embodiment of the present disclosure further provides a video-based interaction device, as shown in fig. 10, which may include:
a processor 1001, a memory 1002, an input device 1003, and an output device 1004. The number of processors 1001 in the video-based interaction device may be one or more, one processor being exemplified in fig. 10. In some embodiments of the present disclosure, the processor 1001, memory 1002, input device 1003, and output device 1004 may be connected by a bus or other means, with bus connections being exemplified in fig. 10.
The memory 1002 may be used to store software programs and modules, and the processor 1001 performs various functional applications and data processing of the video-based interactive apparatus by executing the software programs and modules stored in the memory 1002. The memory 1002 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for at least one function, and the like. In addition, memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. The input means 1003 may be used to receive input numeric or character information and to generate signal inputs related to user settings and function control of the video-based interactive apparatus.
In particular, in this embodiment, the processor 1001 loads executable files corresponding to the processes of one or more application programs into the memory 1002 according to the following instructions, and the processor 1001 executes the application programs stored in the memory 1002, so as to implement the various functions of the video-based interaction device.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A video-based interaction method, the method comprising:
displaying a preset head portrait generation entry on a playing page of a first target video; the first target video is used for displaying a first artificial intelligence (AI) effect head portrait;
displaying a head portrait generation page in response to a triggering operation on the preset head portrait generation entry;
in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for a target image material and a target AI effect resource, obtaining a second AI effect head portrait based on the target image material and the target AI effect resource; the second AI effect head portrait is used for setting a user head portrait.
2. The method of claim 1, wherein the avatar generation page has an avatar setting control disposed thereon, the method further comprising:
and setting the second AI effect head portrait as a user head portrait of a target user in response to a trigger operation of the head portrait setting control on the head portrait generation page.
3. The method according to claim 1, wherein the avatar generation page is provided with a material upload control, and the method further comprises, after displaying the avatar generation page in response to a trigger operation for the preset avatar generation portal:
responding to the triggering operation of the material uploading control on the head portrait generation page, and displaying a user material page;
in response to a selected operation for any image material on the user material page, the image material is determined to be a target image material.
4. The method of claim 1, wherein the avatar generation page has a video generation control disposed thereon, the method further comprising:
generating a second target video based on the second AI-effect avatar in response to a trigger operation for the video generation control on the avatar generation page; wherein the second target video is used for displaying the second AI effect head portrait.
5. The method of claim 1, wherein the obtaining a second AI effect head portrait based on the target image material and the target AI effect resource in response to a head portrait generation operation triggered on the head portrait generation page for the target image material and the target AI effect resource comprises:
in response to the head portrait generation operation acting on the head portrait generation page that is triggered for the target image material and the target AI effect resource, switching the display from the target image material to a second AI effect head portrait according to a preset switching effect; the second AI effect head portrait is obtained based on the target image material and the target AI effect resource.
6. The method of claim 1, wherein the first target video belongs to a gyroscope video, the first target video comprising a plurality of layers including a layer in which the first AI effect head portrait is located, the method further comprising:
acquiring a deflection angle corresponding to a playing page of the first target video;
and controlling, according to the deflection angle, the first AI effect head portrait in the first target video to shift within a shift range corresponding to the layer.
7. The method according to claim 1, wherein the method further comprises:
displaying a target page; the target page is provided with a current user head portrait of a target user, the current user head portrait is provided with AI effect prompt information, and the AI effect prompt information is used for prompting to trigger generation of the AI effect head portrait.
8. The method of claim 7, wherein the method further comprises:
in response to an AI effect head portrait generation trigger operation acting on the target page, obtaining a third AI effect head portrait based on the current user head portrait of the target user; the third AI effect head portrait is used for setting the user head portrait of the target user upon triggering.
9. A video-based interactive apparatus, the apparatus comprising:
the first display module is used for displaying a preset head portrait generation entry on a playing page of the first target video; the first target video is used for displaying a first artificial intelligence (AI) effect head portrait;
the second display module is used for displaying a head portrait generation page in response to a triggering operation on the preset head portrait generation entry;
a first obtaining module, configured to obtain a second AI effect head portrait based on a target image material and a target AI effect resource in response to a head portrait generation operation, acting on the head portrait generation page, that is triggered for the target image material and the target AI effect resource; the second AI effect head portrait is used for setting a user head portrait.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein instructions, which when run on a terminal device, cause the terminal device to implement the method according to any of claims 1-8.
11. A video-based interactive apparatus, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1-8 when the computer program is executed.
CN202311415712.5A 2023-10-27 2023-10-27 Video-based interaction method, device, equipment and storage medium Pending CN117425050A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311415712.5A CN117425050A (en) 2023-10-27 2023-10-27 Video-based interaction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311415712.5A CN117425050A (en) 2023-10-27 2023-10-27 Video-based interaction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117425050A 2024-01-19

Family

ID=89532170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311415712.5A Pending CN117425050A (en) 2023-10-27 2023-10-27 Video-based interaction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117425050A (en)

Similar Documents

Publication Publication Date Title
CN110784752B (en) Video interaction method and device, computer equipment and storage medium
CN107920274B (en) Video processing method, client and server
CN114025189B (en) Virtual object generation method, device, equipment and storage medium
CN112169320B (en) Method, device, equipment and storage medium for starting and archiving application program
CN114257829B (en) Resource processing method, device and equipment for live broadcast room and storage medium
CN111343512B (en) Information acquisition method, display device and server
CN103546813A (en) Android platform based video preview method and smart television
CN112169318B (en) Method, device, equipment and storage medium for starting and archiving application program
WO2022252998A1 (en) Video processing method and apparatus, device, and storage medium
CN113596555B (en) Video playing method and device and electronic equipment
CN112911147B (en) Display control method, display control device and electronic equipment
CN113938731A (en) Screen recording method and display device
WO2024104182A1 (en) Video-based interaction method, apparatus, and device, and storage medium
CN112169319B (en) Application program starting method, device, equipment and storage medium
CN111954076A (en) Resource display method and device and electronic equipment
CN117425050A (en) Video-based interaction method, device, equipment and storage medium
CN115119064B (en) Video processing method, device, equipment and storage medium
CN115623272A (en) Video processing method, device, equipment and storage medium
CN116489441A (en) Video processing method, device, equipment and storage medium
CN112165646B (en) Video sharing method and device based on barrage message and computer equipment
CN115623226A (en) Live broadcast method, device, equipment, storage medium and computer program product
CN114979746B (en) Video processing method, device, equipment and storage medium
CN115086739B (en) Video processing method, device, equipment and storage medium
CN113179445B (en) Video sharing method based on interactive object and interactive object
CN117135392A (en) Video processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination