CN116126177A - Data interaction control method and device, electronic equipment and storage medium

Info

Publication number
CN116126177A
Authority
CN
China
Prior art keywords
target, digital person, video, human-computer interaction interface
Legal status: Pending
Application number
CN202211662253.6A
Other languages
Chinese (zh)
Inventor
林建彪 (Lin Jianbiao)
沈亚阳 (Shen Yayang)
Assignee
Xiamen Black Mirror Technology Co., Ltd.
Priority date / Filing date: 2022-12-23
Publication date: 2023-05-16 (publication of CN116126177A)
Application filed by Xiamen Black Mirror Technology Co., Ltd.

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a data interaction control method and apparatus, an electronic device, and a storage medium. The method includes: acquiring an interaction trigger event on a human-computer interaction interface, and determining video rendering parameters of a target digital person according to the interaction trigger event; calling a preset rendering server to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person; and displaying the video stream on the human-computer interaction interface. Because a dedicated preset rendering server performs the rendering operations during interaction, the interaction response speed of the digital person is improved and the user experience is improved.

Description

Data interaction control method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data interaction control method, apparatus, electronic device, and storage medium.
Background
A digital person is a virtual simulation of the human body, at different levels of form and function, built using information-science methods.
In the prior art, most digital person interaction technologies load the digital person on the local user terminal and have that terminal perform rendering and interaction. This approach places high hardware requirements on the local terminal, and during interaction the rendering time of the digital person and the output time of the action video stream are too long, which seriously degrades the response speed and gives the user a poor experience.
Therefore, how to improve the interaction response speed of digital persons is a technical problem that needs to be solved at present.
It should be noted that the information disclosed in the above background section is only for enhancing the understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
The embodiment of the application provides a data interaction control method, a data interaction control device, electronic equipment and a storage medium, which are used for improving the interaction response speed of digital people.
In a first aspect, a data interaction control method is provided, the method including: acquiring an interaction trigger event on a human-computer interaction interface, and determining video rendering parameters of a target digital person according to the interaction trigger event; calling a preset rendering server to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person; and displaying the video stream on the human-computer interaction interface.
In a second aspect, a data interaction control apparatus is provided, the apparatus including: an acquisition module, configured to acquire an interaction trigger event on a human-computer interaction interface and determine video rendering parameters of a target digital person according to the interaction trigger event; a video rendering module, configured to call a preset rendering server to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person; and a video display module, configured to display the video stream on the human-computer interaction interface.
In a third aspect, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the data interaction control method of the first aspect via execution of the executable instructions.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, implements the data interaction control method according to the first aspect.
By applying this technical solution, an interaction trigger event on the human-computer interaction interface is acquired, and video rendering parameters of a target digital person are determined according to the interaction trigger event; a preset rendering server is called to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person; and the video stream is displayed on the human-computer interaction interface. Because the rendering operations during interaction are performed by a dedicated preset rendering server, the interaction response speed of the digital person is improved and the user experience is improved.
Drawings
In order to describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 shows a flow chart of a data interaction control method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a data interaction control method according to another embodiment of the present invention;
fig. 3 is a schematic flow chart of a data interaction control method according to another embodiment of the present invention;
fig. 4 shows a schematic structural diagram of a data interaction control device according to an embodiment of the present invention;
fig. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It is noted that other embodiments of the present application will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise construction set forth herein below and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
It should be noted that the following application scenario is only shown for the convenience of understanding the spirit and principles of the present application, and embodiments of the present application are not limited in any way in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
The embodiment of the application provides a data interaction control method, as shown in fig. 1, which comprises the following steps:
step S101, an interaction trigger event on a man-machine interaction interface is obtained, and video rendering parameters of a target digital person are determined according to the interaction trigger event.
In this embodiment, the human-computer interaction interface may be that of a client installed on a terminal device; the user enters the interface after opening the client on the terminal device. Terminal devices include, but are not limited to, smartphones, tablet computers, laptop computers, desktop computers, self-service terminals, wearable electronic devices, and the like. The user interacts with the target digital person through the human-computer interaction interface; the target digital person may be generated in real time, uploaded by the user, or received from another server or terminal. The interaction trigger event is triggered by an input operation of the user on the interface, where the input operation may include inputting text, inputting voice, triggering an action, triggering a button, and the like. The user may actively perform an input operation according to his or her interaction needs, or may perform one under the guidance of text or voice information on the interface; either way, an interaction trigger event is generated on the human-computer interaction interface.
After the interaction trigger event is acquired, the video rendering parameters of the target digital person are determined according to it. The video rendering parameters are used to drive the target digital person to produce the interaction behavior corresponding to the interaction trigger event, and they include one or more of sound rendering data, expression rendering data, and action rendering data.
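For illustration only, the trigger event and the video rendering parameters might be modeled as follows; this is a minimal TypeScript sketch, and every type and field name here is an assumption rather than something defined in this disclosure:

```typescript
// Hypothetical model of an interaction trigger event produced by an input
// operation (text, voice, action, or button) on the human-computer
// interaction interface.
interface InteractionTriggerEvent {
  kind: "text" | "voice" | "action" | "button";
  payload: string;    // e.g. the typed text or an identifier of the gesture
  timestamp: number;
}

// Video rendering parameters that drive the target digital person; per the
// description they include one or more of sound, expression, and action
// rendering data, so every field is optional.
interface VideoRenderParams {
  soundData?: ArrayBuffer;  // sound rendering data (e.g. synthesized speech)
  expressionData?: string;  // expression rendering data (e.g. "smile")
  actionData?: string;      // action rendering data (e.g. "clap")
}
```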
Step S102, a preset rendering server is called to render the target digital person according to the video rendering parameters, and a video stream comprising the target digital person is obtained.
The preset rendering server implements its rendering function on a 3D engine, such as UE4 or Unity. The preset rendering server can be called directly, or through a preset WebRTC (Web Real-Time Communication) server. It renders the target digital person according to the video rendering parameters to obtain a corresponding video stream in which the target digital person exhibits the corresponding interaction behavior.
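A minimal sketch of the WebRTC call path follows, assuming a browser client and a hypothetical signaling endpoint on the preset rendering server; the endpoint URL and message shape are assumptions, and only the standard RTCPeerConnection and fetch APIs are used:

```typescript
// Request a video stream of the target digital person, rendered remotely by
// the preset rendering server's 3D engine and negotiated over WebRTC.
async function requestRenderedStream(
  renderParams: unknown,   // the video rendering parameters (see sketch above)
  signalingUrl: string,    // assumed signaling endpoint of the rendering server
): Promise<MediaStream> {
  const pc = new RTCPeerConnection();
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });

  // Resolve once the remote (server-rendered) track arrives.
  const streamReady = new Promise<MediaStream>((resolve) => {
    pc.ontrack = (ev) => resolve(ev.streams[0]);
  });

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Exchange SDP with the rendering server and pass the render parameters.
  const resp = await fetch(signalingUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sdp: pc.localDescription, renderParams }),
  });
  const { sdp } = await resp.json();
  await pc.setRemoteDescription(sdp);

  return streamReady;
}
```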
For example, if the input operation corresponding to the interaction trigger event is a piece of input voice, the target digital person in the video stream will make the mouth-shape animation for "It is rainy today, 10-20 degrees" and play the corresponding voice. If the input operation corresponding to the interaction trigger event is a clapping gesture accompanied by "let's clap", where the gesture may be made on the human-computer interaction interface or in the air at a distance from it, then in the corresponding video stream the target digital person will make a matching clapping action, show a facial expression corresponding to the clapping action (such as a lazy smile), and may also produce a corresponding voice (such as "yey"). If the input operation corresponding to the interaction trigger event is inputting the text "how to wash hands", the target digital person in the video stream can answer the question by voice, make expressions and actions matching the voice, and demonstrate hand washing through limb movements.
Step S103, displaying the video stream on the human-computer interaction interface.
In this embodiment, a video display area may be provided on the human-computer interaction interface, and the video stream is displayed in that area. The playing progress may be displayed alongside the video stream so that the user can adjust it at any time. A mute option may also be displayed, letting the user control whether the video stream plays sound, to meet muting needs in different scenarios. A full-screen option may likewise be displayed so that the user can view the video stream full screen.
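A sketch of such a display area in a browser client is shown below; the element ids and the use of a standard HTML video element are assumptions:

```typescript
// Show the rendered stream in the video display area, with progress, mute,
// and full-screen controls as described above.
function showStream(stream: MediaStream): void {
  const video = document.getElementById("digital-person") as HTMLVideoElement;
  video.srcObject = stream;  // display the video stream
  video.controls = true;     // exposes playing progress and a mute control
  void video.play();

  // Dedicated full-screen option so the user can watch full screen.
  const fullBtn = document.getElementById("fullscreen") as HTMLButtonElement;
  fullBtn.onclick = () => void video.requestFullscreen();

  // Dedicated mute option for scenarios that require silence.
  const muteBtn = document.getElementById("mute") as HTMLButtonElement;
  muteBtn.onclick = () => { video.muted = !video.muted; };
}
```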
In some embodiments of the present application, before acquiring the interaction trigger event on the human-computer interaction interface, the method further includes:
acquiring a creation trigger event on the human-computer interaction interface, and determining digital person creation parameters according to the creation trigger event;
calling the preset rendering server to create an initial digital person according to the digital person creation parameters;
and storing the initial digital person, and displaying the initial digital person on the human-computer interaction interface.
In this embodiment, the user can create a digital person through the human-computer interaction interface. The creation trigger event is triggered by a creation operation on the interface, where the creation operation is a single operation or a set of operations. Digital person creation parameters are determined according to the creation trigger event, and the preset rendering server is then called to create an initial digital person according to those parameters. The initial digital person is stored, either in response to a storage instruction or automatically, and is displayed on the human-computer interaction interface. This enables efficient creation of digital persons and improves the user experience.
Optionally, the creation operation includes uploading a face photo and entering a gender and a style type; the digital person creation parameters then include the gender, the style type, and facial feature data extracted from the face photo. The face photo contains a target face, and the initial digital person is created based on that face. The gender specifies the digital person's gender, and the style type specifies the digital person's style, such as realistic, beautified, or cute; each style can be previewed through sample pictures, making it easy for the user to choose according to their own needs. To match the digital person to the target face in the photo and ensure a good visual effect, the content, illumination, format, and size of the photo must meet preset conditions: for example, the photo must be a frontal face photo with uniform, sufficient lighting and a naturally relaxed expression, must be in a preset format (such as JPG or PNG), and must not exceed a preset size (such as 10 MB).
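The format and size conditions can be checked on the client before upload; the following is a minimal sketch in which the helper name and error messages are assumptions, and the content conditions (frontal pose, illumination, expression) would require a separate face-analysis step:

```typescript
const ALLOWED_TYPES = ["image/jpeg", "image/png"]; // preset formats: JPG, PNG
const MAX_SIZE = 10 * 1024 * 1024;                 // preset size limit: 10 MB

// Returns an error message if the face photo violates a preset condition,
// or null if the basic checks pass.
function checkFacePhoto(photo: File): string | null {
  if (!ALLOWED_TYPES.includes(photo.type)) {
    return "The photo must be in JPG or PNG format";
  }
  if (photo.size > MAX_SIZE) {
    return "The photo must not exceed 10 MB";
  }
  // Frontal pose, uniform illumination, and a relaxed expression are also
  // required, but verifying them needs face detection, omitted here.
  return null;
}
```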
In some embodiments of the present application, after presenting the initial digital person on the human-computer interaction interface, the method further comprises:
acquiring an editing trigger event on the human-computer interaction interface, and determining editing parameters of the initial digital person according to the editing trigger event;
calling the preset rendering server to edit the initial digital person according to the editing parameters, to obtain the target digital person;
and displaying the target digital person on the human-computer interaction interface.
In this embodiment, the user can edit the created digital person through the human-computer interaction interface. The editing trigger event is triggered by an editing operation on the interface, where the editing operation is a single operation or a set of operations that produces corresponding editing parameters. The editing parameters of the initial digital person are determined according to the editing trigger event, the preset rendering server is then called to edit the initial digital person according to those parameters, and the target digital person obtained after editing is displayed on the human-computer interaction interface. This enables efficient editing of digital persons and improves the user experience.
Optionally, the editing parameters include at least one of a scene, an action, an article, a subtitle, a dubbing, text, a material, a filter, music, a transition, and a lens. The scene is the setting the digital person is placed in; different scenes can be selected, and the scene's position, its angle relative to the person, its size, and the like can be adjusted. An action makes the digital person display a corresponding behavior, such as lecture broadcasting, daily interaction, a standing or sitting pose, a social expression, an emotional expression, an improvised performance, or sports. An article may be something the digital person carries or an object in the surrounding environment. Subtitles are the text corresponding to the dubbing, and the dubbing can be synthesized automatically by TTS (Text To Speech) or taken from sound and music recorded by the user. Text here means on-screen text other than subtitles. Materials can be preset pictures or video clips, or content uploaded by the user, and their position, placement, scale, transparency, and the like can be adjusted. A filter is a shooting effect; the music is the background music to be presented; a transition moves the digital person into a different scene; and the lens refers to the settings of the virtual camera, such as the lens position, lens angle, and lens-switching parameters.
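For illustration, the editing parameters enumerated above might be grouped as follows; this is a hypothetical sketch in which every field name is an assumption, and every field is optional because the editing parameters include at least one of the listed items:

```typescript
// Hypothetical container for the editing parameters of the initial digital
// person, mirroring the items listed in the description.
interface EditParams {
  scene?: { id: string; position?: [number, number, number]; scale?: number };
  action?: string;           // e.g. lecture broadcasting, a sitting pose
  article?: string;          // an item carried or placed in the environment
  subtitle?: string;         // text corresponding to the dubbing
  dubbing?: { ttsText?: string; recordedAudioUrl?: string };
  text?: string;             // on-screen text other than subtitles
  material?: { url: string; opacity?: number; scale?: number };
  filter?: string;           // a shooting effect
  music?: string;            // background music
  transition?: string;       // moves the digital person between scenes
  camera?: { position?: [number, number, number]; angle?: number };
}
```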
In some embodiments of the present application, after presenting the target digital person on the human-computer interaction interface, the method further comprises:
acquiring a local storage trigger event on the human-computer interaction interface, and determining a local storage path and a digital person identifier according to the local storage trigger event;
and storing the target digital person in the local storage path according to the digital person identifier.
In this embodiment, the local storage trigger event is triggered by a local storage operation on the human-computer interaction interface, where the local storage operation includes designating a local storage path and naming a digital person identifier for the target digital person. The local storage path and the digital person identifier are determined according to the local storage trigger event, and the target digital person is then stored in the local storage path under the digital person identifier. The target digital person can subsequently be queried by its identifier, so the user can access it efficiently.
Optionally, the local storage triggering event may also be triggered automatically when a preset triggering condition is met.
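A sketch of the storage step follows, assuming a Node.js runtime and a simple one-file-per-identifier layout; the disclosure does not specify how the digital person data is serialized, so the file extension and layout are assumptions:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Store the target digital person in the local storage path, named by the
// digital person identifier so it can be queried later.
function storeDigitalPerson(
  localStoragePath: string,
  digitalPersonId: string,
  digitalPersonData: Buffer,
): string {
  fs.mkdirSync(localStoragePath, { recursive: true });
  const target = path.join(localStoragePath, `${digitalPersonId}.bin`);
  fs.writeFileSync(target, digitalPersonData);
  return target;
}
```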
In some embodiments of the present application, after storing the target digital person in the local storage path according to the digital person identifier, the method further comprises:
acquiring an invoking trigger event for the target digital person on the human-computer interaction interface, and determining a user identifier according to the invoking trigger event;
judging whether the user identifier meets a preset authentication rule;
if yes, displaying the target digital person and the digital person identifier on the human-computer interaction interface;
and if not, displaying an access-denied prompt on the human-computer interaction interface.
In this embodiment, the invoking trigger event is triggered by the user's invoking operation for the target digital person on the human-computer interaction interface, and the user identifier is determined according to that event. If the user identifier meets the preset authentication rule, the user is determined to be a legitimate user who may invoke the target digital person, and the target digital person and the digital person identifier are displayed on the human-computer interaction interface. If not, the user is determined to be an illegitimate user who cannot invoke the target digital person, and an access-denied prompt is displayed on the human-computer interaction interface. This improves security when the digital person is accessed.
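A minimal sketch of the authentication step follows; the preset authentication rule is modeled here as a whitelist of user identifiers, which is an assumption, since the disclosure does not fix the rule's form:

```typescript
// User identifiers permitted to invoke the target digital person
// (placeholder values; the real rule could be any preset check).
const authorizedUsers = new Set<string>(["user-001", "user-002"]);

// Decide, from the user identifier carried by the invoking trigger event,
// whether to show the digital person or an access-denied prompt.
function authorizeInvocation(userId: string): boolean {
  return authorizedUsers.has(userId); // the preset authentication rule
}
```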
By applying this technical solution, an interaction trigger event on the human-computer interaction interface is acquired, and video rendering parameters of a target digital person are determined according to the interaction trigger event; a preset rendering server is called to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person; and the video stream is displayed on the human-computer interaction interface. Because the rendering operations during interaction are performed by a dedicated preset rendering server, the interaction response speed of the digital person is improved and the user experience is improved.
The embodiment of the application also provides a data interaction control method, as shown in fig. 2, which comprises the following steps:
step S201, an interaction trigger event on a man-machine interaction interface is obtained, and operation data of a user is determined according to the interaction trigger event.
In this embodiment, the user interacts with the target digital person through the man-machine interaction interface, and the target digital person may be generated in real time, uploaded by the user, or received from another server or terminal. The interaction triggering event is triggered by an input operation of a user on the man-machine interaction interface, wherein the input operation can comprise operations of inputting text, inputting voice, triggering actions, triggering keys and the like. The user can actively perform input operation on the man-machine interaction interface according to the interaction requirement so as to generate an interaction triggering event on the man-machine interaction interface. The user can also input operation on the man-machine interaction interface according to the guidance of the text information or the voice information on the man-machine interaction interface so as to generate an interaction triggering event on the man-machine interaction interface. Since the interaction trigger event is triggered by the input operation, the operation data of the user can be determined according to the interaction trigger event.
Step S202, querying a preset correspondence table according to the operation data to obtain a target configuration file.
In this embodiment, the preset correspondence table is established in advance from the correspondences between different operation data and different configuration files, where a configuration file contains the video rendering parameters that make the target digital person perform an interaction behavior. Querying the preset correspondence table with the operation data yields the target configuration file, as sketched below.
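```typescript
// A configuration file holds ready-made video rendering parameters that make
// the target digital person perform the corresponding interaction; the keys
// and contents below are assumptions chosen for illustration.
interface ConfigFile {
  renderParams: { soundText?: string; expressionData?: string; actionData?: string };
}

// Preset correspondence table, built in advance from the correspondence
// between different operation data and different configuration files.
const presetTable = new Map<string, ConfigFile>([
  ["action:clap", { renderParams: { actionData: "clap", expressionData: "smile" } }],
  ["text:how to wash hands", { renderParams: { actionData: "wash-hands-demo" } }],
]);

// Query the table with the user's operation data to get the target
// configuration file.
function lookupConfig(operationData: string): ConfigFile | undefined {
  return presetTable.get(operationData);
}
```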
Step S203, determining the video rendering parameters according to the target configuration file.
In this embodiment, the video rendering parameters are used to drive the target digital person to produce the interaction behavior corresponding to the interaction trigger event, and they include one or more of sound rendering data, expression rendering data, and action rendering data. Because the video rendering parameters can be determined quickly from the target configuration file, the response speed during interaction is improved.
Step S204, a preset rendering server is called to render the target digital person according to the video rendering parameters, and a video stream comprising the target digital person is obtained.
The preset rendering server implements its rendering function on a 3D engine, such as UE4 or Unity. It can be called directly or through the preset WebRTC server, and renders the target digital person according to the video rendering parameters to obtain a corresponding video stream.
Step S205, displaying the video stream on the human-computer interaction interface.
In this embodiment, a video display area may be provided on the human-computer interaction interface, and the video stream is displayed in that area. The playing progress may be displayed alongside the video stream so that the user can adjust it at any time. A mute option may also be displayed, letting the user control whether the video stream plays sound, and a full-screen option may be displayed so that the user can view the video stream full screen.
By applying this technical solution, an interaction trigger event on the human-computer interaction interface is acquired, and the user's operation data is determined according to the interaction trigger event; a preset correspondence table is queried according to the operation data to obtain a target configuration file; the video rendering parameters are determined according to the target configuration file; a preset rendering server is called to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person; and the video stream is displayed on the human-computer interaction interface. Because the rendering operations during interaction are performed by a dedicated preset rendering server, and the video rendering parameters are determined from a target configuration file obtained by table lookup, the interaction response speed of the digital person is improved and the user experience is improved.
The embodiment of the application also provides a data interaction control method, as shown in fig. 3, comprising the following steps:
step S301, an interaction trigger event on a man-machine interaction interface is obtained, and video rendering parameters of a target digital person are determined according to the interaction trigger event.
In this embodiment, the user interacts with the target digital person through the man-machine interaction interface, and the target digital person may be generated in real time, uploaded by the user, or received from another server or terminal. The interaction triggering event is triggered by an input operation of a user on the man-machine interaction interface, wherein the input operation can comprise operations of inputting text, inputting voice, triggering actions, triggering keys and the like. The user can actively perform input operation on the man-machine interaction interface according to the interaction requirement so as to generate an interaction triggering event on the man-machine interaction interface. The user can also input operation on the man-machine interaction interface according to the guidance of the text information or the voice information on the man-machine interaction interface so as to generate an interaction triggering event on the man-machine interaction interface.
After the interaction trigger event is acquired, determining video rendering parameters of the target digital person according to the interaction trigger event, wherein the video rendering parameters are used for driving the target digital person to generate interaction behaviors corresponding to the interaction trigger event, and the video rendering parameters comprise one or more of sound rendering data, expression rendering data and action rendering data.
Step S302, a preset rendering server is called to render the target digital person according to the video rendering parameters, and a video stream comprising the target digital person is obtained.
The preset rendering server implements its rendering function on a 3D engine, such as UE4 or Unity. It can be called directly or through the preset WebRTC server. The preset rendering server renders the target digital person according to the video rendering parameters to obtain a corresponding video stream in which the target digital person exhibits the corresponding interaction behavior.
Step S303, displaying the video stream on the human-computer interaction interface.
A video display area may be provided on the human-computer interaction interface, and the video stream is displayed in that area. The playing progress may be displayed alongside the video stream so that the user can adjust it at any time. A mute option may also be displayed, letting the user control whether the video stream plays sound, and a full-screen option may be displayed so that the user can view the video stream full screen.
Step S304, a storage trigger event on the human-computer interaction interface is acquired, and a target video format and a target storage path are determined according to the storage trigger event.
In this embodiment, the user stores the video stream by performing a storage operation on the human-computer interaction interface; the storage operation triggers the storage trigger event, and the target video format and the target storage path are determined according to that event.
Optionally, the target video format is any one of wmv, asf, asx, rm, rmvb, mpg, mpeg, mpe, 3gp, mov, mp4, m4v, avi, dat, mkv, flv, vob, and the like, and the target storage path is a local storage path or a cloud path (such as a cloud URL or a designated mailbox address).
Step S305, exporting the video stream as a target video file in the target video format, and storing the target video file in the target storage path.
In this embodiment, the video stream is first exported as a target video file in the target video format, and the target video file is then stored in the target storage path. Storing the video stream in this way lets the user watch it again at any time and improves the user experience.
Optionally, additional options may be displayed on the human-computer interaction interface to make the storage mode more flexible, such as a definition (resolution) option, an option to export a video clip or a preview sample, and an option for whether encryption is needed.
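A sketch of the export step in a browser client follows. A browser's MediaRecorder natively produces WebM, so conversion to the other listed formats is delegated here to an assumed server-side transcoding endpoint:

```typescript
// Record the displayed stream for a given duration, then convert it to the
// target video format chosen in the storage trigger event.
async function exportStream(
  stream: MediaStream,
  targetFormat: string,      // e.g. "mp4", from the storage trigger event
  durationMs: number,
): Promise<Blob> {
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];
  recorder.ondataavailable = (ev) => chunks.push(ev.data);

  const stopped = new Promise<void>((resolve) => {
    recorder.onstop = () => resolve();
  });
  recorder.start();
  await new Promise((r) => setTimeout(r, durationMs));
  recorder.stop();
  await stopped;

  const webm = new Blob(chunks, { type: "video/webm" });
  if (targetFormat === "webm") return webm;
  // Assumed server-side conversion to the requested target format; the
  // /transcode endpoint is hypothetical, not part of this disclosure.
  const resp = await fetch(`/transcode?to=${targetFormat}`, {
    method: "POST",
    body: webm,
  });
  return resp.blob();
}
```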
In some embodiments, after storing the target video file in the target storage path, the method further comprises:
acquiring a new interaction trigger event on the human-computer interaction interface;
if the new interaction trigger event is consistent with the interaction trigger event, retrieving the target video file from the target storage path;
and playing the target video file on the human-computer interaction interface.
In this embodiment, when a new interaction trigger event on the human-computer interaction interface is acquired, it is compared with the previously acquired interaction trigger event. If they are consistent, the user wants the digital person to exhibit the interaction behavior of the previous video stream again; since the target video file is already stored in the target storage path, the file is retrieved directly from that path and played on the human-computer interaction interface, showing the corresponding digital person interaction behavior to the user.
For example, if the new interaction trigger event is again triggered by a "let's clap" clapping gesture, the target video file is retrieved directly from the target storage path and played, showing the target digital person making the corresponding clapping action, the facial expression that goes with it (such as a lazy smile), and the corresponding voice (such as "yey").
Because the preset rendering server is not called to render again, and the target video file is instead retrieved directly from the target storage path to present the interaction behavior, the interaction response speed of the digital person is improved.
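A sketch of this replay shortcut follows; the event key and the helper functions are assumptions:

```typescript
// Map from an interaction-trigger-event key to the stored target video file.
const renderedFiles = new Map<string, string>();

declare function playVideoFile(filePath: string): void;     // assumed helper
declare function renderAndExport(eventKey: string): string; // assumed helper

function handleTriggerEvent(eventKey: string): void {
  const cached = renderedFiles.get(eventKey);
  if (cached !== undefined) {
    // The new event is consistent with an earlier one: play the stored
    // target video file directly instead of rendering again.
    playVideoFile(cached);
    return;
  }
  // Otherwise render through the preset rendering server, export the video
  // stream, remember where it was stored, and play it.
  const stored = renderAndExport(eventKey);
  renderedFiles.set(eventKey, stored);
  playVideoFile(stored);
}
```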
By applying this technical solution, an interaction trigger event on the human-computer interaction interface is acquired, and video rendering parameters of the target digital person are determined according to the interaction trigger event; a preset rendering server is called to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person; the video stream is displayed on the human-computer interaction interface; a storage trigger event on the human-computer interaction interface is acquired, and a target video format and a target storage path are determined according to the storage trigger event; and the video stream is exported as a target video file in the target video format and stored in the target storage path. Because the rendering operations during interaction are performed by a dedicated preset rendering server, and the video stream is stored for reuse, the interaction response speed of the digital person is improved and the user experience is improved.
The embodiment of the application also provides a data interaction control device, as shown in fig. 4, where the device includes:
the acquisition module 401 is configured to acquire an interaction trigger event on a human-computer interaction interface, and determine a video rendering parameter of a target digital person according to the interaction trigger event;
the video rendering module 402 is configured to invoke a preset rendering server to render the target digital person according to the video rendering parameters, so as to obtain a video stream including the target digital person;
and the video display module 403 is configured to display the video stream on the human-computer interaction interface.
In a specific application scenario, the acquisition module 401 is specifically configured to:
determining operation data of a user according to the interaction triggering event;
querying a preset correspondence table according to the operation data to obtain a target configuration file;
determining the video rendering parameters according to the target configuration file;
wherein the preset correspondence table is established according to the correspondence between different operation data and different configuration files.
In a specific application scenario, the device further includes a first storage module configured to:
acquiring a storage trigger event on the human-computer interaction interface, and determining a target video format and a target storage path according to the storage trigger event;
and exporting the video stream as a target video file in the target video format, and storing the target video file in the target storage path.
In a specific application scenario, the device further includes a calling module, configured to:
acquiring a new interaction trigger event on the human-computer interaction interface;
if the new interaction trigger event is consistent with the interaction trigger event, retrieving the target video file from the target storage path;
and playing the target video file on the human-computer interaction interface.
In a specific application scenario, the apparatus further includes a creation module configured to:
acquiring a creation trigger event on the human-computer interaction interface, and determining digital person creation parameters according to the creation trigger event;
calling the preset rendering server to create an initial digital person according to the digital person creation parameters;
and storing the initial digital person, and displaying the initial digital person on the human-computer interaction interface.
In a specific application scenario, the device further includes an editing module, configured to:
acquiring an editing trigger event on the human-computer interaction interface, and determining editing parameters of the initial digital person according to the editing trigger event;
calling the preset rendering server to edit the initial digital person according to the editing parameters, to obtain the target digital person;
and displaying the target digital person on the human-computer interaction interface.
In a specific application scenario, the device further includes a second storage module, configured to:
acquiring a local storage trigger event on the human-computer interaction interface, and determining a local storage path and a digital person identifier according to the local storage trigger event;
and storing the target digital person in the local storage path according to the digital person identifier.
By applying the above technical solution, the data interaction control device includes: an acquisition module, configured to acquire an interaction trigger event on the human-computer interaction interface and determine video rendering parameters of the target digital person according to the interaction trigger event; a video rendering module, configured to call a preset rendering server to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person; and a video display module, configured to display the video stream on the human-computer interaction interface. Because a dedicated preset rendering server performs the rendering operations during interaction, the interaction response speed of the digital person is improved and the user experience is improved.
The embodiment of the invention also provides an electronic device. As shown in fig. 5, the electronic device includes a processor 501, a communication interface 502, a memory 503, and a communication bus 504, where the processor 501, the communication interface 502, and the memory 503 communicate with each other through the communication bus 504;
a memory 503 for storing executable instructions of the processor;
a processor 501 configured to execute via execution of the executable instructions:
acquiring an interaction trigger event on a human-computer interaction interface, and determining video rendering parameters of a target digital person according to the interaction trigger event;
calling a preset rendering server to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person;
and displaying the video stream on the human-computer interaction interface.
The communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is drawn in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the terminal and other devices.
The memory may include RAM (Random Access Memory ) or may include non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present invention, there is also provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the data interaction control method as described above.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the data interaction control method as described above.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present invention, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (10)

1. A data interaction control method, the method comprising:
acquiring an interaction trigger event on a human-computer interaction interface, and determining video rendering parameters of a target digital person according to the interaction trigger event;
calling a preset rendering server to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person;
and displaying the video stream on the human-computer interaction interface.
2. The method of claim 1, wherein determining the video rendering parameters of the target digital person according to the interaction trigger event comprises:
determining operation data of a user according to the interaction trigger event;
querying a preset correspondence table according to the operation data to obtain a target configuration file;
and determining the video rendering parameters according to the target configuration file;
wherein the preset correspondence table is established according to the correspondence between different operation data and different configuration files.
3. The method of claim 1, wherein after displaying the video stream on the human-computer interaction interface, the method further comprises:
acquiring a storage trigger event on the human-computer interaction interface, and determining a target video format and a target storage path according to the storage trigger event;
and exporting the video stream as a target video file in the target video format, and storing the target video file in the target storage path.
4. The method of claim 3, wherein after storing the target video file in the target storage path, the method further comprises:
acquiring a new interaction trigger event on the human-computer interaction interface;
if the new interaction trigger event is consistent with the interaction trigger event, retrieving the target video file from the target storage path;
and playing the target video file on the human-computer interaction interface.
5. The method of claim 1, wherein before acquiring the interaction trigger event on the human-computer interaction interface, the method further comprises:
acquiring a creation trigger event on the human-computer interaction interface, and determining digital person creation parameters according to the creation trigger event;
calling the preset rendering server to create an initial digital person according to the digital person creation parameters;
and storing the initial digital person, and displaying the initial digital person on the human-computer interaction interface.
6. The method of claim 5, wherein after presenting the initial digital person on the human-computer interaction interface, the method further comprises:
acquiring an editing trigger event on the human-computer interaction interface, and determining editing parameters of the initial digital person according to the editing trigger event;
calling the preset rendering server to edit the initial digital person according to the editing parameters, to obtain the target digital person;
and displaying the target digital person on the human-computer interaction interface.
7. The method of claim 6, wherein after presenting the target digital person on the human-computer interaction interface, the method further comprises:
acquiring a local storage trigger event on the human-computer interaction interface, and determining a local storage path and a digital person identifier according to the local storage trigger event;
and storing the target digital person in the local storage path according to the digital person identifier.
8. A data interaction control device, the device comprising:
an acquisition module, configured to acquire an interaction trigger event on a human-computer interaction interface and determine video rendering parameters of a target digital person according to the interaction trigger event;
a video rendering module, configured to call a preset rendering server to render the target digital person according to the video rendering parameters, to obtain a video stream including the target digital person;
and a video display module, configured to display the video stream on the human-computer interaction interface.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the data interaction control method of any of claims 1-7 via execution of the executable instructions.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the data interaction control method of any one of claims 1 to 7.
CN202211662253.6A (priority date 2022-12-23, filing date 2022-12-23) Data interaction control method and device, electronic equipment and storage medium - Pending - CN116126177A (en)

Priority Applications (1)

Application Number: CN202211662253.6A; Priority Date / Filing Date: 2022-12-23; Title: Data interaction control method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116126177A (en) 2023-05-16

Family

ID=86300063

Family Applications (1)

Application Number: CN202211662253.6A; Priority Date / Filing Date: 2022-12-23; Title: Data interaction control method and device, electronic equipment and storage medium; Status: Pending

Country Status (1)

CN: CN116126177A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117519663A (en) * 2024-01-08 2024-02-06 广州趣丸网络科技有限公司 Intelligent production platform for digital people
CN117519663B (en) * 2024-01-08 2024-04-26 广州趣丸网络科技有限公司 Intelligent production platform for digital people



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination